# SOSR2021 Raw Results & Process Scripts #

This folder contains all raw results and process scripts used to evaluate
Helix. The directory is divided into three sub-directories, each evaluating a
different aspect of the system.

_Note: The paper only presented part of the control-plane evaluation results
(scenario 1) and part of the TE algorithm evaluation results (AT&T MPLS
topology)._


## Data Plane Failure Evaluation ##

**Folder:** `SOSR21_ReactVsProact/`

Data plane failure recovery performance evaluation of Helix, comparing
reactive recovery against proactive (Helix) recovery, collected using two
control channel latencies:
* 4ms
* 20ms - our calculated average WAN latency (see the paper for details).

To generate a graph of the processed results (found in the
`reactive_vs_proactive.dat` file), run `gnuplot reactive_vs_proactive.p`.
Running this script generates an SVG image, `reactive_vs_proactive.svg`.

Raw results were collected by running the collection script
`SOSR2021_collect_react_vs_proact.sh` from the root folder of the repository.
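
A minimal sketch of this workflow, assuming a POSIX shell at the repository
root (the collection script may take arguments not shown here):

```sh
# Collect the raw data plane failure results (run from the repo root).
./SOSR2021_collect_react_vs_proact.sh

# Plot the processed results into reactive_vs_proactive.svg.
cd SOSR21_ReactVsProact
gnuplot reactive_vs_proactive.p
```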


### RAW RESULTS ###

`RAW_RESULTS/` - Contains the collected raw and processed results, along with
the process scripts and the collection script.

`RAW_RESULTS/ResLinkFail_20ms` - Contains data plane recovery results collected
using a control channel latency of 20ms. The reactive controller results were
collected by extending Helix to use a restoration-based recovery approach,
while the proactive and proactive_alt controllers use Helix's standard
protection-based recovery method. The proactive_alt controller slightly
modifies how Helix computes path splices.

`RAW_RESULTS/ResLinkFail_4ms` - Contains data plane recovery results collected
using a control channel latency of 4ms. This folder only contains results for
the reactive controller (the proactive controller results were consistent with
the 20ms results and were omitted to save space).

To process the raw results, run `RAW_RESULTS/ResLinkFail_20ms/proc_all.sh`
and `RAW_RESULTS/ResLinkFail_4ms/proc_all.sh`, which process the 20ms and 4ms
raw results and output the average recovery times and confidence intervals.
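
For example, a sketch that processes both result sets, assuming each
`proc_all.sh` is run from its own directory (check the scripts for any
required arguments):

```sh
# Process the 20ms raw results.
(cd RAW_RESULTS/ResLinkFail_20ms && ./proc_all.sh)

# Process the 4ms raw results (reactive controller only).
(cd RAW_RESULTS/ResLinkFail_4ms && ./proc_all.sh)
```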


## TE Algorithm Evaluation ##

**Folder:** `SOSR21_TE/`

Helix TE algorithm evaluation results collected using YATES. The paper
presented the results for the AT&T MPLS topology (using three traffic
multipliers). This folder also contains results collected when evaluating
Helix against the other algorithms on the Abilene topology, again across
three traffic multipliers.

`ConLoss` - Contains the congestion loss results: data files used by the
gnuplot scripts that generate the graphs presented in the paper.

`PathChurn` - Same as the `ConLoss` folder but contains the path change
results.

To generate the graphs for the AT&T MPLS topology (presented in the paper)
and the Abilene topology, use the following commands (a loop that regenerates
all graphs at once is sketched after this list):
* `gnuplot attmpls_500.p` - Generate graph for AT&T MPLS topology (500x
  multiplier)
* `gnuplot attmpls_550.p` - Generate graph for AT&T MPLS topology (550x
  multiplier)
* `gnuplot attmpls_560.p` - Generate graph for AT&T MPLS topology (560x
  multiplier)
* `gnuplot abi_2_2.p` - Generate graph for Abilene topology (2.2x multiplier)
* `gnuplot abi_2_8.p` - Generate graph for Abilene topology (2.8x multiplier)
* `gnuplot abi_3_0.p` - Generate graph for Abilene topology (3.0x multiplier)
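
A minimal loop that regenerates every graph in one go, assuming all of the
`.p` scripts listed above live in this folder:

```sh
# Regenerate all AT&T MPLS and Abilene graphs.
for plot in attmpls_500 attmpls_550 attmpls_560 abi_2_2 abi_2_8 abi_3_0; do
    gnuplot "${plot}.p"
done
```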


## Control-Plane Resilience Evaluation ##

**Folder:** `SOSR21_CtrlPlaneFail/`

Helix control plane failure resilience evaluation results collected using the
emulation framework with the first scenario (presented in the paper) and the
second scenario. The folder contains a script that checks if the output data
contains a validation error (`process_find_validation_error.py`) and several
scripts that calculate the metrics from the emulation framework event output.
The process scripts use the file name format `process_scen<scen>_<stage>.py`,
where `<scen>` represents the scenario number (i.e. 1 or 2) and `<stage>` the
stage number of the experiment for which the relevant metrics are extracted.

The `scen1_out.nostat100.txt` and `scen2_out.nostat100.txt` files contain the
raw emulation framework output for our experiments (100 iterations) using the
first and second scenario. The `scen1_processed.txt` and `scen2_processed.txt`
files contain the processed metric results from the raw results files (the
output of the process scripts collected in a single file).
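
A hypothetical processing run for the first scenario, assuming each process
script takes the raw output file as its only argument (the actual scripts may
expect different arguments, so check their sources first):

```sh
# Check the raw output for validation errors before extracting metrics.
python process_find_validation_error.py scen1_out.nostat100.txt

# Extract the stage-1 metrics for scenario 1 and append them to the
# collected results file.
python process_scen1_1.py scen1_out.nostat100.txt >> scen1_processed.txt
```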

The raw results were collected by running the collection script
`SOSR2021_collect_ctrlfail.sh` from the root folder of the repository. The
evaluation experiments were collected using two Helix switch-to-controller
mapping files: `mdc_v2.sw_ctrl_map.json` was used to collect results for the
first scenario, while `mdc_v2.sw_ctrl_map.v2.json` was used for the second
(it contains extra controller instances).
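
A sketch of re-running the collection, assuming the script selects the
mapping file internally (it may instead take the mapping file as an argument):

```sh
# Collect the control-plane resilience results (run from the repo root).
./SOSR2021_collect_ctrlfail.sh
```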