Commit d3b3d1f0fe (parent d8068d73f3): metrics

## rln-delay-simulations

This folder contains a `shadow` configuration to simulate `1000` `nwaku` nodes in an end-to-end setup:

* `nwaku` binaries are used, built with `make wakunode2`.
* Minor changes in `nwaku` are required to timestamp messages and to connect the peers without discovery. See the [simulations](https://github.com/waku-org/nwaku/tree/simulations) branch.
* `rln` is used with hardcoded memberships, to avoid depending on a Sepolia node and the membership contract; see [constants.nim](https://raw.githubusercontent.com/waku-org/nwaku/master/waku/waku_rln_relay/constants.nim).
* The setup focuses on measuring message propagation delays. Each message that is sent encodes the timestamp at which it was created (see the sketch after this list).
* The same setup can be reused with different parameters, configured either via flags (see `shadow.yaml`) or by modifying the code (see [simulations](https://github.com/waku-org/nwaku/tree/simulations)).
* TODO: delay + bandwidth. TODO: add payload messages.
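
The delay measurement relies on each message carrying its creation time: every receiving node logs the difference between its local receive time and the embedded timestamp. Below is a minimal Python sketch of the idea; it is illustrative only, since the actual encoding lives in the modified `nwaku` Nim code and its wire format may differ.

```
import time

def make_payload() -> bytes:
    # Sender side: embed the creation timestamp (milliseconds) in the payload.
    created_ms = int(time.time() * 1000)
    return created_ms.to_bytes(8, "big")

def propagation_delay_ms(payload: bytes) -> int:
    # Receiver side: delay = local receive time - embedded creation time.
    created_ms = int.from_bytes(payload[:8], "big")
    return int(time.time() * 1000) - created_ms
```

This per-message difference is what the `[RX MSG]` log lines analysed below report as `diff: ... milliseconds`.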

## How to run

Get the `nwaku` code with the required modifications from the [simulations](https://github.com/waku-org/nwaku/tree/simulations) branch and compile it, then start the [shadow](https://github.com/shadow/shadow) simulation. Ensure `path` in `shadow.yaml` points to the `wakunode2` binary and that you have enough resources.

```
git clone https://github.com/waku-org/nwaku.git
cd nwaku
git checkout simulations
make wakunode2
```
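
For reference, this is a stripped-down sketch of the kind of entries a `shadow` configuration contains. The topology, flags, and paths below are placeholders, not the values used in this repo's `shadow.yaml`.

```
general:
  stop_time: 10 min          # simulated time, not wall-clock time
network:
  graph:
    type: 1_gbit_switch      # built-in single-switch topology
hosts:
  peer1:
    network_node_id: 0
    processes:
      - path: /path/to/nwaku/build/wakunode2   # must point to the binary built above
        args: "--relay=true"                   # illustrative; see the real shadow.yaml for the flags used
        start_time: 5s
```

With `1000` hosts, the simulation needs a correspondingly large amount of RAM and CPU.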

Run the simulation:

```
shadow shadow.yaml
```

## How to analyze

First, check that the simulation finished correctly and that the expected numbers match:

```
grep -nr 'ended_simulation' shadow.data | wc -l
# expected: 1000 (simulation finished ok in all nodes)

grep -nr '\[TX MSG\]*' shadow.data | wc -l
# expected: 15 (total of published messages)
```

Also check that:

* There are no errors in any stderr, e.g. `shadow.data/hosts/peer1/wakunode2.1000.stderr`.
* Messages were published: `grep -nr '\[TX MSG\]*' shadow.data | wc -l`
* Messages were received: `grep -nr '\[RX MSG\]*' shadow.data | wc -l`. With 15 published messages and 1000 nodes, every node except the publisher should receive each message, i.e. 15 × 999 = 14985 `[RX MSG]` entries.

Then calculate the metrics.

Latency:

```
grep -nr '\[RX MSG\]*' shadow.data > latency.txt
python metrics.py latency.txt "diff: " " milliseconds"

file: latency.txt
parse start: diff: parse end: milliseconds
[301 400 400 ... 300 502 601]
Amount of samples: 14985
percentile 75: 402.0
percentile 25: 202.0
mode : ModeResult(mode=400, count=1542)
worst: 1300
best: 100
```
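
`metrics.py` extracts, from every matching line, the number enclosed between the parse-start and parse-end markers given on the command line. A small illustration of that extraction follows; the log line is made up, and the real `nwaku` log format may differ.

```
# Hypothetical [RX MSG] log line; only the "diff: <n> milliseconds" part matters here.
line = "peer42 ... [RX MSG] ... diff: 300 milliseconds"

start, end = "diff: ", " milliseconds"
delay_ms = int(line.split(start)[1].split(end)[0])
print(delay_ms)  # 300
```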

Mesh size:

```
grep -nr 'mesh size' shadow.data > mesh.txt
python metrics.py mesh.txt "mesh size: " " of topic"

Amount of samples: 1000
percentile 75: 7.0
percentile 25: 5.0
mode : ModeResult(mode=5, count=248)
worst: 12
best: 4

Amount of samples: 1000
percentile 75: 3.0
percentile 25: 2.0
mode : ModeResult(mode=2, count=469)
worst: 5
best: 2
```

```
TODO
```

## Output

```
TODO
```

```
Amount of samples: 14985
percentile 75: 300.0
percentile 25: 201.0
mode : ModeResult(mode=300, count=4650)
worst: 401
best: 100
```

New file: `rln-delay-simulations/analyze.py` (20 lines):

```
from scipy import stats as st
import numpy as np
import sys

# File to scan and the field marker that precedes each numeric sample,
# e.g. "diff: " for latency lines or "mesh size: " for mesh lines.
file = sys.argv[1]
field = sys.argv[2]
print("Config: file:", file, "field:", field)

# Collect the number that follows the field marker on every matching line.
latencies = []
with open(file, "r") as f:
    for line in f.readlines():
        if field in line:
            print(field, line)
            x = line.strip().split(field)[1].split(" ")[0]
            latencies.append(int(x))

# Summary statistics over the collected samples.
array = np.array(latencies)
print("Amount of samples:", array.size)
print("Percentiles. P25:", np.percentile(array, 25), "P75:", np.percentile(array, 75), "P95:", np.percentile(array, 95))
print("Statistics. mode_value:", st.mode(array).mode, "mode_count:", st.mode(array).count, "mean:", np.mean(array), "max:", array.max(), "min:", array.min())
```
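
Note that the analysis commands above call `metrics.py`, which takes a parse-start and a parse-end marker, while the `analyze.py` added in this commit takes only the file and the single marker that precedes each number, then prints percentiles, mode, mean, max, and min. A possible invocation on the same data, assuming the grep outputs produced above:

```
grep -nr '\[RX MSG\]*' shadow.data > latency.txt
python analyze.py latency.txt "diff: "

grep -nr 'mesh size' shadow.data > mesh.txt
python analyze.py mesh.txt "mesh size: "
```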

New file: `rln-delay-simulations/test.txr` (empty).