mirror of https://github.com/logos-messaging/logos-messaging-nim.git (synced 2026-01-07 00:13:06 +00:00)

Compare commits: 492 commits
(The per-commit table is omitted: only bare SHA1 values survived extraction; the Author, Date, and commit-message columns were lost.)
.github/ISSUE_TEMPLATE/bump_dependencies.md | 12 lines changed

@@ -37,12 +37,12 @@ Update `nwaku` "vendor" dependencies.
 - [ ] nim-sqlite3-abi ( update to the latest tag version )
 - [ ] nim-stew
 - [ ] nim-stint
-- [ ] nim-taskpools
-- [ ] nim-testutils
+- [ ] nim-taskpools ( update to the latest tag version )
+- [ ] nim-testutils ( update to the latest tag version )
 - [ ] nim-toml-serialization
 - [ ] nim-unicodedb
-- [ ] nim-unittest2
-- [ ] nim-web3
-- [ ] nim-websock
+- [ ] nim-unittest2 ( update to the latest tag version )
+- [ ] nim-web3 ( update to the latest tag version )
+- [ ] nim-websock ( update to the latest tag version )
 - [ ] nim-zlib
-- [ ] zerokit ( this should be kept in version `v0.5.1` )
+- [ ] zerokit ( this should be kept in version `v0.7.0` )
.github/ISSUE_TEMPLATE/prepare_beta_release.md | 56 lines (new file)

@@ -0,0 +1,56 @@
---
name: Prepare Beta Release
about: Execute tasks for the creation and publishing of a new beta release
title: 'Prepare beta release 0.0.0'
labels: beta-release
assignees: ''

---

<!--
Add appropriate release number to title!

For detailed info on the release process refer to https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md
-->

### Items to complete

All items below are to be completed by the owner of the given release.

- [ ] Create a release branch with major and minor version only (e.g. `release/v0.X`) if it doesn't exist.
- [ ] Assign a release candidate tag to the release branch HEAD (e.g. `v0.X.0-beta-rc.0`, `v0.X.0-beta-rc.1`, ... `v0.X.0-beta-rc.N`).
- [ ] Generate and edit release notes in CHANGELOG.md.

- [ ] **Waku test and fleets validation**
  - [ ] Ensure all the unit tests (specifically logos-messaging-js tests) are green against the release candidate.
  - [ ] Deploy the release candidate to `waku.test` only through the [deploy-waku-test job](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-test/) and wait for it to finish (Jenkins access required; ask the infra team if you don't have it).
    - After completion, disable the [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to master.
    - Verify the deployed version at https://fleets.waku.org/.
    - Confirm the container image exists on [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab).
  - [ ] Analyze Kibana logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test`.
    - The most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")`.
  - [ ] Re-enable the `waku.test` fleet to resume auto-deployment of the latest `master` commit.

- [ ] **Proceed with release**

  - [ ] Assign a final release tag (`v0.X.0-beta`) to the same commit that carries the validated release-candidate tag (e.g. `v0.X.0-beta-rc.N`) and submit a PR from the release branch to `master`.
  - [ ] Update [nwaku-compose](https://github.com/logos-messaging/nwaku-compose) and [waku-simulator](https://github.com/logos-messaging/waku-simulator) according to the new release.
  - [ ] Bump the nwaku dependency in [waku-rust-bindings](https://github.com/logos-messaging/waku-rust-bindings) and make sure all examples and tests work.
  - [ ] Bump the nwaku dependency in [waku-go-bindings](https://github.com/logos-messaging/waku-go-bindings) and make sure all tests work.
  - [ ] Create the GitHub release (https://github.com/logos-messaging/nwaku/releases).
  - [ ] Submit a PR to merge the release branch back to `master`. Make sure you use the option "Merge pull request (Create a merge commit)" to perform the merge. Ping a repo admin if this option is not available.

- [ ] **Promote release to fleets**
  - [ ] Ask the PM lead to announce the release.
  - [ ] Update the infra config with any deprecated arguments or changed options.
  - [ ] Update waku.sandbox with [this deployment job](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-sandbox/).

### Links

- [Release process](https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md)
- [Release notes](https://github.com/logos-messaging/nwaku/blob/master/CHANGELOG.md)
- [Fleet ownership](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64)
- [Infra-nim-waku](https://github.com/status-im/infra-nim-waku)
- [Jenkins](https://ci.infra.status.im/job/nim-waku/)
- [Fleets](https://fleets.waku.org/)
- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab)
.github/ISSUE_TEMPLATE/prepare_full_release.md | 76 lines (new file)

@@ -0,0 +1,76 @@
---
name: Prepare Full Release
about: Execute tasks for the creation and publishing of a new full release
title: 'Prepare full release 0.0.0'
labels: full-release
assignees: ''

---

<!--
Add appropriate release number to title!

For detailed info on the release process refer to https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md
-->

### Items to complete

All items below are to be completed by the owner of the given release.

- [ ] Create a release branch with major and minor version only (e.g. `release/v0.X`) if it doesn't exist.
- [ ] Assign a release candidate tag to the release branch HEAD (e.g. `v0.X.0-rc.0`, `v0.X.0-rc.1`, ... `v0.X.0-rc.N`).
- [ ] Generate and edit release notes in CHANGELOG.md.

- [ ] **Validation of release candidate**

  - [ ] **Automated testing**
    - [ ] Ensure all the unit tests (specifically logos-messaging-js tests) are green against the release candidate.
    - [ ] Ask Vac-QA and Vac-DST to perform the available tests against the release candidate.
      - [ ] Vac-DST (an additional report is needed; see [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f))

  - [ ] **Waku fleet testing**
    - [ ] Deploy the release candidate to the `waku.test` and `waku.sandbox` fleets.
      - Start the [deployment job](https://ci.infra.status.im/job/nim-waku/) for both fleets and wait for it to finish (Jenkins access required; ask the infra team if you don't have it).
      - After completion, disable the [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to `master`.
      - Verify the deployed version at https://fleets.waku.org/.
      - Confirm the container image exists on [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab).
    - [ ] Search _Kibana_ logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test` and `waku.sandbox`.
      - The most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")` OR `(fleet: "waku.sandbox" AND message: "SIGSEGV")`.
    - [ ] Re-enable the `waku.test` fleet to resume auto-deployment of the latest `master` commit.

  - [ ] **Status fleet testing**
    - [ ] Deploy the release candidate to `status.staging`.
    - [ ] Perform the [sanity check](https://www.notion.so/How-to-test-Nwaku-on-Status-12c6e4b9bf06420ca868bd199129b425) and log results as comments in this issue.
      - [ ] Connect 2 instances to the `status.staging` fleet, one in relay mode, the other one as a light client.
        - 1:1 chats with each other
        - Send and receive messages in a community
        - Close one instance, send messages with the second instance, reopen the first instance, and confirm messages sent while offline are retrieved from store
    - [ ] Perform checks based on _end user impact_.
    - [ ] Inform other (Waku and Status) CCs to point their instances to `status.staging` for a few days. Ping Status colleagues on their Discord server or in the [Status community](https://status.app/c/G3kAAMSQtb05kog3aGbr3kiaxN4tF5xy4BAGEkkLwILk2z3GcoYlm5hSJXGn7J3laft-tnTwDWmYJ18dP_3bgX96dqr_8E3qKAvxDf3NrrCMUBp4R9EYkQez9XSM4486mXoC3mIln2zc-TNdvjdfL9eHVZ-mGgs=#zQ3shZeEJqTC1xhGUjxuS4rtHSrhJ8vUYp64v6qWkLpvdy9L9) (this is not a blocking point).
    - [ ] Ask Status-QA to perform sanity checks (as described above) and checks based on _end user impact_; specify the version being tested.
    - [ ] Ask Status-QA or infra to run the automated Status e2e tests against `status.staging`.
    - [ ] Get other CCs' sign-off: they should comment on this PR, e.g., "Used the app for a week, no problem." If problems are reported, resolve them and create a new RC.
    - [ ] **Get Status-QA sign-off**, ensuring that the `status.test` update will not disturb ongoing activities.

- [ ] **Proceed with release**

  - [ ] Assign a final release tag (`v0.X.0`) to the same commit that carries the validated release-candidate tag (e.g. `v0.X.0-rc.N`).
  - [ ] Update [nwaku-compose](https://github.com/logos-messaging/nwaku-compose) and [waku-simulator](https://github.com/logos-messaging/waku-simulator) according to the new release.
  - [ ] Bump the nwaku dependency in [waku-rust-bindings](https://github.com/logos-messaging/waku-rust-bindings) and make sure all examples and tests work.
  - [ ] Bump the nwaku dependency in [waku-go-bindings](https://github.com/logos-messaging/waku-go-bindings) and make sure all tests work.
  - [ ] Create the GitHub release (https://github.com/logos-messaging/nwaku/releases).
  - [ ] Submit a PR to merge the release branch back to `master`. Make sure you use the option "Merge pull request (Create a merge commit)" to perform the merge. Ping a repo admin if this option is not available.

- [ ] **Promote release to fleets**
  - [ ] Ask the PM lead to announce the release.
  - [ ] Update the infra config with any deprecated arguments or changed options.

### Links

- [Release process](https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md)
- [Release notes](https://github.com/logos-messaging/nwaku/blob/master/CHANGELOG.md)
- [Fleet ownership](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64)
- [Infra-nim-waku](https://github.com/status-im/infra-nim-waku)
- [Jenkins](https://ci.infra.status.im/job/nim-waku/)
- [Fleets](https://fleets.waku.org/)
- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab)
.github/ISSUE_TEMPLATE/prepare_release.md | 69 lines (deleted)

@@ -1,69 +0,0 @@
---
name: Prepare release
about: Execute tasks for the creation and publishing of a new release
title: 'Prepare release 0.0.0'
labels: release
assignees: ''

---

<!--
Add appropriate release number to title!

For detailed info on the release process refer to https://github.com/waku-org/nwaku/blob/master/docs/contributors/release-process.md
-->

### Items to complete

All items below are to be completed by the owner of the given release.

Note that `status.staging` refers to the `shards.staging` fleet (rename is WIP).

- [ ] Create release branch
- [ ] Assign release candidate tag to the release branch HEAD, e.g. v0.30.0-rc.0
- [ ] Generate and edit release notes in CHANGELOG.md
- [ ] _End user impact_: Summarize the impact of changes on Status end users (can be a comment in this issue).
- [ ] **Validate release candidate**

  - [ ] Automated testing
    - [ ] Ensure js-waku tests are green against the release candidate
    - [ ] Ask Vac-QA and Vac-DST to perform available tests against the release candidate

  - [ ] **On Waku fleets**
    - [ ] Lock the `waku.test` fleet to the release candidate version
    - [ ] Continuously stress the `waku.test` fleet for a week (e.g. from `wakudev`)
    - [ ] Search _Kibana_ logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test` and `waku.sandbox`.
      - The most relevant logs are `(fleet: "waku.test" OR fleet: "waku.sandbox") AND message: "SIGSEGV"`
    - [ ] Run the release candidate with `waku-simulator`, ensuring that nodes connect to each other
    - [ ] Unlock `waku.test` to resume auto-deployment of the latest `master` commit

  - [ ] **On Status fleet**
    - [ ] Deploy release candidate to `status.staging`
    - [ ] Perform [sanity check](https://www.notion.so/How-to-test-Nwaku-on-Status-12c6e4b9bf06420ca868bd199129b425) and log results as comments in this issue.
      - [ ] Connect 2 instances to the `status.staging` fleet, one in relay mode, the other one as a light client.
        - [ ] 1:1 chats with each other
        - [ ] Send and receive messages in a community
        - [ ] Close one instance, send messages with the second instance, reopen the first instance, and confirm messages sent while offline are retrieved from store
    - [ ] Perform checks based on _end user impact_
    - [ ] Ask other (Waku and Status) CCs to point their instance to `status.staging` for a week and use the app as usual.
    - [ ] Ask Status-QA to perform sanity checks (as described above) plus checks based on _end user impact_; do specify the version being tested
    - [ ] Ask Status-QA or infra to run the automated Status e2e tests against `status.staging`
    - [ ] Get other CCs' sign-off: they comment on this PR "used app for a week, no problem", or a problem is reported, resolved, and a new RC created
    - [ ] **Get Status-QA sign-off**, ensuring that the `status.test` update will not disturb ongoing activities.

- [ ] **Proceed with release**

  - [ ] Assign a release tag to the same commit that contains the validated release-candidate tag
  - [ ] Create GitHub release
  - [ ] Deploy the release to DockerHub
  - [ ] Announce the release

- [ ] **Promote release to fleets**
  - [ ] Update infra config with any deprecated arguments or changed options
  - [ ] [Deploy final release to `waku.sandbox` fleet](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-sandbox)
  - [ ] [Deploy final release to `status.staging` fleet](https://ci.infra.status.im/job/nim-waku/job/deploy-shards-staging/)
  - [ ] [Deploy final release to `status.test` fleet](https://ci.infra.status.im/job/nim-waku/job/deploy-shards-test/) ([soon to be `status.prod`](https://github.com/status-im/infra-shards/issues/33))

- [ ] **Post release**
  - [ ] Submit a PR from the release branch to master. It is important to merge the PR with the "create a merge commit" option.
  - [ ] Update waku-org/nwaku-compose with the new release version.
.github/pull_request_template.md | 22 lines changed

@@ -1,26 +1,8 @@
 
-# Description
-<!--- Describe your changes to provide context for reviewrs -->
+## Description
 
-# Changes
+## Changes
 
 <!-- List of detailed changes -->
 
 - [ ] ...
 - [ ] ...
 
-<!--
-## How to test
-
-1.
-1.
-1.
-
--->
-
-<!--
-## Issue
-
-closes #
--->
.github/workflows/auto_assign_pr.yml | 2 lines changed

@@ -7,6 +7,6 @@ on:
 
 jobs:
   assign_creator:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-22.04
     steps:
       - uses: toshimaru/auto-author-assign@v1.6.2
.github/workflows/ci.yml | 84 lines changed

@@ -17,11 +17,11 @@ env:
 
 jobs:
   changes: # changes detection
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-22.04
     permissions:
       pull-requests: read
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
        name: Checkout code
        id: checkout
      - uses: dorny/paths-filter@v2

@@ -54,14 +54,14 @@
     strategy:
       fail-fast: false
       matrix:
-        os: [ubuntu-latest, macos-13]
+        os: [ubuntu-22.04, macos-15]
     runs-on: ${{ matrix.os }}
-    timeout-minutes: 60
+    timeout-minutes: 45
 
     name: build-${{ matrix.os }}
     steps:
       - name: Checkout code
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
 
       - name: Get submodules hash
         id: submodules

@@ -76,23 +76,33 @@
           .git/modules
         key: ${{ runner.os }}-vendor-modules-${{ steps.submodules.outputs.hash }}
 
+      - name: Make update
+        run: make update
+
       - name: Build binaries
         run: make V=1 QUICK_AND_DIRTY_COMPILER=1 all tools
 
+  build-windows:
+    needs: changes
+    if: ${{ needs.changes.outputs.v2 == 'true' || needs.changes.outputs.common == 'true' }}
+    uses: ./.github/workflows/windows-build.yml
+    with:
+      branch: ${{ github.ref }}
+
   test:
     needs: changes
     if: ${{ needs.changes.outputs.v2 == 'true' || needs.changes.outputs.common == 'true' }}
     strategy:
       fail-fast: false
       matrix:
-        os: [ubuntu-latest, macos-13]
+        os: [ubuntu-22.04, macos-15]
     runs-on: ${{ matrix.os }}
-    timeout-minutes: 60
+    timeout-minutes: 45
 
     name: test-${{ matrix.os }}
     steps:
       - name: Checkout code
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
 
       - name: Get submodules hash
         id: submodules

@@ -107,6 +117,9 @@
           .git/modules
         key: ${{ runner.os }}-vendor-modules-${{ steps.submodules.outputs.hash }}
 
+      - name: Make update
+        run: make update
+
       - name: Run tests
         run: |
           postgres_enabled=0

@@ -117,27 +130,68 @@
 
           export MAKEFLAGS="-j1"
           export NIMFLAGS="--colors:off -d:chronicles_colors:none"
 
-          make V=1 LOG_LEVEL=DEBUG QUICK_AND_DIRTY_COMPILER=1 POSTGRES=$postgres_enabled test testwakunode2
+          export USE_LIBBACKTRACE=0
+
+          make V=1 LOG_LEVEL=DEBUG QUICK_AND_DIRTY_COMPILER=1 POSTGRES=$postgres_enabled test
+          make V=1 LOG_LEVEL=DEBUG QUICK_AND_DIRTY_COMPILER=1 POSTGRES=$postgres_enabled testwakunode2
 
   build-docker-image:
     needs: changes
     if: ${{ needs.changes.outputs.v2 == 'true' || needs.changes.outputs.common == 'true' || needs.changes.outputs.docker == 'true' }}
-    uses: waku-org/nwaku/.github/workflows/container-image.yml@master
+    uses: logos-messaging/logos-messaging-nim/.github/workflows/container-image.yml@10dc3d3eb4b6a3d4313f7b2cc4a85a925e9ce039
     secrets: inherit
 
+  nwaku-nwaku-interop-tests:
+    needs: build-docker-image
+    uses: logos-messaging/logos-messaging-interop-tests/.github/workflows/nim_waku_PR.yml@SMOKE_TEST_STABLE
+    with:
+      node_nwaku: ${{ needs.build-docker-image.outputs.image }}
+    secrets: inherit
+
   js-waku-node:
     needs: build-docker-image
-    uses: waku-org/js-waku/.github/workflows/test-node.yml@master
+    uses: logos-messaging/logos-messaging-js/.github/workflows/test-node.yml@master
     with:
       nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
       test_type: node
       debug: waku*
 
   js-waku-node-optional:
     needs: build-docker-image
-    uses: waku-org/js-waku/.github/workflows/test-node.yml@master
+    uses: logos-messaging/logos-messaging-js/.github/workflows/test-node.yml@master
     with:
       nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
       test_type: node-optional
       debug: waku*
 
+  lint:
+    name: "Lint"
+    runs-on: ubuntu-22.04
+    needs: build
+    steps:
+      - name: Checkout code
+        uses: actions/checkout@v4
+
+      - name: Get submodules hash
+        id: submodules
+        run: |
+          echo "hash=$(git submodule status | awk '{print $1}' | sort | shasum -a 256 | sed 's/[ -]*//g')" >> $GITHUB_OUTPUT
+
+      - name: Cache submodules
+        uses: actions/cache@v3
+        with:
+          path: |
+            vendor/
+            .git/modules
+          key: ${{ runner.os }}-vendor-modules-${{ steps.submodules.outputs.hash }}
+
+      - name: Build nph
+        run: |
+          make build-nph
+
+      - name: Check nph formatting
+        run: |
+          shopt -s extglob # Enable extended globbing
+          NPH=$(make print-nph-path)
+          echo "using nph at ${NPH}"
+          "${NPH}" examples waku tests tools apps *.@(nim|nims|nimble)
+          git diff --exit-code
.github/workflows/container-image.yml | 9 lines changed

@@ -22,7 +22,7 @@ jobs:
   build-docker-image:
     strategy:
       matrix:
-        os: [ubuntu-latest]
+        os: [ubuntu-22.04]
     runs-on: ${{ matrix.os }}
     timeout-minutes: 60

@@ -41,10 +41,10 @@
         env:
           QUAY_PASSWORD: ${{ secrets.QUAY_PASSWORD }}
           QUAY_USER: ${{ secrets.QUAY_USER }}
 
       - name: Checkout code
         if: ${{ steps.secrets.outcome == 'success' }}
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
 
       - name: Get submodules hash
         id: submodules

@@ -65,8 +65,9 @@
         id: build
         if: ${{ steps.secrets.outcome == 'success' }}
         run: |
           make update
 
-          make -j${NPROC} V=1 QUICK_AND_DIRTY_COMPILER=1 NIMFLAGS="-d:disableMarchNative -d:postgres" wakunode2
+          make -j${NPROC} V=1 QUICK_AND_DIRTY_COMPILER=1 NIMFLAGS="-d:disableMarchNative -d:postgres -d:chronicles_colors:none" wakunode2
 
           SHORT_REF=$(git rev-parse --short HEAD)
.github/workflows/pr-lint.yml | 46 lines changed

@@ -8,52 +8,11 @@ on:
     - synchronize
 
 jobs:
-  main:
-    name: Validate PR title
-    runs-on: ubuntu-latest
-    permissions:
-      pull-requests: write
-    steps:
-      - uses: amannn/action-semantic-pull-request@v5
-        id: lint_pr_title
-        with:
-          types: |
-            chore
-            docs
-            feat
-            fix
-            refactor
-            style
-            test
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-      - uses: marocchino/sticky-pull-request-comment@v2
-        # When the previous steps fails, the workflow would stop. By adding this
-        # condition you can continue the execution with the populated error message.
-        if: always() && (steps.lint_pr_title.outputs.error_message != null)
-        with:
-          header: pr-title-lint-error
-          message: |
-            Hey there and thank you for opening this pull request! 👋🏼
-
-            We require pull request titles to follow the [Conventional Commits specification](https://www.conventionalcommits.org/en/v1.0.0/) and it looks like your proposed title needs to be adjusted.
-
-            Details:
-
-            > ${{ steps.lint_pr_title.outputs.error_message }}
-
-      # Delete a previous comment when the issue has been resolved
-      - if: ${{ steps.lint_pr_title.outputs.error_message == null }}
-        uses: marocchino/sticky-pull-request-comment@v2
-        with:
-          header: pr-title-lint-error
-          delete: true
-
   labels:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-22.04
 
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
         name: Checkout code
         id: checkout
       - uses: dorny/paths-filter@v2

@@ -81,7 +40,6 @@
           Please also make sure the label `release-notes` is added to make sure any changes to the user interface are properly announced in changelog and release notes.
         comment_tag: configs
 
-
       - name: Comment DB schema change
         uses: thollander/actions-comment-pull-request@v2
         if: ${{steps.filter.outputs.db_schema == 'true'}}
.github/workflows/pre-release.yml | 30 lines changed

@@ -17,10 +17,10 @@ env:
 
 jobs:
   tag-name:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-22.04
     steps:
       - name: Checkout code
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
 
       - name: Vars
         id: vars

@@ -34,20 +34,20 @@
     needs: tag-name
     strategy:
       matrix:
-        os: [ubuntu-latest, macos-13]
+        os: [ubuntu-22.04, macos-15]
         arch: [amd64]
         include:
-          - os: macos-13
+          - os: macos-15
             arch: arm64
     runs-on: ${{ matrix.os }}
     steps:
       - name: Checkout code
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
 
       - name: prep variables
         id: vars
         run: |
-          ARCH=${{matrix.arch}}
+          ARCH=${{matrix.arch}}
 
           echo "arch=${ARCH}" >> $GITHUB_OUTPUT

@@ -76,14 +76,14 @@
           tar -cvzf ${{steps.vars.outputs.nwakutools}} ./build/wakucanary ./build/networkmonitor
 
       - name: upload artifacts
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4
         with:
           name: wakunode2
           path: ${{steps.vars.outputs.nwaku}}
           retention-days: 2
 
       - name: upload artifacts
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4
         with:
           name: wakutools
           path: ${{steps.vars.outputs.nwakutools}}

@@ -91,14 +91,14 @@
 
   build-docker-image:
     needs: tag-name
-    uses: waku-org/nwaku/.github/workflows/container-image.yml@master
+    uses: logos-messaging/nwaku/.github/workflows/container-image.yml@master
     with:
       image_tag: ${{ needs.tag-name.outputs.tag }}
     secrets: inherit
 
   js-waku-node:
     needs: build-docker-image
-    uses: waku-org/js-waku/.github/workflows/test-node.yml@master
+    uses: logos-messaging/logos-messaging-js/.github/workflows/test-node.yml@master
     with:
       nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
       test_type: node

@@ -106,24 +106,24 @@
 
   js-waku-node-optional:
     needs: build-docker-image
-    uses: waku-org/js-waku/.github/workflows/test-node.yml@master
+    uses: logos-messaging/logos-messaging-js/.github/workflows/test-node.yml@master
     with:
       nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
       test_type: node-optional
       debug: waku*
 
   create-release-candidate:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-22.04
     needs: [ tag-name, build-and-publish ]
     steps:
       - name: Checkout code
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
         with:
           fetch-depth: 0
           ref: master
 
       - name: download artifacts
-        uses: actions/download-artifact@v2
+        uses: actions/download-artifact@v4
 
       - name: prep variables
         id: vars

@@ -150,7 +150,7 @@
           -u $(id -u) \
           docker.io/wakuorg/sv4git:latest \
           release-notes ${RELEASE_NOTES_TAG} --previous $(git tag -l --sort -creatordate | grep -e "^v[0-9]*\.[0-9]*\.[0-9]*$") |\
-          sed -E 's@#([0-9]+)@[#\1](https://github.com/waku-org/nwaku/issues/\1)@g' > release_notes.md
+          sed -E 's@#([0-9]+)@[#\1](https://github.com/logos-messaging/nwaku/issues/\1)@g' > release_notes.md
 
           sed -i "s/^## .*/Generated at $(date)/" release_notes.md
.github/workflows/release-assets.yml | 81 lines changed

@@ -14,10 +14,10 @@ jobs:
   build-and-upload:
     strategy:
       matrix:
-        os: [ubuntu-latest, macos-13]
+        os: [ubuntu-22.04, macos-15]
         arch: [amd64]
         include:
-          - os: macos-13
+          - os: macos-15
             arch: arm64
     runs-on: ${{ matrix.os }}
     timeout-minutes: 60

@@ -41,25 +41,84 @@
             .git/modules
           key: ${{ runner.os }}-${{matrix.arch}}-submodules-${{ steps.submodules.outputs.hash }}
 
-      - name: prep variables
+      - name: Get tag
+        id: version
+        run: |
+          # Use full tag, e.g., v0.37.0
+          echo "version=${GITHUB_REF_NAME}" >> $GITHUB_OUTPUT
+
+      - name: Prep variables
         id: vars
         run: |
-          NWAKU_ARTIFACT_NAME=$(echo "nwaku-${{matrix.arch}}-${{runner.os}}.tar.gz" | tr "[:upper:]" "[:lower:]")
+          VERSION=${{ steps.version.outputs.version }}
 
-          echo "nwaku=${NWAKU_ARTIFACT_NAME}" >> $GITHUB_OUTPUT
+          NWAKU_ARTIFACT_NAME=$(echo "waku-${{matrix.arch}}-${{runner.os}}.tar.gz" | tr "[:upper:]" "[:lower:]")
+          echo "waku=${NWAKU_ARTIFACT_NAME}" >> $GITHUB_OUTPUT
 
-      - name: Install dependencies
+          if [[ "${{ runner.os }}" == "Linux" ]]; then
+            LIBWAKU_ARTIFACT_NAME=$(echo "libwaku-${VERSION}-${{matrix.arch}}-${{runner.os}}-linux.deb" | tr "[:upper:]" "[:lower:]")
+          fi
+
+          if [[ "${{ runner.os }}" == "macOS" ]]; then
+            LIBWAKU_ARTIFACT_NAME=$(echo "libwaku-${VERSION}-${{matrix.arch}}-macos.tar.gz" | tr "[:upper:]" "[:lower:]")
+          fi
+
+          echo "libwaku=${LIBWAKU_ARTIFACT_NAME}" >> $GITHUB_OUTPUT
+
+      - name: Install build dependencies
         run: |
+          if [[ "${{ runner.os }}" == "Linux" ]]; then
+            sudo apt-get update && sudo apt-get install -y build-essential dpkg-dev
+          fi
+
       - name: Build Waku artifacts
         run: |
           OS=$([[ "${{runner.os}}" == "macOS" ]] && echo "macosx" || echo "linux")
 
           make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}}" V=1 update
           make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}} -d:postgres" CI=false wakunode2
           make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}}" CI=false chat2
-          tar -cvzf ${{steps.vars.outputs.nwaku}} ./build/
+          tar -cvzf ${{steps.vars.outputs.waku}} ./build/
+          make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}} -d:postgres" CI=false libwaku
+          make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}} -d:postgres" CI=false STATIC=1 libwaku
 
-      - name: Upload asset
-        uses: actions/upload-artifact@v2.2.3
+      - name: Create distributable libwaku package
+        run: |
+          VERSION=${{ steps.version.outputs.version }}
+
+          if [[ "${{ runner.os }}" == "Linux" ]]; then
+            rm -rf pkg
+            mkdir -p pkg/DEBIAN pkg/usr/local/lib pkg/usr/local/include
+            cp build/libwaku.so pkg/usr/local/lib/
+            cp build/libwaku.a pkg/usr/local/lib/
+            cp library/libwaku.h pkg/usr/local/include/
+
+            echo "Package: waku" >> pkg/DEBIAN/control
+            echo "Version: ${VERSION}" >> pkg/DEBIAN/control
+            echo "Priority: optional" >> pkg/DEBIAN/control
+            echo "Section: libs" >> pkg/DEBIAN/control
+            echo "Architecture: ${{matrix.arch}}" >> pkg/DEBIAN/control
+            echo "Maintainer: Waku Team <ivansete@status.im>" >> pkg/DEBIAN/control
+            echo "Description: Waku library" >> pkg/DEBIAN/control
+
+            dpkg-deb --build pkg ${{steps.vars.outputs.libwaku}}
+          fi
+
+          if [[ "${{ runner.os }}" == "macOS" ]]; then
+            tar -cvzf ${{steps.vars.outputs.libwaku}} ./build/libwaku.dylib ./build/libwaku.a ./library/libwaku.h
+          fi
+
+      - name: Upload waku artifact
+        uses: actions/upload-artifact@v4.4.0
         with:
-          name: ${{steps.vars.outputs.nwaku}}
-          path: ${{steps.vars.outputs.nwaku}}
+          name: waku-${{ steps.version.outputs.version }}-${{ matrix.arch }}-${{ runner.os }}
+          path: ${{ steps.vars.outputs.waku }}
+          if-no-files-found: error
+
+      - name: Upload libwaku artifact
+        uses: actions/upload-artifact@v4.4.0
+        with:
+          name: libwaku-${{ steps.version.outputs.version }}-${{ matrix.arch }}-${{ runner.os }}
+          path: ${{ steps.vars.outputs.libwaku }}
+          if-no-files-found: error
.github/workflows/sync-labels.yml | 2 lines changed

@@ -7,7 +7,7 @@ on:
     - .github/labels.yml
 jobs:
   build:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-22.04
     steps:
       - uses: actions/checkout@v2
       - uses: micnncim/action-label-syncer@v1
.github/workflows/windows-build.yml | 104 lines (new file)

@@ -0,0 +1,104 @@
name: ci / build-windows

on:
  workflow_call:
    inputs:
      branch:
        required: true
        type: string

jobs:
  build:
    runs-on: windows-latest

    defaults:
      run:
        shell: msys2 {0}

    env:
      MSYSTEM: MINGW64

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Setup MSYS2
        uses: msys2/setup-msys2@v2
        with:
          update: true
          install: >-
            git
            base-devel
            mingw-w64-x86_64-toolchain
            make
            cmake
            upx
            mingw-w64-x86_64-rust
            mingw-w64-x86_64-postgresql
            mingw-w64-x86_64-gcc
            mingw-w64-x86_64-gcc-libs
            mingw-w64-x86_64-libwinpthread-git
            mingw-w64-x86_64-zlib
            mingw-w64-x86_64-openssl
            mingw-w64-x86_64-python
            mingw-w64-x86_64-cmake
            mingw-w64-x86_64-llvm
            mingw-w64-x86_64-clang

      - name: Add UPX to PATH
        run: |
          echo "/usr/bin:$PATH" >> $GITHUB_PATH
          echo "/mingw64/bin:$PATH" >> $GITHUB_PATH
          echo "/usr/lib:$PATH" >> $GITHUB_PATH
          echo "/mingw64/lib:$PATH" >> $GITHUB_PATH

      - name: Verify dependencies
        run: |
          which upx gcc g++ make cmake cargo rustc python

      - name: Updating submodules
        run: git submodule update --init --recursive

      - name: Creating tmp directory
        run: mkdir -p tmp

      - name: Building Nim
        run: |
          cd vendor/nimbus-build-system/vendor/Nim
          ./build_all.bat
          cd ../../../..

      - name: Building miniupnpc
        run: |
          cd vendor/nim-nat-traversal/vendor/miniupnp/miniupnpc
          make -f Makefile.mingw CC=gcc CXX=g++ libminiupnpc.a V=1
          cd ../../../../..

      - name: Building libnatpmp
        run: |
          cd ./vendor/nim-nat-traversal/vendor/libnatpmp-upstream
          make CC="gcc -fPIC -D_WIN32_WINNT=0x0600 -DNATPMP_STATICLIB" libnatpmp.a V=1
          cd ../../../../

      - name: Building wakunode2.exe
        run: |
          make wakunode2 LOG_LEVEL=DEBUG V=3 -j8

      - name: Building libwaku.dll
        run: |
          make libwaku STATIC=0 LOG_LEVEL=DEBUG V=1 -j

      - name: Check Executable
        run: |
          if [ -f "./build/wakunode2.exe" ]; then
            echo "wakunode2.exe build successful"
          else
            echo "Build failed: wakunode2.exe not found"
            exit 1
          fi
          if [ -f "./build/libwaku.dll" ]; then
            echo "libwaku.dll build successful"
          else
            echo "Build failed: libwaku.dll not found"
            exit 1
          fi
.gitignore | 17 lines changed

@@ -59,6 +59,10 @@ nimbus-build-system.paths
 /examples/nodejs/build/
 /examples/rust/target/
 
+# Xcode user data
+xcuserdata/
+*.xcuserstate
+
 # Coverage
 coverage_html_report/

@@ -72,3 +76,16 @@
 **/rln_tree/
 **/certs/
 
+# simple qt example
+.qmake.stash
+main-qt
+waku_handler.moc.cpp
+
+# Nix build result
+result
+
+# llms
+AGENTS.md
+nimble.develop
+nimble.paths
+nimbledeps
.gitmodules | 20 lines changed

@@ -168,4 +168,24 @@
 	path = vendor/db_connector
 	url = https://github.com/nim-lang/db_connector.git
 	ignore = untracked
 	branch = devel
+[submodule "vendor/nph"]
+	ignore = untracked
+	branch = master
+	path = vendor/nph
+	url = https://github.com/arnetheduck/nph.git
+[submodule "vendor/nim-minilru"]
+	path = vendor/nim-minilru
+	url = https://github.com/status-im/nim-minilru.git
+	ignore = untracked
+	branch = master
+[submodule "vendor/waku-rlnv2-contract"]
+	path = vendor/waku-rlnv2-contract
+	url = https://github.com/logos-messaging/waku-rlnv2-contract.git
+	ignore = untracked
+	branch = master
+[submodule "vendor/nim-ffi"]
+	path = vendor/nim-ffi
+	url = https://github.com/logos-messaging/nim-ffi/
+	ignore = untracked
+	branch = master
AGENTS.md | 509 lines (new file)

@@ -0,0 +1,509 @@

# AGENTS.md - AI Coding Context

This file provides essential context for LLMs assisting with Logos Messaging development.

## Project Identity

Logos Messaging is designed as a shared public network for generalized messaging, not application-specific infrastructure.

This project is a Nim implementation of a libp2p protocol suite for private, censorship-resistant P2P messaging. It targets resource-restricted devices and privacy-preserving communication.

Logos Messaging was formerly known as Waku. Waku-related terminology remains within the codebase for historical reasons.

### Design Philosophy

Key architectural decisions:

Resource-restricted first: Protocols differentiate between full nodes (relay) and light clients (filter, lightpush, store). Light clients can participate without maintaining full message history or relay capabilities. This explains the client/server split in protocol implementations.

Privacy through unlinkability: RLN (Rate Limiting Nullifier) provides DoS protection while preserving sender anonymity. Messages are routed through pubsub topics with automatic sharding across 8 shards. The code prioritizes metadata privacy alongside content encryption.

Scalability via sharding: The network uses automatic content-topic-based sharding to distribute traffic. This is why sharding logic appears throughout the codebase and why pubsub topic selection happens at the protocol level, not the application level; a rough sketch of the idea follows.
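To make the idea concrete, here is a minimal, illustrative sketch of deterministic topic-to-shard assignment. This is not the actual autosharding algorithm (which is defined in the relay-sharding RFC and hashes specific content-topic fields); `shardOf` and the use of `std/hashes` are assumptions for illustration only.

```nim
import std/hashes

const NumShards = 8 # the network shards traffic across 8 shards

# Illustrative only: deterministically map a content topic to a shard so
# every node independently picks the same pubsub topic. The real algorithm
# is specified in the relay-sharding RFC, not here.
proc shardOf(contentTopic: string): int =
  int(cast[uint](hash(contentTopic)) mod uint(NumShards))

echo shardOf("/my-app/1/chat/proto") # same shard on every node
```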
See the [documentation](https://docs.waku.org/learn/) for architectural details.

### Core Protocols
- Relay: Pub/sub message routing using GossipSub
- Store: Historical message retrieval and persistence
- Filter: Lightweight message filtering for resource-restricted clients
- Lightpush: Lightweight message publishing for clients
- Peer Exchange: Peer discovery mechanism
- RLN Relay: Rate limiting nullifier for spam protection
- Metadata: Cluster and shard metadata exchange between peers
- Mix: Mixnet protocol for enhanced privacy through onion routing
- Rendezvous: Alternative peer discovery mechanism

### Key Terminology
- ENR (Ethereum Node Record): Node identity and capability advertisement
- Multiaddr: libp2p addressing format (e.g., `/ip4/127.0.0.1/tcp/60000/p2p/16Uiu2...`)
- PubsubTopic: Gossipsub topic for message routing (e.g., `/waku/2/default-waku/proto`)
- ContentTopic: Application-level message categorization (e.g., `/my-app/1/chat/proto`)
- Sharding: Partitioning network traffic across topics (static or auto-sharding)
- RLN (Rate Limiting Nullifier): Zero-knowledge proof system for spam prevention

### Specifications
All specs are at [rfc.vac.dev/waku](https://rfc.vac.dev/waku). RFCs use the `WAKU2-XXX` format (not the legacy `WAKU-XXX`).

## Architecture

### Protocol Module Pattern
Each protocol typically follows this structure:
```
waku_<protocol>/
├── protocol.nim          # Main protocol type and handler logic
├── client.nim            # Client-side API
├── rpc.nim               # RPC message types
├── rpc_codec.nim         # Protobuf encoding/decoding
├── common.nim            # Shared types and constants
└── protocol_metrics.nim  # Prometheus metrics
```

### WakuNode Architecture
- WakuNode (`waku/node/waku_node.nim`) is the central orchestrator
- Protocols are "mounted" onto the node's switch (libp2p component); see the sketch after the example below
- PeerManager handles peer selection and connection management
- Switch provides libp2p transport, security, and multiplexing

Example protocol type definition:
```nim
type WakuFilter* = ref object of LPProtocol
  subscriptions*: FilterSubscriptions
  peerManager: PeerManager
  messageCache: TimedCache[string]
```
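To illustrate the mount pattern, a minimal sketch follows. The `mountFilter` name, the constructor call, and the `wakuFilter` field are assumptions for illustration; `switch.mount` is the underlying libp2p-nim mechanism, but the real nwaku mounting procs differ in signature and setup.

```nim
# Illustrative sketch only; proc and field names are assumptions,
# not the exact nwaku API.
proc mountFilter*(node: WakuNode) {.async.} =
  # Create the protocol, wiring in shared node components...
  node.wakuFilter = WakuFilter.new(node.peerManager)
  # ...then register it with the libp2p switch so incoming streams
  # for the protocol's codec are dispatched to its handler.
  node.switch.mount(node.wakuFilter)
```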
## Development Essentials

### Build Requirements
- Nim 2.x (check `waku.nimble` for the minimum version)
- Rust toolchain (required for RLN dependencies)
- Build system: Make with nimbus-build-system

### Build System
The project uses a Makefile with nimbus-build-system (Status's Nim build framework):
```bash
# Initial build (updates submodules)
make wakunode2

# After git pull, update submodules
make update

# Build with custom flags
make wakunode2 NIMFLAGS="-d:chronicles_log_level=DEBUG"
```

Note: The build system uses `--mm:refc` memory management (automatically enforced). This is only relevant if compiling outside the standard build system.

### Common Make Targets
```bash
make wakunode2       # Build main node binary
make test            # Run all tests
make testcommon      # Run common tests only
make libwakuStatic   # Build static C library
make chat2           # Build chat example
make install-nph     # Install git hook for auto-formatting
```

### Testing
```bash
# Run all tests
make test

# Run specific test file
make test tests/test_waku_enr.nim

# Run specific test case from file
make test tests/test_waku_enr.nim "check capabilities support"

# Build and run test separately (for development iteration)
make test tests/test_waku_enr.nim
```

Test structure uses `testutils/unittests`:
```nim
import testutils/unittests

suite "Waku ENR - Capabilities":
  test "check capabilities support":
    ## Given
    let bitfield: CapabilitiesBitfield = 0b0000_1101u8

    ## Then
    check:
      bitfield.supportsCapability(Capabilities.Relay)
      not bitfield.supportsCapability(Capabilities.Store)
```

### Code Formatting
Mandatory: All code must be formatted with `nph` (vendored in `vendor/nph`).
```bash
# Format specific file
make nph/waku/waku_core.nim

# Install git pre-commit hook (auto-formats on commit)
make install-nph
```
The nph formatter handles all formatting details automatically, especially with the pre-commit hook installed. Focus on semantic correctness.

### Logging
Uses the `chronicles` library with compile-time configuration:
```nim
import chronicles

logScope:
  topics = "waku lightpush"

info "handling request", peerId = peerId, topic = pubsubTopic
error "request failed", error = msg
```

Compile with log level:
```bash
nim c -d:chronicles_log_level=TRACE myfile.nim
```

## Code Conventions

Common pitfalls:
- Always handle Result types explicitly
- Avoid global mutable state: pass state through parameters
- Keep functions focused: under 50 lines when possible
- Prefer compile-time checks (`static assert`) over runtime checks (see the sketch after this list)
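For the last point, a minimal example (the constant is hypothetical):

```nim
const MaxShards = 8 # hypothetical constant, for illustration

# Checked once at compile time; a violation fails the build
# instead of surfacing as a runtime assertion.
static:
  doAssert MaxShards <= 64, "shard bitfield must fit in uint64"
```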
### Naming
- Files/Directories: `snake_case` (e.g., `waku_lightpush`, `peer_manager`)
- Procedures: `camelCase` (e.g., `handleRequest`, `pushMessage`)
- Types: `PascalCase` (e.g., `WakuFilter`, `PubsubTopic`)
- Constants: `PascalCase` (e.g., `MaxContentTopicsPerRequest`)
- Constructors: `func init(T: type Xxx, params): T` (see the sketch after this list)
  - For ref types: `func new(T: type Xxx, params): ref T`
- Exceptions: `XxxError` for CatchableError, `XxxDefect` for Defect
- ref object types: `XxxRef` suffix
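The constructor conventions in one place, using a hypothetical `Subscription` type:

```nim
type Subscription = object # hypothetical type
  topic: string

# Value constructor: `init` on the type.
func init(T: type Subscription, topic: string): T =
  T(topic: topic)

# Ref constructor: `new` on the type.
func new(T: type Subscription, topic: string): ref T =
  (ref T)(topic: topic)

let sub = Subscription.init("/my-app/1/chat/proto")
```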
### Imports Organization
|
||||
Group imports: stdlib, external libs, internal modules:
|
||||
```nim
|
||||
import
|
||||
std/[options, sequtils], # stdlib
|
||||
results, chronicles, chronos, # external
|
||||
libp2p/peerid
|
||||
import
|
||||
../node/peer_manager, # internal (separate import block)
|
||||
../waku_core,
|
||||
./common
|
||||
```
|
||||
|
||||
### Async Programming
|
||||
Uses chronos, not stdlib `asyncdispatch`:
|
||||
```nim
|
||||
proc handleRequest(
|
||||
wl: WakuLightPush, peerId: PeerId
|
||||
): Future[WakuLightPushResult] {.async.} =
|
||||
let res = await wl.pushHandler(peerId, pubsubTopic, message)
|
||||
return res
|
||||
```
|
||||
|
||||
### Error Handling
|
||||
The project uses both Result types and exceptions:
|
||||
|
||||
Result types from nim-results are used for protocol and API-level errors:
|
||||
```nim
|
||||
proc subscribe(
|
||||
wf: WakuFilter, peerId: PeerID
|
||||
): Future[FilterSubscribeResult] {.async.} =
|
||||
if contentTopics.len > MaxContentTopicsPerRequest:
|
||||
return err(FilterSubscribeError.badRequest("exceeds maximum"))
|
||||
|
||||
# Handle Result with isOkOr
|
||||
(await wf.subscriptions.addSubscription(peerId, criteria)).isOkOr:
|
||||
return err(FilterSubscribeError.serviceUnavailable(error))
|
||||
|
||||
ok()
|
||||
```
|
||||
|
||||
Exceptions are still used for:

- chronos async failures (CancelledError, etc.)
- Database/system errors
- Library interop

Most files start with `{.push raises: [].}`, which makes "raises nothing" the default and forces raising code to be either annotated or wrapped; use try/except blocks where needed, as in the sketch below.

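A minimal sketch of isolating exception-raising code behind a Result under `{.push raises: [].}` (`parsePort` is a hypothetical helper):

```nim
{.push raises: [].}

import std/strutils
import results

proc parsePort(s: string): Result[int, string] =
  let port =
    try:
      parseInt(s) # raises ValueError on bad input
    except ValueError:
      return err("invalid port: " & s)
  if port < 0 or port > 65535:
    return err("port out of range")
  ok(port)
```
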
### Pragma Usage

```nim
{.push raises: [].} # Default procs to raising nothing (at file top)

proc myProc(): Future[Result[T, E]] {.async.} = # Async proc returning a Result
```

### Protocol Inheritance

Protocols inherit from libp2p's `LPProtocol`:

```nim
type WakuLightPush* = ref object of LPProtocol
  rng*: ref rand.HmacDrbgContext
  peerManager*: PeerManager
  pushHandler*: PushMessageHandler
```

### Type Visibility

- Public exports use the `*` suffix: `type WakuFilter* = ...`
- Fields without `*` are module-private (see the sketch below)

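A minimal visibility sketch (the field names are hypothetical):

```nim
type WakuFilter* = ref object # exported type
  maxPeers*: int              # exported field, visible to importing modules
  rng: int                    # unmarked field, private to this module
```
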
## Style Guide Essentials

This section summarizes key Nim style guidelines relevant to this project. Full guide: https://status-im.github.io/nim-style-guide/

### Language Features

Import and Export
- Use explicit import paths with the std/ prefix for stdlib
- Group imports: stdlib, external, internal (separate blocks)
- Export modules whose types appear in the public API
- Avoid include

Macros and Templates
- Avoid macros and templates - prefer simple constructs
- Avoid generating public API with macros
- Put logic in plain procs/funcs; use templates and macros only for glue code

Object Construction
- Prefer Type(field: value) syntax
- Use the Type.init(params) convention for constructors
- Default zero-initialization should be valid state
- Avoid using the result variable for construction

ref object Types
- Avoid ref object unless needed for:
  - Resource handles requiring reference semantics
  - Shared ownership
  - Reference-based data structures (trees, lists)
  - A stable pointer for FFI
- Use explicit ref MyType where possible
- Name ref object types with the Ref suffix: XxxRef

Memory Management
- Prefer stack-based and statically sized types in core code
- Use heap allocation in glue layers
- Avoid alloca
- For FFI: use create/dealloc or createShared/deallocShared

Variable Usage
- Use the most restrictive of const, let, var (prefer const over let over var)
- Prefer expressions for initialization over var-then-assignment (see the sketch below)
- Avoid the result variable - use explicit return or expression-based returns

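A minimal sketch of expression-based initialization:

```nim
let verbose = true

# The whole if/else is an expression; no mutable var needed.
let logLevel =
  if verbose: "TRACE"
  else: "INFO"
```
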
Functions
- Prefer func over proc
- Avoid public (*) symbols not part of the intended API
- Prefer openArray over seq for function parameters

Methods (runtime polymorphism)
- Avoid the method keyword for dynamic dispatch
- Prefer a manual vtable with proc closures for polymorphism (see the sketch below)
- Methods lack support for generics

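A minimal sketch of the manual-vtable pattern (`Logger` and its field are hypothetical names):

```nim
type Logger = object
  # The "vtable": behavior is carried by a closure, not a method.
  logLine: proc(msg: string) {.raises: [], gcsafe.}

proc consoleLogger(): Logger =
  Logger(logLine: proc(msg: string) {.raises: [], gcsafe.} = echo msg)

let log = consoleLogger()
log.logLine("hello")
```
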
Miscellaneous
- Annotate callback proc types with {.raises: [], gcsafe.}
- Avoid the explicit {.inline.} pragma
- Avoid converters
- Avoid finalizers

Type Guidelines

Binary Data
- Use byte for binary data
- Use seq[byte] for dynamic arrays
- Convert string to seq[byte] early if the stdlib returns binary data as a string (see the sketch below)

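A minimal conversion sketch using stew/byteutils (available via the project's stew dependency):

```nim
import stew/byteutils

# Convert text to bytes early so binary data never travels as string.
let payload: seq[byte] = "hello waku".toBytes()
echo payload.toHex() # hex output in lowercase
```
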
Integers
- Prefer signed (int, int64) for counting, lengths, indexing
- Use unsigned with explicit size (uint8, uint64) for binary data, bit ops
- Avoid Natural
- Check ranges before converting to int
- Avoid casting pointers to int
- Avoid range types

Strings
- Use string for text
- Use seq[byte] for binary data instead of string

### Error Handling

Philosophy
- Prefer Result, Opt for explicit error handling
- Use exceptions only for legacy code compatibility

Result Types
- Use Result[T, E] for operations that can fail
- Use cstring for simple error messages: Result[T, cstring]
- Use an enum for errors needing differentiation: Result[T, SomeErrorEnum] (see the sketch below)
- Use Opt[T] for simple optional values
- Annotate all modules: {.push raises: [].} at top

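A minimal sketch of an enum-based error type (all names are hypothetical):

```nim
import results

type StoreError = enum
  NotFound
  Timeout

proc fetch(key: string): Result[string, StoreError] =
  if key.len == 0:
    return err(NotFound)
  ok("value")

# Callers can branch on the concrete error.
let res = fetch("")
if res.isErr and res.error == NotFound:
  echo "no such key"
```
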
Exceptions (when unavoidable)
- Inherit from CatchableError, named XxxError
- Use Defect for panics/logic errors, named XxxDefect
- Annotate functions explicitly: {.raises: [SpecificError].}
- Catch specific error types; avoid catching CatchableError
- Use expression-based try blocks (see the sketch below)
- Isolate legacy exception code with try/except, convert to Result

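A minimal sketch of an expression-based try block:

```nim
import std/strutils

let configured = "250"

# try/except used as an expression yielding a value.
let timeoutMs =
  try:
    parseInt(configured)
  except ValueError:
    5000
```
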
Common Defect Sources
- Overflow in signed arithmetic
- Array/seq indexing with []
- Implicit range type conversions

Status Codes
- Avoid the status code pattern
- Use Result instead

### Library Usage

Standard Library
- Use judiciously, prefer focused packages
- Prefer these replacements:
  - async: chronos
  - bitops: stew/bitops2
  - endians: stew/endians2
  - exceptions: results
  - io: stew/io2

Results Library
- Use cstring errors for diagnostics without differentiation
- Use enum errors when the caller needs to act on specific errors
- Use complex types when additional error context is needed
- Use the isOkOr pattern for chaining

Wrappers (C/FFI)
- Prefer native Nim when available
- For C libraries: use {.compile.} to build from source
- Create xxx_abi.nim for the raw ABI wrapper
- Avoid C++ libraries

Miscellaneous
- Print hex output in lowercase, accept both cases

### Common Pitfalls

- Defects are not tracked by {.raises.}
- nil ref access causes runtime crashes
- The result variable disables branch checking
- The exception hierarchy is unclear between Nim versions
- Range types have compiler bugs
- Finalizers infect all instances of a type

## Common Workflows

### Adding a New Protocol

1. Create directory: `waku/waku_myprotocol/`
2. Define core files:
   - `rpc.nim` - Message types
   - `rpc_codec.nim` - Protobuf encoding
   - `protocol.nim` - Protocol handler
   - `client.nim` - Client API
   - `common.nim` - Shared types
3. Define the protocol type in `protocol.nim`:

   ```nim
   type WakuMyProtocol* = ref object of LPProtocol
     peerManager: PeerManager
     # ... fields
   ```

4. Implement the request handler
5. Mount in WakuNode (`waku/node/waku_node.nim`)
6. Add tests in `tests/waku_myprotocol/`
7. Export the module via `waku/waku_myprotocol.nim`

### Adding a REST API Endpoint

1. Define the handler in `waku/rest_api/endpoint/myprotocol/`
2. Implement the endpoint following this pattern:

   ```nim
   proc installMyProtocolApiHandlers*(router: var RestRouter, node: WakuNode) =
     router.api(MethodGet, "/waku/v2/myprotocol/endpoint") do () -> RestApiResponse:
       # Implementation
       return RestApiResponse.jsonResponse(data, status = Http200)
   ```

3. Register in `waku/rest_api/handlers.nim`

### Adding a Database Migration

For message_store (SQLite):

1. Create `migrations/message_store/NNNNN_description.up.sql`
2. Create the corresponding `.down.sql` for rollback
3. Increment the version number sequentially
4. Test the migration locally before committing

For PostgreSQL: add it in `migrations/message_store_postgres/`

### Running a Single Test During Development

```bash
# Build and run the test binary
make test tests/waku_filter_v2/test_waku_client.nim

# Binary location
./build/tests/waku_filter_v2/test_waku_client.nim.bin

# Or combine with a specific test name
make test tests/waku_filter_v2/test_waku_client.nim "specific test name"
```

### Debugging with Chronicles

Set the log level and filter topics:

```bash
nim c -r \
  -d:chronicles_log_level=TRACE \
  -d:chronicles_disabled_topics="eth,dnsdisc" \
  tests/mytest.nim
```

## Key Constraints

### Vendor Directory

- Never edit files directly in vendor - it is populated from git submodules, so local edits are overwritten
- Always run `make update` after pulling changes
- Managed by `nimbus-build-system`

### Chronicles Performance

- Log levels are configured at compile time for performance
- Runtime filtering is available but should be used sparingly: `-d:chronicles_runtime_filtering=on`
- Default sinks are optimized for production

### Memory Management

- Uses `refc` (reference counting with cycle collection)
- Automatically enforced by the build system (hardcoded in `waku.nimble`)
- Do not override unless absolutely necessary, as it breaks compatibility

### RLN Dependencies

- RLN code links against the Rust-built `librln`, so building it from source requires a Rust toolchain
- Pre-built `librln` libraries are checked into the repository

## Quick Reference

Language: Nim 2.x | License: MIT or Apache 2.0

### Important Files

- `Makefile` - Primary build interface
- `waku.nimble` - Package definition and build tasks (called via nimbus-build-system)
- `vendor/nimbus-build-system/` - Status's build framework
- `waku/node/waku_node.nim` - Core node implementation
- `apps/wakunode2/wakunode2.nim` - Main CLI application
- `waku/factory/waku_conf.nim` - Configuration types
- `library/libwaku.nim` - C bindings entry point

### Testing Entry Points

- `tests/all_tests_waku.nim` - All Waku protocol tests
- `tests/all_tests_wakunode2.nim` - Node application tests
- `tests/all_tests_common.nim` - Common utilities tests

### Key Dependencies

- `chronos` - Async framework
- `nim-results` - Result type for error handling
- `chronicles` - Logging
- `libp2p` - P2P networking
- `confutils` - CLI argument parsing
- `presto` - REST server
- `nimcrypto` - Cryptographic primitives

Note: For specific version requirements, check `waku.nimble`.

CHANGELOG.md

## v0.37.1-beta (2025-12-10)

### Bug Fixes

- Remove ENR cache from peer exchange ([#3652](https://github.com/logos-messaging/logos-messaging-nim/pull/3652)) ([7920368a](https://github.com/logos-messaging/logos-messaging-nim/commit/7920368a36687cd5f12afa52d59866792d8457ca))

## v0.37.0-beta (2025-10-01)

### Notes

- Deprecated parameters:
  - `tree_path` and `rlnDB` (RLN-related storage paths)
  - `--dns-discovery` (fully removed, including dns-discovery-name-server)
  - `keepAlive` (deprecated, config updated accordingly)
- Legacy `store` protocol is no longer supported by default.
- Improved sharding configuration: it is now explicit, and shard-specific metrics were added.
- Mix nodes are limited to IPv4 addresses only.
- [lightpush legacy](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/19/lightpush.md) is being deprecated. Use [lightpush v3](https://github.com/waku-org/specs/blob/master/standards/core/lightpush.md) instead.

### Features

- Waku API: create node via API ([#3580](https://github.com/waku-org/nwaku/pull/3580)) ([bc8acf76](https://github.com/waku-org/nwaku/commit/bc8acf76))
- Waku Sync: full topic support ([#3275](https://github.com/waku-org/nwaku/pull/3275)) ([9327da5a](https://github.com/waku-org/nwaku/commit/9327da5a))
- Mix PoC implementation ([#3284](https://github.com/waku-org/nwaku/pull/3284)) ([eb7a3d13](https://github.com/waku-org/nwaku/commit/eb7a3d13))
- Rendezvous: add request interval option ([#3569](https://github.com/waku-org/nwaku/pull/3569)) ([cc7a6406](https://github.com/waku-org/nwaku/commit/cc7a6406))
- Shard-specific metrics tracking ([#3520](https://github.com/waku-org/nwaku/pull/3520)) ([c3da29fd](https://github.com/waku-org/nwaku/commit/c3da29fd))
- Libwaku: build Windows DLL for Status-go ([#3460](https://github.com/waku-org/nwaku/pull/3460)) ([5c38a53f](https://github.com/waku-org/nwaku/commit/5c38a53f))
- RLN: add Stateless RLN support ([#3621](https://github.com/waku-org/nwaku/pull/3621))
- LOG: Raise log level of messages from debug to info for better visibility ([#3622](https://github.com/waku-org/nwaku/pull/3622))

### Bug Fixes

- Prevent invalid pubsub topic subscription via Relay REST API ([#3559](https://github.com/waku-org/nwaku/pull/3559)) ([a36601ab](https://github.com/waku-org/nwaku/commit/a36601ab))
- Fixed node crash when RLN is unregistered ([#3573](https://github.com/waku-org/nwaku/pull/3573)) ([3d0c6279](https://github.com/waku-org/nwaku/commit/3d0c6279))
- REST: fixed sync protocol issues ([#3503](https://github.com/waku-org/nwaku/pull/3503)) ([393e3cce](https://github.com/waku-org/nwaku/commit/393e3cce))
- Regex pattern fix for `username:password@` in URLs ([#3517](https://github.com/waku-org/nwaku/pull/3517)) ([89a3f735](https://github.com/waku-org/nwaku/commit/89a3f735))
- Sharding: applied modulus fix ([#3530](https://github.com/waku-org/nwaku/pull/3530)) ([f68d7999](https://github.com/waku-org/nwaku/commit/f68d7999))
- Metrics: switched to counter instead of gauge ([#3355](https://github.com/waku-org/nwaku/pull/3355)) ([a27eec90](https://github.com/waku-org/nwaku/commit/a27eec90))
- Fixed lightpush metrics and diagnostics ([#3486](https://github.com/waku-org/nwaku/pull/3486)) ([0ed3fc80](https://github.com/waku-org/nwaku/commit/0ed3fc80))
- Misc sync, dashboard, and CI fixes ([#3434](https://github.com/waku-org/nwaku/pull/3434), [#3508](https://github.com/waku-org/nwaku/pull/3508), [#3464](https://github.com/waku-org/nwaku/pull/3464))
- Raise log level of numerous operational messages from debug to info for better visibility ([#3622](https://github.com/waku-org/nwaku/pull/3622))

### Changes

- Enable peer-exchange by default ([#3557](https://github.com/waku-org/nwaku/pull/3557)) ([7df526f8](https://github.com/waku-org/nwaku/commit/7df526f8))
- Refactor peer-exchange client and service implementations ([#3523](https://github.com/waku-org/nwaku/pull/3523)) ([4379f9ec](https://github.com/waku-org/nwaku/commit/4379f9ec))
- Updated rendezvous to use callback-based shard/capability updates ([#3558](https://github.com/waku-org/nwaku/pull/3558)) ([028bf297](https://github.com/waku-org/nwaku/commit/028bf297))
- Config updates and explicit sharding setup ([#3468](https://github.com/waku-org/nwaku/pull/3468)) ([994d485b](https://github.com/waku-org/nwaku/commit/994d485b))
- Bumped libp2p to v1.13.0 ([#3574](https://github.com/waku-org/nwaku/pull/3574)) ([b1616e55](https://github.com/waku-org/nwaku/commit/b1616e55))
- Removed legacy dependencies (e.g., libpcre in Docker builds) ([#3552](https://github.com/waku-org/nwaku/pull/3552)) ([4db4f830](https://github.com/waku-org/nwaku/commit/4db4f830))
- Benchmarks for RLN proof generation & verification ([#3567](https://github.com/waku-org/nwaku/pull/3567)) ([794c3a85](https://github.com/waku-org/nwaku/commit/794c3a85))
- Various CI/CD & infra updates ([#3515](https://github.com/waku-org/nwaku/pull/3515), [#3505](https://github.com/waku-org/nwaku/pull/3505))

### This release supports the following [libp2p protocols](https://docs.libp2p.io/concepts/protocols/):

| Protocol | Spec status | Protocol id |
| ---: | :---: | :--- |
| [`11/WAKU2-RELAY`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/11/relay.md) | `stable` | `/vac/waku/relay/2.0.0` |
| [`12/WAKU2-FILTER`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/12/filter.md) | `draft` | `/vac/waku/filter/2.0.0-beta1` <br />`/vac/waku/filter-subscribe/2.0.0-beta1` <br />`/vac/waku/filter-push/2.0.0-beta1` |
| [`13/WAKU2-STORE`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/13/store.md) | `draft` | `/vac/waku/store/2.0.0-beta4` |
| [`19/WAKU2-LIGHTPUSH`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/19/lightpush.md) | `draft` | `/vac/waku/lightpush/2.0.0-beta1` |
| [`WAKU2-LIGHTPUSH v3`](https://github.com/waku-org/specs/blob/master/standards/core/lightpush.md) | `draft` | `/vac/waku/lightpush/3.0.0` |
| [`66/WAKU2-METADATA`](https://github.com/waku-org/specs/blob/master/standards/core/metadata.md) | `raw` | `/vac/waku/metadata/1.0.0` |
| [`WAKU-SYNC`](https://github.com/waku-org/specs/blob/master/standards/core/sync.md) | `draft` | `/vac/waku/sync/1.0.0` |

## v0.36.0 (2025-06-20)

### Notes

- Extended REST API for better debugging
  - Extended `/health` report
  - Very detailed access to peers and actual status through the [`/admin/v1/peers/...` endpoints](https://waku-org.github.io/waku-rest-api/#get-/admin/v1/peers/stats)
  - Dynamic log level change with [`/admin/v1/log-level`](https://waku-org.github.io/waku-rest-api/#post-/admin/v1/log-level/-logLevel-)

- The `rln-relay-eth-client-address` parameter must now be passed as an array of RPC addresses.
- New `preset` parameter: `preset=twn` selects the RLN-protected Waku Network (cluster 1) and overrides other values.
- Removed the `dns-addrs` parameter as it was duplicated and unused.
- Removed the `rln-relay-id-key`, `rln-relay-id-commitment-key`, `rln-relay-bandwidth-threshold` parameters.
- Effectively removed `pubsub-topic`, which was deprecated in `v0.33.0`.
- Removed the `store-sync-max-payload-size` parameter.
- Removed the `dns-discovery-name-server` and `discv5-only` parameters.

### Features

- Update implementation for new contract abi ([#3390](https://github.com/waku-org/nwaku/issues/3390)) ([ee4058b2d](https://github.com/waku-org/nwaku/commit/ee4058b2d))
- Lightpush v3 for lite-protocol-tester ([#3455](https://github.com/waku-org/nwaku/issues/3455)) ([3f3c59488](https://github.com/waku-org/nwaku/commit/3f3c59488))
- Retrieve metrics from libwaku ([#3452](https://github.com/waku-org/nwaku/issues/3452)) ([f016ede60](https://github.com/waku-org/nwaku/commit/f016ede60))
- Dynamic logging via REST API ([#3451](https://github.com/waku-org/nwaku/issues/3451)) ([9fe8ef8d2](https://github.com/waku-org/nwaku/commit/9fe8ef8d2))
- Add waku_disconnect_all_peers to libwaku ([#3438](https://github.com/waku-org/nwaku/issues/3438)) ([7f51d103b](https://github.com/waku-org/nwaku/commit/7f51d103b))
- Extend node /health REST endpoint with all protocol's state ([#3419](https://github.com/waku-org/nwaku/issues/3419)) ([1632496a2](https://github.com/waku-org/nwaku/commit/1632496a2))
- Deprecate sync / local merkle tree ([#3312](https://github.com/waku-org/nwaku/issues/3312)) ([50fe7d727](https://github.com/waku-org/nwaku/commit/50fe7d727))
- Refactor waku sync DOS protection ([#3391](https://github.com/waku-org/nwaku/issues/3391)) ([a81f9498c](https://github.com/waku-org/nwaku/commit/a81f9498c))
- Waku Sync dashboard new panel & update ([#3379](https://github.com/waku-org/nwaku/issues/3379)) ([5ed6aae10](https://github.com/waku-org/nwaku/commit/5ed6aae10))
- Enhance Waku Sync logs and metrics ([#3370](https://github.com/waku-org/nwaku/issues/3370)) ([f6c680a46](https://github.com/waku-org/nwaku/commit/f6c680a46))
- Add waku_get_connected_peers_info to libwaku ([#3356](https://github.com/waku-org/nwaku/issues/3356)) ([0eb9c6200](https://github.com/waku-org/nwaku/commit/0eb9c6200))
- Add waku_relay_get_peers_in_mesh to libwaku ([#3352](https://github.com/waku-org/nwaku/issues/3352)) ([ef9074443](https://github.com/waku-org/nwaku/commit/ef9074443))
- Add waku_relay_get_connected_peers to libwaku ([#3353](https://github.com/waku-org/nwaku/issues/3353)) ([7250d7392](https://github.com/waku-org/nwaku/commit/7250d7392))
- Introduce `preset` option ([#3346](https://github.com/waku-org/nwaku/issues/3346)) ([0eaf90465](https://github.com/waku-org/nwaku/commit/0eaf90465))
- Add store sync dashboard panel ([#3307](https://github.com/waku-org/nwaku/issues/3307)) ([ef8ee233f](https://github.com/waku-org/nwaku/commit/ef8ee233f))

### Bug Fixes

- Fix typo from DIRVER to DRIVER ([#3442](https://github.com/waku-org/nwaku/issues/3442)) ([b9a4d7702](https://github.com/waku-org/nwaku/commit/b9a4d7702))
- Fix discv5 protocol id in libwaku ([#3447](https://github.com/waku-org/nwaku/issues/3447)) ([f7be4c2f0](https://github.com/waku-org/nwaku/commit/f7be4c2f0))
- Fix dnsresolver ([#3440](https://github.com/waku-org/nwaku/issues/3440)) ([e42e28cc6](https://github.com/waku-org/nwaku/commit/e42e28cc6))
- Misc sync fixes, added debug logging ([#3411](https://github.com/waku-org/nwaku/issues/3411)) ([b9efa874d](https://github.com/waku-org/nwaku/commit/b9efa874d))
- Relay unsubscribe ([#3422](https://github.com/waku-org/nwaku/issues/3422)) ([9fc631e10](https://github.com/waku-org/nwaku/commit/9fc631e10))
- Fix build_rln.sh update version to download v0.7.0 ([#3425](https://github.com/waku-org/nwaku/issues/3425)) ([2678303bf](https://github.com/waku-org/nwaku/commit/2678303bf))
- Timestamp based validation ([#3406](https://github.com/waku-org/nwaku/issues/3406)) ([1512bdaf0](https://github.com/waku-org/nwaku/commit/1512bdaf0))
- Enable WebSocket connection also in case only websocket-secure-support enabled ([#3417](https://github.com/waku-org/nwaku/issues/3417)) ([698fe6525](https://github.com/waku-org/nwaku/commit/698fe6525))
- Fix addPeer could unintentionally override metadata of previously stored peer with defaults and empty ([#3403](https://github.com/waku-org/nwaku/issues/3403)) ([5cccaaac6](https://github.com/waku-org/nwaku/commit/5cccaaac6))
- Fix bad HttpCode conversion, add missing lightpush v3 rest api tests ([#3389](https://github.com/waku-org/nwaku/issues/3389)) ([7ff055e42](https://github.com/waku-org/nwaku/commit/7ff055e42))
- Adjust mistaken comments and broken link ([#3381](https://github.com/waku-org/nwaku/issues/3381)) ([237f7abbb](https://github.com/waku-org/nwaku/commit/237f7abbb))
- Avoid libwaku's redundant allocs ([#3380](https://github.com/waku-org/nwaku/issues/3380)) ([ac454a30b](https://github.com/waku-org/nwaku/commit/ac454a30b))
- Avoid performing nil check for userData ([#3365](https://github.com/waku-org/nwaku/issues/3365)) ([b8707b6a5](https://github.com/waku-org/nwaku/commit/b8707b6a5))
- Fix waku sync timing ([#3337](https://github.com/waku-org/nwaku/issues/3337)) ([b01b1837d](https://github.com/waku-org/nwaku/commit/b01b1837d))
- Fix filter out ephemeral msg from waku sync ([#3332](https://github.com/waku-org/nwaku/issues/3332)) ([4b963d8f5](https://github.com/waku-org/nwaku/commit/4b963d8f5))
- Apply latest nph formatting ([#3334](https://github.com/waku-org/nwaku/issues/3334)) ([77105a6c2](https://github.com/waku-org/nwaku/commit/77105a6c2))
- waku sync 2.0 codecs ENR support ([#3326](https://github.com/waku-org/nwaku/issues/3326)) ([bf735e777](https://github.com/waku-org/nwaku/commit/bf735e777))
- waku sync mounting ([#3321](https://github.com/waku-org/nwaku/issues/3321)) ([380d2e338](https://github.com/waku-org/nwaku/commit/380d2e338))
- Fix rest-relay-cache-capacity ([#3454](https://github.com/waku-org/nwaku/issues/3454)) ([fed4dc280](https://github.com/waku-org/nwaku/commit/fed4dc280))

### Changes

- Lower waku sync log lvl ([#3461](https://github.com/waku-org/nwaku/issues/3461)) ([4277a5349](https://github.com/waku-org/nwaku/commit/4277a5349))
- Refactor to unify online and health monitors ([#3456](https://github.com/waku-org/nwaku/issues/3456)) ([2e40f2971](https://github.com/waku-org/nwaku/commit/2e40f2971))
- Refactor rm discv5-only ([#3453](https://github.com/waku-org/nwaku/issues/3453)) ([b998430d5](https://github.com/waku-org/nwaku/commit/b998430d5))
- Add extra debug REST helper via getting peer statistics ([#3443](https://github.com/waku-org/nwaku/issues/3443)) ([f4ad7a332](https://github.com/waku-org/nwaku/commit/f4ad7a332))
- Expose online state in libwaku ([#3433](https://github.com/waku-org/nwaku/issues/3433)) ([e7f5c8cb2](https://github.com/waku-org/nwaku/commit/e7f5c8cb2))
- Add heaptrack support build for Nim v2.0.12 builds ([#3424](https://github.com/waku-org/nwaku/issues/3424)) ([91885fb9e](https://github.com/waku-org/nwaku/commit/91885fb9e))
- Remove debug for js-waku ([#3423](https://github.com/waku-org/nwaku/issues/3423)) ([5628dc6ad](https://github.com/waku-org/nwaku/commit/5628dc6ad))
- Bump dependencies for v0.36 ([#3410](https://github.com/waku-org/nwaku/issues/3410)) ([005815746](https://github.com/waku-org/nwaku/commit/005815746))
- Enhance feedback on error CLI ([#3405](https://github.com/waku-org/nwaku/issues/3405)) ([3464d81a6](https://github.com/waku-org/nwaku/commit/3464d81a6))
- Allow multiple rln eth clients ([#3402](https://github.com/waku-org/nwaku/issues/3402)) ([861710bc7](https://github.com/waku-org/nwaku/commit/861710bc7))
- Separate internal and CLI configurations ([#3357](https://github.com/waku-org/nwaku/issues/3357)) ([dd8d66431](https://github.com/waku-org/nwaku/commit/dd8d66431))
- Avoid double relay subscription ([#3396](https://github.com/waku-org/nwaku/issues/3396)) ([7d5eb9374](https://github.com/waku-org/nwaku/commit/7d5eb9374) [#3429](https://github.com/waku-org/nwaku/issues/3429)) ([ee5932ebc](https://github.com/waku-org/nwaku/commit/ee5932ebc))
- Improve disconnection handling ([#3385](https://github.com/waku-org/nwaku/issues/3385)) ([1ec9b8d96](https://github.com/waku-org/nwaku/commit/1ec9b8d96))
- Return all peers from REST admin ([#3395](https://github.com/waku-org/nwaku/issues/3395)) ([f6fdd960f](https://github.com/waku-org/nwaku/commit/f6fdd960f))
- Simplify rln_relay code a little ([#3392](https://github.com/waku-org/nwaku/issues/3392)) ([7a6c00bd0](https://github.com/waku-org/nwaku/commit/7a6c00bd0))
- Extended /admin/v1 REST API with different options to look at the current connected/relay/mesh state of the node ([#3382](https://github.com/waku-org/nwaku/issues/3382)) ([3db00f39e](https://github.com/waku-org/nwaku/commit/3db00f39e))
- Timestamp set to now in publish if not provided ([#3373](https://github.com/waku-org/nwaku/issues/3373)) ([f7b424451](https://github.com/waku-org/nwaku/commit/f7b424451))
- Update lite-protocol-tester for handling shard argument ([#3371](https://github.com/waku-org/nwaku/issues/3371)) ([5ab69edd7](https://github.com/waku-org/nwaku/commit/5ab69edd7))
- Fix unused and deprecated imports ([#3368](https://github.com/waku-org/nwaku/issues/3368)) ([6ebb49a14](https://github.com/waku-org/nwaku/commit/6ebb49a14))
- Expect camelCase JSON for libwaku store queries ([#3366](https://github.com/waku-org/nwaku/issues/3366)) ([ccb4ed51d](https://github.com/waku-org/nwaku/commit/ccb4ed51d))
- Maintenance to c and c++ simple examples ([#3367](https://github.com/waku-org/nwaku/issues/3367)) ([25d30d44d](https://github.com/waku-org/nwaku/commit/25d30d44d))
- Skip two flaky tests ([#3364](https://github.com/waku-org/nwaku/issues/3364)) ([b672617b2](https://github.com/waku-org/nwaku/commit/b672617b2))
- Retrieve protocols in new added peer from discv5 ([#3354](https://github.com/waku-org/nwaku/issues/3354)) ([df58643ea](https://github.com/waku-org/nwaku/commit/df58643ea))
- Better keystore management ([#3358](https://github.com/waku-org/nwaku/issues/3358)) ([a914fdccc](https://github.com/waku-org/nwaku/commit/a914fdccc))
- Remove pubsub topics arguments ([#3350](https://github.com/waku-org/nwaku/issues/3350)) ([9778b45c6](https://github.com/waku-org/nwaku/commit/9778b45c6))
- New performance measurement metrics for non-relay protocols ([#3299](https://github.com/waku-org/nwaku/issues/3299)) ([68c50a09a](https://github.com/waku-org/nwaku/commit/68c50a09a))
- Start triggering CI for windows build ([#3316](https://github.com/waku-org/nwaku/issues/3316)) ([55ac6ba9f](https://github.com/waku-org/nwaku/commit/55ac6ba9f))
- Less logs for rendezvous ([#3319](https://github.com/waku-org/nwaku/issues/3319)) ([6df05bae2](https://github.com/waku-org/nwaku/commit/6df05bae2))
- Add test reporting doc to benchmarks dir ([#3238](https://github.com/waku-org/nwaku/issues/3238)) ([94554a6e0](https://github.com/waku-org/nwaku/commit/94554a6e0))
- Improve epoch monitoring ([#3197](https://github.com/waku-org/nwaku/issues/3197)) ([b0c025f81](https://github.com/waku-org/nwaku/commit/b0c025f81))

### This release supports the following [libp2p protocols](https://docs.libp2p.io/concepts/protocols/):

| Protocol | Spec status | Protocol id |
| ---: | :---: | :--- |
| [`11/WAKU2-RELAY`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/11/relay.md) | `stable` | `/vac/waku/relay/2.0.0` |
| [`12/WAKU2-FILTER`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/12/filter.md) | `draft` | `/vac/waku/filter/2.0.0-beta1` <br />`/vac/waku/filter-subscribe/2.0.0-beta1` <br />`/vac/waku/filter-push/2.0.0-beta1` |
| [`13/WAKU2-STORE`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/13/store.md) | `draft` | `/vac/waku/store/2.0.0-beta4` |
| [`19/WAKU2-LIGHTPUSH`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/19/lightpush.md) | `draft` | `/vac/waku/lightpush/2.0.0-beta1` |
| [`WAKU2-LIGHTPUSH v3`](https://github.com/waku-org/specs/blob/master/standards/core/lightpush.md) | `draft` | `/vac/waku/lightpush/3.0.0` |
| [`66/WAKU2-METADATA`](https://github.com/waku-org/specs/blob/master/standards/core/metadata.md) | `raw` | `/vac/waku/metadata/1.0.0` |
| [`WAKU-SYNC`](https://github.com/waku-org/specs/blob/feat--waku-sync/standards/core/sync.md) | `draft` | `/vac/waku/sync/1.0.0` |

## v0.35.1 (2025-03-30)

### Bug fixes

* Update RLN references ([3287](https://github.com/waku-org/nwaku/pull/3287)) ([ea961fa](https://github.com/waku-org/nwaku/pull/3287/commits/ea961faf4ed4f8287a2043a6b5d84b660745072b))

**Info:** before upgrading to this version, make sure you delete the previous rln_tree folder, i.e., the one that is passed through this CLI: `--rln-relay-tree-path`.

### Features

* lightpush v3 ([#3279](https://github.com/waku-org/nwaku/pull/3279)) ([e0b563ff](https://github.com/waku-org/nwaku/commit/e0b563ffe5af20bd26d37cd9b4eb9ed9eb82ff80))

  An upgrade of the Waku Lightpush protocol with enhanced error handling. Read the specification [here](https://github.com/waku-org/specs/blob/master/standards/core/lightpush.md)

This release supports the following [libp2p protocols](https://docs.libp2p.io/concepts/protocols/):

| Protocol | Spec status | Protocol id |
| ---: | :---: | :--- |
| [`11/WAKU2-RELAY`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/11/relay.md) | `stable` | `/vac/waku/relay/2.0.0` |
| [`12/WAKU2-FILTER`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/12/filter.md) | `draft` | `/vac/waku/filter/2.0.0-beta1` <br />`/vac/waku/filter-subscribe/2.0.0-beta1` <br />`/vac/waku/filter-push/2.0.0-beta1` |
| [`13/WAKU2-STORE`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/13/store.md) | `draft` | `/vac/waku/store/2.0.0-beta4` |
| [`19/WAKU2-LIGHTPUSH`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/19/lightpush.md) | `draft` | `/vac/waku/lightpush/2.0.0-beta1` |
| [`WAKU2-LIGHTPUSH v3`](https://github.com/waku-org/specs/blob/master/standards/core/lightpush.md) | `draft` | `/vac/waku/lightpush/3.0.0` |
| [`66/WAKU2-METADATA`](https://github.com/waku-org/specs/blob/master/standards/core/metadata.md) | `raw` | `/vac/waku/metadata/1.0.0` |
| [`WAKU-SYNC`](https://github.com/waku-org/specs/blob/feat--waku-sync/standards/core/sync.md) | `draft` | `/vac/waku/sync/1.0.0` |

## v0.35.0 (2025-03-03)

### Notes

- Deprecated parameter
  - max-relay-peers

- New parameters
  - relay-service-ratio

    String value describing the peer distribution within the max-connections parameter, as a relay:service percentage ratio. For example, 60:40 means that 60% of max-connections will be used for the relay protocol and the other 40% will be reserved for other service protocols (e.g., filter, lightpush, store, metadata, etc.)

  - rendezvous

    Boolean attribute that optionally activates the Waku rendezvous discovery server. True by default.

### Release highlights

- New filter approach that keeps the push stream open within the subscription period.
- Waku sync protocol.
- Libwaku async.
- Lite-protocol-tester enhancements.
- New panels and metrics in RLN to control outstanding request quota.

### Features

- waku sync shard matching check ([#3259](https://github.com/waku-org/nwaku/issues/3259)) ([42fd6b827](https://github.com/waku-org/nwaku/commit/42fd6b827))
- waku store sync 2.0 config & setup ([#3217](https://github.com/waku-org/nwaku/issues/3217)) ([7f64dc03a](https://github.com/waku-org/nwaku/commit/7f64dc03a))
- waku store sync 2.0 protocols & tests ([#3216](https://github.com/waku-org/nwaku/issues/3216)) ([6ee494d90](https://github.com/waku-org/nwaku/commit/6ee494d90))
- waku store sync 2.0 storage & tests ([#3215](https://github.com/waku-org/nwaku/issues/3215)) ([54a7a6875](https://github.com/waku-org/nwaku/commit/54a7a6875))
- waku store sync 2.0 common types & codec ([#3213](https://github.com/waku-org/nwaku/issues/3213)) ([29fda2dab](https://github.com/waku-org/nwaku/commit/29fda2dab))
- add txhash-based eligibility checks for incentivization PoC ([#3166](https://github.com/waku-org/nwaku/issues/3166)) ([505ec84ce](https://github.com/waku-org/nwaku/commit/505ec84ce))
- connection change event ([#3225](https://github.com/waku-org/nwaku/issues/3225)) ([e81a5517b](https://github.com/waku-org/nwaku/commit/e81a5517b))
- libwaku add protected topic ([#3211](https://github.com/waku-org/nwaku/issues/3211)) ([d932dd10c](https://github.com/waku-org/nwaku/commit/d932dd10c))
- topic health tracking ([#3212](https://github.com/waku-org/nwaku/issues/3212)) ([6020a673b](https://github.com/waku-org/nwaku/commit/6020a673b))
- allowing configuration of application level callbacks ([#3206](https://github.com/waku-org/nwaku/issues/3206)) ([049fbeabb](https://github.com/waku-org/nwaku/commit/049fbeabb))
- waku rendezvous wrapper ([#2962](https://github.com/waku-org/nwaku/issues/2962)) ([650a9487e](https://github.com/waku-org/nwaku/commit/650a9487e))
- making dns discovery async ([#3175](https://github.com/waku-org/nwaku/issues/3175)) ([d7d00bfd7](https://github.com/waku-org/nwaku/commit/d7d00bfd7))
- remove Waku Sync 1.0 & Negentropy ([#3185](https://github.com/waku-org/nwaku/issues/3185)) ([2ab9c3d36](https://github.com/waku-org/nwaku/commit/2ab9c3d36))
- add waku_dial_peer and get_connected_peers to libwaku ([#3149](https://github.com/waku-org/nwaku/issues/3149)) ([507b1fc4d](https://github.com/waku-org/nwaku/commit/507b1fc4d))
- running peer exchange periodically if discv5 is disabled ([#3150](https://github.com/waku-org/nwaku/issues/3150)) ([400d7a54f](https://github.com/waku-org/nwaku/commit/400d7a54f))

### Bug Fixes

- avoid double db migration for sqlite ([#3244](https://github.com/waku-org/nwaku/issues/3244)) ([2ce245354](https://github.com/waku-org/nwaku/commit/2ce245354))
- libwaku waku_relay_unsubscribe ([#3207](https://github.com/waku-org/nwaku/issues/3207)) ([ab0c1d4aa](https://github.com/waku-org/nwaku/commit/ab0c1d4aa))
- libwaku support string and int64 for timestamps ([#3205](https://github.com/waku-org/nwaku/issues/3205)) ([2022f54f5](https://github.com/waku-org/nwaku/commit/2022f54f5))
- lite-protocol-tester receiver exit check ([#3187](https://github.com/waku-org/nwaku/issues/3187)) ([beb21c78f](https://github.com/waku-org/nwaku/commit/beb21c78f))
- linting error ([#3156](https://github.com/waku-org/nwaku/issues/3156)) ([99ac68447](https://github.com/waku-org/nwaku/commit/99ac68447))

### Changes

- more efficient metrics usage ([#3298](https://github.com/waku-org/nwaku/issues/3298)) ([6f004d5d4](https://github.com/waku-org/nwaku/commit/6f004d5d4))([c07e278d8](https://github.com/waku-org/nwaku/commit/c07e278d82c3aa771b9988e85bad7422890e4d74))
- filter refactor subscription management and react when the remote peer closes the stream. See the following commits in chronological order:
  - issue: [#3281](https://github.com/waku-org/nwaku/issues/3281) commit: [5392b8ea4](https://github.com/waku-org/nwaku/commit/5392b8ea4)
  - issue: [#3198](https://github.com/waku-org/nwaku/issues/3198) commit: [287e9b12c](https://github.com/waku-org/nwaku/commit/287e9b12c)
  - issue: [#3267](https://github.com/waku-org/nwaku/issues/3267) commit: [46747fd49](https://github.com/waku-org/nwaku/commit/46747fd49)
- send msg hash as string on libwaku message event ([#3234](https://github.com/waku-org/nwaku/issues/3234)) ([9c209b4c3](https://github.com/waku-org/nwaku/commit/9c209b4c3))
- separate heaptrack from debug build ([#3249](https://github.com/waku-org/nwaku/issues/3249)) ([81f24cc25](https://github.com/waku-org/nwaku/commit/81f24cc25))
- capping mechanism for relay and service connections ([#3184](https://github.com/waku-org/nwaku/issues/3184)) ([2942782f9](https://github.com/waku-org/nwaku/commit/2942782f9))
- add extra migration to sqlite and improving error message ([#3240](https://github.com/waku-org/nwaku/issues/3240)) ([bfd60ceab](https://github.com/waku-org/nwaku/commit/bfd60ceab))
- optimize libwaku size ([#3242](https://github.com/waku-org/nwaku/issues/3242)) ([9c0ad8517](https://github.com/waku-org/nwaku/commit/9c0ad8517))
- golang example end using negentropy dependency plus simple readme.md ([#3235](https://github.com/waku-org/nwaku/issues/3235)) ([0e0fcfb1a](https://github.com/waku-org/nwaku/commit/0e0fcfb1a))
- enhance libwaku store protocol and more ([#3223](https://github.com/waku-org/nwaku/issues/3223)) ([22ce9ee87](https://github.com/waku-org/nwaku/commit/22ce9ee87))
- add two RLN metrics and panel ([#3181](https://github.com/waku-org/nwaku/issues/3181)) ([1b532e8ab](https://github.com/waku-org/nwaku/commit/1b532e8ab))
- libwaku async ([#3180](https://github.com/waku-org/nwaku/issues/3180)) ([47a623541](https://github.com/waku-org/nwaku/commit/47a623541))
- filter protocol in libwaku ([#3177](https://github.com/waku-org/nwaku/issues/3177)) ([f856298ca](https://github.com/waku-org/nwaku/commit/f856298ca))
- add supervisor for lite-protocol-tester infra ([#3176](https://github.com/waku-org/nwaku/issues/3176)) ([a7264d68c](https://github.com/waku-org/nwaku/commit/a7264d68c))
- libwaku better error handling and better waku thread destroy handling ([#3167](https://github.com/waku-org/nwaku/issues/3167)) ([294dd03c4](https://github.com/waku-org/nwaku/commit/294dd03c4))
- libwaku allow several multiaddresses for a single peer in store queries ([#3171](https://github.com/waku-org/nwaku/issues/3171)) ([3cb8ebdd8](https://github.com/waku-org/nwaku/commit/3cb8ebdd8))
- naming connectPeer procedure ([#3157](https://github.com/waku-org/nwaku/issues/3157)) ([b3656d6ee](https://github.com/waku-org/nwaku/commit/b3656d6ee))

This release supports the following [libp2p protocols](https://docs.libp2p.io/concepts/protocols/):

| Protocol | Spec status | Protocol id |
| ---: | :---: | :--- |
| [`11/WAKU2-RELAY`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/11/relay.md) | `stable` | `/vac/waku/relay/2.0.0` |
| [`12/WAKU2-FILTER`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/12/filter.md) | `draft` | `/vac/waku/filter/2.0.0-beta1` <br />`/vac/waku/filter-subscribe/2.0.0-beta1` <br />`/vac/waku/filter-push/2.0.0-beta1` |
| [`13/WAKU2-STORE`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/13/store.md) | `draft` | `/vac/waku/store/2.0.0-beta4` |
| [`19/WAKU2-LIGHTPUSH`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/19/lightpush.md) | `draft` | `/vac/waku/lightpush/2.0.0-beta1` |
| [`66/WAKU2-METADATA`](https://github.com/waku-org/specs/blob/master/standards/core/metadata.md) | `raw` | `/vac/waku/metadata/1.0.0` |
| [`WAKU-SYNC`](https://github.com/waku-org/specs/blob/master/standards/core/sync.md) | `draft` | `/vac/waku/reconciliation/1.0.0` & `/vac/waku/transfer/1.0.0` |

## v0.34.0 (2024-10-29)

### Notes:

* The `--protected-topic` CLI configuration has been removed. The equivalent flag, `--protected-shard`, shall be used instead.

### Features

- change latency buckets ([#3153](https://github.com/waku-org/nwaku/issues/3153)) ([956fde6e](https://github.com/waku-org/nwaku/commit/956fde6e))
- libwaku: ping peer ([#3144](https://github.com/waku-org/nwaku/issues/3144)) ([de11e576](https://github.com/waku-org/nwaku/commit/de11e576))
- initial windows support ([#3107](https://github.com/waku-org/nwaku/issues/3107)) ([ff21c01e](https://github.com/waku-org/nwaku/commit/ff21c01e))
- circuit relay support ([#3112](https://github.com/waku-org/nwaku/issues/3112)) ([cfde7eea](https://github.com/waku-org/nwaku/commit/cfde7eea))

### Bug Fixes

- peer exchange libwaku response handling ([#3141](https://github.com/waku-org/nwaku/issues/3141)) ([76606421](https://github.com/waku-org/nwaku/commit/76606421))
- add more logs, stagger intervals & set prune offset to 10% for waku sync ([#3142](https://github.com/waku-org/nwaku/issues/3142)) ([a386880b](https://github.com/waku-org/nwaku/commit/a386880b))
- add log and archive message ingress for sync ([#3133](https://github.com/waku-org/nwaku/issues/3133)) ([80c7581a](https://github.com/waku-org/nwaku/commit/80c7581a))
- add a limit of max 10 content topics per query ([#3117](https://github.com/waku-org/nwaku/issues/3117)) ([c35dc549](https://github.com/waku-org/nwaku/commit/c35dc549))
- avoid segfault by setting a default num peers requested in Peer eXchange ([#3122](https://github.com/waku-org/nwaku/issues/3122)) ([82fd5dde](https://github.com/waku-org/nwaku/commit/82fd5dde))
- returning peerIds in base 64 ([#3105](https://github.com/waku-org/nwaku/issues/3105)) ([37edaf62](https://github.com/waku-org/nwaku/commit/37edaf62))
- changing libwaku's error handling format ([#3093](https://github.com/waku-org/nwaku/issues/3093)) ([2e6c299d](https://github.com/waku-org/nwaku/commit/2e6c299d))
- remove spammy log ([#3091](https://github.com/waku-org/nwaku/issues/3091)) ([1d2b910f](https://github.com/waku-org/nwaku/commit/1d2b910f))
- avoid out connections leak ([#3077](https://github.com/waku-org/nwaku/issues/3077)) ([eb2bbae6](https://github.com/waku-org/nwaku/commit/eb2bbae6))
- rejecting excess relay connections ([#3065](https://github.com/waku-org/nwaku/issues/3065)) ([8b0884c7](https://github.com/waku-org/nwaku/commit/8b0884c7))
- static linking negentropy in ARM based mac ([#3046](https://github.com/waku-org/nwaku/issues/3046)) ([256b7853](https://github.com/waku-org/nwaku/commit/256b7853))

### Changes

- support ping with multiple multiaddresses and close stream ([#3154](https://github.com/waku-org/nwaku/issues/3154)) ([3665991a](https://github.com/waku-org/nwaku/commit/3665991a))
- liteprotocoltester: easy setup fleets ([#3125](https://github.com/waku-org/nwaku/issues/3125)) ([268e7e66](https://github.com/waku-org/nwaku/commit/268e7e66))
- saving peers enr capabilities ([#3127](https://github.com/waku-org/nwaku/issues/3127)) ([69d9524f](https://github.com/waku-org/nwaku/commit/69d9524f))
- networkmonitor: add missing field on RlnRelay init, set default for num of shard ([#3136](https://github.com/waku-org/nwaku/issues/3136)) ([edcb0e15](https://github.com/waku-org/nwaku/commit/edcb0e15))
- add to libwaku peer id retrieval proc ([#3124](https://github.com/waku-org/nwaku/issues/3124)) ([c5a825e2](https://github.com/waku-org/nwaku/commit/c5a825e2))
- adding to libwaku dial and disconnect by peerIds ([#3111](https://github.com/waku-org/nwaku/issues/3111)) ([25da8102](https://github.com/waku-org/nwaku/commit/25da8102))
- dbconn: add requestId info as a comment in the database logs ([#3110](https://github.com/waku-org/nwaku/issues/3110)) ([30c072a4](https://github.com/waku-org/nwaku/commit/30c072a4))
- improving get_peer_ids_by_protocol by returning the available protocols of connected peers ([#3109](https://github.com/waku-org/nwaku/issues/3109)) ([ed0ee5be](https://github.com/waku-org/nwaku/commit/ed0ee5be))
- remove warnings ([#3106](https://github.com/waku-org/nwaku/issues/3106)) ([c861fa9f](https://github.com/waku-org/nwaku/commit/c861fa9f))
- better store logs ([#3103](https://github.com/waku-org/nwaku/issues/3103)) ([21b03551](https://github.com/waku-org/nwaku/commit/21b03551))
- Improve binding for waku_sync ([#3102](https://github.com/waku-org/nwaku/issues/3102)) ([c3756e3a](https://github.com/waku-org/nwaku/commit/c3756e3a))
- improving and temporarily skipping flaky rln test ([#3094](https://github.com/waku-org/nwaku/issues/3094)) ([a6ed80a5](https://github.com/waku-org/nwaku/commit/a6ed80a5))
- update master after release v0.33.1 ([#3089](https://github.com/waku-org/nwaku/issues/3089)) ([54c3083d](https://github.com/waku-org/nwaku/commit/54c3083d))
- re-arrange function based on responsibility of peer-manager ([#3086](https://github.com/waku-org/nwaku/issues/3086)) ([0f8e8740](https://github.com/waku-org/nwaku/commit/0f8e8740))
- waku_keystore: give some more context in case of error ([#3064](https://github.com/waku-org/nwaku/issues/3064)) ([3ad613ca](https://github.com/waku-org/nwaku/commit/3ad613ca))
- bump negentropy ([#3078](https://github.com/waku-org/nwaku/issues/3078)) ([643ab20f](https://github.com/waku-org/nwaku/commit/643ab20f))
- Optimize store ([#3061](https://github.com/waku-org/nwaku/issues/3061)) ([5875ed63](https://github.com/waku-org/nwaku/commit/5875ed63))
- wrap peer store ([#3051](https://github.com/waku-org/nwaku/issues/3051)) ([729e63f5](https://github.com/waku-org/nwaku/commit/729e63f5))
- disabling metrics for libwaku ([#3058](https://github.com/waku-org/nwaku/issues/3058)) ([b358c90f](https://github.com/waku-org/nwaku/commit/b358c90f))
- test peer connection management ([#3049](https://github.com/waku-org/nwaku/issues/3049)) ([711e7db1](https://github.com/waku-org/nwaku/commit/711e7db1))
- updating upload and download artifact actions to v4 ([#3047](https://github.com/waku-org/nwaku/issues/3047)) ([7c4a9717](https://github.com/waku-org/nwaku/commit/7c4a9717))
- Better database query logs and logarithmic scale in grafana store panels ([#3048](https://github.com/waku-org/nwaku/issues/3048)) ([d68b06f1](https://github.com/waku-org/nwaku/commit/d68b06f1))
- extending store metrics ([#3042](https://github.com/waku-org/nwaku/issues/3042)) ([fd83b42f](https://github.com/waku-org/nwaku/commit/fd83b42f))

This release supports the following [libp2p protocols](https://docs.libp2p.io/concepts/protocols/):

| Protocol | Spec status | Protocol id |
| ---: | :---: | :--- |
| [`11/WAKU2-RELAY`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/11/relay.md) | `stable` | `/vac/waku/relay/2.0.0` |
| [`12/WAKU2-FILTER`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/12/filter.md) | `draft` | `/vac/waku/filter/2.0.0-beta1` <br />`/vac/waku/filter-subscribe/2.0.0-beta1` <br />`/vac/waku/filter-push/2.0.0-beta1` |
| [`13/WAKU2-STORE`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/13/store.md) | `draft` | `/vac/waku/store/2.0.0-beta4` |
| [`19/WAKU2-LIGHTPUSH`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/19/lightpush.md) | `draft` | `/vac/waku/lightpush/2.0.0-beta1` |
| [`66/WAKU2-METADATA`](https://github.com/waku-org/specs/blob/master/standards/core/metadata.md) | `raw` | `/vac/waku/metadata/1.0.0` |
| [`WAKU-SYNC`](https://github.com/waku-org/specs/blob/master/standards/core/sync.md) | `draft` | `/vac/waku/sync/1.0.0` |

## v0.33.1 (2024-10-03)

### Bug fixes

* Fix out connections leak ([3077](https://github.com/waku-org/nwaku/pull/3077)) ([eb2bbae6](https://github.com/waku-org/nwaku/commit/eb2bbae6))

This release supports the following [libp2p protocols](https://docs.libp2p.io/concepts/protocols/):

| Protocol | Spec status | Protocol id |
| ---: | :---: | :--- |
| [`11/WAKU2-RELAY`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/11/relay.md) | `stable` | `/vac/waku/relay/2.0.0` |
| [`12/WAKU2-FILTER`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/12/filter.md) | `draft` | `/vac/waku/filter/2.0.0-beta1` <br />`/vac/waku/filter-subscribe/2.0.0-beta1` <br />`/vac/waku/filter-push/2.0.0-beta1` |
| [`13/WAKU2-STORE`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/13/store.md) | `draft` | `/vac/waku/store/2.0.0-beta4` |
| [`19/WAKU2-LIGHTPUSH`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/19/lightpush.md) | `draft` | `/vac/waku/lightpush/2.0.0-beta1` |
| [`66/WAKU2-METADATA`](https://github.com/waku-org/specs/blob/master/standards/core/metadata.md) | `raw` | `/vac/waku/metadata/1.0.0` |
| [`WAKU-SYNC`](https://github.com/waku-org/specs/blob/feat--waku-sync/standards/core/sync.md) | `draft` | `/vac/waku/sync/1.0.0` |

## v0.33.0 (2024-09-30)

#### Notes:

* The `--pubsub-topic` CLI configuration has been deprecated and support for it will be removed in release v0.35.0. In order to migrate, please use the `--shard` configuration instead. For example, instead of `--pubsub-topic=/waku/2/rs/<CLUSTER_ID>/<SHARD_ID>`, use `--cluster-id=<CLUSTER_ID>` once and `--shard=<SHARD_ID>` for each subscribed shard
* The `--rest-private` CLI configuration has been removed. Please delete any reference to it when running your nodes
* Introduced the `--reliability` CLI configuration, activating the new experimental StoreV3 message confirmation protocol
* DOS protection configurations of non-relay, req/resp protocols have changed
  * `--request-rate-limit` and `--request-rate-period` options are no longer supported.
  * The `--rate-limit` CLI configuration is now available.
    - The new flag can describe various rate-limit requirements for each supported protocol. The setting can be repeated; each instance defines exactly one rate-limit option.
    - Format is `<protocol>:volume/period<time-unit>`
    - If no protocol is given, the setting is taken as the default for unset protocols. Ex: 80/2s
    - Supported protocols are: lightpush|filter|px|store|storev2|storev3
    - `volume` must be an integer value, representing the number of requests allowed over the period.
    - `period <time-unit>` must be an integer with one of h|m|s|ms as the unit
    - If not set, no rate limit is applied to request/response protocols, except for the filter protocol.

### Release highlights

* A new experimental reliability protocol has been implemented, leveraging StoreV3 to confirm message delivery
* The Peer Exchange protocol can now be protected by rate-limit boundary checks.
* Fine-grained configuration of DOS protection is available with this release. See "Notes" above.

### Bug Fixes

- rejecting excess relay connections ([#3063](https://github.com/waku-org/nwaku/issues/3063)) ([8b0884c7](https://github.com/waku-org/nwaku/commit/8b0884c7))
- make Peer Exchange's rpc status_code optional for backward compatibility ([#3059](https://github.com/waku-org/nwaku/pull/3059)) ([5afa9b13](https://github.com/waku-org/nwaku/commit/5afa9b13))
- px protocol decode - do not treat missing response field as error ([#3054](https://github.com/waku-org/nwaku/issues/3054)) ([9b445ac4](https://github.com/waku-org/nwaku/commit/9b445ac4))
- setting up node with modified config ([#3036](https://github.com/waku-org/nwaku/issues/3036)) ([8f289925](https://github.com/waku-org/nwaku/commit/8f289925))
- get back health check for postgres legacy ([#3010](https://github.com/waku-org/nwaku/issues/3010)) ([5a0edff7](https://github.com/waku-org/nwaku/commit/5a0edff7))
- libnegentropy integration ([#2996](https://github.com/waku-org/nwaku/issues/2996)) ([c3cb06ac](https://github.com/waku-org/nwaku/commit/c3cb06ac))
- peer-exchange issue ([#2889](https://github.com/waku-org/nwaku/issues/2889)) ([43157102](https://github.com/waku-org/nwaku/commit/43157102))

### Changes

- append current version in agentString which is used by the identify protocol ([#3057](https://github.com/waku-org/nwaku/pull/3057)) ([368bb3c1](https://github.com/waku-org/nwaku/commit/368bb3c1))
- rate limit peer exchange protocol, enhanced response status in RPC ([#3035](https://github.com/waku-org/nwaku/issues/3035)) ([0a7f16a3](https://github.com/waku-org/nwaku/commit/0a7f16a3))
- Switch libnegentropy library build from shared to static linkage ([#3041](https://github.com/waku-org/nwaku/issues/3041)) ([83f25c3e](https://github.com/waku-org/nwaku/commit/83f25c3e))
- libwaku reduce repetitive code by adding a template handling resp returns ([#3032](https://github.com/waku-org/nwaku/issues/3032)) ([1713f562](https://github.com/waku-org/nwaku/commit/1713f562))
- libwaku - extending the library with peer_manager and peer_exchange features ([#3026](https://github.com/waku-org/nwaku/issues/3026)) ([5ea1cf0c](https://github.com/waku-org/nwaku/commit/5ea1cf0c))
- use submodule nph in CI to check lint ([#3027](https://github.com/waku-org/nwaku/issues/3027)) ([ce9a8c46](https://github.com/waku-org/nwaku/commit/ce9a8c46))
- deprecating pubsub topic ([#2997](https://github.com/waku-org/nwaku/issues/2997)) ([a3cd2a1a](https://github.com/waku-org/nwaku/commit/a3cd2a1a))
- lightpush - error metric less variable by only setting a fixed string ([#3020](https://github.com/waku-org/nwaku/issues/3020)) ([d3e6717a](https://github.com/waku-org/nwaku/commit/d3e6717a))
- enhance libpq management ([#3015](https://github.com/waku-org/nwaku/issues/3015)) ([45319f09](https://github.com/waku-org/nwaku/commit/45319f09))
- per limit split of PostgreSQL queries ([#3008](https://github.com/waku-org/nwaku/issues/3008)) ([e1e05afb](https://github.com/waku-org/nwaku/commit/e1e05afb))
- Added metrics to liteprotocoltester ([#3002](https://github.com/waku-org/nwaku/issues/3002)) ([8baf627f](https://github.com/waku-org/nwaku/commit/8baf627f))
- extending store metrics ([#2995](https://github.com/waku-org/nwaku/issues/2995)) ([fd83b42f](https://github.com/waku-org/nwaku/commit/fd83b42f))
- Better timing and requestId detail for slower store db queries ([#2994](https://github.com/waku-org/nwaku/issues/2994)) ([e8a49b76](https://github.com/waku-org/nwaku/commit/e8a49b76))
- remove unused setting from external_config.nim ([#3004](https://github.com/waku-org/nwaku/issues/3004)) ([fd84363e](https://github.com/waku-org/nwaku/commit/fd84363e))
- delivery monitor for store v3 reliability protocol ([#2977](https://github.com/waku-org/nwaku/issues/2977)) ([0f68274c](https://github.com/waku-org/nwaku/commit/0f68274c))

This release supports the following [libp2p protocols](https://docs.libp2p.io/concepts/protocols/):

| Protocol | Spec status | Protocol id |
| ---: | :---: | :--- |
| [`11/WAKU2-RELAY`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/11/relay.md) | `stable` | `/vac/waku/relay/2.0.0` |
| [`12/WAKU2-FILTER`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/12/filter.md) | `draft` | `/vac/waku/filter/2.0.0-beta1` <br />`/vac/waku/filter-subscribe/2.0.0-beta1` <br />`/vac/waku/filter-push/2.0.0-beta1` |
| [`13/WAKU2-STORE`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/13/store.md) | `draft` | `/vac/waku/store/2.0.0-beta4` |
| [`19/WAKU2-LIGHTPUSH`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/19/lightpush.md) | `draft` | `/vac/waku/lightpush/2.0.0-beta1` |
| [`66/WAKU2-METADATA`](https://github.com/waku-org/specs/blob/master/standards/core/metadata.md) | `raw` | `/vac/waku/metadata/1.0.0` |
| [`WAKU-SYNC`](https://github.com/waku-org/specs/blob/feat--waku-sync/standards/core/sync.md) | `draft` | `/vac/waku/sync/1.0.0` |

## v0.32.0 (2024-08-30)

#### Notes:

* A new `discv5-only` CLI flag was introduced, which, if set to `true`, optimizes the node for running only the DiscV5 service
* The `protected-topic` CLI config item has been deprecated in favor of the new `protected-shard` configuration. Protected topics are still supported, but will be removed completely in two releases' time, in `v0.34.0`
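
As a rough sketch, the options above could be passed as follows; the flag names come from these notes, while the exact value syntax and the `<public-key>` placeholder are illustrative assumptions rather than confirmed usage:

```bash
# Hypothetical wakunode2 invocations (flag names from the release notes,
# value formats are assumptions):

# Optimize the node for running only the DiscV5 discovery service
./build/wakunode2 --discv5-only=true

# Deprecated: topic-based message protection (to be removed in v0.34.0)
./build/wakunode2 --protected-topic="/waku/2/rs/0/1:<public-key>"

# Preferred: shard-based equivalent
./build/wakunode2 --protected-shard="1:<public-key>"
```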

### Release highlights

* Merged Nwaku Sync protocol for synchronizing store nodes
* Added Store Resume mechanism to retrieve messages sent when the node was offline

### Features

- Nwaku Sync ([#2403](https://github.com/waku-org/nwaku/issues/2403)) ([2cc86c51](https://github.com/waku-org/nwaku/commit/2cc86c51))
- misc. updates for discovery network analysis ([#2930](https://github.com/waku-org/nwaku/issues/2930)) ([4340eb75](https://github.com/waku-org/nwaku/commit/4340eb75))
- store resume ([#2919](https://github.com/waku-org/nwaku/issues/2919)) ([aed2a113](https://github.com/waku-org/nwaku/commit/aed2a113))

### Bug Fixes

- return on insert error ([#2956](https://github.com/waku-org/nwaku/issues/2956)) ([5f0fbd78](https://github.com/waku-org/nwaku/commit/5f0fbd78))
- network monitor improvements ([#2939](https://github.com/waku-org/nwaku/issues/2939)) ([80583237](https://github.com/waku-org/nwaku/commit/80583237))
- add back waku discv5 metrics ([#2927](https://github.com/waku-org/nwaku/issues/2927)) ([e4e01fab](https://github.com/waku-org/nwaku/commit/e4e01fab))
- update and shift unittest ([#2934](https://github.com/waku-org/nwaku/issues/2934)) ([08973add](https://github.com/waku-org/nwaku/commit/08973add))
- handle rln-relay-message-limit ([#2867](https://github.com/waku-org/nwaku/issues/2867)) ([8d107b0d](https://github.com/waku-org/nwaku/commit/8d107b0d))

### Changes

- libwaku retrieve my enr and adapt golang example ([#2987](https://github.com/waku-org/nwaku/issues/2987)) ([1ff9f1dd](https://github.com/waku-org/nwaku/commit/1ff9f1dd))
- run `ANALYZE messages` regularly for better db performance ([#2986](https://github.com/waku-org/nwaku/issues/2986)) ([32f2d85d](https://github.com/waku-org/nwaku/commit/32f2d85d))
- liteprotocoltester for simulation and for fleets ([#2813](https://github.com/waku-org/nwaku/issues/2813)) ([f4fa73e9](https://github.com/waku-org/nwaku/commit/f4fa73e9))
- lock in nph version and add pre-commit hook ([#2938](https://github.com/waku-org/nwaku/issues/2938)) ([d63e3430](https://github.com/waku-org/nwaku/commit/d63e3430))
- logging received message info via onValidated observer ([#2973](https://github.com/waku-org/nwaku/issues/2973)) ([e8bce67d](https://github.com/waku-org/nwaku/commit/e8bce67d))
- deprecating protected topics in favor of protected shards ([#2983](https://github.com/waku-org/nwaku/issues/2983)) ([e51ffe07](https://github.com/waku-org/nwaku/commit/e51ffe07))
- rename NsPubsubTopic ([#2974](https://github.com/waku-org/nwaku/issues/2974)) ([67439057](https://github.com/waku-org/nwaku/commit/67439057))
- install dig ([#2975](https://github.com/waku-org/nwaku/issues/2975)) ([d24b56b9](https://github.com/waku-org/nwaku/commit/d24b56b9))
- print WakuMessageHash as hex strings ([#2969](https://github.com/waku-org/nwaku/issues/2969)) ([2fd4eb62](https://github.com/waku-org/nwaku/commit/2fd4eb62))
- updating dependencies for release 0.32.0 ([#2971](https://github.com/waku-org/nwaku/issues/2971)) ([dfd42a7c](https://github.com/waku-org/nwaku/commit/dfd42a7c))
- bump negentropy to latest master ([#2968](https://github.com/waku-org/nwaku/issues/2968)) ([b36cb075](https://github.com/waku-org/nwaku/commit/b36cb075))
- keystore: verbose error message when credential is not found ([#2943](https://github.com/waku-org/nwaku/issues/2943)) ([0f11ee14](https://github.com/waku-org/nwaku/commit/0f11ee14))
- upgrade peer exchange mounting ([#2953](https://github.com/waku-org/nwaku/issues/2953)) ([42f1bed0](https://github.com/waku-org/nwaku/commit/42f1bed0))
- replace statusim.net instances with status.im ([#2941](https://github.com/waku-org/nwaku/issues/2941)) ([f534549a](https://github.com/waku-org/nwaku/commit/f534549a))
- updating doc reference to https rpc ([#2937](https://github.com/waku-org/nwaku/issues/2937)) ([bb7bba35](https://github.com/waku-org/nwaku/commit/bb7bba35))
- Simplification of store legacy code ([#2931](https://github.com/waku-org/nwaku/issues/2931)) ([d4e8a0da](https://github.com/waku-org/nwaku/commit/d4e8a0da))
- add peer filtering by cluster for waku peer exchange ([#2932](https://github.com/waku-org/nwaku/issues/2932)) ([b4618f98](https://github.com/waku-org/nwaku/commit/b4618f98))
- return all connected peers from REST API ([#2923](https://github.com/waku-org/nwaku/issues/2923)) ([a29eca77](https://github.com/waku-org/nwaku/commit/a29eca77))
- adding lint job to the CI ([#2925](https://github.com/waku-org/nwaku/issues/2925)) ([086cc8ed](https://github.com/waku-org/nwaku/commit/086cc8ed))
- improve sonda dashboard ([#2918](https://github.com/waku-org/nwaku/issues/2918)) ([6d385cef](https://github.com/waku-org/nwaku/commit/6d385cef))
- Add new custom build and test targets to make, to enable easily building or testing single Nim modules ([#2913](https://github.com/waku-org/nwaku/issues/2913)) ([ad25f437](https://github.com/waku-org/nwaku/commit/ad25f437))

This release supports the following [libp2p protocols](https://docs.libp2p.io/concepts/protocols/):

| Protocol | Spec status | Protocol id |
| ---: | :---: | :--- |
| [`11/WAKU2-RELAY`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/11/relay.md) | `stable` | `/vac/waku/relay/2.0.0` |
| [`12/WAKU2-FILTER`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/12/filter.md) | `draft` | `/vac/waku/filter/2.0.0-beta1` <br />`/vac/waku/filter-subscribe/2.0.0-beta1` <br />`/vac/waku/filter-push/2.0.0-beta1` |
| [`13/WAKU2-STORE`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/13/store.md) | `draft` | `/vac/waku/store/2.0.0-beta4` |
| [`19/WAKU2-LIGHTPUSH`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/19/lightpush.md) | `draft` | `/vac/waku/lightpush/2.0.0-beta1` |
| [`66/WAKU2-METADATA`](https://github.com/waku-org/specs/blob/master/standards/core/metadata.md) | `raw` | `/vac/waku/metadata/1.0.0` |
| [`WAKU-SYNC`](https://github.com/waku-org/specs/blob/feat--waku-sync/standards/core/sync.md) | `draft` | `/vac/waku/sync/1.0.0` |

## v0.31.1 (2024-08-02)

### Changes

- Optimize hash queries with lookup table ([#2933](https://github.com/waku-org/nwaku/issues/2933)) ([6463885bf](https://github.com/waku-org/nwaku/commit/6463885bf))

### Bug Fixes

- Use detach finalize when needed ([#2966](https://github.com/waku-org/nwaku/pull/2966))
- Prevent the legacy store from creating new partitions, as that approach blocked the database ([#2931](https://github.com/waku-org/nwaku/pull/2931))
- lightpush: better feedback when the lightpush service node has no peers ([#2951](https://github.com/waku-org/nwaku/pull/2951))

This release supports the following [libp2p protocols](https://docs.libp2p.io/concepts/protocols/):

| Protocol | Spec status | Protocol id |
| ---: | :---: | :--- |
| [`11/WAKU2-RELAY`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/11/relay.md) | `stable` | `/vac/waku/relay/2.0.0` |
| [`12/WAKU2-FILTER`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/12/filter.md) | `draft` | `/vac/waku/filter/2.0.0-beta1` <br />`/vac/waku/filter-subscribe/2.0.0-beta1` <br />`/vac/waku/filter-push/2.0.0-beta1` |
| [`WAKU2-STORE`](https://github.com/waku-org/specs/blob/master/standards/core/store.md) | `draft` | `/vac/waku/store-query/3.0.0` |
| [`13/WAKU2-STORE`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/13/store.md) | `draft` | `/vac/waku/store/2.0.0-beta4` |
| [`19/WAKU2-LIGHTPUSH`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/19/lightpush.md) | `draft` | `/vac/waku/lightpush/2.0.0-beta1` |
| [`66/WAKU2-METADATA`](https://github.com/waku-org/specs/blob/master/standards/core/metadata.md) | `raw` | `/vac/waku/metadata/1.0.0` |

## v0.31.0 (2024-07-16)

### Notes

@ -199,7 +702,7 @@ Release highlights:

* Store V3 has been merged
* Implemented an enhanced and more robust node health check mechanism
* Introduced the Waku object to libwaku in order to set up a node and its protocols

### Features

Dockerfile (18 lines changed)
@ -1,13 +1,14 @@
# BUILD NIM APP ----------------------------------------------------------------
FROM rust:1.77.1-alpine3.18 AS nim-build
FROM rustlang/rust:nightly-alpine3.19 AS nim-build

ARG NIMFLAGS
ARG MAKE_TARGET=wakunode2
ARG NIM_COMMIT
ARG LOG_LEVEL=TRACE
ARG HEAPTRACK_BUILD=0

# Get build tools and required header files
RUN apk add --no-cache bash git build-base pcre-dev linux-headers curl jq
RUN apk add --no-cache bash git build-base openssl-dev linux-headers curl jq

WORKDIR /app
COPY . .
@ -18,6 +19,10 @@ RUN apk update && apk upgrade
# Ran separately from 'make' to avoid re-doing
RUN git submodule update --init --recursive

RUN if [ "$HEAPTRACK_BUILD" = "1" ]; then \
git apply --directory=vendor/nimbus-build-system/vendor/Nim docs/tutorial/nim.2.2.4_heaptracker_addon.patch; \
fi

# Slowest build step for the sake of caching layers
RUN make -j$(nproc) deps QUICK_AND_DIRTY_COMPILER=1 ${NIM_COMMIT}

@ -27,7 +32,7 @@ RUN make -j$(nproc) ${NIM_COMMIT} $MAKE_TARGET LOG_LEVEL=${LOG_LEVEL} NIMFLAGS="

# PRODUCTION IMAGE -------------------------------------------------------------

FROM alpine:3.18 as prod
FROM alpine:3.18 AS prod

ARG MAKE_TARGET=wakunode2

@ -41,10 +46,7 @@ LABEL version="unknown"
EXPOSE 30303 60000 8545

# Referenced in the binary
RUN apk add --no-cache libgcc pcre-dev libpq-dev

# Fix for 'Error loading shared library libpcre.so.3: No such file or directory'
RUN ln -s /usr/lib/libpcre.so /usr/lib/libpcre.so.3
RUN apk add --no-cache libgcc libpq-dev bind-tools

# Copy to separate location to accomodate different MAKE_TARGET values
COPY --from=nim-build /app/build/$MAKE_TARGET /usr/local/bin/
@ -78,7 +80,7 @@ RUN make -j$(nproc)


# Debug image
FROM prod AS debug
FROM prod AS debug-with-heaptrack

RUN apk add --no-cache gdb libunwind

Dockerfile.lightpushWithMix.compile (new file, 56 lines)
@ -0,0 +1,56 @@
# BUILD NIM APP ----------------------------------------------------------------
FROM rustlang/rust:nightly-alpine3.19 AS nim-build

ARG NIMFLAGS
ARG MAKE_TARGET=lightpushwithmix
ARG NIM_COMMIT
ARG LOG_LEVEL=TRACE

# Get build tools and required header files
RUN apk add --no-cache bash git build-base openssl-dev linux-headers curl jq

WORKDIR /app
COPY . .

# workaround for alpine issue: https://github.com/alpinelinux/docker-alpine/issues/383
RUN apk update && apk upgrade

# Ran separately from 'make' to avoid re-doing
RUN git submodule update --init --recursive

# Slowest build step for the sake of caching layers
RUN make -j$(nproc) deps QUICK_AND_DIRTY_COMPILER=1 ${NIM_COMMIT}

# Build the final node binary
RUN make -j$(nproc) ${NIM_COMMIT} $MAKE_TARGET LOG_LEVEL=${LOG_LEVEL} NIMFLAGS="${NIMFLAGS}"


# REFERENCE IMAGE as BASE for specialized PRODUCTION IMAGES----------------------------------------
FROM alpine:3.18 AS base_lpt

ARG MAKE_TARGET=lightpushwithmix

LABEL maintainer="prem@waku.org"
LABEL source="https://github.com/waku-org/nwaku"
LABEL description="Lite Push With Mix: Waku light-client"
LABEL commit="unknown"
LABEL version="unknown"

# DevP2P, LibP2P, and JSON RPC ports
EXPOSE 30303 60000 8545

# Referenced in the binary
RUN apk add --no-cache libgcc libpq-dev \
wget \
iproute2 \
python3 \
jq


COPY --from=nim-build /app/build/lightpush_publisher_mix /usr/bin/
RUN chmod +x /usr/bin/lightpush_publisher_mix

# Standalone image to be used manually and in lpt-runner -------------------------------------------
FROM base_lpt AS standalone_lpt

ENTRYPOINT ["/usr/bin/lightpush_publisher_mix"]

Makefile (273 lines changed)
@ -4,11 +4,10 @@
# - MIT license
# at your option. This file may not be copied, modified, or distributed except
# according to those terms.
BUILD_SYSTEM_DIR := vendor/nimbus-build-system
EXCLUDED_NIM_PACKAGES := vendor/nim-dnsdisc/vendor
export BUILD_SYSTEM_DIR := vendor/nimbus-build-system
export EXCLUDED_NIM_PACKAGES := vendor/nim-dnsdisc/vendor
LINK_PCRE := 0
LOG_LEVEL := TRACE

FORMAT_MSG := "\\x1B[95mFormatting:\\x1B[39m"
# we don't want an error here, so we can handle things later, in the ".DEFAULT" target
-include $(BUILD_SYSTEM_DIR)/makefiles/variables.mk

@ -28,10 +27,25 @@ GIT_SUBMODULE_UPDATE := git submodule update --init --recursive

else # "variables.mk" was included. Business as usual until the end of this file.

ifeq ($(OS),Windows_NT) # is Windows_NT on XP, 2000, 7, Vista, 10...
detected_OS := Windows
else
detected_OS := $(strip $(shell uname))
# Determine the OS
detected_OS := $(shell uname -s)
ifneq (,$(findstring MINGW,$(detected_OS)))
detected_OS := Windows
endif

ifeq ($(detected_OS),Windows)
# Update MINGW_PATH to standard MinGW location
MINGW_PATH = /mingw64
NIM_PARAMS += --passC:"-I$(MINGW_PATH)/include"
NIM_PARAMS += --passL:"-L$(MINGW_PATH)/lib"
NIM_PARAMS += --passL:"-Lvendor/nim-nat-traversal/vendor/miniupnp/miniupnpc"
NIM_PARAMS += --passL:"-Lvendor/nim-nat-traversal/vendor/libnatpmp-upstream"

LIBS = -lws2_32 -lbcrypt -liphlpapi -luserenv -lntdll -lminiupnpc -lnatpmp -lpq
NIM_PARAMS += $(foreach lib,$(LIBS),--passL:"$(lib)")

export PATH := /c/msys64/usr/bin:/c/msys64/mingw64/bin:/c/msys64/usr/lib:/c/msys64/mingw64/lib:$(PATH)

endif

##########
@ -42,7 +56,21 @@ endif
# default target, because it's the first one that doesn't start with '.'
all: | wakunode2 example2 chat2 chat2bridge libwaku

test: | testcommon testwaku
test_file := $(word 2,$(MAKECMDGOALS))
define test_name
$(shell echo '$(MAKECMDGOALS)' | cut -d' ' -f3-)
endef

test:
ifeq ($(strip $(test_file)),)
$(MAKE) testcommon
$(MAKE) testwaku
else
$(MAKE) compile-test TEST_FILE="$(test_file)" TEST_NAME="$(call test_name)"
endif
# this prevents make from erroring on unknown targets like "Index"
%:
@true
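
The `MAKECMDGOALS` trick above means any extra words after `make test` become the test file and test name, while the catch-all `%:` rule swallows them so make does not error on unknown targets. A minimal usage sketch, matching the examples in the README below:

```bash
# Run the full testcommon + testwaku suites
make test

# Compile and run a single test file (second word becomes TEST_FILE)
make test tests/wakunode2/test_all.nim

# Run one named test inside that file (remaining words become TEST_NAME)
make test tests/wakunode2/test_all.nim "node setup is successful with default configuration"
```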

waku.nims:
ln -s waku.nimble $@
@ -50,6 +78,7 @@ waku.nims:
update: | update-common
rm -rf waku.nims && \
$(MAKE) waku.nims $(HANDLE_OUTPUT)
$(MAKE) build-nph

clean:
rm -rf build
@ -69,16 +98,17 @@ NIM_PARAMS := $(NIM_PARAMS) -d:git_version=\"$(GIT_VERSION)\"
HEAPTRACKER ?= 0
HEAPTRACKER_INJECT ?= 0
ifeq ($(HEAPTRACKER), 1)
# Needed to make nimbus-build-system use the Nim's 'heaptrack_support' branch
DOCKER_NIM_COMMIT := NIM_COMMIT=heaptrack_support
TARGET := debug
# Assumes Nim's lib/system/alloc.nim is patched!
TARGET := debug-with-heaptrack

ifeq ($(HEAPTRACKER_INJECT), 1)
# the Nim compiler will load 'libheaptrack_inject.so'
HEAPTRACK_PARAMS := -d:heaptracker -d:heaptracker_inject
NIM_PARAMS := $(NIM_PARAMS) -d:heaptracker -d:heaptracker_inject
else
# the Nim compiler will load 'libheaptrack_preload.so'
HEAPTRACK_PARAMS := -d:heaptracker
NIM_PARAMS := $(NIM_PARAMS) -d:heaptracker
endif

endif
@ -89,6 +119,10 @@ endif
##################
.PHONY: deps libbacktrace

FOUNDRY_VERSION := 1.5.0
PNPM_VERSION := 10.23.0


rustup:
ifeq (, $(shell which cargo))
# Install Rustup if it's not installed
@ -97,11 +131,8 @@ ifeq (, $(shell which cargo))
curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain stable
endif

anvil: rustup
ifeq (, $(shell which anvil))
# Install Anvil if it's not installed
./scripts/install_anvil.sh
endif
rln-deps: rustup
./scripts/install_rln_tests_dependencies.sh $(FOUNDRY_VERSION) $(PNPM_VERSION)

deps: | deps-common nat-libs waku.nims

@ -109,12 +140,19 @@ deps: | deps-common nat-libs waku.nims
### nim-libbacktrace

# "-d:release" implies "--stacktrace:off" and it cannot be added to config.nims
ifeq ($(USE_LIBBACKTRACE), 0)
NIM_PARAMS := $(NIM_PARAMS) -d:debug -d:disable_libbacktrace
else
ifeq ($(DEBUG), 0)
NIM_PARAMS := $(NIM_PARAMS) -d:release
else
NIM_PARAMS := $(NIM_PARAMS) -d:debug
endif

ifeq ($(USE_LIBBACKTRACE), 0)
NIM_PARAMS := $(NIM_PARAMS) -d:disable_libbacktrace
endif

# enable experimental exit is dest feature in libp2p mix
NIM_PARAMS := $(NIM_PARAMS) -d:libp2p_mix_experimental_exit_is_dest

libbacktrace:
+ $(MAKE) -C vendor/nim-libbacktrace --no-print-directory BUILD_CXX_LIB=0

@ -136,6 +174,12 @@ endif

clean: | clean-libbacktrace

### Create nimble links (used when building with Nix)

nimbus-build-system-nimble-dir:
NIMBLE_DIR="$(CURDIR)/$(NIMBLE_DIR)" \
PWD_CMD="$(PWD)" \
$(CURDIR)/scripts/generate_nimble_links.sh

##################
## RLN ##
@ -143,9 +187,9 @@ clean: | clean-libbacktrace
.PHONY: librln

LIBRLN_BUILDDIR := $(CURDIR)/vendor/zerokit
LIBRLN_VERSION := v0.5.1
LIBRLN_VERSION := v0.9.0

ifeq ($(OS),Windows_NT)
ifeq ($(detected_OS),Windows)
LIBRLN_FILE := rln.lib
else
LIBRLN_FILE := librln_$(LIBRLN_VERSION).a
@ -165,7 +209,6 @@ clean-librln:
# Extend clean target
clean: | clean-librln


#################
## Waku Common ##
#################
@ -179,15 +222,16 @@ testcommon: | build deps
##########
## Waku ##
##########
.PHONY: testwaku wakunode2 testwakunode2 example2 chat2 chat2bridge
.PHONY: testwaku wakunode2 testwakunode2 example2 chat2 chat2bridge liteprotocoltester

# install anvil only for the testwaku target
testwaku: | build deps anvil librln
# install rln-deps only for the testwaku target
testwaku: | build deps rln-deps librln
echo -e $(BUILD_MSG) "build/$@" && \
$(ENV_SCRIPT) nim test -d:os=$(shell uname) $(NIM_PARAMS) waku.nims

wakunode2: | build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
\
$(ENV_SCRIPT) nim wakunode2 $(NIM_PARAMS) waku.nims

benchmarks: | build deps librln
@ -206,6 +250,10 @@ chat2: | build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
$(ENV_SCRIPT) nim chat2 $(NIM_PARAMS) waku.nims

chat2mix: | build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
$(ENV_SCRIPT) nim chat2mix $(NIM_PARAMS) waku.nims

rln-db-inspector: | build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
$(ENV_SCRIPT) nim rln_db_inspector $(NIM_PARAMS) waku.nims
@ -218,6 +266,18 @@ liteprotocoltester: | build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
$(ENV_SCRIPT) nim liteprotocoltester $(NIM_PARAMS) waku.nims

lightpushwithmix: | build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
$(ENV_SCRIPT) nim lightpushwithmix $(NIM_PARAMS) waku.nims

build/%: | build deps librln
echo -e $(BUILD_MSG) "build/$*" && \
$(ENV_SCRIPT) nim buildone $(NIM_PARAMS) waku.nims $*

compile-test: | build deps librln
echo -e $(BUILD_MSG) "$(TEST_FILE)" "\"$(TEST_NAME)\"" && \
$(ENV_SCRIPT) nim buildTest $(NIM_PARAMS) waku.nims $(TEST_FILE) && \
$(ENV_SCRIPT) nim execTest $(NIM_PARAMS) waku.nims $(TEST_FILE) "\"$(TEST_NAME)\""; \

################
## Waku tools ##
@ -234,6 +294,45 @@ networkmonitor: | build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
$(ENV_SCRIPT) nim networkmonitor $(NIM_PARAMS) waku.nims

############
## Format ##
############
.PHONY: build-nph install-nph clean-nph print-nph-path

# Default location for nph binary shall be next to nim binary to make it available on the path.
NPH:=$(shell dirname $(NIM_BINARY))/nph

build-nph: | build deps
ifeq ("$(wildcard $(NPH))","")
$(ENV_SCRIPT) nim c --skipParentCfg:on vendor/nph/src/nph.nim && \
mv vendor/nph/src/nph $(shell dirname $(NPH))
echo "nph utility is available at " $(NPH)
else
echo "nph utility already exists at " $(NPH)
endif

GIT_PRE_COMMIT_HOOK := .git/hooks/pre-commit

install-nph: build-nph
ifeq ("$(wildcard $(GIT_PRE_COMMIT_HOOK))","")
cp ./scripts/git_pre_commit_format.sh $(GIT_PRE_COMMIT_HOOK)
else
echo "$(GIT_PRE_COMMIT_HOOK) already present, will NOT override"
exit 1
endif

nph/%: | build-nph
echo -e $(FORMAT_MSG) "nph/$*" && \
$(NPH) $*

clean-nph:
rm -f $(NPH)

# To avoid hardcoding nph binary location in several places
print-nph-path:
echo "$(NPH)"

clean: | clean-nph

###################
## Documentation ##
@ -268,31 +367,90 @@ docker-image:
--build-arg="NIMFLAGS=$(DOCKER_IMAGE_NIMFLAGS)" \
--build-arg="NIM_COMMIT=$(DOCKER_NIM_COMMIT)" \
--build-arg="LOG_LEVEL=$(LOG_LEVEL)" \
--build-arg="HEAPTRACK_BUILD=$(HEAPTRACKER)" \
--label="commit=$(shell git rev-parse HEAD)" \
--label="version=$(GIT_VERSION)" \
--target $(TARGET) \
--tag $(DOCKER_IMAGE_NAME) .

docker-quick-image: MAKE_TARGET ?= wakunode2
docker-quick-image: DOCKER_IMAGE_TAG ?= $(MAKE_TARGET)-$(GIT_VERSION)
docker-quick-image: DOCKER_IMAGE_NAME ?= wakuorg/nwaku:$(DOCKER_IMAGE_TAG)
docker-quick-image: NIM_PARAMS := $(NIM_PARAMS) -d:chronicles_colors:none -d:insecure -d:postgres --passL:$(LIBRLN_FILE) --passL:-lm
docker-quick-image: | build deps librln wakunode2
docker build \
--build-arg="MAKE_TARGET=$(MAKE_TARGET)" \
--tag $(DOCKER_IMAGE_NAME) \
--target $(TARGET) \
--file docker/binaries/Dockerfile.bn.local \
.

docker-push:
docker push $(DOCKER_IMAGE_NAME)

####################################
## Container lite-protocol-tester ##
####################################
# -d:insecure - Necessary to enable Prometheus HTTP endpoint for metrics
# -d:chronicles_colors:none - Necessary to disable colors in logs for Docker
DOCKER_LPT_NIMFLAGS ?= -d:chronicles_colors:none -d:insecure

# build a docker image for the fleet
docker-liteprotocoltester: DOCKER_LPT_TAG ?= latest
docker-liteprotocoltester: DOCKER_LPT_NAME ?= wakuorg/liteprotocoltester:$(DOCKER_LPT_TAG)
# --no-cache
docker-liteprotocoltester:
docker build \
--build-arg="MAKE_TARGET=liteprotocoltester" \
--build-arg="NIMFLAGS=$(DOCKER_LPT_NIMFLAGS)" \
--build-arg="NIM_COMMIT=$(DOCKER_NIM_COMMIT)" \
--build-arg="LOG_LEVEL=TRACE" \
--label="commit=$(shell git rev-parse HEAD)" \
--label="version=$(GIT_VERSION)" \
--target $(if $(filter deploy,$(DOCKER_LPT_TAG)),deployment_lpt,standalone_lpt) \
--tag $(DOCKER_LPT_NAME) \
--file apps/liteprotocoltester/Dockerfile.liteprotocoltester.compile \
.

docker-quick-liteprotocoltester: DOCKER_LPT_TAG ?= latest
docker-quick-liteprotocoltester: DOCKER_LPT_NAME ?= wakuorg/liteprotocoltester:$(DOCKER_LPT_TAG)
docker-quick-liteprotocoltester: | liteprotocoltester
docker build \
--tag $(DOCKER_LPT_NAME) \
--file apps/liteprotocoltester/Dockerfile.liteprotocoltester \
.

docker-liteprotocoltester-push:
docker push $(DOCKER_LPT_NAME)
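
Going by the target definitions above, the image build could be driven as in this sketch; a `DOCKER_LPT_TAG` of `deploy` is the value the `$(filter ...)` expression checks for, and passing `DOCKER_LPT_NAME` to the push target is needed because that target defines no default of its own:

```bash
# Build the standalone liteprotocoltester image (default standalone_lpt stage)
make docker-liteprotocoltester

# Build the deployment flavour: tag "deploy" selects the deployment_lpt stage
make docker-liteprotocoltester DOCKER_LPT_TAG=deploy

# Push the resulting image (name must be supplied explicitly here)
make docker-liteprotocoltester-push DOCKER_LPT_NAME=wakuorg/liteprotocoltester:deploy
```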

################
## C Bindings ##
################
.PHONY: cbindings cwaku_example libwaku

STATIC ?= false
STATIC ?= 0
BUILD_COMMAND ?= libwakuDynamic

ifeq ($(detected_OS),Windows)
LIB_EXT_DYNAMIC = dll
LIB_EXT_STATIC = lib
else ifeq ($(detected_OS),Darwin)
LIB_EXT_DYNAMIC = dylib
LIB_EXT_STATIC = a
else ifeq ($(detected_OS),Linux)
LIB_EXT_DYNAMIC = so
LIB_EXT_STATIC = a
endif

LIB_EXT := $(LIB_EXT_DYNAMIC)
ifeq ($(STATIC), 1)
LIB_EXT = $(LIB_EXT_STATIC)
BUILD_COMMAND = libwakuStatic
endif

libwaku: | build deps librln
rm -f build/libwaku*
ifeq ($(STATIC), true)
echo -e $(BUILD_MSG) "build/$@.a" && \
$(ENV_SCRIPT) nim libwakuStatic $(NIM_PARAMS) waku.nims
else
echo -e $(BUILD_MSG) "build/$@.so" && \
$(ENV_SCRIPT) nim libwakuDynamic $(NIM_PARAMS) waku.nims
endif
echo -e $(BUILD_MSG) "build/$@.$(LIB_EXT)" && $(ENV_SCRIPT) nim $(BUILD_COMMAND) $(NIM_PARAMS) waku.nims $@.$(LIB_EXT)
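
A short usage sketch implied by the updated variable block above: `STATIC=1` switches both the build command and the produced library extension, so on Linux the output would be `build/libwaku.a` instead of `build/libwaku.so`:

```bash
# Build the dynamic library (default, BUILD_COMMAND=libwakuDynamic)
make libwaku

# Build the static library instead (BUILD_COMMAND=libwakuStatic)
make libwaku STATIC=1
```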

#####################
## Mobile Bindings ##
#####################
@ -359,6 +517,51 @@ libwaku-android:
# It's likely this architecture is not used so we might just not support it.
# $(MAKE) libwaku-android-arm

#################
## iOS Bindings #
#################
.PHONY: libwaku-ios-precheck \
libwaku-ios-device \
libwaku-ios-simulator \
libwaku-ios

IOS_DEPLOYMENT_TARGET ?= 18.0

# Get SDK paths dynamically using xcrun
define get_ios_sdk_path
$(shell xcrun --sdk $(1) --show-sdk-path 2>/dev/null)
endef

libwaku-ios-precheck:
ifeq ($(detected_OS),Darwin)
@command -v xcrun >/dev/null 2>&1 || { echo "Error: Xcode command line tools not installed"; exit 1; }
else
$(error iOS builds are only supported on macOS)
endif

# Build for iOS architecture
build-libwaku-for-ios-arch:
IOS_SDK=$(IOS_SDK) IOS_ARCH=$(IOS_ARCH) IOS_SDK_PATH=$(IOS_SDK_PATH) $(ENV_SCRIPT) nim libWakuIOS $(NIM_PARAMS) waku.nims

# iOS device (arm64)
libwaku-ios-device: IOS_ARCH=arm64
libwaku-ios-device: IOS_SDK=iphoneos
libwaku-ios-device: IOS_SDK_PATH=$(call get_ios_sdk_path,iphoneos)
libwaku-ios-device: | libwaku-ios-precheck build deps
$(MAKE) build-libwaku-for-ios-arch IOS_ARCH=$(IOS_ARCH) IOS_SDK=$(IOS_SDK) IOS_SDK_PATH=$(IOS_SDK_PATH)

# iOS simulator (arm64 - Apple Silicon Macs)
libwaku-ios-simulator: IOS_ARCH=arm64
libwaku-ios-simulator: IOS_SDK=iphonesimulator
libwaku-ios-simulator: IOS_SDK_PATH=$(call get_ios_sdk_path,iphonesimulator)
libwaku-ios-simulator: | libwaku-ios-precheck build deps
$(MAKE) build-libwaku-for-ios-arch IOS_ARCH=$(IOS_ARCH) IOS_SDK=$(IOS_SDK) IOS_SDK_PATH=$(IOS_SDK_PATH)

# Build all iOS targets
libwaku-ios:
$(MAKE) libwaku-ios-device
$(MAKE) libwaku-ios-simulator
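
Usage follows directly from the targets above; `IOS_DEPLOYMENT_TARGET` is the only tunable shown, and overriding it on the command line is an assumption based on its `?=` default:

```bash
# Build for a physical device and for the simulator in one go
make libwaku-ios

# Build only the simulator slice, overriding the default deployment target (18.0)
make libwaku-ios-simulator IOS_DEPLOYMENT_TARGET=17.0
```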

cwaku_example: | build libwaku
echo -e $(BUILD_MSG) "build/$@" && \
cc -o "build/$@" \

README.md (116 lines changed)
@ -1,24 +1,35 @@
# Nwaku
# Logos Messaging Nim

## Introduction

The nwaku repository implements Waku, and provides tools related to it.
The logos-messaging-nim, a.k.a. lmn or nwaku, repository implements a set of libp2p protocols aimed at enabling private communications.

- A Nim implementation of the [Waku (v2) protocol](https://specs.vac.dev/specs/waku/v2/waku-v2.html).
- CLI application `wakunode2` that allows you to run a Waku node.
- Examples of Waku usage.
- Nim implementation of [these specs](https://github.com/vacp2p/rfc-index/tree/main/waku).
- C library that exposes the implemented protocols.
- CLI application that allows you to run an lmn node.
- Examples.
- Various tests of above.

For more details see the [source code](waku/v2/README.md)
For more details see the [source code](waku/README.md)

## How to Build & Run
## How to Build & Run (Linux, macOS & WSL)

These instructions are generic. For more detailed instructions, see the Waku source code above.
These instructions are generic. For more detailed instructions, see the source code above.

### Prerequisites

The standard developer tools, including a C compiler, GNU Make, Bash, and Git. More information on these installations can be found [here](https://docs.waku.org/guides/nwaku/build-source#install-dependencies).

> In some distributions (Fedora Linux, for example), you may need to install the `which` utility separately. The Nimbus build system relies on it.

You'll also need an installation of Rust and its toolchain (specifically `rustc` and `cargo`).
The easiest way to install these is using `rustup`:

```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```

### Wakunode

```bash
@ -48,12 +59,52 @@ For more on how to run `wakunode2`, refer to:
##### WSL
If you encounter difficulties building the project on WSL, consider placing the project within WSL's filesystem, avoiding the `/mnt/` directory.

### How to Build & Run ( Windows )

### Windows Build Instructions

#### 1. Install Required Tools
- **Git Bash Terminal**: Download and install from https://git-scm.com/download/win
- **MSYS2**:
a. Download installer from https://www.msys2.org
b. Install at "C:\" (default location). Remove/rename the msys folder in case of a previous installation.
c. Use the mingw64 terminal from the msys64 directory for package installation.

#### 2. Install Dependencies
Open the MSYS2 mingw64 terminal and run the following one by one:
```bash
pacman -Syu --noconfirm
pacman -S --noconfirm --needed mingw-w64-x86_64-toolchain
pacman -S --noconfirm --needed base-devel make cmake upx
pacman -S --noconfirm --needed mingw-w64-x86_64-rust
pacman -S --noconfirm --needed mingw-w64-x86_64-postgresql
pacman -S --noconfirm --needed mingw-w64-x86_64-gcc
pacman -S --noconfirm --needed mingw-w64-x86_64-gcc-libs
pacman -S --noconfirm --needed mingw-w64-x86_64-libwinpthread-git
pacman -S --noconfirm --needed mingw-w64-x86_64-zlib
pacman -S --noconfirm --needed mingw-w64-x86_64-openssl
pacman -S --noconfirm --needed mingw-w64-x86_64-python
```

#### 3. Build Wakunode
- Open Git Bash as administrator
- Clone nwaku and `cd` into the `nwaku` directory
- Execute: `./scripts/build_windows.sh`

#### 4. Troubleshooting
If `wakunode2.exe` isn't generated:
- **Missing Dependencies**: Verify with:
`which make cmake gcc g++ rustc cargo python3 upx`
If missing, revisit Step 2 or ensure MSYS2 is installed at `C:\`
- **Installation Conflicts**: Remove existing MinGW/MSYS2/Git Bash installations and perform a fresh install

### Developing

#### Nim Runtime
This repository is bundled with a Nim runtime that includes the necessary dependencies for the project.

Before you can utilise the runtime you'll need to build the project, as detailed in a previous section. This will generate a `vendor` directory containing various dependencies, including the `nimbus-build-system` which has the bundled nim runtime.
Before you can utilize the runtime you'll need to build the project, as detailed in a previous section.
This will generate a `vendor` directory containing various dependencies, including the `nimbus-build-system` which has the bundled nim runtime.

After successfully building the project, you may bring the bundled runtime into scope by running:
```bash
@ -61,11 +112,56 @@ source env.sh
```
If everything went well, you should see your prompt suffixed with `[Nimbus env]$`. Now you can run `nim` commands as usual.

### Waku Protocol Test Suite
### Test Suite

```bash
# Run all the Waku tests
make test

# Run a specific test file
make test <test_file_path>
# e.g. : make test tests/wakunode2/test_all.nim

# Run a specific test name from a specific test file
make test <test_file_path> <test_name>
# e.g. : make test tests/wakunode2/test_all.nim "node setup is successful with default configuration"
```

### Building single test files

During development it is helpful to build and run a single test file.
To support this, make has specific targets:

- `build/<relative path to your test file.nim>`
- `test/<relative path to your test file.nim>`

The binary will be created as `<path to your test file.nim>.bin` under the `build` directory.

```bash
# Build and run your test file separately
make test/tests/common/test_enr_builder.nim
```
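
The `build/` variant mentioned in the target list compiles the test without executing it; a sketch, assuming the same example file as above:

```bash
# Build only; per the note above, the binary lands at
# build/tests/common/test_enr_builder.nim.bin
make build/tests/common/test_enr_builder.nim
```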

### Testing against `js-waku`
Refer to [js-waku repo](https://github.com/waku-org/js-waku/tree/master/packages/tests) for instructions.

## Formatting

Nim files are expected to be formatted using the [`nph`](https://github.com/arnetheduck/nph) version present in `vendor/nph`.

You can easily format a file with the `make nph/<relative path to nim file>` command.
For example:

```
make nph/waku/waku_core.nim
```

A convenient git hook is provided to automatically format files at commit time.
Run the following command to install it:

```shell
make install-nph
```

### Examples

@ -1,49 +1,73 @@
import
math,
std/sequtils,
results,
options,
std/[strutils, times, sequtils, osproc], math, results, options, testutils/unittests

import
waku/[
waku_rln_relay/protocol_types,
waku_rln_relay/rln,
waku_rln_relay,
waku_rln_relay/conversion_utils,
waku_rln_relay/group_manager/static/group_manager,
]
waku_rln_relay/group_manager/on_chain/group_manager,
],
tests/waku_rln_relay/utils_onchain

import std/[times, os]
proc benchmark(
manager: OnChainGroupManager, registerCount: int, messageLimit: int
): Future[string] {.async, gcsafe.} =
# Register a new member so that we can later generate proofs
let idCredentials = generateCredentials(registerCount)

proc main(): Future[string] {.async, gcsafe.} =
let rlnIns = createRLNInstance(20).get()
let credentials = toSeq(0 .. 1000).mapIt(membershipKeyGen(rlnIns).get())
var start_time = getTime()
for i in 0 .. registerCount - 1:
try:
await manager.register(idCredentials[i], UserMessageLimit(messageLimit + 1))
except Exception, CatchableError:
assert false, "exception raised: " & getCurrentExceptionMsg()

let manager = StaticGroupManager(
rlnInstance: rlnIns,
groupSize: 1000,
membershipIndex: some(MembershipIndex(900)),
groupKeys: credentials,
)
info "registration finished",
iter = i, elapsed_ms = (getTime() - start_time).inMilliseconds

await manager.init()
discard await manager.updateRoots()
manager.merkleProofCache = (await manager.fetchMerkleProofElements()).valueOr:
error "Failed to fetch Merkle proof", error = error
quit(QuitFailure)

let epoch = default(Epoch)
info "epoch in bytes", epochHex = epoch.inHex()
let data: seq[byte] = newSeq[byte](1024)

var proofGenTimes: seq[times.Duration] = @[]
var proofVerTimes: seq[times.Duration] = @[]
for i in 0 .. 50:
var time = getTime()
let proof = manager.generateProof(data, default(Epoch)).get()
proofGenTimes.add(getTime() - time)

time = getTime()
let res = manager.verifyProof(data, proof).get()
proofVerTimes.add(getTime() - time)
start_time = getTime()
for i in 1 .. messageLimit:
var generate_time = getTime()
let proof = manager.generateProof(data, epoch, MessageId(i.uint8)).valueOr:
raiseAssert $error
proofGenTimes.add(getTime() - generate_time)

let verify_time = getTime()
let ok = manager.verifyProof(data, proof).valueOr:
raiseAssert $error
proofVerTimes.add(getTime() - verify_time)
info "iteration finished",
iter = i, elapsed_ms = (getTime() - start_time).inMilliseconds

echo "Proof generation times: ", sum(proofGenTimes) div len(proofGenTimes)
echo "Proof verification times: ", sum(proofVerTimes) div len(proofVerTimes)

proc main() =
# Start a local Ethereum JSON-RPC (Anvil) so that the group-manager setup can connect.
let anvilProc = runAnvil()
defer:
stopAnvil(anvilProc)

# Set up an On-chain group manager (includes contract deployment)
let manager = waitFor setupOnchainGroupManager()
(waitFor manager.init()).isOkOr:
raiseAssert $error

discard waitFor benchmark(manager, 200, 20)

when isMainModule:
try:
waitFor(main())
except CatchableError as e:
raise e
main()

@ -6,12 +6,11 @@ when not (compileOption("threads")):

{.push raises: [].}

import std/[strformat, strutils, times, options, random]
import std/[strformat, strutils, times, options, random, sequtils]
import
confutils,
chronicles,
chronos,
stew/shims/net as stewNet,
eth/keys,
bearssl,
stew/[byteutils, results],
@ -33,8 +32,8 @@ import
import
waku/[
waku_core,
waku_lightpush/common,
waku_lightpush/rpc,
waku_lightpush_legacy/common,
waku_lightpush_legacy/rpc,
waku_enr,
discovery/waku_dnsdisc,
waku_store_legacy,
@ -133,25 +132,14 @@ proc showChatPrompt(c: Chat) =
except IOError:
discard

proc getChatLine(c: Chat, msg: WakuMessage): Result[string, string] =
proc getChatLine(payload: seq[byte]): string =
# No payload encoding/encryption from Waku
let
pb = Chat2Message.init(msg.payload)
chatLine =
if pb.isOk:
pb[].toString()
else:
string.fromBytes(msg.payload)
return ok(chatline)
let pb = Chat2Message.init(payload).valueOr:
return string.fromBytes(payload)
return $pb

proc printReceivedMessage(c: Chat, msg: WakuMessage) =
let
pb = Chat2Message.init(msg.payload)
chatLine =
if pb.isOk:
pb[].toString()
else:
string.fromBytes(msg.payload)
let chatLine = getChatLine(msg.payload)
try:
echo &"{chatLine}"
except ValueError:
@ -174,18 +162,16 @@ proc startMetricsServer(
): Result[MetricsHttpServerRef, string] =
info "Starting metrics HTTP server", serverIp = $serverIp, serverPort = $serverPort

let metricsServerRes = MetricsHttpServerRef.new($serverIp, serverPort)
if metricsServerRes.isErr():
return err("metrics HTTP server start failed: " & $metricsServerRes.error)
let server = MetricsHttpServerRef.new($serverIp, serverPort).valueOr:
return err("metrics HTTP server start failed: " & $error)

let server = metricsServerRes.value
try:
waitFor server.start()
except CatchableError:
return err("metrics HTTP server start failed: " & getCurrentExceptionMsg())

info "Metrics HTTP server started", serverIp = $serverIp, serverPort = $serverPort
ok(metricsServerRes.value)
ok(server)

proc publish(c: Chat, line: string) =
# First create a Chat2Message protobuf with this line of text
@ -203,19 +189,17 @@ proc publish(c: Chat, line: string) =
version: 0,
timestamp: getNanosecondTime(time),
)

if not isNil(c.node.wakuRlnRelay):
# for future version when we support more than one rln protected content topic,
# we should check the message content topic as well
let appendRes = c.node.wakuRlnRelay.appendRLNProof(message, float64(time))
if appendRes.isErr():
debug "could not append rate limit proof to the message"
if c.node.wakuRlnRelay.appendRLNProof(message, float64(time)).isErr():
info "could not append rate limit proof to the message"
else:
debug "rate limit proof is appended to the message"
let decodeRes = RateLimitProof.init(message.proof)
if decodeRes.isErr():
info "rate limit proof is appended to the message"
let proof = RateLimitProof.init(message.proof).valueOr:
error "could not decode the RLN proof"

let proof = decodeRes.get()
return
# TODO move it to log after dogfooding
let msgEpoch = fromEpoch(proof.epoch)
if fromEpoch(c.node.wakuRlnRelay.lastEpoch) == msgEpoch:
@ -227,9 +211,9 @@ proc publish(c: Chat, line: string) =
c.node.wakuRlnRelay.lastEpoch = proof.epoch

try:
if not c.node.wakuLightPush.isNil():
if not c.node.wakuLegacyLightPush.isNil():
# Attempt lightpush
(waitFor c.node.lightpushPublish(some(DefaultPubsubTopic), message)).isOkOr:
(waitFor c.node.legacyLightpushPublish(some(DefaultPubsubTopic), message)).isOkOr:
error "failed to publish lightpush message", error = error
else:
(waitFor c.node.publish(some(DefaultPubsubTopic), message)).isOkOr:
@ -333,27 +317,19 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
if conf.logLevel != LogLevel.NONE:
setLogLevel(conf.logLevel)

let natRes = setupNat(
let (extIp, extTcpPort, extUdpPort) = setupNat(
conf.nat,
clientId,
Port(uint16(conf.tcpPort) + conf.portsShift),
Port(uint16(conf.udpPort) + conf.portsShift),
)

if natRes.isErr():
raise newException(ValueError, "setupNat error " & natRes.error)

let (extIp, extTcpPort, extUdpPort) = natRes.get()
).valueOr:
raise newException(ValueError, "setupNat error " & error)

var enrBuilder = EnrBuilder.init(nodeKey)

let recordRes = enrBuilder.build()
let record =
if recordRes.isErr():
error "failed to create enr record", error = recordRes.error
quit(QuitFailure)
else:
recordRes.get()
let record = enrBuilder.build().valueOr:
error "failed to create enr record", error = error
quit(QuitFailure)

let node = block:
var builder = WakuNodeBuilder.init()
@ -379,7 +355,11 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
raise newException(ConfigurationError, "rln-relay-cred-path MUST be passed")

if conf.relay:
await node.mountRelay(conf.topics.split(" "))
let shards =
conf.shards.mapIt(RelayShard(clusterId: conf.clusterId, shardId: uint16(it)))
(await node.mountRelay()).isOkOr:
echo "failed to mount relay: " & error
return

await node.mountLibp2pPing()

@ -416,16 +396,16 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
dnsDiscoveryUrl = some(
"enrtree://AIRVQ5DDA4FFWLRBCHJWUWOO6X6S4ZTZ5B667LQ6AJU6PEYDLRD5O@sandbox.waku.nodes.status.im"
)
elif conf.dnsDiscovery and conf.dnsDiscoveryUrl != "":
elif conf.dnsDiscoveryUrl != "":
# No pre-selected fleet. Discover nodes via DNS using user config
debug "Discovering nodes using Waku DNS discovery", url = conf.dnsDiscoveryUrl
info "Discovering nodes using Waku DNS discovery", url = conf.dnsDiscoveryUrl
dnsDiscoveryUrl = some(conf.dnsDiscoveryUrl)

var discoveredNodes: seq[RemotePeerInfo]

if dnsDiscoveryUrl.isSome:
var nameServers: seq[TransportAddress]
for ip in conf.dnsDiscoveryNameServers:
for ip in conf.dnsAddrsNameServers:
nameServers.add(initTAddress(ip, Port(53))) # Assume all servers use port 53

let dnsResolver = DnsResolver.new(nameServers)
@ -435,16 +415,18 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
let resolved = await dnsResolver.resolveTxt(domain)
return resolved[0] # Use only first answer

var wakuDnsDiscovery = WakuDnsDiscovery.init(dnsDiscoveryUrl.get(), resolver)
let wakuDnsDiscovery = WakuDnsDiscovery.init(dnsDiscoveryUrl.get(), resolver)
if wakuDnsDiscovery.isOk:
let discoveredPeers = wakuDnsDiscovery.get().findPeers()
let discoveredPeers = await wakuDnsDiscovery.get().findPeers()
if discoveredPeers.isOk:
info "Connecting to discovered peers"
discoveredNodes = discoveredPeers.get()
echo "Discovered and connecting to " & $discoveredNodes
waitFor chat.node.connectToNodes(discoveredNodes)
else:
warn "Failed to find peers via DNS discovery", error = discoveredPeers.error
else:
warn "Failed to init Waku DNS discovery"
warn "Failed to init Waku DNS discovery", error = wakuDnsDiscovery.error

let peerInfo = node.switch.peerInfo
let listenStr = $peerInfo.addrs[0] & "/p2p/" & $peerInfo.peerId
@ -480,36 +462,37 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
else:
newSeq[byte](0)

let
pb = Chat2Message.init(payload)
chatLine =
if pb.isOk:
pb[].toString()
else:
string.fromBytes(payload)
let chatLine = getChatLine(payload)
echo &"{chatLine}"
info "Hit store handler"

let queryRes = await node.query(
StoreQueryRequest(contentTopics: @[chat.contentTopic]), storenode.get()
)
if queryRes.isOk():
storeHandler(queryRes.value)
block storeQueryBlock:
let queryRes = (
await node.query(
StoreQueryRequest(contentTopics: @[chat.contentTopic]), storenode.get()
)
).valueOr:
error "Store query failed", error = error
break storeQueryBlock
storeHandler(queryRes)

# NOTE Must be mounted after relay
if conf.lightpushnode != "":
let peerInfo = parsePeerInfo(conf.lightpushnode)
if peerInfo.isOk():
await mountLightPush(node)
node.mountLightPushClient()
(await node.mountLegacyLightPush()).isOkOr:
error "failed to mount legacy lightpush", error = error
quit(QuitFailure)
node.mountLegacyLightPushClient()
node.peerManager.addServicePeer(peerInfo.value, WakuLightpushCodec)
else:
error "LightPush not mounted. Couldn't parse conf.lightpushnode",
error = peerInfo.error

if conf.filternode != "":
let peerInfo = parsePeerInfo(conf.filternode)
if peerInfo.isOk():
if (let peerInfo = parsePeerInfo(conf.filternode); peerInfo.isErr()):
error "Filter not mounted. Couldn't parse conf.filternode", error = peerInfo.error
else:
await node.mountFilter()
await node.mountFilterClient()

@ -520,8 +503,6 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
chat.printReceivedMessage(msg)

# TODO: Here to support FilterV2 relevant subscription.
else:
error "Filter not mounted. Couldn't parse conf.filternode", error = peerInfo.error

# Subscribe to a topic, if relay is mounted
if conf.relay:
@ -532,33 +513,35 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
chat.printReceivedMessage(msg)

node.subscribe(
(kind: PubsubSub, topic: DefaultPubsubTopic), some(WakuRelayHandler(handler))
)
(kind: PubsubSub, topic: DefaultPubsubTopic), WakuRelayHandler(handler)
).isOkOr:
error "failed to subscribe to pubsub topic",
topic = DefaultPubsubTopic, error = error

if conf.rlnRelay:
info "WakuRLNRelay is enabled"

proc spamHandler(wakuMessage: WakuMessage) {.gcsafe, closure.} =
debug "spam handler is called"
let chatLineResult = chat.getChatLine(wakuMessage)
if chatLineResult.isOk():
echo "A spam message is found and discarded : ", chatLineResult.value
else:
echo "A spam message is found and discarded"
info "spam handler is called"
let chatLineResult = getChatLine(wakuMessage.payload)
echo "spam message is found and discarded : " & chatLineResult
chat.prompt = false
showChatPrompt(chat)

echo "rln-relay preparation is in progress..."

let rlnConf = WakuRlnConfig(
rlnRelayDynamic: conf.rlnRelayDynamic,
rlnRelayCredIndex: conf.rlnRelayCredIndex,
rlnRelayEthContractAddress: conf.rlnRelayEthContractAddress,
rlnRelayEthClientAddress: string(conf.rlnRelayethClientAddress),
rlnRelayCredPath: conf.rlnRelayCredPath,
rlnRelayCredPassword: conf.rlnRelayCredPassword,
rlnRelayUserMessageLimit: conf.rlnRelayUserMessageLimit,
rlnEpochSizeSec: conf.rlnEpochSizeSec,
dynamic: conf.rlnRelayDynamic,
credIndex: conf.rlnRelayCredIndex,
chainId: UInt256.fromBytesBE(conf.rlnRelayChainId.toBytesBE()),
ethClientUrls: conf.ethClientUrls.mapIt(string(it)),
creds: some(
RlnRelayCreds(
path: conf.rlnRelayCredPath, password: conf.rlnRelayCredPassword
)
),
userMessageLimit: conf.rlnRelayUserMessageLimit,
epochSizeSec: conf.rlnEpochSizeSec,
)

waitFor node.mountRlnRelay(rlnConf, spamHandler = some(spamHandler))
@ -581,9 +564,6 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =

await chat.readWriteLoop()

if conf.keepAlive:
node.startKeepalive()

runForever()

proc main(rng: ref HmacDrbgContext) {.async.} =
@ -18,7 +18,8 @@ type
|
||||
prod
|
||||
test
|
||||
|
||||
EthRpcUrl = distinct string
|
||||
EthRpcUrl* = distinct string
|
||||
|
||||
Chat2Conf* = object ## General node config
|
||||
logLevel* {.
|
||||
desc: "Sets the log level.", defaultValue: LogLevel.INFO, name: "log-level"
|
||||
@ -83,11 +84,19 @@ type
|
||||
name: "keep-alive"
|
||||
.}: bool
|
||||
|
||||
topics* {.
|
||||
desc: "Default topics to subscribe to (space separated list).",
|
||||
defaultValue: "/waku/2/rs/0/0",
|
||||
name: "topics"
|
||||
.}: string
|
||||
clusterId* {.
|
||||
desc:
|
||||
"Cluster id that the node is running in. Node in a different cluster id is disconnected.",
|
||||
defaultValue: 0,
|
||||
name: "cluster-id"
|
||||
.}: uint16
|
||||
|
||||
shards* {.
|
||||
desc:
|
||||
"Shards index to subscribe to [0..NUM_SHARDS_IN_NETWORK-1]. Argument may be repeated.",
|
||||
defaultValue: @[uint16(0)],
|
||||
name: "shard"
|
||||
.}: seq[uint16]
|
||||
|
||||
## Store config
|
||||
store* {.
|
||||
@ -149,7 +158,8 @@ type
|
||||
|
||||
## DNS discovery config
|
||||
dnsDiscovery* {.
|
||||
desc: "Enable discovering nodes via DNS",
|
||||
desc:
|
||||
"Deprecated, please set dns-discovery-url instead. Enable discovering nodes via DNS",
|
||||
defaultValue: false,
|
||||
name: "dns-discovery"
|
||||
.}: bool
|
||||
@ -160,10 +170,11 @@ type
|
||||
name: "dns-discovery-url"
|
||||
.}: string
|
||||
|
||||
dnsDiscoveryNameServers* {.
|
||||
desc: "DNS name server IPs to query. Argument may be repeated.",
|
||||
dnsAddrsNameServers* {.
|
||||
desc:
|
||||
"DNS name server IPs to query for DNS multiaddrs resolution. Argument may be repeated.",
|
||||
defaultValue: @[parseIpAddress("1.1.1.1"), parseIpAddress("1.0.0.1")],
|
||||
name: "dns-discovery-name-server"
|
||||
name: "dns-addrs-name-server"
|
||||
.}: seq[IpAddress]
|
||||
|
||||
## Chat2 configuration
|
||||
@ -204,6 +215,13 @@ type
|
||||
name: "rln-relay"
|
||||
.}: bool
|
||||
|
||||
rlnRelayChainId* {.
|
||||
desc:
|
||||
"Chain ID of the provided contract (optional, will fetch from RPC provider if not used)",
|
||||
defaultValue: 0,
|
||||
name: "rln-relay-chain-id"
|
||||
.}: uint
|
||||
|
||||
rlnRelayCredPath* {.
|
||||
desc: "The path for peristing rln-relay credential",
|
||||
defaultValue: "",
|
||||
@@ -232,11 +250,12 @@ type
      name: "rln-relay-id-commitment-key"
    .}: string

    rlnRelayEthClientAddress* {.
      desc: "HTTP address of an Ethereum testnet client e.g., http://localhost:8540/",
      defaultValue: "http://localhost:8540/",
    ethClientUrls* {.
      desc:
        "HTTP address of an Ethereum testnet client e.g., http://localhost:8540/. Argument may be repeated.",
      defaultValue: newSeq[EthRpcUrl](0),
      name: "rln-relay-eth-client-address"
    .}: EthRpcUrl
    .}: seq[EthRpcUrl]

    rlnRelayEthContractAddress* {.
      desc: "Address of membership contract on an Ethereum testnet",

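The hunk above turns `rln-relay-eth-client-address` into a repeatable flag (`seq[EthRpcUrl]` instead of a single `EthRpcUrl`). A hypothetical invocation passing several RPC endpoints might then look like this (binary path and URLs are placeholders, not taken from the diff):

```bash
# illustrative only: the flag may now be repeated to supply fallback endpoints
./build/chat2 --rln-relay \
  --rln-relay-eth-client-address=http://localhost:8540/ \
  --rln-relay-eth-client-address=https://eth-rpc.example.org/
```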
@@ -1,7 +1,7 @@
{.push raises: [].}

import
  std/[tables, times, strutils, hashes, sequtils],
  std/[tables, times, strutils, hashes, sequtils, json],
  chronos,
  confutils,
  chronicles,
@@ -11,7 +11,6 @@ import
  metrics/chronos_httpserver,
  stew/byteutils,
  eth/net/nat,
  json_rpc/rpcserver,
  # Matterbridge client imports
  # Waku v2 imports
  libp2p/crypto/crypto,
@@ -24,6 +23,7 @@ import
    waku_store,
    factory/builder,
    common/utils/matterbridge_client,
    common/rate_limit/setting,
  ],
  # Chat 2 imports
  ../chat2/chat2,
@@ -59,7 +59,7 @@ type
  MbMessageHandler = proc(jsonNode: JsonNode) {.async.}

###################
# Helper funtions #
# Helper functions #
###################

proc containsOrAdd(sequence: var seq[Hash], hash: Hash): bool =
@ -126,25 +126,20 @@ proc toMatterbridge(
|
||||
|
||||
assert chat2Msg.isOk
|
||||
|
||||
let postRes = cmb.mbClient.postMessage(
|
||||
text = string.fromBytes(chat2Msg[].payload), username = chat2Msg[].nick
|
||||
)
|
||||
|
||||
if postRes.isErr() or (postRes[] == false):
|
||||
if not cmb.mbClient
|
||||
.postMessage(text = string.fromBytes(chat2Msg[].payload), username = chat2Msg[].nick)
|
||||
.containsValue(true):
|
||||
chat2_mb_dropped.inc(labelValues = ["duplicate"])
|
||||
error "Matterbridge host unreachable. Dropping message."
|
||||
|
||||
proc pollMatterbridge(cmb: Chat2MatterBridge, handler: MbMessageHandler) {.async.} =
|
||||
while cmb.running:
|
||||
let getRes = cmb.mbClient.getMessages()
|
||||
|
||||
if getRes.isOk():
|
||||
for jsonNode in getRes[]:
|
||||
await handler(jsonNode)
|
||||
else:
|
||||
let msg = cmb.mbClient.getMessages().valueOr:
|
||||
error "Matterbridge host unreachable. Sleeping before retrying."
|
||||
await sleepAsync(chronos.seconds(10))
|
||||
|
||||
continue
|
||||
for jsonNode in msg:
|
||||
await handler(jsonNode)
|
||||
await sleepAsync(cmb.pollPeriod)
|
||||
|
||||
##############
|
||||
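The refactors in this hunk lean on the `valueOr` template from nim-results, which evaluates its block only when the `Result` holds an error and injects `error` into that block. A minimal self-contained sketch of the idiom (names here are illustrative, not from the diff):

```nim
import results # nim-results, as used across this codebase

proc fetchCount(): Result[int, string] =
  err("host unreachable")

let count = fetchCount().valueOr:
  # `error` is injected by valueOr and holds the error payload
  echo "falling back after error: ", error
  0

doAssert count == 0
```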
@@ -169,9 +164,7 @@ proc new*(
  let mbClient = MatterbridgeClient.new(mbHostUri, mbGateway)

  # Let's verify the Matterbridge configuration before continuing
  let clientHealth = mbClient.isHealthy()

  if clientHealth.isOk() and clientHealth[]:
  if mbClient.isHealthy().valueOr(false):
    info "Reached Matterbridge host", host = mbClient.host
  else:
    raise newException(ValueError, "Matterbridge client not reachable/healthy")
@@ -201,7 +194,7 @@ proc start*(cmb: Chat2MatterBridge) {.async.} =

  cmb.running = true

  debug "Start polling Matterbridge"
  info "Start polling Matterbridge"

  # Start Matterbridge polling (@TODO: use streaming interface)
  proc mbHandler(jsonNode: JsonNode) {.async.} =
@@ -211,12 +204,15 @@ proc start*(cmb: Chat2MatterBridge) {.async.} =
  asyncSpawn cmb.pollMatterbridge(mbHandler)

  # Start Waku v2 node
  debug "Start listening on Waku v2"
  info "Start listening on Waku v2"
  await cmb.nodev2.start()

  # Always mount relay for bridge
  # `triggerSelf` is false on a `bridge` to avoid duplicates
  await cmb.nodev2.mountRelay()
  (await cmb.nodev2.mountRelay()).isOkOr:
    error "failed to mount relay", error = error
    return

  cmb.nodev2.wakuRelay.triggerSelf = false

  # Bridging
@@ -230,7 +226,9 @@ proc start*(cmb: Chat2MatterBridge) {.async.} =
    except:
      error "exception in relayHandler: " & getCurrentExceptionMsg()

  cmb.nodev2.subscribe((kind: PubsubSub, topic: DefaultPubsubTopic), some(relayHandler))
  cmb.nodev2.subscribe((kind: PubsubSub, topic: DefaultPubsubTopic), relayHandler).isOkOr:
    error "failed to subscribe to relay", topic = DefaultPubsubTopic, error = error
    return

proc stop*(cmb: Chat2MatterBridge) {.async: (raises: [Exception]).} =
  info "Stopping Chat2MatterBridge"
@@ -242,7 +240,7 @@ proc stop*(cmb: Chat2MatterBridge) {.async: (raises: [Exception]).} =

{.pop.}
# @TODO confutils.nim(775, 17) Error: can raise an unlisted exception: ref IOError
when isMainModule:
  import waku/common/utils/nat, waku/waku_api/message_cache
  import waku/common/utils/nat, waku/rest_api/message_cache

  let
    rng = newRng()
@@ -251,25 +249,21 @@ when isMainModule:
  if conf.logLevel != LogLevel.NONE:
    setLogLevel(conf.logLevel)

  let natRes = setupNat(
  let (nodev2ExtIp, nodev2ExtPort, _) = setupNat(
    conf.nat,
    clientId,
    Port(uint16(conf.libp2pTcpPort) + conf.portsShift),
    Port(uint16(conf.udpPort) + conf.portsShift),
  )
  if natRes.isErr():
    error "Error in setupNat", error = natRes.error
  ).valueOr:
    raise newException(ValueError, "setupNat error " & error)

  # Load address configuration
  let
    (nodev2ExtIp, nodev2ExtPort, _) = natRes.get()
    ## The following heuristic assumes that, in absence of manual
    ## config, the external port is the same as the bind port.
    extPort =
      if nodev2ExtIp.isSome() and nodev2ExtPort.isNone():
        some(Port(uint16(conf.libp2pTcpPort) + conf.portsShift))
      else:
        nodev2ExtPort
  ## The following heuristic assumes that, in absence of manual
  ## config, the external port is the same as the bind port.
  let extPort =
    if nodev2ExtIp.isSome() and nodev2ExtPort.isNone():
      some(Port(uint16(conf.libp2pTcpPort) + conf.portsShift))
    else:
      nodev2ExtPort

  let bridge = Chat2Matterbridge.new(
    mbHostUri = "http://" & $initTAddress(conf.mbHostAddress, Port(conf.mbHostPort)),

@@ -67,12 +67,6 @@ type Chat2MatterbridgeConf* = object
      name: "nodekey"
    .}: crypto.PrivateKey

    topics* {.
      desc: "Default topics to subscribe to (space separated list)",
      defaultValue: "/waku/2/rs/0/0",
      name: "topics"
    .}: string

    store* {.
      desc: "Flag whether to start store protocol", defaultValue: true, name: "store"
    .}: bool
@@ -97,7 +91,7 @@ type Chat2MatterbridgeConf* = object
      name: "filternode"
    .}: string

    # Matterbridge options
    # Matterbridge options
    mbHostAddress* {.
      desc: "Listening address of the Matterbridge host",
      defaultValue: parseIpAddress("127.0.0.1"),
@@ -132,11 +126,9 @@ proc completeCmdArg*(T: type keys.KeyPair, val: string): seq[string] =
  return @[]

proc parseCmdArg*(T: type crypto.PrivateKey, p: string): T =
  let key = SkPrivateKey.init(p)
  if key.isOk():
    crypto.PrivateKey(scheme: Secp256k1, skkey: key.get())
  else:
  let key = SkPrivateKey.init(p).valueOr:
    raise newException(ValueError, "Invalid private key")
  return crypto.PrivateKey(scheme: Secp256k1, skkey: key)

proc completeCmdArg*(T: type crypto.PrivateKey, val: string): seq[string] =
  return @[]

apps/chat2mix/chat2mix.nim (new file, 663 lines)
@@ -0,0 +1,663 @@
## chat2 is an example of usage of Waku v2. For suggested usage options, please
## see dingpu tutorial in docs folder.

when not (compileOption("threads")):
  {.fatal: "Please, compile this program with the --threads:on option!".}

{.push raises: [].}

import std/[strformat, strutils, times, options, random, sequtils]
import
  confutils,
  chronicles,
  chronos,
  eth/keys,
  bearssl,
  results,
  stew/[byteutils],
  metrics,
  metrics/chronos_httpserver
import
  libp2p/[
    switch, # manage transports, a single entry point for dialing and listening
    crypto/crypto, # cryptographic functions
    stream/connection, # create and close stream read / write connections
    multiaddress,
    # encode different addressing schemes. For example, /ip4/7.7.7.7/tcp/6543 means it is using IPv4 protocol and TCP
    peerinfo,
    # manage the information of a peer, such as peer ID and public / private key
    peerid, # Implement how peers interact
    protobuf/minprotobuf, # message serialisation/deserialisation from and to protobufs
    nameresolving/dnsresolver,
    protocols/mix/curve25519,
  ] # define DNS resolution
import
  waku/[
    waku_core,
    waku_lightpush/common,
    waku_lightpush/rpc,
    waku_enr,
    discovery/waku_dnsdisc,
    waku_node,
    node/waku_metrics,
    node/peer_manager,
    factory/builder,
    common/utils/nat,
    waku_store/common,
    waku_filter_v2/client,
    common/logging,
  ],
  ./config_chat2mix

import libp2p/protocols/pubsub/rpc/messages, libp2p/protocols/pubsub/pubsub
import ../../waku/waku_rln_relay

logScope:
  topics = "chat2 mix"

const Help =
  """
  Commands: /[?|help|connect|nick|exit]
  help: Prints this help
  connect: dials a remote peer
  nick: change nickname for current chat session
  exit: exits chat session
"""

# XXX Connected is a bit annoying, because incoming connections don't trigger state change
# Could poll connection pool or something here, I suppose
# TODO Ensure connected turns true on incoming connections, or get rid of it
type Chat = ref object
  node: WakuNode # waku node for publishing, subscribing, etc
  transp: StreamTransport # transport streams between read & write file descriptor
  subscribed: bool # indicates if a node is subscribed or not to a topic
  connected: bool # if the node is connected to another peer
  started: bool # if the node has started
  nick: string # nickname for this chat session
  prompt: bool # chat prompt is showing
  contentTopic: string # default content topic for chat messages
  conf: Chat2Conf # configuration for chat2

type
  PrivateKey* = crypto.PrivateKey
  Topic* = waku_core.PubsubTopic

const MinMixNodePoolSize = 4

#####################
## chat2 protobufs ##
#####################

type
  SelectResult*[T] = Result[T, string]

  Chat2Message* = object
    timestamp*: int64
    nick*: string
    payload*: seq[byte]

proc getPubsubTopic*(
    conf: Chat2Conf, node: WakuNode, contentTopic: string
): PubsubTopic =
  let shard = node.wakuAutoSharding.get().getShard(contentTopic).valueOr:
    echo "Could not parse content topic: " & error
    return "" #TODO: fix this.
  return $RelayShard(clusterId: conf.clusterId, shardId: shard.shardId)

proc init*(T: type Chat2Message, buffer: seq[byte]): ProtoResult[T] =
  var msg = Chat2Message()
  let pb = initProtoBuffer(buffer)

  var timestamp: uint64
  discard ?pb.getField(1, timestamp)
  msg.timestamp = int64(timestamp)

  discard ?pb.getField(2, msg.nick)
  discard ?pb.getField(3, msg.payload)

  ok(msg)

proc encode*(message: Chat2Message): ProtoBuffer =
  var serialised = initProtoBuffer()

  serialised.write(1, uint64(message.timestamp))
  serialised.write(2, message.nick)
  serialised.write(3, message.payload)

  return serialised

proc `$`*(message: Chat2Message): string =
  # Get message date and timestamp in local time
  let time = message.timestamp.fromUnix().local().format("'<'MMM' 'dd,' 'HH:mm'>'")

  return time & " " & message.nick & ": " & string.fromBytes(message.payload)

#####################

proc connectToNodes(c: Chat, nodes: seq[string]) {.async.} =
  echo "Connecting to nodes"
  await c.node.connectToNodes(nodes)
  c.connected = true

proc showChatPrompt(c: Chat) =
  if not c.prompt:
    try:
      stdout.write(">> ")
      stdout.flushFile()
      c.prompt = true
    except IOError:
      discard

proc getChatLine(payload: seq[byte]): string =
  # No payload encoding/encryption from Waku
  let pb = Chat2Message.init(payload).valueOr:
    return string.fromBytes(payload)
  return $pb

proc printReceivedMessage(c: Chat, msg: WakuMessage) =
  let chatLine = getChatLine(msg.payload)
  try:
    echo &"{chatLine}"
  except ValueError:
    # Formatting fail. Print chat line in any case.
    echo chatLine

  c.prompt = false
  showChatPrompt(c)
  trace "Printing message", chatLine, contentTopic = msg.contentTopic

proc readNick(transp: StreamTransport): Future[string] {.async.} =
  # Chat prompt
  stdout.write("Choose a nickname >> ")
  stdout.flushFile()
  return await transp.readLine()

proc startMetricsServer(
    serverIp: IpAddress, serverPort: Port
): Result[MetricsHttpServerRef, string] =
  info "Starting metrics HTTP server", serverIp = $serverIp, serverPort = $serverPort

  let server = MetricsHttpServerRef.new($serverIp, serverPort).valueOr:
    return err("metrics HTTP server start failed: " & $error)

  try:
    waitFor server.start()
  except CatchableError:
    return err("metrics HTTP server start failed: " & getCurrentExceptionMsg())

  info "Metrics HTTP server started", serverIp = $serverIp, serverPort = $serverPort
  ok(server)

proc publish(c: Chat, line: string) {.async.} =
  # First create a Chat2Message protobuf with this line of text
  let time = getTime().toUnix()
  let chat2pb =
    Chat2Message(timestamp: time, nick: c.nick, payload: line.toBytes()).encode()

  ## @TODO: error handling on failure
  proc handler(response: LightPushResponse) {.gcsafe, closure.} =
    trace "lightpush response received", response = response

  var message = WakuMessage(
    payload: chat2pb.buffer,
    contentTopic: c.contentTopic,
    version: 0,
    timestamp: getNanosecondTime(time),
  )

  try:
    if not c.node.wakuLightpushClient.isNil():
      # Attempt lightpush with mix
      (
        waitFor c.node.lightpushPublish(
          some(c.conf.getPubsubTopic(c.node, c.contentTopic)),
          message,
          none(RemotePeerInfo),
          true,
        )
      ).isOkOr:
        error "failed to publish lightpush message", error = error
    else:
      error "failed to publish message as lightpush client is not initialized"
  except CatchableError:
    error "caught error publishing message: ", error = getCurrentExceptionMsg()

# TODO This should read or be subscribe handler subscribe
proc readAndPrint(c: Chat) {.async.} =
  while true:
    # while p.connected:
    #   # TODO: echo &"{p.id} -> "
    #
    #   echo cast[string](await p.conn.readLp(1024))
    #echo "readAndPrint subscribe NYI"
    await sleepAsync(100)

# TODO Implement
proc writeAndPrint(c: Chat) {.async.} =
  while true:
    # Connect state not updated on incoming WakuRelay connections
    # if not c.connected:
    #   echo "type an address or wait for a connection:"
    #   echo "type /[help|?] for help"

    # Chat prompt
    showChatPrompt(c)

    let line = await c.transp.readLine()
    if line.startsWith("/help") or line.startsWith("/?") or not c.started:
      echo Help
      continue

    # if line.startsWith("/disconnect"):
    #   echo "Ending current session"
    #   if p.connected and p.conn.closed.not:
    #     await p.conn.close()
    #   p.connected = false
    elif line.startsWith("/connect"):
      # TODO Should be able to connect to multiple peers for Waku chat
      if c.connected:
        echo "already connected to at least one peer"
        continue

      echo "enter address of remote peer"
      let address = await c.transp.readLine()
      if address.len > 0:
        await c.connectToNodes(@[address])
    elif line.startsWith("/nick"):
      # Set a new nickname
      c.nick = await readNick(c.transp)
      echo "You are now known as " & c.nick
    elif line.startsWith("/exit"):
      echo "quitting..."

      try:
        await c.node.stop()
      except:
        echo "exception happened when stopping: " & getCurrentExceptionMsg()

      quit(QuitSuccess)
    else:
      # XXX connected state problematic
      if c.started:
        echo "publishing message: " & line
        await c.publish(line)
        # TODO Connect to peer logic?
      else:
        try:
          if line.startsWith("/") and "p2p" in line:
            await c.connectToNodes(@[line])
        except:
          echo &"unable to dial remote peer {line}"
          echo getCurrentExceptionMsg()

proc readWriteLoop(c: Chat) {.async.} =
  asyncSpawn c.writeAndPrint() # execute the async function but does not block
  asyncSpawn c.readAndPrint()

proc readInput(wfd: AsyncFD) {.thread, raises: [Defect, CatchableError].} =
  ## This procedure performs reading from `stdin` and sends data over
  ## pipe to main thread.
  let transp = fromPipe(wfd)

  while true:
    let line = stdin.readLine()
    discard waitFor transp.write(line & "\r\n")

var alreadyUsedServicePeers {.threadvar.}: seq[RemotePeerInfo]

proc selectRandomServicePeer*(
    pm: PeerManager, actualPeer: Option[RemotePeerInfo], codec: string
): Result[RemotePeerInfo, void] =
  if actualPeer.isSome():
    alreadyUsedServicePeers.add(actualPeer.get())

  let supportivePeers = pm.switch.peerStore.getPeersByProtocol(codec).filterIt(
    it notin alreadyUsedServicePeers
  )
  if supportivePeers.len == 0:
    return err()

  let rndPeerIndex = rand(0 .. supportivePeers.len - 1)
  return ok(supportivePeers[rndPeerIndex])

proc maintainSubscription(
    wakuNode: WakuNode,
    filterPubsubTopic: PubsubTopic,
    filterContentTopic: ContentTopic,
    filterPeer: RemotePeerInfo,
    preventPeerSwitch: bool,
) {.async.} =
  var actualFilterPeer = filterPeer
  const maxFailedSubscribes = 3
  const maxFailedServiceNodeSwitches = 10
  var noFailedSubscribes = 0
  var noFailedServiceNodeSwitches = 0
  # Use chronos.Duration explicitly to avoid mismatch with std/times.Duration
  let RetryWait = chronos.seconds(2) # Quick retry interval
  let SubscriptionMaintenance = chronos.seconds(30) # Subscription maintenance interval
  while true:
    info "maintaining subscription at", peer = constructMultiaddrStr(actualFilterPeer)
    # First use filter-ping to check if we have an active subscription
    let pingErr = (await wakuNode.wakuFilterClient.ping(actualFilterPeer)).errorOr:
      await sleepAsync(SubscriptionMaintenance)
      info "subscription is live."
      continue

    # No subscription found. Let's subscribe.
    error "ping failed.", error = pingErr
    trace "no subscription found. Sending subscribe request"

    let subscribeErr = (
      await wakuNode.filterSubscribe(
        some(filterPubsubTopic), filterContentTopic, actualFilterPeer
      )
    ).errorOr:
      await sleepAsync(SubscriptionMaintenance)
      if noFailedSubscribes > 0:
        noFailedSubscribes -= 1
      notice "subscribe request successful."
      continue

    noFailedSubscribes += 1
    error "Subscribe request failed.",
      error = subscribeErr, peer = actualFilterPeer, failCount = noFailedSubscribes

    # TODO: disconnect from failed actualFilterPeer
    # asyncSpawn(wakuNode.peerManager.switch.disconnect(p))
    # wakunode.peerManager.peerStore.delete(actualFilterPeer)

    if noFailedSubscribes < maxFailedSubscribes:
      await sleepAsync(RetryWait) # Wait a bit before retrying
    elif not preventPeerSwitch:
      # try again with new peer without delay
      let actualFilterPeer = selectRandomServicePeer(
        wakuNode.peerManager, some(actualFilterPeer), WakuFilterSubscribeCodec
      ).valueOr:
        error "Failed to find new service peer. Exiting."
        noFailedServiceNodeSwitches += 1
        break

      info "Found new peer for codec",
        codec = filterPubsubTopic, peer = constructMultiaddrStr(actualFilterPeer)

      noFailedSubscribes = 0
    else:
      await sleepAsync(SubscriptionMaintenance)

{.pop.}
# @TODO confutils.nim(775, 17) Error: can raise an unlisted exception: ref IOError
proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
  let
    transp = fromPipe(rfd)
    conf = Chat2Conf.load()
    nodekey =
      if conf.nodekey.isSome():
        conf.nodekey.get()
      else:
        PrivateKey.random(Secp256k1, rng[]).tryGet()

  # set log level
  if conf.logLevel != LogLevel.NONE:
    setLogLevel(conf.logLevel)

  let (extIp, extTcpPort, extUdpPort) = setupNat(
    conf.nat,
    clientId,
    Port(uint16(conf.tcpPort) + conf.portsShift),
    Port(uint16(conf.udpPort) + conf.portsShift),
  ).valueOr:
    raise newException(ValueError, "setupNat error " & error)

  var enrBuilder = EnrBuilder.init(nodeKey)

  enrBuilder.withWakuRelaySharding(
    RelayShards(clusterId: conf.clusterId, shardIds: conf.shards)
  ).isOkOr:
    error "failed to add sharded topics to ENR", error = error
    quit(QuitFailure)

  let record = enrBuilder.build().valueOr:
    error "failed to create enr record", error = error
    quit(QuitFailure)

  let node = block:
    var builder = WakuNodeBuilder.init()
    builder.withNodeKey(nodeKey)
    builder.withRecord(record)

    builder
    .withNetworkConfigurationDetails(
      conf.listenAddress,
      Port(uint16(conf.tcpPort) + conf.portsShift),
      extIp,
      extTcpPort,
      wsBindPort = Port(uint16(conf.websocketPort) + conf.portsShift),
      wsEnabled = conf.websocketSupport,
      wssEnabled = conf.websocketSecureSupport,
    )
    .tryGet()
    builder.build().tryGet()

  node.mountAutoSharding(conf.clusterId, conf.numShardsInNetwork).isOkOr:
    error "failed to mount waku sharding: ", error = error
    quit(QuitFailure)
  node.mountMetadata(conf.clusterId, conf.shards).isOkOr:
    error "failed to mount waku metadata protocol: ", err = error
    quit(QuitFailure)

  let (mixPrivKey, mixPubKey) = generateKeyPair().valueOr:
    error "failed to generate mix key pair", error = error
    return

  (await node.mountMix(conf.clusterId, mixPrivKey, conf.mixnodes)).isOkOr:
    error "failed to mount waku mix protocol: ", error = $error
    quit(QuitFailure)
  await node.mountRendezvousClient(conf.clusterId)

  await node.start()

  node.peerManager.start()

  await node.mountLibp2pPing()
  await node.mountPeerExchangeClient()
  let pubsubTopic = conf.getPubsubTopic(node, conf.contentTopic)
  echo "pubsub topic is: " & pubsubTopic
  let nick = await readNick(transp)
  echo "Welcome, " & nick & "!"

  var chat = Chat(
    node: node,
    transp: transp,
    subscribed: true,
    connected: false,
    started: true,
    nick: nick,
    prompt: false,
    contentTopic: conf.contentTopic,
    conf: conf,
  )

  var dnsDiscoveryUrl = none(string)

  if conf.fleet != Fleet.none:
    # Use DNS discovery to connect to selected fleet
    echo "Connecting to " & $conf.fleet & " fleet using DNS discovery..."

    if conf.fleet == Fleet.test:
      dnsDiscoveryUrl = some(
        "enrtree://AOGYWMBYOUIMOENHXCHILPKY3ZRFEULMFI4DOM442QSZ73TT2A7VI@test.waku.nodes.status.im"
      )
    else:
      # Connect to sandbox by default
      dnsDiscoveryUrl = some(
        "enrtree://AIRVQ5DDA4FFWLRBCHJWUWOO6X6S4ZTZ5B667LQ6AJU6PEYDLRD5O@sandbox.waku.nodes.status.im"
      )
  elif conf.dnsDiscoveryUrl != "":
    # No pre-selected fleet. Discover nodes via DNS using user config
    info "Discovering nodes using Waku DNS discovery", url = conf.dnsDiscoveryUrl
    dnsDiscoveryUrl = some(conf.dnsDiscoveryUrl)

  var discoveredNodes: seq[RemotePeerInfo]

  if dnsDiscoveryUrl.isSome:
    var nameServers: seq[TransportAddress]
    for ip in conf.dnsDiscoveryNameServers:
      nameServers.add(initTAddress(ip, Port(53))) # Assume all servers use port 53

    let dnsResolver = DnsResolver.new(nameServers)

    proc resolver(domain: string): Future[string] {.async, gcsafe.} =
      trace "resolving", domain = domain
      let resolved = await dnsResolver.resolveTxt(domain)
      return resolved[0] # Use only first answer

    let wakuDnsDiscovery = WakuDnsDiscovery.init(dnsDiscoveryUrl.get(), resolver)
    if wakuDnsDiscovery.isOk:
      let discoveredPeers = await wakuDnsDiscovery.get().findPeers()
      if discoveredPeers.isOk:
        info "Connecting to discovered peers"
        discoveredNodes = discoveredPeers.get()
        echo "Discovered and connecting to " & $discoveredNodes
        waitFor chat.node.connectToNodes(discoveredNodes)
      else:
        warn "Failed to find peers via DNS discovery", error = discoveredPeers.error
    else:
      warn "Failed to init Waku DNS discovery", error = wakuDnsDiscovery.error

  let peerInfo = node.switch.peerInfo
  let listenStr = $peerInfo.addrs[0] & "/p2p/" & $peerInfo.peerId
  echo &"Listening on\n {listenStr}"

  if (conf.storenode != "") or (conf.store == true):
    await node.mountStore()

    var storenode: Option[RemotePeerInfo]

    if conf.storenode != "":
      let peerInfo = parsePeerInfo(conf.storenode)
      if peerInfo.isOk():
        storenode = some(peerInfo.value)
      else:
        error "Incorrect conf.storenode", error = peerInfo.error
    elif discoveredNodes.len > 0:
      echo "Store enabled, but no store nodes configured. Choosing one at random from discovered peers"
      storenode = some(discoveredNodes[rand(0 .. len(discoveredNodes) - 1)])

    if storenode.isSome():
      # We have a viable storenode. Let's query it for historical messages.
      echo "Connecting to storenode: " & $(storenode.get())

      node.mountStoreClient()
      node.peerManager.addServicePeer(storenode.get(), WakuStoreCodec)

      proc storeHandler(response: StoreQueryResponse) {.gcsafe.} =
        for msg in response.messages:
          let payload =
            if msg.message.isSome():
              msg.message.get().payload
            else:
              newSeq[byte](0)

          let chatLine = getChatLine(payload)
          echo &"{chatLine}"
        info "Hit store handler"

      let queryRes = await node.query(
        StoreQueryRequest(contentTopics: @[chat.contentTopic]), storenode.get()
      )
      if queryRes.isOk():
        storeHandler(queryRes.value)

  if conf.edgemode: #Mount light protocol clients
    node.mountLightPushClient()
    await node.mountFilterClient()
    let filterHandler = proc(
        pubsubTopic: PubsubTopic, msg: WakuMessage
    ): Future[void] {.async, closure.} =
      trace "Hit filter handler", contentTopic = msg.contentTopic
      chat.printReceivedMessage(msg)

    node.wakuFilterClient.registerPushHandler(filterHandler)
    var servicePeerInfo: RemotePeerInfo
    if conf.serviceNode != "":
      servicePeerInfo = parsePeerInfo(conf.serviceNode).valueOr:
        error "Couldn't parse conf.serviceNode", error = error
        RemotePeerInfo()
    if servicePeerInfo == nil or $servicePeerInfo.peerId == "":
      # Assuming that service node supports all services
      servicePeerInfo = selectRandomServicePeer(
        node.peerManager, none(RemotePeerInfo), WakuLightpushCodec
      ).valueOr:
        error "Couldn't find any service peer"
        quit(QuitFailure)

    node.peerManager.addServicePeer(servicePeerInfo, WakuLightpushCodec)
    node.peerManager.addServicePeer(servicePeerInfo, WakuPeerExchangeCodec)
    #node.peerManager.addServicePeer(servicePeerInfo, WakuRendezVousCodec)

    # Start maintaining subscription
    asyncSpawn maintainSubscription(
      node, pubsubTopic, conf.contentTopic, servicePeerInfo, false
    )
    echo "waiting for mix nodes to be discovered..."
    while true:
      if node.getMixNodePoolSize() >= MinMixNodePoolSize:
        break
      discard await node.fetchPeerExchangePeers()
      await sleepAsync(1000)

  while node.getMixNodePoolSize() < MinMixNodePoolSize:
    info "waiting for mix nodes to be discovered",
      currentpoolSize = node.getMixNodePoolSize()
    await sleepAsync(1000)
  notice "ready to publish with mix node pool size ",
    currentpoolSize = node.getMixNodePoolSize()
  echo "ready to publish messages now"

  # Once min mixnodes are discovered loop as per default setting
  node.startPeerExchangeLoop()

  if conf.metricsLogging:
    startMetricsLog()

  if conf.metricsServer:
    let metricsServer = startMetricsServer(
      conf.metricsServerAddress, Port(conf.metricsServerPort + conf.portsShift)
    )

  await chat.readWriteLoop()

  runForever()

proc main(rng: ref HmacDrbgContext) {.async.} =
  let (rfd, wfd) = createAsyncPipe()
  if rfd == asyncInvalidPipe or wfd == asyncInvalidPipe:
    raise newException(ValueError, "Could not initialize pipe!")

  var thread: Thread[AsyncFD]
  thread.createThread(readInput, wfd)
  try:
    await processInput(rfd, rng)
  # Handle only ConfigurationError for now
  # TODO: Throw other errors from the mounting procedure
  except ConfigurationError as e:
    raise e

when isMainModule: # isMainModule = true when the module is compiled as the main file
  let rng = crypto.newRng()
  try:
    waitFor(main(rng))
  except CatchableError as e:
    raise e

## Dump of things that can be improved:
##
## - Incoming dialed peer does not change connected state (not relying on it for now)
## - Unclear if staticnode argument works (can enter manually)
## - Don't trigger self / double publish own messages
## - Test/default to cluster node connection (diff protocol version)
## - Redirect logs to separate file
## - Expose basic publish/subscribe etc commands with /syntax
## - Show part of peerid to know who sent message
## - Deal with protobuf messages (e.g. other chat protocol, or encrypted)
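As a quick sanity check of the `Chat2Message` protobuf defined in the file above, a round-trip through `encode` and `init` can be sketched like this (field values are arbitrary, for illustration only):

```nim
import stew/byteutils # for toBytes

# assumes Chat2Message, init and encode from chat2mix.nim above
let original =
  Chat2Message(timestamp: 1_700_000_000'i64, nick: "alice", payload: "hi".toBytes())
# decode from the serialised protobuf buffer and compare fields
let decoded = Chat2Message.init(original.encode().buffer).tryGet()
doAssert decoded.timestamp == original.timestamp
doAssert decoded.nick == original.nick
doAssert decoded.payload == original.payload
```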
apps/chat2mix/config_chat2mix.nim (new file, 315 lines)
@@ -0,0 +1,315 @@
import chronicles, chronos, std/strutils, regex

import
  eth/keys,
  libp2p/crypto/crypto,
  libp2p/crypto/secp,
  libp2p/crypto/curve25519,
  libp2p/multiaddress,
  libp2p/multicodec,
  nimcrypto/utils,
  confutils,
  confutils/defs,
  confutils/std/net

import waku/waku_core, waku/waku_mix

type
  Fleet* = enum
    none
    sandbox
    test

  EthRpcUrl* = distinct string

  Chat2Conf* = object ## General node config
    edgemode* {.
      defaultValue: true, desc: "Run the app in edge mode", name: "edge-mode"
    .}: bool

    logLevel* {.
      desc: "Sets the log level.", defaultValue: LogLevel.INFO, name: "log-level"
    .}: LogLevel

    nodekey* {.desc: "P2P node private key as 64 char hex string.", name: "nodekey".}:
      Option[crypto.PrivateKey]

    listenAddress* {.
      defaultValue: defaultListenAddress(config),
      desc: "Listening address for the LibP2P traffic.",
      name: "listen-address"
    .}: IpAddress

    tcpPort* {.desc: "TCP listening port.", defaultValue: 60000, name: "tcp-port".}:
      Port

    udpPort* {.desc: "UDP listening port.", defaultValue: 60000, name: "udp-port".}:
      Port

    portsShift* {.
      desc: "Add a shift to all port numbers.", defaultValue: 0, name: "ports-shift"
    .}: uint16

    nat* {.
      desc:
        "Specify method to use for determining public address. " &
        "Must be one of: any, none, upnp, pmp, extip:<IP>.",
      defaultValue: "any"
    .}: string

    ## Persistence config
    dbPath* {.
      desc: "The database path for persistent storage", defaultValue: "", name: "db-path"
    .}: string

    persistPeers* {.
      desc: "Enable peer persistence: true|false",
      defaultValue: false,
      name: "persist-peers"
    .}: bool

    persistMessages* {.
      desc: "Enable message persistence: true|false",
      defaultValue: false,
      name: "persist-messages"
    .}: bool

    ## Relay config
    relay* {.
      desc: "Enable relay protocol: true|false", defaultValue: true, name: "relay"
    .}: bool

    staticnodes* {.
      desc: "Peer multiaddr to directly connect with. Argument may be repeated.",
      name: "staticnode",
      defaultValue: @[]
    .}: seq[string]

    mixnodes* {.
      desc:
        "Multiaddress and mix-key of mix node to be statically specified in format multiaddr:mixPubKey. Argument may be repeated.",
      name: "mixnode"
    .}: seq[MixNodePubInfo]

    keepAlive* {.
      desc: "Enable keep-alive for idle connections: true|false",
      defaultValue: false,
      name: "keep-alive"
    .}: bool

    clusterId* {.
      desc:
        "Cluster id that the node is running in. Node in a different cluster id is disconnected.",
      defaultValue: 1,
      name: "cluster-id"
    .}: uint16

    numShardsInNetwork* {.
      desc: "Number of shards in the network",
      defaultValue: 8,
      name: "num-shards-in-network"
    .}: uint32

    shards* {.
      desc:
        "Shards index to subscribe to [0..NUM_SHARDS_IN_NETWORK-1]. Argument may be repeated.",
      defaultValue:
        @[
          uint16(0),
          uint16(1),
          uint16(2),
          uint16(3),
          uint16(4),
          uint16(5),
          uint16(6),
          uint16(7),
        ],
      name: "shard"
    .}: seq[uint16]

    ## Store config
    store* {.
      desc: "Enable store protocol: true|false", defaultValue: false, name: "store"
    .}: bool

    storenode* {.
      desc: "Peer multiaddr to query for storage.", defaultValue: "", name: "storenode"
    .}: string

    ## Filter config
    filter* {.
      desc: "Enable filter protocol: true|false", defaultValue: false, name: "filter"
    .}: bool

    ## Lightpush config
    lightpush* {.
      desc: "Enable lightpush protocol: true|false",
      defaultValue: false,
      name: "lightpush"
    .}: bool

    servicenode* {.
      desc: "Peer multiaddr to request lightpush and filter services",
      defaultValue: "",
      name: "servicenode"
    .}: string

    ## Metrics config
    metricsServer* {.
      desc: "Enable the metrics server: true|false",
      defaultValue: false,
      name: "metrics-server"
    .}: bool

    metricsServerAddress* {.
      desc: "Listening address of the metrics server.",
      defaultValue: parseIpAddress("127.0.0.1"),
      name: "metrics-server-address"
    .}: IpAddress

    metricsServerPort* {.
      desc: "Listening HTTP port of the metrics server.",
      defaultValue: 8008,
      name: "metrics-server-port"
    .}: uint16

    metricsLogging* {.
      desc: "Enable metrics logging: true|false",
      defaultValue: true,
      name: "metrics-logging"
    .}: bool

    ## DNS discovery config
    dnsDiscovery* {.
      desc:
        "Deprecated, please set dns-discovery-url instead. Enable discovering nodes via DNS",
      defaultValue: false,
      name: "dns-discovery"
    .}: bool

    dnsDiscoveryUrl* {.
      desc: "URL for DNS node list in format 'enrtree://<key>@<fqdn>'",
      defaultValue: "",
      name: "dns-discovery-url"
    .}: string

    dnsDiscoveryNameServers* {.
      desc: "DNS name server IPs to query. Argument may be repeated.",
      defaultValue: @[parseIpAddress("1.1.1.1"), parseIpAddress("1.0.0.1")],
      name: "dns-discovery-name-server"
    .}: seq[IpAddress]

    ## Chat2 configuration
    fleet* {.
      desc:
        "Select the fleet to connect to. This sets the DNS discovery URL to the selected fleet.",
      defaultValue: Fleet.test,
      name: "fleet"
    .}: Fleet

    contentTopic* {.
      desc: "Content topic for chat messages.",
      defaultValue: "/toy-chat-mix/2/huilong/proto",
      name: "content-topic"
    .}: string

    ## Websocket Configuration
    websocketSupport* {.
      desc: "Enable websocket: true|false",
      defaultValue: false,
      name: "websocket-support"
    .}: bool

    websocketPort* {.
      desc: "WebSocket listening port.", defaultValue: 8000, name: "websocket-port"
    .}: Port

    websocketSecureSupport* {.
      desc: "WebSocket Secure Support.",
      defaultValue: false,
      name: "websocket-secure-support"
    .}: bool ## rln-relay configuration

proc parseCmdArg*(T: type MixNodePubInfo, p: string): T =
  let elements = p.split(":")
  if elements.len != 2:
    raise newException(
      ValueError, "Invalid format for mix node expected multiaddr:mixPublicKey"
    )
  let multiaddr = MultiAddress.init(elements[0]).valueOr:
    raise newException(ValueError, "Invalid multiaddress format")
  if not multiaddr.contains(multiCodec("ip4")).get():
    raise newException(
      ValueError, "Invalid format for ip address, expected an ipv4 multiaddress"
    )

  return MixNodePubInfo(
    multiaddr: elements[0], pubKey: intoCurve25519Key(ncrutils.fromHex(elements[1]))
  )

# NOTE: Keys are different in nim-libp2p
proc parseCmdArg*(T: type crypto.PrivateKey, p: string): T =
  try:
    let key = SkPrivateKey.init(utils.fromHex(p)).tryGet()
    # XXX: Here at the moment
    result = crypto.PrivateKey(scheme: Secp256k1, skkey: key)
  except CatchableError as e:
    raise newException(ValueError, "Invalid private key")

proc completeCmdArg*(T: type crypto.PrivateKey, val: string): seq[string] =
  return @[]

proc parseCmdArg*(T: type IpAddress, p: string): T =
  try:
    result = parseIpAddress(p)
  except CatchableError as e:
    raise newException(ValueError, "Invalid IP address")

proc completeCmdArg*(T: type IpAddress, val: string): seq[string] =
  return @[]

proc parseCmdArg*(T: type Port, p: string): T =
  try:
    result = Port(parseInt(p))
  except CatchableError as e:
    raise newException(ValueError, "Invalid Port number")

proc completeCmdArg*(T: type Port, val: string): seq[string] =
  return @[]

proc parseCmdArg*(T: type Option[uint], p: string): T =
  try:
    some(parseUint(p))
  except CatchableError:
    raise newException(ValueError, "Invalid unsigned integer")

proc completeCmdArg*(T: type EthRpcUrl, val: string): seq[string] =
  return @[]

proc parseCmdArg*(T: type EthRpcUrl, s: string): T =
  ## allowed patterns:
  ## http://url:port
  ## https://url:port
  ## http://url:port/path
  ## https://url:port/path
  ## http://url/with/path
  ## http://url:port/path?query
  ## https://url:port/path?query
  ## disallowed patterns:
  ## any valid/invalid ws or wss url
  var httpPattern =
    re2"^(https?):\/\/((localhost)|([\w_-]+(?:(?:\.[\w_-]+)+)))(:[0-9]{1,5})?([\w.,@?^=%&:\/~+#-]*[\w@?^=%&\/~+#-])*"
  var wsPattern =
    re2"^(wss?):\/\/((localhost)|([\w_-]+(?:(?:\.[\w_-]+)+)))(:[0-9]{1,5})?([\w.,@?^=%&:\/~+#-]*[\w@?^=%&\/~+#-])*"
  if regex.match(s, wsPattern):
    raise newException(
      ValueError, "Websocket RPC URL is not supported, Please use an HTTP URL"
    )
  if not regex.match(s, httpPattern):
    raise newException(ValueError, "Invalid HTTP RPC URL")
  return EthRpcUrl(s)

func defaultListenAddress*(conf: Chat2Conf): IpAddress =
  # TODO: How should we select between IPv4 and IPv6
  # Maybe there should be a config option for this.
  (static parseIpAddress("0.0.0.0"))
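Given the regex validation in `parseCmdArg` for `EthRpcUrl` above, the expected behaviour can be illustrated roughly as follows (a sketch; the URLs are placeholders):

```nim
# accepted: plain HTTP(S) endpoints, optionally with port, path and query
doAssert string(parseCmdArg(EthRpcUrl, "https://rpc.example.org:8545/path?q=1")) ==
  "https://rpc.example.org:8545/path?q=1"

# rejected: any ws:// or wss:// URL raises ValueError
doAssertRaises(ValueError):
  discard parseCmdArg(EthRpcUrl, "wss://rpc.example.org:8545")
```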
apps/chat2mix/nim.cfg (new file, 4 lines)
@@ -0,0 +1,4 @@
-d:chronicles_line_numbers
-d:chronicles_runtime_filtering:on
-d:discv5_protocol_id:d5waku
path = "../.."
apps/liteprotocoltester/.env (new file, 27 lines)
@@ -0,0 +1,27 @@
START_PUBLISHING_AFTER_SECS=45
# can add some seconds delay before SENDER starts publishing

NUM_MESSAGES=0
# 0 for infinite number of messages

MESSAGE_INTERVAL_MILLIS=8000
# ms delay between messages

MIN_MESSAGE_SIZE=15Kb
MAX_MESSAGE_SIZE=145Kb

## for wakusim
#SHARD=0
#CONTENT_TOPIC=/tester/2/light-pubsub-test/wakusim
#CLUSTER_ID=66

## for status.prod
#SHARDS=32
CONTENT_TOPIC=/tester/2/light-pubsub-test/fleet
CLUSTER_ID=16

## for TWN
#SHARD=4
#CONTENT_TOPIC=/tester/2/light-pubsub-test/twn
#CLUSTER_ID=1
apps/liteprotocoltester/Dockerfile.liteprotocoltester (new file, 33 lines)
@@ -0,0 +1,33 @@
# TESTING IMAGE --------------------------------------------------------------

## NOTICE: This is a shortcut build file for ubuntu users who compile nwaku in an ubuntu distro.
## This is used for faster turnaround time for testing the compiled binary.
## Prerequisites: compiled liteprotocoltester binary in build/ directory

FROM ubuntu:noble AS prod

LABEL maintainer="zoltan@status.im"
LABEL source="https://github.com/waku-org/nwaku"
LABEL description="Lite Protocol Tester: Waku light-client"
LABEL commit="unknown"
LABEL version="unknown"

# DevP2P, LibP2P, and JSON RPC ports
EXPOSE 30303 60000 8545

# Referenced in the binary
RUN apt-get update && apt-get install -y --no-install-recommends \
    libgcc1 \
    libpq-dev \
    wget \
    iproute2 \
    && rm -rf /var/lib/apt/lists/*

COPY build/liteprotocoltester /usr/bin/
COPY apps/liteprotocoltester/run_tester_node.sh /usr/bin/
COPY apps/liteprotocoltester/run_tester_node_on_fleet.sh /usr/bin/

ENTRYPOINT ["/usr/bin/run_tester_node.sh", "/usr/bin/liteprotocoltester"]

# # By default just show help if called without arguments
CMD ["--help"]
@@ -1,58 +1,73 @@
# BUILD NIM APP ----------------------------------------------------------------
FROM rust:1.77.1-alpine3.18 AS nim-build

ARG NIMFLAGS
ARG MAKE_TARGET=liteprotocoltester
ARG NIM_COMMIT
ARG LOG_LEVEL=TRACE

# Get build tools and required header files
RUN apk add --no-cache bash git build-base pcre-dev linux-headers curl jq
RUN apk add --no-cache bash git build-base openssl-dev linux-headers curl jq

WORKDIR /app
COPY . .

# workaround for alpine issue: https://github.com/alpinelinux/docker-alpine/issues/383
RUN apk update && apk upgrade

# Ran separately from 'make' to avoid re-doing
RUN git submodule update --init --recursive

# Slowest build step for the sake of caching layers
RUN make -j$(nproc) deps QUICK_AND_DIRTY_COMPILER=1 ${NIM_COMMIT}

# Build the final node binary
RUN make -j$(nproc) ${NIM_COMMIT} $MAKE_TARGET LOG_LEVEL=${LOG_LEVEL} NIMFLAGS="${NIMFLAGS}"


# PRODUCTION IMAGE -------------------------------------------------------------
# REFERENCE IMAGE as BASE for specialized PRODUCTION IMAGES----------------------------------------
FROM alpine:3.18 AS base_lpt

FROM alpine:3.18 as prod
ARG MAKE_TARGET=liteprotocoltester

LABEL maintainer="zoltan@status.im"
LABEL source="https://github.com/waku-org/nwaku"
LABEL description="Lite Protocol Tester: Waku light-client"
LABEL commit="unknown"
LABEL version="unknown"

LABEL maintainer="jakub@status.im"

# DevP2P, LibP2P, and JSON RPC ports
EXPOSE 30303 60000 8545

# Referenced in the binary
RUN apk add --no-cache libgcc libpq-dev \
    wget \
    iproute2 \
    python3

RUN apk add --no-cache libgcc pcre-dev libpq-dev
COPY --from=nim-build /app/build/liteprotocoltester /usr/bin/
RUN chmod +x /usr/bin/liteprotocoltester

# Fix for 'Error loading shared library libpcre.so.3: No such file or directory'
RUN ln -s /usr/lib/libpcre.so /usr/lib/libpcre.so.3
# Standalone image to be used manually and in lpt-runner -------------------------------------------
FROM base_lpt AS standalone_lpt

# Copy to separate location to accommodate different MAKE_TARGET values
COPY --from=nim-build /app/build/$MAKE_TARGET /usr/bin/
COPY --from=nim-build /app/apps/liteprotocoltester/run_tester_node.sh /usr/bin/
COPY --from=nim-build /app/apps/liteprotocoltester/run_tester_node_on_fleet.sh /usr/bin/

# Copy migration scripts for DB upgrades
COPY --from=nim-build /app/migrations/ /app/migrations/
RUN chmod +x /usr/bin/run_tester_node.sh

ENTRYPOINT ["/usr/bin/liteprotocoltester"]
ENTRYPOINT ["/usr/bin/run_tester_node.sh", "/usr/bin/liteprotocoltester"]

# By default just show help if called without arguments
CMD ["--help"]
# Image for infra deployment -------------------------------------------
FROM base_lpt AS deployment_lpt

# let supervisor python script flush logs immediately
ENV PYTHONUNBUFFERED="1"

COPY --from=nim-build /app/apps/liteprotocoltester/run_tester_node_at_infra.sh /usr/bin/
COPY --from=nim-build /app/apps/liteprotocoltester/infra.env /usr/bin/
COPY --from=nim-build /app/apps/liteprotocoltester/lpt_supervisor.py /usr/bin/
RUN chmod +x /usr/bin/run_tester_node_at_infra.sh
RUN chmod +x /usr/bin/lpt_supervisor.py

ENTRYPOINT ["/usr/bin/lpt_supervisor.py"]

@@ -1,35 +0,0 @@
# TESTING IMAGE --------------------------------------------------------------

## NOTICE: This is a short cut build file for ubuntu users who compiles nwaku in ubuntu distro.
## This is used for faster turnaround time for testing the compiled binary.
## Prerequisites: compiled liteprotocoltester binary in build/ directory

FROM ubuntu:noble as prod

LABEL maintainer="jakub@status.im"
LABEL source="https://github.com/waku-org/nwaku"
LABEL description="Lite Protocol Tester: Waku light-client"
LABEL commit="unknown"
LABEL version="unknown"

# DevP2P, LibP2P, and JSON RPC ports
EXPOSE 30303 60000 8545

# Referenced in the binary
RUN apt-get update && apt-get install -y --no-install-recommends \
    libgcc1 \
    libpcre3 \
    libpq-dev \
    wget \
    iproute2 \
    && rm -rf /var/lib/apt/lists/*

# Fix for 'Error loading shared library libpcre.so.3: No such file or directory'
RUN ln -s /usr/lib/libpcre.so /usr/lib/libpcre.so.3

COPY build/liteprotocoltester /usr/bin/

ENTRYPOINT ["/usr/bin/liteprotocoltester"]

# # By default just show help if called without arguments
CMD ["--help"]
@@ -17,47 +17,123 @@ and multiple receivers.

Publishers fill all message payloads with information about the test message and the sender, helping the receiver side to calculate results.

## Phases of development

### Phase 1

In the first phase we aim to demonstrate the testing concept, all bundled into a docker-compose environment where we run
one service (full) node, a publisher node and a receiver node.
At this stage we can only configure the number of messages and a fixed frequency for the message pump. We do not expect message losses or any significant latency, hence the test setup is very simple.

### Further plans

- Add more configurability (randomized message sizes, usage of more content topics and support for static sharding).
- Extend collected metrics and polish reporting.
- Add test metrics to Grafana dashboard.
- Support for static sharding and auto sharding for being able to test under different conditions.
- ...

## Usage

### Phase 1
### Using lpt-runner

The Lite Protocol Tester application is built under the name `liteprotocoltester` in the apps/liteprotocoltester folder.
For ease of use, you can clone the lpt-runner repository. That will utilize the previously pushed liteprotocoltester docker image.
It is recommended to use this method for fleet testing.

Starting from the nwaku repository root:
```bash
make liteprotocoltester
cd apps/liteprotocoltester
docker compose build
git clone https://github.com/waku-org/lpt-runner.git
cd lpt-runner

# check Readme.md for more information
# edit .env file to your needs

docker compose up -d
docker compose logs -f receivernode

# navigate localhost:3033 to see the lite-protocol-tester dashboard
```

## Configure
> See more detailed examples below.

### Integration with waku-simulator!

- For convenience, integration is done in cooperation with the waku-simulator repository, but nothing is tightly coupled.
- waku-simulator must be started separately with its own configuration.
- To run waku-simulator without RLN, a separate branch currently has to be used.
- When waku-simulator is configured and up and running, the lite-protocol-tester composite docker setup can be started.

```bash

# Start waku-simulator

git clone https://github.com/waku-org/waku-simulator.git ../waku-simulator
cd ../waku-simulator
git checkout chore-integrate-liteprotocoltester

# optionally edit .env file

docker compose -f docker-compose-norln.yml up -d

# navigate localhost:30001 to see the waku-simulator dashboard

cd ../{your-repository}

make LOG_LEVEL=DEBUG liteprotocoltester

cd apps/liteprotocoltester

# optionally edit .env file

docker compose -f docker-compose-on-simularor.yml build
docker compose -f docker-compose-on-simularor.yml up -d
docker compose -f docker-compose-on-simularor.yml logs -f receivernode
```
#### Current setup

- waku-simulator is configured to run with 25 full nodes
- liteprotocoltester is configured to run with 3 publishers and 1 receiver
- liteprotocoltester is configured to run 1 lightpush service and a filter service node
- light clients are connected accordingly
- publishers will send 250 messages, one every 200 ms, with sizes between 1KiB and 120KiB
- Notice there is a configurable wait before publishing starts, as the service nodes need some time to get connected to the simulator's full nodes
- light clients will print a report on their own and the connected service node's connectivity to the network every 20 secs.

#### Test monitoring

Navigate to http://localhost:3033 to see the lite-protocol-tester dashboard.

### Run independently on a chosen waku fleet

This option is simple: just run the built liteprotocoltester binary with the run_tester_node.sh script.

Syntax:
`./run_tester_node.sh <path-to-liteprotocoltester-binary> <SENDER|RECEIVER> <service-node-address>`

How to run from your nwaku repository:
```bash
cd ../{your-repository}

make LOG_LEVEL=DEBUG liteprotocoltester

cd apps/liteprotocoltester

# optionally edit .env file

# run publisher side
./run_tester_node.sh ../../build/liteprotocoltester SENDER [chosen service node address that supports lightpush]

# or run receiver side
./run_tester_node.sh ../../build/liteprotocoltester RECEIVER [chosen service node address that supports filter service]
```

#### Recommendations

In order to run on any kind of network, it is recommended to deploy the built `liteprotocoltester` binary with the `.env` file and the `run_tester_node.sh` script to the desired machine.

Select a lightpush service node and a filter service node from the targeted network, or run your own. Note down the selected peers' peer_id.

Run a SENDER-role liteprotocoltester and a RECEIVER-role one in different terminals. Depending on the test aim, you may want to redirect the output to a file.

> RECEIVER side will periodically print statistics to standard output.
## Configuration
|
||||
|
||||
### Environment variables for docker compose runs
|
||||
|
||||

| Variable | Description | Default |
| ---: | :--- | :--- |
| NUM_MESSAGES | Number of messages to publish, 0 means infinite | 120 |
| MESSAGE_INTERVAL_MILLIS | Interval between messages in milliseconds | 1000 |
| SHARD | Shard used for testing | 0 |
| CONTENT_TOPIC | content_topic for testing | /tester/1/light-pubsub-example/proto |
| CLUSTER_ID | cluster_id of the network | 16 |
| START_PUBLISHING_AFTER_SECS | Delay in seconds before publishing starts, to let the service node connect | 5 |
| MIN_MESSAGE_SIZE | Minimum message size | 1KiB |
| MAX_MESSAGE_SIZE | Maximum message size | 120KiB |
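
For reference, a minimal `.env` sketch using the variables above (values are illustrative only):

```bash
# .env: illustrative values; adjust to the targeted network
NUM_MESSAGES=120
MESSAGE_INTERVAL_MILLIS=1000
SHARD=0
CONTENT_TOPIC=/tester/1/light-pubsub-example/proto
CLUSTER_ID=16
START_PUBLISHING_AFTER_SECS=5
MIN_MESSAGE_SIZE=1Kb
MAX_MESSAGE_SIZE=120Kb
```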

### Lite Protocol Tester application cli options

| Option | Description | Default |
| :--- | :--- | :--- |
| --test_func | Selects PUBLISHER or RECEIVER mode | RECEIVER |
| --service-node | Address of the service node to use for lightpush and/or filter service | - |
| --bootstrap-node | Address of the fleet's bootstrap node, used to pick a service peer randomly from the network. `--service-node` takes precedence over this | - |
| --num-messages | Number of messages to publish | 120 |
| --message-interval | Interval between messages in milliseconds | 1000 |
| --pubsub-topic | pubsub_topic used for testing | /waku/2/rs/0/0 |
| --min-message-size | Minimum message size | 1KiB |
| --max-message-size | Maximum message size | 120KiB |
| --start-publishing-after | Delay in seconds before publishing starts, to let the service node connect | 5 |
| --content_topic | content_topic for testing | /tester/1/light-pubsub-example/proto |
| --cluster-id | Cluster id for the test | 0 |
| --config-file | TOML configuration file to fine-tune the light waku node<br>Note that some configurations (full node services) are not taken into account | - |
| --rest-allow-origin | For convenience, REST configuration can be done here | * |
| --log-level | Log level for the application | DEBUG |
| --log-format | Logging output format (TEXT or JSON) | TEXT |
| --metrics-port | Metrics scrape port | 8003 |
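
For illustration, a possible SENDER invocation combining the options above (the service node address is a placeholder):

```bash
./build/liteprotocoltester \
  --test_func=SENDER \
  --service-node="<multiaddress-or-ENR-of-lightpush-service-node>" \
  --num-messages=100 \
  --message-interval=1000 \
  --min-message-size=1KiB \
  --max-message-size=120KiB \
  --cluster-id=16 \
  --log-level=INFO
```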

### Specifying peer addresses

Service node or bootstrap addresses can be specified in multiaddress or ENR form.
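
For illustration, both of the following forms are accepted for `--service-node` and `--bootstrap-node` (the multiaddress below appears in the examples later in this document; the ENR is a placeholder):

```bash
# multiaddress form
--service-node="/dns4/node-01.do-ams3.waku.sandbox.status.im/tcp/30303/p2p/16Uiu2HAmNaeL4p3WEYzC9mgXBmBWSgWjPHRvatZTXnp8Jgv3iKsb"

# ENR form
--service-node="<ENR-string-of-the-chosen-node>"
```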

### Using bootstrap nodes

There are multiple benefits to using bootstrap nodes. With them, liteprotocoltester will use the Peer Exchange protocol to discover peers from the network that are capable of serving as service peers for testing. Additionally, it will test-dial them to verify their connectivity; the results are reported in the logs and in dashboard metrics.

By using a bootstrap node and peer exchange discovery, liteprotocoltester can also simulate switching service peers on failure. There is a built-in threshold of service peer failures (3) after which the service peer is switched during the test, and at most 10 peer-switch attempts are made before the test is declared failed and quits.

These service peer failures are reported, thus extending the network reliability measures.
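
As a sketch, a bootstrap-driven RECEIVER run using the CLI options above (the bootstrap address is a placeholder):

```bash
./build/liteprotocoltester \
  --test_func=RECEIVER \
  --bootstrap-node="<multiaddress-or-ENR-of-bootstrap-node>" \
  --cluster-id=16
```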

### Building docker image

The easiest way to build the docker image is to use the provided Makefile target.

```bash
cd <your-repository>
make docker-liteprotocoltester
```

This will build liteprotocoltester from the ground up and create a docker image with the binary copied into it, under the image name and tag `wakuorg/liteprotocoltester:latest`.

#### Building public image

If you want to push the image to a public registry, you can use the Jenkins job to do so.
The job is available at https://ci.status.im/job/waku/job/liteprotocoltester/job/build-liteprotocoltester-image

#### Building and deployment for infra testing

For specific and continuous testing purposes we have a deployment of the `liteprotocoltester` test suite on our infra appliances.
This has its own configuration, constraints, and requirements. To ease this job, the image shall be built and pushed with the `deploy` tag.
This can be done by the Jenkins job mentioned above,

or manually by:
```bash
cd <your-repository>
make DOCKER_LPT_TAG=deploy docker-liteprotocoltester
```

The image created this way differs from the one under any other tag: it is prepared to run a preconfigured test suite continuously.
It also lacks the prometheus metrics scraping endpoint and grafana, so it is not recommended for general testing.

#### Manually building for docker compose runs on simulator or standalone

Please note that currently, to ease testing and development, the tester application docker image is based on ubuntu and uses an externally pre-built binary of `liteprotocoltester`.
This speeds up image creation. Another docker build file is provided for a proper build of a bundled image.

> `Dockerfile.liteprotocoltester` will create an ubuntu-based image with the binary copied from the build directory.

> `Dockerfile.liteprotocoltester.compile` will create an ubuntu-based image completely compiled from source. This can be slow.

#### Creating standalone runner docker image

To ease working with lite-protocol-tester, a docker image can be built.
With that image it is easy to run the application in a container.

> `Dockerfile.liteprotocoltester` will create an ubuntu image with the binary copied from the build directory. You need to pre-build the application.

Here is how to build and run:
```bash
cd <your-repository>
make liteprotocoltester

cd apps/liteprotocoltester
docker build -t liteprotocoltester:latest -f Dockerfile.liteprotocoltester ../..

# alternatively you can push it to a registry

# edit and adjust the .env file to your needs and to the network configuration

docker run --env-file .env liteprotocoltester:latest RECEIVER <service-node-peer-address>

docker run --env-file .env liteprotocoltester:latest SENDER <service-node-peer-address>
```

#### Run test with auto service peer selection from a fleet using bootstrap node

```bash
docker run --env-file .env liteprotocoltester:latest RECEIVER <bootstrap-node-peer-address> BOOTSTRAP

docker run --env-file .env liteprotocoltester:latest SENDER <bootstrap-node-peer-address> BOOTSTRAP
```

> Notice that an official image is also available at harbor.status.im/wakuorg/liteprotocoltester:latest

## Examples

### Bootstrap or Service node selection

The easiest way to get proper bootstrap nodes for the tests is from the https://fleets.status.im page.
Choose the fleets on which you would like to run the tests.

> Please note that not all of them are configured to support the Peer Exchange protocol; those cannot be used as bootstrap nodes for `liteprotocoltester`.

### Environment variables

You do not necessarily need to use a .env file, although it can be more convenient.
You can always override all or part of the environment variables defined in the .env file.
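
For example, individual variables can be overridden inline for a single run (values are illustrative):

```bash
MIN_MESSAGE_SIZE=10Kb MAX_MESSAGE_SIZE=50Kb docker compose up -d
```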

### Run standalone

Example of running the liteprotocoltester in standalone mode on the status.staging network.
Testing includes using bootstrap nodes to gather service peers from the network via the Peer Exchange protocol.
Both parties will test-dial all the peers retrieved with the corresponding protocol.
The sender will start publishing messages after 60 seconds, sending 200 messages with a 1 second delay between them.
Message size will be between 15KiB and 145KiB.
Cluster id and pubsub-topic must be set accurately according to the network configuration.

The example shows that either multiaddress or ENR form is accepted.

```bash
export START_PUBLISHING_AFTER_SECS=60
export NUM_MESSAGES=200
export MESSAGE_INTERVAL_MILLIS=1000
export MIN_MESSAGE_SIZE=15Kb
export MAX_MESSAGE_SIZE=145Kb
export SHARD=32
export CONTENT_TOPIC=/tester/2/light-pubsub-test/fleet
export CLUSTER_ID=16

docker run harbor.status.im/wakuorg/liteprotocoltester:latest RECEIVER /dns4/boot-01.do-ams3.status.staging.status.im/tcp/30303/p2p/16Uiu2HAmQE7FXQc6iZHdBzYfw3qCSDa9dLc1wsBJKoP4aZvztq2d BOOTSTRAP

# in a different terminal session, repeat the exports and run the other party of the test.
docker run harbor.status.im/wakuorg/liteprotocoltester:latest SENDER enr:-QEiuECJPv2vL00Jp5sTEMAFyW7qXkK2cFgphlU_G8-FJuJqoW_D5aWIy3ylGdv2K8DkiG7PWgng4Ql_VI7Qc2RhBdwfAYJpZIJ2NIJpcIQvTKi6im11bHRpYWRkcnO4cgA2NjFib290LTAxLmFjLWNuLWhvbmdrb25nLWMuc3RhdHVzLnN0YWdpbmcuc3RhdHVzLmltBnZfADg2MWJvb3QtMDEuYWMtY24taG9uZ2tvbmctYy5zdGF0dXMuc3RhZ2luZy5zdGF0dXMuaW0GAbveA4Jyc40AEAUAAQAgAEAAgAEAiXNlY3AyNTZrMaEDkbgV7oqPNmFtX5FzSPi9WH8kkmrPB1R3n9xRXge91M-DdGNwgnZfg3VkcIIjKIV3YWt1Mg0 BOOTSTRAP
```

### Use of lpt-runner

Another method is to use the [lpt-runner repository](https://github.com/waku-org/lpt-runner/tree/master).
It extends testing with a grafana dashboard and eases the test setup.
Please read the corresponding [README](https://github.com/waku-org/lpt-runner/blob/master/README.md) there as well.

In this example we will run a similar test as above, but with 3 instances of publisher nodes and 1 receiver node.
This test uses the waku.sandbox fleet, which is connected to TWN. This implies lower message rates due to the RLN rate limitation.
Also leave a gap of 120 seconds before publishing starts, to let the receiver side fully finish peer test-dialing.
For the TWN network it is always wise to use bootstrap nodes with Peer Exchange support.

> Theoretically we could use the same bootstrap nodes for both parties, but it is recommended to use different ones to simulate different network edges, thus getting more meaningful results.

```bash
git clone https://github.com/waku-org/lpt-runner.git
cd lpt-runner

export NUM_PUBLISHER_NODES=3
export NUM_RECEIVER_NODES=1
export START_PUBLISHING_AFTER_SECS=120
export NUM_MESSAGES=300
export MESSAGE_INTERVAL_MILLIS=7000
export MIN_MESSAGE_SIZE=15Kb
export MAX_MESSAGE_SIZE=145Kb
export SHARD=4
export CONTENT_TOPIC=/tester/2/light-pubsub-test/twn
export CLUSTER_ID=1

export FILTER_BOOTSTRAP=/dns4/node-01.ac-cn-hongkong-c.waku.sandbox.status.im/tcp/30303/p2p/16Uiu2HAmQYiojgZ8APsh9wqbWNyCstVhnp9gbeNrxSEQnLJchC92
export LIGHTPUSH_BOOTSTRAP=/dns4/node-01.do-ams3.waku.sandbox.status.im/tcp/30303/p2p/16Uiu2HAmNaeL4p3WEYzC9mgXBmBWSgWjPHRvatZTXnp8Jgv3iKsb

docker compose up -d

# we can check logs from one or all SENDERs
docker compose logs -f --index 1 publishernode

# for checking receiver side performance
docker compose logs -f receivernode

# when the test is completed
docker compose down
```

For the dashboard, navigate to http://localhost:3033

`apps/liteprotocoltester/diagnose_connections.nim` (new file, 62 lines):

```nim
when (NimMajor, NimMinor) < (1, 4):
  {.push raises: [Defect].}
else:
  {.push raises: [].}

import
  std/[options, net, strformat],
  chronicles,
  chronos,
  metrics,
  libbacktrace,
  libp2p/crypto/crypto,
  confutils,
  libp2p/wire

import
  tools/confutils/cli_args,
  waku/[
    node/peer_manager,
    waku_lightpush/common,
    waku_relay,
    waku_filter_v2,
    waku_peer_exchange/protocol,
    waku_core/multiaddrstr,
    waku_enr/capabilities,
  ]

logScope:
  topics = "diagnose connections"

proc allPeers(pm: PeerManager): string =
  var allStr: string = ""
  for idx, peer in pm.switch.peerStore.peers():
    allStr.add(
      " " & $idx & ". | " & constructMultiaddrStr(peer) & " | agent: " &
        peer.getAgent() & " | protos: " & $peer.protocols & " | caps: " &
        $peer.enr.map(getCapabilities) & "\n"
    )
  return allStr

proc logSelfPeers*(pm: PeerManager) =
  let selfLighpushPeers = pm.switch.peerStore.getPeersByProtocol(WakuLightPushCodec)
  let selfRelayPeers = pm.switch.peerStore.getPeersByProtocol(WakuRelayCodec)
  let selfFilterPeers = pm.switch.peerStore.getPeersByProtocol(WakuFilterSubscribeCodec)
  let selfPxPeers = pm.switch.peerStore.getPeersByProtocol(WakuPeerExchangeCodec)

  let printable = catch:
    """*------------------------------------------------------------------------------------------*
| Self ({constructMultiaddrStr(pm.switch.peerInfo)}) peers:
*------------------------------------------------------------------------------------------*
| Lightpush peers({selfLighpushPeers.len()}): ${selfLighpushPeers}
*------------------------------------------------------------------------------------------*
| Filter peers({selfFilterPeers.len()}): ${selfFilterPeers}
*------------------------------------------------------------------------------------------*
| Relay peers({selfRelayPeers.len()}): ${selfRelayPeers}
*------------------------------------------------------------------------------------------*
| PX peers({selfPxPeers.len()}): ${selfPxPeers}
*------------------------------------------------------------------------------------------*
| All peers with protocol support:
{allPeers(pm)}
*------------------------------------------------------------------------------------------*""".fmt()

  echo printable.valueOr("Error while printing statistics: " & error.msg)
```

`apps/liteprotocoltester/docker-compose-on-simularor.yml` (new file, 227 lines):

```yaml
version: "3.7"
x-logging: &logging
  logging:
    driver: json-file
    options:
      max-size: 1000m

# Environment variable definitions
x-eth-client-address: &eth_client_address ${ETH_CLIENT_ADDRESS:-} # Add your ETH_CLIENT_ADDRESS after the "-"

x-rln-environment: &rln_env
  RLN_RELAY_CONTRACT_ADDRESS: ${RLN_RELAY_CONTRACT_ADDRESS:-0xF471d71E9b1455bBF4b85d475afb9BB0954A29c4}
  RLN_RELAY_CRED_PATH: ${RLN_RELAY_CRED_PATH:-} # Optional: Add your RLN_RELAY_CRED_PATH after the "-"
  RLN_RELAY_CRED_PASSWORD: ${RLN_RELAY_CRED_PASSWORD:-} # Optional: Add your RLN_RELAY_CRED_PASSWORD after the "-"

x-test-running-conditions: &test_running_conditions
  NUM_MESSAGES: ${NUM_MESSAGES:-120}
  MESSAGE_INTERVAL_MILLIS: "${MESSAGE_INTERVAL_MILLIS:-1000}"
  SHARD: ${SHARD:-0}
  CONTENT_TOPIC: ${CONTENT_TOPIC:-/tester/2/light-pubsub-test/wakusim}
  CLUSTER_ID: ${CLUSTER_ID:-66}
  MIN_MESSAGE_SIZE: ${MIN_MESSAGE_SIZE:-1Kb}
  MAX_MESSAGE_SIZE: ${MAX_MESSAGE_SIZE:-150Kb}
  START_PUBLISHING_AFTER_SECS: ${START_PUBLISHING_AFTER_SECS:-5} # seconds


# Services definitions
services:
  lightpush-service:
    image: ${NWAKU_IMAGE:-harbor.status.im/wakuorg/nwaku:latest-release}
    # ports:
    #   - 30304:30304/tcp
    #   - 30304:30304/udp
    #   - 9005:9005/udp
    #   - 127.0.0.1:8003:8003
    #   - 80:80 #Let's Encrypt
    #   - 8000:8000/tcp #WSS
    #   - 127.0.0.1:8645:8645
    <<:
      - *logging
    environment:
      DOMAIN: ${DOMAIN}
      RLN_RELAY_CRED_PASSWORD: "${RLN_RELAY_CRED_PASSWORD}"
      ETH_CLIENT_ADDRESS: *eth_client_address
      EXTRA_ARGS: ${EXTRA_ARGS}
      <<:
        - *rln_env
        - *test_running_conditions
    volumes:
      - ./run_service_node.sh:/opt/run_service_node.sh:Z
      - ${CERTS_DIR:-./certs}:/etc/letsencrypt/:Z
      - ./rln_tree:/etc/rln_tree/:Z
      - ./keystore:/keystore:Z
    entrypoint: sh
    command:
      - /opt/run_service_node.sh
      - LIGHTPUSH
    networks:
      - waku-simulator_simulation

  publishernode:
    image: waku.liteprotocoltester:latest
    build:
      context: ../..
      dockerfile: ./apps/liteprotocoltester/Dockerfile.liteprotocoltester
    deploy:
      replicas: ${NUM_PUBLISHER_NODES:-3}
    # ports:
    #   - 30304:30304/tcp
    #   - 30304:30304/udp
    #   - 9005:9005/udp
    #   - 127.0.0.1:8003:8003
    #   - 80:80 #Let's Encrypt
    #   - 8000:8000/tcp #WSS
    #   - 127.0.0.1:8646:8646
    <<:
      - *logging
    environment:
      DOMAIN: ${DOMAIN}
      RLN_RELAY_CRED_PASSWORD: "${RLN_RELAY_CRED_PASSWORD}"
      ETH_CLIENT_ADDRESS: *eth_client_address
      EXTRA_ARGS: ${EXTRA_ARGS}
      <<:
        - *rln_env
        - *test_running_conditions
    volumes:
      - ${CERTS_DIR:-./certs}:/etc/letsencrypt/:Z
      - ./rln_tree:/etc/rln_tree/:Z
      - ./keystore:/keystore:Z
    entrypoint: sh
    command:
      - /usr/bin/run_tester_node.sh
      - /usr/bin/liteprotocoltester
      - SENDER
      - waku-sim
    depends_on:
      - lightpush-service
    configs:
      - source: cfg_tester_node.toml
        target: config.toml
    networks:
      - waku-simulator_simulation

  filter-service:
    image: ${NWAKU_IMAGE:-harbor.status.im/wakuorg/nwaku:latest-release}
    # ports:
    #   - 30304:30305/tcp
    #   - 30304:30305/udp
    #   - 9005:9005/udp
    #   - 127.0.0.1:8003:8003
    #   - 80:80 #Let's Encrypt
    #   - 8000:8000/tcp #WSS
    #   - 127.0.0.1:8645:8645
    <<:
      - *logging
    environment:
      DOMAIN: ${DOMAIN}
      RLN_RELAY_CRED_PASSWORD: "${RLN_RELAY_CRED_PASSWORD}"
      ETH_CLIENT_ADDRESS: *eth_client_address
      EXTRA_ARGS: ${EXTRA_ARGS}
      <<:
        - *rln_env
        - *test_running_conditions
    volumes:
      - ./run_service_node.sh:/opt/run_service_node.sh:Z
      - ${CERTS_DIR:-./certs}:/etc/letsencrypt/:Z
      - ./rln_tree:/etc/rln_tree/:Z
      - ./keystore:/keystore:Z
    entrypoint: sh
    command:
      - /opt/run_service_node.sh
      - FILTER
    networks:
      - waku-simulator_simulation


  receivernode:
    image: waku.liteprotocoltester:latest
    build:
      context: ../..
      dockerfile: ./apps/liteprotocoltester/Dockerfile.liteprotocoltester
    deploy:
      replicas: ${NUM_RECEIVER_NODES:-1}
    # ports:
    #   - 30304:30304/tcp
    #   - 30304:30304/udp
    #   - 9005:9005/udp
    #   - 127.0.0.1:8003:8003
    #   - 80:80 #Let's Encrypt
    #   - 8000:8000/tcp #WSS
    #   - 127.0.0.1:8647:8647
    <<:
      - *logging
    environment:
      DOMAIN: ${DOMAIN}
      RLN_RELAY_CRED_PASSWORD: "${RLN_RELAY_CRED_PASSWORD}"
      ETH_CLIENT_ADDRESS: *eth_client_address
      EXTRA_ARGS: ${EXTRA_ARGS}
      <<:
        - *rln_env
        - *test_running_conditions
    volumes:
      - ${CERTS_DIR:-./certs}:/etc/letsencrypt/:Z
      - ./rln_tree:/etc/rln_tree/:Z
      - ./keystore:/keystore:Z
    entrypoint: sh
    command:
      - /usr/bin/run_tester_node.sh
      - /usr/bin/liteprotocoltester
      - RECEIVER
      - waku-sim
    depends_on:
      - filter-service
      - publishernode
    configs:
      - source: cfg_tester_node.toml
        target: config.toml
    networks:
      - waku-simulator_simulation

  # We have prometheus and grafana defined in waku-simulator already
  prometheus:
    image: docker.io/prom/prometheus:latest
    volumes:
      - ./monitoring/prometheus-config.yml:/etc/prometheus/prometheus.yml:Z
    command:
      - --config.file=/etc/prometheus/prometheus.yml
      - --web.listen-address=:9099
    # ports:
    #   - 127.0.0.1:9090:9090
    restart: on-failure:5
    depends_on:
      - filter-service
      - lightpush-service
      - publishernode
      - receivernode
    networks:
      - waku-simulator_simulation

  grafana:
    image: docker.io/grafana/grafana:latest
    env_file:
      - ./monitoring/configuration/grafana-plugins.env
    volumes:
      - ./monitoring/configuration/grafana.ini:/etc/grafana/grafana.ini:Z
      - ./monitoring/configuration/dashboards.yaml:/etc/grafana/provisioning/dashboards/dashboards.yaml:Z
      - ./monitoring/configuration/datasources.yaml:/etc/grafana/provisioning/datasources/datasources.yaml:Z
      - ./monitoring/configuration/dashboards:/var/lib/grafana/dashboards/:Z
      - ./monitoring/configuration/customizations/custom-logo.svg:/usr/share/grafana/public/img/grafana_icon.svg:Z
      - ./monitoring/configuration/customizations/custom-logo.svg:/usr/share/grafana/public/img/grafana_typelogo.svg:Z
      - ./monitoring/configuration/customizations/custom-logo.png:/usr/share/grafana/public/img/fav32.png:Z
    ports:
      - 0.0.0.0:3033:3033
    restart: on-failure:5
    depends_on:
      - prometheus
    networks:
      - waku-simulator_simulation

configs:
  cfg_tester_node.toml:
    content: |
      max-connections = 100

networks:
  waku-simulator_simulation:
    external: true
```

Modified docker compose file (old and new lines appear adjacent):

```diff
@@ -9,20 +9,28 @@ x-logging: &logging
x-eth-client-address: &eth_client_address ${ETH_CLIENT_ADDRESS:-} # Add your ETH_CLIENT_ADDRESS after the "-"

x-rln-environment: &rln_env
  RLN_RELAY_CONTRACT_ADDRESS: ${RLN_RELAY_CONTRACT_ADDRESS:-0xF471d71E9b1455bBF4b85d475afb9BB0954A29c4}
  RLN_RELAY_CONTRACT_ADDRESS: ${RLN_RELAY_CONTRACT_ADDRESS:-0xB9cd878C90E49F797B4431fBF4fb333108CB90e6}
  RLN_RELAY_CRED_PATH: ${RLN_RELAY_CRED_PATH:-} # Optional: Add your RLN_RELAY_CRED_PATH after the "-"
  RLN_RELAY_CRED_PASSWORD: ${RLN_RELAY_CRED_PASSWORD:-} # Optional: Add your RLN_RELAY_CRED_PASSWORD after the "-"

x-test-running-conditions: &test_running_conditions
  NUM_MESSAGES: ${NUM_MESSAGES:-120}
  DELAY_MESSAGES: "${DELAY_MESSAGES:-1000}"
  PUBSUB: ${PUBSUB:-}
  CONTENT_TOPIC: ${CONTENT_TOPIC:-}
  MESSAGE_INTERVAL_MILLIS: "${MESSAGE_INTERVAL_MILLIS:-1000}"
  SHARD: ${SHARD:-0}
  CONTENT_TOPIC: ${CONTENT_TOPIC:-/tester/2/light-pubsub-test/wakusim}
  CLUSTER_ID: ${CLUSTER_ID:-66}
  MIN_MESSAGE_SIZE: ${MIN_MESSAGE_SIZE:-1Kb}
  MAX_MESSAGE_SIZE: ${MAX_MESSAGE_SIZE:-150Kb}
  START_PUBLISHING_AFTER_SECS: ${START_PUBLISHING_AFTER_SECS:-5} # seconds
  STANDALONE: ${STANDALONE:-1}
  RECEIVER_METRICS_PORT: 8003
  PUBLISHER_METRICS_PORT: 8003


# Services definitions
services:
  servicenode:
    image: ${NWAKU_IMAGE:-harbor.status.im/wakuorg/nwaku:latest}
    image: ${NWAKU_IMAGE:-harbor.status.im/wakuorg/nwaku:latest-release}
    ports:
      - 30304:30304/tcp
      - 30304:30304/udp
@@ -40,6 +48,7 @@ services:
      EXTRA_ARGS: ${EXTRA_ARGS}
      <<:
        - *rln_env
        - *test_running_conditions
    volumes:
      - ./run_service_node.sh:/opt/run_service_node.sh:Z
      - ${CERTS_DIR:-./certs}:/etc/letsencrypt/:Z
@@ -53,7 +62,7 @@ services:
    image: waku.liteprotocoltester:latest
    build:
      context: ../..
      dockerfile: ./apps/liteprotocoltester/Dockerfile.liteprotocoltester.copy
      dockerfile: ./apps/liteprotocoltester/Dockerfile.liteprotocoltester
    ports:
      # - 30304:30304/tcp
      # - 30304:30304/udp
@@ -73,14 +82,15 @@ services:
        - *rln_env
        - *test_running_conditions
    volumes:
      - ./run_tester_node.sh:/opt/run_tester_node.sh:Z
      - ${CERTS_DIR:-./certs}:/etc/letsencrypt/:Z
      - ./rln_tree:/etc/rln_tree/:Z
      - ./keystore:/keystore:Z
    entrypoint: sh
    command:
      - /opt/run_tester_node.sh
      - /usr/bin/run_tester_node.sh
      - /usr/bin/liteprotocoltester
      - SENDER
      - servicenode
    depends_on:
      - servicenode
    configs:
@@ -91,7 +101,7 @@ services:
    image: waku.liteprotocoltester:latest
    build:
      context: ../..
      dockerfile: ./apps/liteprotocoltester/Dockerfile.liteprotocoltester.copy
      dockerfile: ./apps/liteprotocoltester/Dockerfile.liteprotocoltester
    ports:
      # - 30304:30304/tcp
      # - 30304:30304/udp
@@ -117,8 +127,10 @@ services:
      - ./keystore:/keystore:Z
    entrypoint: sh
    command:
      - /opt/run_tester_node.sh
      - /usr/bin/run_tester_node.sh
      - /usr/bin/liteprotocoltester
      - RECEIVER
      - servicenode
    depends_on:
      - servicenode
      - publishernode
```

Deleted file (117 lines):

```nim
## Example showing how a resource restricted client may
## subscribe to messages without relay

import
  std/options,
  system/ansi_c,
  chronicles,
  chronos,
  chronos/timer as chtimer,
  stew/byteutils,
  results,
  serialization,
  json_serialization as js,
  times
import
  waku/[common/logging, node/peer_manager, waku_node, waku_core, waku_filter_v2/client],
  ./tester_config,
  ./tester_message,
  ./statistics

proc unsubscribe(
    wakuNode: WakuNode,
    filterPeer: RemotePeerInfo,
    filterPubsubTopic: PubsubTopic,
    filterContentTopic: ContentTopic,
) {.async.} =
  notice "unsubscribing from filter"
  let unsubscribeRes = await wakuNode.wakuFilterClient.unsubscribe(
    filterPeer, filterPubsubTopic, @[filterContentTopic]
  )
  if unsubscribeRes.isErr:
    notice "unsubscribe request failed", err = unsubscribeRes.error
  else:
    notice "unsubscribe request successful"

proc maintainSubscription(
    wakuNode: WakuNode,
    filterPeer: RemotePeerInfo,
    filterPubsubTopic: PubsubTopic,
    filterContentTopic: ContentTopic,
) {.async.} =
  while true:
    trace "maintaining subscription"
    # First use filter-ping to check if we have an active subscription
    let pingRes = await wakuNode.wakuFilterClient.ping(filterPeer)
    if pingRes.isErr():
      # No subscription found. Let's subscribe.
      trace "no subscription found. Sending subscribe request"

      let subscribeRes = await wakuNode.filterSubscribe(
        some(filterPubsubTopic), filterContentTopic, filterPeer
      )

      if subscribeRes.isErr():
        trace "subscribe request failed. Quitting.", err = subscribeRes.error
        break
      else:
        trace "subscribe request successful."
    else:
      trace "subscription found."

    await sleepAsync(chtimer.seconds(60)) # Subscription maintenance interval

proc setupAndSubscribe*(wakuNode: WakuNode, conf: LiteProtocolTesterConf) =
  if isNil(wakuNode.wakuFilterClient):
    error "WakuFilterClient not initialized"
    return

  info "Start receiving messages to service node using lightpush",
    serviceNode = conf.serviceNode

  var stats: PerPeerStatistics

  let remotePeer = parsePeerInfo(conf.serviceNode).valueOr:
    error "Couldn't parse the peer info properly", error = error
    return

  let pushHandler = proc(pubsubTopic: PubsubTopic, message: WakuMessage) {.async.} =
    let payloadStr = string.fromBytes(message.payload)
    let testerMessage = js.Json.decode(payloadStr, ProtocolTesterMessage)

    stats.addMessage(testerMessage.sender, testerMessage)

    trace "message received",
      index = testerMessage.index,
      count = testerMessage.count,
      startedAt = $testerMessage.startedAt,
      sinceStart = $testerMessage.sinceStart,
      sincePrev = $testerMessage.sincePrev

  wakuNode.wakuFilterClient.registerPushHandler(pushHandler)

  let interval = millis(20000)
  var printStats: CallbackFunc

  printStats = CallbackFunc(
    proc(udata: pointer) {.gcsafe.} =
      stats.echoStats()

      if stats.checkIfAllMessagesReceived():
        waitFor unsubscribe(
          wakuNode, remotePeer, conf.pubsubTopics[0], conf.contentTopics[0]
        )
        info "All messages received. Exiting."

        ## for graceful shutdown through signal hooks
        discard c_raise(ansi_c.SIGTERM)
      else:
        discard setTimer(Moment.fromNow(interval), printStats)
  )

  discard setTimer(Moment.fromNow(interval), printStats)

  # Start maintaining subscription
  asyncSpawn maintainSubscription(
    wakuNode, remotePeer, conf.pubsubTopics[0], conf.contentTopics[0]
  )
```

`apps/liteprotocoltester/infra.env` (new file, 11 lines):

```bash
TEST_INTERVAL_MINUTES=180
START_PUBLISHING_AFTER_SECS=120
NUM_MESSAGES=300
MESSAGE_INTERVAL_MILLIS=1000
MIN_MESSAGE_SIZE=15Kb
MAX_MESSAGE_SIZE=145Kb
SHARD=32
CONTENT_TOPIC=/tester/2/light-pubsub-test-at-infra/status-prod
CLUSTER_ID=16
LIGHTPUSH_BOOTSTRAP=enr:-QEKuED9AJm2HGgrRpVaJY2nj68ao_QiPeUT43sK-aRM7sMJ6R4G11OSDOwnvVacgN1sTw-K7soC5dzHDFZgZkHU0u-XAYJpZIJ2NIJpcISnYxMvim11bHRpYWRkcnO4WgAqNiVib290LTAxLmRvLWFtczMuc3RhdHVzLnByb2Quc3RhdHVzLmltBnZfACw2JWJvb3QtMDEuZG8tYW1zMy5zdGF0dXMucHJvZC5zdGF0dXMuaW0GAbveA4Jyc40AEAUAAQAgAEAAgAEAiXNlY3AyNTZrMaEC3rRtFQSgc24uWewzXaxTY8hDAHB8sgnxr9k8Rjb5GeSDdGNwgnZfg3VkcIIjKIV3YWt1Mg0
FILTER_BOOTSTRAP=enr:-QEcuED7ww5vo2rKc1pyBp7fubBUH-8STHEZHo7InjVjLblEVyDGkjdTI9VdqmYQOn95vuQH-Htku17WSTzEufx-Wg4mAYJpZIJ2NIJpcIQihw1Xim11bHRpYWRkcnO4bAAzNi5ib290LTAxLmdjLXVzLWNlbnRyYWwxLWEuc3RhdHVzLnByb2Quc3RhdHVzLmltBnZfADU2LmJvb3QtMDEuZ2MtdXMtY2VudHJhbDEtYS5zdGF0dXMucHJvZC5zdGF0dXMuaW0GAbveA4Jyc40AEAUAAQAgAEAAgAEAiXNlY3AyNTZrMaECxjqgDQ0WyRSOilYU32DA5k_XNlDis3m1VdXkK9xM6kODdGNwgnZfg3VkcIIjKIV3YWt1Mg0
```

`apps/liteprotocoltester/legacy_publisher.nim` (new file, 24 lines):

```nim
import chronos, results, options
import waku/[waku_node, waku_core]
import publisher_base

type LegacyPublisher* = ref object of PublisherBase

proc new*(T: type LegacyPublisher, wakuNode: WakuNode): T =
  if isNil(wakuNode.wakuLegacyLightpushClient):
    wakuNode.mountLegacyLightPushClient()

  return LegacyPublisher(wakuNode: wakuNode)

method send*(
    self: LegacyPublisher,
    topic: PubsubTopic,
    message: WakuMessage,
    servicePeer: RemotePeerInfo,
): Future[Result[void, string]] {.async.} =
  # on error, return the original error description; the text is used to distinguish error types in metrics.
  discard (
    await self.wakuNode.legacyLightpushPublish(some(topic), message, servicePeer)
  ).valueOr:
    return err(error)
  return ok()
```

Deleted file (108 lines):

```nim
import
  std/strformat,
  system/ansi_c,
  chronicles,
  chronos,
  stew/byteutils,
  results,
  json_serialization as js
import
  waku/[common/logging, waku_node, node/peer_manager, waku_core, waku_lightpush/client],
  ./tester_config,
  ./tester_message

proc prepareMessage(
    sender: string,
    messageIndex, numMessages: uint32,
    startedAt: TimeStamp,
    prevMessageAt: var Timestamp,
    contentTopic: ContentTopic,
): WakuMessage =
  let current = getNowInNanosecondTime()
  let payload = ProtocolTesterMessage(
    sender: sender,
    index: messageIndex,
    count: numMessages,
    startedAt: startedAt,
    sinceStart: current - startedAt,
    sincePrev: current - prevMessageAt,
  )

  prevMessageAt = current

  let text = js.Json.encode(payload)
  let message = WakuMessage(
    payload: toBytes(text), # content of the message
    contentTopic: contentTopic, # content topic to publish to
    ephemeral: true, # tell store nodes to not store it
    timestamp: current, # current timestamp
  )

  return message

proc publishMessages(
    wakuNode: WakuNode,
    lightpushPubsubTopic: PubsubTopic,
    lightpushContentTopic: ContentTopic,
    numMessages: uint32,
    delayMessages: Duration,
) {.async.} =
  let startedAt = getNowInNanosecondTime()
  var prevMessageAt = startedAt
  var failedToSendCount: uint32 = 0

  let selfPeerId = $wakuNode.switch.peerInfo.peerId

  var messagesSent: uint32 = 1
  while numMessages >= messagesSent:
    let message = prepareMessage(
      selfPeerId, messagesSent, numMessages, startedAt, prevMessageAt,
      lightpushContentTopic,
    )
    let wlpRes = await wakuNode.lightpushPublish(some(lightpushPubsubTopic), message)

    if wlpRes.isOk():
      info "published message using lightpush",
        index = messagesSent, count = numMessages
    else:
      error "failed to publish message using lightpush", err = wlpRes.error
      inc(failedToSendCount)

    await sleepAsync(delayMessages) # Publish every 5 seconds
    inc(messagesSent)

  let report = catch:
    """*----------------------------------------*
| Expected | Sent | Failed |
|{numMessages:>11} |{messagesSent-failedToSendCount-1:>11} |{failedToSendCount:>11} |
*----------------------------------------*""".fmt()

  if report.isErr:
    echo "Error while printing statistics"
  else:
    echo report.get()

  discard c_raise(ansi_c.SIGTERM)

proc setupAndPublish*(wakuNode: WakuNode, conf: LiteProtocolTesterConf) =
  if isNil(wakuNode.wakuLightpushClient):
    error "WakuFilterClient not initialized"
    return

  # give some time to receiver side to set up
  # TODO: this may be done in a more sophisticated way, though.
  let waitTillStartTesting = 5.seconds

  info "Sending test messages in", wait = waitTillStartTesting
  waitFor sleepAsync(waitTillStartTesting)

  info "Start sending messages to service node using lightpush"

  # Start maintaining subscription
  asyncSpawn publishMessages(
    wakuNode,
    conf.pubsubTopics[0],
    conf.contentTopics[0],
    conf.numMessages,
    conf.delayMessages.milliseconds,
  )
```

Modified tester main module (old and new lines appear adjacent):

```diff
@@ -11,17 +11,25 @@ import
  confutils

import
  tools/confutils/cli_args,
  waku/[
    common/enr,
    common/logging,
    factory/waku,
    factory/external_config,
    node/health_monitor,
    factory/waku as waku_factory,
    waku_node,
    node/waku_metrics,
    waku_api/rest/builder as rest_server_builder,
    node/peer_manager,
    waku_lightpush/common,
    waku_filter_v2,
    waku_peer_exchange/protocol,
    waku_core/peers,
    waku_core/multiaddrstr,
  ],
  ./tester_config,
  ./lightpush_publisher,
  ./filter_subscriber
  ./publisher,
  ./receiver,
  ./diagnose_connections,
  ./service_peer_management

logScope:
  topics = "liteprotocoltester main"
@@ -39,19 +47,16 @@ when isMainModule:
  ## 5. Start monitoring tools and external interfaces
  ## 6. Setup graceful shutdown hooks

  const versionString = "version / git commit hash: " & waku.git_version
  const versionString = "version / git commit hash: " & waku_factory.git_version

  let confRes = LiteProtocolTesterConf.load(version = versionString)
  if confRes.isErr():
    error "failure while loading the configuration", error = confRes.error
  let conf = LiteProtocolTesterConf.load(version = versionString).valueOr:
    error "failure while loading the configuration", error = error
    quit(QuitFailure)

  var conf = confRes.get()

  ## Logging setup
  logging.setupLog(conf.logLevel, conf.logFormat)

  info "Running Lite Protocol Tester node", version = waku.git_version
  info "Running Lite Protocol Tester node", version = waku_factory.git_version
  logConfig(conf)

  ## Prepare Waku configuration
@@ -59,96 +64,73 @@ when isMainModule:
  ## - override according to tester functionality
  ##

  var wakuConf: WakuNodeConf
  var wakuNodeConf: WakuNodeConf

  if conf.configFile.isSome():
    try:
      var configFile {.threadvar.}: InputFile
      configFile = conf.configFile.get()
      wakuConf = WakuNodeConf.load(
      wakuNodeConf = WakuNodeConf.load(
        version = versionString,
        printUsage = false,
        secondarySources = proc(
          wnconf: WakuNodeConf, sources: auto
        ) {.gcsafe, raises: [ConfigurationError].} =
          echo "Loading secondary configuration file into WakuNodeConf"
          sources.addConfigFile(Toml, configFile)
        ,
          sources.addConfigFile(Toml, configFile),
      )
    except CatchableError:
      error "Loading Waku configuration failed", error = getCurrentExceptionMsg()
      quit(QuitFailure)

  wakuConf.logLevel = conf.logLevel
  wakuConf.logFormat = conf.logFormat
  wakuConf.staticNodes = @[conf.serviceNode]
  wakuConf.nat = conf.nat
  wakuConf.maxConnections = 100
  wakuConf.restAddress = conf.restAddress
  wakuConf.restPort = conf.restPort
  wakuConf.restAllowOrigin = conf.restAllowOrigin
  wakuNodeConf.logLevel = conf.logLevel
  wakuNodeConf.logFormat = conf.logFormat
  wakuNodeConf.nat = conf.nat
  wakuNodeConf.maxConnections = 500
  wakuNodeConf.restAddress = conf.restAddress
  wakuNodeConf.restPort = conf.restPort
  wakuNodeConf.restAllowOrigin = conf.restAllowOrigin

  wakuConf.pubsubTopics = conf.pubsubTopics
  wakuConf.contentTopics = conf.contentTopics
  wakuConf.clusterId = conf.clusterId
  wakuNodeConf.dnsAddrsNameServers =
    @[parseIpAddress("8.8.8.8"), parseIpAddress("1.1.1.1")]

  wakuNodeConf.shards = @[conf.shard]
  wakuNodeConf.contentTopics = conf.contentTopics
  wakuNodeConf.clusterId = conf.clusterId
  ## TODO: Depending on the tester needs we might extend here with shards, clusterId, etc...

  if conf.testFunc == TesterFunctionality.SENDER:
    wakuConf.lightpushnode = conf.serviceNode
  else:
    wakuConf.filterNode = conf.serviceNode
  wakuNodeConf.metricsServer = true
  wakuNodeConf.metricsServerAddress = parseIpAddress("0.0.0.0")
  wakuNodeConf.metricsServerPort = conf.metricsPort

  wakuConf.relay = false
  wakuConf.filter = false
  wakuConf.lightpush = false
  wakuConf.store = false
  # If the bootstrap option is chosen we expect our clients not to be mounted,
  # so we will mount PeerExchange manually to gather possible service peers;
  # if we get some, we will mount the client protocols afterwards.
  wakuNodeConf.peerExchange = false
  wakuNodeConf.relay = false
  wakuNodeConf.filter = false
  wakuNodeConf.lightpush = false
  wakuNodeConf.store = false

  wakuConf.rest = true
  wakuNodeConf.rest = false
  wakuNodeConf.relayServiceRatio = "40:60"

  # NOTE: {.threadvar.} is used to make the global variable GC safe for the closure that uses it
  # It will always be called from the main thread anyway.
  # Ref: https://nim-lang.org/docs/manual.html#threads-gc-safety
  var nodeHealthMonitor {.threadvar.}: WakuNodeHealthMonitor
  nodeHealthMonitor = WakuNodeHealthMonitor()
  nodeHealthMonitor.setOverallHealth(HealthStatus.INITIALIZING)

  let restServer = rest_server_builder.startRestServerEsentials(
    nodeHealthMonitor, wakuConf
  ).valueOr:
    error "Starting esential REST server failed.", error = $error
  let wakuConf = wakuNodeConf.toWakuConf().valueOr:
    error "Issue converting toWakuConf", error = $error
    quit(QuitFailure)

  var wakuApp = Waku.init(wakuConf).valueOr:
  var waku = (waitFor Waku.new(wakuConf)).valueOr:
    error "Waku initialization failed", error = error
    quit(QuitFailure)

  wakuApp.restServer = restServer

  nodeHealthMonitor.setNode(wakuApp.node)

  (waitFor startWaku(addr wakuApp)).isOkOr:
  (waitFor startWaku(addr waku)).isOkOr:
    error "Starting waku failed", error = error
    quit(QuitFailure)

  rest_server_builder.startRestServerProtocolSupport(
    restServer, wakuApp.node, wakuApp.wakuDiscv5, wakuConf
  ).isOkOr:
    error "Starting protocols support REST server failed.", error = $error
    quit(QuitFailure)
  info "Setting up shutdown hooks"

  wakuApp.metricsServer = waku_metrics.startMetricsServerAndLogging(wakuConf).valueOr:
    error "Starting monitoring and external interfaces failed", error = error
    quit(QuitFailure)

  nodeHealthMonitor.setOverallHealth(HealthStatus.READY)

  debug "Setting up shutdown hooks"
  ## Setup shutdown hooks for this process.
  ## Stop node gracefully on shutdown.

  proc asyncStopper(wakuApp: Waku) {.async: (raises: [Exception]).} =
    nodeHealthMonitor.setOverallHealth(HealthStatus.SHUTTING_DOWN)
    await wakuApp.stop()
  proc asyncStopper(waku: Waku) {.async: (raises: [Exception]).} =
    await waku.stop()
    quit(QuitSuccess)

  # Handle Ctrl-C SIGINT
@@ -157,7 +139,7 @@ when isMainModule:
    # workaround for https://github.com/nim-lang/Nim/issues/4057
    setupForeignThreadGc()
    notice "Shutting down after receiving SIGINT"
    asyncSpawn asyncStopper(wakuApp)
    asyncSpawn asyncStopper(waku)

  setControlCHook(handleCtrlC)

@@ -165,7 +147,7 @@ when isMainModule:
  when defined(posix):
    proc handleSigterm(signal: cint) {.noconv.} =
      notice "Shutting down after receiving SIGTERM"
      asyncSpawn asyncStopper(wakuApp)
      asyncSpawn asyncStopper(waku)

    c_signal(ansi_c.SIGTERM, handleSigterm)

@@ -178,16 +160,55 @@ when isMainModule:
      # Not available in -d:release mode
      writeStackTrace()

    waitFor wakuApp.stop()
    waitFor waku.stop()
    quit(QuitFailure)

  c_signal(ansi_c.SIGSEGV, handleSigsegv)

  info "Node setup complete"

  var codec = WakuLightPushCodec
  # mounting relevant client, for PX filter client must be mounted ahead
  if conf.testFunc == TesterFunctionality.SENDER:
    setupAndPublish(wakuApp.node, conf)
    codec = WakuLightPushCodec
  else:
    setupAndSubscribe(wakuApp.node, conf)
    codec = WakuFilterSubscribeCodec

  var lookForServiceNode = false
  var serviceNodePeerInfo: RemotePeerInfo
  if conf.serviceNode.len == 0:
    if conf.bootstrapNode.len > 0:
      info "Bootstrapping with PeerExchange to gather random service node"
      let futForServiceNode = pxLookupServiceNode(waku.node, conf)
      if not (waitFor futForServiceNode.withTimeout(20.minutes)):
        error "Service node not found in time via PX"
        quit(QuitFailure)

      futForServiceNode.read().isOkOr:
        error "Service node for test not found via PX"
        quit(QuitFailure)

      serviceNodePeerInfo = selectRandomServicePeer(
        waku.node.peerManager, none(RemotePeerInfo), codec
      ).valueOr:
        error "Service node selection failed"
        quit(QuitFailure)
    else:
      error "No service or bootstrap node provided"
      quit(QuitFailure)
  else:
    # support for both ENR and URI formatted service node addresses
    serviceNodePeerInfo = translateToRemotePeerInfo(conf.serviceNode).valueOr:
      error "failed to parse service-node", node = conf.serviceNode
      quit(QuitFailure)

  info "Service node to be used", serviceNode = $serviceNodePeerInfo

  logSelfPeers(waku.node.peerManager)

  if conf.testFunc == TesterFunctionality.SENDER:
    setupAndPublish(waku.node, conf, serviceNodePeerInfo)
  else:
    setupAndListen(waku.node, conf, serviceNodePeerInfo)

  runForever()
```

`apps/liteprotocoltester/lpt_metrics.nim` (new file, 56 lines):

```nim
## Example showing how a resource restricted client may
## subscribe to messages without relay

import metrics

export metrics

declarePublicGauge lpt_receiver_sender_peer_count, "count of sender peers"

declarePublicCounter lpt_receiver_received_messages_count,
  "number of messages received per peer", ["peer"]

declarePublicCounter lpt_receiver_received_bytes,
  "number of received bytes per peer", ["peer"]

declarePublicGauge lpt_receiver_missing_messages_count,
  "number of missing messages per peer", ["peer"]

declarePublicCounter lpt_receiver_duplicate_messages_count,
  "number of duplicate messages per peer", ["peer"]

declarePublicGauge lpt_receiver_distinct_duplicate_messages_count,
  "number of distinct duplicate messages per peer", ["peer"]

declarePublicGauge lpt_receiver_latencies,
  "Message delivery latency per peer (min-avg-max)", ["peer", "latency"]

declarePublicCounter lpt_receiver_lost_subscription_count,
  "number of filter service peer failed PING requests - lost subscription"

declarePublicCounter lpt_publisher_sent_messages_count, "number of messages published"

declarePublicCounter lpt_publisher_failed_messages_count,
  "number of messages failed to publish per failure cause", ["cause"]

declarePublicCounter lpt_publisher_sent_bytes, "number of total bytes sent"

declarePublicCounter lpt_service_peer_failure_count,
  "number of failures during using service peer [publisher/receiver]", ["role", "agent"]

declarePublicCounter lpt_change_service_peer_count,
  "number of times [publisher/receiver] had to change service peer", ["role"]

declarePublicGauge lpt_px_peers,
  "Number of peers PeerExchange discovered and can be dialed"

declarePublicGauge lpt_dialed_peers, "Number of peers successfully dialed", ["agent"]

declarePublicGauge lpt_dial_failures, "Number of dial failures by cause", ["agent"]

declarePublicHistogram lpt_publish_duration_seconds,
  "duration to lightpush messages",
  buckets = [
    0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1.0, 2.5, 5.0, 7.5, 10.0,
    15.0, 20.0, 30.0, Inf,
  ]
```

`apps/liteprotocoltester/lpt_supervisor.py` (new executable file, 54 lines):

```python
#!/usr/bin/env python3

import os
import time
from subprocess import Popen
import sys

def load_env(file_path):
    predefined_test_env = {}
    with open(file_path) as f:
        for line in f:
            if line.strip() and not line.startswith('#'):
                key, value = line.strip().split('=', 1)
                predefined_test_env[key] = value
    return predefined_test_env

def run_tester_node(predefined_test_env):
    role = sys.argv[1]
    # override incoming environment variables with the ones from the file to prefer the predefined testing environment.
    for key, value in predefined_test_env.items():
        os.environ[key] = value

    script_cmd = "/usr/bin/run_tester_node_at_infra.sh /usr/bin/liteprotocoltester {role}".format(role=role)
    return os.system(script_cmd)

if __name__ == "__main__":
    if len(sys.argv) < 2 or sys.argv[1] not in ["RECEIVER", "SENDER", "SENDERV3"]:
        print("Error: First argument must be either 'RECEIVER' or 'SENDER' or 'SENDERV3'")
        sys.exit(1)

    predefined_test_env_file = '/usr/bin/infra.env'
    predefined_test_env = load_env(predefined_test_env_file)

    test_interval_minutes = int(predefined_test_env.get('TEST_INTERVAL_MINUTES', 60))  # Default to 60 minutes if not set
    print(f"supervisor: Start testing loop. Interval is {test_interval_minutes} minutes")
    counter = 0

    while True:
        counter += 1
        start_time = time.time()
        print(f"supervisor: Run #{counter} started at {time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(start_time))}")
        print(f"supervisor: with arguments: {predefined_test_env}")

        exit_code = run_tester_node(predefined_test_env)

        end_time = time.time()
        run_time = end_time - start_time
        sleep_time = max(5 * 60, (test_interval_minutes * 60) - run_time)

        print(f"supervisor: Tester node finished at {time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(end_time))}")
        print(f"supervisor: Runtime was {run_time:.2f} seconds")
        print(f"supervisor: Next run scheduled in {sleep_time // 60:.2f} minutes")

        time.sleep(sleep_time)
```

File diff suppressed because it is too large

Modified grafana datasource provisioning (old and new lines appear adjacent):

```diff
@@ -5,7 +5,7 @@ datasources:
    type: prometheus
    access: proxy
    org_id: 1
    url: http://prometheus:9090
    url: http://prometheus:9099
    is_default: true
    version: 1
    editable: true
    editable: true
```

Modified grafana.ini (old and new lines appear adjacent):

```diff
@@ -1,9 +1,11 @@
instance_name = nwaku dashboard
instance_name = liteprotocoltester dashboard

;[dashboards.json]
;enabled = true
;path = /home/git/grafana/grafana-dashboards/dashboards

[server]
http_port = 3033

#################################### Auth ##########################
[auth]
```
@ -1,284 +0,0 @@
|
||||
pg_replication:
|
||||
query: "SELECT CASE WHEN NOT pg_is_in_recovery() THEN 0 ELSE GREATEST (0, EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp()))) END AS lag"
|
||||
master: true
|
||||
metrics:
|
||||
- lag:
|
||||
usage: "GAUGE"
|
||||
description: "Replication lag behind master in seconds"
|
||||
|
||||
pg_postmaster:
|
||||
query: "SELECT pg_postmaster_start_time as start_time_seconds from pg_postmaster_start_time()"
|
||||
master: true
|
||||
metrics:
|
||||
- start_time_seconds:
|
||||
usage: "GAUGE"
|
||||
description: "Time at which postmaster started"
|
||||
|
||||
pg_stat_user_tables:
|
||||
query: |
|
||||
SELECT
|
||||
current_database() datname,
|
||||
schemaname,
|
||||
relname,
|
||||
seq_scan,
|
||||
seq_tup_read,
|
||||
idx_scan,
|
||||
idx_tup_fetch,
|
||||
n_tup_ins,
|
||||
n_tup_upd,
|
||||
n_tup_del,
|
||||
n_tup_hot_upd,
|
||||
n_live_tup,
|
||||
n_dead_tup,
|
||||
n_mod_since_analyze,
|
||||
COALESCE(last_vacuum, '1970-01-01Z') as last_vacuum,
|
||||
COALESCE(last_autovacuum, '1970-01-01Z') as last_autovacuum,
|
||||
COALESCE(last_analyze, '1970-01-01Z') as last_analyze,
|
||||
COALESCE(last_autoanalyze, '1970-01-01Z') as last_autoanalyze,
|
||||
vacuum_count,
|
||||
autovacuum_count,
|
||||
analyze_count,
|
||||
autoanalyze_count
|
||||
FROM
|
||||
pg_stat_user_tables
|
||||
metrics:
|
||||
- datname:
|
||||
usage: "LABEL"
|
||||
description: "Name of current database"
|
||||
- schemaname:
|
||||
usage: "LABEL"
|
||||
description: "Name of the schema that this table is in"
|
||||
- relname:
|
||||
usage: "LABEL"
|
||||
description: "Name of this table"
|
||||
- seq_scan:
|
||||
usage: "COUNTER"
|
||||
description: "Number of sequential scans initiated on this table"
|
||||
- seq_tup_read:
|
||||
usage: "COUNTER"
|
||||
description: "Number of live rows fetched by sequential scans"
|
||||
- idx_scan:
|
||||
usage: "COUNTER"
|
||||
description: "Number of index scans initiated on this table"
|
||||
- idx_tup_fetch:
|
||||
usage: "COUNTER"
|
||||
description: "Number of live rows fetched by index scans"
|
||||
- n_tup_ins:
|
||||
usage: "COUNTER"
|
||||
description: "Number of rows inserted"
|
||||
- n_tup_upd:
|
||||
usage: "COUNTER"
|
||||
description: "Number of rows updated"
|
||||
- n_tup_del:
|
||||
usage: "COUNTER"
|
||||
description: "Number of rows deleted"
|
||||
- n_tup_hot_upd:
|
||||
usage: "COUNTER"
|
||||
description: "Number of rows HOT updated (i.e., with no separate index update required)"
|
||||
- n_live_tup:
|
||||
usage: "GAUGE"
|
||||
description: "Estimated number of live rows"
|
||||
- n_dead_tup:
|
||||
usage: "GAUGE"
|
||||
description: "Estimated number of dead rows"
|
||||
- n_mod_since_analyze:
|
||||
usage: "GAUGE"
|
||||
description: "Estimated number of rows changed since last analyze"
|
||||
- last_vacuum:
|
||||
usage: "GAUGE"
|
||||
description: "Last time at which this table was manually vacuumed (not counting VACUUM FULL)"
|
||||
- last_autovacuum:
|
||||
usage: "GAUGE"
|
||||
description: "Last time at which this table was vacuumed by the autovacuum daemon"
|
||||
- last_analyze:
|
||||
usage: "GAUGE"
|
||||
description: "Last time at which this table was manually analyzed"
|
||||
- last_autoanalyze:
|
||||
usage: "GAUGE"
|
||||
description: "Last time at which this table was analyzed by the autovacuum daemon"
|
||||
- vacuum_count:
|
||||
usage: "COUNTER"
|
||||
description: "Number of times this table has been manually vacuumed (not counting VACUUM FULL)"
|
||||
- autovacuum_count:
|
||||
usage: "COUNTER"
|
||||
description: "Number of times this table has been vacuumed by the autovacuum daemon"
|
||||
- analyze_count:
|
||||
usage: "COUNTER"
|
||||
description: "Number of times this table has been manually analyzed"
|
||||
- autoanalyze_count:
|
||||
usage: "COUNTER"
|
||||
description: "Number of times this table has been analyzed by the autovacuum daemon"
|
||||
|
||||
pg_statio_user_tables:
|
||||
query: "SELECT current_database() datname, schemaname, relname, heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit, toast_blks_read, toast_blks_hit, tidx_blks_read, tidx_blks_hit FROM pg_statio_user_tables"
|
||||
metrics:
|
||||
- datname:
|
||||
usage: "LABEL"
|
||||
description: "Name of current database"
|
||||
- schemaname:
|
||||
usage: "LABEL"
|
||||
description: "Name of the schema that this table is in"
|
||||
- relname:
|
||||
usage: "LABEL"
|
||||
description: "Name of this table"
|
||||
- heap_blks_read:
|
||||
usage: "COUNTER"
|
||||
description: "Number of disk blocks read from this table"
|
||||
- heap_blks_hit:
|
||||
usage: "COUNTER"
|
||||
description: "Number of buffer hits in this table"
|
||||
- idx_blks_read:
|
||||
usage: "COUNTER"
|
||||
description: "Number of disk blocks read from all indexes on this table"
|
||||
- idx_blks_hit:
|
||||
usage: "COUNTER"
|
||||
description: "Number of buffer hits in all indexes on this table"
|
||||
- toast_blks_read:
|
||||
usage: "COUNTER"
|
||||
description: "Number of disk blocks read from this table's TOAST table (if any)"
|
||||
- toast_blks_hit:
|
||||
usage: "COUNTER"
|
||||
description: "Number of buffer hits in this table's TOAST table (if any)"
|
||||
- tidx_blks_read:
|
||||
usage: "COUNTER"
|
||||
description: "Number of disk blocks read from this table's TOAST table indexes (if any)"
|
||||
- tidx_blks_hit:
|
||||
usage: "COUNTER"
|
||||
description: "Number of buffer hits in this table's TOAST table indexes (if any)"
|
||||
|
||||
# WARNING: This set of metrics can be very expensive on a busy server as every unique query executed will create an additional time series
pg_stat_statements:
  query: "SELECT t2.rolname, t3.datname, queryid, calls, ( total_plan_time + total_exec_time ) / 1000 as total_time_seconds, ( min_plan_time + min_exec_time ) / 1000 as min_time_seconds, ( max_plan_time + max_exec_time ) / 1000 as max_time_seconds, ( mean_plan_time + mean_exec_time ) / 1000 as mean_time_seconds, ( stddev_plan_time + stddev_exec_time ) / 1000 as stddev_time_seconds, rows, shared_blks_hit, shared_blks_read, shared_blks_dirtied, shared_blks_written, local_blks_hit, local_blks_read, local_blks_dirtied, local_blks_written, temp_blks_read, temp_blks_written, blk_read_time / 1000 as blk_read_time_seconds, blk_write_time / 1000 as blk_write_time_seconds FROM pg_stat_statements t1 JOIN pg_roles t2 ON (t1.userid=t2.oid) JOIN pg_database t3 ON (t1.dbid=t3.oid) WHERE t2.rolname != 'rdsadmin' AND queryid IS NOT NULL"
  master: true
  metrics:
    - rolname:
        usage: "LABEL"
        description: "Name of user"
    - datname:
        usage: "LABEL"
        description: "Name of database"
    - queryid:
        usage: "LABEL"
        description: "Query ID"
    - calls:
        usage: "COUNTER"
        description: "Number of times executed"
    - total_time_seconds:
        usage: "COUNTER"
        description: "Total time spent in the statement, in seconds"
    - min_time_seconds:
        usage: "GAUGE"
        description: "Minimum time spent in the statement, in seconds"
    - max_time_seconds:
        usage: "GAUGE"
        description: "Maximum time spent in the statement, in seconds"
    - mean_time_seconds:
        usage: "GAUGE"
        description: "Mean time spent in the statement, in seconds"
    - stddev_time_seconds:
        usage: "GAUGE"
        description: "Population standard deviation of time spent in the statement, in seconds"
    - rows:
        usage: "COUNTER"
        description: "Total number of rows retrieved or affected by the statement"
    - shared_blks_hit:
        usage: "COUNTER"
        description: "Total number of shared block cache hits by the statement"
    - shared_blks_read:
        usage: "COUNTER"
        description: "Total number of shared blocks read by the statement"
    - shared_blks_dirtied:
        usage: "COUNTER"
        description: "Total number of shared blocks dirtied by the statement"
    - shared_blks_written:
        usage: "COUNTER"
        description: "Total number of shared blocks written by the statement"
    - local_blks_hit:
        usage: "COUNTER"
        description: "Total number of local block cache hits by the statement"
    - local_blks_read:
        usage: "COUNTER"
        description: "Total number of local blocks read by the statement"
    - local_blks_dirtied:
        usage: "COUNTER"
        description: "Total number of local blocks dirtied by the statement"
    - local_blks_written:
        usage: "COUNTER"
        description: "Total number of local blocks written by the statement"
    - temp_blks_read:
        usage: "COUNTER"
        description: "Total number of temp blocks read by the statement"
    - temp_blks_written:
        usage: "COUNTER"
        description: "Total number of temp blocks written by the statement"
    - blk_read_time_seconds:
        usage: "COUNTER"
        description: "Total time the statement spent reading blocks, in seconds (if track_io_timing is enabled, otherwise zero)"
    - blk_write_time_seconds:
        usage: "COUNTER"
        description: "Total time the statement spent writing blocks, in seconds (if track_io_timing is enabled, otherwise zero)"

pg_process_idle:
  query: |
    WITH
      metrics AS (
        SELECT
          application_name,
          SUM(EXTRACT(EPOCH FROM (CURRENT_TIMESTAMP - state_change))::bigint)::float AS process_idle_seconds_sum,
          COUNT(*) AS process_idle_seconds_count
        FROM pg_stat_activity
        WHERE state = 'idle'
        GROUP BY application_name
      ),
      buckets AS (
        SELECT
          application_name,
          le,
          SUM(
            CASE WHEN EXTRACT(EPOCH FROM (CURRENT_TIMESTAMP - state_change)) <= le
              THEN 1
              ELSE 0
            END
          )::bigint AS bucket
        FROM
          pg_stat_activity,
          UNNEST(ARRAY[1, 2, 5, 15, 30, 60, 90, 120, 300]) AS le
        GROUP BY application_name, le
        ORDER BY application_name, le
      )
    SELECT
      application_name,
      process_idle_seconds_sum as seconds_sum,
      process_idle_seconds_count as seconds_count,
      ARRAY_AGG(le) AS seconds,
      ARRAY_AGG(bucket) AS seconds_bucket
    FROM metrics JOIN buckets USING (application_name)
    GROUP BY 1, 2, 3
  metrics:
    - application_name:
        usage: "LABEL"
        description: "Application Name"
    - seconds:
        usage: "HISTOGRAM"
        description: "Idle time of server processes"

pg_tb_stats:
  query: |
    select pubsubtopic, count(*) AS messages FROM (SELECT id, array_agg(pubsubtopic ORDER BY pubsubtopic) AS pubsubtopic FROM messages GROUP BY id) sub GROUP BY pubsubtopic ORDER BY pubsubtopic;
  metrics:
    - pubsubtopic:
        usage: "LABEL"
        description: "pubsubtopic"
    - messages:
        usage: "GAUGE"
        description: "Number of messages for the given pubsub topic"

pg_tb_messages:
  query: |
    SELECT
      COUNT(ID)
    FROM messages
  metrics:
    - count:
        usage: "GAUGE"
        description: "Row count in `messages` table"

@@ -1,9 +0,0 @@
auth_modules:
  mypostgres:
    type: userpass
    userpass:
      username: postgres
      password: ${POSTGRES_PASSWORD}
    options:
      # options become key=value parameters of the DSN
      sslmode: disable

@@ -5,6 +5,31 @@ global:
  monitor: "Monitoring"

scrape_configs:
  - job_name: "nwaku"
  - job_name: "liteprotocoltester"
    static_configs:
      - targets: ["nwaku:8003"]
      - targets: ["liteprotocoltester-publishernode-1:8003",
                  "liteprotocoltester-publishernode-2:8003",
                  "liteprotocoltester-publishernode-3:8003",
                  "liteprotocoltester-publishernode-4:8003",
                  "liteprotocoltester-publishernode-5:8003",
                  "liteprotocoltester-publishernode-6:8003",
                  "liteprotocoltester-receivernode-1:8003",
                  "liteprotocoltester-receivernode-2:8003",
                  "liteprotocoltester-receivernode-3:8003",
                  "liteprotocoltester-receivernode-4:8003",
                  "liteprotocoltester-receivernode-5:8003",
                  "liteprotocoltester-receivernode-6:8003",
                  "publishernode:8003",
                  "publishernode-1:8003",
                  "publishernode-2:8003",
                  "publishernode-3:8003",
                  "publishernode-4:8003",
                  "publishernode-5:8003",
                  "publishernode-6:8003",
                  "receivernode:8003",
                  "receivernode-1:8003",
                  "receivernode-2:8003",
                  "receivernode-3:8003",
                  "receivernode-4:8003",
                  "receivernode-5:8003",
                  "receivernode-6:8003",]

4 apps/liteprotocoltester/nim.cfg Normal file
@@ -0,0 +1,4 @@
-d:chronicles_line_numbers
-d:chronicles_runtime_filtering:on
-d:discv5_protocol_id:d5waku
path = "../.."

266 apps/liteprotocoltester/publisher.nim Normal file
@@ -0,0 +1,266 @@
import
  std/[strformat, sysrand, random, strutils, sequtils],
  system/ansi_c,
  chronicles,
  chronos,
  chronos/timer as chtimer,
  stew/byteutils,
  results,
  json_serialization as js
import
  waku/[
    common/logging,
    waku_node,
    node/peer_manager,
    waku_core,
    waku_lightpush/client,
    waku_lightpush/common,
    common/utils/parse_size_units,
  ],
  ./tester_config,
  ./tester_message,
  ./lpt_metrics,
  ./diagnose_connections,
  ./service_peer_management,
  ./publisher_base,
  ./legacy_publisher,
  ./v3_publisher

randomize()

type SizeRange* = tuple[min: uint64, max: uint64]

var RANDOM_PAYLOAD {.threadvar.}: seq[byte]
RANDOM_PAYLOAD = urandom(1024 * 1024)
  # 1 MiB of random payload used to pad messages up to the requested size

proc prepareMessage(
    sender: string,
    messageIndex, numMessages: uint32,
    startedAt: Timestamp,
    prevMessageAt: var Timestamp,
    contentTopic: ContentTopic,
    size: SizeRange,
): (WakuMessage, uint64) =
  var renderSize = rand(size.min .. size.max)
  let current = getNowInNanosecondTime()
  let payload = ProtocolTesterMessage(
    sender: sender,
    index: messageIndex,
    count: numMessages,
    startedAt: startedAt,
    sinceStart: current - startedAt,
    sincePrev: current - prevMessageAt,
    size: renderSize,
  )

  prevMessageAt = current

  let text = js.Json.encode(payload)
  let contentPayload = toBytes(text & " \0")

  if renderSize < len(contentPayload).uint64:
    renderSize = len(contentPayload).uint64

  let finalPayload =
    concat(contentPayload, RANDOM_PAYLOAD[0 .. renderSize - len(contentPayload).uint64])
  let message = WakuMessage(
    payload: finalPayload, # content of the message
    contentTopic: contentTopic, # content topic to publish to
    ephemeral: true, # tell store nodes to not store it
    timestamp: current, # current timestamp
  )

  return (message, renderSize)

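# --- Illustrative usage sketch (not part of the original file) ---
# prepareMessage pads each payload up to the randomly chosen size and updates
# prevMessageAt in place, so consecutive calls carry the inter-message delay.
# The sender string and content topic below are hypothetical examples.
when isMainModule:
  var prevAt = getNowInNanosecondTime()
  let startAt = prevAt
  let (exampleMsg, exampleSize) = prepareMessage(
    "example-sender",
    1,
    10,
    startAt,
    prevAt,
    ContentTopic("/lpt/1/example/plain"),
    (min: 1024'u64, max: 2048'u64),
  )
  echo "prepared message of ", exampleSize, " bytes at ", exampleMsg.timestamp
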
var sentMessages {.threadvar.}: OrderedTable[uint32, tuple[hash: string, relayed: bool]]
var failedToSendCause {.threadvar.}: Table[string, uint32]
var failedToSendCount {.threadvar.}: uint32
var numMessagesToSend {.threadvar.}: uint32
var messagesSent {.threadvar.}: uint32
var noOfServicePeerSwitches {.threadvar.}: uint32

proc reportSentMessages() =
  let report = catch:
    """*----------------------------------------*
| Service Peer Switches: {noOfServicePeerSwitches:>15} |
*----------------------------------------*
| Expected | Sent | Failed |
|{numMessagesToSend+failedToSendCount:>11} |{messagesSent:>11} |{failedToSendCount:>11} |
*----------------------------------------*""".fmt()

  echo report.valueOr("Error while printing statistics")

  echo "*--------------------------------------------------------------------------------------------------*"
  echo "| Failure cause | count |"
  for (cause, count) in failedToSendCause.pairs:
    echo fmt"|{cause:<87}|{count:>10}|"
  echo "*--------------------------------------------------------------------------------------------------*"

  echo "*--------------------------------------------------------------------------------------------------*"
  echo "| Index | Relayed | Hash |"
  for (index, info) in sentMessages.pairs:
    echo fmt"|{index+1:>10}|{info.relayed:<9}| {info.hash:<76}|"
  echo "*--------------------------------------------------------------------------------------------------*"
  # every sent message hash should be logged only once
  sentMessages.clear()

proc publishMessages(
    wakuNode: WakuNode,
    publisher: PublisherBase,
    servicePeer: RemotePeerInfo,
    lightpushPubsubTopic: PubsubTopic,
    lightpushContentTopic: ContentTopic,
    numMessages: uint32,
    messageSizeRange: SizeRange,
    messageInterval: Duration,
    preventPeerSwitch: bool,
) {.async.} =
  var actualServicePeer = servicePeer
  let startedAt = getNowInNanosecondTime()
  var prevMessageAt = startedAt
  var renderMsgSize = messageSizeRange
  # set sane defaults for min/max message size so they do not conflict with the meaningful payload size
  renderMsgSize.min = max(1024.uint64, renderMsgSize.min) # do not use less than 1KB
  renderMsgSize.max = max(2048.uint64, renderMsgSize.max) # minimum of max is 2KB
  renderMsgSize.min = min(renderMsgSize.min, renderMsgSize.max)
  renderMsgSize.max = max(renderMsgSize.min, renderMsgSize.max)

  const maxFailedPush = 3
  var noFailedPush = 0
  var noFailedServiceNodeSwitches = 0

  let selfPeerId = $wakuNode.switch.peerInfo.peerId
  failedToSendCount = 0
  numMessagesToSend = if numMessages == 0: uint32.high else: numMessages
  messagesSent = 0

  while messagesSent < numMessagesToSend:
    let (message, msgSize) = prepareMessage(
      selfPeerId,
      messagesSent + 1,
      numMessagesToSend,
      startedAt,
      prevMessageAt,
      lightpushContentTopic,
      renderMsgSize,
    )

    let publishStartTime = Moment.now()

    let wlpRes = await publisher.send(lightpushPubsubTopic, message, actualServicePeer)

    let publishDuration = Moment.now() - publishStartTime

    let msgHash = computeMessageHash(lightpushPubsubTopic, message).to0xHex

    if wlpRes.isOk():
      lpt_publish_duration_seconds.observe(publishDuration.milliseconds.float / 1000)

      sentMessages[messagesSent] = (hash: msgHash, relayed: true)
      notice "published message using lightpush",
        index = messagesSent + 1,
        count = numMessagesToSend,
        size = msgSize,
        pubsubTopic = lightpushPubsubTopic,
        hash = msgHash
      inc(messagesSent)
      lpt_publisher_sent_messages_count.inc()
      lpt_publisher_sent_bytes.inc(amount = msgSize.int64)
      if noFailedPush > 0:
        noFailedPush -= 1
    else:
      sentMessages[messagesSent] = (hash: msgHash, relayed: false)
      failedToSendCause.mgetOrPut(wlpRes.error, 1).inc()
      error "failed to publish message using lightpush",
        err = wlpRes.error, hash = msgHash
      inc(failedToSendCount)
      lpt_publisher_failed_messages_count.inc(labelValues = [wlpRes.error])
      if not wlpRes.error.toLower().contains("dial"):
        # retry sending after a shorter wait
        await sleepAsync(2.seconds)
        continue
      else:
        noFailedPush += 1
        lpt_service_peer_failure_count.inc(
          labelValues = ["publisher", actualServicePeer.getAgent()]
        )
        if not preventPeerSwitch and noFailedPush > maxFailedPush:
          info "Max push failure limit reached. Trying to switch peer."
          actualServicePeer = selectRandomServicePeer(
            wakuNode.peerManager, some(actualServicePeer), WakuLightPushCodec
          ).valueOr:
            error "Failed to find new service peer. Exiting."
            noFailedServiceNodeSwitches += 1
            break

          info "New service peer in use",
            codec = lightpushPubsubTopic,
            peer = constructMultiaddrStr(actualServicePeer)

          noFailedPush = 0
          noOfServicePeerSwitches += 1
          lpt_change_service_peer_count.inc(labelValues = ["publisher"])
          continue # try again with new peer without delay

    await sleepAsync(messageInterval)

proc setupAndPublish*(
    wakuNode: WakuNode, conf: LiteProtocolTesterConf, servicePeer: RemotePeerInfo
) =
  var publisher: PublisherBase
  if conf.lightpushVersion == LightpushVersion.LEGACY:
    info "Using legacy lightpush protocol for publishing messages"
    publisher = LegacyPublisher.new(wakuNode)
  else:
    info "Using lightpush v3 protocol for publishing messages"
    publisher = V3Publisher.new(wakuNode)

  # give the receiver side some time to set up
  let waitTillStartTesting = conf.startPublishingAfter.seconds

  let parsedMinMsgSize = parseMsgSize(conf.minTestMessageSize).valueOr:
    error "failed to parse 'min-test-msg-size' param: ", error = error
    return

  let parsedMaxMsgSize = parseMsgSize(conf.maxTestMessageSize).valueOr:
    error "failed to parse 'max-test-msg-size' param: ", error = error
    return

  info "Sending test messages in", wait = waitTillStartTesting
  waitFor sleepAsync(waitTillStartTesting)

  info "Start sending messages to service node using lightpush"

  sentMessages.sort(system.cmp)

  let interval = secs(60)
  var printStats: CallbackFunc

  printStats = CallbackFunc(
    proc(udata: pointer) {.gcsafe.} =
      reportSentMessages()

      if messagesSent >= numMessagesToSend:
        info "All messages are sent. Exiting."

        ## for graceful shutdown through signal hooks
        discard c_raise(ansi_c.SIGTERM)
      else:
        discard setTimer(Moment.fromNow(interval), printStats)
  )

  discard setTimer(Moment.fromNow(interval), printStats)

  # Start the publishing loop
  asyncSpawn publishMessages(
    wakuNode,
    publisher,
    servicePeer,
    conf.getPubsubTopic(),
    conf.contentTopics[0],
    conf.numMessages,
    (min: parsedMinMsgSize, max: parsedMaxMsgSize),
    conf.messageInterval.milliseconds,
    conf.fixedServicePeer,
  )

14 apps/liteprotocoltester/publisher_base.nim Normal file
@@ -0,0 +1,14 @@
import chronos, results
import waku/[waku_node, waku_core]

type PublisherBase* = ref object of RootObj
  wakuNode*: WakuNode

method send*(
    self: PublisherBase,
    topic: PubsubTopic,
    message: WakuMessage,
    servicePeer: RemotePeerInfo,
): Future[Result[void, string]] {.base, async.} =
  discard
  # On error, implementations must return the original error description unchanged,
  # as the text is used to distinguish error types in metrics.

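# --- Illustrative subclass sketch (an assumption, not part of the original file) ---
# Concrete publishers (LegacyPublisher, V3Publisher) override `send`; this
# hypothetical NoopPublisher shows the minimal shape of such an override.
type NoopPublisher* = ref object of PublisherBase

method send*(
    self: NoopPublisher,
    topic: PubsubTopic,
    message: WakuMessage,
    servicePeer: RemotePeerInfo,
): Future[Result[void, string]] {.async.} =
  # A real implementation would invoke the node's lightpush client here and
  # return its error text unchanged so that metrics can bucket failures.
  return ok()
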
180 apps/liteprotocoltester/receiver.nim Normal file
@@ -0,0 +1,180 @@
## Example showing how a resource restricted client may
## subscribe to messages without relay

import
  std/options,
  system/ansi_c,
  chronicles,
  chronos,
  chronos/timer as chtimer,
  stew/byteutils,
  results,
  serialization,
  json_serialization as js

import
  waku/[
    common/logging,
    node/peer_manager,
    waku_node,
    waku_core,
    waku_filter_v2/client,
    waku_filter_v2/common,
    waku_core/multiaddrstr,
  ],
  ./tester_config,
  ./tester_message,
  ./statistics,
  ./diagnose_connections,
  ./service_peer_management,
  ./lpt_metrics

var actualFilterPeer {.threadvar.}: RemotePeerInfo

proc unsubscribe(
    wakuNode: WakuNode, filterPubsubTopic: PubsubTopic, filterContentTopic: ContentTopic
) {.async.} =
  notice "unsubscribing from filter"
  let unsubscribeRes = await wakuNode.wakuFilterClient.unsubscribe(
    actualFilterPeer, filterPubsubTopic, @[filterContentTopic]
  )
  if unsubscribeRes.isErr:
    notice "unsubscribe request failed", err = unsubscribeRes.error
  else:
    notice "unsubscribe request successful"

proc maintainSubscription(
    wakuNode: WakuNode,
    filterPubsubTopic: PubsubTopic,
    filterContentTopic: ContentTopic,
    preventPeerSwitch: bool,
) {.async.} =
  const maxFailedSubscribes = 3
  const maxFailedServiceNodeSwitches = 10
  var noFailedSubscribes = 0
  var noFailedServiceNodeSwitches = 0
  var isFirstPingOnNewPeer = true
  const RetryWaitMs = 2.seconds # Quick retry interval
  const SubscriptionMaintenanceMs = 30.seconds # Subscription maintenance interval
  while true:
    info "maintaining subscription at", peer = constructMultiaddrStr(actualFilterPeer)
    # First use filter-ping to check if we have an active subscription
    let pingErr = (await wakuNode.wakuFilterClient.ping(actualFilterPeer)).errorOr:
      await sleepAsync(SubscriptionMaintenanceMs)
      info "subscription is live."
      continue

    # the very first ping is expected to fail as we have not yet subscribed at all
    if not isFirstPingOnNewPeer:
      lpt_receiver_lost_subscription_count.inc()
    isFirstPingOnNewPeer = false
    # No subscription found. Let's subscribe.
    error "ping failed.", error = pingErr
    trace "no subscription found. Sending subscribe request"

    let subscribeErr = (
      await wakuNode.filterSubscribe(
        some(filterPubsubTopic), filterContentTopic, actualFilterPeer
      )
    ).errorOr:
      await sleepAsync(SubscriptionMaintenanceMs)
      if noFailedSubscribes > 0:
        noFailedSubscribes -= 1
      notice "subscribe request successful."
      continue

    noFailedSubscribes += 1
    lpt_service_peer_failure_count.inc(
      labelValues = ["receiver", actualFilterPeer.getAgent()]
    )
    error "Subscribe request failed.",
      err = subscribeErr, peer = actualFilterPeer, failCount = noFailedSubscribes

    # TODO: disconnect from failed actualFilterPeer
    #       asyncSpawn(wakuNode.peerManager.switch.disconnect(p))
    #       wakunode.peerManager.peerStore.delete(actualFilterPeer)

    if noFailedSubscribes < maxFailedSubscribes:
      await sleepAsync(RetryWaitMs) # Wait a bit before retrying
    elif not preventPeerSwitch:
      # try again with a new peer without delay
      actualFilterPeer = selectRandomServicePeer(
        wakuNode.peerManager, some(actualFilterPeer), WakuFilterSubscribeCodec
      ).valueOr:
        error "Failed to find new service peer. Exiting."
        noFailedServiceNodeSwitches += 1
        break

      info "Found new peer for codec",
        codec = filterPubsubTopic, peer = constructMultiaddrStr(actualFilterPeer)

      noFailedSubscribes = 0
      lpt_change_service_peer_count.inc(labelValues = ["receiver"])
      isFirstPingOnNewPeer = true
    else:
      await sleepAsync(SubscriptionMaintenanceMs)

proc setupAndListen*(
    wakuNode: WakuNode, conf: LiteProtocolTesterConf, servicePeer: RemotePeerInfo
) =
  if isNil(wakuNode.wakuFilterClient):
    # if we have not yet initialized the filter client, do it here; the only way we
    # can get here is by having a service peer discovered.
    waitFor wakuNode.mountFilterClient()

  info "Start receiving messages to service node using filter",
    servicePeer = servicePeer

  var stats: PerPeerStatistics
  actualFilterPeer = servicePeer

  let pushHandler = proc(
      pubsubTopic: PubsubTopic, message: WakuMessage
  ): Future[void] {.async, closure.} =
    let payloadStr = string.fromBytes(message.payload)
    let testerMessage = js.Json.decode(payloadStr, ProtocolTesterMessage)
    let msgHash = computeMessageHash(pubsubTopic, message).to0xHex

    stats.addMessage(testerMessage.sender, testerMessage, msgHash)

    notice "message received",
      index = testerMessage.index,
      count = testerMessage.count,
      startedAt = $testerMessage.startedAt,
      sinceStart = $testerMessage.sinceStart,
      sincePrev = $testerMessage.sincePrev,
      size = $testerMessage.size,
      pubsubTopic = pubsubTopic,
      hash = msgHash

  wakuNode.wakuFilterClient.registerPushHandler(pushHandler)

  let interval = millis(20000)
  var printStats: CallbackFunc

  # calculate the max wait after the last known message arrived before exiting:
  # 20% of the expected messages times the expected interval, capped at 10 min
  let maxWaitForLastMessage: Duration =
    min(conf.messageInterval.milliseconds * (conf.numMessages div 5), 10.minutes)

  printStats = CallbackFunc(
    proc(udata: pointer) {.gcsafe.} =
      stats.echoStats()

      if conf.numMessages > 0 and
          waitFor stats.checkIfAllMessagesReceived(maxWaitForLastMessage):
        waitFor unsubscribe(wakuNode, conf.getPubsubTopic(), conf.contentTopics[0])
        info "All messages received. Exiting."

        ## for graceful shutdown through signal hooks
        discard c_raise(ansi_c.SIGTERM)
      else:
        discard setTimer(Moment.fromNow(interval), printStats)
  )

  discard setTimer(Moment.fromNow(interval), printStats)

  # Start maintaining subscription
  asyncSpawn maintainSubscription(
    wakuNode, conf.getPubsubTopic(), conf.contentTopics[0], conf.fixedServicePeer
  )

42 apps/liteprotocoltester/run_service_node.sh Normal file → Executable file
@@ -5,6 +5,39 @@ IP=$(ip a | grep "inet " | grep -Fv 127.0.0.1 | sed 's/.*inet \([^/]*\).*/\1/')

echo "Service node IP: ${IP}"
|
||||
|
||||
if [ -n "${SHARD}" ]; then
|
||||
SHARD=--shard="${SHARD}"
|
||||
else
|
||||
SHARD=--shard="0"
|
||||
fi
|
||||
|
||||
if [ -n "${CLUSTER_ID}" ]; then
|
||||
CLUSTER_ID=--cluster-id="${CLUSTER_ID}"
|
||||
fi
|
||||
|
||||
echo "STANDALONE: ${STANDALONE}"
|
||||
|
||||
if [ -z "${STANDALONE}" ]; then
|
||||
|
||||
RETRIES=${RETRIES:=20}
|
||||
|
||||
while [ -z "${BOOTSTRAP_ENR}" ] && [ ${RETRIES} -ge 0 ]; do
|
||||
BOOTSTRAP_ENR=$(wget -qO- http://bootstrap:8645/debug/v1/info --header='Content-Type:application/json' 2> /dev/null | sed 's/.*"enrUri":"\([^"]*\)".*/\1/');
|
||||
echo "Bootstrap node not ready, retrying (retries left: ${RETRIES})"
|
||||
sleep 3
|
||||
RETRIES=$(( $RETRIES - 1 ))
|
||||
done
|
||||
|
||||
if [ -z "${BOOTSTRAP_ENR}" ]; then
|
||||
echo "Could not get BOOTSTRAP_ENR and none provided. Failing"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "Using bootstrap node: ${BOOTSTRAP_ENR}"
|
||||
|
||||
fi
|
||||
|
||||
|
||||
exec /usr/bin/wakunode\
|
||||
--relay=true\
|
||||
--filter=true\
|
||||
@ -20,10 +53,11 @@ exec /usr/bin/wakunode\
|
||||
--dns-discovery=true\
|
||||
--discv5-discovery=true\
|
||||
--discv5-enr-auto-update=True\
|
||||
--log-level=DEBUG\
|
||||
--discv5-bootstrap-node=${BOOTSTRAP_ENR}\
|
||||
--log-level=INFO\
|
||||
--metrics-server=True\
|
||||
--metrics-server-port=8003\
|
||||
--metrics-server-address=0.0.0.0\
|
||||
--nodekey=e3f5e64568b3a612dee609f6e7c0203c501dab6131662922bdcbcabd474281d5\
|
||||
--nat=extip:${IP}\
|
||||
--pubsub-topic=/waku/2/rs/0/0\
|
||||
--cluster-id=0
|
||||
${SHARD}\
|
||||
${CLUSTER_ID}
|
||||
|
||||
175 apps/liteprotocoltester/run_tester_node.sh Normal file → Executable file
@@ -1,76 +1,161 @@
#!/bin/sh

#set -x

if test -f .env; then
  echo "Using .env file"
  . $(pwd)/.env
fi

IP=$(ip a | grep "inet " | grep -Fv 127.0.0.1 | sed 's/.*inet \([^/]*\).*/\1/')

echo "I am a lite-protocol-tester node"

# Get a unique node index based on the container's IP
FOURTH_OCTET=${IP##*.}
THIRD_OCTET="${IP%.*}"; THIRD_OCTET="${THIRD_OCTET##*.}"
NODE_INDEX=$((FOURTH_OCTET + 256 * THIRD_OCTET))
BINARY_PATH=$1

echo "NODE_INDEX $NODE_INDEX"
if [ ! -x "${BINARY_PATH}" ]; then
  echo "Invalid binary path '${BINARY_PATH}'. Failing"
  exit 1
fi

RETRIES=${RETRIES:=10}
if [ "${2}" = "--help" ]; then
  echo "You might want to check nwaku/apps/liteprotocoltester/README.md"
  exec "${BINARY_PATH}" --help
  exit 0
fi

while [ -z "${SERIVCE_NODE_ADDR}" ] && [ ${RETRIES} -ge 0 ]; do
  SERIVCE_NODE_ADDR=$(wget -qO- http://servicenode:8645/debug/v1/info --header='Content-Type:application/json' 2> /dev/null | sed 's/.*"listenAddresses":\["\([^"]*\)".*/\1/');
  echo "Service node not ready, retrying (retries left: ${RETRIES})"
  sleep 1
  RETRIES=$(( $RETRIES - 1 ))
done
FUNCTION=$2
if [ "${FUNCTION}" = "SENDER" ]; then
  FUNCTION="--test-func=SENDER --lightpush-version=LEGACY"
  SERVICENAME=lightpush-service
fi

if [ "${FUNCTION}" = "SENDERV3" ]; then
  FUNCTION="--test-func=SENDER --lightpush-version=V3"
  SERVICENAME=lightpush-service
fi

if [ "${FUNCTION}" = "RECEIVER" ]; then
  FUNCTION=--test-func=RECEIVER
  SERVICENAME=filter-service
fi

SERIVCE_NODE_ADDR=$3
if [ -z "${SERIVCE_NODE_ADDR}" ]; then
  echo "Service node peer_id not provided. Failing"
  exit 1
fi

SELECTOR=$4
if [ -z "${SELECTOR}" ] || [ "${SELECTOR}" = "SERVICE" ]; then
  SERVICE_NODE_DIRECT=true
elif [ "${SELECTOR}" = "BOOTSTRAP" ]; then
  SERVICE_NODE_DIRECT=false
else
  echo "Invalid selector '${SELECTOR}'. Failing"
  exit 1
fi

DO_DETECT_SERVICENODE=0

if [ "${SERIVCE_NODE_ADDR}" = "servicenode" ]; then
  DO_DETECT_SERVICENODE=1
  SERIVCE_NODE_ADDR=""
  SERVICENAME=servicenode
fi

if [ "${SERIVCE_NODE_ADDR}" = "waku-sim" ]; then
  DO_DETECT_SERVICENODE=1
  SERIVCE_NODE_ADDR=""
  MY_EXT_IP=$(ip a | grep "inet " | grep -Fv 127.0.0.1 | sed 's/.*inet \([^/]*\).*/\1/')
else
  MY_EXT_IP=$(wget -qO- --no-check-certificate https://api4.ipify.org)
fi


if [ $DO_DETECT_SERVICENODE -eq 1 ]; then
  RETRIES=${RETRIES:=20}

  while [ -z "${SERIVCE_NODE_ADDR}" ] && [ ${RETRIES} -ge 0 ]; do
    SERVICE_DEBUG_INFO=$(wget -qO- http://${SERVICENAME}:8645/debug/v1/info --header='Content-Type:application/json' 2> /dev/null);
    echo "SERVICE_DEBUG_INFO: ${SERVICE_DEBUG_INFO}"

    SERIVCE_NODE_ADDR=$(wget -qO- http://${SERVICENAME}:8645/debug/v1/info --header='Content-Type:application/json' 2> /dev/null | sed 's/.*"listenAddresses":\["\([^"]*\)".*/\1/');
    echo "Service node not ready, retrying (retries left: ${RETRIES})"
    sleep 3
    RETRIES=$(( $RETRIES - 1 ))
  done

fi

if [ -z "${SERIVCE_NODE_ADDR}" ]; then
  echo "Could not get SERIVCE_NODE_ADDR and none provided. Failing"
  exit 1
fi

if $SERVICE_NODE_DIRECT; then
  FULL_NODE=--service-node="${SERIVCE_NODE_ADDR} --fixed-service-peer"
else
  FULL_NODE=--bootstrap-node="${SERIVCE_NODE_ADDR}"
fi

if [ -n "${PUBSUB}" ]; then
  PUBSUB=--pubsub-topic="${PUBSUB}"
if [ -n "${SHARD}" ]; then
  SHARD=--shard="${SHARD}"
else
  SHARD=--shard="0"
fi

if [ -n "${CONTENT_TOPIC}" ]; then
  CONTENT_TOPIC=--content-topic="${CONTENT_TOPIC}"
fi

FUNCTION=$1
if [ -n "${CLUSTER_ID}" ]; then
  CLUSTER_ID=--cluster-id="${CLUSTER_ID}"
fi

if [ -n "${START_PUBLISHING_AFTER_SECS}" ]; then
  START_PUBLISHING_AFTER_SECS=--start-publishing-after="${START_PUBLISHING_AFTER_SECS}"
fi

if [ -n "${MIN_MESSAGE_SIZE}" ]; then
  MIN_MESSAGE_SIZE=--min-test-msg-size="${MIN_MESSAGE_SIZE}"
fi

if [ -n "${MAX_MESSAGE_SIZE}" ]; then
  MAX_MESSAGE_SIZE=--max-test-msg-size="${MAX_MESSAGE_SIZE}"
fi


if [ -n "${NUM_MESSAGES}" ]; then
  NUM_MESSAGES=--num-messages="${NUM_MESSAGES}"
fi

if [ -n "${MESSAGE_INTERVAL_MILLIS}" ]; then
  MESSAGE_INTERVAL_MILLIS=--message-interval="${MESSAGE_INTERVAL_MILLIS}"
fi

if [ -n "${LOG_LEVEL}" ]; then
  LOG_LEVEL=--log-level=${LOG_LEVEL}
else
  LOG_LEVEL=--log-level=INFO
fi

echo "Running binary: ${BINARY_PATH}"
echo "Tester node: ${FUNCTION}"

REST_PORT=--rest-port=8647

if [ "${FUNCTION}" = "SENDER" ]; then
  FUNCTION=--test-func=SENDER
  REST_PORT=--rest-port=8646
fi

if [ "${FUNCTION}" = "RECEIVER" ]; then
  FUNCTION=--test-func=RECEIVER
  REST_PORT=--rest-port=8647
fi

if [ -z "${FUNCTION}" ]; then
  FUNCTION=--test-func=RECEIVER
fi

echo "Using service node: ${SERIVCE_NODE_ADDR}"
exec /usr/bin/liteprotocoltester\
  --log-level=DEBUG\
  --service-node="${SERIVCE_NODE_ADDR}"\
  --pubsub-topic=/waku/2/rs/0/0\
  --cluster-id=0\
  --num-messages=${NUM_MESSAGES}\
  --delay-messages=${DELAY_MESSAGES}\
  --nat=extip:${IP}\
  ${FUNCTION}\
  ${PUBSUB}\
  ${CONTENT_TOPIC}\
  ${REST_PORT}
echo "My external IP: ${MY_EXT_IP}"

exec "${BINARY_PATH}"\
  --nat=extip:${MY_EXT_IP}\
  --test-peers\
  ${LOG_LEVEL}\
  ${FULL_NODE}\
  ${MESSAGE_INTERVAL_MILLIS}\
  ${NUM_MESSAGES}\
  ${SHARD}\
  ${CONTENT_TOPIC}\
  ${CLUSTER_ID}\
  ${FUNCTION}\
  ${START_PUBLISHING_AFTER_SECS}\
  ${MIN_MESSAGE_SIZE}\
  ${MAX_MESSAGE_SIZE}
# --config-file=config.toml\

119 apps/liteprotocoltester/run_tester_node_at_infra.sh Normal file
@@ -0,0 +1,119 @@
#!/bin/sh

#set -x
#echo "$@"

if test -f .env; then
  echo "Using .env file"
  . $(pwd)/.env
fi


echo "I am a lite-protocol-tester node"

BINARY_PATH=$1

if [ ! -x "${BINARY_PATH}" ]; then
  echo "Invalid binary path '${BINARY_PATH}'. Failing"
  exit 1
fi

if [ "${2}" = "--help" ]; then
  echo "You might want to check nwaku/apps/liteprotocoltester/README.md"
  exec "${BINARY_PATH}" --help
  exit 0
fi

FUNCTION=$2
if [ "${FUNCTION}" = "SENDER" ]; then
  FUNCTION="--test-func=SENDER --lightpush-version=LEGACY"
  SERIVCE_NODE_ADDR=${LIGHTPUSH_SERVICE_PEER:-${LIGHTPUSH_BOOTSTRAP:-}}
  NODE_ARG=${LIGHTPUSH_SERVICE_PEER:+--service-node="${LIGHTPUSH_SERVICE_PEER}"}
  NODE_ARG=${NODE_ARG:---bootstrap-node="${LIGHTPUSH_BOOTSTRAP}"}
  METRICS_PORT=--metrics-port="${PUBLISHER_METRICS_PORT:-8003}"
fi

if [ "${FUNCTION}" = "SENDERV3" ]; then
  FUNCTION="--test-func=SENDER --lightpush-version=V3"
  SERIVCE_NODE_ADDR=${LIGHTPUSH_SERVICE_PEER:-${LIGHTPUSH_BOOTSTRAP:-}}
  NODE_ARG=${LIGHTPUSH_SERVICE_PEER:+--service-node="${LIGHTPUSH_SERVICE_PEER}"}
  NODE_ARG=${NODE_ARG:---bootstrap-node="${LIGHTPUSH_BOOTSTRAP}"}
  METRICS_PORT=--metrics-port="${PUBLISHER_METRICS_PORT:-8003}"
fi

if [ "${FUNCTION}" = "RECEIVER" ]; then
  FUNCTION=--test-func=RECEIVER
  SERIVCE_NODE_ADDR=${FILTER_SERVICE_PEER:-${FILTER_BOOTSTRAP:-}}
  NODE_ARG=${FILTER_SERVICE_PEER:+--service-node="${FILTER_SERVICE_PEER}"}
  NODE_ARG=${NODE_ARG:---bootstrap-node="${FILTER_BOOTSTRAP}"}
  METRICS_PORT=--metrics-port="${RECEIVER_METRICS_PORT:-8003}"
fi

if [ -z "${SERIVCE_NODE_ADDR}" ]; then
  echo "Service/Bootstrap node peer_id or enr is not provided. Failing"
  exit 1
fi

MY_EXT_IP=$(wget -qO- --no-check-certificate https://api4.ipify.org)

if [ -n "${SHARD}" ]; then
  SHARD=--shard="${SHARD}"
else
  SHARD=--shard="0"
fi

if [ -n "${CONTENT_TOPIC}" ]; then
  CONTENT_TOPIC=--content-topic="${CONTENT_TOPIC}"
fi

if [ -n "${CLUSTER_ID}" ]; then
  CLUSTER_ID=--cluster-id="${CLUSTER_ID}"
fi

if [ -n "${START_PUBLISHING_AFTER_SECS}" ]; then
  START_PUBLISHING_AFTER_SECS=--start-publishing-after="${START_PUBLISHING_AFTER_SECS}"
fi

if [ -n "${MIN_MESSAGE_SIZE}" ]; then
  MIN_MESSAGE_SIZE=--min-test-msg-size="${MIN_MESSAGE_SIZE}"
fi

if [ -n "${MAX_MESSAGE_SIZE}" ]; then
  MAX_MESSAGE_SIZE=--max-test-msg-size="${MAX_MESSAGE_SIZE}"
fi


if [ -n "${NUM_MESSAGES}" ]; then
  NUM_MESSAGES=--num-messages="${NUM_MESSAGES}"
fi

if [ -n "${MESSAGE_INTERVAL_MILLIS}" ]; then
  MESSAGE_INTERVAL_MILLIS=--message-interval="${MESSAGE_INTERVAL_MILLIS}"
fi

if [ -n "${LOG_LEVEL}" ]; then
  LOG_LEVEL=--log-level=${LOG_LEVEL}
else
  LOG_LEVEL=--log-level=INFO
fi

echo "Running binary: ${BINARY_PATH}"
echo "Node function is: ${FUNCTION}"
echo "Using service/bootstrap node as: ${NODE_ARG}"
echo "My external IP: ${MY_EXT_IP}"

exec "${BINARY_PATH}"\
  --nat=extip:${MY_EXT_IP}\
  --test-peers\
  ${LOG_LEVEL}\
  ${NODE_ARG}\
  ${MESSAGE_INTERVAL_MILLIS}\
  ${NUM_MESSAGES}\
  ${SHARD}\
  ${CONTENT_TOPIC}\
  ${CLUSTER_ID}\
  ${FUNCTION}\
  ${START_PUBLISHING_AFTER_SECS}\
  ${MIN_MESSAGE_SIZE}\
  ${MAX_MESSAGE_SIZE}\
  ${METRICS_PORT}

118 apps/liteprotocoltester/run_tester_node_on_fleet.sh Normal file
@@ -0,0 +1,118 @@
#!/bin/sh

#set -x
#echo "$@"

if test -f .env; then
  echo "Using .env file"
  . $(pwd)/.env
fi


echo "I am a lite-protocol-tester node"

BINARY_PATH=$1

if [ ! -x "${BINARY_PATH}" ]; then
  echo "Invalid binary path '${BINARY_PATH}'. Failing"
  exit 1
fi

if [ "${2}" = "--help" ]; then
  echo "You might want to check nwaku/apps/liteprotocoltester/README.md"
  exec "${BINARY_PATH}" --help
  exit 0
fi

FUNCTION=$2
if [ "${FUNCTION}" = "SENDER" ]; then
  FUNCTION="--test-func=SENDER --lightpush-version=LEGACY"
  SERIVCE_NODE_ADDR=${LIGHTPUSH_SERVICE_PEER:-${LIGHTPUSH_BOOTSTRAP:-}}
  NODE_ARG=${LIGHTPUSH_SERVICE_PEER:+--service-node="${LIGHTPUSH_SERVICE_PEER}"}
  NODE_ARG=${NODE_ARG:---bootstrap-node="${LIGHTPUSH_BOOTSTRAP}"}
  METRICS_PORT=--metrics-port="${PUBLISHER_METRICS_PORT:-8003}"
fi

if [ "${FUNCTION}" = "SENDERV3" ]; then
  FUNCTION="--test-func=SENDER --lightpush-version=V3"
  SERIVCE_NODE_ADDR=${LIGHTPUSH_SERVICE_PEER:-${LIGHTPUSH_BOOTSTRAP:-}}
  NODE_ARG=${LIGHTPUSH_SERVICE_PEER:+--service-node="${LIGHTPUSH_SERVICE_PEER}"}
  NODE_ARG=${NODE_ARG:---bootstrap-node="${LIGHTPUSH_BOOTSTRAP}"}
  METRICS_PORT=--metrics-port="${PUBLISHER_METRICS_PORT:-8003}"
fi

if [ "${FUNCTION}" = "RECEIVER" ]; then
  FUNCTION=--test-func=RECEIVER
  SERIVCE_NODE_ADDR=${FILTER_SERVICE_PEER:-${FILTER_BOOTSTRAP:-}}
  NODE_ARG=${FILTER_SERVICE_PEER:+--service-node="${FILTER_SERVICE_PEER}"}
  NODE_ARG=${NODE_ARG:---bootstrap-node="${FILTER_BOOTSTRAP}"}
  METRICS_PORT=--metrics-port="${RECEIVER_METRICS_PORT:-8003}"
fi

if [ -z "${SERIVCE_NODE_ADDR}" ]; then
  echo "Service/Bootstrap node peer_id or enr is not provided. Failing"
  exit 1
fi

MY_EXT_IP=$(wget -qO- --no-check-certificate https://api4.ipify.org)

if [ -n "${SHARD}" ]; then
  SHARD=--shard=${SHARD}
else
  SHARD=--shard=0
fi

if [ -n "${CONTENT_TOPIC}" ]; then
  CONTENT_TOPIC=--content-topic="${CONTENT_TOPIC}"
fi

if [ -n "${CLUSTER_ID}" ]; then
  CLUSTER_ID=--cluster-id="${CLUSTER_ID}"
fi

if [ -n "${START_PUBLISHING_AFTER}" ]; then
  START_PUBLISHING_AFTER=--start-publishing-after="${START_PUBLISHING_AFTER}"
fi

if [ -n "${MIN_MESSAGE_SIZE}" ]; then
  MIN_MESSAGE_SIZE=--min-test-msg-size="${MIN_MESSAGE_SIZE}"
fi

if [ -n "${MAX_MESSAGE_SIZE}" ]; then
  MAX_MESSAGE_SIZE=--max-test-msg-size="${MAX_MESSAGE_SIZE}"
fi


if [ -n "${NUM_MESSAGES}" ]; then
  NUM_MESSAGES=--num-messages="${NUM_MESSAGES}"
fi

if [ -n "${MESSAGE_INTERVAL_MILLIS}" ]; then
  MESSAGE_INTERVAL_MILLIS=--message-interval="${MESSAGE_INTERVAL_MILLIS}"
fi

if [ -n "${LOG_LEVEL}" ]; then
  LOG_LEVEL=--log-level=${LOG_LEVEL}
else
  LOG_LEVEL=--log-level=INFO
fi

echo "Running binary: ${BINARY_PATH}"
echo "Node function is: ${FUNCTION}"
echo "Using service/bootstrap node as: ${NODE_ARG}"
echo "My external IP: ${MY_EXT_IP}"

exec "${BINARY_PATH}"\
  --nat=extip:${MY_EXT_IP}\
  ${LOG_LEVEL}\
  ${NODE_ARG}\
  ${MESSAGE_INTERVAL_MILLIS}\
  ${NUM_MESSAGES}\
  ${SHARD}\
  ${CONTENT_TOPIC}\
  ${CLUSTER_ID}\
  ${FUNCTION}\
  ${START_PUBLISHING_AFTER}\
  ${MIN_MESSAGE_SIZE}\
  ${MAX_MESSAGE_SIZE}\
  ${METRICS_PORT}

223 apps/liteprotocoltester/service_peer_management.nim Normal file
@@ -0,0 +1,223 @@
{.push raises: [].}

import
  std/[options, net, sysrand, random, strformat, strutils, sequtils],
  chronicles,
  chronos,
  metrics,
  libbacktrace,
  libp2p/crypto/crypto,
  confutils,
  libp2p/wire

import
  tools/confutils/cli_args,
  waku/[
    common/enr,
    waku_node,
    node/peer_manager,
    waku_lightpush/common,
    waku_relay,
    waku_filter_v2,
    waku_peer_exchange/protocol,
    waku_core/multiaddrstr,
    waku_core/topics/pubsub_topic,
    waku_enr/capabilities,
    waku_enr/sharding,
  ],
  ./tester_config,
  ./diagnose_connections,
  ./lpt_metrics

logScope:
  topics = "service peer mgmt"

randomize()

proc translateToRemotePeerInfo*(peerAddress: string): Result[RemotePeerInfo, void] =
  var peerInfo: RemotePeerInfo
  var enrRec: enr.Record
  if enrRec.fromURI(peerAddress):
    trace "Parsed ENR", enrRec = $enrRec
    peerInfo = enrRec.toRemotePeerInfo().valueOr:
      error "failed to convert ENR to RemotePeerInfo", error = error
      return err()
  else:
    peerInfo = parsePeerInfo(peerAddress).valueOr:
      error "failed to parse node waku peer-exchange peerId", error = error
      return err()

  return ok(peerInfo)

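# --- Usage sketch (illustrative, hypothetical context) ---
# The helper accepts either an ENR URI or a multiaddress string, e.g.:
#
#   let peer = translateToRemotePeerInfo(conf.bootstrapNode).valueOr:
#     error "cannot parse service peer address", node = conf.bootstrapNode
#     return
#
# `conf.bootstrapNode` mirrors how pxLookupServiceNode below uses it.
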
## Retrieve peers from a PeerExchange partner and return one randomly selected
## among the ones successfully dialed.
## Note: This is kept for future use.
proc selectRandomCapablePeer*(
    pm: PeerManager, codec: string, pubsubTopic: PubsubTopic
): Future[Option[RemotePeerInfo]] {.async.} =
  var cap = Capabilities.Filter
  if codec.contains("lightpush"):
    cap = Capabilities.Lightpush
  elif codec.contains("filter"):
    cap = Capabilities.Filter

  var supportivePeers = pm.switch.peerStore.getPeersByCapability(cap)

  trace "Found supportive peers count", count = supportivePeers.len()
  trace "Found supportive peers", supportivePeers = $supportivePeers
  if supportivePeers.len == 0:
    return none(RemotePeerInfo)

  var found = none(RemotePeerInfo)
  while found.isNone() and supportivePeers.len > 0:
    let rndPeerIndex = rand(0 .. supportivePeers.len - 1)
    let randomPeer = supportivePeers[rndPeerIndex]

    info "Dialing random peer",
      idx = $rndPeerIndex, peer = constructMultiaddrStr(randomPeer)

    supportivePeers.delete(rndPeerIndex .. rndPeerIndex)

    let connOpt = pm.dialPeer(randomPeer, codec)
    if (await connOpt.withTimeout(10.seconds)):
      if connOpt.value().isSome():
        found = some(randomPeer)
        info "Dialing successful",
          peer = constructMultiaddrStr(randomPeer), codec = codec
      else:
        info "Dialing failed", peer = constructMultiaddrStr(randomPeer), codec = codec
    else:
      info "Timeout dialing service peer",
        peer = constructMultiaddrStr(randomPeer), codec = codec

  return found

# Debugging PX gathered peers connectivity
proc tryCallAllPxPeers*(
    pm: PeerManager, codec: string, pubsubTopic: PubsubTopic
): Future[Option[seq[RemotePeerInfo]]] {.async.} =
  var capability = Capabilities.Filter
  if codec.contains("lightpush"):
    capability = Capabilities.Lightpush
  elif codec.contains("filter"):
    capability = Capabilities.Filter

  var supportivePeers = pm.switch.peerStore.getPeersByCapability(capability)

  lpt_px_peers.set(supportivePeers.len)
  info "Found supportive peers count", count = supportivePeers.len()
  info "Found supportive peers", supportivePeers = $supportivePeers
  if supportivePeers.len == 0:
    return none(seq[RemotePeerInfo])

  var okPeers: seq[RemotePeerInfo] = @[]

  while supportivePeers.len > 0:
    let rndPeerIndex = rand(0 .. supportivePeers.len - 1)
    let randomPeer = supportivePeers[rndPeerIndex]

    info "Dialing random peer",
      idx = $rndPeerIndex, peer = constructMultiaddrStr(randomPeer)

    supportivePeers.delete(rndPeerIndex, rndPeerIndex)

    let connOpt = pm.dialPeer(randomPeer, codec)
    if (await connOpt.withTimeout(10.seconds)):
      if connOpt.value().isSome():
        okPeers.add(randomPeer)
        info "Dialing successful",
          peer = constructMultiaddrStr(randomPeer),
          agent = randomPeer.getAgent(),
          codec = codec
        lpt_dialed_peers.inc(labelValues = [randomPeer.getAgent()])
      else:
        lpt_dial_failures.inc(labelValues = [randomPeer.getAgent()])
        error "Dialing failed",
          peer = constructMultiaddrStr(randomPeer),
          agent = randomPeer.getAgent(),
          codec = codec
    else:
      lpt_dial_failures.inc(labelValues = [randomPeer.getAgent()])
      error "Timeout dialing service peer",
        peer = constructMultiaddrStr(randomPeer),
        agent = randomPeer.getAgent(),
        codec = codec

  var okPeersStr: string = ""
  for idx, peer in okPeers:
    okPeersStr.add(
      " " & $idx & ". | " & constructMultiaddrStr(peer) & " | agent: " &
        peer.getAgent() & " | protos: " & $peer.protocols & " | caps: " &
        $peer.enr.map(getCapabilities) & "\n"
    )
  echo "PX returned peers found callable for " & codec & " / " & $capability & ":\n"
  echo okPeersStr

  return some(okPeers)

proc pxLookupServiceNode*(
    node: WakuNode, conf: LiteProtocolTesterConf
): Future[Result[bool, void]] {.async.} =
  let codec: string = conf.getCodec()

  if node.wakuPeerExchange.isNil():
    let peerExchangeNode = translateToRemotePeerInfo(conf.bootstrapNode).valueOr:
      error "Failed to parse bootstrap node - cannot use PeerExchange.",
        node = conf.bootstrapNode
      return err()
    info "PeerExchange node", peer = constructMultiaddrStr(peerExchangeNode)
    node.peerManager.addServicePeer(peerExchangeNode, WakuPeerExchangeCodec)

    try:
      await node.mountPeerExchange(some(conf.clusterId))
    except CatchableError:
      error "failed to mount waku peer-exchange protocol",
        error = getCurrentExceptionMsg()
      return err()

  var trialCount = 5
  while trialCount > 0:
    let futPeers = node.fetchPeerExchangePeers(conf.reqPxPeers)
    if not await futPeers.withTimeout(30.seconds):
      notice "Cannot get peers from PX", round = 5 - trialCount
    else:
      futPeers.value().isOkOr:
        info "PeerExchange reported error", error = futPeers.read().error
        return err()

    if conf.testPeers:
      let peersOpt =
        await tryCallAllPxPeers(node.peerManager, codec, conf.getPubsubTopic())
      if peersOpt.isSome():
        info "Found service peers for codec",
          codec = codec, peer_count = peersOpt.get().len()
        return ok(peersOpt.get().len > 0)
    else:
      let peerOpt =
        await selectRandomCapablePeer(node.peerManager, codec, conf.getPubsubTopic())
      if peerOpt.isSome():
        info "Found service peer for codec", codec = codec, peer = peerOpt.get()
        return ok(true)

    await sleepAsync(5.seconds)
    trialCount -= 1

  return err()

var alreadyUsedServicePeers {.threadvar.}: seq[RemotePeerInfo]

## Select service peers by codec from peer store randomly.
proc selectRandomServicePeer*(
    pm: PeerManager, actualPeer: Option[RemotePeerInfo], codec: string
): Result[RemotePeerInfo, void] =
  if actualPeer.isSome():
    alreadyUsedServicePeers.add(actualPeer.get())

  let supportivePeers = pm.switch.peerStore.getPeersByProtocol(codec).filterIt(
    it notin alreadyUsedServicePeers
  )
  if supportivePeers.len == 0:
    return err()

  let rndPeerIndex = rand(0 .. supportivePeers.len - 1)
  return ok(supportivePeers[rndPeerIndex])

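# --- Usage sketch (illustrative, hypothetical names) ---
# Rotate to a fresh service peer after repeated failures; peers recorded in
# alreadyUsedServicePeers above are excluded from the draw:
#
#   actualServicePeer = selectRandomServicePeer(
#       node.peerManager, some(actualServicePeer), WakuLightPushCodec
#     ).valueOr:
#     error "no unused service peer available"
#     return
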
@@ -1,30 +1,42 @@
{.push raises: [].}

import
  std/[sets, tables, strutils, sequtils, options, strformat],
  std/[sets, tables, sequtils, options, strformat],
  chronos/timer as chtimer,
  chronicles,
  results
  chronos,
  results,
  libp2p/peerid

import ./tester_message
from std/sugar import `=>`

import ./tester_message, ./lpt_metrics

type
  ArrivalInfo = object
    arrivedAt: Moment
    prevArrivedAt: Moment
    prevIndex: uint32

  MessageInfo = tuple[msg: ProtocolTesterMessage, info: ArrivalInfo]
  DupStat = tuple[hash: string, dupCount: int, size: uint64]

  StatHelper = object
    prevIndex: uint32
    prevArrivedAt: Moment
    lostIndices: HashSet[uint32]
    seenIndices: HashSet[uint32]
    maxIndex: uint32
    duplicates: OrderedTable[uint32, DupStat]

  Statistics* = object
    received: Table[uint32, MessageInfo]
    firstReceivedIdx*: uint32
    allMessageCount*: uint32
    receivedMessages*: uint32
    misorderCount*: uint32
    lateCount*: uint32
    duplicateCount*: uint32
    minLatency*: Duration
    maxLatency*: Duration
    cummulativeLatency: Duration
    helper: StatHelper

  PerPeerStatistics* = Table[string, Statistics]
@@ -42,24 +54,39 @@ proc init*(T: type Statistics, expectedMessageCount: int = 1000): T =
  result.helper.prevIndex = 0
  result.helper.maxIndex = 0
  result.helper.seenIndices.init(expectedMessageCount)
  result.minLatency = nanos(0)
  result.maxLatency = nanos(0)
  result.cummulativeLatency = nanos(0)
  result.received = initTable[uint32, MessageInfo](expectedMessageCount)
  return result

proc addMessage*(self: var Statistics, msg: ProtocolTesterMessage) =
proc addMessage*(
    self: var Statistics, sender: string, msg: ProtocolTesterMessage, msgHash: string
) =
  if self.allMessageCount == 0:
    self.allMessageCount = msg.count
    self.firstReceivedIdx = msg.index
  elif self.allMessageCount != msg.count:
    warn "Message count mismatch at message",
    error "Message count mismatch at message",
      index = msg.index, expected = self.allMessageCount, got = msg.count

  if not self.helper.seenIndices.contains(msg.index):
    self.helper.seenIndices.incl(msg.index)
  else:
  let currentArrived: MessageInfo = (
    msg: msg,
    info: ArrivalInfo(
      arrivedAt: Moment.now(),
      prevArrivedAt: self.helper.prevArrivedAt,
      prevIndex: self.helper.prevIndex,
    ),
  )
  lpt_receiver_received_bytes.inc(labelValues = [sender], amount = msg.size.int64)
  if self.received.hasKeyOrPut(msg.index, currentArrived):
    inc(self.duplicateCount)
    warn "Duplicate message", index = msg.index
    ## just do not count into stats
    self.helper.duplicates.mgetOrPut(msg.index, (msgHash, 0, msg.size)).dupCount.inc()
    warn "Duplicate message",
      index = msg.index,
      hash = msgHash,
      times_duplicated = self.helper.duplicates[msg.index].dupCount
    lpt_receiver_duplicate_messages_count.inc(labelValues = [sender])
    lpt_receiver_distinct_duplicate_messages_count.set(
      labelValues = [sender], value = self.helper.duplicates.len()
    )
    return

  ## detect misorder arrival and possible lost messages
@ -67,78 +94,134 @@ proc addMessage*(self: var Statistics, msg: ProtocolTesterMessage) =
|
||||
inc(self.misorderCount)
|
||||
warn "Misordered message arrival",
|
||||
index = msg.index, expected = self.helper.prevIndex + 1
|
||||
|
||||
## collect possible lost message indicies
|
||||
for idx in self.helper.prevIndex + 1 ..< msg.index:
|
||||
self.helper.lostIndices.incl(idx)
|
||||
elif self.helper.prevIndex > msg.index:
|
||||
inc(self.lateCount)
|
||||
warn "Late message arrival", index = msg.index, expected = self.helper.prevIndex + 1
|
||||
else:
|
||||
## may remove late arrival
|
||||
self.helper.lostIndices.excl(msg.index)
|
||||
|
||||
## calculate latency
|
||||
let currentArrivedAt = Moment.now()
|
||||
|
||||
let delaySincePrevArrived: Duration = currentArrivedAt - self.helper.prevArrivedAt
|
||||
|
||||
let expectedDelay: Duration = nanos(msg.sincePrev)
|
||||
|
||||
var latency: Duration
|
||||
|
||||
# if we have any latency...
|
||||
if expectedDelay > delaySincePrevArrived:
|
||||
latency = delaySincePrevArrived - expectedDelay
|
||||
if self.minLatency.isZero or (latency < self.minLatency and latency > nanos(0)):
|
||||
self.minLatency = latency
|
||||
if latency > self.maxLatency:
|
||||
self.maxLatency = latency
|
||||
self.cummulativeLatency += latency
|
||||
else:
|
||||
warn "Negative latency detected",
|
||||
index = msg.index, expected = expectedDelay, actual = delaySincePrevArrived
|
||||
|
||||
self.helper.maxIndex = max(self.helper.maxIndex, msg.index)
|
||||
self.helper.prevIndex = msg.index
|
||||
self.helper.prevArrivedAt = currentArrivedAt
|
||||
self.helper.prevArrivedAt = currentArrived.info.arrivedAt
|
||||
inc(self.receivedMessages)
|
||||
lpt_receiver_received_messages_count.inc(labelValues = [sender])
|
||||
lpt_receiver_missing_messages_count.set(
|
||||
labelValues = [sender], value = (self.helper.maxIndex - self.receivedMessages).int64
|
||||
)
|
||||
|
||||
proc addMessage*(
|
||||
self: var PerPeerStatistics, peerId: string, msg: ProtocolTesterMessage
|
||||
self: var PerPeerStatistics,
|
||||
peerId: string,
|
||||
msg: ProtocolTesterMessage,
|
||||
msgHash: string,
|
||||
) =
|
||||
if not self.contains(peerId):
|
||||
self[peerId] = Statistics.init()
|
||||
|
||||
let shortSenderId = PeerId.init(msg.sender).map(p => p.shortLog()).valueOr(msg.sender)
|
||||
|
||||
discard catch:
|
||||
self[peerId].addMessage(msg)
|
||||
self[peerId].addMessage(shortSenderId, msg, msgHash)
|
||||
|
||||
lpt_receiver_sender_peer_count.set(value = self.len)
|
||||
|
||||
proc lastMessageArrivedAt*(self: Statistics): Option[Moment] =
|
||||
if self.receivedMessages > 0:
|
||||
return some(self.helper.prevArrivedAt)
|
||||
return none(Moment)
|
||||
|
||||
proc lossCount*(self: Statistics): uint32 =
|
||||
self.helper.maxIndex - self.receivedMessages
|
||||
|
||||
proc averageLatency*(self: Statistics): Duration =
|
||||
if self.receivedMessages == 0:
|
||||
return nanos(0)
|
||||
return self.cummulativeLatency div self.receivedMessages
|
||||
proc calcLatency*(self: Statistics): tuple[min, max, avg: Duration] =
|
||||
var
|
||||
minLatency = nanos(0)
|
||||
maxLatency = nanos(0)
|
||||
avgLatency = nanos(0)
|
||||
|
||||
if self.receivedMessages > 2:
|
||||
try:
|
||||
var prevArrivedAt = self.received[self.firstReceivedIdx].info.arrivedAt
|
||||
|
||||
for idx, (msg, arrival) in self.received.pairs:
|
||||
if idx <= 1:
|
||||
continue
|
||||
let expectedDelay = nanos(msg.sincePrev)
|
||||
|
||||
## latency will be 0 if arrived in shorter time than expected
|
||||
var latency = arrival.arrivedAt - arrival.prevArrivedAt - expectedDelay
|
||||
|
||||
## will not measure zero latency, it is unlikely to happen but in case happens could
|
||||
## ditort the min latency calulculation as we want to calculate the feasible minimum.
|
||||
if latency > nanos(0):
|
||||
if minLatency == nanos(0):
|
||||
minLatency = latency
|
||||
else:
|
||||
minLatency = min(minLatency, latency)
|
||||
|
||||
maxLatency = max(maxLatency, latency)
|
||||
avgLatency += latency
|
||||
|
||||
avgLatency = avgLatency div (self.receivedMessages - 1)
|
||||
except KeyError:
|
||||
error "Error while calculating latency: " & getCurrentExceptionMsg()
|
||||
|
||||
return (minLatency, maxLatency, avgLatency)
|
||||
|
||||
proc missingIndices*(self: Statistics): seq[uint32] =
  var missing: seq[uint32] = @[]
  for idx in 1 .. self.helper.maxIndex:
    if not self.received.hasKey(idx):
      missing.add(idx)
  return missing

proc distinctDupCount(self: Statistics): int {.inline.} =
  return self.helper.duplicates.len()

proc allDuplicates(self: Statistics): int {.inline.} =
  var total = 0
  for _, (_, dupCount, _) in self.helper.duplicates.pairs:
    total += dupCount
  return total

proc dupMsgs(self: Statistics): string =
  var dupMsgs: string = ""
  for idx, (hash, dupCount, size) in self.helper.duplicates.pairs:
    dupMsgs.add(
      " index: " & $idx & " | hash: " & hash & " | count: " & $dupCount & " | size: " &
        $size & "\n"
    )
  return dupMsgs
proc echoStat*(self: Statistics, peerId: string) =
  let (minL, maxL, avgL) = self.calcLatency()
  lpt_receiver_latencies.set(labelValues = [peerId, "min"], value = minL.nanos())
  lpt_receiver_latencies.set(labelValues = [peerId, "avg"], value = avgL.nanos())
  lpt_receiver_latencies.set(labelValues = [peerId, "max"], value = maxL.nanos())

proc echoStat*(self: Statistics) =
  let printable = catch:
    """*------------------------------------------------------------------------------------------*
| Expected | Received | Target | Loss | Misorder | Late | Duplicate |
|{self.helper.maxIndex:>11} |{self.receivedMessages:>11} |{self.allMessageCount:>11} |{self.lossCount():>11} |{self.misorderCount:>11} |{self.lateCount:>11} |{self.duplicateCount:>11} |
| Expected | Received | Target | Loss | Misorder | Late | |
|{self.helper.maxIndex:>11} |{self.receivedMessages:>11} |{self.allMessageCount:>11} |{self.lossCount():>11} |{self.misorderCount:>11} |{self.lateCount:>11} | |
*------------------------------------------------------------------------------------------*
| Latency stat: |
| avg latency: {$self.averageLatency():<73}|
| min latency: {$self.maxLatency:<73}|
| max latency: {$self.minLatency:<73}|
| min latency: {$minL:<73}|
| avg latency: {$avgL:<73}|
| max latency: {$maxL:<73}|
*------------------------------------------------------------------------------------------*
| Duplicate stat: |
| distinct duplicate messages: {$self.distinctDupCount():<57}|
| sum duplicates : {$self.allDuplicates():<57}|
Duplicated messages:
{self.dupMsgs()}
*------------------------------------------------------------------------------------------*
| Lost indices: |
| {self.missingIndices()} |
*------------------------------------------------------------------------------------------*""".fmt()

  if printable.isErr():
    echo "Error while printing statistics: " & printable.error().msg
  else:
    echo printable.get()
  echo printable.valueOr("Error while printing statistics: " & error.msg)
proc jsonStat*(self: Statistics): string =
  let (minL, maxL, avgL) = self.calcLatency()

  let json = catch:
    """{{"expected":{self.helper.maxIndex},
"received": {self.receivedMessages},
@@ -148,25 +231,24 @@ proc jsonStat*(self: Statistics): string =
"late": {self.lateCount},
"duplicate": {self.duplicateCount},
"latency":
  {{"avg": "{self.averageLatency()}",
  "min": "{self.minLatency}",
  "max": "{self.maxLatency}"
  }}
  {{"avg": "{avgL}",
  "min": "{minL}",
  "max": "{maxL}"
  }},
"lostIndices": {self.missingIndices()}
}}""".fmt()
  if json.isErr:
    return "{\"result:\": \"" & json.error.msg & "\"}"

  return json.get()
  return json.valueOr("{\"result:\": \"" & error.msg & "\"}")
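Several hunks in this diff migrate isErr()/get() chains to nim-results combinators, as in the valueOr call that closes jsonStat above. A small standalone sketch of the idiom (parsePort is a hypothetical helper, not from this repo): the block after valueOr runs only on error, with `error` bound inside it, and its last expression becomes the fallback value.

```nim
import std/strutils
import results

# Hypothetical helper, used only to demonstrate the valueOr idiom.
proc parsePort(s: string): Result[int, string] =
  try:
    ok(parseInt(s))
  except ValueError:
    err("not a number: " & s)

when isMainModule:
  let port = parsePort("8654").valueOr:
    echo "falling back, reason: " & error
    8000
  doAssert port == 8654
```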
proc echoStats*(self: var PerPeerStatistics) =
  for peerId, stats in self.pairs:
    let peerLine = catch:
      "Receiver statistics from peer {peerId}".fmt()
    if peerLine.isErr:
    peerLine.isOkOr:
      echo "Error while printing statistics"
    else:
      echo peerLine.get()
      stats.echoStat()
      continue
    echo peerLine.get()
    stats.echoStat(peerId)

proc jsonStats*(self: PerPeerStatistics): string =
  try:
@@ -189,14 +271,58 @@ proc jsonStats*(self: PerPeerStatistics): string =
      "{\"result:\": \"Error while generating json stats: " & getCurrentExceptionMsg() &
      "\"}"
proc checkIfAllMessagesReceived*(self: PerPeerStatistics): bool =
proc lastMessageArrivedAt*(self: PerPeerStatistics): Option[Moment] =
  var lastArrivedAt = Moment.init(0, Millisecond)
  for stat in self.values:
    let lastMsgFromPeerAt = stat.lastMessageArrivedAt().valueOr:
      continue

    if lastMsgFromPeerAt > lastArrivedAt:
      lastArrivedAt = lastMsgFromPeerAt

  if lastArrivedAt == Moment.init(0, Millisecond):
    return none(Moment)

  return some(lastArrivedAt)

proc checkIfAllMessagesReceived*(
    self: PerPeerStatistics, maxWaitForLastMessage: Duration
): Future[bool] {.async.} =
  # if no peers have sent messages yet, assume we have just started.
  if self.len == 0:
    return false

  # check if numerically all messages are received.
  # this suggests we have already received at least one message from one peer
  var isAllMessagesReceived = true
  for stat in self.values:
    if (stat.allMessageCount == 0 and stat.receivedMessages == 0) or
        stat.receivedMessages < stat.allMessageCount:
        stat.helper.maxIndex < stat.allMessageCount:
      isAllMessagesReceived = false
      break

  if not isAllMessagesReceived:
    # if not all messages are received we still need to check whether the last message
    # arrived within a time frame, to avoid endless waiting when the publishers have already quit.
    let lastMessageAt = self.lastMessageArrivedAt()
    if lastMessageAt.isNone():
      return false

    # the last message shall have arrived within the time limit
    if Moment.now() - lastMessageAt.get() < maxWaitForLastMessage:
      return false
    else:
      info "No message since max wait time", maxWait = $maxWaitForLastMessage

  ## Ok, we see the last message arrived from all peers,
  ## let's check if all messages are received
  ## and if not, wait another 20 secs to give the system a chance to deliver them.
  var shallWait = false
  for stat in self.values:
    if stat.receivedMessages < stat.allMessageCount:
      shallWait = true

  if shallWait:
    await sleepAsync(20.seconds)

  return true
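A hypothetical caller sketch for the async completeness check above, assuming the statistics module is imported; the 2-second poll interval is an assumption, not taken from the tester itself:

```nim
# Sketch only: poll checkIfAllMessagesReceived until it reports completion.
import chronos

proc waitUntilAllReceived(
    stats: PerPeerStatistics, maxWaitForLastMessage: Duration
) {.async.} =
  while not await stats.checkIfAllMessagesReceived(maxWaitForLastMessage):
    await sleepAsync(2.seconds) # assumed poll interval
```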
@@ -1,8 +1,6 @@
import
  std/[strutils, strformat],
  results,
  chronos,
  regex,
  confutils,
  confutils/defs,
  confutils/std/net,
@@ -11,28 +9,29 @@ import
  libp2p/crypto/crypto,
  libp2p/crypto/secp,
  libp2p/multiaddress,
  nimcrypto/utils,
  secp256k1,
  json
  secp256k1

import
  waku/[
    common/confutils/envvar/defs as confEnvvarDefs,
    common/confutils/envvar/std/net as confEnvvarNet,
    common/logging,
    factory/external_config,
    waku/waku_core,
  ]
  ../../tools/confutils/
    [cli_args, envvar as confEnvvarDefs, envvar_net as confEnvvarNet],
  waku/[common/logging, waku_core, waku_core/topics/pubsub_topic]

export confTomlDefs, confTomlNet, confEnvvarDefs, confEnvvarNet

const
  LitePubsubTopic* = PubsubTopic("/waku/2/rs/0/0")
  LitePubsubTopic* = PubsubTopic("/waku/2/rs/66/0")
  LiteContentTopic* = ContentTopic("/tester/1/light-pubsub-example/proto")
  DefaultMinTestMessageSizeStr* = "1KiB"
  DefaultMaxTestMessageSizeStr* = "150KiB"

type TesterFunctionality* = enum
  SENDER # pumps messages to the network
  RECEIVER # gather and analyze messages from the network

type LightpushVersion* = enum
  LEGACY # legacy lightpush protocol
  V3 # lightpush v3 protocol

type LiteProtocolTesterConf* = object
  configFile* {.
    desc:
@@ -50,14 +49,22 @@ type LiteProtocolTesterConf* = object

  logFormat* {.
    desc:
      "Specifies what kind of logs should be written to stdout. Suported formats: TEXT, JSON",
      "Specifies what kind of logs should be written to stdout. Supported formats: TEXT, JSON",
    defaultValue: logging.LogFormat.TEXT,
    name: "log-format"
  .}: logging.LogFormat

  ## Test configuration
  servicenode* {.desc: "Peer multiaddr of the service node.", name: "service-node".}:
    string
  serviceNode* {.
    desc: "Peer multiaddr of the service node.", defaultValue: "", name: "service-node"
  .}: string

  bootstrapNode* {.
    desc:
      "Peer multiaddr of the bootstrap node. If `service-node` not set, it is used to retrieve potential service nodes of the network.",
    defaultValue: "",
    name: "bootstrap-node"
  .}: string

  nat* {.
    desc:
@@ -72,28 +79,31 @@ type LiteProtocolTesterConf* = object
    name: "test-func"
  .}: TesterFunctionality

  lightpushVersion* {.
    desc: "Version of the sender to use. Supported values: legacy, v3.",
    defaultValue: LightpushVersion.LEGACY,
    name: "lightpush-version"
  .}: LightpushVersion

  numMessages* {.
    desc: "Number of messages to send.", defaultValue: 120, name: "num-messages"
  .}: uint32

  delayMessages* {.
    desc: "Delay between messages in milliseconds.",
    defaultValue: 1000,
    name: "delay-messages"
  startPublishingAfter* {.
    desc: "Wait number of seconds before start publishing messages.",
    defaultValue: 5,
    name: "start-publishing-after"
  .}: uint32

  pubsubTopics* {.
    desc: "Default pubsub topic to subscribe to. Argument may be repeated.",
    defaultValue: @[LitePubsubTopic],
    name: "pubsub-topic"
  .}: seq[PubsubTopic]
  messageInterval* {.
    desc: "Delay between messages in milliseconds.",
    defaultValue: 1000,
    name: "message-interval"
  .}: uint32

  shard* {.desc: "Shards index to subscribe to. ", defaultValue: 0, name: "shard".}:
    uint16

  ## TODO: extend lite protocol tester configuration based on testing needs
  # shards* {.
  #   desc: "Shards index to subscribe to [0..MAX_SHARDS-1]. Argument may be repeated.",
  #   defaultValue: @[],
  #   name: "shard"
  # .}: seq[uint16]
  contentTopics* {.
    desc: "Default content topic to subscribe to. Argument may be repeated.",
    defaultValue: @[LiteContentTopic],
@@ -105,8 +115,21 @@ type LiteProtocolTesterConf* = object
      "Cluster id that the node is running in. Node in a different cluster id is disconnected.",
    defaultValue: 0,
    name: "cluster-id"
  .}: uint32
  .}: uint16

  minTestMessageSize* {.
    desc:
      "Minimum message size. Accepted units: KiB, KB, and B. e.g. 1024KiB; 1500 B; etc.",
    defaultValue: DefaultMinTestMessageSizeStr,
    name: "min-test-msg-size"
  .}: string

  maxTestMessageSize* {.
    desc:
      "Maximum message size. Accepted units: KiB, KB, and B. e.g. 1024KiB; 1500 B; etc.",
    defaultValue: DefaultMaxTestMessageSizeStr,
    name: "max-test-msg-size"
  .}: string

  ## Tester REST service configuration
  restAddress* {.
    desc: "Listening address of the REST HTTP server.",
@@ -114,12 +137,31 @@ type LiteProtocolTesterConf* = object
    name: "rest-address"
  .}: IpAddress

  testPeers* {.
    desc: "Run dial test on gathered PeerExchange peers.",
    defaultValue: false,
    name: "test-peers"
  .}: bool

  reqPxPeers* {.
    desc: "Number of peers to request on PeerExchange.",
    defaultValue: 100,
    name: "req-px-peers"
  .}: uint16

  restPort* {.
    desc: "Listening port of the REST HTTP server.",
    defaultValue: 8654,
    name: "rest-port"
  .}: uint16

  fixedServicePeer* {.
    desc:
      "Prevent changing the service peer in case of failures, the full test will stick to the first service peer in use.",
    defaultValue: false,
    name: "fixed-service-peer"
  .}: bool

  restAllowOrigin* {.
    desc:
      "Allow cross-origin requests from the specified origin." &
@@ -129,6 +171,12 @@ type LiteProtocolTesterConf* = object
    name: "rest-allow-origin"
  .}: seq[string]

  metricsPort* {.
    desc: "Listening port of the Metrics HTTP server.",
    defaultValue: 8003,
    name: "metrics-port"
  .}: uint16

{.push warning[ProveInit]: off.}

proc load*(T: type LiteProtocolTesterConf, version = ""): ConfResult[T] =
@@ -138,11 +186,23 @@ proc load*(T: type LiteProtocolTesterConf, version = ""): ConfResult[T] =
      secondarySources = proc(
          conf: LiteProtocolTesterConf, sources: auto
      ) {.gcsafe, raises: [ConfigurationError].} =
        sources.addConfigFile(Envvar, InputFile("liteprotocoltester"))
      ,
        sources.addConfigFile(Envvar, InputFile("liteprotocoltester")),
    )
    ok(conf)
  except CatchableError:
    err(getCurrentExceptionMsg())

proc getPubsubTopic*(conf: LiteProtocolTesterConf): PubsubTopic =
  return $RelayShard(clusterId: conf.clusterId, shardId: conf.shard)

proc getCodec*(conf: LiteProtocolTesterConf): string =
  return
    if conf.testFunc == TesterFunctionality.RECEIVER:
      WakuFilterSubscribeCodec
    else:
      if conf.lightpushVersion == LightpushVersion.LEGACY:
        WakuLegacyLightPushCodec
      else:
        WakuLightPushCodec

{.pop.}
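getPubsubTopic above renders a RelayShard; the resulting static-sharding topic string has the shape "/waku/2/rs/<cluster-id>/<shard-id>". A plain-string sketch of the same formatting, checked against the LitePubsubTopic constant from this diff:

```nim
# Sketch of the pubsub topic format produced via RelayShard above.
proc shardTopic(clusterId, shardId: uint16): string =
  "/waku/2/rs/" & $clusterId & "/" & $shardId

when isMainModule:
  doAssert shardTopic(66, 0) == "/waku/2/rs/66/0" # matches LitePubsubTopic
```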
@@ -6,7 +6,7 @@ import
  json_serialization/std/options,
  json_serialization/lexer

import ../../waku/waku_api/rest/serdes
import waku/rest_api/endpoint/serdes

type ProtocolTesterMessage* = object
  sender*: string
@@ -15,6 +15,7 @@ type ProtocolTesterMessage* = object
  startedAt*: int64
  sinceStart*: int64
  sincePrev*: int64
  size*: uint64

proc writeValue*(
    writer: var JsonWriter[RestJson], value: ProtocolTesterMessage
@@ -26,6 +27,7 @@ proc writeValue*(
  writer.writeField("startedAt", value.startedAt)
  writer.writeField("sinceStart", value.sinceStart)
  writer.writeField("sincePrev", value.sincePrev)
  writer.writeField("size", value.size)
  writer.endRecord()

proc readValue*(
@@ -38,6 +40,7 @@ proc readValue*(
    startedAt: Option[int64]
    sinceStart: Option[int64]
    sincePrev: Option[int64]
    size: Option[uint64]

  for fieldName in readObjectFields(reader):
    case fieldName
@@ -77,8 +80,14 @@ proc readValue*(
          "Multiple `sincePrev` fields found", "ProtocolTesterMessage"
        )
      sincePrev = some(reader.readValue(int64))
    of "size":
      if size.isSome():
        reader.raiseUnexpectedField(
          "Multiple `size` fields found", "ProtocolTesterMessage"
        )
      size = some(reader.readValue(uint64))
    else:
      unrecognizedFieldWarning()
      unrecognizedFieldWarning(value)

  if sender.isNone():
    reader.raiseUnexpectedValue("Field `sender` is missing")
@@ -98,6 +107,9 @@ proc readValue*(
  if sincePrev.isNone():
    reader.raiseUnexpectedValue("Field `sincePrev` is missing")

  if size.isNone():
    reader.raiseUnexpectedValue("Field `size` is missing")

  value = ProtocolTesterMessage(
    sender: sender.get(),
    index: index.get(),
@@ -105,4 +117,5 @@ proc readValue*(
    startedAt: startedAt.get(),
    sinceStart: sinceStart.get(),
    sincePrev: sincePrev.get(),
    size: size.get(),
  )
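The readValue changes above extend the usual json_serialization pattern: each field is parsed into an Option, rejected on a second occurrence, and checked for presence at the end. A reduced, standalone sketch of that guard (setOnce is illustrative only, not part of the library):

```nim
import std/options

# Illustrative helper mirroring the duplicate-field checks in readValue.
proc setOnce[T](slot: var Option[T], v: T, field: string) =
  if slot.isSome():
    raise newException(ValueError, "Multiple `" & field & "` fields found")
  slot = some(v)

when isMainModule:
  var size: Option[uint64]
  setOnce(size, 42'u64, "size")
  doAssert size.get() == 42'u64
```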
29  apps/liteprotocoltester/v3_publisher.nim  Normal file
@@ -0,0 +1,29 @@
import results, options, chronos
import waku/[waku_node, waku_core, waku_lightpush, waku_lightpush/common]
import publisher_base

type V3Publisher* = ref object of PublisherBase

proc new*(T: type V3Publisher, wakuNode: WakuNode): T =
  if isNil(wakuNode.wakuLightpushClient):
    wakuNode.mountLightPushClient()

  return V3Publisher(wakuNode: wakuNode)

method send*(
    self: V3Publisher,
    topic: PubsubTopic,
    message: WakuMessage,
    servicePeer: RemotePeerInfo,
): Future[Result[void, string]] {.async.} =
  # on error it must return the original error desc because the text is used to distinguish error types in metrics.
  discard (
    await self.wakuNode.lightpushPublish(some(topic), message, some(servicePeer))
  ).valueOr:
    if error.code == LightPushErrorCode.NO_PEERS_TO_RELAY and
        error.desc != some("No peers for topic, skipping publish"):
      # TODO: We need better separation of errors happening on the client side or the server side.
      return err("dial_failure")
    else:
      return err($error.code)
  return ok()
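A hypothetical usage sketch for the new V3Publisher, assuming a started WakuNode and a known service peer; the payload, topic constants, and proc name are illustrative, and the returned error strings feed the metrics labels as the comment above notes:

```nim
# Sketch only; assumes the modules above are imported and `node` is running.
proc publishOnce(node: WakuNode, servicePeer: RemotePeerInfo) {.async.} =
  let publisher = V3Publisher.new(node)
  let msg = WakuMessage(
    payload: @[byte 1, 2, 3], contentTopic: LiteContentTopic
  )
  (await publisher.send(LitePubsubTopic, msg, servicePeer)).isOkOr:
    echo "publish failed: " & error
```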
@@ -18,13 +18,26 @@ networkmonitor [OPTIONS]...

The following options are available:

 -l, --log-level                        Sets the log level [=LogLevel.DEBUG].
 -l, --log-level                        Sets the log level [=LogLevel.INFO].
 -t, --timeout                          Timeout to consider that the connection failed [=chronos.seconds(10)].
 -b, --bootstrap-node                   Bootstrap ENR node. Argument may be repeated. [=@[""]].
     --dns-discovery-url                URL for DNS node list in format 'enrtree://<key>@<fqdn>'.
     --pubsub-topic                     Default pubsub topic to subscribe to. Argument may be repeated..
 -r, --refresh-interval                 How often new peers are discovered and connected to (in seconds) [=5].
     --cluster-id                       Cluster id that the node is running in. Node in a different cluster id is
                                        disconnected. [=1].
     --rln-relay                        Enable spam protection through rln-relay: true|false [=true].
     --rln-relay-dynamic                Enable waku-rln-relay with on-chain dynamic group management: true|false
                                        [=true].
     --rln-relay-eth-client-address     HTTP address of an Ethereum testnet client e.g., http://localhost:8540/
                                        [=http://localhost:8540/].
     --rln-relay-eth-contract-address   Address of membership contract on an Ethereum testnet.
     --rln-relay-epoch-sec              Epoch size in seconds used to rate limit RLN memberships. Default is 1 second.
                                        [=1].
     --rln-relay-user-message-limit     Set a user message limit for the rln membership registration. Must be a positive
                                        integer. Default is 1. [=1].
     --metrics-server                   Enable the metrics server: true|false [=true].
     --metrics-server-address           Listening address of the metrics server. [=ValidIpAddress.init("127.0.0.1")].
     --metrics-server-address           Listening address of the metrics server. [=parseIpAddress("127.0.0.1")].
     --metrics-server-port              Listening HTTP port of the metrics server. [=8008].
     --metrics-rest-address             Listening address of the metrics rest server. [=127.0.0.1].
     --metrics-rest-port                Listening HTTP port of the metrics rest server. [=8009].
34  apps/networkmonitor/docker-compose.yml  Normal file
@@ -0,0 +1,34 @@
version: '3.8'
networks:
  monitoring:
    driver: bridge

volumes:
  prometheus-data:
    driver: local
  grafana-data:
    driver: local

# Services definitions
services:

  prometheus:
    image: docker.io/prom/prometheus:latest
    container_name: prometheus
    ports:
      - 9090:9090
    command:
      - '--config.file=/etc/prometheus/prometheus.yaml'
    volumes:
      - ./prometheus.yaml:/etc/prometheus/prometheus.yaml:ro
      - ./data:/prometheus
    restart: unless-stopped

  grafana:
    image: grafana/grafana-oss:latest
    container_name: grafana
    ports:
      - '3000:3000'
    volumes:
      - grafana-data:/var/lib/grafana
    restart: unless-stopped
@@ -1,9 +1,8 @@
{.push raises: [].}

import
  std/[tables, strutils, times, sequtils, random],
  std/[net, tables, strutils, times, sequtils, random, sugar],
  results,
  stew/shims/net,
  chronicles,
  chronicles/topics_registry,
  chronos,
@@ -40,21 +39,38 @@ logScope:
const ReconnectTime = 60
const MaxConnectionRetries = 5
const ResetRetriesAfter = 1200
const AvgPingWindow = 10.0
const PingSmoothing = 0.3
const MaxConnectedPeers = 150

const git_version* {.strdefine.} = "n/a"

proc setDiscoveredPeersCapabilities(routingTableNodes: seq[Node]) =
proc setDiscoveredPeersCapabilities(routingTableNodes: seq[waku_enr.Record]) =
  for capability in @[Relay, Store, Filter, Lightpush]:
    let nOfNodesWithCapability =
      routingTableNodes.countIt(it.record.supportsCapability(capability))
      routingTableNodes.countIt(it.supportsCapability(capability))
    info "capabilities as per ENR waku flag",
      capability = capability, amount = nOfNodesWithCapability
    networkmonitor_peer_type_as_per_enr.set(
      int64(nOfNodesWithCapability), labelValues = [$capability]
    )

proc setDiscoveredPeersCluster(routingTableNodes: seq[Node]) =
  var clusters: CountTable[uint16]

  for node in routingTableNodes:
    let typedRec = node.record.toTyped().valueOr:
      clusters.inc(0)
      continue

    let relayShard = typedRec.relaySharding().valueOr:
      clusters.inc(0)
      continue

    clusters.inc(relayShard.clusterId)

  for (key, value) in clusters.pairs:
    networkmonitor_peer_cluster_as_per_enr.set(int64(value), labelValues = [$key])
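setDiscoveredPeersCluster tallies cluster ids with a CountTable, bucketing unreadable records under cluster 0. A minimal standalone sketch of that tally with assumed cluster ids:

```nim
import std/tables

# Assumed cluster ids; records that fail to parse would be counted as 0.
var clusters: CountTable[uint16]
for cid in [1'u16, 1, 66, 0]:
  clusters.inc(cid)
doAssert clusters[1'u16] == 2
doAssert clusters[66'u16] == 1
```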
proc analyzePeer(
    customPeerInfo: CustomPeerInfoRef,
    peerInfo: RemotePeerInfo,
@@ -87,16 +103,17 @@ proc analyzePeer(
    info "successfully pinged peer", peer = peerInfo, duration = pingDelay.millis
    networkmonitor_peer_ping.observe(pingDelay.millis)

    if customPeerInfo.avgPingDuration == 0.millis:
      customPeerInfo.avgPingDuration = pingDelay
    # We are using a smoothed moving average
    customPeerInfo.avgPingDuration =
      if customPeerInfo.avgPingDuration.millis == 0:
        pingDelay
      else:
        let newAvg =
          (float64(pingDelay.millis) * PingSmoothing) +
            float64(customPeerInfo.avgPingDuration.millis) * (1.0 - PingSmoothing)

        int64(newAvg).millis

    # TODO: check why the calculation ends up losing precision
    customPeerInfo.avgPingDuration = int64(
      (
        float64(customPeerInfo.avgPingDuration.millis) * (AvgPingWindow - 1.0) +
          float64(pingDelay.millis)
      ) / AvgPingWindow
    ).millis
    customPeerInfo.lastPingDuration = pingDelay

    return ok(customPeerInfo.peerId)
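The replacement above swaps a 10-sample moving window (AvgPingWindow) for exponential smoothing: newAvg = sample * α + oldAvg * (1 - α), with α = PingSmoothing = 0.3. A standalone numeric sketch with assumed samples:

```nim
# Exponential smoothing as used for avgPingDuration above; values assumed.
const alpha = 0.3 # PingSmoothing

proc smooth(oldAvgMs, sampleMs: float64): float64 =
  sampleMs * alpha + oldAvgMs * (1.0 - alpha)

when isMainModule:
  var avg = 100.0
  for sample in [120.0, 90.0, 300.0]:
    avg = smooth(avg, sample)
  echo avg # recent pings weigh more than in a plain windowed average
```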
@@ -116,7 +133,7 @@ proc shouldReconnect(customPeerInfo: CustomPeerInfoRef): bool =

# TODO: Split in discover, connect
proc setConnectedPeersMetrics(
    discoveredNodes: seq[Node],
    discoveredNodes: seq[waku_enr.Record],
    node: WakuNode,
    timeout: chronos.Duration,
    restClient: RestClientRef,
@@ -141,20 +158,10 @@ proc setConnectedPeersMetrics(

  # iterate all newly discovered nodes
  for discNode in discoveredNodes:
    let typedRecord = discNode.record.toTypedRecord()
    if not typedRecord.isOk():
      warn "could not convert record to typed record", record = discNode.record
      continue

    let secp256k1 = typedRecord.get().secp256k1
    if not secp256k1.isSome():
      warn "could not get secp256k1 key", typedRecord = typedRecord.get()
      continue

    let peerRes = toRemotePeerInfo(discNode.record)
    let peerRes = toRemotePeerInfo(discNode)

    let peerInfo = peerRes.valueOr:
      warn "error converting record to remote peer info", record = discNode.record
      warn "error converting record to remote peer info", record = discNode
      continue

    # create new entry if new peerId found
@@ -169,16 +176,21 @@ proc setConnectedPeersMetrics(
    let customPeerInfo = allPeers[peerId]

    customPeerInfo.lastTimeDiscovered = currentTime
    customPeerInfo.enr = discNode.record.toURI()
    customPeerInfo.enrCapabilities = discNode.record.getCapabilities().mapIt($it)
    customPeerInfo.enr = discNode.toURI()
    customPeerInfo.enrCapabilities = discNode.getCapabilities().mapIt($it)
    customPeerInfo.discovered += 1

    if not typedRecord.get().ip.isSome():
      warn "ip field is not set", record = typedRecord.get()
    for maddr in peerInfo.addrs:
      if $maddr notin customPeerInfo.maddrs:
        customPeerInfo.maddrs.add $maddr
    let typedRecord = discNode.toTypedRecord().valueOr:
      warn "could not convert record to typed record", record = discNode
      continue
    let ipAddr = typedRecord.ip.valueOr:
      warn "ip field is not set", record = typedRecord
      continue

    let ip = $typedRecord.get().ip.get().join(".")
    customPeerInfo.ip = ip
    customPeerInfo.ip = $ipAddr.join(".")

    # try to ping the peer
    if shouldReconnect(customPeerInfo):
@@ -201,7 +213,7 @@ proc setConnectedPeersMetrics(
      continue
    var customPeerInfo = allPeers[peerIdStr]

    debug "connected to peer", peer = customPeerInfo[]
    info "connected to peer", peer = customPeerInfo[]

    # after connection, get supported protocols
    let lp2pPeerStore = node.switch.peerStore
@@ -301,13 +313,16 @@ proc crawlNetwork(
  while true:
    let startTime = Moment.now()
    # discover new random nodes
    let discoveredNodes = await wakuDiscv5.protocol.queryRandom()
    let discoveredNodes = await wakuDiscv5.findRandomPeers()

    # nodes are nested into buckets, flatten them
    let flatNodes = wakuDiscv5.protocol.routingTable.buckets.mapIt(it.nodes).flatten()

    # populate metrics related to capabilities as advertised by the ENR (see waku field)
    setDiscoveredPeersCapabilities(flatNodes)
    setDiscoveredPeersCapabilities(discoveredNodes)

    # populate cluster metrics as advertised by the ENR
    setDiscoveredPeersCluster(flatNodes)

    # tries to connect to all newly discovered nodes
    # and populates metrics related to peers we could connect
@@ -321,10 +336,10 @@ proc crawlNetwork(
    # populate info from ip addresses
    await populateInfoFromIp(allPeersRef, restClient)

    let totalNodes = flatNodes.len
    let seenNodes = flatNodes.countIt(it.seen)
    let totalNodes = discoveredNodes.len
    #let seenNodes = totalNodes

    info "discovered nodes: ", total = totalNodes, seen = seenNodes
    info "discovered nodes: ", total = totalNodes #, seen = seenNodes

    # Notes:
    # we don't run ipMajorityLoop
@@ -337,14 +352,16 @@ proc crawlNetwork(
    await sleepAsync(crawlInterval.millis - elapsed.millis)

proc retrieveDynamicBootstrapNodes(
    dnsDiscovery: bool, dnsDiscoveryUrl: string, dnsDiscoveryNameServers: seq[IpAddress]
): Result[seq[RemotePeerInfo], string] =
  if dnsDiscovery and dnsDiscoveryUrl != "":
    dnsDiscoveryUrl: string, dnsAddrsNameServers: seq[IpAddress]
): Future[Result[seq[RemotePeerInfo], string]] {.async.} =
  ## Retrieve dynamic bootstrap nodes (DNS discovery)

  if dnsDiscoveryUrl != "":
    # DNS discovery
    debug "Discovering nodes using Waku DNS discovery", url = dnsDiscoveryUrl
    info "Discovering nodes using Waku DNS discovery", url = dnsDiscoveryUrl

    var nameServers: seq[TransportAddress]
    for ip in dnsDiscoveryNameServers:
    for ip in dnsAddrsNameServers:
      nameServers.add(initTAddress(ip, Port(53))) # Assume all servers use port 53

    let dnsResolver = DnsResolver.new(nameServers)
@@ -352,30 +369,25 @@ proc retrieveDynamicBootstrapNodes(
    proc resolver(domain: string): Future[string] {.async, gcsafe.} =
      trace "resolving", domain = domain
      let resolved = await dnsResolver.resolveTxt(domain)
      return resolved[0] # Use only first answer
      if resolved.len > 0:
        return resolved[0] # Use only first answer

    var wakuDnsDiscovery = WakuDnsDiscovery.init(dnsDiscoveryUrl, resolver)
    if wakuDnsDiscovery.isOk():
      return wakuDnsDiscovery.get().findPeers().mapErr(
        proc(e: cstring): string =
          $e
      )
    else:
      warn "Failed to init Waku DNS discovery"
    var wakuDnsDiscovery = WakuDnsDiscovery.init(dnsDiscoveryUrl, resolver).errorOr:
      return (await value.findPeers()).mapErr(e => $e)
    warn "Failed to init Waku DNS discovery"

  debug "No method for retrieving dynamic bootstrap nodes specified."
  info "No method for retrieving dynamic bootstrap nodes specified."
  ok(newSeq[RemotePeerInfo]()) # Return an empty seq by default

proc getBootstrapFromDiscDns(
    conf: NetworkMonitorConf
): Result[seq[enr.Record], string] =
): Future[Result[seq[enr.Record], string]] {.async.} =
  try:
    let dnsNameServers = @[parseIpAddress("1.1.1.1"), parseIpAddress("1.0.0.1")]
    let dynamicBootstrapNodesRes =
      retrieveDynamicBootstrapNodes(true, conf.dnsDiscoveryUrl, dnsNameServers)
    if not dynamicBootstrapNodesRes.isOk():
      error("failed discovering peers from DNS")
    let dynamicBootstrapNodes = dynamicBootstrapNodesRes.get()
    let dynamicBootstrapNodes = (
      await retrieveDynamicBootstrapNodes(conf.dnsDiscoveryUrl, dnsNameServers)
    ).valueOr:
      return err("Failed retrieving dynamic bootstrap nodes: " & $error)

    # select dynamic bootstrap nodes that have an ENR containing a udp port.
    # Discv5 only supports UDP https://github.com/ethereum/devp2p/blob/master/discv5/discv5-theory.md)
@@ -391,11 +403,11 @@ proc getBootstrapFromDiscDns(
      discv5BootstrapEnrs.add(enr)
    return ok(discv5BootstrapEnrs)
  except CatchableError:
    error("failed discovering peers from DNS")
    error("failed discovering peers from DNS: " & getCurrentExceptionMsg())

proc initAndStartApp(
    conf: NetworkMonitorConf
): Result[(WakuNode, WakuDiscoveryV5), string] =
): Future[Result[(WakuNode, WakuDiscoveryV5), string]] {.async.} =
  let bindIp =
    try:
      parseIpAddress("0.0.0.0")
@@ -424,39 +436,36 @@ proc initAndStartApp(
    ipAddr = some(extIp), tcpPort = some(nodeTcpPort), udpPort = some(nodeUdpPort)
  )
  builder.withWakuCapabilities(flags)
  let addShardedTopics = builder.withShardedTopics(conf.pubsubTopics)
  if addShardedTopics.isErr():
    error "failed to add sharded topics to ENR", error = addShardedTopics.error
    return err($addShardedTopics.error)

  let recordRes = builder.build()
  let record =
    if recordRes.isErr():
      return err("cannot build record: " & $recordRes.error)
    else:
      recordRes.get()
  builder.withWakuRelaySharding(
    RelayShards(clusterId: conf.clusterId, shardIds: conf.shards)
  ).isOkOr:
    error "failed to add sharded topics to ENR", error = error
    return err("failed to add sharded topics to ENR: " & $error)

  let record = builder.build().valueOr:
    return err("cannot build record: " & $error)

  var nodeBuilder = WakuNodeBuilder.init()

  nodeBuilder.withNodeKey(key)
  nodeBuilder.withRecord(record)
  nodeBUilder.withSwitchConfiguration(maxConnections = some(MaxConnectedPeers))
  nodeBuilder.withPeerManagerConfig(maxRelayPeers = some(20), shardAware = true)
  let res = nodeBuilder.withNetworkConfigurationDetails(bindIp, nodeTcpPort)
  if res.isErr():
    return err("node building error" & $res.error)
  nodeBuilder.withSwitchConfiguration(maxConnections = some(MaxConnectedPeers))

  let nodeRes = nodeBuilder.build()
  let node =
    if nodeRes.isErr():
      return err("node building error" & $res.error)
    else:
      nodeRes.get()
  nodeBuilder.withPeerManagerConfig(
    maxConnections = MaxConnectedPeers,
    relayServiceRatio = "13.33:86.67",
    shardAware = true,
  )
  nodeBuilder.withNetworkConfigurationDetails(bindIp, nodeTcpPort).isOkOr:
    return err("node building error" & $error)

  var discv5BootstrapEnrsRes = getBootstrapFromDiscDns(conf)
  if discv5BootstrapEnrsRes.isErr():
  let node = nodeBuilder.build().valueOr:
    return err("node building error" & $error)

  var discv5BootstrapEnrs = (await getBootstrapFromDiscDns(conf)).valueOr:
    error("failed discovering peers from DNS")
  var discv5BootstrapEnrs = discv5BootstrapEnrsRes.get()
    quit(QuitFailure)

  # parse enrURIs from the configuration and add the resulting ENRs to the discv5BootstrapEnrs seq
  for enrUri in conf.bootstrapNodes:
@@ -527,28 +536,32 @@ proc subscribeAndHandleMessages(
  else:
    msgPerContentTopic[msg.contentTopic] = 1

  node.subscribe((kind: PubsubSub, topic: pubsubTopic), some(WakuRelayHandler(handler)))
  node.subscribe((kind: PubsubSub, topic: pubsubTopic), WakuRelayHandler(handler)).isOkOr:
    error "failed to subscribe to pubsub topic", pubsubTopic, error
    quit(1)

when isMainModule:
  # known issue: confutils.nim(775, 17) Error: can raise an unlisted exception: ref IOError
  {.pop.}
  let confRes = NetworkMonitorConf.loadConfig()
  if confRes.isErr():
    error "could not load cli variables", err = confRes.error
    quit(1)
  var conf = NetworkMonitorConf.loadConfig().valueOr:
    error "could not load cli variables", error = error
    quit(QuitFailure)

  var conf = confRes.get()
  info "cli flags", conf = conf

  if conf.clusterId == 1:
    let twnClusterConf = ClusterConf.TheWakuNetworkConf()
    let twnNetworkConf = NetworkConf.TheWakuNetworkConf()

    conf.bootstrapNodes = twnClusterConf.discv5BootstrapNodes
    conf.pubsubTopics = twnClusterConf.pubsubTopics
    conf.rlnRelayDynamic = twnClusterConf.rlnRelayDynamic
    conf.rlnRelayEthContractAddress = twnClusterConf.rlnRelayEthContractAddress
    conf.rlnEpochSizeSec = twnClusterConf.rlnEpochSizeSec
    conf.rlnRelayUserMessageLimit = twnClusterConf.rlnRelayUserMessageLimit
    conf.bootstrapNodes = twnNetworkConf.discv5BootstrapNodes
    conf.rlnRelayDynamic = twnNetworkConf.rlnRelayDynamic
    conf.rlnRelayEthContractAddress = twnNetworkConf.rlnRelayEthContractAddress
    conf.rlnEpochSizeSec = twnNetworkConf.rlnEpochSizeSec
    conf.rlnRelayUserMessageLimit = twnNetworkConf.rlnRelayUserMessageLimit
    conf.numShardsInNetwork = twnNetworkConf.shardingConf.numShardsInCluster

    if conf.shards.len == 0:
      conf.shards =
        toSeq(uint16(0) .. uint16(twnNetworkConf.shardingConf.numShardsInCluster - 1))

  if conf.logLevel != LogLevel.NONE:
    setLogLevel(conf.logLevel)
@@ -561,62 +574,65 @@ when isMainModule:

  # start metrics server
  if conf.metricsServer:
    let res =
      startMetricsServer(conf.metricsServerAddress, Port(conf.metricsServerPort))
    if res.isErr():
      error "could not start metrics server", err = res.error
      quit(1)
    startMetricsServer(conf.metricsServerAddress, Port(conf.metricsServerPort)).isOkOr:
      error "could not start metrics server", error = error
      quit(QuitFailure)

  # start rest server for custom metrics
  let res = startRestApiServer(conf, allPeersInfo, msgPerContentTopic)
  if res.isErr():
    error "could not start rest api server", err = res.error
    quit(1)
  startRestApiServer(conf, allPeersInfo, msgPerContentTopic).isOkOr:
    error "could not start rest api server", error = error
    quit(QuitFailure)

  # create a rest client
  let clientRest =
    RestClientRef.new(url = "http://ip-api.com", connectTimeout = ctime.seconds(2))
  if clientRest.isErr():
    error "could not start rest api client", err = res.error
    quit(1)
  let restClient = clientRest.get()
  let restClient = RestClientRef.new(
    url = "http://ip-api.com", connectTimeout = ctime.seconds(2)
  ).valueOr:
    error "could not start rest api client", error = error
    quit(QuitFailure)

  # start waku node
  let nodeRes = initAndStartApp(conf)
  if nodeRes.isErr():
    error "could not start node"
    quit 1
  let (node, discv5) = (waitFor initAndStartApp(conf)).valueOr:
    error "could not start node", error = error
    quit(QuitFailure)

  let (node, discv5) = nodeRes.get()
  (waitFor node.mountRelay()).isOkOr:
    error "failed to mount waku relay protocol: ", error = error
    quit(QuitFailure)

  waitFor node.mountRelay()
  waitFor node.mountLibp2pPing()

  if conf.rlnRelayEthContractAddress != "":
    var onFatalErrorAction = proc(msg: string) {.gcsafe, closure.} =
      ## Action to be taken when an internal error occurs during the node run.
      ## e.g. the connection with the database is lost and not recovered.
      error "Unrecoverable error occurred", error = msg
      quit(QuitFailure)

  if conf.rlnRelay and conf.rlnRelayEthContractAddress != "":
    let rlnConf = WakuRlnConfig(
      rlnRelayDynamic: conf.rlnRelayDynamic,
      rlnRelayCredIndex: some(uint(0)),
      rlnRelayEthContractAddress: conf.rlnRelayEthContractAddress,
      rlnRelayEthClientAddress: string(conf.rlnRelayethClientAddress),
      rlnRelayCredPath: "",
      rlnRelayCredPassword: "",
      rlnRelayTreePath: conf.rlnRelayTreePath,
      rlnEpochSizeSec: conf.rlnEpochSizeSec,
      dynamic: conf.rlnRelayDynamic,
      credIndex: some(uint(0)),
      ethContractAddress: conf.rlnRelayEthContractAddress,
      ethClientUrls: conf.ethClientUrls.mapIt(string(it)),
      epochSizeSec: conf.rlnEpochSizeSec,
      creds: none(RlnRelayCreds),
      onFatalErrorAction: onFatalErrorAction,
    )

    try:
      waitFor node.mountRlnRelay(rlnConf)
    except CatchableError:
      error "failed to setup RLN", err = getCurrentExceptionMsg()
      quit 1
      error "failed to setup RLN", error = getCurrentExceptionMsg()
      quit(QuitFailure)

  node.mountMetadata(conf.clusterId).isOkOr:
    error "failed to mount waku metadata protocol: ", err = error
    quit 1
  node.mountMetadata(conf.clusterId, conf.shards).isOkOr:
    error "failed to mount waku metadata protocol: ", error = error
    quit(QuitFailure)

  for pubsubTopic in conf.pubsubTopics:
    # Subscribe the node to the default pubsubtopic, to count messages
    subscribeAndHandleMessages(node, pubsubTopic, msgPerContentTopic)
  for shard in conf.shards:
    # Subscribe the node to the shards, to count messages
    subscribeAndHandleMessages(
      node, $RelayShard(shardId: shard, clusterId: conf.clusterId), msgPerContentTopic
    )

  # spawn the routine that crawls the network
  # TODO: split into 3 routines (discovery, connections, ip2location)
@@ -5,10 +5,14 @@ import
  chronos,
  std/strutils,
  results,
  stew/shims/net,
  regex

type EthRpcUrl = distinct string
const git_version* {.strdefine.} = "n/a"

type EthRpcUrl* = distinct string

proc `$`*(u: EthRpcUrl): string =
  string(u)

type NetworkMonitorConf* = object
  logLevel* {.
@@ -38,10 +42,17 @@ type NetworkMonitorConf* = object
    name: "dns-discovery-url"
  .}: string

  pubsubTopics* {.
    desc: "Default pubsub topic to subscribe to. Argument may be repeated.",
    name: "pubsub-topic"
  .}: seq[string]
  shards* {.
    desc:
      "Shards index to subscribe to [0..NUM_SHARDS_IN_NETWORK-1]. Argument may be repeated.",
    name: "shard"
  .}: seq[uint16]

  numShardsInNetwork* {.
    desc: "Number of shards in the network",
    name: "num-shards-in-network",
    defaultValue: 8
  .}: uint32

  refreshInterval* {.
    desc: "How often new peers are discovered and connected to (in seconds)",
@@ -55,7 +66,7 @@ type NetworkMonitorConf* = object
      "Cluster id that the node is running in. Node in a different cluster id is disconnected.",
    defaultValue: 1,
    name: "cluster-id"
  .}: uint32
  .}: uint16

  rlnRelay* {.
    desc: "Enable spam protection through rln-relay: true|false",
@@ -69,17 +80,12 @@ type NetworkMonitorConf* = object
    name: "rln-relay-dynamic"
  .}: bool

  rlnRelayTreePath* {.
    desc: "Path to the RLN merkle tree sled db (https://github.com/spacejam/sled)",
    defaultValue: "",
    name: "rln-relay-tree-path"
  .}: string

  rlnRelayEthClientAddress* {.
    desc: "HTTP address of an Ethereum testnet client e.g., http://localhost:8540/",
    defaultValue: "http://localhost:8540/",
  ethClientUrls* {.
    desc:
      "HTTP address of an Ethereum testnet client e.g., http://localhost:8540/. Argument may be repeated.",
    defaultValue: newSeq[EthRpcUrl](0),
    name: "rln-relay-eth-client-address"
  .}: EthRpcUrl
  .}: seq[EthRpcUrl]

  rlnRelayEthContractAddress* {.
    desc: "Address of membership contract on an Ethereum testnet",
@@ -1,7 +1,7 @@
{.push raises: [].}

import
  std/[json, tables, sequtils],
  std/[net, json, tables, sequtils],
  chronicles,
  chronicles/topics_registry,
  chronos,
@@ -10,8 +10,7 @@ import
  metrics/chronos_httpserver,
  presto/route,
  presto/server,
  results,
  stew/shims/net
  results

logScope:
  topics = "networkmonitor_metrics"
@@ -26,6 +25,9 @@ declarePublicGauge networkmonitor_peer_type_as_per_enr,
  "Number of peers supporting each capability according to the ENR",
  labels = ["capability"]

declarePublicGauge networkmonitor_peer_cluster_as_per_enr,
  "Number of peers on each cluster according to the ENR", labels = ["cluster"]

declarePublicGauge networkmonitor_peer_type_as_per_protocol,
  "Number of peers supporting each protocol, after a successful connection) ",
  labels = ["protocols"]
@@ -35,8 +37,7 @@ declarePublicGauge networkmonitor_peer_user_agents,

declarePublicHistogram networkmonitor_peer_ping,
  "Histogram tracking ping durations for discovered peers",
  buckets =
    [100.0, 200.0, 300.0, 400.0, 500.0, 600.0, 700.0, 800.0, 900.0, 1000.0, 2000.0, Inf]
  buckets = [10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 500.0, 800.0, 1000.0, 2000.0, Inf]

declarePublicGauge networkmonitor_peer_count,
  "Number of discovered peers", labels = ["connected"]
@@ -54,6 +55,7 @@ type
    enrCapabilities*: seq[string]
    country*: string
    city*: string
    maddrs*: seq[string]

    # only after ok connection
    lastTimeConnected*: int64
@@ -3,7 +3,6 @@
import
  std/json,
  results,
  stew/shims/net,
  chronicles,
  chronicles/topics_registry,
  chronos,
@@ -32,7 +31,7 @@ proc decodeBytes*(
  try:
    let jsonContent = parseJson(res)
    if $jsonContent["status"].getStr() != "success":
      error "query failed", result = jsonContent
      error "query failed", result = $jsonContent
      return err("query failed")
    return ok(
      NodeLocation(
9  apps/networkmonitor/prometheus.yaml  Normal file
@@ -0,0 +1,9 @@
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['host.docker.internal:8008']
    metrics_path: '/metrics'
@@ -1,12 +1,20 @@
# RPC URL for accessing testnet via HTTP.
# e.g. https://sepolia.infura.io/v3/123aa110320f4aec179150fba1e1b1b1
# e.g. https://linea-sepolia.infura.io/v3/123aa110320f4aec179150fba1e1b1b1
RLN_RELAY_ETH_CLIENT_ADDRESS=

# Private key of testnet where you have sepolia ETH that would be staked into RLN contract.
# Account of testnet where you have Linea Sepolia ETH that would be staked into RLN contract.
ETH_TESTNET_ACCOUNT=

# Private key of testnet where you have Linea Sepolia ETH that would be staked into RLN contract.
# Note: make sure you don't use the '0x' prefix.
# e.g. 0116196e9a8abed42dd1a22eb63fa2a5a17b0c27d716b87ded2c54f1bf192a0b
ETH_TESTNET_KEY=

# Address of the RLN contract on Linea Sepolia.
RLN_CONTRACT_ADDRESS=0xB9cd878C90E49F797B4431fBF4fb333108CB90e6
# Address of the RLN Membership Token contract on Linea Sepolia used to pay for membership.
TOKEN_CONTRACT_ADDRESS=0x185A0015aC462a0aECb81beCc0497b649a64B9ea

# Password you would like to use to protect your RLN membership.
RLN_RELAY_CRED_PASSWORD=

@@ -15,17 +23,20 @@ NWAKU_IMAGE=
NODEKEY=
DOMAIN=
EXTRA_ARGS=
RLN_RELAY_CONTRACT_ADDRESS=
STORAGE_SIZE=


# -------------------- SONDA CONFIG ------------------
METRICS_PORT=8004
NODE_REST_ADDRESS="http://nwaku:8645"
CLUSTER_ID=16
SHARD=32
# Comma separated list of store nodes to poll
STORE_NODES="/dns4/store-01.do-ams3.shards.test.status.im/tcp/30303/p2p/16Uiu2HAmAUdrQ3uwzuE4Gy4D56hX6uLKEeerJAnhKEHZ3DxF1EfT,\
/dns4/store-02.do-ams3.shards.test.status.im/tcp/30303/p2p/16Uiu2HAm9aDJPkhGxc2SFcEACTFdZ91Q5TJjp76qZEhq9iF59x7R,\
/dns4/store-01.gc-us-central1-a.shards.test.status.im/tcp/30303/p2p/16Uiu2HAmMELCo218hncCtTvC2Dwbej3rbyHQcR8erXNnKGei7WPZ,\
/dns4/store-02.gc-us-central1-a.shards.test.status.im/tcp/30303/p2p/16Uiu2HAmJnVR7ZzFaYvciPVafUXuYGLHPzSUigqAmeNw9nJUVGeM,\
/dns4/store-01.ac-cn-hongkong-c.shards.test.status.im/tcp/30303/p2p/16Uiu2HAm2M7xs7cLPc3jamawkEqbr7cUJX11uvY7LxQ6WFUdUKUT,\
STORE_NODES="/dns4/store-01.do-ams3.shards.test.status.im/tcp/30303/p2p/16Uiu2HAmAUdrQ3uwzuE4Gy4D56hX6uLKEeerJAnhKEHZ3DxF1EfT,
/dns4/store-02.do-ams3.shards.test.status.im/tcp/30303/p2p/16Uiu2HAm9aDJPkhGxc2SFcEACTFdZ91Q5TJjp76qZEhq9iF59x7R,
/dns4/store-01.gc-us-central1-a.shards.test.status.im/tcp/30303/p2p/16Uiu2HAmMELCo218hncCtTvC2Dwbej3rbyHQcR8erXNnKGei7WPZ,
/dns4/store-02.gc-us-central1-a.shards.test.status.im/tcp/30303/p2p/16Uiu2HAmJnVR7ZzFaYvciPVafUXuYGLHPzSUigqAmeNw9nJUVGeM,
/dns4/store-01.ac-cn-hongkong-c.shards.test.status.im/tcp/30303/p2p/16Uiu2HAm2M7xs7cLPc3jamawkEqbr7cUJX11uvY7LxQ6WFUdUKUT,
/dns4/store-02.ac-cn-hongkong-c.shards.test.status.im/tcp/30303/p2p/16Uiu2HAm9CQhsuwPR54q27kNj9iaQVfyRzTGKrhFmr94oD8ujU6P"
# Wait time in seconds between two consecutive queries
QUERY_DELAY=60
@@ -1,3 +1,23 @@
FROM python:3.9.18-alpine3.18

RUN pip install requests argparse prometheus_client
ENV METRICS_PORT=8004
ENV NODE_REST_ADDRESS="http://nwaku:8645"
ENV QUERY_DELAY=60
ENV STORE_NODES=""
ENV CLUSTER_ID=1
ENV SHARD=1
ENV HEALTH_THRESHOLD=5

WORKDIR /opt

COPY sonda.py /opt/sonda.py

RUN pip install requests argparse prometheus_client

CMD python -u /opt/sonda.py \
    --metrics-port=$METRICS_PORT \
    --node-rest-address="${NODE_REST_ADDRESS}" \
    --delay-seconds=$QUERY_DELAY \
    --pubsub-topic="/waku/2/rs/${CLUSTER_ID}/${SHARD}" \
    --store-nodes="${STORE_NODES}" \
    --health-threshold=$HEALTH_THRESHOLD
@@ -30,13 +30,13 @@ It works by running a `nwaku` node, publishing a message from it every fixed int
2. If you want to query nodes in `cluster-id` 1, then you have to follow the steps of registering an RLN membership. Otherwise, you can skip this step.

   For it, you need:
   * Ethereum Sepolia WebSocket endpoint. Get one free from [Infura](https://www.infura.io/).
   * Ethereum Sepolia account with some balance <0.01 Eth. Get some [here](https://www.infura.io/faucet/sepolia).
   * Ethereum Linea Sepolia WebSocket endpoint. Get one free from [Infura](https://linea-sepolia.infura.io/).
   * Ethereum Linea Sepolia account with minimum 0.01ETH. Get some [here](https://docs.metamask.io/developer-tools/faucet/).
   * A password to protect your rln membership.

   Fill the `RLN_RELAY_ETH_CLIENT_ADDRESS`, `ETH_TESTNET_KEY` and `RLN_RELAY_CRED_PASSWORD` env variables and run

   ```
   ```
   ./register_rln.sh
   ```
@@ -1,5 +1,4 @@

version: "3.7"
x-logging: &logging
  logging:
    driver: json-file
@@ -10,11 +9,13 @@ x-logging: &logging
x-rln-relay-eth-client-address: &rln_relay_eth_client_address ${RLN_RELAY_ETH_CLIENT_ADDRESS:-} # Add your RLN_RELAY_ETH_CLIENT_ADDRESS after the "-"

x-rln-environment: &rln_env
  RLN_RELAY_CONTRACT_ADDRESS: ${RLN_RELAY_CONTRACT_ADDRESS:-0xCB33Aa5B38d79E3D9Fa8B10afF38AA201399a7e3}
  RLN_RELAY_CONTRACT_ADDRESS: ${RLN_RELAY_CONTRACT_ADDRESS:-0xB9cd878C90E49F797B4431fBF4fb333108CB90e6}
  RLN_RELAY_CRED_PATH: ${RLN_RELAY_CRED_PATH:-} # Optional: Add your RLN_RELAY_CRED_PATH after the "-"
  RLN_RELAY_CRED_PASSWORD: ${RLN_RELAY_CRED_PASSWORD:-} # Optional: Add your RLN_RELAY_CRED_PASSWORD after the "-"

x-sonda-env: &sonda_env
  METRICS_PORT: ${METRICS_PORT:-8004}
  NODE_REST_ADDRESS: ${NODE_REST_ADDRESS:-"http://nwaku:8645"}
  CLUSTER_ID: ${CLUSTER_ID:-1}
  SHARD: ${SHARD:-0}
  STORE_NODES: ${STORE_NODES:-}
@@ -24,7 +25,8 @@ x-sonda-env: &sonda_env
# Services definitions
services:
  nwaku:
    image: ${NWAKU_IMAGE:-harbor.status.im/wakuorg/nwaku:v0.30.1}
    image: ${NWAKU_IMAGE:-harbor.status.im/wakuorg/nwaku:deploy-status-prod}
    container_name: nwaku
    restart: on-failure
    ports:
      - 30304:30304/tcp
@@ -54,29 +56,27 @@ services:
    entrypoint: sh
    command:
      - /opt/run_node.sh
    networks:
      - nwaku-sonda

  sonda:
    build:
      context: .
      dockerfile: Dockerfile.sonda
    container_name: sonda
    ports:
      - 127.0.0.1:8004:8004
      - 127.0.0.1:${METRICS_PORT}:${METRICS_PORT}
    environment:
      <<:
        - *sonda_env
    command: >
      python -u /opt/sonda.py
      --delay-seconds=${QUERY_DELAY}
      --pubsub-topic=/waku/2/rs/${CLUSTER_ID}/${SHARD}
      --store-nodes=${STORE_NODES}
      --health-threshold=${HEALTH_THRESHOLD}
    volumes:
      - ./sonda.py:/opt/sonda.py:Z
    depends_on:
      - nwaku
    networks:
      - nwaku-sonda

  prometheus:
    image: docker.io/prom/prometheus:latest
    container_name: prometheus
    volumes:
      - ./monitoring/prometheus-config.yml:/etc/prometheus/prometheus.yml:Z
    command:
@@ -86,9 +86,12 @@ services:
    restart: on-failure:5
    depends_on:
      - nwaku
    networks:
      - nwaku-sonda

  grafana:
    image: docker.io/grafana/grafana:latest
    container_name: grafana
    env_file:
      - ./monitoring/configuration/grafana-plugins.env
    volumes:
@@ -104,4 +107,8 @@ services:
    restart: on-failure:5
    depends_on:
      - prometheus
    networks:
      - nwaku-sonda

networks:
  nwaku-sonda:
File diff suppressed because it is too large
@@ -24,7 +24,7 @@ fi
docker run -v $(pwd)/keystore:/keystore/:Z harbor.status.im/wakuorg/nwaku:v0.30.1 generateRlnKeystore \
  --rln-relay-eth-client-address=${RLN_RELAY_ETH_CLIENT_ADDRESS} \
  --rln-relay-eth-private-key=${ETH_TESTNET_KEY} \
  --rln-relay-eth-contract-address=0xCB33Aa5B38d79E3D9Fa8B10afF38AA201399a7e3 \
  --rln-relay-eth-contract-address=0xB9cd878C90E49F797B4431fBF4fb333108CB90e6 \
  --rln-relay-cred-path=/keystore/keystore.json \
  --rln-relay-cred-password="${RLN_RELAY_CRED_PASSWORD}" \
  --rln-relay-user-message-limit=20 \
@@ -61,7 +61,6 @@ fi

if [ "${CLUSTER_ID}" -eq 1 ]; then
  RLN_RELAY_CRED_PATH=--rln-relay-cred-path=${RLN_RELAY_CRED_PATH:-/keystore/keystore.json}
  RLN_TREE_PATH=--rln-relay-tree-path="/etc/rln_tree"
fi

if [ -n "${RLN_RELAY_CRED_PASSWORD}" ]; then
@ -7,6 +7,7 @@ import sys
|
||||
import urllib.parse
|
||||
import requests
|
||||
import argparse
|
||||
from datetime import datetime
|
||||
from prometheus_client import Counter, Gauge, start_http_server
|
||||
|
||||
# Content topic where Sona messages are going to be sent
|
||||
@ -25,13 +26,21 @@ node_health = Gauge('node_health', "Binary indicator of a node's health. 1 is he
|
||||
|
||||
# Argparser configuration
|
||||
parser = argparse.ArgumentParser(description='')
|
||||
parser.add_argument('-p', '--pubsub-topic', type=str, help='pubsub topic', default='/waku/2/rs/1/0')
|
||||
parser.add_argument('-d', '--delay-seconds', type=int, help='delay in second between messages', default=60)
|
||||
parser.add_argument('-n', '--store-nodes', type=str, help='comma separated list of store nodes to query', required=True)
|
||||
parser.add_argument('-t', '--health-threshold', type=int, help='consecutive successful store requests to consider a store node healthy', default=5)
|
||||
parser.add_argument('-m', '--metrics-port', type=int, default=8004, help='Port to expose prometheus metrics.')
|
||||
parser.add_argument('-a', '--node-rest-address', type=str, default="http://nwaku:8645", help='Address of the waku node to send messages to.')
|
||||
parser.add_argument('-p', '--pubsub-topic', type=str, default='/waku/2/rs/1/0', help='PubSub topic.')
|
||||
parser.add_argument('-d', '--delay-seconds', type=int, default=60, help='Delay in seconds between messages.')
|
||||
parser.add_argument('-n', '--store-nodes', type=str, required=True, help='Comma separated list of store nodes to query.')
|
||||
parser.add_argument('-t', '--health-threshold', type=int, default=5, help='Consecutive successful store requests to consider a store node healthy.')
|
||||
args = parser.parse_args()
|
||||
|
||||
|
||||
# Logs a message including the current UTC time
def log_with_utc(message):
    utc_time = datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S")
    print(f"[{utc_time} UTC] {message}")


# Sends a Sonda message. Returns True if successful, False otherwise
def send_sonda_msg(rest_address, pubsub_topic, content_topic, timestamp):
    message = "Hi, I'm Sonda"

@@ -47,14 +56,14 @@ def send_sonda_msg(rest_address, pubsub_topic, content_topic, timestamp):
    url = f'{rest_address}/relay/v1/messages/{encoded_pubsub_topic}'
    headers = {'content-type': 'application/json'}

    print(f'Waku REST API: {url} PubSubTopic: {pubsub_topic}, ContentTopic: {content_topic}')
    log_with_utc(f'Sending Sonda message via REST: {url} PubSubTopic: {pubsub_topic}, ContentTopic: {content_topic}, timestamp: {timestamp}')

    try:
        start_time = time.time()
        response = requests.post(url, json=body, headers=headers, timeout=10)
        elapsed_seconds = time.time() - start_time

        print(f'Response from {rest_address}: status:{response.status_code} content:{response.text} [{elapsed_seconds:.4f} s.]')
        log_with_utc(f'Response from {rest_address}: status:{response.status_code} content:{response.text} [{elapsed_seconds:.4f} s.]')

        if response.status_code == 200:
            successful_sonda_msgs.inc()

@@ -62,7 +71,7 @@ def send_sonda_msg(rest_address, pubsub_topic, content_topic, timestamp):
        else:
            response.raise_for_status()
    except requests.RequestException as e:
        print(f'Error sending request: {e}')
        log_with_utc(f'Error sending request: {e}')

    failed_sonda_msgs.inc()
    return False

@@ -74,7 +83,7 @@ def check_store_response(json_response, store_node, timestamp):
    # Check for the store node status code
    if json_response.get('statusCode') != 200:
        error = f"{json_response.get('statusCode')} {json_response.get('statusDesc')}"
        print(f'Failed performing store query {error}')
        log_with_utc(f'Failed performing store query {error}')
        failed_store_queries.labels(node=store_node, error=error).inc()
        consecutive_successful_responses.labels(node=store_node).set(0)

@@ -83,7 +92,7 @@ def check_store_response(json_response, store_node, timestamp):
    messages = json_response.get('messages')
    # If there's no message in the response, increase counters and return
    if not messages:
        print("No messages in store response")
        log_with_utc("No messages in store response")
        empty_store_responses.labels(node=store_node).inc()
        consecutive_successful_responses.labels(node=store_node).set(0)
        return True

@@ -92,12 +101,12 @@ def check_store_response(json_response, store_node, timestamp):
    for message in messages:
        # If the message field is missing in the current message, continue
        if not message.get("message"):
            print("Could not retrieve message")
            log_with_utc("Could not retrieve message")
            continue

        # If a message is found with the same timestamp as the Sonda message, increase counters and return
        if timestamp == message.get('message').get('timestamp'):
            print(f'Found Sonda message in store response node={store_node}')
            log_with_utc(f'Found Sonda message in store response node={store_node}')
            successful_store_queries.labels(node=store_node).inc()
            consecutive_successful_responses.labels(node=store_node).inc()
            return True

@@ -121,16 +130,16 @@ def send_store_query(rest_address, store_node, encoded_pubsub_topic, encoded_con
    s_time = time.time()

    try:
        print(f'Sending store request to {store_node}')
        log_with_utc(f'Sending store request to {store_node}')
        response = requests.get(url, params=params)
    except Exception as e:
        print(f'Error sending request: {e}')
        log_with_utc(f'Error sending request: {e}')
        failed_store_queries.labels(node=store_node, error=str(e)).inc()
        consecutive_successful_responses.labels(node=store_node).set(0)
        return False

    elapsed_seconds = time.time() - s_time
    print(f'Response from {rest_address}: status:{response.status_code} [{elapsed_seconds:.4f} s.]')
    log_with_utc(f'Response from {rest_address}: status:{response.status_code} [{elapsed_seconds:.4f} s.]')

    if response.status_code != 200:
        failed_store_queries.labels(node=store_node, error=f'{response.status_code} {response.content}').inc()

@@ -141,7 +150,7 @@ def send_store_query(rest_address, store_node, encoded_pubsub_topic, encoded_con
    try:
        json_response = response.json()
    except Exception as e:
        print(f'Error parsing response JSON: {e}')
        log_with_utc(f'Error parsing response JSON: {e}')
        failed_store_queries.labels(node=store_node, error="JSON parse error").inc()
        consecutive_successful_responses.labels(node=store_node).set(0)
        return False

@@ -155,7 +164,7 @@ def send_store_query(rest_address, store_node, encoded_pubsub_topic, encoded_con


def send_store_queries(rest_address, store_nodes, pubsub_topic, content_topic, timestamp):
    print(f'Sending store queries. nodes = {store_nodes}')
    log_with_utc(f'Sending store queries. nodes = {store_nodes} timestamp = {timestamp}')
    encoded_pubsub_topic = urllib.parse.quote(pubsub_topic, safe='')
    encoded_content_topic = urllib.parse.quote(content_topic, safe='')

@@ -164,29 +173,28 @@ def send_store_queries(rest_address, store_nodes, pubsub_topic, content_topic, t


def main():
    print(f'Running Sonda with args={args}')
    log_with_utc(f'Running Sonda with args={args}')

    store_nodes = []
    if args.store_nodes is not None:
        store_nodes = [s.strip() for s in args.store_nodes.split(",")]
    print(f'Store nodes to query: {store_nodes}')
    log_with_utc(f'Store nodes to query: {store_nodes}')

    # Start Prometheus HTTP server at port 8004
    start_http_server(8004)
    # Start Prometheus HTTP server at the port set by the CLI (default 8004)
    start_http_server(args.metrics_port)

    node_rest_address = 'http://nwaku:8645'
    while True:
        timestamp = time.time_ns()

        # Send Sonda message
        res = send_sonda_msg(node_rest_address, args.pubsub_topic, SONDA_CONTENT_TOPIC, timestamp)
        res = send_sonda_msg(args.node_rest_address, args.pubsub_topic, SONDA_CONTENT_TOPIC, timestamp)

        print(f'sleeping: {args.delay_seconds} seconds')
        log_with_utc(f'sleeping: {args.delay_seconds} seconds')
        time.sleep(args.delay_seconds)

        # Only send a store query if the message was successfully published
        if res:
            send_store_queries(node_rest_address, store_nodes, args.pubsub_topic, SONDA_CONTENT_TOPIC, timestamp)
            send_store_queries(args.node_rest_address, store_nodes, args.pubsub_topic, SONDA_CONTENT_TOPIC, timestamp)

        # Update node health metrics
        for store_node in store_nodes:
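Since the counters are exported through `prometheus_client`, a running instance can be sanity-checked by scraping its metrics endpoint. A sketch, assuming the default `--metrics-port` of 8004; the exact exposition names (e.g. a `_total` suffix on counters) depend on how the metrics are declared earlier in the script:

```console
$ curl -s http://localhost:8004/metrics | grep -E 'sonda|store'
```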
@@ -32,21 +32,31 @@ $ make wakucanary
It is used as follows. First, against a reachable node that supports both `store` and `filter` protocols:

```console
$ ./build/wakucanary --address=/dns4/node-01.ac-cn-hongkong-c.waku.sandbox.status.im/tcp/30303/p2p/16Uiu2HAmSJvSJphxRdbnigUV5bjRRZFBhTtWFTSyiKaQByCjwmpV --protocol=store --protocol=filter
$ ./build/wakucanary \
    --address=/dns4/store-01.do-ams3.status.staging.status.im/tcp/30303/p2p/16Uiu2HAm3xVDaz6SRJ6kErwC21zBJEZjavVXg7VSkoWzaV1aMA3F \
    --protocol=store \
    --protocol=filter \
    --cluster-id=16 \
    --shard=64
$ echo $?
0
```

A node that can't be reached:
```console
$ ./build/wakucanary --address=/dns4/node-01.ac-cn-hongkong-c.waku.sandbox.status.im/tcp/1000/p2p/16Uiu2HAmSJvSJphxRdbnigUV5bjRRZFBhTtWFTSyiKaQByCjwmpV --protocol=store --protocol=filter
$ ./build/wakucanary \
    --address=/dns4/store-01.do-ams3.status.staging.status.im/tcp/1000/p2p/16Uiu2HAm3xVDaz6SRJ6kErwC21zBJEZjavVXg7VSkoWzaV1aMA3F \
    --protocol=store \
    --protocol=filter \
    --cluster-id=16 \
    --shard=64
$ echo $?
1
```

Note that a domain name can also be used.
```console
$ ./build/wakucanary --address=/dns4/node-01.do-ams3.status.test.statusim.net/tcp/30303/p2p/16Uiu2HAkukebeXjTQ9QDBeNDWuGfbaSg79wkkhK4vPocLgR6QFDf --protocol=store --protocol=filter
--- not defined yet
$ echo $?
0
```
50  apps/wakucanary/scripts/run_waku_canary.sh  (new executable file)
@@ -0,0 +1,50 @@
#!/bin/bash

# This script builds the canary app and performs a basic run, connecting to a well-known peer via TCP.
set -e

PEER_ADDRESS="/dns4/store-01.do-ams3.status.staging.status.im/tcp/30303/p2p/16Uiu2HAm3xVDaz6SRJ6kErwC21zBJEZjavVXg7VSkoWzaV1aMA3F"
PROTOCOL="relay"
LOG_DIR="logs"
CLUSTER="16"
SHARD="64"
TIMESTAMP=$(date +"%Y-%m-%d_%H-%M-%S")
LOG_FILE="$LOG_DIR/canary_run_$TIMESTAMP.log"

mkdir -p "$LOG_DIR"

echo "Building Waku Canary app..."
( cd ../../../ && make wakucanary ) >> "$LOG_FILE" 2>&1

echo "Running Waku Canary against:"
echo " Peer : $PEER_ADDRESS"
echo " Protocol: $PROTOCOL"
echo "Log file : $LOG_FILE"
echo "-----------------------------------"

{
  echo "=== Canary Run: $TIMESTAMP ==="
  echo "Peer : $PEER_ADDRESS"
  echo "Protocol : $PROTOCOL"
  echo "LogLevel : DEBUG"
  echo "-----------------------------------"
  ../../../build/wakucanary \
    --address="$PEER_ADDRESS" \
    --protocol="$PROTOCOL" \
    --cluster-id="$CLUSTER" \
    --shard="$SHARD" \
    --log-level=DEBUG
  echo "-----------------------------------"
  echo "Exit code: $?"
} 2>&1 | tee "$LOG_FILE"

EXIT_CODE=${PIPESTATUS[0]}

if [ $EXIT_CODE -eq 0 ]; then
  echo "SUCCESS: Connected to peer and protocol '$PROTOCOL' is supported."
else
  echo "FAILURE: Could not connect or protocol '$PROTOCOL' is unsupported."
fi

exit $EXIT_CODE
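The script uses relative `../../../` paths for both the build and the binary, so it should be run from its own directory. A typical invocation, with abridged and illustrative output:

```console
$ cd apps/wakucanary/scripts
$ ./run_waku_canary.sh
Building Waku Canary app...
Running Waku Canary against:
 Peer : /dns4/store-01.do-ams3.status.staging.status.im/...
 Protocol: relay
...
SUCCESS: Connected to peer and protocol 'relay' is supported.
```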
46  apps/wakucanary/scripts/test_protocols.sh  (new executable file)
@@ -0,0 +1,46 @@
#!/bin/bash

# === Configuration ===
WAKUCANARY_BINARY="../../../build/wakucanary"
PEER_ADDRESS="/dns4/store-01.do-ams3.status.staging.status.im/tcp/30303/p2p/16Uiu2HAm3xVDaz6SRJ6kErwC21zBJEZjavVXg7VSkoWzaV1aMA3F"
TIMEOUT=5
LOG_LEVEL="info"
PROTOCOLS=("store" "relay" "lightpush" "filter")

# === Logging Setup ===
LOG_DIR="logs"
mkdir -p "$LOG_DIR"
TIMESTAMP=$(date +"%Y-%m-%d_%H-%M-%S")
LOG_FILE="$LOG_DIR/ping_test_$TIMESTAMP.log"

echo "Building Waku Canary app..."
( cd ../../../ && make wakucanary ) >> "$LOG_FILE" 2>&1

echo "Protocol Support Test - $TIMESTAMP" | tee -a "$LOG_FILE"
echo "Peer: $PEER_ADDRESS" | tee -a "$LOG_FILE"
echo "---------------------------------------" | tee -a "$LOG_FILE"

# === Protocol Testing Loop ===
for PROTOCOL in "${PROTOCOLS[@]}"; do
  TIMESTAMP=$(date +"%Y-%m-%d_%H-%M-%S")
  LOG_FILE="$LOG_DIR/ping_test_${PROTOCOL}_$TIMESTAMP.log"

  {
    echo "=== Canary Run: $TIMESTAMP ==="
    echo "Peer : $PEER_ADDRESS"
    echo "Protocol : $PROTOCOL"
    echo "LogLevel : DEBUG"
    echo "-----------------------------------"
    $WAKUCANARY_BINARY \
      --address="$PEER_ADDRESS" \
      --protocol="$PROTOCOL" \
      --log-level=DEBUG
    echo "-----------------------------------"
    echo "Exit code: $?"
  } 2>&1 | tee "$LOG_FILE"

  echo "✅ Log saved to: $LOG_FILE"
  echo ""
done

echo "All protocol checks completed. Logs saved to: $LOG_DIR"
51  apps/wakucanary/scripts/web_socket.sh  (new executable file)
@@ -0,0 +1,51 @@
#!/bin/bash

# This script builds the canary app and performs a basic run, connecting to a well-known peer via WebSocket.
set -e

PEER_ADDRESS="/ip4/127.0.0.1/tcp/7777/ws/p2p/16Uiu2HAm4ng2DaLPniRoZtMQbLdjYYWnXjrrJkGoXWCoBWAdn1tu"
PROTOCOL="relay"
LOG_DIR="logs"
CLUSTER="16"
SHARD="64"
TIMESTAMP=$(date +"%Y-%m-%d_%H-%M-%S")
LOG_FILE="$LOG_DIR/canary_run_$TIMESTAMP.log"

mkdir -p "$LOG_DIR"

echo "Building Waku Canary app..."
( cd ../../../ && make wakucanary ) >> "$LOG_FILE" 2>&1


echo "Running Waku Canary against:"
echo " Peer : $PEER_ADDRESS"
echo " Protocol: $PROTOCOL"
echo "Log file : $LOG_FILE"
echo "-----------------------------------"

{
  echo "=== Canary Run: $TIMESTAMP ==="
  echo "Peer : $PEER_ADDRESS"
  echo "Protocol : $PROTOCOL"
  echo "LogLevel : DEBUG"
  echo "-----------------------------------"
  ../../../build/wakucanary \
    --address="$PEER_ADDRESS" \
    --protocol="$PROTOCOL" \
    --cluster-id="$CLUSTER" \
    --shard="$SHARD" \
    --log-level=DEBUG
  echo "-----------------------------------"
  echo "Exit code: $?"
} 2>&1 | tee "$LOG_FILE"

EXIT_CODE=${PIPESTATUS[0]}

if [ $EXIT_CODE -eq 0 ]; then
  echo "SUCCESS: Connected to peer and protocol '$PROTOCOL' is supported."
else
  echo "FAILURE: Could not connect or protocol '$PROTOCOL' is unsupported."
fi

exit $EXIT_CODE
43  apps/wakucanary/scripts/web_socket_certitficate.sh  (new file)
@@ -0,0 +1,43 @@
#!/bin/bash

WAKUCANARY_BINARY="../../../build/wakucanary"
NODE_PORT=60000
WSS_PORT=$((NODE_PORT + 1000))
PEER_ID="16Uiu2HAmB6JQpewXScGoQ2syqmimbe4GviLxRwfsR8dCpwaGBPSE"
PROTOCOL="relay"
KEY_PATH="./certs/client.key"
CERT_PATH="./certs/client.crt"
LOG_DIR="logs"
mkdir -p "$LOG_DIR"

PEER_ADDRESS="/ip4/127.0.0.1/tcp/$WSS_PORT/wss/p2p/$PEER_ID"
TIMESTAMP=$(date +"%Y-%m-%d_%H-%M-%S")
LOG_FILE="$LOG_DIR/wss_cert_test_$TIMESTAMP.log"

echo "Building Waku Canary app..."
( cd ../../../ && make wakucanary ) >> "$LOG_FILE" 2>&1

{
  echo "=== Canary WSS + Cert Test ==="
  echo "Timestamp : $TIMESTAMP"
  echo "Node Port : $NODE_PORT"
  echo "WSS Port : $WSS_PORT"
  echo "Peer ID : $PEER_ID"
  echo "Protocol : $PROTOCOL"
  echo "Key Path : $KEY_PATH"
  echo "Cert Path : $CERT_PATH"
  echo "Address : $PEER_ADDRESS"
  echo "------------------------------------------"

  $WAKUCANARY_BINARY \
    --address="$PEER_ADDRESS" \
    --protocol="$PROTOCOL" \
    --log-level=DEBUG \
    --websocket-secure-key-path="$KEY_PATH" \
    --websocket-secure-cert-path="$CERT_PATH"

  echo "------------------------------------------"
  echo "Exit code: $?"
} 2>&1 | tee "$LOG_FILE"

echo "✅ Log saved to: $LOG_FILE"
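The script expects a client key and certificate under `./certs/`. If none are at hand, a self-signed pair can be generated for local testing; a sketch using stock `openssl`, with file names matching the `KEY_PATH` and `CERT_PATH` defaults above:

```console
$ mkdir -p certs
$ openssl req -x509 -newkey rsa:4096 -nodes \
    -keyout certs/client.key -out certs/client.crt \
    -days 365 -subj "/CN=localhost"
```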
@@ -1,8 +1,7 @@
import
  std/[strutils, sequtils, tables],
  std/[strutils, sequtils, tables, strformat],
  confutils,
  chronos,
  stew/shims/net,
  chronicles/topics_registry,
  os
import

@@ -21,6 +20,15 @@ const ProtocolsTable = {
  "relay": "/vac/waku/relay/",
  "lightpush": "/vac/waku/lightpush/",
  "filter": "/vac/waku/filter-subscribe/2",
  "filter-push": "/vac/waku/filter-push/",
  "ipfs-id": "/ipfs/id/",
  "autonat": "/libp2p/autonat/",
  "circuit-relay": "/libp2p/circuit/relay/",
  "metadata": "/vac/waku/metadata/",
  "rendezvous": "/rendezvous/",
  "ipfs-ping": "/ipfs/ping/",
  "peer-exchange": "/vac/waku/peer-exchange/",
  "mix": "mix/1.0.0",
}.toTable

const WebSocketPortOffset = 1000
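With the extended `ProtocolsTable`, the canary can probe libp2p housekeeping protocols alongside the Waku ones. A hedged example exercising some of the newly added entries; the peer address is reused from the README examples and may not actually advertise all of these protocols:

```console
$ ./build/wakucanary \
    --address=/dns4/store-01.do-ams3.status.staging.status.im/tcp/30303/p2p/16Uiu2HAm3xVDaz6SRJ6kErwC21zBJEZjavVXg7VSkoWzaV1aMA3F \
    --protocol=metadata \
    --protocol=ipfs-ping \
    --protocol=peer-exchange \
    --cluster-id=16 \
    --shard=64
$ echo $?
0
```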
@@ -81,7 +89,8 @@ type WakuCanaryConf* = object
  .}: bool

  shards* {.
    desc: "Shards index to subscribe to [0..MAX_SHARDS-1]. Argument may be repeated.",
    desc:
      "Shards index to subscribe to [0..NUM_SHARDS_IN_NETWORK-1]. Argument may be repeated.",
    defaultValue: @[],
    name: "shard",
    abbr: "s"

@@ -104,37 +113,48 @@ proc parseCmdArg*(T: type chronos.Duration, p: string): T =
proc completeCmdArg*(T: type chronos.Duration, val: string): seq[string] =
  return @[]

# checks if rawProtocols (skipping version) are supported in nodeProtocols
proc areProtocolsSupported(
    rawProtocols: seq[string], nodeProtocols: seq[string]
    toValidateProtocols: seq[string], nodeProtocols: seq[string]
): bool =
  ## Checks if all toValidateProtocols are contained in nodeProtocols.
  ## nodeProtocols contains the full list of protocols currently informed by the node under analysis.
  ## toValidateProtocols contains the protocols, without version number, that we want to check if they are supported by the node.
  var numOfSupportedProt: int = 0

  for nodeProtocol in nodeProtocols:
    for rawProtocol in rawProtocols:
      let protocolTag = ProtocolsTable[rawProtocol]
  for rawProtocol in toValidateProtocols:
    let protocolTag = ProtocolsTable[rawProtocol]
    info "Checking if protocol is supported", expected_protocol_tag = protocolTag

    var protocolSupported = false
    for nodeProtocol in nodeProtocols:
      if nodeProtocol.startsWith(protocolTag):
        info "Supported protocol ok", expected = protocolTag, supported = nodeProtocol
        info "The node supports the protocol", supported_protocol = nodeProtocol
        numOfSupportedProt += 1
        protocolSupported = true
        break

  if numOfSupportedProt == rawProtocols.len:
    if not protocolSupported:
      error "The node does not support the protocol", expected_protocol = protocolTag

  if numOfSupportedProt == toValidateProtocols.len:
    return true

  return false

proc pingNode(
    node: WakuNode, peerInfo: RemotePeerInfo
): Future[void] {.async, gcsafe.} =
): Future[bool] {.async, gcsafe.} =
  try:
    let conn = await node.switch.dial(peerInfo.peerId, peerInfo.addrs, PingCodec)
    let pingDelay = await node.libp2pPing.ping(conn)
    info "Peer response time (ms)", peerId = peerInfo.peerId, ping = pingDelay.millis
    return true
  except CatchableError:
    var msg = getCurrentExceptionMsg()
    if msg == "Future operation cancelled!":
      msg = "timedout"
    error "Failed to ping the peer", peer = peerInfo, err = msg
    return false

proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
  let conf: WakuCanaryConf = WakuCanaryConf.load()

@@ -163,12 +183,9 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
    protocols = conf.protocols,
    logLevel = conf.logLevel

  let peerRes = parsePeerInfo(conf.address)
  if peerRes.isErr():
    error "Couldn't parse 'conf.address'", error = peerRes.error
    return 1

  let peer = peerRes.value
  let peer = parsePeerInfo(conf.address).valueOr:
    error "Couldn't parse 'conf.address'", error = error
    quit(QuitFailure)

  let
    nodeKey = crypto.PrivateKey.random(Secp256k1, rng[])[]

@@ -194,27 +211,22 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
  let netConfig = NetConfig.init(
    bindIp = bindIp,
    bindPort = nodeTcpPort,
    wsBindPort = wsBindPort,
    wsBindPort = some(wsBindPort),
    wsEnabled = isWs,
    wssEnabled = isWss,
  )

  var enrBuilder = EnrBuilder.init(nodeKey)

  let relayShards = RelayShards.init(conf.clusterId, conf.shards).valueOr:
    error "Relay shards initialization failed", error = error
    return 1
  enrBuilder.withWakuRelaySharding(relayShards).isOkOr:
    error "Building ENR with relay sharding failed", error = error
    return 1
  enrBuilder.withWakuRelaySharding(
    RelayShards(clusterId: conf.clusterId, shardIds: conf.shards)
  ).isOkOr:
    error "could not initialize ENR with shards", error
    quit(QuitFailure)

  let recordRes = enrBuilder.build()
  let record =
    if recordRes.isErr():
      error "failed to create enr record", error = recordRes.error
      quit(QuitFailure)
    else:
      recordRes.get()
  let record = enrBuilder.build().valueOr:
    error "failed to create enr record", error = error
    quit(QuitFailure)

  if isWss and
      (conf.websocketSecureKeyPath.len == 0 or conf.websocketSecureCertPath.len == 0):

@@ -223,7 +235,7 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
    createDir(CertsDirectory)
    if generateSelfSignedCertificate(certPath, keyPath) != 0:
      error "Error generating key and certificate"
      return 1
      quit(QuitFailure)

  builder.withRecord(record)
  builder.withNetworkConfiguration(netConfig.tryGet())

@@ -232,15 +244,17 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
  )

  let node = builder.build().tryGet()
  node.mountMetadata(conf.clusterId).isOkOr:
    error "failed to mount waku metadata protocol: ", err = error

  if conf.ping:
    try:
      await mountLibp2pPing(node)
    except CatchableError:
      error "failed to mount libp2p ping protocol: " & getCurrentExceptionMsg()
      return 1
      quit(QuitFailure)

  node.mountMetadata(conf.clusterId, conf.shards).isOkOr:
    error "failed to mount metadata protocol", error
    quit(QuitFailure)

  await node.start()

@@ -251,23 +265,34 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
  let timedOut = not await node.connectToNodes(@[peer]).withTimeout(conf.timeout)
  if timedOut:
    error "Timedout after", timeout = conf.timeout
    return 1
    quit(QuitFailure)

  let lp2pPeerStore = node.switch.peerStore
  let conStatus = node.peerManager.peerStore[ConnectionBook][peer.peerId]
  let conStatus = node.peerManager.switch.peerStore[ConnectionBook][peer.peerId]

  var pingSuccess = true
  if conf.ping:
    discard await pingFut
    try:
      pingSuccess = await pingFut
    except CatchableError as exc:
      pingSuccess = false
      error "Ping operation failed or timed out", error = exc.msg

  if conStatus in [Connected, CanConnect]:
    let nodeProtocols = lp2pPeerStore[ProtoBook][peer.peerId]

    if not areProtocolsSupported(conf.protocols, nodeProtocols):
      error "Not all protocols are supported",
        expected = conf.protocols, supported = nodeProtocols
      return 1
      quit(QuitFailure)

    # Check ping result if ping was enabled
    if conf.ping and not pingSuccess:
      error "Node is reachable and supports protocols but ping failed - connection may be unstable"
      quit(QuitFailure)
  elif conStatus == CannotConnect:
    error "Could not connect", peerId = peer.peerId
    return 1
    quit(QuitFailure)
  return 0

when isMainModule:
@@ -9,15 +9,13 @@ import
  system/ansi_c,
  libp2p/crypto/crypto
import
  ../../tools/rln_keystore_generator/rln_keystore_generator,
  ../../tools/rln_db_inspector/rln_db_inspector,
  ../../tools/[rln_keystore_generator/rln_keystore_generator, confutils/cli_args],
  waku/[
    common/logging,
    factory/external_config,
    factory/waku,
    node/health_monitor,
    node/waku_metrics,
    waku_api/rest/builder as rest_server_builder,
    rest_api/endpoint/builder as rest_server_builder,
    waku_core/message/default_values,
  ]

logScope:

@@ -38,63 +36,33 @@ when isMainModule:

  const versionString = "version / git commit hash: " & waku.git_version

  var conf = WakuNodeConf.load(version = versionString).valueOr:
  var wakuNodeConf = WakuNodeConf.load(version = versionString).valueOr:
    error "failure while loading the configuration", error = error
    quit(QuitFailure)

  ## Also called within Waku.init. The call to startRestServerEsentials needs the following line
  logging.setupLog(conf.logLevel, conf.logFormat)
  ## Also called within Waku.new. The call to startRestServerEssentials needs the following line
  logging.setupLog(wakuNodeConf.logLevel, wakuNodeConf.logFormat)

  case conf.cmd
  case wakuNodeConf.cmd
  of generateRlnKeystore:
    let conf = wakuNodeConf.toKeystoreGeneratorConf()
    doRlnKeystoreGenerator(conf)
  of inspectRlnDb:
    doInspectRlnDb(conf)
  of noCommand:
    # NOTE: {.threadvar.} is used to make the global variable GC safe for the closure that uses it
    # It will always be called from the main thread anyway.
    # Ref: https://nim-lang.org/docs/manual.html#threads-gc-safety
    var nodeHealthMonitor {.threadvar.}: WakuNodeHealthMonitor
    nodeHealthMonitor = WakuNodeHealthMonitor()
    nodeHealthMonitor.setOverallHealth(HealthStatus.INITIALIZING)

    let restServer = rest_server_builder.startRestServerEsentials(
      nodeHealthMonitor, conf
    ).valueOr:
      error "Starting esential REST server failed.", error = $error
    let conf = wakuNodeConf.toWakuConf().valueOr:
      error "Waku configuration failed", error = error
      quit(QuitFailure)

    var waku = Waku.init(conf).valueOr:
    var waku = (waitFor Waku.new(conf)).valueOr:
      error "Waku initialization failed", error = error
      quit(QuitFailure)

    waku.restServer = restServer

    nodeHealthMonitor.setNode(waku.node)

    (waitFor startWaku(addr waku)).isOkOr:
      error "Starting waku failed", error = error
      quit(QuitFailure)

    rest_server_builder.startRestServerProtocolSupport(
      restServer, waku.node, waku.wakuDiscv5, conf
    ).isOkOr:
      error "Starting protocols support REST server failed.", error = $error
      quit(QuitFailure)

    waku.metricsServer = waku_metrics.startMetricsServerAndLogging(conf).valueOr:
      error "Starting monitoring and external interfaces failed", error = error
      quit(QuitFailure)

    nodeHealthMonitor.setOverallHealth(HealthStatus.READY)

    debug "Setting up shutdown hooks"
    ## Setup shutdown hooks for this process.
    ## Stop node gracefully on shutdown.

    proc asyncStopper(node: Waku) {.async: (raises: [Exception]).} =
      nodeHealthMonitor.setOverallHealth(HealthStatus.SHUTTING_DOWN)
      await node.stop()
    info "Setting up shutdown hooks"
    proc asyncStopper(waku: Waku) {.async: (raises: [Exception]).} =
      await waku.stop()
      quit(QuitSuccess)

    # Handle Ctrl-C SIGINT
102  ci/Jenkinsfile.lpt  (new file)
@@ -0,0 +1,102 @@
#!/usr/bin/env groovy
library 'status-jenkins-lib@v1.8.17'

pipeline {
  agent {
    docker {
      label 'linuxcontainer'
      image 'harbor.status.im/infra/ci-build-containers:linux-base-1.0.0'
      args '--volume=/var/run/docker.sock:/var/run/docker.sock ' +
           '--user jenkins'
    }
  }

  options {
    timestamps()
    timeout(time: 20, unit: 'MINUTES')
    disableRestartFromStage()
    buildDiscarder(logRotator(
      numToKeepStr: '10',
      daysToKeepStr: '30',
    ))
  }

  parameters {
    string(
      name: 'IMAGE_TAG',
      description: 'Name of Docker tag to push. Optional parameter.',
      defaultValue: 'latest'
    )
    string(
      name: 'IMAGE_NAME',
      description: 'Name of Docker image to push.',
      defaultValue: params.IMAGE_NAME ?: 'wakuorg/liteprotocoltester',
    )
    string(
      name: 'DOCKER_CRED',
      description: 'Name of Docker Registry credential.',
      defaultValue: params.DOCKER_CRED ?: 'harbor-telemetry-robot',
    )
    string(
      name: 'DOCKER_REGISTRY',
      description: 'URL of the Docker Registry.',
      defaultValue: params.DOCKER_REGISTRY ?: 'harbor.status.im'
    )
    string(
      name: 'NIMFLAGS',
      description: 'Flags for Nim compilation.',
      defaultValue: params.NIMFLAGS ?: [
        '--colors:off',
        '-d:disableMarchNative',
        '-d:chronicles_colors:none',
        '-d:insecure',
      ].join(' ')
    )
    choice(
      name: "LOWEST_LOG_LEVEL_ALLOWED",
      choices: ['TRACE', 'DEBUG', 'INFO', 'NOTICE', 'WARN', 'ERROR', 'FATAL'],
      description: "Defines the log level, which will be available at runtime (Chronicles log level)"
    )
  }

  stages {
    stage('Build') {
      steps { script {
        image = docker.build(
          "${DOCKER_REGISTRY}/${params.IMAGE_NAME}:${params.IMAGE_TAG ?: env.GIT_COMMIT.take(8)}",
          "--label=commit='${git.commit()}' " +
          "--label=version='${git.describe('--tags')}' " +
          "--build-arg=MAKE_TARGET='liteprotocoltester' " +
          "--build-arg=NIMFLAGS='${params.NIMFLAGS}' " +
          "--build-arg=LOG_LEVEL='${params.LOWEST_LOG_LEVEL_ALLOWED}' " +
          "--target ${params.IMAGE_TAG == 'deploy' ? 'deployment_lpt' : 'standalone_lpt'} " +
          "--file=apps/liteprotocoltester/Dockerfile.liteprotocoltester.compile " +
          " ."
        )
      } }
    }

    stage('Check') {
      steps { script {
        image.inside('--entrypoint=""') { c ->
          sh '/usr/bin/liteprotocoltester --version'
        }
      } }
    }

    stage('Push') {
      when { expression { params.IMAGE_TAG != '' } }
      steps { script {
        withDockerRegistry([
          credentialsId: params.DOCKER_CRED, url: "https://${DOCKER_REGISTRY}"
        ]) {
          image.push(params.IMAGE_TAG)
        }
      } }
    }
  } // stages

  post {
    cleanup { cleanWs() }
  } // post
} // pipeline
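The Build stage reduces to a single `docker build`, so CI failures can be reproduced locally. A sketch with the pipeline's default parameters substituted by hand; the local tag name is an assumption:

```console
$ docker build \
    --build-arg=MAKE_TARGET='liteprotocoltester' \
    --build-arg=NIMFLAGS='--colors:off -d:disableMarchNative -d:chronicles_colors:none -d:insecure' \
    --build-arg=LOG_LEVEL='TRACE' \
    --target standalone_lpt \
    --file=apps/liteprotocoltester/Dockerfile.liteprotocoltester.compile \
    --tag liteprotocoltester:local .
```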
@@ -36,6 +36,7 @@ pipeline {

  options {
    timestamps()
    disableRestartFromStage()
    /* Prevent Jenkins jobs from running forever */
    timeout(time: 30, unit: 'MINUTES')
    /* Limit builds retained. */

@@ -2,10 +2,18 @@
library 'status-jenkins-lib@v1.8.17'

pipeline {
  agent { label 'linux' }
  agent {
    docker {
      label 'linuxcontainer'
      image 'harbor.status.im/infra/ci-build-containers:linux-base-1.0.0'
      args '--volume=/var/run/docker.sock:/var/run/docker.sock ' +
           '--user jenkins'
    }
  }

  options {
    timestamps()
    disableRestartFromStage()
    timeout(time: 20, unit: 'MINUTES')
    buildDiscarder(logRotator(
      numToKeepStr: '10',

@@ -56,7 +64,12 @@ pipeline {
    )
    booleanParam(
      name: 'DEBUG',
      description: 'Enable debug features (heaptrack).',
      description: 'Enable debug features',
      defaultValue: false
    )
    booleanParam(
      name: 'HEAPTRACK',
      description: 'Enable heaptrack build',
      defaultValue: false
    )
  }

@@ -64,16 +77,33 @@ pipeline {
  stages {
    stage('Build') {
      steps { script {
        image = docker.build(
          "${params.IMAGE_NAME}:${params.IMAGE_TAG ?: env.GIT_COMMIT.take(8)}",
          "--label=build='${env.BUILD_URL}' " +
          "--label=commit='${git.commit()}' " +
          "--label=version='${git.describe('--tags')}' " +
          "--build-arg=MAKE_TARGET='${params.MAKE_TARGET}' " +
          "--build-arg=NIMFLAGS='${params.NIMFLAGS} -d:postgres ' " +
          "--build-arg=LOG_LEVEL='${params.LOWEST_LOG_LEVEL_ALLOWED}' " +
          "--target=${params.DEBUG ? "debug" : "prod"} ."
        )
        if (params.HEAPTRACK) {
          echo 'Building with heaptrack support'
          image = docker.build(
            "${params.IMAGE_NAME}:${params.IMAGE_TAG ?: env.GIT_COMMIT.take(8)}",
            "--label=build='${env.BUILD_URL}' " +
            "--label=commit='${git.commit()}' " +
            "--label=version='${git.describe('--tags')}' " +
            "--build-arg=MAKE_TARGET='${params.MAKE_TARGET}' " +
            "--build-arg=NIMFLAGS='${params.NIMFLAGS} -d:postgres -d:heaptracker ' " +
            "--build-arg=LOG_LEVEL='${params.LOWEST_LOG_LEVEL_ALLOWED}' " +
            "--build-arg=DEBUG='${params.DEBUG ? "1" : "0"} ' " +
            "--build-arg=NIM_COMMIT='NIM_COMMIT=heaptrack_support_v2.0.12' " +
            "--target='debug-with-heaptrack' ."
          )
        } else {
          image = docker.build(
            "${params.IMAGE_NAME}:${params.IMAGE_TAG ?: env.GIT_COMMIT.take(8)}",
            "--label=build='${env.BUILD_URL}' " +
            "--label=commit='${git.commit()}' " +
            "--label=version='${git.describe('--tags')}' " +
            "--build-arg=MAKE_TARGET='${params.MAKE_TARGET}' " +
            "--build-arg=NIMFLAGS='${params.NIMFLAGS} -d:postgres ' " +
            "--build-arg=LOG_LEVEL='${params.LOWEST_LOG_LEVEL_ALLOWED}' " +
            "--build-arg=DEBUG='${params.DEBUG ? "1" : "0"} ' " +
            "--target='prod' ."
          )
        }
      } }
    }
40  config.nims
@@ -1,15 +1,27 @@
import os

if defined(release):
  switch("nimcache", "nimcache/release/$projectName")
else:
  switch("nimcache", "nimcache/debug/$projectName")

if defined(windows):
  switch("passL", "rln.lib")
  switch("define", "postgres=false")

  # Automatically add all vendor subdirectories
  for dir in walkDir("./vendor"):
    if dir.kind == pcDir:
      switch("path", dir.path)
      switch("path", dir.path / "src")

  # disable timestamps in Windows PE headers - https://wiki.debian.org/ReproducibleBuilds/TimestampsInPEBinaries
  switch("passL", "-Wl,--no-insert-timestamp")
  # increase stack size
  switch("passL", "-Wl,--stack,8388608")
  # https://github.com/nim-lang/Nim/issues/4057
  --tlsEmulation:off
  --tlsEmulation:
    off
  if defined(i386):
    # set the IMAGE_FILE_LARGE_ADDRESS_AWARE flag so we can use PAE, if enabled, and access more than 2 GiB of RAM
    switch("passL", "-Wl,--large-address-aware")

@@ -60,14 +72,18 @@ else:
  switch("passC", "-mno-avx512f")
  switch("passL", "-mno-avx512f")

--threads:on
--opt:speed
--excessiveStackTrace:on
--threads:
  on
--opt:
  speed
--excessiveStackTrace:
  on
# enable metric collection
--define:metrics
--define:
  metrics
# for heap-usage-by-instance-type metrics and object base-type strings
--define:nimTypeNames
--define:
  nimTypeNames

switch("define", "withoutPCRE")

@@ -75,13 +91,17 @@ switch("define", "withoutPCRE")
# "--debugger:native" build. It can be increased with `ulimit -n 1024`.
if not defined(macosx) and not defined(android):
  # add debugging symbols and original files and line numbers
  --debugger:native
  --debugger:
    native
  if not (defined(windows) and defined(i386)) and not defined(disable_libbacktrace):
    # light-weight stack traces using libbacktrace and libunwind
    --define:nimStackTraceOverride
    --define:
      nimStackTraceOverride
    switch("import", "libbacktrace")

--define:nimOldCaseObjects # https://github.com/status-im/nim-confutils/issues/9
--define:
  nimOldCaseObjects
# https://github.com/status-im/nim-confutils/issues/9

# `switch("warning[CaseTransition]", "off")` fails with "Error: invalid command line option: '--warning[CaseTransition]'"
switch("warning", "CaseTransition:off")
@@ -1,5 +1,5 @@
# Dockerfile to build a distributable container image from pre-existing binaries
FROM debian:stable-slim as prod
FROM debian:bookworm-slim AS prod

ARG MAKE_TARGET=wakunode2

@@ -13,12 +13,9 @@ EXPOSE 30303 60000 8545

# Referenced in the binary
RUN apt-get update &&\
    apt-get install -y libpcre3 libpq-dev curl iproute2 wget &&\
    apt-get install -y libpq-dev curl iproute2 wget dnsutils &&\
    apt-get clean && rm -rf /var/lib/apt/lists/*

# Fix for 'Error loading shared library libpcre.so.3: No such file or directory'
RUN ln -s /usr/lib/libpcre.so /usr/lib/libpcre.so.3

# Copy to separate location to accommodate different MAKE_TARGET values
ADD ./build/$MAKE_TARGET /usr/local/bin/
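Since the image packages pre-existing binaries, a local build compiles the target first and then assembles the image. A sketch; the Dockerfile path `docker/binaries/Dockerfile` and the tag are assumptions based on the sibling file shown below:

```console
$ make wakunode2
$ docker build \
    --build-arg MAKE_TARGET=wakunode2 \
    --target prod \
    --file docker/binaries/Dockerfile \
    --tag nwaku:local .
```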
60  docker/binaries/Dockerfile.bn.local  (new file)
@@ -0,0 +1,60 @@
# Dockerfile to build a distributable container image from pre-existing binaries
# FROM debian:stable-slim AS prod
FROM ubuntu:24.04 AS prod

ARG MAKE_TARGET=wakunode2

LABEL maintainer="vaclav@status.im"
LABEL source="https://github.com/waku-org/nwaku"
LABEL description="Wakunode: Waku client"
LABEL commit="unknown"

# DevP2P, LibP2P, and JSON RPC ports
EXPOSE 30303 60000 8545

# Referenced in the binary
RUN apt-get update &&\
    apt-get install -y libpq-dev curl iproute2 wget jq dnsutils &&\
    apt-get clean && rm -rf /var/lib/apt/lists/*

# Copy to separate location to accommodate different MAKE_TARGET values
ADD ./build/$MAKE_TARGET /usr/local/bin/

# Copy migration scripts for DB upgrades
ADD ./migrations/ /app/migrations/

# Symlink the correct wakunode binary
RUN ln -sv /usr/local/bin/$MAKE_TARGET /usr/bin/wakunode

ENTRYPOINT ["/usr/bin/wakunode"]

# By default just show help if called without arguments
CMD ["--help"]

# Build debug tools: heaptrack
FROM ubuntu:24.04 AS heaptrack-build

RUN apt update
RUN apt install -y gdb git g++ make cmake zlib1g-dev libboost-all-dev libunwind-dev
RUN git clone https://github.com/KDE/heaptrack.git /heaptrack

WORKDIR /heaptrack/build
# Pin to a commit that builds properly. We will revisit this for new releases
RUN git reset --hard f9cc35ebbdde92a292fe3870fe011ad2874da0ca
RUN cmake -DCMAKE_BUILD_TYPE=Release ..
RUN make -j$(nproc)


# Debug image
FROM prod AS debug-with-heaptrack

RUN apt update
RUN apt install -y gdb libunwind8

# Add heaptrack
COPY --from=heaptrack-build /heaptrack/build/ /heaptrack/build/

ENV LD_LIBRARY_PATH=/heaptrack/build/lib/heaptrack/
RUN ln -s /heaptrack/build/bin/heaptrack /usr/local/bin/heaptrack

ENTRYPOINT ["/heaptrack/build/bin/heaptrack", "/usr/bin/wakunode"]
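Because the `debug-with-heaptrack` stage wraps the wakunode entrypoint in heaptrack, every run of the container produces a heap profile. A hedged build-and-run sketch; the tag is an assumption, and where the profile file lands depends on heaptrack's working directory inside the container:

```console
$ docker build \
    --file docker/binaries/Dockerfile.bn.local \
    --target debug-with-heaptrack \
    --tag nwaku:heaptrack .
$ docker run --rm nwaku:heaptrack --help
```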
@@ -38,6 +38,9 @@ A particular OpenAPI spec can be easily imported into [Postman](https://www.post
curl http://localhost:8645/debug/v1/info -s | jq
```

### Store API

The `page_size` flag in the Store API has a default value of 20 and a max value of 100.
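For illustration, a larger page can be requested per store query. A sketch against a local node; the endpoint version and the exact query-parameter spelling (here `pageSize` on the v3 store API) are assumptions that should be checked against the OpenAPI spec referenced above:

```console
$ curl -s "http://localhost:8645/store/v3/messages?pageSize=100" | jq
```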

### Node configuration

Find details [here](https://github.com/waku-org/nwaku/tree/master/docs/operators/how-to/configure-rest-api.md)
@@ -100,7 +100,7 @@ The following diagram shows the topology used:

For that, the following apps were used:

1. [Waku-publisher.](https://github.com/alrevuelta/waku-publisher/tree/9fb206c14a17dd37d20a9120022e86475ce0503f) This app can publish Relay messages with different numbers of clients
2. [Waku-store-query-generator](https://github.com/Ivansete-status/waku-store-query-generator/tree/19e6455537b6d44199cf0c8558480af5c6788b0d). This app is based on the Waku-publisher but in this case, it can spawn concurrent go-waku Store clients.

That topology is defined in [this](https://github.com/waku-org/test-waku-query/blob/7090cd125e739306357575730d0e54665c279670/docker/docker-compose-manual-binaries.yml) docker-compose file.

@@ -109,7 +109,7 @@ Notice that the two `nwaku` nodes run the very same version, which is compiled l

#### Comparing archive SQLite & Postgres performance in [nwaku-b6dd6899](https://github.com/waku-org/nwaku/tree/b6dd6899030ee628813dfd60ad1ad024345e7b41)

The next results were obtained by running the docker-compose-manual-binaries.yml from [test-waku-query-c078075](https://github.com/waku-org/test-waku-query/tree/c07807597faa781ae6c8c32eefdf48ecac03a7ba) in the sandbox machine (metal-01.he-eu-hel1.wakudev.misc.statusim.net.)
The next results were obtained by running the docker-compose-manual-binaries.yml from [test-waku-query-c078075](https://github.com/waku-org/test-waku-query/tree/c07807597faa781ae6c8c32eefdf48ecac03a7ba) in the sandbox machine (metal-01.he-eu-hel1.misc.wakudev.status.im.)

**Scenario 1**

@@ -155,7 +155,7 @@ In this case, the performance is similar regarding the timings. The store rate i

This nwaku commit is after a few **Postgres** optimizations were applied.

The next results were obtained by running the docker-compose-manual-binaries.yml from [test-waku-query-c078075](https://github.com/waku-org/test-waku-query/tree/c07807597faa781ae6c8c32eefdf48ecac03a7ba) in the sandbox machine (metal-01.he-eu-hel1.wakudev.misc.statusim.net.)
The next results were obtained by running the docker-compose-manual-binaries.yml from [test-waku-query-c078075](https://github.com/waku-org/test-waku-query/tree/c07807597faa781ae6c8c32eefdf48ecac03a7ba) in the sandbox machine (metal-01.he-eu-hel1.misc.wakudev.status.im.)

**Scenario 1**

@@ -181,7 +181,7 @@ It cannot be appreciated but the average **Store** time was 11ms.

**Scenario 3**

**Store rate:** 25 users generating 1 store-req/sec. Notice that the current Store query used generates pagination which provokes more subsequent queries than the 25 req/sec that would be expected without pagination.

**Relay rate:** 1 user generating 10msg/sec, 10KB each.

@@ -217,7 +217,7 @@ The `db-postgres-hammer` is aimed to stress the database from the `select` point

#### Results

The following results were obtained by using the sandbox machine (metal-01.he-eu-hel1.wakudev.misc) and running nim-waku nodes from https://github.com/waku-org/nwaku/tree/b452ed865466a33b7f5b87fa937a8471b28e466e and using the `test-waku-query` project from https://github.com/waku-org/test-waku-query/tree/fef29cea182cc744c7940abc6c96d38a68739356
The following results were obtained by using the sandbox machine (metal-01.he-eu-hel1.misc.wakudev) and running nim-waku nodes from https://github.com/waku-org/nwaku/tree/b452ed865466a33b7f5b87fa937a8471b28e466e and using the `test-waku-query` project from https://github.com/waku-org/test-waku-query/tree/fef29cea182cc744c7940abc6c96d38a68739356

The following shows the results
90  docs/benchmarks/test-results-summary.md  (new file)
@@ -0,0 +1,90 @@
---
title: Performance Benchmarks and Test Reports
---


## Introduction
This page summarises key performance metrics for nwaku and provides links to detailed test reports.

> ## TL;DR
>
> - Average Waku bandwidth usage: ~**10 KB/s** (minus discv5 Discovery) for 1KB message size and a message injection rate of 1 msg/s.
>   Confirmed for topologies of up to 2000 Relay nodes.
> - Average time for a message to propagate to 100% of nodes: **0.4s** for topologies of up to 2000 Relay nodes.
> - Average per-node bandwidth usage of the discv5 protocol: **8 KB/s** for incoming traffic and **7.4 KB/s** for outgoing traffic,
>   in a network with 100 continuously online nodes.
> - Future improvements: A messaging API is currently in development to streamline interactions with the Waku protocol suite.
>   Once completed, it will enable benchmarking at the messaging API level, allowing applications to more easily compare their
>   own performance results.


## Insights

### Relay Bandwidth Usage: nwaku v0.34.0
The average per-node `libp2p` bandwidth usage in a 1000-node Relay network with 1KB messages at varying injection rates.

| Message Injection Rate | Average libp2p incoming bandwidth (KB/s) | Average libp2p outgoing bandwidth (KB/s) |
|------------------------|------------------------------------------|------------------------------------------|
| 1 msg/s                | ~10.1                                    | ~10.3                                    |
| 1 msg/10s              | ~1.8                                     | ~1.9                                     |

### Message Propagation Latency: nwaku v0.34.0-rc1
The message propagation latency is measured as the total time for a message to reach all nodes.
We compare the latency in different network configurations for the following simulation parameters:
- Total messages published: 600
- Message size: 1KB
- Message injection rate: 1 msg/s

The different network configurations tested are:
- Relay Config: 1000 nodes with relay enabled
- Mixed Config: 210 nodes, consisting of bootstrap nodes, filter clients and servers, lightpush clients and servers, store nodes
- Non-persistent Relay Config: 500 persistent relay nodes, 10 store nodes and 100 non-persistent relay nodes

Click on a specific config to see the detailed test report.

| Config | Average Message Propagation Latency (s) | Max Message Propagation Latency (s) |
|--------|-----------------------------------------|-------------------------------------|
| [Relay](https://www.notion.so/Waku-regression-testing-v0-34-1618f96fb65c803bb7bad6ecd6bafff9) (1000 nodes) | 0.05 | 1.6 |
| [Mixed](https://www.notion.so/Mixed-environment-analysis-1688f96fb65c809eb235c59b97d6e15b) (210 nodes) | 0.0125 | 0.007 |
| [Non-persistent Relay](https://www.notion.so/High-Churn-Relay-Store-Reliability-16c8f96fb65c8008bacaf5e86881160c) (510 nodes) | 0.0125 | 0.25 |

### Discv5 Bandwidth Usage: nwaku v0.34.0
The average bandwidth usage of discv5 for a network of 100 nodes and a message injection rate of 0 or 1 msg/s.
The measurements are based on a stable network where all nodes have already connected to peers to form a healthy mesh.

| Message size         | Average discv5 incoming bandwidth (KB/s) | Average discv5 outgoing bandwidth (KB/s) |
|----------------------|------------------------------------------|------------------------------------------|
| no message injection | 7.88                                     | 6.70                                     |
| 1KB                  | 8.04                                     | 7.40                                     |
| 10KB                 | 8.03                                     | 7.45                                     |

## Testing
### DST
The VAC DST team performs regression testing on all new **nwaku** releases, comparing performance with previous versions.
They simulate large Waku networks with a variety of network and protocol configurations that are representative of real-world usage.

**Test Reports**: [DST Reports](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f)


### QA
The VAC QA team performs interoperability tests for **nwaku** and **go-waku** using the latest main branch builds.
These tests run daily and verify protocol functionality by targeting specific features of each protocol.

**Test Reports**: [QA Reports](https://discord.com/channels/1110799176264056863/1196933819614363678)

### nwaku
The **nwaku** team follows a structured release procedure for all release candidates.
This involves deploying RCs to the `status.staging` fleet for validation and performing sanity checks.

**Release Process**: [nwaku Release Procedure](https://github.com/waku-org/nwaku/blob/master/.github/ISSUE_TEMPLATE/prepare_release.md)


### Research
The Waku Research team conducts a variety of benchmarking, performance testing, proof-of-concept validations and debugging efforts.
They also maintain a Waku simulator designed for small-scale, single-purpose, on-demand testing.

**Test Reports**: [Waku Research Reports](https://www.notion.so/Miscellaneous-2c02516248db4a28ba8cb2797a40d1bb)

**Waku Simulator**: [Waku Simulator Book](https://waku-org.github.io/waku-simulator/)
@ -6,44 +6,52 @@ For more context, see https://trunkbaseddevelopment.com/branch-for-release/
|
||||
|
||||
## How to do releases
|
||||
|
||||
### Before release
|
||||
### Prerequisites
|
||||
|
||||
- All issues under the corresponding release [milestone](https://github.com/waku-org/nwaku/milestones) have been closed or, after consultation, deferred to the next release.
|
||||
- All submodules are up to date.
|
||||
> Updating submodules requires a PR (and very often several "fixes" to maintain compatibility with the changes in submodules). That PR process must be done and merged a couple of days before the release.
|
||||
|
||||
Ensure all items in this list are ticked:
|
||||
- [ ] All issues under the corresponding release [milestone](https://github.com/waku-org/nwaku/milestones) has been closed or, after consultation, deferred to a next release.
|
||||
- [ ] All submodules are up to date.
|
||||
> **IMPORTANT:** Updating submodules requires a PR (and very often several "fixes" to maintain compatibility with the changes in submodules). That PR process must be done and merged a couple of days before the release.
|
||||
> In case the submodules update has a low effort and/or risk for the release, follow the ["Update submodules"](./git-submodules.md) instructions.
|
||||
> If the effort or risk is too high, consider postponing the submodules upgrade for the subsequent release or delaying the current release until the submodules updates are included in the release candidate.
|
||||
- [ ] The [js-waku CI tests](https://github.com/waku-org/js-waku/actions/workflows/ci.yml) pass against the release candidate (i.e. nwaku latest `master`).
|
||||
> **NOTE:** This serves as a basic regression test against typical clients of nwaku.
|
||||
> The specific job that needs to pass is named `node_with_nwaku_master`.
|
||||
|
||||
### Performing the release
|
||||
> If the effort or risk is too high, consider postponing the submodules upgrade for the subsequent release or delaying the current release until the submodules updates are included in the release candidate.
|
||||
|
||||
### Release types
|
||||
|
||||
- **Full release**: follow the entire [Release process](#release-process--step-by-step).
|
||||
|
||||
- **Beta release**: skip just `6a` and `6c` steps from [Release process](#release-process--step-by-step).
|
||||
|
||||
- Choose the appropriate release process based on the release type:
|
||||
- [Full Release](../../.github/ISSUE_TEMPLATE/prepare_full_release.md)
|
||||
- [Beta Release](../../.github/ISSUE_TEMPLATE/prepare_beta_release.md)
|
||||
|
||||
### Release process ( step by step )
|
||||
|
||||
1. Checkout a release branch from master
|
||||
|
||||
```
|
||||
git checkout -b release/v0.1.0
|
||||
git checkout -b release/v0.X.0
|
||||
```
|
||||
|
||||
1. Update `CHANGELOG.md` and ensure it is up to date. Use the helper Make target to get PR based release-notes/changelog update.
|
||||
2. Update `CHANGELOG.md` and ensure it is up to date. Use the helper Make target to get PR based release-notes/changelog update.
|
||||
|
||||
```
|
||||
make release-notes
|
||||
```
|
||||
|
||||
1. Create a release-candidate tag with the same name as release and `-rc.N` suffix a few days before the official release and push it
|
||||
3. Create a release-candidate tag with the same name as release and `-rc.N` suffix a few days before the official release and push it
|
||||
|
||||
```
|
||||
git tag -as v0.1.0-rc.0 -m "Initial release."
|
||||
git push origin v0.1.0-rc.0
|
||||
git tag -as v0.X.0-rc.0 -m "Initial release."
|
||||
git push origin v0.X.0-rc.0
|
||||
```
|
||||
|
||||
This will trigger a [workflow](../../.github/workflows/pre-release.yml) which will build RC artifacts and create and publish a Github release
|
||||
This will trigger a [workflow](../../.github/workflows/pre-release.yml) which will build RC artifacts and create and publish a GitHub release
|
||||
|
||||
1. Open a PR from the release branch for others to review the included changes and the release-notes
|
||||
4. Open a PR from the release branch for others to review the included changes and the release-notes
|
||||
|
||||
1. In case additional changes are needed, create a new RC tag
|
||||
5. In case additional changes are needed, create a new RC tag
|
||||
|
||||
Make sure the new tag is associated
|
||||
with CHANGELOG update.
|
||||
@ -52,25 +60,57 @@ Ensure all items in this list are ticked:
|
||||
# Make changes, rebase and create new tag
|
||||
# Squash to one commit and make a nice commit message
|
||||
git rebase -i origin/master
|
||||
git tag -as v0.1.0-rc.1 -m "Initial release."
|
||||
git push origin v0.1.0-rc.1
|
||||
git tag -as v0.X.0-rc.1 -m "Initial release."
|
||||
git push origin v0.X.0-rc.1
|
||||
```
|
||||
|
||||
1. Validate the release. For the release validation process, please refer to the following [guide](https://www.notion.so/Release-Process-61234f335b904cd0943a5033ed8f42b4#47af557e7f9744c68fdbe5240bf93ca9)
|
||||
Similarly use v0.X.0-rc.2, v0.X.0-rc.3 etc. for additional RC tags.
|
||||
|
||||
1. Once the release-candidate has been validated, create a final release tag and push it.
|
||||
We also need to merge release branch back to master as a final step.
|
||||
6. **Validation of release candidate**
|
||||
|
||||
6a. **Automated testing**
|
||||
- Ensure all the unit tests (specifically js-waku tests) are green against the release candidate.
|
||||
- Ask Vac-QA and Vac-DST to run their available tests against the release candidate; share all release candidates with both teams.
|
||||
|
||||
> We need an additional report like [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f) specifically from the DST team.
|
||||
|
||||
6b. **Waku fleet testing**
|
||||
- Start job on `waku.sandbox` and `waku.test` [Deployment job](https://ci.infra.status.im/job/nim-waku/), wait for completion of the job. If it fails, then debug it.
|
||||
- After completion, disable [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to `master`.
|
||||
- Verify at https://fleets.waku.org/ that the fleet is locked to the release candidate version.
|
||||
- Check if the image is created at [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab).
|
||||
- Search _Kibana_ logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test` and `waku.sandbox`.
|
||||
- Most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")` OR `(fleet: "waku.sandbox" AND message: "SIGSEGV")`.
|
||||
- Enable the `waku.test` fleet again to resume auto-deployment of the latest `master` commit.
|
||||

   6c. **Status fleet testing**

   - Deploy the release candidate to `status.staging`.
   - Perform a [sanity check](https://www.notion.so/How-to-test-Nwaku-on-Status-12c6e4b9bf06420ca868bd199129b425) and log the results as comments in this issue:
     - Connect two instances to the `status.staging` fleet, one in relay mode and the other as a light client.
     - Exchange 1:1 chat messages between them.
     - Send and receive messages in a community.
     - Close one instance, send messages from the second instance, then reopen the first instance and confirm that the messages sent while it was offline are retrieved from the store.
   - Perform checks based on _end-user impact_.
   - Inform the other (Waku and Status) CCs so they can point their instances to `status.staging` for a few days. Ping Status colleagues on their Discord server or in the [Status community](https://status.app) (not a blocking point).
   - Ask Status-QA to perform the sanity checks described above, plus checks based on _end-user impact_; specify the version being tested.
   - Ask Status-QA or Infra to run the automated Status e2e tests against `status.staging`.
   - Get the other CCs' sign-off: they should comment on this PR, e.g. "Used the app for a week, no problem." If problems are reported, resolve them and create a new RC.
   - **Get Status-QA sign-off**, ensuring that the `status.test` update will not disturb ongoing activities.

7. Once the release candidate has been validated, create a final release tag and push it.

   We also need to merge the release branch back into `master` as a final step.

   ```
   git checkout release/v0.X.0

   # Use v0.X.0-beta as the tag if you are creating a beta release.
   git tag -as v0.X.0 -m "Final release."
   git push origin v0.X.0

   git switch master
   git pull
   git merge release/v0.X.0
   ```

8. Update `waku-rust-bindings`, `waku-simulator`, and `nwaku-compose` to use the new release (see the sketch below for `nwaku-compose`).

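   For `nwaku-compose`, this is typically just bumping the image tag in `docker-compose.yml`. A minimal sketch, assuming the image reference follows the `wakuorg/nwaku:<tag>` pattern:

   ```sh
   # Assumes docker-compose.yml references the image as wakuorg/nwaku:<tag>.
   sed -i -E 's|(wakuorg/nwaku:)v[0-9A-Za-z.+-]+|\1v0.X.0|' docker-compose.yml
   ```
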
9. Create a [GitHub release](https://github.com/waku-org/nwaku/releases) from the release tag.

   - Add the binaries produced by the ["Upload Release Asset"](https://github.com/waku-org/nwaku/actions/workflows/release-assets.yml) workflow. Where possible, test the binaries before uploading them to the release; a CLI sketch follows.

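   Creating the release from the CLI might look like the following sketch; the asset file names are assumptions and should match the workflow's actual output:

   ```sh
   # Tag must already exist on the remote; asset names are illustrative.
   gh release create v0.X.0 --verify-tag \
     --title "v0.X.0" \
     --notes "See CHANGELOG.md for details." \
     nwaku-x86_64-linux.tar.gz nwaku-arm64-linux.tar.gz
   ```
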
2. Deploy the release image to [Dockerhub](https://hub.docker.com/r/wakuorg/nwaku) by triggering [the manual Jenkins deployment job](https://ci.infra.status.im/job/nim-waku/job/docker-manual/). A sketch for verifying the published image follows the parameter list.

   > Ensure the following build parameters are set:
   > - `MAKE_TARGET`: `wakunode2`
   > - `IMAGE_TAG`: the release tag (e.g. `v0.36.0`)
   > - `IMAGE_NAME`: `wakuorg/nwaku`
   > - `NIMFLAGS`: `--colors:off -d:disableMarchNative -d:chronicles_colors:none -d:postgres`
   > - `GIT_REF`: the release tag (e.g. `v0.36.0`)

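   Once the job finishes, the published image can be sanity-checked by pulling and running it; the tag is illustrative, and the `--version` check assumes the image entrypoint is `wakunode2`:

   ```sh
   # Tag is illustrative; assumes the image entrypoint is wakunode2.
   docker pull wakuorg/nwaku:v0.36.0
   docker run --rm wakuorg/nwaku:v0.36.0 --version
   ```
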
3. Update the default nwaku image in [nwaku-compose](https://github.com/waku-org/nwaku-compose/blob/master/docker-compose.yml).

4. Deploy the release to the appropriate fleets:

   - Inform clients.

   > **NOTE:** known clients are currently using some version of js-waku, go-waku, nwaku or waku-rs.
   > Clients are reachable via the corresponding channels on the Vac Discord server.
   > It should be enough to inform clients in the `#nwaku` and `#announce` channels on Discord.
   > Informal conversations with specific repo maintainers are often part of this process.

   - Check whether any nwaku configuration parameters changed. If so, [update the fleet configuration](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64) in [infra-nim-waku](https://github.com/status-im/infra-nim-waku).
   - Deploy the release to the `waku.sandbox` fleet from [Jenkins](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-sandbox/).
   - Ensure that the nodes start up successfully, and monitor their health using [Grafana](https://grafana.infra.status.im/d/qrp_ZCTGz/nim-waku-v2?orgId=1) and [Kibana](https://kibana.infra.status.im/goto/a7728e70-eb26-11ec-81d1-210eb3022c76); a basic liveness sketch follows this list.
   - If necessary, revert by deploying the previous release. Download the logs and open a bug report issue.

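   As a basic liveness check against a node you can reach directly, the REST health endpoint can be queried; the endpoint and port are assumptions based on a default REST configuration:

   ```sh
   # Assumes the node exposes the REST API on the default port 8645.
   curl -s http://<node-address>:8645/health
   ```
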
5. Submit a PR to merge the release branch back into `master`. Make sure you use the `Merge pull request (Create a merge commit)` option to perform the merge; a CLI sketch follows.

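   Opening that PR from the CLI might look like this sketch (the branch name is a placeholder):

   ```sh
   gh pr create --base master --head release/v0.X.0 \
     --title "chore: merge release/v0.X.0 back to master" \
     --body "Merge the release branch back into master after the release."
   ```
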
### Performing a patch release

4. Once the release candidate has been validated and the changelog PR has been merged, cherry-pick the changelog update from `master` to the release branch. Then create a final release tag and push it, as sketched below.

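   A sketch of that flow, with placeholder branch, tag, and commit values:

   ```sh
   git checkout release/v0.X.Y
   git cherry-pick <changelog-commit-sha>   # the changelog commit merged to master
   git tag -as v0.X.Y -m "Patch release."
   git push origin release/v0.X.Y v0.X.Y
   ```
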
5. Create a [GitHub release](https://github.com/waku-org/nwaku/releases) from the release tag and follow the same post-release process as usual.

### Links

- [Release process](https://github.com/waku-org/nwaku/blob/master/docs/contributors/release-process.md)
- [Release notes](https://github.com/waku-org/nwaku/blob/master/CHANGELOG.md)
- [Fleet ownership](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64)
- [infra-nim-waku](https://github.com/status-im/infra-nim-waku)
- [Jenkins](https://ci.infra.status.im/job/nim-waku/)
- [Fleets](https://fleets.waku.org/)
- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab)

**docs/faq.md**

```
curl -s https://fleets.status.im | jq '.fleets["waku.test"]'

# Output
{
  "tcp/p2p/waku": {
    "node-01.do-ams3.waku.test": "/dns4/node-01.do-ams3.waku.test.status.im/tcp/30303/p2p/16Uiu2HAkykgaECHswi3YKJ5dMLbq2kPVCo89fcyTd38UcQD6ej5W",
    "node-01.gc-us-central1-a.waku.test": "/dns4/node-01.gc-us-central1-a.waku.test.status.im/tcp/30303/p2p/16Uiu2HAmDCp8XJ9z1ev18zuv8NHekAsjNyezAvmMfFEJkiharitG",
    "node-01.ac-cn-hongkong-c.waku.test": "/dns4/node-01.ac-cn-hongkong-c.waku.test.status.im/tcp/30303/p2p/16Uiu2HAkzHaTP5JsUwfR9NR8Rj9HC24puS6ocaU8wze4QrXr9iXp"
  },
  "enr/p2p/waku": {
    "node-01.do-ams3.waku.test": "enr:-QESuEC1p_s3xJzAC_XlOuuNrhVUETmfhbm1wxRGis0f7DlqGSw2FM-p2Ugl_r25UHQJ3f1rIRrpzxJXSMaJe4yk1XFSAYJpZIJ2NIJpcISygI2rim11bHRpYWRkcnO4XAArNiZub2RlLTAxLmRvLWFtczMud2FrdS50ZXN0LnN0YXR1c2ltLm5ldAZ2XwAtNiZub2RlLTAxLmRvLWFtczMud2FrdS50ZXN0LnN0YXR1c2ltLm5ldAYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQJATXRSRSUyTw_QLB6H_U3oziVQgNRgrXpK7wp2AMyNxYN0Y3CCdl-DdWRwgiMohXdha3UyDw",
    ...
    "node-01.ac-cn-hongkong-c.waku.test": "enr:-QEkuEDzQyIAhs-CgBHIrJqtBv3EY1uP1Psrc-y8yJKsmxW7dh3DNcq2ergMUWSFVcJNlfcgBeVsFPkgd_QopRIiCV2pAYJpZIJ2NIJpcIQI2ttrim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmFjLWNuLWhvbmdrb25nLWMud2FrdS50ZXN0LnN0YXR1c2ltLm5ldAZ2XwA2Ni9ub2RlLTAxLmFjLWNuLWhvbmdrb25nLWMud2FrdS50ZXN0LnN0YXR1c2ltLm5ldAYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQJIN4qwz3v4r2Q8Bv8zZD0eqBcKw6bdLvdkV7-JLjqIj4N0Y3CCdl-DdWRwgiMohXdha3UyDw"
  },
  "wss/p2p/waku": {
    "node-01.do-ams3.waku.test": "/dns4/node-01.do-ams3.waku.test.status.im/tcp/8000/wss/p2p/16Uiu2HAkykgaECHswi3YKJ5dMLbq2kPVCo89fcyTd38UcQD6ej5W",
    "node-01.gc-us-central1-a.waku.test": "/dns4/node-01.gc-us-central1-a.waku.test.status.im/tcp/8000/wss/p2p/16Uiu2HAmDCp8XJ9z1ev18zuv8NHekAsjNyezAvmMfFEJkiharitG",
    "node-01.ac-cn-hongkong-c.waku.test": "/dns4/node-01.ac-cn-hongkong-c.waku.test.status.im/tcp/8000/wss/p2p/16Uiu2HAkzHaTP5JsUwfR9NR8Rj9HC24puS6ocaU8wze4QrXr9iXp"
  }
}
```

**Configure DNS discovery**

The following command line options are available:

```
--dns-discovery              Enable DNS Discovery
--dns-discovery-url          URL for DNS node list in format 'enrtree://<key>@<fqdn>'
--dns-discovery-name-server  DNS name server IPs to query. Argument may be repeated.
```

- `--dns-discovery` enables DNS discovery on the node. Waku DNS discovery is disabled by default.
- `--dns-discovery-url` is mandatory if DNS discovery is enabled. It contains the URL for the node list. The URL must be in the format `enrtree://<key>@<fqdn>`, where `<fqdn>` is the fully qualified domain name and `<key>` is the base32 encoding of the compressed 32-byte public key that signed the list at that location.
- `--dns-discovery-name-server` is optional and contains the IP(s) of the DNS name servers to query. If left unspecified, the Cloudflare servers `1.1.1.1` and `1.0.0.1` are used by default.

A node will attempt connection to all discovered nodes.

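For example, a node could enable DNS discovery as follows; the `enrtree` URL and name server are illustrative placeholders, not working values:

```sh
# The enrtree URL below is a placeholder -- substitute a real signed node list URL.
./build/wakunode2 \
  --dns-discovery=true \
  --dns-discovery-url="enrtree://<key>@nodes.example.org" \
  --dns-discovery-name-server=1.1.1.1
```
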
# Configure a REST API node

A subset of the node configuration can be used to modify the behaviour of the HTTP REST API.

Example:

```shell
wakunode2 --rest=true
```

The `page_size` parameter of the Store API has a default value of 20 and a maximum value of 100 (see the query sketch below).

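As a sketch, a store query that caps the page size might look like the following; the endpoint path, parameter name, and port are assumptions to verify against the REST API reference:

```sh
# Endpoint path, parameter name, and port are assumptions -- check the REST API reference.
curl -s "http://localhost:8645/store/v3/messages?pageSize=50"
```
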
**Configure store**

Ensure that `store` is enabled (it is `true` by default) and provide at least one store service node address with the `--storenode` CLI option.

See the following example, which uses the peer at `/dns4/node-01.ac-cn-hongkong-c.waku.test.status.im/tcp/30303/p2p/16Uiu2HAkzHaTP5JsUwfR9NR8Rj9HC24puS6ocaU8wze4QrXr9iXp` as the store service node:

```sh
./build/wakunode2 \
  --store:true \
  --storenode:/dns4/node-01.ac-cn-hongkong-c.waku.test.status.im/tcp/30303/p2p/16Uiu2HAkzHaTP5JsUwfR9NR8Rj9HC24puS6ocaU8wze4QrXr9iXp
```

Your node can now send queries to retrieve historical messages.