Compare commits

...

906 Commits

Author SHA1 Message Date
NagyZoltanPeter
284a0816cc
chore: use chronos' TokenBucket (#3670)
* Adapt to use chronos' TokenBucket. Removed our own TokenBucket and its test. Bump nim-chronos -> nim-libp2p/nim-lsquic/nim-jwt -> adapt to latest libp2p changes
* Fix libp2p/utility reporting an unlisted exception that can occur when closing the socket in waitForService - the -d:ssl compile flag caused it
* Adapt request_limiter to new chronos' TokenBucket replenish algorithm to keep original intent of use
* Fix filter dos protection test
* Fix peer manager tests due change caused by new libp2p
* Adjust store test rate limit to eliminate CI test flakiness caused by timing
* Adjust store test rate limit to eliminate CI test flakiness caused by timing - lightpush/legacy_lightpush/filter
* Rework filter dos protection test to avoid erratic CI timing causing flaky results compared to local runs
* Rework lightpush dos protection test to avoid erratic CI timing causing flaky results compared to local runs
* Rework lightpush and legacy lightpush rate limit tests to eliminate the CI timing effect that causes longer awaits and thus mints new tokens, unlike local runs
2026-01-07 17:48:19 +01:00
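The commit above swaps nwaku's home-grown TokenBucket for the one shipped with chronos. Below is a minimal sketch of the general token-bucket usage pattern, assuming chronos' ratelimit module exposes TokenBucket.new(budget, fillDuration) and tryConsume(tokens) roughly as shown; the real request_limiter wiring in nwaku is more involved.

    import chronos, chronos/ratelimit

    proc serveRequests() {.async.} =
      # Allow roughly 10 requests per second; excess requests are rejected.
      let bucket = TokenBucket.new(10, 1.seconds)
      for i in 1 .. 25:
        if bucket.tryConsume(1):
          echo "request ", i, " accepted"
        else:
          echo "request ", i, " rejected (rate limited)"
        await sleepAsync(50.milliseconds)

    waitFor serveRequests()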
Tanya S
a4e44dbe05
chore: Update anvil config (#3662)
* Use anvil config disable-min-priority-fee to prevent gas price doubling

* remove gas limit set in utils->deployContract
2026-01-06 11:35:16 +02:00
Sasha
a865ff72c8
update js-waku repo reference (#3684) 2026-01-06 10:19:37 +01:00
Ivan Folgueira Bande
dafdee9f5f
small refactor README to start using Logos Messaging Nim term 2025-12-29 23:04:24 +01:00
Pablo Lopez
96196ab8bc
feat: compilation for iOS WIP (#3668)
* feat: compilation for iOS WIP

* fix: nim ios version 18
2025-12-22 15:40:09 +02:00
Ivan FB
e3dd6203ae
Start using nim-ffi to implement libwaku (#3656)
* deep changes in libwaku to adapt to nim-ffi
* start using ffi pragma in library
* update some binding examples
* add missing declare_lib.nim file
* properly rename api files in library folder
2025-12-19 17:00:43 +01:00
Tanya S
834eea945d
chore: pin rln dependencies to specific version (#3649)
* Add foundry version in makefile and install scripts

* revert to older version of Anvil for rln tests and anvil_install fix

* pin pnpm version to be installed as rln dep

* source pnpm after new install

* Add to github path

* use npm to install pnpm for rln ci

* Update foundry and pnpm versions in Makefile
2025-12-19 10:55:53 +02:00
Arseniy Klempner
2d40cb9d62
fix: hash inputs for external nullifier, remove length prefix for sha256 (#3660)
* fix: hash inputs for external nullifier, remove length prefix for sha256

* feat: use nimcrypto keccak instead of sha256 ffi

* feat: wrapper function to generate external nullifier
2025-12-17 18:51:10 -08:00
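The commit above derives the external nullifier with nimcrypto's keccak instead of the sha256 FFI. A hedged sketch of hashing some bytes with nimcrypto's keccak256; the input string is purely illustrative and does not reproduce the actual RLN external-nullifier encoding.

    import nimcrypto

    # Illustrative input only; RLN derives the external nullifier from
    # protocol-defined fields, not from a plain string like this.
    let input = "epoch:12345|rln-identifier:example"
    let digest = keccak256.digest(input.toOpenArrayByte(0, input.high))
    echo "keccak256: ", $digest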
Ivan FB
7c24a15459
simple cleanup rm unused DiscoveryManager from waku.nim (#3671) 2025-12-18 00:07:29 +01:00
Fabiana Cecin
bc5059083e
chore: pin logos-messaging-interop-tests to SMOKE_TEST_STABLE (#3667)
* pin to interop-tests SMOKE_TEST_STABLE
2025-12-16 17:49:03 +01:00
NagyZoltanPeter
3323325526
chore: extend RequestBroker to support native and external types and add the possibility to define non-async (aka sync) requests for simplicity and performance (#3665)
* chore: extend RequestBroker to support native and external types and add the possibility to define non-async (aka sync) requests for simplicity and performance
* Require the gcsafe pragma for RequestBroker sync requests and provider signatures
---------

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2025-12-16 02:52:20 +01:00
Fabiana Cecin
2477c4980f
chore: update ci container-image.yml ref to a commit in master (#3666) 2025-12-15 10:33:39 -03:00
Fabiana Cecin
10dc3d3eb4
chore: misc CI fixes (#3664)
* add make update to CI workflow
* add a nwaku -> logos-messaging-nim workflow rename
* pin local container-image.yml workflow to a commit
2025-12-15 09:15:33 -03:00
Ivan FB
9e2b3830e9
Distribute libwaku (#3612)
* allow create libwaku pkg
* fix Makefile create library extension libwaku
* make sure libwaku is built as part of assets
* Makefile: avoid rm libwaku before building it
* properly format debian pkg in gh release workflow
* waku.nimble set dylib extension correctly
* properly pass lib name and ext to waku.nimble
2025-12-15 12:11:11 +01:00
Sergei Tikhomirov
7d1c6abaac
chore: do not mount lightpush without relay (fixes #2808) (#3540)
* chore: do not mount lightpush without relay (fixes #2808)

- Change mountLightPush signature to return Result[void, string]
- Return error when relay is not mounted
- Update all call sites to handle Result return type
- Add test verifying mounting fails without relay
- Only advertise lightpush capability when relay is enabled

* chore: don't mount legacy lightpush without relay
2025-12-11 10:51:47 +01:00
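The signature change described above follows the usual nim-results pattern: return Result[void, string] and make call sites handle the error explicitly. A minimal sketch of that pattern with invented type and proc names (DemoNode, mountLightPushDemo), not nwaku's actual node API:

    import results

    type DemoNode = object
      relayMounted: bool
      lightpushMounted: bool

    proc mountLightPushDemo(node: var DemoNode): Result[void, string] =
      # Fail early when the prerequisite protocol is missing.
      if not node.relayMounted:
        return err("lightpush requires relay to be mounted")
      node.lightpushMounted = true
      ok()

    var node = DemoNode(relayMounted: false)
    node.mountLightPushDemo().isOkOr:
      echo "could not mount lightpush: ", error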
Darshan K
868d43164e
Release : patch release v0.37.1-beta (#3661) 2025-12-10 17:40:42 +05:30
Sergei Tikhomirov
12952d070f
Add text file for coding LLMs with high-level nwaku info and style guide advice (#3624)
* add CLAUDE.md first version

* extract style guide advice

* use AGENTS.md instead of CLAUDE.md for neutrality

* chore: update AGENTS.md w.r.t. master developments

* Apply suggestions from code review

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

* remove project tree from AGENTS.md; minor editx

* Apply suggestions from code review

Co-authored-by: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com>

---------

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
Co-authored-by: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com>
2025-12-09 10:45:06 +01:00
Fabiana Cecin
7920368a36
fix: remove ENR cache from peer exchange (#3652)
* remove WakuPeerExchange.enrCache
* add forEnrPeers to support fast PeerStore search
* add getEnrsFromStore
* fix peer exchange tests
2025-12-08 06:34:57 -03:00
Tanya S
2cf4fe559a
Chore: bump waku-rlnv2-contract-repo commit (#3651)
* Bump commit for vendor wakurlnv2contract

* Update RLN registration proc for contract updates

* add option to runAnvil for state dump or load with optional contract deployment on setup

* Code clean up

* Update rln relay tests to use cached anvil state

* Minor updates to utils and new test for anvil state dump

* stopAnvil needs to wait for graceful shutdown

* configure runAnvil to use load state in other tests

* reduce ci timeout

* Allow for RunAnvil load state file to be compressed

* Fix linting

* Change return type of sendMintCall to Future[void]

* Update naming of ci path for interop tests
2025-12-08 08:29:48 +02:00
Tanya S
a8590a0a7d
chore: Add gasprice overflow check (#3636)
* Check for gasPrice overflow

* use trace for logging and update comments

* Update log level for gas price logs
2025-12-04 10:26:18 +02:00
Ivan FB
8c30a8e1bb
Rest store api constraints default page size to 20 and max to 100 (#3602)
Co-authored-by: Vishwanath Martur <64204611+vishwamartur@users.noreply.github.com>
2025-12-03 11:55:34 +01:00
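The constraint above boils down to clamping the requested page size between a default and a maximum. A tiny illustration; the constant and proc names are invented, while the values 20 and 100 come from the commit title.

    import std/options

    const
      DefaultPageSize = 20'u64
      MaxPageSize = 100'u64

    proc effectivePageSize(requested: Option[uint64]): uint64 =
      if requested.isNone or requested.get() == 0:
        DefaultPageSize
      else:
        min(requested.get(), MaxPageSize)

    echo effectivePageSize(none(uint64))   # -> 20
    echo effectivePageSize(some(250'u64))  # -> 100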
Fabiana Cecin
54f4ad8fa2
fix: fix .github waku-org/ --> logos-messaging/ (#3653)
* fix: fix .github waku-org/ --> logos-messaging/
* bump CI tests timeout 45 --> 90 minutes
* fix .gitmodules waku-org --> logos-messaging
2025-12-02 11:00:26 -03:00
NagyZoltanPeter
ae74b9018a
chore: Introduce EventBroker, RequestBroker and MultiRequestBroker (#3644)
* Introduce EventBroker and RequestBroker as decoupling helpers that represent reactive (event-driven) and proactive (request/response) patterns without tight coupling between modules
* Address Copilot observation: log an error if a listener call fails with an exception; handle listener overuse - running out of IDs
* Address review observations: no exceptions may leak, listeners must not raise exceptions, adding a listener now reports errors via Result.
* Added MultiRequestBroker utility to collect results from many providers
* Support an arbitrary number of arguments for RequestBroker's request/provider signature
* MultiRequestBroker allows provider procs to throw exceptions, which will be handled during request processing.
* MultiRequestBroker supports one zero-arg signature and/or a multi-arg signature
* test no exception leaks from RequestBroker and MultiRequestBroker
* Embed MultiRequestBroker tests into common
* EventBroker: removed all ...Broker-typed public procs to simplify the EventBroker interface; the forget proc is renamed to dropListener
* Make Request's broker type private
* MultiRequestBroker: Use explicit returns in generated procs
* Updated descriptions of EventBroker and RequestBroker; RequestBroker.setProvider now returns an error if a provider is already set.
* Better description for MultiRequestBroker and its usage
* Add EventBroker support for ref objects, fix emit variant with event object ctor
* Add RequestBroker support for ref objects
* Add MultiRequestBroker support for ref objects
* Move brokers under waku/common
2025-12-02 00:24:46 +01:00
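The brokers introduced above decouple modules through listener callbacks (events) and registered providers (requests). A deliberately simplified, non-generic sketch of the event side of that idea follows; the real EventBroker under waku/common has a richer API (ref-object support, Result-based listener registration, dropListener, and so on), and the names DemoEventBroker/PeerJoined are invented.

    type
      PeerJoined = object
        peerId: string
      DemoEventBroker = object
        listeners: seq[proc(event: PeerJoined)]

    proc addListener(broker: var DemoEventBroker, listener: proc(event: PeerJoined)) =
      broker.listeners.add(listener)

    proc emit(broker: DemoEventBroker, event: PeerJoined) =
      # Every registered listener is notified; the emitter knows nothing
      # about who is listening.
      for listener in broker.listeners:
        listener(event)

    var broker: DemoEventBroker
    broker.addListener(proc(e: PeerJoined) =
      echo "peer joined: ", e.peerId
    )
    broker.emit(PeerJoined(peerId: "16Uiu2..."))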
Darshan K
7eb1fdb0ac
chore: new release process ( beta and full ) (#3647) 2025-12-01 19:03:59 +05:30
Fabiana Cecin
c6cf34df06
feat(tests): robustify waku_rln_relay test utils (#3650) 2025-11-28 14:20:36 -03:00
Sergei Tikhomirov
1e73213a36
chore: Lightpush minor refactor (#3538)
* chore: refactor Lightpush (more DRY)

* chore: apply review suggestions

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

---------

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2025-11-28 10:41:20 +01:00
Ivan FB
c0a7debfd1
Adapt makefile for libwaku windows (#3648) 2025-11-25 10:05:40 +01:00
Ivan FB
454b098ac5
new metric in postgres_driver to estimate payload stats (#3596) 2025-11-24 10:16:37 +01:00
Prem Chaitanya Prathi
088e3108c8
use exit==dest approach for mix (#3642) 2025-11-22 08:11:05 +05:30
Prem Chaitanya Prathi
b0cd75f4cb
feat: update rendezvous to broadcast and discover WakuPeerRecords (#3617)
* update rendezvous to work with WakuPeerRecord and use the updated libp2p version

* split rendezvous client and service implementation

* mount rendezvous client by default
2025-11-21 23:15:12 +05:30
Ivan FB
31e1a81552
nix: add wakucanary Flake package (#3599)
Signed-off-by: Jakub Sokołowski <jakub@status.im>
Co-authored-by: Jakub Sokołowski <jakub@status.im>
2025-11-20 13:40:08 +01:00
Ivan FB
e54851d9d6
fix: admin API peer shards field from metadata protocol (#3594)
* fix: admin API peer shards field from metadata protocol
   Store and return peer shard info from metadata protocol exchange instead of only checking ENR records.
* peer_manager set shard info and extend rest test to validate it

Co-authored-by: MorganaFuture <andrewmochalskyi@gmail.com>
2025-11-20 13:12:16 +01:00
Ivan FB
adeb1a928e
fix: wakucanary now fails correctly when ping fails (#3595)
* wakucanary: add more detail if an exception occurs

Co-authored-by: MorganaFuture <andrewmochalskyi@gmail.com>
2025-11-20 08:44:15 +01:00
Darshan K
cd5909fafe
chore: first beta release v0.37.0 (#3607) 2025-11-19 18:53:23 +05:30
NagyZoltanPeter
1762548741
chore: clarify api folders (#3637)
* Rename waku_api to rest_api and the underlying rest to endpoint for clarity
* Rename node/api to node/kernel_api to suggest that it is an internal accessor to the node interface, and make everything compile after renaming
* make waku api a top-level import
* fix use of relative path imports and default to root-relative imports instead for the waku and tools modules
2025-11-15 23:31:09 +01:00
Simon-Pierre Vivier
262d33e394
Disable flaky test (#3585) 2025-10-30 10:53:45 -04:00
Fabiana Cecin
7b580dbf39
chore(refactoring): replace some isErr usage with better alternatives (#3615)
* Closes apply isOkOr || valueOr approach (#1969)
2025-10-27 14:07:06 -03:00
36bc01ac0d
ci: move builds to a container
Referenced issue:
* https://github.com/status-im/infra-ci/issues/188
2025-10-23 11:23:55 +02:00
Prem Chaitanya Prathi
8be45180aa
removing mix repo as dependency and using mix from libp2p repo (#3632)
* use released version of libp2p 1.14.2
2025-10-23 10:00:11 +05:30
Ivan FB
9808e205af
use nightly docker rust image to allow release creation (#3628) 2025-10-18 19:08:57 +05:30
Darshan K
7a009c8b27
bump libp2p ( v1.14.0 ) (#3627) 2025-10-17 11:49:28 +02:00
Darshan K
deebee45d7
feat: stateless RLN ( bump v0.9.0 ) (#3621) 2025-10-15 19:08:46 +05:30
Ivan FB
7e5041d5e1
Move log level from debug to info (#3622)
* convert all debug logs to info log level
* waku_relay protocol: move spammy notice logs to debug
2025-10-15 10:49:36 +02:00
Ivan FB
7e3617cd48
Bump to macos-15 in GitHub ci workflow (#3620) 2025-10-14 21:14:24 +02:00
Darshan K
a6710b4995
chore: bump zerokit v0.8.0 in nwaku (#3618) 2025-10-10 15:29:44 +05:30
Prem Chaitanya Prathi
62be30da19
fix: remove pcre dependency as it is not used anymore and causing random CI docker build failures (#3566) 2025-10-09 14:38:34 +02:00
kaichao
a87b787c4e
chore: update nimble for nwaku lib integration. (#3571) 2025-10-09 15:47:39 +08:00
Fabiana Cecin
4d68e2abd5
chore(refactoring): results lib refactors (mostly replace isOk) (#3610)
* Changes isOk usage into better patterns with e.g. valueOr / isOkOr
* Some other refactoring included
* This PR partially addresses #1969
2025-10-08 19:14:54 -03:00
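The refactor above (like #3615 earlier in this list) replaces manual isOk/isErr branching with the valueOr and isOkOr helpers from nim-results. A small illustration of the pattern with made-up proc names:

    import std/strutils
    import results

    proc parsePort(s: string): Result[int, string] =
      try:
        ok(parseInt(s))
      except ValueError:
        err("not a number: " & s)

    proc startServer(portStr: string): Result[void, string] =
      # Instead of: if res.isOk: use res.get() else: return err(res.error),
      # bind the value or propagate the error in one expression.
      let port = parsePort(portStr).valueOr:
        return err("invalid port: " & error)
      echo "listening on port ", port
      ok()

    startServer("8080").isOkOr:
      echo "failed to start: ", error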
Prem Chaitanya Prathi
4b0bb29aa9
chore: an attempt to move node API's to separate files (#3614)
* chore: move node API's to separate files
2025-10-08 20:06:46 +05:30
Prem Chaitanya Prathi
797370ec80
remove mixPubKey from ENR and provide config param to pass mix nodes statically (#3587) 2025-10-08 10:18:54 +05:30
Darshan K
63f3234876
chore: bump for nim-json-serialization (#3616) 2025-10-07 15:41:54 +02:00
Ivan FB
682c76c714
Test/waku canary (#3597)
* Adding waku canary test scripts
* Update README file

note: The real author of this commit is Aya. I just resubmitted her PR after a deep nwaku history cleanup
---------

Co-authored-by: aya <ayahassan2877@gmail.com>
2025-10-05 22:07:46 +02:00
Ivan FB
74b3770f6c
Fix protocol connection close (#3588)
* Added connection closeWithEof for protocol handler and clients of lightpush/legacy lightpush and filter (except the filter push case)
* Store/Legacy store close connections

note: this enhancement was fully made by Zoltán. I just resubmitted it after the nwaku history cleanup.

---------

Co-authored-by: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com>
2025-10-03 14:42:46 +02:00
fryorcraken
5b5ff4cbe7
chore: rename Waku API's "Waku Config" to "Protocols" Config (#3603)
* chore: rename Waku API's "Waku Config" to "Protocols" Config

Make it clearer that with this config, we are configuring the Waku protocols, in contrast to other parameters which are more executable related.

* ensure var name matches type

* format
2025-10-03 18:24:33 +10:00
Ivan FB
6958eac6f1
peer exchange avoid spammy log (#3609) 2025-10-02 12:13:07 +02:00
Ivan FB
d94cb7c736
fix log libwaku (#3608)
* libwaku properly set a default log config
* waku_example.c set numShardsInNetwork to 257
2025-10-01 22:03:05 +02:00
Prem Chaitanya Prathi
7819a6e09a
use ipv4 address only for mix nodes, dogfooding fixes (#3576) 2025-10-01 13:12:08 +05:30
fryorcraken
bc8acf7611
feat: Waku API create node (#3580)
* introduce createNode

# Conflicts:
#	apps/wakunode2/cli_args.nim

* remove confutils dependency on the library

* test: remove websocket in default test config

* update to latest specs

* test: cli_args

* align to spec changes (sovereign, message conf, entrypoints

* accept enr, entree and multiaddr as entry points

* post rebase

* format

* change from "sovereign" to "core"

* add example

* get example to continue running

* nitpicks

* idiomatic constructors

* fix enum naming

* replace procs with consts

* remove messageConfirmation

* use pure enum

* rename example file
2025-10-01 16:31:34 +10:00
Ivan Folgueira Bande
08d14fb082 add waku/waku_rln_relay/constants.nim file 2025-09-30 17:51:53 +02:00
Darshan K
3c9b355879 feat: deprecate tree_path and rlnDB (#3577) 2025-09-26 14:47:15 +05:30
Darshan K
04fdf0a8c1 chore: add missing metrics (#3565) 2025-09-26 03:30:55 +05:30
Simon-Pierre Vivier
cc7a6406f5 feat: adding rendezvous request interval (#3569) 2025-09-23 09:51:26 -04:00
Darshan K
794c3a850d chore: benchmark for proof generation and verification (#3567) 2025-09-23 17:37:56 +05:30
Prem Chaitanya Prathi
2691dcb325 chore: mix updates (#3570)
* mix updates and fixes
2025-09-22 17:49:54 +05:30
Darshan K
b1616e55fc chore: bump libp2p to v1.13.0 (#3574) 2025-09-21 12:20:34 +02:00
Darshan K
3d0c6279e3 chore: fix node break issue when RLN is unregistered (#3573) 2025-09-20 03:09:38 +05:30
Simon-Pierre Vivier
9327da5a7b feat: waku sync full topic support (#3275) 2025-09-12 08:12:35 -04:00
Prem Chaitanya Prathi
a1bbb61f47 change log level to trace to avoid spam (#3568) 2025-09-12 14:51:02 +05:30
Prem Chaitanya Prathi
7df526f8e3 enable peer-exchange by default and fix log on client (#3557) 2025-09-12 12:16:59 +05:30
Prem Chaitanya Prathi
028bf297af update rendezvous to use callbacks to get updated shards and capabilities (#3558) 2025-09-11 22:40:13 +05:30
Prem Chaitanya Prathi
eb7a3d137a feat: mix poc (#3284)
* feat: poc to integrate mix into waku and use lightpush to demonstrate
2025-09-11 20:40:01 +05:30
Darshan K
9bba8b0f9c fix: refact rln-relay and post sync test (#3434) 2025-09-10 16:18:51 +05:30
Darshan K
5fc8c59f54 chore: bump dependencies to v0.37.0 (#3536) 2025-09-10 13:20:37 +05:30
NagyZoltanPeter
a36601ab0d fix: Do not allow invalid pubsub topic subscription via relay REST api (#3559)
* Check input pubsub topics for REST /relay/v1/subscriptions endpoint
2025-09-09 14:04:10 +02:00
kaichao
82926f9dd6 fix: nimble libraries (#3555) 2025-09-01 10:12:04 +08:00
Prem Chaitanya Prathi
cc7db99982 get shards using callback approach (#3545) 2025-08-29 18:43:29 +05:30
NagyZoltanPeter
6cf3644097 Fix using 'make test <nim source file>'; it was failing to run properly without a specified test case (#3550) 2025-08-29 13:14:05 +02:00
NagyZoltanPeter
228e637c9f chore: Adapt heaptrack support builds and description to latest nim 2.2.4 compiler (#3522)
* Adapt heaptrack support builds and description to latest nim 2.2.4 compiler
2025-08-29 08:10:22 +02:00
NagyZoltanPeter
4db4f830f5 fix remove libpcre lib dependency used for docker builds (#3552)
* fix libpcre installation in Dockerfile for debian:stable-slim

It seems debian:stable-slim now points to Debian Bookworm, which deprecated libpcre3, so we had a failing docker build.

* remove the libpcre3 dependency completely as we don't use Nim's std/re regex lib.

* Remove all remaining references to the pcre library in our image builds
2025-08-29 06:52:56 +02:00
kaichao
84cfdba010 chore: update the version to reflect the latest tag (#3548) 2025-08-27 14:42:39 +02:00
Ivan FB
4d7f857c42 Merge pull request #3465 from waku-org/release/v0.36
chore: release v0.36.0
2025-08-25 13:44:40 +02:00
Ivan Folgueira Bande
09a407ee40 rm extra Notes title from CHANGELOG.md 2025-08-25 13:30:14 +02:00
Ivan FB
cb54db6c2f Update CHANGELOG.md
Co-authored-by: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com>
2025-08-25 13:24:03 +02:00
Ivan FB
2936ba838d fix: detach partition (#3535)
* fix to make sure partitions get properly detached
2025-08-24 22:58:06 +02:00
Prem Chaitanya Prathi
4379f9ec50 segregate peer-exchange client and service implementation (#3523) 2025-08-13 12:04:01 +05:30
Prem Chaitanya Prathi
e4358c9718 chore: remove metadata protocol dependency on enr, relax check when nwaku is edge node (#3519)
* remove metadata protocol dep on enr, do not disconnect peers based on shards mismatch
2025-08-13 10:48:56 +05:30
Darshan K
393e3cce1f fix: rest fix for sync protocol (#3503) 2025-08-07 00:03:35 +05:30
Ivan FB
f68d79996e fix: apply modulus in sharding (#3530) 2025-08-03 17:28:28 +02:00
Prem Chaitanya Prathi
a27eec90d1 fix: use counter instead of gauge for metrics that only increase over time (#3355)
Co-authored-by: Ivan Folgueira Bande <ivansete@status.im>
2025-08-01 12:41:32 +02:00
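The fix above applies the usual Prometheus rule: values that only ever increase are counters, values that can also go down are gauges. A tiny nim-metrics illustration with example metric names (nim-metrics collects them only when compiled with -d:metrics):

    import metrics

    # A total that only grows is a counter ...
    declareCounter example_messages_total, "total messages handled"
    # ... while a value that rises and falls is a gauge.
    declareGauge example_connected_peers, "currently connected peers"

    example_messages_total.inc()
    example_connected_peers.set(12)
    example_connected_peers.set(9)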
Darshan K
029022d201 fix: streamline contract api (#3528) 2025-08-01 15:23:47 +05:30
Ashis Kumar Naik
89a3f735ef fix: updates regex pattern to support username:password authentication in http/https URLs (#3517)
* updated regex to support basic auth url
* added a super weird password for unit test: P@$$w0rd-m%^&*()_+-=[]{}|;':",./<>?`~\
2025-07-31 19:47:29 +02:00
Darshan K
c3da29fd63 feat: shard-specific metrics tracking (#3520) 2025-07-31 22:53:38 +05:30
gabrielmer
5640232085 fix: only stop health monitor components if not nil (#3526) 2025-07-24 16:33:49 +02:00
gabrielmer
b6855e85ab chore: guarding against double starting and stopping of nodes (#3525) 2025-07-24 14:10:13 +02:00
Darshan K
184cc4a694 chore: config update (#3510) 2025-07-22 12:06:08 +05:30
Siddarth Kumar
c2934de79d ci: disable restart from stage in jenkins (#3515)
related issue : https://github.com/status-im/infra-ci/issues/202
2025-07-19 09:56:05 +05:30
Ivan FB
aabd98120b avoid too-large log lines in rest lightpush (#3516) 2025-07-18 16:46:44 +02:00
Tanya S
2cff70d158 fix: tests using fetchMerkleRoot (#3513)
* Add waitFor for fetchMerkleRoot function in test_rln_group_manager_onchain
2025-07-18 15:34:14 +02:00
NagyZoltanPeter
61171ed551 Building waku_relay tests failed due to shadowed chronicles import (#3498) 2025-07-17 14:16:48 +02:00
Simon-Pierre Vivier
827aada89d fix store sync dashboard (#3508) 2025-07-16 16:12:13 -04:00
Ivan FB
b7f8728f23 mark keep-alive as deprecated (#3511) 2025-07-16 15:26:59 +02:00
gabrielmer
5d1d538b45 chore: improve connection proc (#3509) 2025-07-16 13:25:06 +02:00
Darshan K
d05469fd6d chore: avoid kick off CI when not strictly required (#3505) 2025-07-15 23:22:00 +02:00
gabrielmer
0830898530 fix: libwaku received signal (#3507) 2025-07-15 14:43:05 +02:00
Darshan K
8fd862b52e chore: docs update according to new contract (#3504) 2025-07-15 13:19:31 +05:30
Darshan K
a4f8b2bedd fix: update readme (#3501) 2025-07-14 14:34:22 +05:30
gabrielmer
7123c5532c chore: wait before starting watchdog (#3502) 2025-07-11 12:07:12 +03:00
gabrielmer
012d719722 chore: cleaning waitFor instances (#3495) 2025-07-10 19:49:47 +03:00
fryorcraken
3133aaaf71 chore: use distinct type for Light push status codes (#3488)
* chore: use distinct type for Light push status codes

* Make naming more explicit

* test: use new light push error code in tests

* fix missed line

* fix thing
2025-07-10 10:56:02 +10:00
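The distinct type mentioned above prevents raw integers from being passed where a lightpush status code is expected. A minimal sketch of the idea; the code values and constant names are illustrative, not the actual protocol mapping.

    type LightpushStatusCode = distinct uint32

    proc `==`(a, b: LightpushStatusCode): bool {.borrow.}

    const
      CodeSuccess = LightpushStatusCode(200)
      CodePayloadTooLarge = LightpushStatusCode(413)

    let status = CodeSuccess
    echo "status: ", uint32(status), " is success: ", status == CodeSuccess
    # status == 200  # would not compile: a bare integer is not a status code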
fryorcraken
4e527ee045 chore: use type for rate limit config (#3489)
* chore: use type for rate limit config

Use type instead of `seq[string]` for rate limit config earlier.
Enables failing faster (at config time) if the string is malformed.

Also enables using object in some scenarios.

* test: remove import warnings

* improve naming and add tests
2025-07-09 15:57:38 +10:00
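Parsing the rate-limit string into a dedicated type, as described above, means malformed input fails at config time rather than when the limiter is first used. A hedged sketch; the "volume/period-in-seconds" syntax and the type name are invented for illustration and are not nwaku's actual CLI format.

    import std/strutils
    import results

    type RateLimitSetting = object
      volume: int
      period: int  # seconds

    proc parseRateLimit(s: string): Result[RateLimitSetting, string] =
      let parts = s.split('/')
      if parts.len != 2:
        return err("expected <volume>/<period-seconds>, got: " & s)
      try:
        ok(RateLimitSetting(volume: parseInt(parts[0]), period: parseInt(parts[1])))
      except ValueError:
        err("non-numeric rate limit component in: " & s)

    # Malformed input is rejected at config time.
    echo parseRateLimit("30/1")
    echo parseRateLimit("lots/sometimes")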
Darshan K
b713b6e5f4 fix: make test configuration (#3480) 2025-07-08 18:25:36 +05:30
gabrielmer
dde023eacf fix: static sharding setup (#3494) 2025-07-04 15:08:15 +02:00
fryorcraken
994d485b49 chore!: make sharding configuration explicit (#3468)
* Reserve `networkconfig` name to waku network related settings

* Rename cluster conf to network conf

 A `NetworkConf` is a Waku network configuration.

# Conflicts:
#	tests/factory/test_waku_conf.nim

# Conflicts:
#	tests/factory/test_waku_conf.nim

* Improve sharding configuration

A smarter data type simplifies the logic.

* Fixing tests

* fixup! rename to endpointConf

* wip: autosharding is a specific configuration state, so treat it like one

# Conflicts:
#	waku/factory/external_config.nim

* refactor lightpush handler

some metrics error reporting was missing

# Conflicts:
#	waku/waku_lightpush/protocol.nim

* test_node_factory tests pass

* remove warnings

* fix tests

* Revert eager previous replace-all command

* fix up build tools compilation

* metadata is used to store cluster id

* Mount relay routes in static sharding

* Rename activeRelayShards to subscribeShards

To make it clearer that these are the shards the node will subscribe to.

* Remove unused msg var

* Improve error handling

* Set autosharding as default, with 1 shard in network

Also makes the node subscribe to all shards in autosharding, and to none in
static sharding.
2025-07-04 17:10:53 +10:00
fryorcraken
0ed3fc8079 fix: lightpush metrics (#3486)
* fix: lightpush metrics

Some light push errors were not reported in metrics due to
an early return.

* Small improvements

* Bound metrics value by using error codes
2025-07-03 12:29:16 +10:00
fryorcraken
4b186a4b28 fix: deprecate --dns-discovery (#3485)
* fix: deprecate `--dns-discovery`

Properly deprecates `--dns-discovery` CLI arg.
DNS Discovery is enabled if a non-empty DNS Discovery URL is passed.

* test: add test_all for factory

add and use test_all for some tests.
2025-07-03 11:56:43 +10:00
gabrielmer
ac094eae38 chore: not supporting legacy store by default (#3484) 2025-07-02 17:22:51 +02:00
Ivan FB
5f9625f332 fix: Completely clean dns-discovery-name-server references (#3477) 2025-07-02 16:37:02 +02:00
gabrielmer
cc30666016 fix: removing keepAlive from wakuConf (#3481) 2025-07-01 21:14:21 +02:00
NagyZoltanPeter
bed5c9ab52 Fix legacy lightpush diagnostic log (#3478)
The DST team needs the unintentionally removed my_peer_id back in legacy lightpush for their analysis tool
2025-06-30 17:11:38 +02:00
Ivan FB
7181d9ca63 fix: release v0.36 (#3475)
* group manager better logs debug chainId
* avoid printing eth client url which contains api key
2025-06-27 12:47:16 +02:00
gabrielmer
d820976eaf chore: improve keep alive (#3458) 2025-06-27 11:16:00 +02:00
Simon-Pierre Vivier
edf416f9e0 fix: remove waku sync broken dos mechanism (#3472) 2025-06-26 11:40:10 -04:00
gabrielmer
671a4f0ae2 fix: libwaku.so compilation (#3474) 2025-06-26 15:41:45 +02:00
Ivan FB
26c2b96cfe chore: rename modules (#3469) 2025-06-26 11:27:39 +02:00
Darshan K
5c38a53f7c feat: libwaku dll for status go (#3460) 2025-06-26 01:03:40 +05:30
fryorcraken
15025fe6cc test: include all factory tests (#3467)
* test: include all factory tests

* test: don't expect to override a preset
2025-06-25 13:58:49 +10:00
Ivan FB
d7a3a85db9 chore: Libwaku watchdog that can potentially raise a WakuNotResponding event if Waku is blocked (#3466)
* refactor add waku not responding event to libwaku

Co-authored-by: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com>
2025-06-24 23:20:08 +02:00
AYAHASSAN287
5f5e0893e0 fix: failed sync test (#3464)
* Increase time window to avoid messages overlapping in the failed test
2025-06-24 14:54:38 +02:00
Ivan Folgueira Bande
7b234ec78a CHANGELOG add notes section 2025-06-20 15:47:19 +02:00
Ivan Folgueira Bande
739ad46a7e Remove PR lint title 2025-06-20 14:45:53 +02:00
Ivan Folgueira Bande
7ff89b385d Simplify PR template 2025-06-20 14:45:31 +02:00
Ivan Folgueira Bande
c8b094d6fa CHANGELOG for v0.36.0 2025-06-20 14:39:17 +02:00
Tanya S
d3cf24f7a2 feat: Update implementation for new contract abi (#3390)
* update RLN contract abi functions and procs

* Clean up debugging lines

* Use more descriptive object field names for MembershipInfo

* Fix formatting

* fix group_manager after rebase to use new contract method sig

* Fix linting for group_manager.nim

* Test idcommitment to BE and debug logs

* Improve IdCommitment logging

* Update all keystore credentials to use BE format

* Add workaround for groupmanager web3 eth_call

* Add await to sendEthCallWithChainID

* Add error handling for failed eth_call

* Improve error handling for eth_call workaround

* Revert keystore credentials back to using LE

* Update toRateCommitment proc to use LE instead of BE

* Add IdCommitment to calldata as BE

* feat: Update rln contract deployment and tests (#3408)

* update RLN contract abi functions and procs

* update waku-rlnv2-contract submodule commit to latest

* Add RlnV2 contract deployment using forge scripts

* Clean up output of forge script command, debug logs to trace, warn to error

* Move TestToken deployment to own proc

* first implementation of token minting and approval

* Update rln tests with usermessagelimit new minimum

* Clean up code and error handling

* Rework RLN tests WIP

* Fix RLN test for new contract

* RLN Tests updated

* Fix formatting

* Improve error logs

* Fix error message formatting

* Fix linting

* Add pnpm dependency installation for rln tests

* Update test dependencies in makefile

* Minor updates, error messages etc

* Code cleanup and change some debug logging to trace

* Improve handling of Result return value

* Use absolute path for waku-rlnv2-contract

* Simplify token approval and balance check

* Remove unused Anvil options

* Add additional checks for stopAnvil process

* Fix anvil process call to null

* Add lock to tests for rln_group_manager_onchain

* Debug for forge command

* Verify paths

* Install pnpm as global

* Cleanup anvil running procs

* Add check before installing anvil

* CLean up onchain group_manager

* Add proc to setup environment for contract deployer

* Refactoring and improved error handling

* Fix anvil install directory string

* Fix linting in test_range_split

* Add const for the contract address length

* Add separate checks for why Approval transaction fails

* Update RLN contract address and chainID for TWN
2025-06-20 11:46:08 +02:00
Simon-Pierre Vivier
fd5780eae7 chore: lower waku sync log lvl (#3461) 2025-06-19 11:35:32 -04:00
Ivan FB
921d1d81af dnsName servers are not properly set on the waku node (#3457) 2025-06-18 22:55:18 +02:00
Simon-Pierre Vivier
49b12e6cf3 remove echo from tests (#3459) 2025-06-18 21:53:13 +02:00
AYAHASSAN287
b1dc83ec03 test: Add comprehensive reconciliation unit-tests for Waku Store Sync (#3388)
* Revert "Revert "Add finger print tests""

This reverts commit 36066311f91da31ca69fef3fa327d5e7fda7e50c.

* Adding waku sync tests

* Adding test "reconciliation produces subranges when fingerprints differ"

* Modifying the range split test

* Add more checks to range split tests

* Adding more range split tests

* Make the test file ready for review

* delete fingerprint file

* Fix review points

* Update tests/waku_store_sync/test_range_split.nim

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

* revert change in noise utils file

* Apply linters

* revert to master

* Fix linters

* Update tests/waku_store_sync/test_range_split.nim

Co-authored-by: Simon-Pierre Vivier <simvivier@status.im>

---------

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
Co-authored-by: Simon-Pierre Vivier <simvivier@status.im>
2025-06-18 12:44:46 +03:00
AYAHASSAN287
d41179e562 test: Waku sync tests part2 (#3397)
* Revert "Revert "Add finger print tests""

This reverts commit 36066311f91da31ca69fef3fa327d5e7fda7e50c.

* Add state transition test

* Add last test for state transition

* Add new tests to transfer protocol

* Add stress test scenarios

* Add stress tests and edge scenarios

* Add test outside sync window

* Add edge tests

* Add last corner test

* Apply linters on files
2025-06-17 17:37:25 +03:00
Danish Arora
d01dd9959c fix: typo from DIRVER to DRIVER (#3442) 2025-06-17 10:34:10 +05:30
Ivan FB
478925a389 chore: refactor to unify online and health monitors (#3456) 2025-06-16 18:44:21 +02:00
NagyZoltanPeter
d148c536ca feat: lightpush v3 for lite-protocol-tester (#3455)
* Upgrade lpt to new config methods

* Make choice of legacy and v3 lightpush configurable on cli

* Adjust runner script to allow easy lightpush version selection

* Prepare selectable lightpush for infra env runs

* Fix misused result vs return

* Fixes and more explanatory comments added

* Fix ~pure virtual~ notice to =discard
2025-06-16 12:46:20 +02:00
Ivan FB
11b44e3e15 chore: refactor rm discv5-only (#3453) 2025-06-14 10:09:51 +02:00
Darshan K
0adddb01da chore: rest-relay-cache-capacity (#3454) 2025-06-13 15:08:47 +05:30
Ivan FB
25a3f4192c feat: retrieve metrics from libwaku (#3452) 2025-06-12 12:49:05 +02:00
NagyZoltanPeter
17c842a542 feat: dynamic logging via REST API (#3451)
* Added /admin/v1/log-level/{logLevel} endpoint that is used for dynamic log level setting

credits to @darshankabariya co-authoring:
* Adapted conditional compile switch check from Darshan's solution

* formatting fix
2025-06-12 10:50:08 +02:00
gabrielmer
d4198c08ae fix: discv5 protocol id in libwaku (#3447) 2025-06-11 15:56:25 +02:00
NagyZoltanPeter
895e202265 chore: Added extra debug helper via getting peer statistics (#3443)
* Added extra debug helper via getting peer statistics on new /admin/v1/peers/stats endpoint

* Add /admin/v1/peers/stats client part

* Address review, change protocol names to codec string

* fix formatting
2025-06-10 02:10:06 +02:00
Ivan FB
5132510bc6 fix: dnsresolver (#3440)
Properly transmit the dns name server list parameter to the peer manager
2025-06-06 15:50:08 +02:00
Ivan FB
4f181abe0d bump nph and nitpick change (#3441) 2025-06-06 11:38:34 +02:00
gabrielmer
daa4a6a986 feat: add waku_disconnect_all_peers to libwaku (#3438) 2025-06-05 17:25:14 +02:00
Hanno Cornelius
66d8d3763d fix: misc sync fixes, added debug logging (#3411) 2025-06-04 15:19:14 +02:00
Ivan FB
336fbf8b64 fix: relay unsubscribe (#3422)
* waku_relay protocol fix unsubscribe and remove topic validator
* simplify subscription and avoid unneeded code
* tests adaptations
* call wakuRelay.subscribe only in one place within waku_node
2025-06-02 22:02:49 +02:00
NagyZoltanPeter
a39bcff6dc feat: Extend node /health REST endpoint with all protocols' state (#3419)
* Extend node /health REST endpoint with all protocols' state

* Added check for Rendezvous peers availability

* Fine tune filter, added client protocols to health report

* Fix /health endpoint test

* Add explanatory description for state NOT_READY

* Fix formattings

* Apply suggestions from code review

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

* Apply code style changes and extended test

* Fix formatting

---------

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2025-06-02 17:21:09 +02:00
gabrielmer
94cd2f88b4 chore: exposing online state in libwaku (#3433) 2025-05-30 17:47:06 +02:00
gabrielmer
5e22ea18b6 chore: don't return error on double relay subscription/unsubscription (#3429) 2025-05-29 12:05:48 +02:00
NagyZoltanPeter
768b2785e1 chore: heaptrack support build for Nim v2.0.12 builds (#3424)
* fix heaptrack build for Nim v2.0.12 builds, fixed docker image creation and local image with copying

* fix Dockerfile.bn.amd64 to support nwaku-compose

* Fix heaptrack image build with jenkins.release

* Fix NIM_COMMIT for heaptrack support in jenkins.release

* Remove leftover echo

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Fix dockerfile naming

* Fix assignment of NIM_COMMIT in Makefile

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-05-28 19:07:07 +02:00
Sasha
f0d668966d chore: suppress debug for js-waku (#3423) 2025-05-28 13:10:47 +02:00
Ivan FB
8812d66eb5 chore: CHANGELOG add lightpush v3 in v0.35.1 (#3427)
Co-authored-by: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com>
2025-05-28 13:00:25 +02:00
Ivan FB
f47af16ffb fix: build_rln.sh update version to download to v0.7.0 (#3425) 2025-05-27 16:29:04 +02:00
Ivan FB
9f68c83fed chore: bump dependencies for v0.36 (#3410)
* properly pass userMessageLimit to OnchainGroupManager
* waku.nimble 2.2.4 Nim compiler
* rm stew/shims/net import
* change ValidIpAddress.init with parseIpAddress
* fix serialize for zerokit
* group_manager: separate if statements
* protocol_types: add encode UInt32 with zeros up to 32 bytes
* windows build: skip libunwind build and rm libunwind.a inclusion step
* bump nph to overcome the compilation issues with 2.2.x
* bump nim-libp2p to v1.10.1
2025-05-26 21:58:02 +02:00
Darshan K
39e65dea28 fix: timestamp based validation (#3406) 2025-05-26 17:56:29 +05:30
e4a4313d82 nix: package outputs of build in .aar file
Add nix `result` folder to gitignore also.

Referenced issue:
* https://github.com/waku-org/nwaku/issues/3232
2025-05-23 09:05:22 +02:00
Ivan FB
3bb40d48e3 wakucanary maintenance (#3415)
- Add more possible protocols to monitor
- Simplify the protocol validation algorithm
- Properly pass the shard CLI parameter to the ENR info
- Mount metadata protocol
- Proper use of quit(QuitFailure)
2025-05-21 22:13:05 +02:00
NagyZoltanPeter
d5063e7d89 fix: enabling WebSocket connection also in case only websocket-secure-support enabled (#3417)
* Enabling WebSocket connection also in case only websocket-secure-support is enabled
2025-05-21 13:00:12 +02:00
Ivan FB
3aab1b83e4 Update Dockerfile rust image (#3413) 2025-05-16 12:51:49 +02:00
Ivan FB
e321774e91 properly pass userMessageLimit to OnchainGroupManager (#3407) 2025-05-14 23:38:26 +02:00
Ivan FB
2926542fcd simplify libwaku error returns (#3399) 2025-05-14 11:05:02 +02:00
Ivan FB
b435b51c4e chore: Enhance feedback on error cli (#3405)
* better error detail
* rm duplicated block
2025-05-13 09:13:28 +02:00
NagyZoltanPeter
094a68e41d fix: addPeer could unintentionally override metadata of previously stored peer with defaults and empty (#3403)
* fix: addPeer could unintentionally override metadata of previously stored peer with defaults and empty

* Add explanation why we discard updates of different peerStore books.

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

---------

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2025-05-12 15:23:19 +02:00
Ivan FB
42ab866f2c chore: allow multiple rln eth clients (#3402)
* use of multiple Eth clients instead of just one
* config_chat2 enhance param comment
* group_manager: raise exception if could not connect to any of the eth clients
2025-05-12 10:57:13 +02:00
Darshan K
d86babac3a feat: deprecate sync / local merkle tree (#3312) 2025-05-09 05:37:58 +05:30
fryorcraken
cc66c7fe78 chore!: separate internal and CLI configurations (#3357)
Split `WakuNodeConfig` object for better separation of concerns and to introduce a tree-like structure to configuration.

* fix: ensure twn cluster conf is still applied when clusterId=1
* test: remove usage of `WakuNodeConf`
* Remove macro, split builder files, remove wakunodeconf from tests
* rm network_conf_builder module as it is not used

---------

Co-authored-by: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com>
Co-authored-by: Ivan Folgueira Bande <ivansete@status.im>
2025-05-07 23:05:35 +02:00
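The "tree-like structure" mentioned above amounts to grouping related settings into nested objects instead of one flat configuration object. A hedged sketch with invented object and field names, not the actual WakuConf layout:

    type
      RestConfDemo = object
        enabled: bool
        port: int
      RelayConfDemo = object
        enabled: bool
        shards: seq[uint16]
      NodeConfDemo = object
        rest: RestConfDemo
        relay: RelayConfDemo

    let conf = NodeConfDemo(
      rest: RestConfDemo(enabled: true, port: 8645),
      relay: RelayConfDemo(enabled: true, shards: @[0'u16, 1])
    )
    echo "REST on port ", conf.rest.port, ", shards: ", conf.relay.shards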
Ivan FB
6bc05efc02 chore: Avoid double relay subscription (#3396)
* make sure subscribe once to every topic in waku_node
* start suggesting use of removeValidator in waku_relay/protocol. Commented out until libp2p is updated.
2025-05-05 22:57:20 +02:00
gabrielmer
7c7ed5634f chore: improve disconnection handling (#3385) 2025-04-25 19:23:53 +02:00
NagyZoltanPeter
98c3979119 chore: return all peers from rest admin (#3395)
* Updated version of getting peers by /admin endpoints
2025-04-25 15:36:41 +02:00
Ivan FB
2d6e5ef9ad chore: rln_relay simplify code a little (#3392) 2025-04-25 14:52:37 +02:00
NagyZoltanPeter
fc4ca7798c Added docker-quick-image / docker-quick-liteprotocoltester targets to build a runnable docker image from the locally built wakunode2 or liteprotocoltester - this speeds up build-test rounds (#3394) 2025-04-25 14:15:39 +02:00
Simon-Pierre Vivier
0c63ce4e9b feat: refactor waku sync DOS protection (#3391) 2025-04-24 09:07:21 -04:00
NagyZoltanPeter
8394c15a1a fix: bad HttpCode conversion, add missing lightpush v3 rest api tests (#3389)
* Fix bad HttpCode conversion, add missing lightpush v3 rest api tests
2025-04-24 08:36:30 +02:00
NagyZoltanPeter
ab8a30d3d6 chore: extended /admin/v1 REST API with different options to look at the current connected/relay/mesh state of the node (#3382)
* Extended /admin/v1 REST API with different options to look at the current connected/relay/mesh state of the node
* Added score information for peer info retrievals
2025-04-24 08:36:02 +02:00
Simon-Pierre Vivier
0304f063b8 waku sync cached message metric (#3387) 2025-04-23 08:26:34 -04:00
Sasha
95b665fa45 chore: add js-waku link to readme for interop tests (#3383) 2025-04-22 19:04:52 +02:00
Simon-Pierre Vivier
2f49aae2b7 feat: Waku Sync dashboard new panel & update (#3379) 2025-04-22 08:37:11 -04:00
gabrielmer
8dd31c200b fix: mistaken comments and broken link (#3381) 2025-04-17 23:16:35 +02:00
gabrielmer
559776557b fix: libwaku's redundant allocs (#3380) 2025-04-17 23:15:35 +02:00
Ivan FB
5ae526ce4f chore: Timestamp now in publish (#3373)
* Ensure timestamp is always set in WakuMessage when publishing
2025-04-17 13:03:56 +02:00
NagyZoltanPeter
2786ef6079 chore: update lite-protocol-tester for handling shard argument. (#3371)
* chore: replace pubsub topic with shard configuration across the lite protocol tester
* chore: enhance protocol performance - response time - metrics
* fix filter-client double mounting possibility.
2025-04-16 17:04:52 +02:00
Simon-Pierre Vivier
7c59f7c257 feat: enhance Waku Sync logs and metrics (#3370) 2025-04-16 09:24:05 -04:00
Miran
ed0474ade3 chore: fix unused and deprecated imports (#3368) 2025-04-11 18:20:23 +03:00
gabrielmer
001456cda0 chore: expect camelCase JSON for libwaku store queries (#3366) 2025-04-11 11:07:35 +02:00
Ivan FB
e99762ddfe chore: maintenance to c and c++ simple examples (#3367) 2025-04-11 11:05:22 +02:00
c43cee6593 makefile: add nimbus-build-system-nimble-dir target
Create a makefile target that runs a script which is a wrapper around
nimbus-build-system create_nimble_link.sh script.

Referenced issue:
* https://github.com/waku-org/nwaku/issues/3232
2025-04-10 17:35:34 +02:00
bbf9905f46 gitmodules: remove unused quic and ngtcp2 2025-04-10 17:35:32 +02:00
bbdf51ebf2 nix: create nix flake and libwaku-android-arm64 target
* android-ndk is added
* in the derivation, system nim is default but one can change it to
  nimbus-build-system
* special script for creating nimble links, necessary for the
  compilation to succeed.

Referenced issue:
* https://github.com/waku-org/nwaku/issues/3232
2025-04-10 17:35:31 +02:00
Hanno Cornelius
856224c62d docs: update prerequisites (#3320)
Add `rustc` and `cargo` as prerequisite to README (required for RLN compilation).
2025-04-10 14:38:56 +01:00
gabrielmer
dffad311a2 fix: avoid performing nil check for userData (#3365) 2025-04-10 14:34:54 +03:00
Ivan FB
3098b117d3 chore: skip two flaky tests (#3364) 2025-04-10 00:28:25 +02:00
Ivan FB
483103de37 Update the upload-artifact from v3 to v4 in pre-release.yml (#3363) 2025-04-09 21:36:06 +02:00
Ivan FB
75b8838fbf chore: retrieve protocols in new added peer from discv5 (#3354)
* add new unit test to validate that any peer can be retrieved
* add new discv5 test and better peer store management
* wakuPeerStore -> switch.peerStore
* simplify waku_peer_store, better logs and peer_manager enhancements
2025-04-07 12:24:03 +02:00
Ivan FB
b1344bb3b1 chore: better keystore management (#3358) 2025-04-04 19:19:38 +02:00
Ivan FB
06562d7a56 Merge pull request #3347 from waku-org/release/v0.35
Patch release v0.35.1
2025-04-04 12:23:20 +02:00
Ivan Folgueira Bande
947f6364d1 node -> version in a comment within changelog.md 2025-04-04 12:01:19 +02:00
Ivan Folgueira Bande
15a8779842 inform in changelog that rln_tree needs to be removed 2025-04-04 11:45:29 +02:00
gabrielmer
93698a0a88 feat: add waku_get_connected_peers_info to libwaku (#3356) 2025-04-04 11:52:33 +03:00
gabrielmer
6d3c758540 feat: waku_relay_get_peers_in_mesh to libwaku (#3352) 2025-04-03 15:13:10 +03:00
gabrielmer
8b443edd98 feat: add waku_relay_get_connected_peers to libwaku (#3353) 2025-04-03 14:27:27 +03:00
fryorcraken
00808c9495 chore!: remove pubsub topics arguments (#3350)
Use `--shards` instead.
2025-04-03 21:11:18 +11:00
fryorcraken
63cff2ab42 test: fix preset tests (#3351) 2025-04-01 11:57:30 +02:00
fryorcraken
58f76ce467 feat: introduce preset option (#3346)
* feat: introduce `preset` option

Overwriting config from cluster-id will be deprecated as a second step.

* doc: improve preset doc

* Change `default` preset to `twn`
2025-04-01 09:28:18 +11:00
NagyZoltanPeter
36ee2aa9bf chore: non-relay protocols cross performance measurement metrics (#3299)
* Introducing new non-relay protocol request handling time metric
2025-03-31 13:27:51 +02:00
Ivan FB
fca3b034c2 waku.nimble force compile logs with TRACE log level by default (#3348) 2025-03-31 13:18:58 +02:00
Ivan FB
b9d060572f relax wakucanary parameters (#3342) 2025-03-31 09:26:35 +02:00
Ivan Folgueira Bande
9a14446e32 setting correct contract address recommended by Tanya 2025-03-31 09:18:14 +02:00
Ivan Folgueira Bande
afa0bfbd37 CHANGELOG v0.35.1 2025-03-30 13:39:24 +02:00
stubbsta
8397d45f51 Update all references to RLN contract address 2025-03-30 13:34:21 +02:00
Prem Chaitanya Prathi
935914224e newer docker version fails to build due to incorrect case (#3341) 2025-03-27 17:56:30 +05:30
NagyZoltanPeter
fe8327627e Make /debug endpoints /version and /info available as root (#3333) 2025-03-27 11:56:44 +01:00
NagyZoltanPeter
4111f80729 As lightpush/filter clients are always mounted and can handle service peers from other sources, there is no reason to restrict the availability of the respective REST endpoints to the filternode and lightpushnode CLI arguments - they can help, but are not mandatory, to access the filter and lightpush clients (#3331) 2025-03-27 11:56:09 +01:00
Darshan K
02cbc9eb6b chore: relevent ci name (#3338) 2025-03-27 16:06:42 +05:30
Ivan FB
a28243d446 set one log to trace in waku_peer_exchang (#3336) 2025-03-27 11:15:03 +01:00
Darshan K
926e69a12d chore: CI for windows build (#3316) 2025-03-24 19:11:01 +05:30
Simon-Pierre Vivier
dc571d0101 fix: waku sync timing (#3337) 2025-03-24 08:36:19 -04:00
Simon-Pierre Vivier
0369679704 feat: added store sync dashboard panel (#3307) 2025-03-20 15:32:35 -04:00
Simon-Pierre Vivier
cda48e25f7 fix: filter out ephemeral msg from waku sync (#3332) 2025-03-20 15:30:29 -04:00
Simon-Pierre Vivier
7dbc1fe061 fix: apply latest nph formating (#3334) 2025-03-19 11:30:47 -04:00
Ivan FB
e2329f97e5 chore: waku_peer_store add clarifying comment to addPeer proc (#3330) 2025-03-18 13:03:01 +01:00
NagyZoltanPeter
8b927b92d2 fix: lightpush v3 not returning relayed peers count (#3329)
* Fix lightpush publish logs that stated it was legacy lightpush; also fix a wrong mounted-protocol check.
* Fix lightpush v3 success report did not embed relayed peer count
* Fix test that missed the case of non returning relayed peers
2025-03-18 09:36:58 +01:00
Simon-Pierre Vivier
bf1a0dc42c fix: waku sync 2.0 codecs ENR support (#3326) 2025-03-14 12:01:11 -04:00
Simon-Pierre Vivier
91e5c7bc13 chore: less logs for rendezvous (#3319)
Co-authored-by: fryorcraken <110212804+fryorcraken@users.noreply.github.com>
2025-03-14 08:49:06 -04:00
Tanya S
8f775cc638 docs: Add test reporting doc to benchmarks dir (#3238)
* Add test reporting doc to benchmarks dir

* Updates as per comments

* Add discv5 results to Insights section

* Update to apply PR comment suggestions

* Add future improvements section to TL;DR
2025-03-14 09:23:06 +02:00
Darshan K
aef2a7045f chore: improve epoch monitoring (#3197) 2025-03-14 01:44:33 +05:30
Simon-Pierre Vivier
d5f18cf455 fix: waku sync mounting (#3321) 2025-03-12 08:47:49 -04:00
Ivan FB
ed0b260c2d Merge pull request #3269 from waku-org/release/v0.35
chore: Release v0.35
2025-03-11 16:45:24 +01:00
Ivan Folgueira Bande
324e4292ba Merge branch 'master' into release/v0.35 2025-03-11 16:33:30 +01:00
NagyZoltanPeter
05995f7ef9 Make lightpush status code better align with http codes - this helps rest api while makes no harm on protocol level (#3315) 2025-03-10 09:08:05 +01:00
Darshan K
5f1a3406d1 feat: remain windows support (#3162)
Refine the process so it now looks cleaner and simpler
2025-03-05 21:21:59 +05:30
NagyZoltanPeter
dcf09dd365 feat: lightpush v3 (#3279)
* Separate new lightpush protocol
New RPC defined
Rename all occurrences of old lightpush to legacy lightpush, fix rest tests of lightpush
New lightpush protocol added back
Setup new lightpush protocol, mounting and rest api for it

	modified:   apps/chat2/chat2.nim
	modified:   tests/node/test_wakunode_lightpush.nim
	modified:   tests/node/test_wakunode_sharding.nim
	modified:   tests/test_peer_manager.nim
	modified:   tests/test_wakunode_lightpush.nim
	renamed:    tests/waku_lightpush/lightpush_utils.nim -> tests/waku_lightpush_legacy/lightpush_utils.nim
	renamed:    tests/waku_lightpush/test_all.nim -> tests/waku_lightpush_legacy/test_all.nim
	renamed:    tests/waku_lightpush/test_client.nim -> tests/waku_lightpush_legacy/test_client.nim
	renamed:    tests/waku_lightpush/test_ratelimit.nim -> tests/waku_lightpush_legacy/test_ratelimit.nim
	modified:   tests/wakunode_rest/test_all.nim
	renamed:    tests/wakunode_rest/test_rest_lightpush.nim -> tests/wakunode_rest/test_rest_lightpush_legacy.nim
	modified:   waku/factory/node_factory.nim
	modified:   waku/node/waku_node.nim
	modified:   waku/waku_api/rest/admin/handlers.nim
	modified:   waku/waku_api/rest/builder.nim
	new file:   waku/waku_api/rest/legacy_lightpush/client.nim
	new file:   waku/waku_api/rest/legacy_lightpush/handlers.nim
	new file:   waku/waku_api/rest/legacy_lightpush/types.nim
	modified:   waku/waku_api/rest/lightpush/client.nim
	modified:   waku/waku_api/rest/lightpush/handlers.nim
	modified:   waku/waku_api/rest/lightpush/types.nim
	modified:   waku/waku_core/codecs.nim
	modified:   waku/waku_lightpush.nim
	modified:   waku/waku_lightpush/callbacks.nim
	modified:   waku/waku_lightpush/client.nim
	modified:   waku/waku_lightpush/common.nim
	modified:   waku/waku_lightpush/protocol.nim
	modified:   waku/waku_lightpush/rpc.nim
	modified:   waku/waku_lightpush/rpc_codec.nim
	modified:   waku/waku_lightpush/self_req_handler.nim
	new file:   waku/waku_lightpush_legacy.nim
	renamed:    waku/waku_lightpush/README.md -> waku/waku_lightpush_legacy/README.md
	new file:   waku/waku_lightpush_legacy/callbacks.nim
	new file:   waku/waku_lightpush_legacy/client.nim
	new file:   waku/waku_lightpush_legacy/common.nim
	new file:   waku/waku_lightpush_legacy/protocol.nim
	new file:   waku/waku_lightpush_legacy/protocol_metrics.nim
	new file:   waku/waku_lightpush_legacy/rpc.nim
	new file:   waku/waku_lightpush_legacy/rpc_codec.nim
	new file:   waku/waku_lightpush_legacy/self_req_handler.nim

Adapt to non-invasive libp2p observers

cherry pick latest lightpush (v1) changes into legacy lightpush code after rebase to latest master

Fix vendor dependencies from origin/master after failed rebase of them

Adjust examples, test to new lightpush - keep using of legacy

Fixup error code mappings

Fix REST admin interface with distinct legacy and new lightpush

Fix lightpush v2 tests

* Utilize new publishEx interface of pubsub libp2p

* Adapt to latest libp2p publish design changes. publish returns an outcome as a Result error.

* Fix review findings

* Fix tests, re-added lost one

* Fix rebase

* Apply suggestions from code review

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

* Addressing review comments

* Fix incentivization tests

* Fix build failed on libwaku

* Change the new lightpush endpoint version to 3 instead of 2. Noticed that old and new lightpush metrics can cause trouble in monitoring dashboards, so decided to name the new lightpush metrics v3 and change the legacy ones back - temporarily, until the old lightpush is decommissioned

* Fixing flaky test with rate limit timing

* Fixing logscope of lightpush and legacy lightpush

---------

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2025-03-05 12:07:56 +01:00
Ivan Folgueira Bande
935e404782 adapt CHANGELOG to the latest query metrics update 2025-03-04 19:24:48 +01:00
Ivan FB
2bb1349162 chore: better implementation to properly convert database query metrics (#3314) 2025-03-04 19:16:08 +01:00
Ivan FB
564b6466a8 chore: better implementation to properly convert database query metrics (#3314) 2025-03-04 19:14:31 +01:00
gabrielmer
05b46239ba fix: using nimMainPrefix in libwaku (#3311) 2025-03-03 11:22:48 +02:00
Ivan Folgueira Bande
187b41d147 changelog to reflect the metrics adaptation to avoid overwhelming Prometheus 2025-03-02 22:38:01 +01:00
Ivan FB
3f1f76c3a1 chore: more efficient metrics usage (#3298)
* Enhance metrics labels
* Bound the metrics-label-values in arbitrary queries
* The metrics-label-values for prepared statements are kept as
  they already represent a fixed set
2025-03-02 22:27:20 +01:00
Ivan FB
f90baa1d2f chore: more efficient metrics usage (#3298)
* Enhance metrics labels
* Bound the metrics-label-values in arbitrary queries
* The metrics-label-values for prepared statements are kept as
  they already represent a fixed set
2025-03-02 22:19:07 +01:00
Ivan FB
57514f5c9e chore: add simple Qt example that uses libwaku (#3310) 2025-02-28 20:28:45 +01:00
NagyZoltanPeter
92f893987f chore: remove flaky test debug logs from rln and store tests (#3303)
* chore: remove flaky test debug logs from rln tests
* Remove flaky test logs from store and legacy store tests
2025-02-28 15:36:50 +01:00
gabrielmer
798b4bb57b fix: subscribing to RelaydefaultHandler in libwaku (#3308) 2025-02-26 18:04:13 +02:00
gabrielmer
a1901a044e chore: deprecating dnsDiscovery flag (#3305) 2025-02-24 22:06:48 +02:00
Sergei Tikhomirov
fb55ed0b70 feat: incentivization PoC: client-side reputation system basics (#3293)
* chore: rename test file for eligibility tests

* add reputation manager

* add simple boolean reputation with dummy response

* set default reputation to true

* use reputation indicator term; remove unnecessary updateReputation

* use PushResponse in reputation manager

* add custom type for reputation

* add reputation update from response quality

* encode reputation indicators as Option[bool]
2025-02-20 16:07:21 +01:00
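The Option[bool] encoding mentioned above gives a three-valued reputation: none(bool) for "no information yet", some(true) for good, some(false) for bad. A minimal sketch with invented names (the PoC's actual reputation manager API is not shown):

    import std/options

    type PeerReputation = object
      indicator: Option[bool]

    proc describe(r: PeerReputation): string =
      if r.indicator.isNone: "unknown"
      elif r.indicator.get(): "good"
      else: "bad"

    echo describe(PeerReputation())                        # unknown
    echo describe(PeerReputation(indicator: some(true)))   # good
    echo describe(PeerReputation(indicator: some(false)))  # bad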
gabrielmer
8275d70f35 fix: libwaku's invalid waku message error handling (#3301) 2025-02-17 18:37:43 +02:00
Ivan FB
9bb567eb0e chore: better proof handling in REST (#3286)
* better proof handling in REST
2025-02-14 11:14:38 +01:00
gabrielmer
6b00684ad1 chore: supporting parallel libwaku requests (#3296) 2025-02-13 15:08:32 +02:00
Ivan FB
55ef60836f lightpush enhance log when handling request (#3297) 2025-02-13 12:30:49 +01:00
Ivan FB
9b55665f41 lightpush enhance log when handling request (#3297) 2025-02-13 00:48:36 +01:00
gabrielmer
9063605669 fix: libwaku store request parsing (#3294) 2025-02-12 18:35:50 +02:00
Ivan Folgueira Bande
701500665b CHANGELOG.md group the filter changes in one 2025-02-12 14:24:36 +01:00
Ivan FB
b3e1dc3f49 add waku-rlnv2-contract as vendor dependency (#3289)
The waku-rlnv2-contract commit (a576a89) that is being added is the one
that generated the currently deployed waku network
2025-02-10 23:30:56 +01:00
Ivan FB
34442390e9 chore: dbconn truncate possible too long error messages (#3283)
* also: dbconn restrict the max metric label value to 128
2025-02-07 20:23:31 +01:00
Prem Chaitanya Prathi
93dac1c2c4 fix: make light client examples work with sandbox fleet (#3237) 2025-02-07 11:04:48 +05:30
Ivan Folgueira Bande
526078c0d1 nph in test dunno why 2025-02-07 00:15:45 +01:00
Ivan Folgueira Bande
bf87aa25d6 Update CHANGELOG.md as per latest recommendations 2025-02-06 18:53:31 +01:00
Ivan FB
9ebc3924af Update CHANGELOG.md
Co-authored-by: Simon-Pierre Vivier <simvivier@status.im>
2025-02-06 18:49:44 +01:00
Ivan FB
b478788e85 Revert "chore: waku_archive add protection against queries longer than 24h" (#3278)
This reverts commit 401402368d9075f93692d180cb30156785eed5a8.
2025-02-06 17:47:36 +01:00
Ivan FB
21c4ec0d69 chore: refactor filter to react when the remote peer closes the stream (#3281)
Better control when the remote peer closes the WakuFilterPushCodec
stream.
For example, go-waku closes the stream for every received message.
On the other hand, js-waku keeps the stream opened.
Therefore, we support both scenarios.
2025-02-06 17:47:28 +01:00
Ivan FB
f65bea0f8e Revert "chore: waku_archive add protection against queries longer than 24h" (#3278)
This reverts commit 401402368d9075f93692d180cb30156785eed5a8.
2025-02-06 17:44:12 +01:00
Ivan FB
32ba56d77c chore: refactor filter to react when the remote peer closes the stream (#3281)
Better control when the remote peer closes the WakuFilterPushCodec
stream.
For example, go-waku closes the stream for every received message.
On the other hand, js-waku keeps the stream opened.
Therefore, we support both scenarios.
2025-02-06 17:21:23 +01:00
gabrielmer
a117143ca1 fix: avoid sending relay callbacks if relay is disabled (#3276) 2025-02-05 18:16:37 +02:00
Ivan Folgueira Bande
c785ff7a6b update .github checkout version to v4 2025-02-03 14:37:09 +01:00
Ivan Folgueira Bande
fddc17cea1 update CHANGELOG.md for v0.35 2025-02-03 14:37:05 +01:00
NagyZoltanPeter
3d8f4364f4 Bump nim-chronicles to latest and greatest - was missing from previous version bumps (#3274) 2025-02-03 14:35:56 +01:00
Sergei Tikhomirov
79846e8433 feat: incentivization POC: add double-spend check for txid-based eligibility (#3264)
* add double-spend check for txid-based eligibility

* Apply suggestions from code review

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

* split assert into two in double-spending test

* remove unnecessary import

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

---------

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2025-01-31 18:53:46 +01:00
Ivan FB
80291abc9a chore: filter remove all subscription from a peer that is leaving (#3267)
* waku/waku_filter_v2/protocol.nim keeps track of the filter-client connections in Table[PeerId, Connection]
* waku/waku_filter_v2/protocol.nim starts listening for peer-left events in order to completely remove the previous Connection instance. Also, a new Connection is added when the filter-service starts publishing to its peers.

---------
    
Co-authored-by: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com>
2025-01-31 17:01:55 +01:00
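A rough sketch of the per-peer connection tracking described above, with hypothetical `PeerIdSketch`/`ConnSketch` types standing in for libp2p's `PeerId` and `Connection`; it only illustrates keeping one push connection per subscribed peer and forgetting it on a peer-left event.

```nim
import std/tables

type
  PeerIdSketch = string          # stand-in for libp2p's PeerId
  ConnSketch = ref object        # stand-in for libp2p's Connection
    closed: bool

  FilterPushRegistry = object
    conns: Table[PeerIdSketch, ConnSketch]

proc setConnection(r: var FilterPushRegistry, peer: PeerIdSketch, conn: ConnSketch) =
  # Replace any previous connection for this peer, e.g. after the remote side
  # closed the stream (go-waku does this after every pushed message).
  r.conns[peer] = conn

proc onPeerLeft(r: var FilterPushRegistry, peer: PeerIdSketch) =
  # Peer-left event: completely forget the previous Connection instance.
  r.conns.del(peer)

proc connectionFor(r: FilterPushRegistry, peer: PeerIdSketch): ConnSketch =
  # Reuse an open connection if we still have one (js-waku keeps the stream
  # open); otherwise return nil so the caller dials a fresh one.
  if peer in r.conns and not r.conns[peer].closed:
    return r.conns[peer]
  return nil
```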
Ivan FB
ed2e26243f peer_manager simple cleanup (#3266) 2025-01-31 16:34:53 +01:00
Simon-Pierre Vivier
d8aaa93df9 feat: waku sync shard matching check (#3259) 2025-01-30 08:46:34 -05:00
NagyZoltanPeter
a1014663bd Update notice of build from source pre-requisites for successful fedora builds (#3265) 2025-01-30 14:36:04 +01:00
gabrielmer
8867fd6fa9 chore: sending msg hash as string on libwaku message event (#3234) 2025-01-30 10:15:31 +02:00
gabrielmer
3eed89796c chore: compiling with skipParentCfg (#3262) 2025-01-29 17:57:35 +02:00
Ivan FB
d9e79022fe fix: filter - enhancements in subscription management (#3198)
* waku_filter_v2: idiomatic way run periodic subscription manager
* filter subscriptions: add more debug logs
* filter: make sure the custom start and stop procs are called
* make sure filter protocol is started if it is mounted
* filter: dial push connection on subscribe only
* reduce max num filter peers from 1000 to 100
* adapt filter tests
* waku_peer_exchange protocol remove temporary debug logs
2025-01-28 15:37:33 +01:00
Ivan FB
c01a21e01f chore: bump dependencies for v0.35 (#3255)
Changes:
	modified:   .gitmodules
	modified:   tests/waku_discv5/utils.nim
	modified:   tests/waku_enr/utils.nim
	modified:   tests/waku_rln_relay/test_rln_group_manager_onchain.nim
	modified:   tests/waku_rln_relay/utils.nim
	modified:   tests/waku_rln_relay/utils_onchain.nim

        modified:   vendor/nim-chronicles
	modified:   vendor/nim-eth
	modified:   vendor/nim-http-utils
	modified:   vendor/nim-json-rpc
	modified:   vendor/nim-json-serialization
	modified:   vendor/nim-libp2p - 1.8.0!
	modified:   vendor/nim-metrics
	new file:   vendor/nim-minilru
	modified:   vendor/nim-nat-traversal
	modified:   vendor/nim-presto
	modified:   vendor/nim-secp256k1
	modified:   vendor/nim-serialization
	modified:   vendor/nim-stew
	modified:   vendor/nim-taskpools
	modified:   vendor/nim-testutils
	modified:   vendor/nim-toml-serialization
	modified:   vendor/nim-unicodedb
	modified:   vendor/nim-unittest2
	modified:   vendor/nim-web3 - from distinct branch that solves Ethereum ABI issue.
	modified:   vendor/nim-websock
	modified:   vendor/nim-zlib
	modified:   vendor/nimcrypto
	modified:   waku.nimble

        modified:   waku/common/enr/builder.nim
	modified:   waku/common/enr/typed_record.nim
	modified:   waku/common/utils/nat.nim
	modified:   waku/discovery/waku_discv5.nim
	modified:   waku/waku_rln_relay/conversion_utils.nim
	modified:   waku/waku_rln_relay/group_manager/on_chain/group_manager.nim
	modified:   waku/waku_rln_relay/rln/wrappers.nim
	modified:   waku/waku_rln_relay/rln_relay.nim

* Eliminate C compilation issue with chat2bridge caused by an overcomplicated import from json_rpc instead of using std/json
* Adapt ENR Record handling to new interface of nim-eth
* Fix crash in group_manager on_chain
* Fix signature of register and MemberRegister to UInt256, check transaction success in register
* Upgrade json-rpc and serialization
* Update to match latest enr and nat interface
* Using of extracted result of contract macro - with necessary adaption
* Bump nim-chronicles, nim-libp2p, nimcrypto
* Bump nim-web3, nim-eth and deps - on_chain/group_manager.nim adaption
* Added status-im/nim-minilru submodule required by latest nim-eth
Fixing tests.
* group_manager: adapt smart contract param types
* update web3 vendor
* bump vendors for v0.35.0
* protobuf.nim: fix compilation error after nim-libp2p bump
* changes to make it compile after rebase from master
---------

Co-authored-by: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com>
2025-01-28 10:04:34 +01:00
Ivan FB
cc864a8e91 archive enhance logging (#3258) 2025-01-27 22:00:41 +01:00
Ivan FB
96d9d40f4b move mount store before relay and rln relay (#3257)
This is needed so that the messages table is created quickly,
before the RLN sync event kicks off. With that, we avoid errors
from postgres-exporter (nwaku-compose) complaining about a
non-existing messages table.
2025-01-27 13:12:34 +01:00
Ivan FB
401402368d chore: waku_archive add protection against queries longer than 24h (#3256)
* store test adaptations because the tests were using future times
* waku_store_sync.nim, test_waku_archive, test_rln_group_manager_onchain.nim nph (unrelated change)
2025-01-27 10:44:59 +01:00
Simon-Pierre Vivier
7031607b58 feat: waku store sync 2.0 config & setup (#3217) 2025-01-24 11:46:11 -05:00
gabrielmer
0e9b332db0 chore: separating heaptrack from debug build (#3249) 2025-01-24 11:14:37 +01:00
Simon-Pierre Vivier
2630b88b41 feat: waku store sync 2.0 protocols & tests (#3216) 2025-01-23 16:13:26 -05:00
Simon-Pierre Vivier
f550c76eb1 feat: waku store sync 2.0 storage & tests (#3215) 2025-01-23 10:39:23 -05:00
gabrielmer
2eca003be0 chore: adding debug flag to makefile (#3248) 2025-01-23 12:11:54 +01:00
Simon-Pierre Vivier
c1b9257948 feat: waku store sync 2.0 common types & codec (#3213) 2025-01-22 11:08:23 -05:00
Sergei Tikhomirov
fdfc48c923 feat: add txhash-based eligibility checks for incentivization PoC (#3166)
Implement data structures and tests for checking transaction eligibility based on tx hash. This work will be continued in future PRs. All code added in this PR is only used in tests.

* feat: add simple txid-based eligibility check with hard-coded params (#3166)

* use new proc to generate eligibility status

Co-authored-by: gabrielmer <101006718+gabrielmer@users.noreply.github.com>

* minor fixes

* add comments to clarify eligibility definition

* use Address.fromHex conversion from eth-web3

* move isEligible to common

* refactor: avoid result and unnecessary branching

* define const for simple transfer gas usage

* avoid unnecessary parentheses

* chore: run nph linter manually

* refactor, move all hard-coded constants to tests

* use Result type in eligibility tests

* use standard method of error handling

* make try-block smaller

* add a try-block in case of connection failure to web3 provider

* make queries to web3 provider in parallel

* move Web3 provider RPC URL into env variable

* remove unused import

* rename functions

* use await in async proc

Co-authored-by: gabrielmer <101006718+gabrielmer@users.noreply.github.com>

* add timeout to tx receipt query

* parallelize queries for tx and txreceipt

* make test txids non public

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

* use assert in txid i13n test

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

* use parentheses when calling verb-methods without arguments

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

* remove unused import

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

* use init for stack-allocated objects

* add txReceipt error message to error

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

* introduce eligibility manager

* [WIP] use Anvil for eligibility testing

* add eligibility test with contract deployment tx

* add eligibility test with contract call

* add asyncSetup and asyncTeardown for eligibility tests

* minor refactor

* refactor tests for onchain group manager with asyncSetup and asyncTeardown

* minor refactor

* remove unnecessary defer in asyncTeardown

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

* remove unnecessary call in test (moved to asyncTeardown)

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

* add comment justifying the use of discard

* rename file txid_proof to eligibility_manager

---------

Co-authored-by: gabrielmer <101006718+gabrielmer@users.noreply.github.com>
Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2025-01-22 11:16:49 +01:00
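A compact, illustrative sketch of a txid-based eligibility check in the spirit of this PoC; the `TxSketch`/`ReceiptSketch` types and the expected-payment parameters are hypothetical stand-ins, not the real web3 types or constants used in the tests.

```nim
import std/options

type
  TxSketch = object        # hypothetical, simplified transaction view
    to: string             # destination address
    value: uint64          # transferred amount
  ReceiptSketch = object   # hypothetical, simplified receipt view
    success: bool          # mined and not reverted

proc isEligibleSketch(tx: Option[TxSketch], receipt: Option[ReceiptSketch],
                      expectedTo: string, expectedValue: uint64): bool =
  # Both the transaction and its receipt must exist (the PoC queries them in
  # parallel), the tx must have succeeded, and it must pay at least the expected
  # amount to the expected address. The double-spend check on the txid is
  # layered on top of this in the follow-up PR.
  if tx.isNone or receipt.isNone:
    return false
  if not receipt.get().success:
    return false
  tx.get().to == expectedTo and tx.get().value >= expectedValue
```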
Darshan K
dd1a70bdb7 chore: capping mechanism for relay and service connections (#3184) 2025-01-21 11:29:52 +05:30
AYAHASSAN287
d7cbe83b19 chore: adding new job in the CI.yml file (#3193) 2025-01-20 16:54:00 +01:00
gabrielmer
2164ef9f97 fix: avoid double db migration for sqlite (#3244) 2025-01-20 16:13:20 +01:00
Anton Iakimov
8270cb9420 docs: fix wakudev hostname 2025-01-17 17:47:49 +01:00
gabrielmer
4d9e11f16b chore: adding extra migration to sqlite and improving error message (#3240) 2025-01-16 16:10:28 +01:00
Ivan FB
192db550c9 chore: optimize libwaku size (#3242)
* avoid compile TRACE level to reduce libwaku size
* waku_rln_relay/constants.nim: avoid adding constant seq that is used in tests only
* ci.yml: USE_LIBBACKTRACE=0 to force -d:debug when running tests
2025-01-16 10:54:10 +01:00
Ivan FB
9d0b30cc99 chore: golang example stops using the negentropy dependency, plus simple readme.md (#3235)
* avoid dependency with libpcre by using regex module
2025-01-15 10:32:22 +01:00
Ivan FB
48859c4266 chore: libwaku tweaks (#3233)
* make lightpush return msg hash after successful publish
* libwaku avoid the use of string
* library alloc.nim allocate memory when nil cstring is passed
* libwaku store_request remove extra destroyShared(self)
2025-01-08 20:52:44 +01:00
gabrielmer
f301c6d9db feat: connection change event (#3225) 2025-01-08 18:53:00 +01:00
richΛrd
ba1870d114 feat(libwaku): add protected topic (#3211) 2025-01-07 09:29:39 -04:00
Ivan FB
569060b190 Update prepare_release.md add waku-rust-bindings step (#3229) 2025-01-06 22:46:21 +01:00
Ivan FB
202c2785ca test: test_wakunode_rln_relay use waitForNullifierLog in all tests avoid flaky (#3227) 2025-01-06 13:47:17 +01:00
Ivan FB
c725c96609 libwaku invoke callback within waku_destroy (#3228) 2025-01-03 16:13:26 +01:00
Ivan FB
c52e43a0ac chore: add comment to clarify where contract.nim comes from (#3226) 2025-01-03 12:41:55 +01:00
Ivan FB
9f46c3c123 chore: enhance libwaku store protocol and more (#3223)
* json_message_event: avoid converting a WakuMessageHash into 0x...
* waku_thread: wait until the waku thread completely received the request
* waku_thread: add missing deallocShared
* libwaku avoid nonsense onReceivedMessage cb in waku_relay_publish
2025-01-03 12:26:46 +01:00
Ivan FB
5aeee9dded extend rust example with waku_start (#3224) 2024-12-26 19:37:43 +01:00
gabrielmer
e4a07a99ce feat: topic health tracking (#3212) 2024-12-24 11:47:38 +01:00
Ivan FB
90cac35c64 chore: update prepare_release.md (#3218) 2024-12-18 11:54:59 +01:00
gabrielmer
790de8a5df feat: allowing configuration of application level callbacks (#3206) 2024-12-13 17:38:16 +01:00
richΛrd
ee9564ec73 fix(libwaku): waku_relay_unsubscribe (#3207) 2024-12-12 08:06:54 -04:00
richΛrd
ddfa212608 fix(libwaku): support string and int64 for timestamps (#3205) 2024-12-10 13:52:21 -04:00
NagyZoltanPeter
7731dfad32 chore: Bump nimbus and nim to latest available - nim-2.0.12 (#3188)
* Bump nimbus and nim to latest available - nim-2.0.12
* Fix name collision of templates of result.nim and nwaku serdes.nim - unrecognizedFieldWarning
2024-12-10 14:42:54 +01:00
Simon-Pierre Vivier
ae013e1928 feat: waku rendezvous wrapper (#2962) 2024-12-09 15:22:36 -05:00
clonefetch
b5edf6db98 chore: fix some typos in comment (#3201)
Signed-off-by: clonefetch <c0217@outlook.com>
2024-12-09 14:45:26 +01:00
fuder.eth
8bdf27e188 Update README.md (#3199) 2024-12-09 14:44:46 +01:00
NagyZoltanPeter
3f98f4a77c Merge branch 'release/v0.34.0' 2024-12-09 12:18:12 +01:00
NagyZoltanPeter
a0c468b4d5 fix: lite-protocol-tester receiver exit check (#3187)
* Fix receiver exit criteria so it does not wait forever in some cases; added a timely check based on the last arrived message
* Extend dial and service usage failure metrics with agent string to reveal service nodes' origins
* Adjusted infra testing content topic to be unique in the system
* Extend error logs with peer's agent string, fix exit criteria
* Add informative log for not waiting for more messages
* Add unknown as default for empty agent identifier
* better explain exit logic of receiver
* Address review comment - the check for last message arrival now returns an Option[Moment] instead of a Result, which better explains what is happening.
2024-12-07 01:22:50 +01:00
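An illustrative sketch of the exit criterion described above, using std/times instead of chronos' Moment: if nothing has arrived within a maximum silence window since the last message, the receiver stops waiting instead of blocking forever.

```nim
import std/[options, times]

proc shouldStopWaiting(lastArrival: Option[Time], maxSilence: Duration,
                       now = getTime()): bool =
  ## True when the receiver should stop waiting for further messages.
  if lastArrival.isNone:
    # Nothing received yet; keep waiting (the real tester also has an overall timeout).
    return false
  now - lastArrival.get() > maxSilence

when isMainModule:
  let last = some(getTime() - initDuration(minutes = 5))
  echo shouldStopWaiting(last, maxSilence = initDuration(minutes = 2))  # -> true
```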
Darshan K
4f77bb21d1 chore: add two metrics and panel (#3181) 2024-12-04 17:11:41 +05:30
gabrielmer
ad03b22413 feat: making dns discovery async (#3175) 2024-12-03 14:39:37 +01:00
leopardracer
dced87038e Update README.md (#3183) 2024-12-02 19:51:48 +01:00
richΛrd
9dc1b88b18 refactor(libwaku): async (#3180) 2024-12-02 10:56:12 -04:00
Ivan FB
439a3ae394 chore: Filter in libwaku (#3177) 2024-11-29 15:31:08 +01:00
Simon-Pierre Vivier
682981f967 feat: remove Waku Sync 1.0 & Negentropy (#3185) 2024-11-29 09:09:41 -05:00
NagyZoltanPeter
f35a6f10a6 chore: add supervisor for lpt infra (#3176)
* Adding lpt-runner script and assemble into liteprotocoltester image - to ease infra deployment
* Add supervisor that can run lpt continuously in an infra environment; infra.env defines defaults for the run; in case the image tag of the lpt docker image is deploy, it will build a specific image for infra deployment.
* Added message latency metrics
* DELAY_MESSAGES to MESSAGE_INTERVAL renaming
* Adjust name of START_PUBLISHING_AFTER
* Extend lpt readme with how to use make to build dockerized image and notice about infra deployment
* As agreed in discussion, we will control infra testing with a built-in predefined test setup
* Prevent peer switch in case using fixed service peers
2024-11-26 20:42:27 +01:00
NagyZoltanPeter
293682b57d Adding lpt-runner script and assemble into liteprotocoltester image - to ease infra deployment (#3158) 2024-11-26 15:54:29 +01:00
Darshan K
7de94c5c2c chore: flaky rln test (#3173) 2024-11-26 13:03:23 +05:30
NagyZoltanPeter
3772cb2968 Revert "make pre-release to only include release_notes.md"
This reverts commit 92207c670ddd856857cb0be544e4d2bf50d62586.
2024-11-22 10:29:06 +01:00
NagyZoltanPeter
38cb0598d9 Apply suggestions from code review - changelog style
Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2024-11-21 12:18:34 +01:00
NagyZoltanPeter
2748ab852f no-code-change, extended changelog with notice of removed protected-topic cli config 2024-11-21 00:05:19 +01:00
Ivan FB
99d3aaf93d python example: call waku_setup to initialize libwaku properly (#3179) 2024-11-20 18:35:23 +01:00
NagyZoltanPeter
db756905dc chore: Partial version bumps for v0.34.0-rc.1 (#3172)
* Bumps for v0.34.0-rc.1 - partial bumping - libp2p
* Avoid importing quic and ngtcp2 dependencies through tests
* libp2p 1.7.1, fixes RendezVous construction
2024-11-10 09:38:14 +01:00
NagyZoltanPeter
0d3b70fa16 chore: Partial version bumps for v0.34.0-rc.1 (#3172)
* Bumps for v0.34.0-rc.1 - partial bumping - libp2p
* Avoid importing quic and ngtcp2 dependencies through tests
* libp2p 1.7.1, fixes RendezVous construction
2024-11-10 09:27:04 +01:00
Ivan FB
76354df9bf chore: libwaku - better error handling and better waku thread destroy handling (#3167) 2024-11-08 14:59:02 +07:00
richΛrd
eec6215229 refactor(libwaku): allow several multiaddresses for a single peer in store queries (#3171)
* fix: parameter name
* refactor: allow multiple addresses for a peer in a store query
2024-11-08 14:36:16 +07:00
gabrielmer
61f4d979ad chore: removing protected-topic cli flag (#3160) 2024-10-31 16:00:42 +01:00
gabrielmer
b8d3ee051a chore: removing protected-topic cli flag (#3160) 2024-10-31 14:33:36 +02:00
gabrielmer
d225c6e1e2 feat: adding waku_dial_peer and get_connected_peers to libwaku (#3149) 2024-10-30 16:26:33 +02:00
NagyZoltanPeter
b1402315f5 fix lint error 2024-10-30 14:32:08 +01:00
gabrielmer
460be6e5a6 feat: running periodicaly peer exchange if discv5 is disabled (#3150) 2024-10-30 12:51:04 +02:00
NagyZoltanPeter
7f2023353d Fixed WAKU-SYNC protocol link 2024-10-30 11:14:12 +01:00
NagyZoltanPeter
92207c670d make pre-release to only include release_notes.md 2024-10-30 11:05:57 +01:00
gabrielmer
f3af7fa37e chore: naming connectPeer procedure (#3157) 2024-10-29 18:37:07 +02:00
gabrielmer
90b8b59e4d fix: linting error (#3156) 2024-10-29 15:36:21 +02:00
NagyZoltanPeter
b949941121 CHANGELOG.md for v0.34.0-rc.0 2024-10-29 14:31:01 +01:00
gabrielmer
1e23446721 fixing libwaku's dns discovery multiaddress generation (#3155) 2024-10-29 11:39:38 +02:00
fryorcraken
89addda4b0 feat: change latency buckets (#3153) 2024-10-29 14:39:12 +11:00
richΛrd
af189952cb chore: support ping with multiple multiaddresses and close stream (#3154) 2024-10-28 15:51:07 -04:00
NagyZoltanPeter
0a432ee2f3 Enhancement on building nph; made it available naturally on the path by copying it next to nim. (#3152) 2024-10-28 16:43:00 +01:00
Ivan FB
3786ce12e2 chore: Circuit relay (#3112)
* undo apt install libpcre (not circuit-relay related.)
* nat.nim: protect against possible exceptions when calling getExternalIP
* new external CLI argument, isRelayClient
* waku factory change to mount circuit hop proto by default
* waku_node: move autonat_service to a separate module
2024-10-28 09:17:46 +01:00
NagyZoltanPeter
e5f7a8f776 chore: easy setup fleets for lpt (#3125)
* Added bootstrap peer exchange discovery option for easy lpt setup
* Extended with PX discovery and auto-dial of PX-capable peers; added switching of service peers if the original one fails
* Added peer-exchange, found capable peers test, metrics on peer stability and availability, dashboard adjustments
* Updated and actualized README.md for liteprotocoltester
* Created jenkinsfile for liteprotocoltester deployment
* Fixed dial exception during lightpublish
* Add configuration for requesting and testing peer exchange peers
* Extended examples added to Readme
* Added metrics port configurability
---------

Co-authored-by: gabrielmer <101006718+gabrielmer@users.noreply.github.com>
2024-10-25 22:59:02 +02:00
Marko Burčul
2198d78fc6 sonda: adapt setup for deployment (#3151)
Referenced issue: https://github.com/status-im/infra-hq/issues/135

Signed-off-by: markoburcul <marko@status.im>
2024-10-25 16:31:59 +02:00
gabrielmer
9f56891b88 fix: linting error (#3146) 2024-10-25 11:04:17 +03:00
richΛrd
2ffca2078b feat(libwaku): ping peer (#3144) 2024-10-24 09:07:08 -04:00
gabrielmer
223ca1db75 chore: saving peers enr capabilities (#3127) 2024-10-24 15:31:04 +03:00
gabrielmer
0d324b1f25 updating available procs in golang example (#3137) 2024-10-24 10:33:53 +03:00
gabrielmer
37c7b9588a decreasing wait time for updating px cache on startup (#3140) 2024-10-24 10:33:32 +03:00
gabrielmer
33c2fea029 fix: peer exchange libwaku response handling (#3141) 2024-10-24 10:32:57 +03:00
Simon-Pierre Vivier
ce607bc71e fix: add more logs, stagger intervals & set prune offset to 10% for waku sync (#3142) 2024-10-23 17:56:19 -04:00
Simon-Pierre Vivier
e0ee294078 fix: add log and archive message ingress for sync (#3133) 2024-10-23 07:25:07 -04:00
Ivan FB
f11b6c3b94 dbconn: allow uuid in requestId by allowing hyphen (#3139) 2024-10-23 13:15:53 +02:00
Vaclav Pavlin
9d2b931567 chore(networkmonitor): add missing field on RlnRelay init, set default for num of shard (#3136) 2024-10-23 10:23:25 +02:00
Darshan K
772c7a365a feat: windows support compress into one big commit (#3107) 2024-10-23 11:59:37 +05:30
Ivan FB
8e1b2a60b2 peer_manager: prevent too intense loop when no peers connected (#3130) 2024-10-22 20:09:25 +02:00
Ivan FB
56652960f0 Update prepare_release.md to add js-waku update (#3134) 2024-10-22 14:00:29 +02:00
gabrielmer
7e5546cfff chore: add to libwaku peer id retrieval proc (#3124) 2024-10-17 19:13:00 +03:00
richΛrd
c80569e758 fix: add a limit of max 10 content topics per query (#3117) 2024-10-16 17:55:04 -04:00
Simon-Pierre Vivier
6cfa477817 added randomness to peer selection (#3123) 2024-10-16 15:18:47 -04:00
gabrielmer
086820a49b fix: avoid segfault by setting a default num peers requested in PX (#3122) 2024-10-16 17:04:27 +03:00
gabrielmer
92a7b7c7ff returning seqs in libwaku as comma separated strings (#3121) 2024-10-16 14:07:23 +03:00
Ivan FB
5ff910b7a8 ci: force use ubuntu-22.04 (#3118) 2024-10-15 20:31:38 +02:00
gabrielmer
e3de3e9210 chore: adding to libwaku dial and disconnect by peerIds (#3111) 2024-10-15 15:32:02 +03:00
Ivan FB
222251e497 chore: dbconn - add requestId info as a comment in the database logs (#3110) 2024-10-15 09:42:53 +02:00
gabrielmer
b4a04f01fd chore: improving get_peer_ids_by_protocol by returning the available protocols of connected peers (#3109) 2024-10-11 13:58:29 +03:00
gabrielmer
5a8c1f35e2 using cstring instead of nim strings to avoid segfault (#3108) 2024-10-11 13:57:55 +03:00
gabrielmer
fb632d1029 fix: returning peerIds in base 64 (#3105) 2024-10-10 16:53:30 +03:00
richΛrd
9635ee4021 chore: remove warnings (#3106)
- Removes deprecation and unused import warnings for libwaku
- Removes unused imports
- Adds .base. pragma to `SubscriptionObserver.onSubscribe`
- Uses casting for uint to enums conversions
- Bumps nim-chronicles
2024-10-10 08:40:09 -04:00
Ivan FB
a3bada5154 chore: better store logs (#3103)
* simple change: better waku store debug logs with some timing info
* dbconn: give some more name clarity and more log detail
2024-10-10 11:57:57 +02:00
Darshan K
8faca4c024 chore: Improve binding for waku_sync (#3102) 2024-10-10 14:17:33 +05:30
gabrielmer
5e3f79896a fix: changing libwaku's error handling format (#3093) 2024-10-09 15:12:45 +03:00
gabrielmer
488ea2f815 fix libwaku's returned enr string (#3097) 2024-10-08 17:22:49 +03:00
gabrielmer
d964b66146 chore: improving and temporarily skipping flaky rln test (#3094) 2024-10-07 18:02:06 +03:00
gabrielmer
f89bfeb82f fix: remove spammy log (#3091) 2024-10-04 17:11:40 +03:00
gabrielmer
aa7e71325b chore: update master after release v0.33.1 (#3089) 2024-10-04 17:11:03 +03:00
Darshan K
c04b560372 refactor: re-arrange function based on responsibility of peer-manager (#3086) 2024-10-04 15:23:20 +05:30
gabrielmer
b69d2c3142 fix: out connections leak (#3077) 2024-10-03 12:37:22 +03:00
Ivan FB
a57729cff3 chore: waku_keystore: give some more context in case of error (#3064) 2024-10-03 00:05:49 +02:00
gabrielmer
b847403b54 adding missing error handling in libwaku (#3084) 2024-10-03 00:13:42 +03:00
gabrielmer
da63d22369 Merge pull request #3068 from waku-org/chore-merge-release-v0.33-to-master
chore: update master from release/v0.33
2024-10-02 14:06:01 +03:00
gabrielmer
975db7a0f5 Merge branch 'master' into chore-merge-release-v0.33-to-master 2024-10-02 10:32:34 +03:00
Gabriel mermelstein
0d8e5a903f remove dependency bumping from changelog 2024-10-02 10:31:45 +03:00
richΛrd
91a91b331f chore: bump negentropy (#3078) 2024-10-01 20:37:49 -04:00
Ivan FB
e128385e69 chore: Optimize store (#3061)
* use messages_lookup to retrieve timestamps
* deep refactoring in db_postgres for better use of async approach
2024-10-01 23:36:03 +02:00
gabrielmer
6d5cbc9331 Adding error logs for failed libwaku operations (#3067) 2024-10-01 12:23:04 +03:00
gabrielmer
f2e98919db Update peer_manager.nim 2024-09-30 21:05:03 +03:00
gabrielmer
1578bc6a28 Update waku-fleet-dashboard.json 2024-09-30 21:04:32 +03:00
gabrielmer
7daecc7faf Merge branch 'master' into chore-merge-release-v0.33-to-master 2024-09-30 21:02:26 +03:00
gabrielmer
986ebd598d chore: update changelog for v0.33.0 release (#3044) 2024-09-30 18:26:10 +03:00
gabrielmer
8f0f5dd2b0 fix: rejecting excess relay connections (#3065) 2024-09-27 19:42:51 +03:00
gabrielmer
9e96e1911a fix: rejecting excess relay connections (#3065) 2024-09-27 19:35:18 +03:00
Ivan FB
6afa392ae4 disable colors in PR docker images (#3066) 2024-09-27 15:00:55 +02:00
Darshan K
785cf2e9d9 refactor: wrap peer store (#3051)
Encapsulate peerstore with wakupeerstore
2024-09-27 18:16:46 +05:30
gabrielmer
c77a141191 chore: disabling metrics for libwaku (#3058) 2024-09-25 14:08:01 +03:00
Ivan FB
bad30bf4e7 append current version in agentString which is used by the identify protocol (#3057) 2024-09-25 12:59:46 +03:00
NagyZoltanPeter
7fee882533 Extend fleet dashboard with PeerExchange metrics (#3056) 2024-09-25 11:56:47 +02:00
Ivan FB
3437e4009d append current version in agentString which is used by the identify protocol (#3057) 2024-09-25 11:52:02 +02:00
NagyZoltanPeter
5197fac47b Fix PeerExchange RPC decode so the response's status_code is not treated as mandatory - to support old protocol implementations (#3059) 2024-09-25 11:51:50 +02:00
Ivan FB
8c8eea4b67 chore: test peer connection management (#3049)
* Make some useful consts public, add some utils.
* Implement various utilities.
* peer_manager reconnectPeers enhancements

---------

Co-authored-by: Álex Cabeza Romero <alex93cabeza@gmail.com>
2024-09-24 18:20:29 +02:00
gabrielmer
4bafca6df0 chore: updating upload and download artifact actions to v4 (#3047) 2024-09-24 14:30:49 +03:00
NagyZoltanPeter
941a3fe6a0 fix: px protocol decode - do not treat missing response field as error (#3055)
* Fix missing response field of PeerExchange RPC treated as error.
* Fix PX metrics from gauge to counter for better dashboard stats
2024-09-24 12:47:52 +02:00
Ivan FB
e58b5c15c8 chore: Better database query logs and logarithmic scale in grafana store panels (#3048) 2024-09-20 22:47:15 +03:00
Ivan FB
96cc2f1b39 chore: Better database query logs and logarithmic scale in grafana store panels (#3048) 2024-09-20 17:43:56 +02:00
gabrielmer
5ddb22a345 fix: static linking negentropy in ARM based mac (#3046) 2024-09-20 15:43:12 +03:00
Ivan FB
a16c64ad45 chore: extending store metrics (#3042)
* adding query_metrics module
* update fleet-dashboard with new store panels for better timing insight
2024-09-20 15:43:00 +03:00
Ivan FB
3921036ced sharding: reduce log level for a too spammy message (#3045) 2024-09-20 15:42:46 +03:00
gabrielmer
9cf71e5d49 fix: static linking negentropy in ARM based mac (#3046) 2024-09-20 15:41:27 +03:00
Ivan FB
1f768cb3e8 chore: extending store metrics (#3042)
* adding query_metrics module
* update fleet-dashboard with new store panels for better timing insight
2024-09-20 13:23:53 +02:00
Ivan FB
d5ff611a5e sharding: reduce log level for a too spammy message (#3045) 2024-09-20 13:22:10 +02:00
NagyZoltanPeter
e7ae1a0382 chore: rate limit peer exchange protocol, enhanced response status in RPC (#3035)
* Enhanced peer-ex protocol - added rate limiting, added response status and desc to the rpc

* Better error result handling for PeerEx request, adjusted tests

* Refactored RateLimit configuration option for better CLI UX - now possible to set separate limits per protocol. Adjusted mountings. Added and adjusted tests

* Fix libwaku due to changes of error return type of fetchPeerExchangePeers

* Fix rate limit setting tests due to changed defaults

* Introduce new gauge to help the dashboard effectively show the current rate limit applied for the protocol

* Adjust timing in the filter rate limit test to let the macOS CI test run ok.

* Address review findings, namings, error logs, removed left-overs

* Changes to reflect the latest spec agreement: the PeerExchange RPC response structure now contains status_code and status_desc.
2024-09-18 15:58:07 +02:00
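A hedged sketch of the response shape this spec change implies: the PeerExchange RPC response carries a status code plus an optional description, so a rate-limited request can be answered explicitly instead of being silently dropped. Names and numeric values here are illustrative, not the exact protobuf definition.

```nim
import std/options

type
  PeerExchangeStatusCodeSketch = enum
    # illustrative values only
    pxSuccess = 200
    pxBadRequest = 400
    pxTooManyRequests = 429
    pxServiceUnavailable = 503

  PeerExchangeResponseSketch = object
    peerEnrs: seq[string]                       # returned peer records
    statusCode: PeerExchangeStatusCodeSketch
    statusDesc: Option[string]

proc rateLimitedResponse(): PeerExchangeResponseSketch =
  # What a service node can answer when the requester exceeds its rate limit.
  PeerExchangeResponseSketch(
    peerEnrs: @[],
    statusCode: pxTooManyRequests,
    statusDesc: some("request rate limit exceeded"))
```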
NagyZoltanPeter
6dfefc5e42 chore: Switch libnegentropy library build from shared to static linkage (#3041)
* Switch libnegentropy library build from shared to static linkage

* Update negentropy with -fPIC compile option that is necessary for libwaku build

* Bump waku-org/negentropy to the latest on master to incorporate merged static build of libnegentropy
2024-09-18 14:34:50 +02:00
gabrielmer
36df0fd838 fix: setting up node with modified config (#3036) 2024-09-16 16:30:38 +03:00
gabrielmer
2a596f4c77 adding a dynamic sleep interval in the connectivity loop (#3031) 2024-09-12 22:49:47 +02:00
Ivan FB
b120da2a18 chore: libwaku reduce repetitive code by adding a template handling resp returns (#3032) 2024-09-11 18:11:59 +02:00
Ivan FB
b1fd3ef204 test: avoid too verbose rln test (#3029) 2024-09-11 10:22:00 +02:00
Ivan FB
004b56e422 chore: libwaku - extending the library with peer_manager and peer_exchange features (#3026)
* libwaku: get peerids by protocol and peer exchange request
2024-09-11 10:13:54 +02:00
fryorcraken
723b009b20 chore: use submodule nph in CI to check lint (#3027) 2024-09-11 11:51:42 +10:00
gabrielmer
43bea3c476 chore: deprecating pubsub topic (#2997) 2024-09-10 15:07:12 -06:00
Ivan FB
f34a044ccf chore: lightpush - error metric less variable by only setting a fixed string (#3020) 2024-09-10 17:30:09 +02:00
Ivan FB
aefb7fb73d fix: get back health check for postgres legacy (#3010) 2024-09-10 15:07:53 +02:00
NagyZoltanPeter
dabb4eb60a Update waku-fleet-dashboard from latest (v142) Grafana (#3025) 2024-09-10 10:02:21 +02:00
gabrielmer
6fd17549aa Merge pull request #3009 from waku-org/release/v0.32
chore: update master from release/v0.32
2024-09-09 10:01:03 -06:00
Ivan FB
5f2d87ec71 chore: Bump dependencies for v0.33 (#3017) 2024-09-09 10:45:14 +02:00
Ivan FB
50e15746d1 chore: enhance libpq management (#3015)
* db_postgres: register pg socket fd to chronos better data available awareness
* waku_store protocol: better logs to track time and new store metrics
2024-09-06 11:33:15 +02:00
Ivan FB
bed16d6a4a Update prepare_release.md (#3011)
add new step to review https://github.com/waku-org/docs.waku.org/blob/develop/docs/guides/nwaku/config-options.md
2024-09-04 17:19:30 +02:00
Ivan FB
b0eb488b13 update prepare_release.md add Status community link (#3013) 2024-09-04 17:19:06 +02:00
Simon-Pierre Vivier
ed866321a0 chore: per limit split of PostgreSQL queries (#3008) 2024-09-04 10:17:28 -04:00
NagyZoltanPeter
930d2a8b04 chore: Added metrics to liteprotocoltester (#3002)
* Added metrics to tests
* Fix liteprotocoltester docker files with libnegentropy COPY
* docker compose with waku-sim simulation now has a test performance dashboard at localhost:3033

Mention dashboard in Readme

* Update apps/liteprotocoltester/statistics.nim

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

* indent fix, more stable finding of service/bootstrap nodes, pre-set for TWN

---------

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2024-09-04 08:35:51 +02:00
gabrielmer
3ab6d99fe6 chore: update changelog for v0.32.0 release (#2993) 2024-08-30 11:07:12 -06:00
Ivan FB
191035ffd2 libwaku better params validation and a bit more clarity (#3005) 2024-08-29 22:57:23 +02:00
Ivan FB
236547ec7d chore: Better timing and requestId detail for slower store db queries (#2994)
* Better timing and requestId detail for store db queries slower than two seconds
* Adapt tests and client to allow sending custom store requestId
2024-08-29 22:56:14 +02:00
Ivan FB
97019896ac chore: remove unused setting from external_config.nim (#3004) 2024-08-29 17:54:37 +02:00
Ivan FB
6eff20507a libwaku: exposing more features (#3003)
- Allow to start or store discv5
- Expose lightpush request operation
- Expose list of connected and mesh peers
- Expose store client
2024-08-29 14:29:02 +02:00
Simon-Pierre Vivier
95ac6e6c87 fix waku sync config defaults (#3001) 2024-08-28 08:38:39 -06:00
Simon-Pierre Vivier
853025284b fix waku sync config defaults (#3001) 2024-08-28 10:26:38 -04:00
Ivan FB
f1db75262b chore: delivery monitor for store v3 reliability protocol (#2977)
- Use of observer observable pattern to inform delivery_monitor about subscription state
- send_monitor becomes a publish observer of lightpush and relay
- deliver monitor add more protection against possible crash and better logs
- creating a separate proc in store client for delivery monitor
2024-08-27 16:49:46 +02:00
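A small sketch of the observer/observable wiring described above, with hypothetical names: the publishing side (relay/lightpush) notifies a publish observer, and the delivery monitor records the published hashes it still has to confirm via a store node.

```nim
type
  # Hypothetical observer interface: notified after every publish.
  PublishObserverSketch = ref object of RootObj

method onMessagePublished(o: PublishObserverSketch, msgHash: string) {.base.} =
  discard

type
  DeliveryMonitorSketch = ref object of PublishObserverSketch
    pending: seq[string]   # message hashes still awaiting store confirmation

method onMessagePublished(o: DeliveryMonitorSketch, msgHash: string) =
  # Called by the publishing side (relay or lightpush) right after a publish.
  o.pending.add(msgHash)

proc notifyPublished(observers: seq[PublishObserverSketch], msgHash: string) =
  # The publisher keeps a list of observers and notifies all of them.
  for obs in observers:
    obs.onMessagePublished(msgHash)
```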
gabrielmer
64e6658202 fix: libnegentropy integration (#2996) 2024-08-24 10:04:45 -06:00
gabrielmer
7057eacee2 fix: libnegentropy integration (#2996) 2024-08-24 10:00:19 -06:00
Darshan K
fd653ef0da fix: peer-exchange issue (#2889) 2024-08-23 23:31:30 +05:30
a94f571077 fix: copy libnegentropy.so from nim-build image (#2991)
We shouldn't assume it exists on the host.

Introduced by:
https://github.com/waku-org/nwaku/pull/2403

Signed-off-by: Jakub Sokołowski <jakub@status.im>
2024-08-22 11:28:53 -06:00
b2cbc7cbca fix: copy libnegentropy.so from nim-build image (#2991)
We shouldn't assume it exists on the host.

Introduced by:
https://github.com/waku-org/nwaku/pull/2403

Signed-off-by: Jakub Sokołowski <jakub@status.im>
2024-08-22 11:27:48 -06:00
Ivan FB
ebea143031 chore: libwaku retrieve my enr and adapt golang example (#2987) 2024-08-22 12:01:14 +02:00
Ivan FB
8a5a589202 chore: Update changelog v0.31.1 (#2985) 2024-08-22 00:55:40 +02:00
Ivan FB
2e8f2f0076 chore: ANALYZE messages query should be performed regularly (#2986)
---------

Co-authored-by: Richard Ramos <info@richardramos.me>
2024-08-21 19:17:08 +02:00
NagyZoltanPeter
b68cc07261 Distinction between gross/net trafic in bandwidth per shard metric, added bandwidths and request rate panels to single node and fleet dashboards (#2920) 2024-08-21 17:10:29 +02:00
NagyZoltanPeter
3212459f77 chore: liteprotocoltester for simulation and for fleets (#2813)
* Added phase 2 - waku-simulatior integration in README.md

* Enhancement on statistics reports, added list of sent messages with hash, fixed latency calculations

* Enable standalone running of liteprotocoltester against any waku network/fleet

* Fix missing env vars on run_tester_node.sh

* Adjustment on log levels, fix REST initialization

* Added standalone docker image build; fine-tuned duplicate detection and logging.

* Adjustments for waku-simulator runs

* Extended liteprotocoltester README.md with docker build

* Fix test inside docker service node connectivity failure

* Update apps/liteprotocoltester/README.md

Co-authored-by: gabrielmer <101006718+gabrielmer@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: gabrielmer <101006718+gabrielmer@users.noreply.github.com>
Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

* Explain minLatency calculation in code comment

---------

Co-authored-by: gabrielmer <101006718+gabrielmer@users.noreply.github.com>
Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2024-08-21 14:54:18 +02:00
fryorcraken
a4c71f01e5 chore: lock in nph version and add pre-commit hook (#2938) 2024-08-20 15:14:35 +10:00
gabrielmer
4b9bee99a8 chore: logging received message info via onValidated observer (#2973) 2024-08-19 14:13:28 +02:00
gabrielmer
3f641dff60 chore: deprecating protected topics in favor of protected shards (#2983) 2024-08-19 12:56:22 +02:00
gabrielmer
90b4dc89ff chore: rename NsPubsubTopic (#2974) 2024-08-19 11:29:35 +02:00
Ivan FB
73fbe5c337 postgres_driver limit max num hashes to 100 (#2976) 2024-08-19 11:12:31 +02:00
fryorcraken
17b23722f3 chore: install dig (#2975)
Install `dig` to enable automatic detection of domain names to increase support of WSS.
2024-08-19 09:22:26 +02:00
richΛrd
4a89875a36 chore: print WakuMessageHash as hex strings (#2969) 2024-08-14 21:04:20 +02:00
gabrielmer
1f3162ae5f avoid using the msg key in chronicles (#2970) 2024-08-14 16:40:08 +02:00
gabrielmer
f094c671ca chore: updating dependencies for release 0.32.0 (#2971) 2024-08-14 16:38:31 +02:00
Ivan FB
60e2fd90d3 chore: bump negentropy to latest master (#2968)
Submodule vendor/negentropy 311a21a22..f15207699:
  > Merge pull request #6 from waku-org/fix/add-missing-include
  > Merge pull request #7 from waku-org/avoid-use-pragma-once
2024-08-13 18:28:13 +02:00
Simon-Pierre Vivier
301ce8068c feat: Nwaku Sync (#2403)
* feat: Waku Sync Protocol

* feat: state machine (#2656)

* feat: pruning storage mechanism (#2673)

* feat: message transfer mechanism & tests (#2688)

* update docker files

* added ENR field for sync & misc. fixes

* adding new sync range param & fixes

---------

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
Co-authored-by: Prem Chaitanya Prathi <chaitanyaprem@gmail.com>
2024-08-13 07:27:34 -04:00
gabrielmer
459221f93d pruning excess in relay connections (#2965) 2024-08-12 17:59:11 +02:00
Ivan FB
6ae46c5fff faster retention policy when error and use of detach finalize when needed (#2966) 2024-08-12 10:47:01 +02:00
richΛrd
71946b911f fix: return on insert error (#2956) 2024-08-11 21:35:04 -04:00
Ivan FB
696587fdac chore: Optimize hash queries with lookup table (#2933)
* Upgrade Postgres schema to add messages_lookup table
* Perform optimized query for messageHash-only queries
2024-08-08 21:46:08 +02:00
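An illustrative sketch (hypothetical table and column names, not the driver's actual SQL) of why the lookup table helps: hash-only store queries can be answered from a narrow (messageHash, timestamp) table instead of scanning the wide messages table.

```nim
# Illustrative only: hypothetical table and column names.
proc storeSelectQuery(hashesOnly: bool): string =
  if hashesOnly:
    # Narrow lookup table keyed by message hash: cheap to scan and index.
    "SELECT messageHash, timestamp FROM messages_lookup WHERE messageHash = ANY($1)"
  else:
    # Full query against the wide messages table with the remaining filters.
    "SELECT * FROM messages WHERE pubsubTopic = $1 AND timestamp BETWEEN $2 AND $3"
```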
Ivan FB
3e6b2ea683 networkmonitor: fix compilation issue (#2964) 2024-08-08 20:11:51 +02:00
Simon-Pierre Vivier
6b22823b64 feat: misc. updates for discovery network analysis (#2930)
added metrics, a way to start without RLN and a new avg latency algorithm
2024-08-07 14:58:28 -04:00
Aaryamann Challani
b839b1c81f chore(keystore): verbose error message when credential is not found (#2943) 2024-08-07 11:57:03 +02:00
Darshan K
6748142f29 chore: upgrade peer exchange mounting (#2953) 2024-08-06 13:27:25 +05:30
fcc11a7cd9 chore: replace statusim.net instances with status.im (#2941)
Use of the `statusim.net` domain has been deprecated since March:
https://github.com/status-im/infra-shards/commit/7df38c14

Also adjust test to match enr with multiaddresses.

Signed-off-by: Jakub Sokołowski <jakub@status.im>
2024-08-05 12:57:43 +02:00
Álex
1ab665ce2c test(rln): Implement rln tests (#2639)
* Implement tests.
* Clean coding.
2024-08-02 16:43:22 +02:00
Ivan FB
1ce87c49a8 lightpush better feedback in case the lightpush service node does not have peers (#2951) 2024-08-02 09:45:05 +02:00
gabrielmer
f10a604764 don't start node when it's discv5 only (#2947) 2024-08-01 23:28:00 +03:00
Ivan FB
1fe23b8a6a prepare_release.md simplify and relax CCs usage in status.staging (#2945) 2024-07-31 10:26:10 +02:00
gabrielmer
705ae45363 chore: updating doc reference to https rpc (#2937) 2024-07-31 11:12:14 +03:00
Prem Chaitanya Prathi
2614d93566 fix: network monitor improvements (#2939) 2024-07-30 16:56:49 +03:00
Ivan FB
f9552e133b chore: Simplification of store legacy code (#2931) 2024-07-30 14:05:23 +02:00
Simon-Pierre Vivier
2134ad76b4 feat: store resume (#2919)
Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2024-07-30 07:23:39 -04:00
Ivan FB
65caba9abd Merge pull request #2915 from waku-org/release/v0.31 2024-07-30 12:34:18 +02:00
Simon-Pierre Vivier
7fdabe5ad9 chore: add peer filtering by cluster for waku peer exchange (#2932) 2024-07-29 15:53:43 -04:00
Simon-Pierre Vivier
43e66939b5 fix: add back waku discv5 metrics (#2927)
Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2024-07-26 16:18:14 -04:00
Darshan K
48a79d3012 fix: update and shift unittest (#2934)
* fix: update and shift location of unit test
2024-07-26 16:57:34 +05:30
gabrielmer
aaf2b88c62 bumping nim-bearssl (#2936) 2024-07-26 13:49:29 +03:00
gabrielmer
6ca28cd74d including UTC time in logs and logging timestamp (#2926) 2024-07-26 12:45:44 +02:00
Sasha
1d850e43dc chore: return all connected peers from REST API (#2923)
* Remove the condition of gathering connected peers with relay and user req/resp protocols.
* Return PeerExchange protocol support of connected nodes with /admin/peers
* Added test for checking return of PeerExchange mounted protocol of connected peer by GET /admin/peers

---------

Co-authored-by: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com>
2024-07-23 12:58:56 +02:00
gabrielmer
aa9c30655c chore: adding lint job to the CI (#2925) 2024-07-23 13:57:24 +03:00
Darshan K
ad6f6c6bac fix: handle rln-relay-message-limit (#2867)
* fix: enforcing rln-contract max message limit and resolve early
2024-07-22 22:28:45 +05:30
gabrielmer
d9a48321d2 chore: improve sonda dashboard (#2918) 2024-07-19 14:31:23 +03:00
Ivan FB
90410f8b80 chore: update CHANGELOG.md for v0.31.0 (#2912)
Co-authored-by: gabrielmer <101006718+gabrielmer@users.noreply.github.com>
2024-07-17 18:42:36 +02:00
NagyZoltanPeter
718e54f80d chore: Add new custom built and test target to make in order to enable easy build or test single nim modules (#2913)
* Add new custom built and test target to make in order to enable easy build or test single nim modules
* Extend README.md describe how to use it
2024-07-17 15:21:37 +02:00
gabrielmer
35d7e51fc3 setting filter handling logs to trace (#2914) 2024-07-17 12:52:44 +02:00
gabrielmer
5d83d4a6fe setting filter handling logs to trace (#2914) 2024-07-17 13:29:53 +03:00
Ivan Folgueira Bande
eddd990c33 update CHANGELOG.md for v0.31.0 2024-07-16 17:23:14 +02:00
NagyZoltanPeter
ca634ef3ba feat: DOS protection of non relay protocols - rate limit phase3 (#2897)
* DOS protection of non relay protocols - rate limit phase3:
- Enhanced TokenBucket to be able to add compensation tokens based on previous usage percentage,
- per-peer rate limiter 'PeerRateLimiter' applied on waku_filter_v2 with an opinionated default for the acceptable request rate
- Add traffic metrics to filter message push
- RequestRateLimiter added to combine simple token-bucket limiting of request counts with per-peer usage tracking over time, preventing some peers from overusing the service
  (although rule-violating peers are currently not disconnected; their requests simply will not be served)
- TimedMap utility created (inspired by and taken from libp2p's TimedCache), which serves as a forgiving mechanism for peers that had been overusing the service.
- Added more tests
- Fix rebase issues
- Applied new RequestRateLimiter for store and legacy_store and lightpush
* Incorporate review comments, typos, file/class naming and placement changes.
* Add issue link reference of the original issue with nim-chronos TokenBucket
* Make TimedEntry of TimedMap private and not mixable with similar named in libp2p
* Fix review comments, renamings, const instead of values and more comments.
2024-07-16 15:46:21 +02:00
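A self-contained sketch of the token-bucket idea behind this rate-limiting work, using std/times and deliberately simplified semantics (whole-budget refill per period, no usage-based compensation, no per-peer table); it is not the RequestRateLimiter implementation itself.

```nim
import std/times

type
  TokenBucketSketch = object
    capacity: int
    tokens: int
    period: Duration
    lastRefill: Time

proc initTokenBucketSketch(capacity: int, period: Duration): TokenBucketSketch =
  TokenBucketSketch(capacity: capacity, tokens: capacity,
                    period: period, lastRefill: getTime())

proc tryConsume(b: var TokenBucketSketch, now = getTime()): bool =
  # Refill the full budget once per period (the real RequestRateLimiter also
  # compensates peers based on their previous usage percentage).
  if now - b.lastRefill >= b.period:
    b.tokens = b.capacity
    b.lastRefill = now
  if b.tokens > 0:
    dec b.tokens
    true
  else:
    false

when isMainModule:
  var bucket = initTokenBucketSketch(capacity = 2, period = initDuration(seconds = 1))
  assert bucket.tryConsume()
  assert bucket.tryConsume()
  assert not bucket.tryConsume()   # third request within the period is rejected
```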
Ivan FB
b196bc097b chore: simple PR to enhance postgres and retention policy logs (#2884) 2024-07-15 20:58:31 +02:00
Ivan FB
d6e53631e5 chore: Update master from release v0.30 (#2908)
* chore(rln): rln message limit to 100 (#2883)
* postgres_driver: add more error handling when creating partitions
   Given that multiple nodes can be connected to the same database,
   it can happen that another node already did something that this node
   was about to do. In this commit, we handle the possible "interleaved"
   partition creation.

---------

Co-authored-by: Alvaro Revuelta <alvrevuelta@gmail.com>
2024-07-15 18:00:44 +02:00
Ivan FB
7332f03f06 archive legacy run report metrics every 30 minutes instead of 1 (#2903) 2024-07-15 17:39:21 +02:00
gabrielmer
77c19ceda7 logging content topic of spam messages (#2906) 2024-07-15 16:43:53 +03:00
gabrielmer
53893c1b28 chore: improving logging under debugDiscv5 flag (#2899) 2024-07-15 10:55:31 +03:00
Simon-Pierre Vivier
d60ff3e0e6 chore(archive): archive and drivers refactor (#2761)
* queue driver refactor (#2753)
* chore(archive): archive refactor (#2752)
* chore(archive): sqlite driver refactor (#2754)
* chore(archive): postgres driver refactor (#2755)
* chore(archive): renaming & copies (#2751)
* posgres legacy: stop using the storedAt field
* migration script 6: we still need the id column
  The id column is needed because it contains the message digest
  which is used in store v2, and we need to keep support to
  store v2 for a while
* legacy archive: set target migration version to 6
* waku_node: try to use wakuLegacyArchive if wakuArchive is nil
* node_factory, waku_node: mount legacy and future store simultaneously
  We want the nwaku node to simultaneously support store-v2
  requests and store-v3 requests.
  Only the legacy archive is in charge of archiving messages, and the
  archived information is suitable to fulfill both store-v2 and
  store-v3 needs.
* postgres_driver: adding temporary code until store-v2 is removed

---------

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
Co-authored-by: gabrielmer <101006718+gabrielmer@users.noreply.github.com>
Co-authored-by: Ivan Folgueira Bande <ivansete@status.im>
2024-07-12 18:19:12 +02:00
gabrielmer
49d6adca3b feat: sonda tool (#2893) 2024-07-12 11:05:13 +03:00
fryorcraken
9bd8078697 docs: new release process to include Status fleets (#2825) 2024-07-12 16:46:37 +10:00
Aaryamann Challani
6ef45927f1 fix(rln_keystore_generator): improve error handling for unrecoverable failure (#2881) 2024-07-10 19:12:49 +02:00
Ivan FB
9a9b773546 chore: sqlite make sure code is always run (#2891) 2024-07-10 18:34:28 +02:00
gabrielmer
1071ffc6c8 chore: deprecating named sharding (#2723) 2024-07-09 18:36:12 +03:00
gabrielmer
c2b49508c2 setting connectivity loop interval to 30 seconds (#2878) 2024-07-09 17:33:18 +03:00
Ivan FB
36b4a6fff3 postgres_driver: better partition creation without exclusive access (#2887) 2024-07-09 13:41:29 +02:00
Ivan FB
13316201f7 chore: Bump dependencies for v0.31.0 (#2885)
* bump_dependencies.md: add nim-results dependency
* change imports stew/results to results
* switching to Nim 2.0.8
* waku.nimble: reflect the requirement nim 1.6.0 to 2.0.8
  Adding --mm:refc as nim 2.0 enables a new garbage collector that we're
  not yet ready to support
* adapt waku code to Nim 2.0
* gcsafe adaptations because Nim 2.0 is more strict
2024-07-09 13:14:28 +02:00
Darshan K
8da2a9c0a5 chore: refactor relative path to better absolute (#2861) 2024-07-06 00:03:38 +02:00
Ivan FB
3a93d13067 Merge pull request #2874 from waku-org/update-master-from-release-v30
chore: Update master from release v30
2024-07-04 15:51:59 +02:00
Ivan FB
7ec831fd9d Merge branch 'master' into update-master-from-release-v30 2024-07-03 22:52:15 +02:00
Ivan Folgueira Bande
ed5f611f30 CHANGELOG.md fix simple typo 2024-07-03 22:42:38 +02:00
Ivan Folgueira Bande
147db89a80 CHANGELOG.md: setting 0.30.1 as next public release 2024-07-03 18:22:51 +02:00
Alvaro Revuelta
9f1f9f264d chore: use sepolia testnet (#2872) 2024-07-03 18:11:11 +02:00
Ivan FB
533d6fbe0d Merge pull request #2868 from waku-org/update-master-from-release-v0.30
chore: Update master from release v0.30 (#2866)

* CHANGELOG.md add info for v0.30.0
* fix(rln-relay): clear nullifier log only if length is over max epoch gap (#2836)
* chore: add TWN parameters for RLNv2 (#2843)
* fix(rln): nullifierlog vulnerability (#2855)
* chore(rln-relay): add chain-id flag to wakunode and restrict usage if mismatches rpc provider (#2858)
    
---------

Co-authored-by: Aaryamann Challani <43716372+rymnc@users.noreply.github.com>
Co-authored-by: Alvaro Revuelta <alvrevuelta@gmail.com>
2024-07-02 19:15:44 +02:00
Ivan FB
fcba68570e Update CHANGELOG.md
Co-authored-by: Aaryamann Challani <43716372+rymnc@users.noreply.github.com>
2024-07-02 10:29:20 +02:00
Ivan FB
7a4067cf97 Update CHANGELOG.md
Co-authored-by: Aaryamann Challani <43716372+rymnc@users.noreply.github.com>
2024-07-02 10:29:08 +02:00
Ivan Folgueira Bande
b574636e72 Merge branch 'release/v0.30' into update-master-from-release-v0.30 2024-07-02 10:16:40 +02:00
Ivan Folgueira Bande
1454bcb29d CHANGELOG.md: add better details of version v0.30.0 2024-07-01 17:14:38 +02:00
Ivan Folgueira Bande
7705737e04 CHANGELOG.md: simply update the date to reflect the current release date 2024-07-01 14:35:00 +02:00
gabrielmer
3e8094a4ef chore: saving agent and protoVersion in peerStore (#2860) 2024-07-01 13:29:14 +02:00
Darshan K
d41280cc7a chore: unit test for duplicate message push (#2852)
* chore: add unit test for testing duplicate message push with timedcache

* chore: update according to better naming convention

---------

Co-authored-by: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com>
2024-06-28 18:16:06 +05:30
Darshan K
7ad9722ecf chore: remove all pre-nim-1.6 deadcode from codebase (#2857) 2024-06-28 16:04:57 +05:30
Aaryamann Challani
e093af4c12 chore(rln-relay): add chain-id flag to wakunode and restrict usage if mismatches rpc provider (#2858) 2024-06-28 11:19:16 +02:00
Alvaro Revuelta
2d9f12fee8 fix(rln): nullifierlog vulnerability (#2855) 2024-06-28 09:25:10 +02:00
fryorcraken
19d79384bb chore(nim-chronos): bump submodule (#2850) 2024-06-28 10:50:57 +10:00
NagyZoltanPeter
35509ed4fd feat: Added proper per shard bandwidth metric calculation (#2851)
* Added proper per-shard bandwidth metric calculation and proper logging of in/out messages
* Changed rate limit metrics for dashboard
* Updated monitoring dashboard for bandwidth and rate metrics
2024-06-28 02:48:29 +02:00
richΛrd
9362948a02 chore: ignore arbitrary data stored in multiaddrs enr key (#2853) 2024-06-27 10:01:47 -04:00
Alvaro Revuelta
393d65faf9 chore: add TWN parameters for RLNv2 (#2843) 2024-06-27 11:45:21 +02:00
gabrielmer
67c6b2142c chore: adding origin to peers admin endpoint (#2848) 2024-06-26 17:59:12 +02:00
gabrielmer
5f8a45c5df chore: adding discv5 logs (#2811) 2024-06-26 14:25:58 +02:00
Ivan FB
75ed5d14e3 chore: archive.nim - increase the max limit of content topics per query to 100 (#2846) 2024-06-26 12:24:15 +02:00
Darshan K
19c092869f fix: duplicate message forwarding in filter service (#2842)
* fix: resolve duplicate message forwarding for the filter service

* chore: update little flow

* fix: update implementation using timed cache method

* chore: simple format change

* chore: simple format change

* chore: update put function location

* chore: update according to suggestion
2024-06-26 01:05:03 +05:30
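A minimal, stand-alone sketch of the timed-cache deduplication idea used for this fix: remember each message hash for a short window and treat anything seen within that window as a duplicate. The real code relies on a TimedCache from nwaku's dependencies; this version only illustrates the behaviour.

```nim
import std/[tables, times]

type
  TimedSeenCacheSketch = object
    window: Duration
    seen: Table[string, Time]   # msgHash -> time first seen

proc initTimedSeenCacheSketch(window: Duration): TimedSeenCacheSketch =
  TimedSeenCacheSketch(window: window, seen: initTable[string, Time]())

proc isDuplicate(c: var TimedSeenCacheSketch, msgHash: string,
                 now = getTime()): bool =
  # Seen within the window -> duplicate, do not forward again.
  if msgHash in c.seen and now - c.seen[msgHash] < c.window:
    return true
  # Otherwise (new, or the old entry expired) record it and forward.
  c.seen[msgHash] = now
  false
```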
Aaryamann Challani
ec3d02a028 fix(rln-relay): clear nullifier log only if length is over max epoch gap (#2836) 2024-06-24 13:29:56 +02:00
gabrielmer
42f6b4dcb7 fix: only set disconnect time on left event (#2831) 2024-06-24 10:20:09 +02:00
Darshan K
493ca50a2e chore: update content-topic parsing for filter (#2835)
* chore: update content parsing for filter

* chore: update according to suggestion

* chore: update according to suggestion
2024-06-21 17:47:44 +05:30
Darshan K
944b044e93 chore: better descriptive log (#2826)
* chore: update logs with topic description & debug msg

* chore: update unit according to error msg

* chore: update rest unit according to error msg

* chore: add content-topic with debug msg
2024-06-20 18:38:55 +05:30
Ivan Folgueira Bande
2c89d652d4 CHANGELOG.md add info for v0.30.0 2024-06-20 15:07:19 +02:00
Aaryamann Challani
bc3a1851a6 chore(zerokit): bump submodule (#2830) 2024-06-20 14:56:12 +02:00
Aaryamann Challani
aa16002a4e feat(rlnv2): clean fork of rlnv2 (#2828)
* chore(rlnv2): contract interface changes (#2770)
* fix: tests
* fix: remove stuint[32]
* chore(submodule): update zerokit submodule to v0.5.1 (#2782)
* fix: remove cond comp for lightpush test
* fix: ci and nonceManager
2024-06-20 14:55:50 +02:00
Aaryamann Challani
6a7fc4c49b chore(zerokit): bump submodule (#2830) 2024-06-20 14:46:16 +02:00
gabrielmer
57ec129a48 stop connecting to out peers until target is reached (#2823) 2024-06-20 12:16:15 +02:00
Aaryamann Challani
7e4f18cda7 feat(rlnv2): clean fork of rlnv2 (#2828)
* chore(rlnv2): contract interface changes (#2770)
* fix: tests
* fix: remove stuint[32]
* chore(submodule): update zerokit submodule to v0.5.1 (#2782)
* fix: remove cond comp for lightpush test
* fix: ci and nonceManager
2024-06-20 11:35:21 +02:00
Ivan FB
cd3729ed86 Merge pull request #2827 from waku-org/release/v0.29
chore: simple PR from release/v0.29 to fix the tag history
2024-06-20 10:59:34 +02:00
gabrielmer
3403716b4f fix: adding peer exchange peers to the peerStore (#2824) 2024-06-20 10:46:40 +02:00
Ivan FB
88656c4131 CHANGELOG.md named sharding deprecation announcement
Co-authored-by: gabrielmer <101006718+gabrielmer@users.noreply.github.com>
2024-06-20 10:25:31 +02:00
Ivan FB
914b6f81ad chore: merging release v0.29 into master (#2802)
* bump nim-libp2p from v1.2.0 to v1.3.0
* Update changelog for v0.29.0

Co-authored-by: gabrielmer <101006718+gabrielmer@users.noreply.github.com>
2024-06-20 09:39:28 +02:00
6d9705f039 fix(ci): use --tags to match non-annotated tags (#2814)
Currently tags used in the project are a mix of annotated and
non-annotated/lightweight tags. Without the `--tags` flag, `git describe`
will not take non-annotated tags into account.

Signed-off-by: Jakub Sokołowski <jakub@status.im>
2024-06-19 17:54:19 +02:00
Ivan FB
0b09e3abdc add epic and link feature_request.md (#2820) 2024-06-19 17:53:36 +02:00
gabrielmer
359f6ff8d5 fix: update peers ENRs in peer store in case they are updated (#2818) 2024-06-19 17:29:55 +02:00
Ivan Folgueira Bande
3f17b1d9bb update CHANGELOG.md to remove comment about observers for message log 2024-06-18 18:45:05 +02:00
gabrielmer
e5bad0f2f1 fix: revert "chore: adding observers for message logging (#2800)" (#2815) 2024-06-17 14:30:30 +02:00
gabrielmer
2e1cbcf89c fix: revert "chore: adding observers for message logging (#2800)" (#2815) 2024-06-17 13:14:05 +02:00
Darshan K
88d25755db fix: mount metadata in wakucanary (#2793)
* chore: integrate cluster id and shards to waku node.
2024-06-14 18:29:42 +05:30
Ivan Folgueira Bande
68c2e99a37 Update CHANGELOG.md for v0.29 2024-06-14 11:17:13 +02:00
Ivan Folgueira Bande
b775ae976f Update changelog for v0.29.0 2024-06-14 10:31:47 +02:00
Ivan Folgueira Bande
76feea3965 bump nim-libp2p from v1.2.0 to v1.3.0 2024-06-14 10:29:58 +02:00
Ivan FB
0bd3087de9 postgres_driver: better sync lock in partition creation (#2809)
With the original approach there were cases where one connection
acquired the lock and another connection tried to release the same lock,
causing an unrecoverable failure which made the node terminate.
2024-06-14 10:07:41 +02:00
Akhil
0b97106cbe feat: RLN proofs as a lightpush service (#2768) 2024-06-13 21:10:00 +04:00
gabrielmer
78c9172aae chore: adding observers for message logging (#2800) 2024-06-13 18:35:56 +02:00
gabrielmer
767e89d5f1 added message to failed assert (#2805) 2024-06-12 22:27:10 +02:00
NagyZoltanPeter
641aa48696 Use random ports in rest tests instead of fixed ones (#2804) 2024-06-12 15:07:33 +02:00
Ivan FB
1436ef73c0 chore: update master from release/v0.28 (#2801)
* update changelog to reflect the patch release v0.28.1
2024-06-12 09:26:40 +02:00
NagyZoltanPeter
09522ff2fb fix: multi nat initialization causing dead lock in waku tests + serialize test runs to avoid timing and port occupied issues (#2799)
* Prevent multiple NAT module initializations that cause a deadlock in the NAT refresh thread teardown during tests.
* Set NPROC to 1, since parallel test runs can lead to timing and port allocation issues

Co-authored-by: gabrielmer <101006718+gabrielmer@users.noreply.github.com>
2024-06-12 07:49:55 +02:00
NagyZoltanPeter
50a3c9e9fd Remove filterTimeout configuration option as it remained after removing legacy filter where it belong to. (#2797) 2024-06-11 14:07:46 +02:00
gabrielmer
4bd46403b5 updating nim-bearssl to release 0.2.3 (#2796) 2024-06-10 18:07:16 +02:00
Ivan FB
beac7f5faa chore: set msg_hash logs to notice level (#2737) 2024-06-10 15:56:55 +02:00
gabrielmer
49372441c5 fix: increase on chain group manager starting balance (#2795) 2024-06-10 14:31:16 +02:00
Darshan K
6250995765 fix: more detailed logs to differentiate shards with peers (#2794) 2024-06-10 13:40:18 +05:30
Ivan FB
f958512e46 chore: Minor enhancements (#2789)
* archive.nim: reduce the database report interval from 1 to 30 min
  This counts the number of rows with "select count(1) from messages"
  which is quite intense and we shouldn't run it every minute
* aside cleanup
2024-06-09 23:09:23 +02:00
Ivan FB
4db31d2506 chore: postgres_driver - acquire/release advisory lock when creating partitions (#2784) 2024-06-07 17:54:26 +02:00
gabrielmer
825f700198 chore: setting fail-fast to false in matrixed github actions (#2787) 2024-06-07 16:06:11 +02:00
Darshan K
34d7e62fbf chore: simple link refactor (#2781) 2024-06-07 13:07:15 +05:30
Ivan FB
dc893e90b4 Update bump_dependencies.md template (#2786)
zero kit dependency should be v0.5.1
2024-06-06 18:24:57 +02:00
Ivan FB
69ce01d2be postgres_driver: simple reformat with nph (#2785) 2024-06-06 12:04:40 +02:00
Ivan FB
98aa45dc30 add new index to optimize the order by storedAt (#2778) 2024-06-06 11:38:58 +02:00
Ivan FB
21a96cea7d postgres partitions: ensure the partition ranges are o'clock (#2776)
Also, skip the error "partition already exists" because that happens
when multiple nodes interact with the same database.
2024-06-05 17:45:38 +02:00
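A small sketch (not the actual driver code) of what hour-aligned ("o'clock") partition ranges amount to: rounding a Unix timestamp down to the hour so that every node computes identical partition boundaries.

```nim
# Sketch only: round an arbitrary Unix timestamp down to the hour so that
# every node derives identical, hour-aligned partition ranges.
const SecondsPerHour = 3600'i64

proc partitionRange(tsSeconds: int64): (int64, int64) =
  let startTs = (tsSeconds div SecondsPerHour) * SecondsPerHour
  (startTs, startTs + SecondsPerHour)

when isMainModule:
  # 2024-06-05 17:45:38 UTC falls in the [17:00:00, 18:00:00) partition
  echo partitionRange(1_717_609_538'i64)
```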
gabrielmer
3fd715cfc2 unifying clusterId to be uint16 (#2777) 2024-06-05 15:32:35 +02:00
Ivan FB
9cc37d037f bump_dependencies.md the nim-libp2p should be updated per tag from now on (#2760) 2024-06-05 09:00:06 +02:00
gabrielmer
b9d0525663 chore: Improving liteprotocoltester stats (#2750) 2024-06-04 22:57:16 +02:00
Darshan K
7242d2aca2 chore: extract common prefixes into a constant for multiple queries (#2747)
* chore: extract select to constant for multiple prefix

* chore: Add a space at the end of selectClause to facilitate better string merging.

Signed-off-by: DarshanBPatel <darshan@status.im>

---------

Signed-off-by: DarshanBPatel <darshan@status.im>
2024-06-03 21:52:53 +05:30
richΛrd
ae52913128 fix(waku_archive): only allow a single instance to execute migrations (#2736) 2024-05-31 12:08:16 -04:00
Ivan FB
e281a008fb postgres_driver.nim: add missing meta field to select queries (#2741) 2024-05-29 22:13:16 +02:00
Ivan FB
20a27def5f release-process.md: add step to merge the release branch back to master (#2734) 2024-05-29 10:09:35 +02:00
Ivan FB
8a51db68bd test_waku_store.nim: add logs to better analyse uncertain flaky tests (#2740) 2024-05-29 10:05:07 +02:00
Ivan FB
07ce86e010 bump dependencies for v0.29 (#2731) 2024-05-29 09:41:28 +02:00
Ivan FB
7353b56a12 Merge pull request #2738 from waku-org/fix/macos-ci
fix: move postgres related tests under linux conditional
2024-05-28 16:46:35 +02:00
Prem Chaitanya Prathi
b13a11ad7c fix: move postgres related tests under linux conditional 2024-05-28 17:24:22 +05:30
Vaclav Pavlin
b1f74e5776 chore(wakucanary): fix filter protocol, add storev3 (#2735) 2024-05-27 23:13:59 +02:00
Simon-Pierre Vivier
6ab1084d46 fix: invalid cursor returning messages (#2724)
Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2024-05-27 10:54:10 -04:00
kaiserd
e40b4d76c0 chore: bump nim-libp2p version (#2661) 2024-05-27 15:58:18 +02:00
Darshan K
9dff107487 feat: push newly released nwaku image with latest-release tag (#2732) 2024-05-27 16:22:48 +05:30
Ivan FB
ddb721b706 Merge pull request #2733 from waku-org/release/v0.28
chore(release): update changelog for v0.28.0 release (#2713)
2024-05-27 10:32:09 +02:00
gabrielmer
17b725afd0 chore(release): merging release/v0.28 branch to master (#2728) 2024-05-24 13:10:13 +02:00
richΛrd
f733d71bd0 fix: do not print the db url on error (#2725) 2024-05-23 18:37:04 -04:00
richΛrd
921509bb51 fix: use when instead of if for adding soname on linux (#2721) 2024-05-23 10:05:53 -04:00
Simon-Pierre Vivier
2db1cf3fcb fix: store v3 bug fixes (#2718) 2024-05-23 08:01:52 -04:00
gabrielmer
57031d5838 chore: link validation process docs to the release process file (#2714) 2024-05-22 11:38:20 +02:00
gabrielmer
4cfa9d710a chore(release): update changelog for v0.28.0 release (#2713) 2024-05-22 09:48:26 +02:00
gabrielmer
e56e96d55f chore(release): update changelog for v0.28.0 release (#2713) 2024-05-22 09:47:06 +02:00
richΛrd
a638ae0598 chore: android support (#2554) 2024-05-21 21:00:22 -04:00
NagyZoltanPeter
16748a986a LiteProtocolTester application and docker compose bundle setup. (#2706)
* Faster image build by copying from a pre-built binary
* Set cluster-id to 0
* Added README.md documentation
2024-05-21 23:03:33 +02:00
Ivan FB
4407ea021f chore: Discovery in libwaku (#2711)
* cwaku_example: add discoveryv5-discovery bool option
* libwaku: implement discovery capabilities
* node_lifecycle_request.nim: better control of possible errors when parsing config
2024-05-21 18:37:50 +02:00
Ivan FB
e9cde49ff0 simple library cleanup of unused imports and duplicated code (#2710) 2024-05-18 15:04:04 +02:00
Ivan FB
044bbf8b40 standardize store types by using camel case instead of snake case (#2709) 2024-05-17 16:56:54 +02:00
Ivan FB
ab3b7df42e chore: libwaku - allow to properly set the log level in libwaku and unify a little (#2708)
* waku.nimble: set properly chronicles compilation flags for static libwaku
* adapt examples to new log setup
2024-05-17 16:28:54 +02:00
Aaryamann Challani
ee18937357 feat(rln-relay): use arkzkey variant of zerokit (#2681) 2024-05-17 14:48:29 +05:30
Ivan FB
1afd67994e chore: waku_discv5, peer_manager - add more logs help debug discovery issues (#2705) 2024-05-16 22:30:51 +02:00
Ivan FB
652fc172d4 chore: generic change to reduce the number of compilation warnings (#2696) 2024-05-16 22:29:11 +02:00
Ivan FB
103ed86021 test_client: simple sleep to try avoid macos CI test failures (#2707) 2024-05-16 18:04:04 +02:00
Ivan FB
6df7df2bb2 test_client: add nil error handling after serverSwitch.start() clientSwitch.start() (#2703) 2024-05-15 12:36:17 +02:00
Akhil
41cdb92be7 feat: Added message size check before relay for lightpush (#2695) 2024-05-15 14:13:13 +04:00
gabrielmer
89361f629e refactor shard parsing (#2699) 2024-05-14 20:17:17 +02:00
Ivan FB
0b2859ed5a chore: move code from wakunode2 to a more generic place, waku (#2670)
* testlib/wakunode.nim: do not use cluster-id == 1, to avoid testing RLN by default
2024-05-13 17:45:48 +02:00
Álex Cabeza Romero
d3a2c4e76d test(sharding): Implement sharding tests (#2603)
* Implement sharding tests.
2024-05-13 17:43:14 +02:00
Álex Cabeza Romero
9dd59c727e test(peer-and-connection-management): Implement tests (#2566)
* Implement peer and connection management tests.
* Fix multiple peers added on initialisation.
* Remove clusterId parameter from newTestWakuNode.
2024-05-13 17:25:44 +02:00
gabrielmer
fb7a7473af chore: closing ping streams (#2692) 2024-05-13 12:07:57 +02:00
Ivan FB
5440dd3fbc chore: Postgres enhance get oldest timestamp (#2687)
* postgres: also consider the existing partitions when getting the oldest timestamp
* test_driver_postgres_query: adapt test to oldest timestamp
2024-05-10 18:31:01 +02:00
Ivan FB
7add1bdc16 Update bump_dependencies.md (#2693)
We will start bumping the `nim-libp2p` from `master` branch and tags from June'24
2024-05-10 17:37:50 +02:00
gabrielmer
bbffa88a0d fix: use await instead of waitFor in async tests (#2690) 2024-05-10 14:13:58 +02:00
gabrielmer
6451aa14b0 feat: adding json string support to bindings config (#2685) 2024-05-10 10:56:17 +02:00
NagyZoltanPeter
e028362086 feat: Added flexible rate limit checks for store, legacy store and lightpush (#2668)
* Added flexible rate limit checks for store, legacy store and lightpush. Also added rate and traffic metrics.

* Fix chat2 after WakuLegacyStoreCodec rename

* Update waku/common/ratelimit.nim

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

* Update waku/common/ratelimit.nim

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

* Update waku/waku_store_legacy/protocol.nim

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

* Fix review findings, added limit to debug logs

---------

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2024-05-09 20:07:49 +02:00
Simon-Pierre Vivier
3e51b9d904 fix: message cache removal crash (#2682) 2024-05-09 10:38:55 -04:00
gabrielmer
42b250b9e9 adding wait after starting node to avoid segfault (#2686) 2024-05-09 11:31:58 +02:00
Simon-Pierre Vivier
9ae579b714 feat: store v3 return pubsub topics (#2676) 2024-05-08 15:35:56 -04:00
Aaryamann Challani
bd8b659247 chore(rln-relay): health check should account for window of roots (#2664)
* test(rln-relay): health check should account for window of roots

* fix: some type-fu

* fix: widen the type vs narrowing

* fix: add extra parens
2024-05-08 17:48:44 +05:30
Ivan FB
c3aa80284f postgres_driver: delete partitions in time retention policy (#2679) 2024-05-07 23:42:01 +02:00
richΛrd
7f7aa59c4b fix: add meta to sqlite migration scripts (#2675) 2024-05-07 09:39:06 -04:00
gabrielmer
4bfe1e3306 chore: updating TWN bootstrap fleet to waku.sandbox (#2638) 2024-05-07 13:37:17 +02:00
Ivan FB
8546ddc719 chore: simplify migration script postgres version_4 (#2674) 2024-05-07 11:20:54 +02:00
Ivan FB
bca1429e39 fix: content_script_version_4.nim: migration failed when dropping a non-existing constraint (#2672) 2024-05-06 18:22:50 +02:00
gabrielmer
9be3221b5d feat: supporting meta field in store (#2609) 2024-05-06 10:20:21 +02:00
Ivan FB
2a7984b951 postgres_driver.nim: debug -> trace for put in PostgresDriver (#2667) 2024-05-03 17:41:14 +02:00
Aaryamann Challani
a07cd90954 fix(filter): log is too large (#2665) 2024-05-03 19:05:24 +05:30
Ivan FB
1d35ca970f refactor: big refactor to add the waku component in libwaku instead of only the waku node (#2658) 2024-05-03 14:07:15 +02:00
Prem Chaitanya Prathi
8e52f12e65 fix: issue #2644 properly (#2663) 2024-05-03 13:40:20 +05:30
Ivan FB
f65eead529 refactor: simplify app.nim and move discovery items to appropriate modules (#2657) 2024-05-01 21:13:08 +02:00
Simon-Pierre Vivier
db72e2b823 fix: store v3 validate cursor & remove messages (#2636) 2024-05-01 14:47:06 -04:00
Aaryamann Challani
44703f2608 fix(waku_keystore): sigsegv on different appInfo (#2654)
* fix(waku_keystore): sigsegv on different appInfo

* fix: field specific errors

* fix: more verbose error logs
2024-05-01 23:05:22 +05:30
Ivan FB
9477c77cd2 chore: log enhancement for message reliability analysis (#2640)
* log enhancement for message reliability analysis

The next modules are touched:
  - waku_node.nim
  - archive.nim
  - waku_filter_v2/protocol.nim
  - waku_relay/protocol.nim

Co-authored-by: gabrielmer <101006718+gabrielmer@users.noreply.github.com>
2024-05-01 10:25:33 +02:00
Aaryamann Challani
8ada4e06f1 fix(rln-relay): persist metadata every batch during initial sync (#2649)
* fix(rln-relay): persist metadata every batch during initial sync

* fix: test

* Apply suggestions from code review

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

* patch: isOkOr template

---------

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2024-04-30 18:52:47 +05:30
Ivan FB
7bab843003 refactor: metrics server. Simplify app.nim module (#2650) 2024-04-30 15:07:17 +02:00
Ivan FB
bbb32362ca waku_node: first of all stop the waku-switch when stopping the waku-node (#2651)
This is aimed to avoid having flaky tests
2024-04-30 12:52:11 +02:00
kaiserd
cd6424be19 chore: change nim-libp2p branch from unstable to master (#2648) 2024-04-30 11:43:22 +02:00
Ivan FB
489d0c1648 waku_node.nim: simplify stop proc (#2645)
There is no need to explicitly stop mounted libp2p protocols
because they are already being stopped after the switch.stop()
is being called
2024-04-29 17:47:18 +02:00
Ivan FB
8f9bcef2c4 rest/store/types: contentTopic -> content_topic (#2646) 2024-04-29 16:19:07 +02:00
Prem Chaitanya Prathi
d45506253b fix: handle named sharding in enr (#2647) 2024-04-29 18:53:49 +05:30
Prem Chaitanya Prathi
e3a515d86a fix: parse shards properly in enr config for non twn (#2633) 2024-04-26 17:51:52 +05:30
NagyZoltanPeter
ebcabd8ed0 chore: Enabling to use a full node for lightpush via rest api without lightpush client configured (#2626)
* Enable using a full node for lightpush via the REST API without a lightpush client configured
2024-04-26 12:42:47 +02:00
Aaryamann Challani
7dd7531a4a chore(rln-relay): resultify rln-relay 1/n (#2607)
* chore(rln-relay): resultify rln-relay 1/n

* fix: v2 too

* fix: for static group manager

* fix: cleanup, make PR digestable

* fix: remove resultified retry wrapper

* fix: cleanup

* fix: cleanup
2024-04-26 11:53:58 +02:00
Ivan FB
750f99ce87 chore: ci.yml - avoid calling brew link libpq --force on macos (#2627)
* ci.yml: avoid calling brew link libpq --force on macos
PRs started to fail due to that but we are not actually running
postgres tests on MacOS

* fix: remove brew linking for test job
* fix: conditional compilation for macos
* fix: remove autoformatted details

---------

Co-authored-by: rymnc <43716372+rymnc@users.noreply.github.com>
2024-04-26 09:02:58 +02:00
Simon-Pierre Vivier
160657a540 fix: proto field numbers & status desc (#2632) 2024-04-25 15:43:21 -04:00
NagyZoltanPeter
c1394bc470 fix: missing rate limit setting for legacy store protocol (#2631) 2024-04-25 17:51:34 +02:00
Simon-Pierre Vivier
665d9e3a06 feat: store v3 (#2431) 2024-04-25 09:09:52 -04:00
NagyZoltanPeter
cc9403f970 chore: an enhanced version of convenient node health check script (#2624) 2024-04-25 10:35:34 +02:00
Aaryamann Challani
1b65a47685 chore(rln-db-inspector): add more logging to find zero leaf indices (#2617)
* chore(rln-db-inspector): add more logging to find zero leaf indices

* fix: assumeEmptyAfter var
2024-04-24 17:11:32 +02:00
Aaryamann Challani
f93e47e9eb fix(rln-relay): enforce error callback to remove exception raised from retryWrapper (#2622) 2024-04-24 17:11:22 +02:00
Ivan FB
963d79aee7 refactor: addition of waku_api/rest/builder.nim and reduce app.nim (#2623) 2024-04-24 15:59:50 +02:00
gabrielmer
34aa2c372d Changing references to rfc.vac.dev (#2619) 2024-04-24 11:31:34 +03:00
NagyZoltanPeter
daa88019d0 chore: Separation of node health and initialization state from rln_relay (#2612)
* Separation of node health and initialization state from the rln_relay status. Make (only) the health endpoint available early and install the others in the last stage of node setup.

* Proper json report from /health, adjusted and fixed test, added convenient script for checking node health

* Stop wakunode2 if configured rest server cannot be started

* Fix wakuRlnRelay protocol existence check

* Fix typo

* Removed unused imports from touched files.

* Added missing /health test for all
2024-04-23 18:53:18 +02:00
Aaryamann Challani
8a2b0dcf7e fix(rln-relay): increase retries for 1 minute recovery time (#2614) 2024-04-23 15:11:14 +02:00
gabrielmer
3ef656fc71 chore: enabling rest api as default (#2600) 2024-04-23 10:23:13 +03:00
Aaryamann Challani
665dd060df fix(ci): unique comment_tag to reference rln version (#2613) 2024-04-22 16:44:59 +02:00
Prem Chaitanya Prathi
9045af9363 fix: don't use WakuMessageSize in req/resp protocols (#2601)
* fix: don't use WakuMessageSize in req/resp protocols
2024-04-20 09:10:52 +05:30
Ivan FB
dc7d036074 refactor: move app.nim and networks_config.nim to waku/factory (#2608) 2024-04-19 20:03:36 +02:00
gabrielmer
2e56eb9c73 chore(release): update changelog for v0.27.0 release (#2596) 2024-04-19 13:10:43 +03:00
Ivan FB
6f09023ce0 docs nph: clarify the version that is needed 0.5.1 (#2605) 2024-04-19 11:34:12 +02:00
Prem Chaitanya Prathi
0fea2816f9 chore: workflow to autoassign PR (#2604) 2024-04-19 14:28:46 +05:30
kaichao
4f63fa9f88 fix: create options api for cors preflight request (#2598) 2024-04-18 18:29:50 +08:00
Ivan FB
1dc7224c48 fix: node restart test issue (#2576)
* test_protocol.nim: enhance test reboot and connect

- Is not necessary to start the node if the switch object has been
already started
- Enable an existing "Relay can receive messages after reboot and
reconnect" test
- Explicit reconnect to peer in "Relay can receive messages after reboot
and reconnect" test

* tests/waku_relay/utils: avoid starting the proto again in newTestSwitch proc
With that, we avoid double start of the protocol.

* bump nim-libp2p
2024-04-18 11:20:39 +02:00
Ivan FB
790b708d11 refactor: start moving discovery modules to waku/discovery (#2587) 2024-04-17 21:48:20 +02:00
NagyZoltanPeter
10d557ad6d Removed remaining of json-rpc reference from connect.md and change to the correct rest api reference page. (#2597) 2024-04-17 09:20:07 +02:00
gabrielmer
5254314645 chore: don't create docker images for users without org's secrets (#2585)
2024-04-17 09:44:18 +03:00
Vaclav Pavlin
7a11b371b0 fix(doc): update REST API docs (#2581) 2024-04-16 13:27:22 +02:00
NagyZoltanPeter
559531749b feat: Added simple, configurable rate limit for lightpush and store-query (#2390)
* feat: Added simple, configurable rate limit for lightpush and store-query
Adjust lightpush REST response to the rate limit, added tests and some fixes
Add REST store query test for rate limit checks and proper error response
Update apps/wakunode2/external_config.nim
Move chronos/tokenbucket to the nwaku codebase with a limited and fixed feature set
Add metrics counter to lightpush rate limits

Co-authored-by: gabrielmer <101006718+gabrielmer@users.noreply.github.com>
2024-04-15 15:28:35 +02:00
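A self-contained token-bucket sketch of the kind of rate limit the commit above introduces; the real implementation is the TokenBucket moved into the nwaku codebase, and every name below is illustrative.

```nim
# Sketch only: `capacity` requests are allowed per `period`; further
# requests are rejected until enough tokens have been replenished.
import std/times

type TokenBucket = object
  capacity: int
  tokens: float
  refillPerSec: float
  lastRefill: float

proc init(T: type TokenBucket, capacity: int, period: Duration): TokenBucket =
  TokenBucket(capacity: capacity,
              tokens: float(capacity),
              refillPerSec: float(capacity) / period.inSeconds.float,
              lastRefill: epochTime())

proc tryConsume(b: var TokenBucket): bool =
  let now = epochTime()
  b.tokens = min(float(b.capacity),
                 b.tokens + (now - b.lastRefill) * b.refillPerSec)
  b.lastRefill = now
  if b.tokens < 1.0:
    return false
  b.tokens -= 1.0
  true

when isMainModule:
  var limiter = TokenBucket.init(capacity = 30, period = initDuration(seconds = 1))
  for i in 1 .. 3:
    echo "request ", i, " allowed: ", limiter.tryConsume()
```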
gabrielmer
55d6b95287 chore: adding migration script adding i_query index (#2578) 2024-04-15 12:57:35 +03:00
gabrielmer
d171add431 chore: bumping chronicles version (#2583) 2024-04-15 10:59:37 +03:00
Aaryamann Challani
080a0adf9b fix(rln-relay): reduce sync time (#2577)
* fix(rln-relay): reduce sync time

* fix: add batch handling of futures to prevent over utilization of cpu

* fix: need to handle the futures on the last iteration when it isn't full
2024-04-12 19:02:48 +03:00
Ivan FB
b92e67ef41 fix: rest store: content_topic -> contentTopic in the response (#2584) 2024-04-12 17:47:32 +02:00
Roman Zajic
67b4351988 chore: add ARM64 support for Linux/MacOS (#2580) 2024-04-12 14:11:35 +08:00
Aaryamann Challani
d5e361d495 chore(rln): update submodule + rln patch version (#2574) 2024-04-09 14:01:35 +03:00
gabrielmer
88983bc135 chore: bumping dependencies for 0.27.0 (#2572) 2024-04-09 11:17:46 +03:00
NagyZoltanPeter
3043bd8155 Extend release process with the need of merging back release branch to master (#2573) 2024-04-09 05:47:03 +02:00
Ivan FB
7a8b86c01d Dockerfile: workaround to allow creation of docker images (#2569) 2024-04-08 11:26:47 +02:00
richΛrd
91a148b852 fix(c-bindings): rln credential path key (#2564) 2024-04-04 02:41:14 -04:00
Alvaro Revuelta
c0609a1a9e fix: cluster-id 0 disc5 issue (#2562) 2024-04-04 08:19:31 +02:00
Alvaro Revuelta
615f8e2eab fix: regex for rpc endpoint (#2563) 2024-04-02 15:14:55 +02:00
Ivan FB
fd3c23386b feat: examples/golang/waku.go add new example (#2559)
* examples/golang/waku.go: add new example
* waku.go: Richard recommendations
https://github.com/waku-org/nwaku/pull/2559#pullrequestreview-1963210599
Not addressing points 3 and 9 in this commit.
* waku.go: allow setting separate callback methods per WakuNode instance
---------
Co-authored-by: richΛrd <info@richardramos.me>
2024-03-28 11:19:16 +01:00
richΛrd
c46f9d5473 feat(c-bindings): rln relay (#2544) 2024-03-27 10:08:53 -04:00
Alvaro Revuelta
10ac14ca86 fix(rln): set a minimum epoch gap (#2555) 2024-03-27 11:36:14 +01:00
Alvaro Revuelta
7a36ce836d fix: fix regression + remove deprecated flag (#2556) 2024-03-26 19:44:55 +01:00
Sergei Tikhomirov
e7fb8b5fb1 feat(incentivization): add codec for eligibility proof and status (#2419)
* incentivization: add codec for eligibility proofs

* add codec for eligibility proof and eligibility status

* address minor comments

* make status code mandatory in eligibility status
2024-03-26 18:25:42 +01:00
richΛrd
4ef6b33887 refactor(c-bindings): node initialization (#2547) 2024-03-26 09:10:25 -04:00
Vaclav Pavlin
f0117967b8 fix(networkmanager): regularly disconnect from random peers (#2553) 2024-03-26 12:04:48 +01:00
Alvaro Revuelta
c432c1bfcc chore: remove deprecated legacy filter protocol (#2507)
* chore: remove deprecated legacy filter protocol

* fix: do not use legacy import in test

* fix: remove legacy test references

* fix: more test fixes, starting filter client

* fix: sigh. more references to remove.

* fix: fix dereferencing error

* fix: fix merge mess up

* fix: sigh. merge tool used tabs.

* fix: more peer manager tests needed fixing

---------

Co-authored-by: Hanno Cornelius <hanno@status.im>
Co-authored-by: Hanno Cornelius <68783915+jm-clius@users.noreply.github.com>
2024-03-25 18:07:56 +00:00
Ivan FB
51bc92a18d postgres_driver.nim: make sure the partition list is loaded in the correct order (#2548) 2024-03-25 18:02:07 +01:00
Simon-Pierre Vivier
70de74a210 fix: remove subscription queue limit (#2551) 2024-03-25 10:33:01 -04:00
Anton Iakimov
40b687c0a5 chore: switch wakuv2 to waku fleet (#2519)
See https://github.com/status-im/infra-nim-waku/issues/91
2024-03-20 16:28:00 +01:00
Ivan FB
0adcdb1c85 fix: peer_manager - extend the number of connection requests to known peers (#2534)
* peer_manager: limit the max num out conns from within the conn loop
2024-03-19 19:07:03 +01:00
Álex Cabeza Romero
279d0dfa7f fix(2491): Fix metadata protocol disconnecting light nodes (#2533)
* Fix metadata protocol disconnecting light nodes.
* Implement test cases.
2024-03-19 16:18:52 +01:00
richΛrd
6eaf94f323 fix(rest): filter/v2/subscriptions response (#2529) 2024-03-18 18:21:06 -04:00
Ivan FB
2b312f09bd docs: create nph.md (#2536) 2024-03-18 22:37:26 +01:00
Ivan FB
588530d5c7 chore: Better postgres duplicate insert (#2535)
* postgres_driver: add ON CONFLICT DO NOTHING in the insert statement
* test_driver_postgres: adapt test to ON CONFLICT DO NOTHING
  The insert no longer fails on a duplicate; it returns a positive response
  when doing a 'put' of a duplicated row. The test is adapted so that
  we just check that the number of messages doesn't grow after
  trying to add a duplicated row.
2024-03-18 15:59:45 +01:00
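The insert shape described above, sketched as a Nim constant; the column list is simplified and does not reflect the actual messages schema.

```nim
# Sketch only: a duplicated row is silently skipped, so a `put` of an
# already-stored message still reports success. Column list simplified.
const InsertMessageQuery = """
  INSERT INTO messages (messageHash, pubsubTopic, contentTopic, payload, storedAt)
  VALUES ($1, $2, $3, $4, $5)
  ON CONFLICT DO NOTHING;
"""
```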
Ivan FB
cf6298ca1f Generic re-style with nph 0.5.1 (#2396) 2024-03-16 00:08:47 +01:00
richΛrd
dde94d4b52 fix(store): retention policy regex (#2532) 2024-03-15 09:46:35 -04:00
Álex Cabeza Romero
297838b145 test(discv5): Implement discv5 tests (#2487)
* Implement discv5 tests.
2024-03-14 19:01:13 +01:00
Álex Cabeza Romero
8e13bfbb65 test(peer-exchange): Implement peer exchange tests (#2464)
* Implement peer exchange tests.
* Refactor, and remove duplicated tests.
* feat(wakunode): Resultify fetch peer exchange peers (#2486)
2024-03-14 17:48:09 +01:00
Vaclav Pavlin
c5960b3133 chore: add 150 kB to msg size histogram metric (#2430) 2024-03-14 12:38:02 +01:00
Ivan FB
cd5c34edb1 chore: content_script_version_2: add simple protection and rename messages_backup if exists (#2531) 2024-03-13 17:18:19 +01:00
richΛrd
b42f2802c1 feat(rest): add support to ephemeral field (#2525) 2024-03-13 08:49:21 -04:00
Alvaro Revuelta
813d0b207c fix: enable autosharding in any cluster (#2505) 2024-03-13 10:58:13 +01:00
kaiserd
5b18537e58 chore(vendor): update nim-libp2p path (#2527) 2024-03-12 18:06:41 +01:00
gabrielmer
72d9ed5b0b chore: adding node factory tests (#2524) 2024-03-12 10:12:44 -05:00
gabrielmer
877c618ef1 chore: factory cleanup (#2523) 2024-03-12 07:44:54 -06:00
Simon-Pierre Vivier
430708ccc6 feat: archive update for store v3 (#2451) 2024-03-12 07:51:03 -04:00
Aaryamann Challani
6d74aa08a9 chore(rln-relay-v2): wakunode testing + improvements (#2501)
* chore(rln-relay-v2): additional testing

* fix: bump librln to v0.4.2 for v2

* fix: catch possible error from the copyFrom

* ci: rename step title for rln-version
2024-03-12 16:20:30 +05:30
NagyZoltanPeter
5e506c5477 chore: update CHANGELOG for v0.26.0 release (#2518)
* CHANGELOG for v0.26.0 release

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2024-03-11 15:15:19 +01:00
richΛrd
1dd3fc1e29 fix: introduce new field for shards in metadata protocol (#2511)
* fix: repeated fields are packed in proto3
* fix: add new field for shards in metadata protobuffers to avoid breaking change and deprecate original field
2024-03-11 10:08:46 -04:00
gabrielmer
92051e95d2 chore: migrating logic from wakunode2.nim to node_factory.nim (#2504) 2024-03-08 16:46:42 -06:00
Ivan FB
1f2e5065d3 rest: rm openapi defs. they are in https://github.com/waku-org/waku-rest-api (#2520) 2024-03-08 19:43:55 +01:00
Aaryamann Challani
d2f30df8c7 fix(rln-relay): handle empty metadata returned by getMetadata proc (#2516)
* fix(rln-relay): silence error on startup when metadata is not found

* chore: fix fetching value from option

* fix: clarity of returned opt
2024-03-08 19:36:22 +05:30
richΛrd
eb80891c1e feat(c-bindings): add function to dealloc nodes (#2499) 2024-03-07 13:53:03 -04:00
Ivan FB
132bb0bbf2 feat: Postgres partition implementation (#2506)
* postgres: first step to implement partition management
* postgres_driver: use of times.now().toTime().toUnix() instead of Moment.now()
* postgres migrations: set new version to 2
* test_driver_postgres: use of assert instead of require and avoid using times.now()
* postgres_driver: better implementation of the reset method with partitions
* Remove createMessageTable, init, and deleteMessageTable procs
* postgres: ensure we use the version 15.4 in tests
* postgres_driver.nim: enhance debug logs partition addition
* ci.yml: ensure logs are printed without colors
* postgres_driver: starting the loop factory in an asynchronous task
* postgres_driver: log partition name and size when removing a partition
2024-03-06 20:50:22 +01:00
Aaryamann Challani
dc6381264f fix(rln-relay): make nullifier log abide by epoch ordering (#2508)
* fix(rln-relay): nullifier log abide by epoch ordering

* fix: cleaner hasKey method, test

* chore: idiomatic usage of results, error handling

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

---------

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2024-03-06 23:59:07 +05:30
Aaryamann Challani
af27c97ddd chore(rln_db_inspector): include in wakunode2 binary (#2292) 2024-03-06 19:38:43 +05:30
Aaryamann Challani
16a5d13c94 feat(waku-stealth-commitments): waku stealth commitment protocol (#2490)
* feat(waku-stealth-commitments): initialize app

* feat: works!

* fix: readme

* feat: send and receive, handle received stealth commitment

* fix: remove empty lines

* chore: move to examples
2024-03-06 18:44:33 +05:30
Aaryamann Challani
8b45204fda fix(postgres): import under feature flag (#2500) 2024-03-06 17:39:02 +05:30
NagyZoltanPeter
70b7224336 Removed json-rpc leftovers (#2503) 2024-03-05 15:51:43 +01:00
Benjamin Arntzen
9ced3ba382 docs: Update link to DNS discovery tutorial (#2496) 2024-03-04 18:08:39 +01:00
NagyZoltanPeter
4bb8d59b56 vendor lib dependencies are updated to latest where were possible. For next release 0.26.0 (#2494) 2024-03-04 16:40:58 +01:00
Ivan FB
43bd54e6ee Tiny cleanup and more encapsulation in protocol.nim files (#2488) 2024-03-04 15:31:37 +01:00
Aaryamann Challani
d3e01495b8 chore(rln-relay-v2): added tests for static rln-relay-v2 (#2484)
* chore(rln-relay-v2): added tests for onchain rln-relay-v2

* chore(rln-relay): added tests for static rln-relay-v2

* Update waku/waku_rln_relay/group_manager/static/group_manager.nim

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

* fix: split lines

* fix: remove redundant require

* fix: remove redundant require

* fix: bad await

---------

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2024-03-04 17:41:33 +05:30
gabrielmer
8cf2f78b6c chore: moving node initialization code to node_factory.nim (#2479) 2024-03-02 18:59:53 -06:00
Simon-Pierre Vivier
8eff17953c fix: notify Waku Metadata when Waku Filter subscribe to a topic (#2493) 2024-03-01 08:01:37 -05:00
Simon-Pierre Vivier
4638756aef fix: time on 32 bits architecture (#2492)
authored-by: Emil Ivanichkov <emil.ivanichkov@gmail.com>
2024-03-01 07:58:45 -05:00
Ivan FB
75521122a4 chore: Postgres migrations (#2477)
* Add postgres_driver/migrations.nim
* Postgres and archive logic adaptation to the migration implementation
* libwaku: adapt node_lifecycle_request.nim to migration refactoring
* test_app.nim: add more detail for test that only fails in CI
* postgres migrations: store the migration scripts inside the resulting wakunode binary instead of external .sql files.
2024-03-01 12:05:27 +01:00
Aaryamann Challani
545d9aee99 chore(rln-relay-v2): added tests for onchain rln-relay-v2 (#2482)
* chore(rln-relay-v2): added tests for onchain rln-relay-v2

* Update tests/waku_rln_relay/test_rln_group_manager_onchain.nim

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

---------

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2024-03-01 14:15:40 +05:30
richΛrd
b7f3db0b0a fix: return message id on waku_relay_publish (#2485)
* fix: return message id on `waku_relay_publish`
* fix: remove unneeded cast and handle 0 len seqs
* chore: rename messageId to messageHash
2024-02-29 20:58:35 -04:00
Alvaro Revuelta
a8769955f0 chore: remove json rpc (#2416) 2024-02-29 23:35:27 +01:00
5ea532bc80 chore(ci): use git describe for image version
This way we get both the full commit and the version, whether it's a
proper release or not.

Signed-off-by: Jakub Sokołowski <jakub@status.im>
2024-02-29 10:40:14 +01:00
NagyZoltanPeter
7885ce0c9e chore: Implemented CORS handling for nwaku REST server (#2470)
* Add allowOrigin configuration for wakunode and WakuRestServer
Update nim-presto to the latest master that contains middleware support
Rework Rest Server in waku to utilize chronos' and presto's new middleware design and added proper CORS handling.
Added cors tests and fixes

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2024-02-29 09:48:14 +01:00
Alvaro Revuelta
cd9ebde02f chore: remove rln epoch hardcoding (#2483) 2024-02-28 17:19:20 +01:00
Ivan FB
57068747de chore(cbindings): cbindings rust simple libwaku integration example (#2089) 2024-02-26 16:48:05 +01:00
gabrielmer
942063961f chore: adding NIMFLAGS usage to readme (#2469) 2024-02-23 16:25:13 +02:00
gabrielmer
89736d6997 chore: bumping nim-libp2p after yamux timeout fix (#2468) 2024-02-22 20:19:56 +02:00
Ivan FB
0887a344a4 refactor: new proc to foster different size retention policy implementations (#2463)
* new proc to foster different size retention policy implementations
  The new proc, decreaseDatabaseSize, will have a different implementation
  for each driver. For example, in future commits we will implement a size
  retention policy based on partition management in Postgres.
* RetentionPolicy: use of new instead of init for ref object types
* waku_archive: fix signatures in decreaseDatabaseSize methods
* retention_policy_size: minor cleanup of comments and imports
2024-02-22 16:55:37 +01:00
Aaryamann Challani
2f89fdeee9 chore(rln-relay): use anvil instead of ganache in onchain tests (#2449)
* chore(rln-relay): use anvil instead of ganache in onchain tests

* fix: incl rustup in makefile
2024-02-22 16:59:13 +05:30
Ivan FB
6769d25f54 chore: bindings return multiaddress array (#2461)
* waku_example.c: adapt signature to new parameter 'void* userData'
* libwaku: add new DEBUG request handler to retrieve the list of listened multiaddresses
* waku_example.c: use example the new 'waku_listen_addresses'
* add debug_node_request.nim file
2024-02-21 12:06:05 +01:00
richΛrd
b3ab9ed474 fix(bindings): base64 payload and key for content topic (#2435)
* fix(bindings): base64 payload and key for content topic
* fix(bindings): store userData for event callback
* fix(bindings): json message serialization
* fix(bindings): add messageHash to the event callback
* fix(bindings): add meta field
* refactor(bindings): simplify error handling
* fix: handle undefined keys
2024-02-20 16:00:03 -04:00
Guru
1403327620 Wrong docs link (#2450) 2024-02-20 22:37:26 +05:30
richΛrd
6de38551fe feat(bindings): generate a random private key (#2446) 2024-02-20 11:18:03 -04:00
Ivan FB
34a3200fd6 waku_metrics: change log interval from 30'' to 10' (#2428) 2024-02-19 22:25:20 +01:00
b64080a4ff chore(ci): fix IMAGE_NAME to use harbor.status.im
Signed-off-by: Jakub Sokołowski <jakub@status.im>
2024-02-19 11:31:34 +01:00
Hanno Cornelius
bf1bb45d75 feat: prioritise yamux above mplex (#2417)
* update libp2p submodule

* feat: prefer yamux to mplex
2024-02-17 19:46:01 +00:00
Aaryamann Challani
52af324f47 fix(rln-relay): regex pattern match for extended domains (#2444)
* fix(rln-relay): regex pattern match for extended domains

* fix: enable localhost too
2024-02-16 22:42:35 +05:30
Aaryamann Challani
57220f4606 chore(rln-relay): remove wss support from node config (#2442)
* chore(rln-relay): remove wss support from node config

* fix: incl regex pattern examples

* docs: update rln docs
2024-02-16 18:36:31 +05:30
gabrielmer
fe001b2f98 fix: checking for keystore file existence (#2427) 2024-02-15 17:33:15 +02:00
Aaryamann Challani
1563ea8188 fix(rln-relay): graceful shutdown with non-zero exit code (#2429)
* fix(rln-relay): graceful shutdown with non-zero exit code

* fix: missed args

* fix: exception str

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

* fix: remove old comment

---------

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2024-02-15 16:55:08 +05:30
8eeeb92c66 chore(ci): reuse discord send function from library
No need to keep an almost identical copy here.

Signed-off-by: Jakub Sokołowski <jakub@status.im>
2024-02-14 16:37:30 +01:00
gabrielmer
9ed0dde494 feat: supporting meta field in WakuMessage (#2384) 2024-02-14 17:29:59 +02:00
gabrielmer
65620edd15 fix: check max message size in validator according to configured value (#2424) 2024-02-14 17:29:10 +02:00
NagyZoltanPeter
aedda7424c chore: update CHANGELOG.md for v0.25.0 (#2399)
* chore: update CHANGELOG.md for v0.25.0
* Added announcements for the features being deprecated in the next release

Co-authored-by: gabrielmer <101006718+gabrielmer@users.noreply.github.com>
Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2024-02-14 11:29:56 +01:00
Aaryamann Challani
e44fc87d29 chore(rln-relay-v2): add tests for serde (#2421)
* chore(rln-relay-v2): add tests for serde

* fix: call isOk fn instead of prop access

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

* fix: make cast more explicit

---------

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2024-02-14 13:24:05 +05:30
richΛrd
1ac6fb63e2 feat: eventCallback per wakunode and userData (#2418)
* feat: store event callback in `Context`
* feat: add userData to callbacks
2024-02-13 10:22:22 -04:00
richΛrd
66bc31cede chore: add stdef.h to libwaku.h (#2409) 2024-02-13 10:21:31 -04:00
Aaryamann Challani
a9819fce60 fix(wakunode2): move node config inside app init branch (#2423) 2024-02-13 15:40:45 +05:30
Aaryamann Challani
3842584558 feat(rln-relay-v2): nonce/messageId manager (#2413)
* feat(rln-relay-v2): nonce/messageId manager

* fix: simplify
2024-02-13 10:18:02 +05:30
gabrielmer
76ea0c8d72 chore: automatically generating certs if not provided (Waku Canary) (#2408) 2024-02-12 16:28:22 +02:00
Vaclav Pavlin
c4ad8f89d4 feat(networkmonitor): add support for rln (#2401)
* feat(networkmonitor): add support for rln

* remove cred index flag

* use wakunode2 waku network config
2024-02-12 09:58:55 +01:00
Alvaro Revuelta
d6f5ab8ca0 Benchmark RLN proof generation/verification (#2410) 2024-02-09 17:06:25 +01:00
Aaryamann Challani
9133a2439c feat(rln-relay-v2): rln-keystore-generator updates (#2392)
* chore: init rln-v2 in OnchainGroupManager

* chore: update wrappers

* fix: units for userMessageLimit

* valueOr for error handling

* fix: len usage
2024-02-09 16:31:45 +05:30
Ivan FB
ede67cda64 libwaku: simpler ctx mgmt. Param now receiving void* instead of void** (#2398)
This change is needed so that interoperability with other languages becomes simpler.
Particularly, this simplification is needed from the Python point of view,
where it is tricky to pass a void** as a parameter to an FFI function.
2024-02-07 15:24:03 +01:00
Alvaro Revuelta
34d207c4c2 chore: Simplify configuration for the waku network (#2404) 2024-02-07 12:42:20 +01:00
richΛrd
5099af4c8b feat: add yamux support (#2397) 2024-02-06 16:33:13 -04:00
Álex Cabeza Romero
c3dea59e8f test(lightpush): Lightpush functional tests (#2269)
* Add lightpush payload tests.
* Add end to end lightpush tests.
* updating vendor/nim-unittest2 to protect against core dump issue
* Enable "Valid Payload Sizes" test again
---------
Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2024-02-06 17:37:42 +01:00
gabrielmer
b413122a73 feat: running validators in /relay/v1/auto/messages/{topic} (#2394) 2024-02-05 10:24:54 +02:00
Ivan FB
c5652d9d60 fix: bug fix in ci/Jenkinsfile.release: make -d:postgres part of NIMFLAGS (#2395) 2024-02-02 14:13:07 +01:00
Álex Cabeza Romero
f478f6d339 test(rln): Implement some rln unit tests (#2356)
* Fix sanity check location.
* Implement some rln tests.
2024-02-02 09:56:41 +01:00
Ivan FB
73ddc8c78c ci/Jenkinsfile.release: enforce -d:postgres flag is always used (#2389) 2024-02-01 20:46:25 +01:00
Aaryamann Challani
70893850f8 feat(rln-relay-v2): update C FFI api's and serde (#2385)
* feat(rln-relay-v2): integrate new ffi bindings, serde

* chore: remove ExtendedRateLimitProof, add comments

* fix: typo
2024-02-02 00:26:47 +05:30
gabrielmer
72c67363ec feat: running validators in /relay/v1/messages/{pubsubTopic} (#2373) 2024-02-01 18:16:10 +01:00
Ivan FB
7aa50c8b09 REST store: get msgs from self node when store is mounted and no peerAddr is passed (#2387)
A node that handles REST-Store requests normally acts as a
Store-client and therefore retrieves the messages from another
Store-node.
With these changes, we allow a node with Store mounted to retrieve
its own messages. In other words, the node can act as a Store-server of
its own messages.

* test_rest_store.nim: add a new test to validate that the self-node can
return its own messages to the REST client.

* rest/store/client.nim: add new proc to allow making a GET store
request without peerAddr.

* rest/store/handle.nim: add logic to handle requests that don't
provide peerAddr but the self/local node has Store mounted. In this case,
the self/local node will retrieve its locally stored messages.

* waku_store/self_req_handler.nim: logic to handle "store" requests
allowing the REST-store node to act as a Store-server node. The
'self_req_handler.nim' helps to bypass the store protocol and directly
retrieve the messages from the local/self node. I added this logic in
a separate file from 'protocol.nim' because it doesn't participate in
any libp2p communication.

* waku_store/protocol.nim: make 'queryHandler' attribute public so that
it can be used from the 'self_req_handler.nim' module.
2024-01-31 17:43:59 +01:00
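A hypothetical sketch of the dispatch idea in the commit above: when no peerAddr is provided and Store is mounted, the query is served from the local node instead of dialing a Store peer. All types and proc names are illustrative, not the actual nwaku API.

```nim
# Sketch only (illustrative names): serve a REST /store query locally when
# no peer address is given and the Store protocol is mounted on this node.
import std/options

type
  StoreQuery = object
    peerAddr: Option[string]      # remote Store peer to query, if any
  StoreResponse = object
    messages: seq[string]

proc queryLocalStore(q: StoreQuery): StoreResponse =
  # would call the mounted Store protocol's queryHandler directly,
  # bypassing any libp2p round trip
  StoreResponse(messages: @["<locally archived messages>"])

proc queryRemoteStore(q: StoreQuery, peer: string): StoreResponse =
  # would perform a regular Store-client request against `peer`
  StoreResponse(messages: @["<messages from " & peer & ">"])

proc handleRestStoreRequest(q: StoreQuery, storeMounted: bool): StoreResponse =
  if q.peerAddr.isSome():
    return queryRemoteStore(q, q.peerAddr.get())
  if storeMounted:
    # no peerAddr given: act as Store-server of our own messages
    return queryLocalStore(q)
  raise newException(ValueError, "no store peer provided and Store not mounted")
```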
Aaryamann Challani
258ff89de8 chore(rln-relay-v2): use rln-v2 contract code (#2381)
* chore(rln-relay-v2): use rln-v2 contract code

* fix: reduced duped code
2024-01-30 23:28:30 +05:30
Simon-Pierre Vivier
c96f9ad5e3 chore: v0.25 vendor bump and associated fixes (#2352) 2024-01-30 10:57:03 -05:00
Simon-Pierre Vivier
cfd62e495c feat: shard aware relay peer management (#2332)
note that this feature is behind a config flag. `--relay-shard-manager`
2024-01-30 07:28:21 -05:00
Simon-Pierre Vivier
bfb17ef549 chore: handle errors w.r.t. configured cluster-id and pubsub topics (#2368) 2024-01-30 07:15:23 -05:00
Roman Zajic
6d27f0ff70 chore: add coverage target to Makefile (#2382) 2024-01-30 19:55:26 +08:00
Ivan FB
925a592c96 docs: Add check spell allowed words (#2383)
* new docs/benchmarks/cspell.json: add cspell.json to explicitly accept words
* docs/benchmarks/postgres-adoption.md: change test-waku-queryc078075 to test-waku-query-c078075
2024-01-30 09:40:30 +01:00
gabrielmer
08d880d61d fix: adding rln validator as default (#2367) 2024-01-29 16:11:26 +01:00
gabrielmer
8cb464281f chore: adding nwaku compose image update to release process (#2370) 2024-01-26 11:00:03 +01:00
gabrielmer
a5d74095c4 chore: changing digest and hash log format from bytes to hex (#2363) 2024-01-25 16:03:48 +01:00
Prem Chaitanya Prathi
bff4942361 chore: log messageHash for lightpush request that helps in debugging (#2366) 2024-01-24 19:20:21 +05:30
Aaryamann Challani
2065f3db3c chore(rln-relay): remove websocket from OnchainGroupManager (#2364)
* chore(rln-relay): remove websocket from OnchainGroupManager

* fix: swap ws for http
2024-01-23 23:22:45 +05:30
Hanno Cornelius
105b3c2089 fix: Fix test for filter client receiving messages after restart (#2360) 2024-01-19 13:05:06 +02:00
gabrielmer
c573fd0538 chore: improve POST /relay/v1/auto/messages/{topic} error handling (#2339) 2024-01-18 13:49:13 +01:00
gabrielmer
e1f97fff82 fix: making filter admin data test order independent (#2355) 2024-01-17 09:42:22 +01:00
NagyZoltanPeter
f048babdc4 chore: Refactor of FilterV2 subscription management with Time-to-live maintenance (#2341)
* Refactor of FilterV2 subscription handling and maintenance, with the addition of subscription time-to-live support.
Fixed all tests and reworked them where the subscription handling changes required it.
Adapted the REST API /admin filter subscription retrieval to the new filter subscription structure.

* Fix tests and PR comments

* Added filter v2 subscription timeout tests and fixed

* Fix review comments and suggestions. No functional change.

* Remove leftover echoes from test_rest_admin

* Fix failed legacy filter tests due to separation of mounting the filters.

* Small fixes, fix naming typo, removed duplicated checks in test
2024-01-16 17:27:40 +01:00
gabrielmer
ce32156a8b chore: Bump nim-dnsdisc (#2354)
* chore: bump `vendor/nim-dnsdisc`

* chore: Update import path of dnsdisc

---------

Co-authored-by: Emil Ivanichkov <emil.ivanichkov@gmail.com>
2024-01-15 16:54:02 +02:00
Ivan FB
b623ace3e5 docs: postgres-adoption.md add metadata title, description, and better first-readable-title (#2346) 2024-01-12 16:37:32 +01:00
Simon-Pierre Vivier
2381de7c90 chore: update CHANGELOG.md for v0.24.0 (#2347) 2024-01-12 07:30:29 -05:00
vuittont60
cc8045d860 docs: fix typo (#2348) 2024-01-11 15:38:56 +01:00
Ivan FB
b3ec2e9c00 chore: Update CHANGELOG.md to reflect bug fix for issue #2317 (#2340) in v0.23.1 2024-01-10 09:45:26 +01:00
gabrielmer
2b2eec9535 feat: adding filter data admin endpoint (REST) (#2314) 2024-01-09 11:42:29 +01:00
NagyZoltanPeter
25cd5e2ff7 test(peer-connection-management): Functional Tests (#2321) @ b16e20e48 introduced a build error through an ambiguous function call; the testwaku build failed on master (#2337) 2024-01-08 16:45:03 +01:00
Simon-Pierre Vivier
f97ad7a12c docs: update after release steps (#2336) 2024-01-08 08:11:06 -05:00
joao
9d2971c379 docs: Fix Typos Across Various Documentation Files (#2310) 2024-01-08 13:13:34 +01:00
Álex Cabeza Romero
b16e20e48e test(peer-connection-management): Functional Tests (#2321)
* Add simple mock mechanism.
* Implement migrations tests.
* Implement peer storage tests.
* Add simple protobuf serialisation testcase.
2024-01-05 14:49:04 +01:00
Emil Ivanichkov
f2a2f960ae fix: Set record to the Waku Node Builder in the examples as it is required (#2328) 2024-01-05 10:00:41 +01:00
Simon-Pierre Vivier
b5f9987efa fix(discv5): add bootnode filter exception (#2267) 2024-01-04 16:39:03 -05:00
Ivan FB
1900118f3b bump vendors for 0.24.0 (#2333)
The following vendors have changes but are not being updated for
the reason explained.

nim-web3: not updated because unit tests started to fail and no
straightforward solution found.

nim-toml-serialization: not updated because it introduced a breaking
change on how the --config-file attribute is parsed. The array
attributes now need a comma. For example, the following attribute
from within the config file:

pubsub-topic = [ "/waku/2/default-waku/proto" "/waku/2/testing-store" ]

... should be converted to:

pubsub-topic = [ "/waku/2/default-waku/proto", "/waku/2/testing-store" ]

and we cannot accept that breaking change
2024-01-04 17:35:00 +01:00
Álex Cabeza Romero
09129f56ab test(autosharding): Functional Tests (#2318)
* Implement autosharding tests.
2024-01-04 16:26:27 +01:00
Ivan FB
7708082809 docs: add benchmar around postgres adoption (#2316) 2024-01-03 13:13:47 +01:00
Ivan FB
c738841d43 chore: message.nim - set max message size to 150KiB according to spec (#2298)
* message.nim: set max message size to 150KiB according to spec

Using KiB instead of KB because that seems more aligned with
the actual default defined in nim-libp2p (1024 * 1024)

Spec details: https://rfc.vac.dev/spec/64/#message-size

* test_protocol.nim: align test to current WakuMessage limit
* test_waku_client.nim: adapt test to MaxWakuMessageSize change
* make maxMessageSize configurable for wakunode2
* wakunode2 app now accepts max-num-bytes-msg-size with KiB, KB, or B units
* testlib/wakunode.nim: set maxMessageSize: "1024 KiB"
* test_waku_client.nim: remove duplicate check in "Valid Payload Sizes"
* set DefaultMaxWakuMessageSizeStr as the only source of truth
* external_config.nim: rename max-num-bytes-msg-size -> max-msg-size
2024-01-03 13:11:50 +01:00
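A small sketch of turning a size such as "150 KiB" into bytes, matching the KiB/KB/B units mentioned above; the actual wakunode2 option parsing may differ in details.

```nim
# Sketch only: convert "150 KiB" / "150 KB" / "150000 B" style values to bytes.
import std/strutils

proc parseMsgSize(size: string): int =
  let s = size.strip()
  if s.endsWith("KiB"):
    parseInt(s[0 ..< s.len - 3].strip()) * 1024
  elif s.endsWith("KB"):
    parseInt(s[0 ..< s.len - 2].strip()) * 1000
  elif s.endsWith("B"):
    parseInt(s[0 ..< s.len - 1].strip())
  else:
    parseInt(s)               # plain number: assume bytes

when isMainModule:
  doAssert parseMsgSize("150 KiB") == 153_600
  doAssert parseMsgSize("150 KB") == 150_000
```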
Ivan FB
ce567acb62 ip colocation is parameterizable. If set to 0, it is disabled (#2323)
The "ip colocation" concept refers to the maximum allowed peers
from the same IP address. For example, we allow disabling this limit when the
node works behind a reverse proxy.
2024-01-02 14:01:18 +01:00
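A minimal sketch of an IP-colocation check in which a limit of 0 disables the check entirely, as described above; the data structure and proc names are illustrative.

```nim
# Sketch only: a colocation limit of 0 disables the per-IP peer cap
# (useful when all peers appear to come from one reverse-proxy IP).
import std/tables

proc allowPeerFromIp(peersPerIp: CountTable[string], ip: string,
                     colocationLimit: int): bool =
  if colocationLimit == 0:
    return true                          # limit disabled
  peersPerIp[ip] < colocationLimit       # CountTable returns 0 for unseen IPs

when isMainModule:
  var seen = initCountTable[string]()
  seen.inc("203.0.113.7", 4)
  doAssert allowPeerFromIp(seen, "203.0.113.7", 0)       # disabled -> allowed
  doAssert not allowPeerFromIp(seen, "203.0.113.7", 3)   # 4 peers >= limit 3
```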
Ivan FB
0999fa44fe CHANGELOG.md for 0.23.0 (#2309) 2023-12-20 15:48:59 +01:00
Ivan FB
cb59623466 fix: Revert "feat: shard aware peer management (#2151)" (#2312)
This reverts commit dc1d6ce4bf7390e23b73d96634ff87ca9341e129.

We need to revert this commit because
the waku-simulator stopped working. i.e. the nodes couldn't establish
connections among them: 054ba9e33f

Also, the following js-waku test fails due to this commit:
"same cluster, different shard: nodes connect"

* waku_lightpush/protocol.nim: minor changes to make it compile after revert
2023-12-20 15:23:41 +01:00
gabrielmer
29c182195d fix: setting connectivity loop interval to 15 seconds (#2307) 2023-12-20 09:38:14 +01:00
Álex Cabeza Romero
91c402b1cc test(store): Implement store tests (#2235, 2240)
* Implement store tests
2023-12-19 15:38:43 +01:00
Álex Cabeza Romero
b8b32eb857 refactor(store): HistoryQuery.direction (#2263)
* Fix issue with default history query ascending value in serde operations: Should use the same value.
* Update direction types to PagingDirection.
2023-12-19 15:10:27 +01:00
Ivan FB
88f2a9f89b test_driver_postgres: enhance test coverage, multiple and single topic (#2301)
Co-authored-by: Abhimanyu <ABresting@users.noreply.github.com>
2023-12-19 10:41:50 +01:00
Ivan FB
51199214f9 chore: examples/nodejs - adapt code to latest callback and ctx/userData definitions (#2281) 2023-12-18 23:16:54 +01:00
Ivan FB
b34f5d3117 chore: archive - move error to trace level when insert row fails (#2283)
* archive: move error to trace level when insert row fails

That is helpful to prevent the node from spamming the logs when it shares
a connection to the same Postgres database with other nodes, in
which case the following log appears too often:

topics="waku archive" tid=1 file=archive.nim:113 err="error in
runStmt: error in dbConnQueryPrepared calling waitQueryToFinish: error
in query: ERROR: duplicate key value violates unique constraint
"messageindex" DETAIL: Key
(messagehash)=(88f4ee115eef6f233a7dceaf975f03946e18666adda877e38d61be98add934e8)
already exists. "
2023-12-15 18:58:35 +01:00
gabrielmer
e19996951b chore: including content topics on FilterSubscribeRequest logs (#2295) 2023-12-15 14:18:12 +01:00
Ivan FB
45bc8add32 relay: add trace logs in case of msg validation rejection (#2285) 2023-12-15 13:34:30 +01:00
Ivan FB
d7e8477b3f libwaku: avoid using waku_init. Only use waku_new to create node and context (#2282) 2023-12-15 13:32:12 +01:00
Alvaro Revuelta
43ffd1371b fix: make rln rate limit spec compliant (#2294) 2023-12-15 10:26:17 +01:00
Ivan FB
2b434d7e8d bug fix: update num-msgs archive metrics every minute and not only at the beginning (#2287) 2023-12-14 17:00:13 +01:00
Ivan FB
f191b79f28 postgres_driver.nim: restrict getMessages prepared stmt to query with 1 content topic (#2296)
Before this commit, the following execution of a prepared statement
returned nothing even though the database had 2 rows to be returned:

nwaku-db-1  | 2023-12-14 12:55:17.575 UTC [73] LOG:  execute SelectWithoutCursorAsc: SELECT storedAt, contentTopic, payload, pubsubTopic, version, timestamp, id FROM messages
nwaku-db-1  | 	    WHERE contentTopic IN ($1) AND
nwaku-db-1  | 	          pubsubTopic = $2 AND
nwaku-db-1  | 	          storedAt >= $3 AND
nwaku-db-1  | 	          storedAt <= $4
nwaku-db-1  | 	    ORDER BY storedAt ASC LIMIT $5;
nwaku-db-1  | 2023-12-14 12:55:17.575 UTC [73] DETAIL:  parameters: $1 =
'my/ctopic/1,my/ctopic/2', $2 = '/waku/2/default-waku/proto', $3 = '1702552968570786800', $4 = '1702552968585347557', $5 = '101'

The reason it returns nothing is that the 'IN' statement doesn't work when using prepared statements with multiple items. It only works when the 'IN' content, i.e. $1, contains a single item.
2023-12-14 15:37:12 +01:00
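A sketch of the restriction described above, with illustrative query text and proc name: the prepared statement is only used when there is exactly one content topic, because IN ($1) binds the whole comma-joined string as a single literal.

```nim
# Sketch only: only the single-topic case can safely go through the
# prepared statement, because `IN ($1)` binds one literal value; a
# comma-joined string such as 'my/ctopic/1,my/ctopic/2' matches nothing.
import std/strutils

proc selectByContentTopicQuery(contentTopics: seq[string]): string =
  if contentTopics.len == 1:
    # prepared-statement friendly: exactly one bound value
    return "SELECT * FROM messages WHERE contentTopic IN ($1) ORDER BY storedAt ASC"
  # otherwise build an ad-hoc query with one placeholder per topic
  var placeholders: seq[string] = @[]
  for i in 1 .. contentTopics.len:
    placeholders.add("$" & $i)
  "SELECT * FROM messages WHERE contentTopic IN (" & placeholders.join(",") &
    ") ORDER BY storedAt ASC"
```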
Ivan FB
7c692cc313 chore: vendor bump for 0.23.0 (#2274)
* on_chain/group_manager: use .async: (raises:[Exception]).
* bump nim-dnsdisc
* update nim-chronos to the latest state
* chat2.nim: catch any possible exception when stopping
* chat2bridge.nim: make it to compile after vendor bump
* ValidIpAddress (deprecated) -> IpAddress
* vendor/nim-libp2p additional bump
* libwaku: adapt to vendor bump
* testlib/wakunode.nim: adapt to vendor bump (ValidIpAddress -> IpAddress)
* waku_node: avoid throwing any exception from stop*(node: WakuNode)
* test_confutils_envvar.nim: ValidIpAddress -> IpAddress
* test_jsonrpc_store: capture exception
* test_rln*: handling exceptions
* adaptation to make test_rln_* to work properly
* signature enhancement of group_manager methods
2023-12-14 07:16:39 +01:00
Ivan FB
ac3a3737de chore: peer_manager.nim - reduce logs from debug to trace (#2279) 2023-12-12 16:00:18 +01:00
Aaryamann Challani
f9bd39ea66 fix(rln-relay): graceful retries on rpc calls (#2250)
* fix(rln-relay): graceful retries on rpc calls

* fix: missing file
2023-12-11 14:59:16 +05:30
Ivan FB
e6fe6df52e archive: simplify and enhance async retention policy application (#2278)
* Avoid using timer and just use an infinite async loop that can be
cancelled at any time.
2023-12-11 08:50:40 +01:00
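A minimal chronos sketch of the approach described above, an infinite async loop that can be cancelled at any time; the actual retention-policy work is elided and all names are illustrative.

```nim
# Sketch only: an infinite async loop that applies the retention policy
# periodically and can be cancelled at any await point.
import chronos

proc retentionLoop(interval: Duration) {.async.} =
  while true:
    echo "applying retention policy"   # real policy execution elided
    await sleepAsync(interval)

proc main() {.async.} =
  let loopFut = retentionLoop(chronos.seconds(2))
  await sleepAsync(chronos.seconds(5))
  # cancelling the future stops the loop cleanly, no timer bookkeeping needed
  await loopFut.cancelAndWait()

when isMainModule:
  waitFor main()
```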
Ivan FB
de11e19a9f waku_thread_response.nim: use correct alloc() proc to allocate response correctly (#2277) 2023-12-11 08:50:03 +01:00
Ivan FB
2a03416f47 chore: Cbindings allow mounting the Store protocol from libwaku (#2276)
* libwaku: add changes to mount store in self-node
* libwaku: remove unnecessary code for store
2023-12-11 08:49:13 +01:00
Simon-Pierre Vivier
c2ded25e11 added sharded peer store pruning (#2167) 2023-12-07 07:21:18 -05:00
Simon-Pierre Vivier
dc1d6ce4bf feat: shard aware peer management (#2151) 2023-12-07 06:48:28 -05:00
Ivan FB
32d5c24d29 fix: add protection in rest service to always publish with timestamp if user doesn't provide it (#2261) 2023-12-06 14:02:21 +01:00
Sasha
0f3cf94ac2 fix: remove trailing commas from keystore json (#2200)
* fix: remove trailing commas from keystore json

* keyfile.nim: try a different Json formatting approach

* build keystore

* address comment

---------

Co-authored-by: Ivan Folgueira Bande <ivansete@status.im>
2023-12-01 12:57:19 +01:00
Vaclav Pavlin
a0eb05ea19 fix(dockerfile): update dockerignore and base image (#2262) 2023-12-01 11:35:50 +01:00
Ivan FB
75296f055c waku_store/common.nim: correct ret code in PEER_DIAL_FAILURE (#2260) 2023-11-30 16:42:58 +01:00
Ivan FB
7f9a0d97fd chore: Better feedback invalid content topic (#2254)
* typo correction appplication -> application
* content_topic.nim: better feedback to user when wrong topic is passed
* test_namespaced_topics.nim: updating tests accordingly
2023-11-30 11:11:33 +01:00
omahs
2d27c47c82 chore: fix typos (#2239) 2023-11-30 11:08:08 +01:00
Ivan FB
4e953a18d1 fix: waku_filter_v2/common: PEER_DIAL_FAILURE ret code change: 200 -> 504 (#2236) 2023-11-30 10:47:45 +01:00
gabrielmer
3030aa81e7 chore: creating prepare_release template (#2225) 2023-11-30 10:29:26 +01:00
Alexis Pentori
29e66c8dd9 feat: setting image deployment to harbor registry
Adding variable to push the image to a specific registry
Changing the image owner name to `waku-org` to match the GitHub repository naming

Signed-off-by: Alexis Pentori <alexis@status.im>
2023-11-29 11:54:17 +01:00
Simon-Pierre Vivier
ffc39e1f55 chore(rest): refactor message cache (#2221) 2023-11-28 07:21:41 -05:00
gabrielmer
047e493dc9 chore: updating nim-json-serialization dependency (#2248) 2023-11-28 11:47:21 +01:00
Álex Cabeza Romero
a5a85981c5 chore(store-archive): Remove duplicated code (#2234)
* Refactor utility functions for store and archive test.
2023-11-27 18:33:27 +01:00
Simon-Pierre Vivier
6f857b46fe chore: refactoring peer storage (#2243) 2023-11-27 08:08:58 -05:00
Ivan FB
75cbea86cb chore: postgres driver - allow setting the max number of connections from a parameter (#2246)
* postgres driver: allow setting the max number of connections from a parameter
2023-11-24 16:21:22 +01:00
Abhimanyu
d840d3e7cf fix: extended Postgres code to support retention policy + refactoring (#2244)
* updated Postgres retention policy code + refactoring

* Update waku/waku_archive/driver/postgres_driver/postgres_driver.nim

Co-authored-by: Simon-Pierre Vivier <simvivier@status.im>

* updated code review changes

* data unit fixed, processing everything in bytes now

---------

Co-authored-by: Simon-Pierre Vivier <simvivier@status.im>
2023-11-24 15:43:47 +01:00
Prem Chaitanya Prathi
76fa222816 fix: admin REST API to be enabled only if config is set (#2218) 2023-11-24 14:43:20 +05:30
Abhimanyu
60ed32c25a feat: Add new DB column messageHash (#2202)
* feat: added DB column messageHash

* feat: minor change

* feat: minor merge conflict fix

* Update test_resume.nim

* Update test_resume.nim

* randomblob() func used to populate attribute

* PRIMARY key updated - SQLite and Postgres
2023-11-22 17:32:56 +01:00
Abhimanyu
ea33dc2a59 chore: deterministic message hash algorithm updated (#2233)
* deterministic hash algorithm updated + testcases

* updated code review
2023-11-22 15:23:43 +01:00
gabrielmer
d07d378bb1 chore(REST): returning lightpush support and updated filter protocol (#2219) 2023-11-22 10:56:23 +02:00
Ivan FB
3236e29e07 waku_store: better response when the store is requested with wrong cursor (#2231) 2023-11-22 09:32:39 +01:00
Simon-Pierre Vivier
0be13a356f chore: misc. improvements to cluster id and shards setup (#2187) 2023-11-21 15:15:39 -05:00
Alvaro Revuelta
54ec62506e fix(rln): error in api when rate limit (#2212) 2023-11-21 19:24:31 +01:00
Ivan FB
d98363bdd7 peer_manager.nim: better feedback if can't dial peer with WakuMetadataCodec (#2230) 2023-11-21 14:54:45 +01:00
Aaryamann Challani
6926af47e2 chore: update docs for rln-keystore-generator (#2210) 2023-11-21 16:43:15 +03:00
Abhimanyu
9f9c83e984 chore: removing automatic vacuuming from retention policy code (#2228)
* retention policy and testcase updated

* removing dead code

* review updated code
2023-11-21 11:27:50 +01:00
Ivan FB
a20c212d13 group_manager.nim: more exception detail when the eth client can't be connected (#2195) 2023-11-20 23:25:55 +01:00
gabrielmer
3f2f11d4a7 chore: decoupling announced and listen addresses (#2203) 2023-11-16 18:15:27 +02:00
Álex Cabeza Romero
69480731ad fix(relay): Failing protocol tests (#2224)
* Fix failing relay protocol tests.
2023-11-16 16:18:50 +01:00
Álex Cabeza Romero
f9d31860bf fix(tests): Compilation failure fix (#2222)
* Add missing required keywords.
2023-11-15 18:10:10 +01:00
Álex Cabeza Romero
3e669e2a1b test(relay-filter): cleanup (#2138)
* Fix some tests.
* Clean legacy tests.
* Fix imports.
2023-11-15 16:15:38 +01:00
Álex Cabeza Romero
e6f8204bc3 test(waku-relay): Relay (#2101)
* Implement message id tests.
* Implement relay tests.
* Update import paths to use test_all.
2023-11-15 16:11:36 +01:00
gabrielmer
a0ee60f394 chore(release): update changelog for v0.22.0 release (#2216) 2023-11-15 15:26:40 +02:00
Álex Cabeza Romero
53d930395b test(waku-filter): Unsubscribe tests (#2085)
* Implement unsubscribe waku filter tests.
* test(waku-filter): Unsubscribe all, payloads and security tests (#2095)
* Implement waku node filter Security and Privacy tests (#2096)
2023-11-15 10:26:01 +01:00
NagyZoltanPeter
bdaae90bec chore: Allow text/plain content type descriptor for json formatted content body (#2209)
* Allow text/plain content type descriptor for json formatted content body. Refactored duplicated encode/decode functions for rest api

* Fix relay endpoint decodings of content bodies to accept text/plain

* Added support for content body decoder for checking media type if additional parameters are present

* Fix wrong usage of ContentTypeData - appeared only for tests
2023-11-14 16:59:53 +01:00
Vaclav Pavlin
22dde84c08 fix(rest): properly check if rln is used (#2205)
* fix(rest): properly check if rln is used

* fix(apis): fix remaining usage of defined(rln)
2023-11-10 15:25:07 +01:00
Hanno Cornelius
391d9849f3 docs: rewrite for clarity, update screenshots (#2206)
* docs: rewrite for clarity, update screenshots

* docs: be less cavalier about private key, other improvements

* docs: missed some spots

* docs: move private key warning to beginning
2023-11-10 13:43:59 +00:00
gabrielmer
4f2d4a9ccb chore(release): update changelog for v0.21.3 release (#2208) 2023-11-09 16:07:29 +02:00
Aaryamann Challani
b349be7ca0 feat: rln-keystore-generator is now a subcommand (#2189) 2023-11-09 11:48:39 +02:00
Anton Iakimov
d2b3ce2cf6 fix: typo 2023-11-08 16:52:53 +01:00
Anton Iakimov
859b46eb10 ci: fix runtime available log level (#2191)
Closes: https://github.com/waku-org/nwaku/issues/2107
2023-11-08 12:54:55 +00:00
Abhimanyu
06a5acb6e9 Revert "feat: amending computeDigest func. + related test cases (#2132)" (#2180)
This reverts commit 8ea31ac6439498de3ab1b6641633a781bf1d64bd.
2023-11-08 01:41:23 +01:00
Ivan FB
4759388664 chore: Optimize postgres - prepared statements in select (#2182)
* db_postgres: use prepared statements on most freq select queries
* db_postgres/dbconn.nim adding better feedback in case of query error
* dbconn: use of isOkOr
* pgasyncpool: refactor to reduce code (valueOr, catch:)
2023-11-07 13:38:37 +01:00
gabrielmer
0def98a3ad chore(release): update changelog for v0.21.2 release (#2188) 2023-11-07 14:17:31 +02:00
Alvaro Revuelta
e22fbc6bfb Add REST API Docs (#2177) 2023-11-07 10:56:22 +01:00
Simon-Pierre Vivier
437d37d620 feat(discv5): filter out peers without any listed capability (#2186) 2023-11-06 07:31:36 -05:00
gabrielmer
4cff5a9dbc chore: upgrade dependencies v0.22 (#2185) 2023-11-06 13:30:34 +02:00
Ivan FB
e81bc8cd06 fix: lightpush rest (#2176)
* rest/lightpush/handlers.nim: enhance feedback in case of error.
* lightpush/openapi.yaml: fix typo in pubsubTopic field.
2023-11-01 11:30:53 +01:00
4174da01ed fix(ci): fix Docker tag for latest and release jobs
Signed-off-by: Jakub Sokołowski <jakub@status.im>
2023-10-31 17:44:49 +01:00
Ivan FB
4a73ee5380 chore: Optimize postgres - use of rowCallback approach (#2171)
* db_postgres, postgres_driver: better performance by using callback.
  There were a bunch of milliseconds being lost due to multiple-row
  processing. This commit aims to have the minimum possible row
  process time.
* pgasyncpool: clarifying logic around pool conn management.
* db_postgres: removing duplicate code and more searchable proc names.
2023-10-31 14:46:46 +01:00
Simon-Pierre Vivier
f0b1c3a7c6 feat: metadata protocol shard subscription (#2149) 2023-10-30 16:58:15 -04:00
Alvaro Revuelta
d008cdf3b1 fix(rest): fix bug in rest api when sending rln message (#2169) 2023-10-30 16:19:49 +01:00
Alvaro Revuelta
6b6b0ca16a chore(networking): lower dhigh to limit amplification factor (#2168) 2023-10-30 16:17:39 +01:00
Ivan FB
eb41bc6c2b chore: Minor Postgres optimizations (#2166)
* postgres_healthcheck: validate once per minute instead of 30 sec
* postgres_driver.nim: change MaxNumCons from 5 to 50
* postgres_driver.nim: split connPool into writeConPool and readConPool
  This aims to avoid clashes in insert and select queries
  because the inserts and selects can happen concurrently
  in relay and store events, respectively.
2023-10-30 15:16:49 +01:00
gabrielmer
876158fe09 chore: adding patch release instructions to release doc (#2157) 2023-10-30 13:26:28 +02:00
Simon-Pierre Vivier
5078ae2430 feat: REST APIs discovery handlers (#2109) 2023-10-27 15:43:54 -04:00
NagyZoltanPeter
b6ea215d71 Pull new version of nim-presto that implements RestServer's new error handler callback (#2144)
Added a REST request error handler to capture calls to endpoints that are not installed and return a better, more descriptive error message.
2023-10-27 16:31:57 +02:00
gabrielmer
f7ed781257 feat: implementing port 0 support (#2125) 2023-10-27 10:11:47 +03:00
gabrielmer
97218ceba6 fix: updating v0.21.1 release date in changelog (#2160) 2023-10-26 11:59:44 +03:00
gabrielmer
a318fc5f7b chore(release): update changelog for v0.21.1 release (#2155) 2023-10-26 10:54:51 +03:00
Ivan FB
71afb3f092 Extend temporary pr images validity to 30 days (#2158) 2023-10-25 17:53:00 +02:00
gabrielmer
855d66df11 chore: adding ext-multiaddr-only CLI flag (#2141) 2023-10-24 18:39:25 +03:00
Abhimanyu
69ddeb93bd Revert "feat: messageHash attribute added in SQLite + testcase (#2142)" (#2154)
This reverts commit a49a0e3c5b475f8a14ad7298e3d1b485969f01d8.
2023-10-24 16:05:39 +02:00
Abhimanyu
a49a0e3c5b feat: messageHash attribute added in SQLite + testcase (#2142)
* feat: messageHash attribute added in SQLite + testcase

* Update tests/waku_archive/test_driver_sqlite_query.nim

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

---------

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2023-10-24 12:19:52 +02:00
gabrielmer
ab98d89082 chore: bumping nim-libp2p to include WSS fix (#2150) 2023-10-23 21:35:43 +03:00
Ivan FB
b93ea38e64 chore(cbindings): avoid using global var in libwaku.nim (#2118)
* libwaku: Avoid global variable and changing callback signature

* Better signature for the callback. Two new parameters have been added:
  one aimed to allow passing the caller result code; the other
  param is to pass an optional userData pointer that might need
  to be linked locally with the Context object. For example, this is needed
  in Rust to make the passed closures live as
  long as the Context.

* waku_example.c: adaptation to the latest changes

* libwaku.h: removing 'waku_set_user_data' function

* libwaku.nim: renaming parameter in WakuCallBack (isOk -> callerRet)
2023-10-23 08:37:28 +02:00
Abhimanyu
8ea31ac643 feat: amending computeDigest func. + related test cases (#2132)
* feat: amending computeDigest func. + related test cases

* minor fixes

* minor fixes v1: testcase saga continues

---------

Co-authored-by: Vaclav Pavlin <vaclav@status.im>
2023-10-19 11:59:17 +02:00
gabrielmer
6e8713a1a1 chore: adding postgres flag to manual docker job instructions (#2139) 2023-10-19 10:52:56 +03:00
1008 changed files with 132543 additions and 38482 deletions


@ -4,4 +4,6 @@
/LICENSE*
/tests
/metrics
/nimcache
librln*
**/vendor/*


@ -26,22 +26,23 @@ Update `nwaku` "vendor" dependencies.
- [ ] nim-json-rpc
- [ ] nim-json-serialization
- [ ] nim-libbacktrace
- [ ] nim-libp2p ( update to the unstable branch )
- [ ] nim-libp2p ( update to the latest tag version )
- [ ] nim-metrics
- [ ] nim-nat-traversal
- [ ] nim-presto
- [ ] nim-regex ( update to the latest tag version )
- [ ] nim-results
- [ ] nim-secp256k1
- [ ] nim-serialization
- [ ] nim-sqlite3-abi ( update to the latest tag version )
- [ ] nim-stew
- [ ] nim-stint
- [ ] nim-taskpools
- [ ] nim-testutils
- [ ] nim-taskpools ( update to the latest tag version )
- [ ] nim-testutils ( update to the latest tag version )
- [ ] nim-toml-serialization
- [ ] nim-unicodedb
- [ ] nim-unittest2
- [ ] nim-web3
- [ ] nim-websock
- [ ] nim-unittest2 ( update to the latest tag version )
- [ ] nim-web3 ( update to the latest tag version )
- [ ] nim-websock ( update to the latest tag version )
- [ ] nim-zlib
- [ ] zerokit ( this should be kept in version `v0.3.4` )
- [ ] zerokit ( this should be kept in version `v0.7.0` )


@ -21,3 +21,6 @@ Add any other context or screenshots about the feature request here.
### Acceptance criteria
A list of tasks that need to be done for the issue to be considered resolved.
### Epic
Epic title and link the feature refers to.


@ -0,0 +1,56 @@
---
name: Prepare Beta Release
about: Execute tasks for the creation and publishing of a new beta release
title: 'Prepare beta release 0.0.0'
labels: beta-release
assignees: ''
---
<!--
Add appropriate release number to title!
For detailed info on the release process refer to https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md
-->
### Items to complete
All items below are to be completed by the owner of the given release.
- [ ] Create release branch with major and minor only ( e.g. release/v0.X ) if it doesn't exist.
- [ ] Assign release candidate tag to the release branch HEAD (e.g. `v0.X.0-beta-rc.0`, `v0.X.0-beta-rc.1`, ... `v0.X.0-beta-rc.N`).
- [ ] Generate and edit release notes in CHANGELOG.md.
- [ ] **Waku test and fleets validation**
- [ ] Ensure all the unit tests (specifically logos-messaging-js tests) are green against the release candidate.
- [ ] Deploy the release candidate to `waku.test` only through [deploy-waku-test job](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-test/) and wait for it to finish (Jenkins access required; ask the infra team if you don't have it).
- After completion, disable [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to master.
- Verify the deployed version at https://fleets.waku.org/.
- Confirm the container image exists on [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab).
- [ ] Analyze Kibana logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test`.
- Most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")`.
- [ ] Enable again the `waku.test` fleet to resume auto-deployment of the latest `master` commit.
- [ ] **Proceed with release**
- [ ] Assign a final release tag (`v0.X.0-beta`) to the same commit that contains the validated release-candidate tag (e.g. `v0.X.0-beta-rc.N`) and submit a PR from the release branch to `master`.
- [ ] Update [nwaku-compose](https://github.com/logos-messaging/nwaku-compose) and [waku-simulator](https://github.com/logos-messaging/waku-simulator) according to the new release.
- [ ] Bump nwaku dependency in [waku-rust-bindings](https://github.com/logos-messaging/waku-rust-bindings) and make sure all examples and tests work.
- [ ] Bump nwaku dependency in [waku-go-bindings](https://github.com/logos-messaging/waku-go-bindings) and make sure all tests work.
- [ ] Create GitHub release (https://github.com/logos-messaging/nwaku/releases).
- [ ] Submit a PR to merge the release branch back to `master`. Make sure you use the option "Merge pull request (Create a merge commit)" to perform the merge. Ping repo admin if this option is not available.
- [ ] **Promote release to fleets**
- [ ] Ask the PM lead to announce the release.
- [ ] Update infra config with any deprecated arguments or changed options.
- [ ] Update waku.sandbox with [this deployment job](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-sandbox/).
### Links
- [Release process](https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md)
- [Release notes](https://github.com/logos-messaging/nwaku/blob/master/CHANGELOG.md)
- [Fleet ownership](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64)
- [Infra-nim-waku](https://github.com/status-im/infra-nim-waku)
- [Jenkins](https://ci.infra.status.im/job/nim-waku/)
- [Fleets](https://fleets.waku.org/)
- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab)


@ -0,0 +1,76 @@
---
name: Prepare Full Release
about: Execute tasks for the creation and publishing of a new full release
title: 'Prepare full release 0.0.0'
labels: full-release
assignees: ''
---
<!--
Add appropriate release number to title!
For detailed info on the release process refer to https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md
-->
### Items to complete
All items below are to be completed by the owner of the given release.
- [ ] Create release branch with major and minor only ( e.g. release/v0.X ) if it doesn't exist.
- [ ] Assign release candidate tag to the release branch HEAD (e.g. `v0.X.0-rc.0`, `v0.X.0-rc.1`, ... `v0.X.0-rc.N`).
- [ ] Generate and edit release notes in CHANGELOG.md.
- [ ] **Validation of release candidate**
- [ ] **Automated testing**
- [ ] Ensure all the unit tests (specifically logos-messaging-js tests) are green against the release candidate.
- [ ] Ask Vac-QA and Vac-DST to perform the available tests against the release candidate.
- [ ] Vac-DST (an additional report is needed; see [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f))
- [ ] **Waku fleet testing**
- [ ] Deploy the release candidate to `waku.test` and `waku.sandbox` fleets.
- Start the [deployment job](https://ci.infra.status.im/job/nim-waku/) for both fleets and wait for it to finish (Jenkins access required; ask the infra team if you don't have it).
- After completion, disable [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to `master`.
- Verify the deployed version at https://fleets.waku.org/.
- Confirm the container image exists on [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab).
- [ ] Search _Kibana_ logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test` and `waku.sandbox`.
- Most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")` OR `(fleet: "waku.sandbox" AND message: "SIGSEGV")`.
- [ ] Enable again the `waku.test` fleet to resume auto-deployment of the latest `master` commit.
- [ ] **Status fleet testing**
- [ ] Deploy release candidate to `status.staging`
- [ ] Perform [sanity check](https://www.notion.so/How-to-test-Nwaku-on-Status-12c6e4b9bf06420ca868bd199129b425) and log results as comments in this issue.
- [ ] Connect 2 instances to `status.staging` fleet, one in relay mode, the other one in light client.
- 1:1 Chats with each other
- Send and receive messages in a community
- Close one instance, send messages with second instance, reopen first instance and confirm messages sent while offline are retrieved from store
- [ ] Perform checks based on _end user impact_
- [ ] Inform other (Waku and Status) CCs to point their instances to `status.staging` for a few days. Ping Status colleagues on their Discord server or in the [Status community](https://status.app/c/G3kAAMSQtb05kog3aGbr3kiaxN4tF5xy4BAGEkkLwILk2z3GcoYlm5hSJXGn7J3laft-tnTwDWmYJ18dP_3bgX96dqr_8E3qKAvxDf3NrrCMUBp4R9EYkQez9XSM4486mXoC3mIln2zc-TNdvjdfL9eHVZ-mGgs=#zQ3shZeEJqTC1xhGUjxuS4rtHSrhJ8vUYp64v6qWkLpvdy9L9) (this is not a blocking point.)
- [ ] Ask Status-QA to perform sanity checks (as described above) and checks based on _end user impact_; specify the version being tested
- [ ] Ask Status-QA or infra to run the automated Status e2e tests against `status.staging`
- [ ] Get other CCs' sign-off: they should comment on this PR, e.g., "Used the app for a week, no problem." If problems are reported, resolve them and create a new RC.
- [ ] **Get Status-QA sign-off**, ensuring that the `status.test` update will not disturb ongoing activities.
- [ ] **Proceed with release**
- [ ] Assign a final release tag (`v0.X.0`) to the same commit that contains the validated release-candidate tag (e.g. `v0.X.0-rc.N`).
- [ ] Update [nwaku-compose](https://github.com/logos-messaging/nwaku-compose) and [waku-simulator](https://github.com/logos-messaging/waku-simulator) according to the new release.
- [ ] Bump nwaku dependency in [waku-rust-bindings](https://github.com/logos-messaging/waku-rust-bindings) and make sure all examples and tests work.
- [ ] Bump nwaku dependency in [waku-go-bindings](https://github.com/logos-messaging/waku-go-bindings) and make sure all tests work.
- [ ] Create GitHub release (https://github.com/logos-messaging/nwaku/releases).
- [ ] Submit a PR to merge the release branch back to `master`. Make sure you use the option "Merge pull request (Create a merge commit)" to perform the merge. Ping repo admin if this option is not available.
- [ ] **Promote release to fleets**
- [ ] Ask the PM lead to announce the release.
- [ ] Update infra config with any deprecated arguments or changed options.
### Links
- [Release process](https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md)
- [Release notes](https://github.com/logos-messaging/nwaku/blob/master/CHANGELOG.md)
- [Fleet ownership](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64)
- [Infra-nim-waku](https://github.com/status-im/infra-nim-waku)
- [Jenkins](https://ci.infra.status.im/job/nim-waku/)
- [Fleets](https://fleets.waku.org/)
- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab)


@ -1,26 +1,8 @@
# Description
<!--- Describe your changes to provide context for reviewers -->
## Description
# Changes
## Changes
<!-- List of detailed changes -->
- [ ] ...
- [ ] ...
<!--
## How to test
1.
1.
1.
-->
<!--
## Issue
closes #
-->

.github/workflows/auto_assign_pr.yml

@ -0,0 +1,12 @@
name: Auto Assign PR to Creator
on:
pull_request:
types:
- opened
jobs:
assign_creator:
runs-on: ubuntu-22.04
steps:
- uses: toshimaru/auto-author-assign@v1.6.2


@ -13,15 +13,15 @@ concurrency:
env:
NPROC: 2
MAKEFLAGS: "-j${NPROC}"
NIMFLAGS: "--parallelBuild:${NPROC}"
NIMFLAGS: "--parallelBuild:${NPROC} --colors:off -d:chronicles_colors:none"
jobs:
changes: # changes detection
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
permissions:
pull-requests: read
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
name: Checkout code
id: checkout
- uses: dorny/paths-filter@v2
@ -34,14 +34,12 @@ jobs:
- 'Makefile'
- 'waku.nimble'
- 'library/**'
v2:
- 'waku/**'
- 'apps/**'
- 'tools/**'
- 'tests/all_tests_v2.nim'
- 'tests/**'
docker:
- 'docker/**'
@ -54,15 +52,16 @@ jobs:
needs: changes
if: ${{ needs.changes.outputs.v2 == 'true' || needs.changes.outputs.common == 'true' }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, macos-latest]
os: [ubuntu-22.04, macos-15]
runs-on: ${{ matrix.os }}
timeout-minutes: 60
timeout-minutes: 45
name: build-${{ matrix.os }}
steps:
- name: Checkout code
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Get submodules hash
id: submodules
@ -77,22 +76,33 @@ jobs:
.git/modules
key: ${{ runner.os }}-vendor-modules-${{ steps.submodules.outputs.hash }}
- name: Make update
run: make update
- name: Build binaries
run: make V=1 QUICK_AND_DIRTY_COMPILER=1 all tools
build-windows:
needs: changes
if: ${{ needs.changes.outputs.v2 == 'true' || needs.changes.outputs.common == 'true' }}
uses: ./.github/workflows/windows-build.yml
with:
branch: ${{ github.ref }}
test:
needs: changes
if: ${{ needs.changes.outputs.v2 == 'true' || needs.changes.outputs.common == 'true' }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, macos-latest]
os: [ubuntu-22.04, macos-15]
runs-on: ${{ matrix.os }}
timeout-minutes: 60
timeout-minutes: 45
name: test-${{ matrix.os }}
steps:
- name: Checkout code
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Get submodules hash
id: submodules
@ -107,37 +117,81 @@ jobs:
.git/modules
key: ${{ runner.os }}-vendor-modules-${{ steps.submodules.outputs.hash }}
- name: Make update
run: make update
- name: Run tests
run: |
if [ ${{ runner.os }} == "macOS" ]; then
brew unlink postgresql@14
brew link libpq --force
fi
postgres_enabled=0
if [ ${{ runner.os }} == "Linux" ]; then
sudo docker run --rm -d -e POSTGRES_PASSWORD=test123 -p 5432:5432 postgres:9.6-alpine
sudo docker run --rm -d -e POSTGRES_PASSWORD=test123 -p 5432:5432 postgres:15.4-alpine3.18
postgres_enabled=1
fi
make V=1 LOG_LEVEL=DEBUG QUICK_AND_DIRTY_COMPILER=1 POSTGRES=1 test testwakunode2
export MAKEFLAGS="-j1"
export NIMFLAGS="--colors:off -d:chronicles_colors:none"
export USE_LIBBACKTRACE=0
make V=1 LOG_LEVEL=DEBUG QUICK_AND_DIRTY_COMPILER=1 POSTGRES=$postgres_enabled test
make V=1 LOG_LEVEL=DEBUG QUICK_AND_DIRTY_COMPILER=1 POSTGRES=$postgres_enabled testwakunode2
build-docker-image:
needs: changes
if: ${{ needs.changes.outputs.v2 == 'true' || needs.changes.outputs.common == 'true' || needs.changes.outputs.docker == 'true' }}
uses: waku-org/nwaku/.github/workflows/container-image.yml@master
uses: logos-messaging/logos-messaging-nim/.github/workflows/container-image.yml@10dc3d3eb4b6a3d4313f7b2cc4a85a925e9ce039
secrets: inherit
nwaku-nwaku-interop-tests:
needs: build-docker-image
uses: logos-messaging/logos-messaging-interop-tests/.github/workflows/nim_waku_PR.yml@SMOKE_TEST_STABLE
with:
node_nwaku: ${{ needs.build-docker-image.outputs.image }}
secrets: inherit
js-waku-node:
needs: build-docker-image
uses: waku-org/js-waku/.github/workflows/test-node.yml@master
uses: logos-messaging/logos-messaging-js/.github/workflows/test-node.yml@master
with:
nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
test_type: node
debug: waku*
js-waku-node-optional:
needs: build-docker-image
uses: waku-org/js-waku/.github/workflows/test-node.yml@master
uses: logos-messaging/logos-messaging-js/.github/workflows/test-node.yml@master
with:
nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
test_type: node-optional
debug: waku*
lint:
name: "Lint"
runs-on: ubuntu-22.04
needs: build
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Get submodules hash
id: submodules
run: |
echo "hash=$(git submodule status | awk '{print $1}' | sort | shasum -a 256 | sed 's/[ -]*//g')" >> $GITHUB_OUTPUT
- name: Cache submodules
uses: actions/cache@v3
with:
path: |
vendor/
.git/modules
key: ${{ runner.os }}-vendor-modules-${{ steps.submodules.outputs.hash }}
- name: Build nph
run: |
make build-nph
- name: Check nph formatting
run: |
shopt -s extglob # Enable extended globbing
NPH=$(make print-nph-path)
echo "using nph at ${NPH}"
"${NPH}" examples waku tests tools apps *.@(nim|nims|nimble)
git diff --exit-code


@ -16,11 +16,13 @@ env:
MAKEFLAGS: "-j${NPROC}"
NIMFLAGS: "--parallelBuild:${NPROC}"
# This workflow should not run for outside contributors
# If org secrets are not available, we'll avoid building and publishing the docker image and we'll pass the workflow
jobs:
build-docker-image:
strategy:
matrix:
os: [ubuntu-latest]
os: [ubuntu-22.04]
runs-on: ${{ matrix.os }}
timeout-minutes: 60
@ -28,15 +30,30 @@ jobs:
outputs:
image: ${{ steps.build.outputs.image }}
steps:
- name: Check secrets
id: secrets
continue-on-error: true
run: |
if [[ -z "$QUAY_PASSWORD" || -z "$QUAY_USER" ]]; then
echo "User does not have access to secrets, skipping workflow"
exit 1
fi
env:
QUAY_PASSWORD: ${{ secrets.QUAY_PASSWORD }}
QUAY_USER: ${{ secrets.QUAY_USER }}
- name: Checkout code
uses: actions/checkout@v3
if: ${{ steps.secrets.outcome == 'success' }}
uses: actions/checkout@v4
- name: Get submodules hash
id: submodules
if: ${{ steps.secrets.outcome == 'success' }}
run: |
echo "hash=$(git submodule status | awk '{print $1}' | sort | shasum -a 256 | sed 's/[ -]*//g')" >> $GITHUB_OUTPUT
- name: Cache submodules
if: ${{ steps.secrets.outcome == 'success' }}
uses: actions/cache@v3
with:
path: |
@ -46,9 +63,11 @@ jobs:
- name: Build binaries
id: build
if: ${{ steps.secrets.outcome == 'success' }}
run: |
make update
make -j${NPROC} V=1 QUICK_AND_DIRTY_COMPILER=1 NIMFLAGS="-d:disableMarchNative -d:postgres" wakunode2
make -j${NPROC} V=1 QUICK_AND_DIRTY_COMPILER=1 NIMFLAGS="-d:disableMarchNative -d:postgres -d:chronicles_colors:none" wakunode2
SHORT_REF=$(git rev-parse --short HEAD)
@ -59,7 +78,7 @@ jobs:
echo "commit_hash=$(git rev-parse HEAD)" >> $GITHUB_OUTPUT
docker login -u ${QUAY_USER} -p ${QUAY_PASSWORD} quay.io
docker build -t ${IMAGE} -f docker/binaries/Dockerfile.bn.amd64 --label quay.expires-after=7d .
docker build -t ${IMAGE} -f docker/binaries/Dockerfile.bn.amd64 --label quay.expires-after=30d .
docker push ${IMAGE}
env:
QUAY_PASSWORD: ${{ secrets.QUAY_PASSWORD }}
@ -68,7 +87,7 @@ jobs:
- name: Comment PR
uses: thollander/actions-comment-pull-request@v2
if: ${{ github.event_name == 'pull_request' }}
if: ${{ github.event_name == 'pull_request' && steps.secrets.outcome == 'success' }}
with:
message: |
You can find the image built from this PR at
@ -78,4 +97,4 @@ jobs:
```
Built from ${{ steps.build.outputs.commit_hash }}
comment_tag: execution
comment_tag: execution-rln-v${{ matrix.rln_version }}


@ -8,52 +8,11 @@ on:
- synchronize
jobs:
main:
name: Validate PR title
runs-on: ubuntu-latest
permissions:
pull-requests: write
steps:
- uses: amannn/action-semantic-pull-request@v5
id: lint_pr_title
with:
types: |
chore
docs
feat
fix
refactor
style
test
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- uses: marocchino/sticky-pull-request-comment@v2
# When the previous steps fails, the workflow would stop. By adding this
# condition you can continue the execution with the populated error message.
if: always() && (steps.lint_pr_title.outputs.error_message != null)
with:
header: pr-title-lint-error
message: |
Hey there and thank you for opening this pull request! 👋🏼
We require pull request titles to follow the [Conventional Commits specification](https://www.conventionalcommits.org/en/v1.0.0/) and it looks like your proposed title needs to be adjusted.
Details:
> ${{ steps.lint_pr_title.outputs.error_message }}
# Delete a previous comment when the issue has been resolved
- if: ${{ steps.lint_pr_title.outputs.error_message == null }}
uses: marocchino/sticky-pull-request-comment@v2
with:
header: pr-title-lint-error
delete: true
labels:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
name: Checkout code
id: checkout
- uses: dorny/paths-filter@v2
@ -81,7 +40,6 @@ jobs:
Please also make sure the label `release-notes` is added to make sure any changes to the user interface are properly announced in changelog and release notes.
comment_tag: configs
- name: Comment DB schema change
uses: thollander/actions-comment-pull-request@v2
if: ${{steps.filter.outputs.db_schema == 'true'}}


@ -17,10 +17,10 @@ env:
jobs:
tag-name:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Vars
id: vars
@ -34,20 +34,20 @@ jobs:
needs: tag-name
strategy:
matrix:
os: [ubuntu-latest, macos-latest]
os: [ubuntu-22.04, macos-15]
arch: [amd64]
include:
- os: macos-latest
- os: macos-15
arch: arm64
runs-on: ${{ matrix.os }}
steps:
- name: Checkout code
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: prep variables
id: vars
run: |
ARCH=${{matrix.arch}}
ARCH=${{matrix.arch}}
echo "arch=${ARCH}" >> $GITHUB_OUTPUT
@ -76,14 +76,14 @@ jobs:
tar -cvzf ${{steps.vars.outputs.nwakutools}} ./build/wakucanary ./build/networkmonitor
- name: upload artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: wakunode2
path: ${{steps.vars.outputs.nwaku}}
retention-days: 2
- name: upload artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: wakutools
path: ${{steps.vars.outputs.nwakutools}}
@ -91,14 +91,14 @@ jobs:
build-docker-image:
needs: tag-name
uses: waku-org/nwaku/.github/workflows/container-image.yml@master
uses: logos-messaging/nwaku/.github/workflows/container-image.yml@master
with:
image_tag: ${{ needs.tag-name.outputs.tag }}
secrets: inherit
js-waku-node:
needs: build-docker-image
uses: waku-org/js-waku/.github/workflows/test-node.yml@master
uses: logos-messaging/logos-messaging-js/.github/workflows/test-node.yml@master
with:
nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
test_type: node
@ -106,24 +106,24 @@ jobs:
js-waku-node-optional:
needs: build-docker-image
uses: waku-org/js-waku/.github/workflows/test-node.yml@master
uses: logos-messaging/logos-messaging-js/.github/workflows/test-node.yml@master
with:
nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
test_type: node-optional
debug: waku*
create-release-candidate:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
needs: [ tag-name, build-and-publish ]
steps:
- name: Checkout code
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
ref: master
- name: download artifacts
uses: actions/download-artifact@v2
uses: actions/download-artifact@v4
- name: prep variables
id: vars
@ -150,7 +150,7 @@ jobs:
-u $(id -u) \
docker.io/wakuorg/sv4git:latest \
release-notes ${RELEASE_NOTES_TAG} --previous $(git tag -l --sort -creatordate | grep -e "^v[0-9]*\.[0-9]*\.[0-9]*$") |\
sed -E 's@#([0-9]+)@[#\1](https://github.com/waku-org/nwaku/issues/\1)@g' > release_notes.md
sed -E 's@#([0-9]+)@[#\1](https://github.com/logos-messaging/nwaku/issues/\1)@g' > release_notes.md
sed -i "s/^## .*/Generated at $(date)/" release_notes.md


@ -14,10 +14,10 @@ jobs:
build-and-upload:
strategy:
matrix:
os: [ubuntu-latest, macos-latest]
os: [ubuntu-22.04, macos-15]
arch: [amd64]
include:
- os: macos-latest
- os: macos-15
arch: arm64
runs-on: ${{ matrix.os }}
timeout-minutes: 60
@ -41,25 +41,84 @@ jobs:
.git/modules
key: ${{ runner.os }}-${{matrix.arch}}-submodules-${{ steps.submodules.outputs.hash }}
- name: prep variables
- name: Get tag
id: version
run: |
# Use full tag, e.g., v0.37.0
echo "version=${GITHUB_REF_NAME}" >> $GITHUB_OUTPUT
- name: Prep variables
id: vars
run: |
NWAKU_ARTIFACT_NAME=$(echo "nwaku-${{matrix.arch}}-${{runner.os}}.tar.gz" | tr "[:upper:]" "[:lower:]")
VERSION=${{ steps.version.outputs.version }}
echo "nwaku=${NWAKU_ARTIFACT_NAME}" >> $GITHUB_OUTPUT
NWAKU_ARTIFACT_NAME=$(echo "waku-${{matrix.arch}}-${{runner.os}}.tar.gz" | tr "[:upper:]" "[:lower:]")
echo "waku=${NWAKU_ARTIFACT_NAME}" >> $GITHUB_OUTPUT
- name: Install dependencies
if [[ "${{ runner.os }}" == "Linux" ]]; then
LIBWAKU_ARTIFACT_NAME=$(echo "libwaku-${VERSION}-${{matrix.arch}}-${{runner.os}}-linux.deb" | tr "[:upper:]" "[:lower:]")
fi
if [[ "${{ runner.os }}" == "macOS" ]]; then
LIBWAKU_ARTIFACT_NAME=$(echo "libwaku-${VERSION}-${{matrix.arch}}-macos.tar.gz" | tr "[:upper:]" "[:lower:]")
fi
echo "libwaku=${LIBWAKU_ARTIFACT_NAME}" >> $GITHUB_OUTPUT
- name: Install build dependencies
run: |
if [[ "${{ runner.os }}" == "Linux" ]]; then
sudo apt-get update && sudo apt-get install -y build-essential dpkg-dev
fi
- name: Build Waku artifacts
run: |
OS=$([[ "${{runner.os}}" == "macOS" ]] && echo "macosx" || echo "linux")
make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}}" V=1 update
make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}} -d:postgres" CI=false wakunode2
make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}}" CI=false chat2
tar -cvzf ${{steps.vars.outputs.nwaku}} ./build/
tar -cvzf ${{steps.vars.outputs.waku}} ./build/
- name: Upload asset
uses: actions/upload-artifact@v2.2.3
make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}} -d:postgres" CI=false libwaku
make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}} -d:postgres" CI=false STATIC=1 libwaku
- name: Create distributable libwaku package
run: |
VERSION=${{ steps.version.outputs.version }}
if [[ "${{ runner.os }}" == "Linux" ]]; then
rm -rf pkg
mkdir -p pkg/DEBIAN pkg/usr/local/lib pkg/usr/local/include
cp build/libwaku.so pkg/usr/local/lib/
cp build/libwaku.a pkg/usr/local/lib/
cp library/libwaku.h pkg/usr/local/include/
echo "Package: waku" >> pkg/DEBIAN/control
echo "Version: ${VERSION}" >> pkg/DEBIAN/control
echo "Priority: optional" >> pkg/DEBIAN/control
echo "Section: libs" >> pkg/DEBIAN/control
echo "Architecture: ${{matrix.arch}}" >> pkg/DEBIAN/control
echo "Maintainer: Waku Team <ivansete@status.im>" >> pkg/DEBIAN/control
echo "Description: Waku library" >> pkg/DEBIAN/control
dpkg-deb --build pkg ${{steps.vars.outputs.libwaku}}
fi
if [[ "${{ runner.os }}" == "macOS" ]]; then
tar -cvzf ${{steps.vars.outputs.libwaku}} ./build/libwaku.dylib ./build/libwaku.a ./library/libwaku.h
fi
- name: Upload waku artifact
uses: actions/upload-artifact@v4.4.0
with:
name: ${{steps.vars.outputs.nwaku}}
path: ${{steps.vars.outputs.nwaku}}
name: waku-${{ steps.version.outputs.version }}-${{ matrix.arch }}-${{ runner.os }}
path: ${{ steps.vars.outputs.waku }}
if-no-files-found: error
- name: Upload libwaku artifact
uses: actions/upload-artifact@v4.4.0
with:
name: libwaku-${{ steps.version.outputs.version }}-${{ matrix.arch }}-${{ runner.os }}
path: ${{ steps.vars.outputs.libwaku }}
if-no-files-found: error


@ -7,7 +7,7 @@ on:
- .github/labels.yml
jobs:
build:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
- uses: micnncim/action-label-syncer@v1

.github/workflows/windows-build.yml

@ -0,0 +1,104 @@
name: ci / build-windows
on:
workflow_call:
inputs:
branch:
required: true
type: string
jobs:
build:
runs-on: windows-latest
defaults:
run:
shell: msys2 {0}
env:
MSYSTEM: MINGW64
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Setup MSYS2
uses: msys2/setup-msys2@v2
with:
update: true
install: >-
git
base-devel
mingw-w64-x86_64-toolchain
make
cmake
upx
mingw-w64-x86_64-rust
mingw-w64-x86_64-postgresql
mingw-w64-x86_64-gcc
mingw-w64-x86_64-gcc-libs
mingw-w64-x86_64-libwinpthread-git
mingw-w64-x86_64-zlib
mingw-w64-x86_64-openssl
mingw-w64-x86_64-python
mingw-w64-x86_64-cmake
mingw-w64-x86_64-llvm
mingw-w64-x86_64-clang
- name: Add UPX to PATH
run: |
echo "/usr/bin:$PATH" >> $GITHUB_PATH
echo "/mingw64/bin:$PATH" >> $GITHUB_PATH
echo "/usr/lib:$PATH" >> $GITHUB_PATH
echo "/mingw64/lib:$PATH" >> $GITHUB_PATH
- name: Verify dependencies
run: |
which upx gcc g++ make cmake cargo rustc python
- name: Updating submodules
run: git submodule update --init --recursive
- name: Creating tmp directory
run: mkdir -p tmp
- name: Building Nim
run: |
cd vendor/nimbus-build-system/vendor/Nim
./build_all.bat
cd ../../../..
- name: Building miniupnpc
run: |
cd vendor/nim-nat-traversal/vendor/miniupnp/miniupnpc
make -f Makefile.mingw CC=gcc CXX=g++ libminiupnpc.a V=1
cd ../../../../..
- name: Building libnatpmp
run: |
cd ./vendor/nim-nat-traversal/vendor/libnatpmp-upstream
make CC="gcc -fPIC -D_WIN32_WINNT=0x0600 -DNATPMP_STATICLIB" libnatpmp.a V=1
cd ../../../../
- name: Building wakunode2.exe
run: |
make wakunode2 LOG_LEVEL=DEBUG V=3 -j8
- name: Building libwaku.dll
run: |
make libwaku STATIC=0 LOG_LEVEL=DEBUG V=1 -j
- name: Check Executable
run: |
if [ -f "./build/wakunode2.exe" ]; then
echo "wakunode2.exe build successful"
else
echo "Build failed: wakunode2.exe not found"
exit 1
fi
if [ -f "./build/libwaku.dll" ]; then
echo "libwaku.dll build successful"
else
echo "Build failed: libwaku.dll not found"
exit 1
fi

.gitignore

@ -30,8 +30,8 @@
/metrics/waku-sim-all-nodes-grafana-dashboard.json
*.log
package-lock.json
package.json
/package-lock.json
/package.json
node_modules/
/.update.timestamp
@ -57,6 +57,12 @@ nimbus-build-system.paths
*.sqlite3-wal
/examples/nodejs/build/
/examples/rust/target/
# Xcode user data
xcuserdata/
*.xcuserstate
# Coverage
coverage_html_report/
@ -64,3 +70,22 @@ coverage_html_report/
# Wildcard
*.ignore.*
# Ignore all possible node runner directories
**/keystore/
**/rln_tree/
**/certs/
# simple qt example
.qmake.stash
main-qt
waku_handler.moc.cpp
# Nix build result
result
# llms
AGENTS.md
nimble.develop
nimble.paths
nimbledeps

.gitmodules

@ -10,9 +10,9 @@
branch = master
[submodule "vendor/nim-libp2p"]
path = vendor/nim-libp2p
url = https://github.com/status-im/nim-libp2p.git
url = https://github.com/vacp2p/nim-libp2p.git
ignore = dirty
branch = unstable
branch = master
[submodule "vendor/nim-stew"]
path = vendor/nim-stew
url = https://github.com/status-im/nim-stew.git
@ -143,7 +143,7 @@
path = vendor/zerokit
url = https://github.com/vacp2p/zerokit.git
ignore = dirty
branch = v0.3.4
branch = v0.5.1
[submodule "vendor/nim-regex"]
path = vendor/nim-regex
url = https://github.com/nitely/nim-regex.git
@ -164,3 +164,34 @@
branch = master
path = vendor/nim-results
url = https://github.com/arnetheduck/nim-results.git
[submodule "vendor/db_connector"]
path = vendor/db_connector
url = https://github.com/nim-lang/db_connector.git
ignore = untracked
branch = devel
[submodule "vendor/nph"]
ignore = untracked
branch = master
path = vendor/nph
url = https://github.com/arnetheduck/nph.git
[submodule "vendor/nim-minilru"]
path = vendor/nim-minilru
url = https://github.com/status-im/nim-minilru.git
ignore = untracked
branch = master
[submodule "vendor/waku-rlnv2-contract"]
path = vendor/waku-rlnv2-contract
url = https://github.com/logos-messaging/waku-rlnv2-contract.git
ignore = untracked
branch = master
[submodule "vendor/nim-lsquic"]
path = vendor/nim-lsquic
url = https://github.com/vacp2p/nim-lsquic
[submodule "vendor/nim-jwt"]
path = vendor/nim-jwt
url = https://github.com/vacp2p/nim-jwt.git
[submodule "vendor/nim-ffi"]
path = vendor/nim-ffi
url = https://github.com/logos-messaging/nim-ffi/
ignore = untracked
branch = master

AGENTS.md

@ -0,0 +1,509 @@
# AGENTS.md - AI Coding Context
This file provides essential context for LLMs assisting with Logos Messaging development.
## Project Identity
Logos Messaging is designed as a shared public network for generalized messaging, not application-specific infrastructure.
This project is a Nim implementation of a libp2p protocol suite for private, censorship-resistant P2P messaging. It targets resource-restricted devices and privacy-preserving communication.
Logos Messaging was formerly known as Waku. Waku-related terminology remains within the codebase for historical reasons.
### Design Philosophy
Key architectural decisions:
Resource-restricted first: Protocols differentiate between full nodes (relay) and light clients (filter, lightpush, store). Light clients can participate without maintaining full message history or relay capabilities. This explains the client/server split in protocol implementations.
Privacy through unlinkability: RLN (Rate Limiting Nullifier) provides DoS protection while preserving sender anonymity. Messages are routed through pubsub topics with automatic sharding across 8 shards. Code prioritizes metadata privacy alongside content encryption.
Scalability via sharding: The network uses automatic content-topic-based sharding to distribute traffic. This is why you'll see sharding logic throughout the codebase and why pubsub topic selection is protocol-level, not application-level.
See [documentation](https://docs.waku.org/learn/) for architectural details.
### Core Protocols
- Relay: Pub/sub message routing using GossipSub
- Store: Historical message retrieval and persistence
- Filter: Lightweight message filtering for resource-restricted clients
- Lightpush: Lightweight message publishing for clients
- Peer Exchange: Peer discovery mechanism
- RLN Relay: Rate limiting nullifier for spam protection
- Metadata: Cluster and shard metadata exchange between peers
- Mix: Mixnet protocol for enhanced privacy through onion routing
- Rendezvous: Alternative peer discovery mechanism
### Key Terminology
- ENR (Ethereum Node Record): Node identity and capability advertisement
- Multiaddr: libp2p addressing format (e.g., `/ip4/127.0.0.1/tcp/60000/p2p/16Uiu2...`)
- PubsubTopic: Gossipsub topic for message routing (e.g., `/waku/2/default-waku/proto`)
- ContentTopic: Application-level message categorization (e.g., `/my-app/1/chat/proto`)
- Sharding: Partitioning network traffic across topics (static or auto-sharding)
- RLN (Rate Limiting Nullifier): Zero-knowledge proof system for spam prevention
### Specifications
All specs are at [rfc.vac.dev/waku](https://rfc.vac.dev/waku). RFCs use `WAKU2-XXX` format (not legacy `WAKU-XXX`).
## Architecture
### Protocol Module Pattern
Each protocol typically follows this structure:
```
waku_<protocol>/
├── protocol.nim # Main protocol type and handler logic
├── client.nim # Client-side API
├── rpc.nim # RPC message types
├── rpc_codec.nim # Protobuf encoding/decoding
├── common.nim # Shared types and constants
└── protocol_metrics.nim # Prometheus metrics
```
### WakuNode Architecture
- WakuNode (`waku/node/waku_node.nim`) is the central orchestrator
- Protocols are "mounted" onto the node's switch (libp2p component)
- PeerManager handles peer selection and connection management
- Switch provides libp2p transport, security, and multiplexing
Example protocol type definition:
```nim
type WakuFilter* = ref object of LPProtocol
subscriptions*: FilterSubscriptions
peerManager: PeerManager
messageCache: TimedCache[string]
```
## Development Essentials
### Build Requirements
- Nim 2.x (check `waku.nimble` for minimum version)
- Rust toolchain (required for RLN dependencies)
- Build system: Make with nimbus-build-system
### Build System
The project uses Makefile with nimbus-build-system (Status's Nim build framework):
```bash
# Initial build (updates submodules)
make wakunode2
# After git pull, update submodules
make update
# Build with custom flags
make wakunode2 NIMFLAGS="-d:chronicles_log_level=DEBUG"
```
Note: The build system uses `--mm:refc` memory management (automatically enforced). Only relevant if compiling outside the standard build system.
### Common Make Targets
```bash
make wakunode2 # Build main node binary
make test # Run all tests
make testcommon # Run common tests only
make libwakuStatic # Build static C library
make chat2 # Build chat example
make install-nph # Install git hook for auto-formatting
```
### Testing
```bash
# Run all tests
make test
# Run specific test file
make test tests/test_waku_enr.nim
# Run specific test case from file
make test tests/test_waku_enr.nim "check capabilities support"
# The compiled binary is kept under ./build/tests/ and can be re-run directly
# during development iteration, e.g. ./build/tests/test_waku_enr.nim.bin
```
Test structure uses `testutils/unittests`:
```nim
import testutils/unittests
suite "Waku ENR - Capabilities":
test "check capabilities support":
## Given
let bitfield: CapabilitiesBitfield = 0b0000_1101u8
## Then
check:
bitfield.supportsCapability(Capabilities.Relay)
not bitfield.supportsCapability(Capabilities.Store)
```
### Code Formatting
Mandatory: All code must be formatted with `nph` (vendored in `vendor/nph`)
```bash
# Format specific file
make nph/waku/waku_core.nim
# Install git pre-commit hook (auto-formats on commit)
make install-nph
```
The nph formatter handles all formatting details automatically, especially with the pre-commit hook installed. Focus on semantic correctness.
### Logging
Uses `chronicles` library with compile-time configuration:
```nim
import chronicles
logScope:
topics = "waku lightpush"
info "handling request", peerId = peerId, topic = pubsubTopic
error "request failed", error = msg
```
Compile with log level:
```bash
nim c -d:chronicles_log_level=TRACE myfile.nim
```
## Code Conventions
Common pitfalls:
- Always handle Result types explicitly
- Avoid global mutable state: Pass state through parameters
- Keep functions focused: Under 50 lines when possible
- Prefer compile-time checks (`static assert`) over runtime checks
### Naming
- Files/Directories: `snake_case` (e.g., `waku_lightpush`, `peer_manager`)
- Procedures: `camelCase` (e.g., `handleRequest`, `pushMessage`)
- Types: `PascalCase` (e.g., `WakuFilter`, `PubsubTopic`)
- Constants: `PascalCase` (e.g., `MaxContentTopicsPerRequest`)
- Constructors: `func init(T: type Xxx, params): T`
- For ref types: `func new(T: type Xxx, params): ref T`
- Exceptions: `XxxError` for CatchableError, `XxxDefect` for Defect
- ref object types: `XxxRef` suffix
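A minimal sketch tying these conventions together (the module and identifiers below are hypothetical, not existing code):
```nim
# peer_tracker.nim - snake_case file name
const MaxTrackedPeers* = 100 # constant in PascalCase

type
  PeerTrackerRef* = ref object # ref object type carries the Ref suffix
    maxPeers: int

  PeerTrackerError* = object of CatchableError # CatchableError descendant named XxxError

proc new*(T: type PeerTrackerRef, maxPeers = MaxTrackedPeers): T =
  # constructor following the `new` convention for ref types
  T(maxPeers: maxPeers)

proc trackPeer*(tracker: PeerTrackerRef, peerId: string) = # procedure in camelCase
  discard
```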
### Imports Organization
Group imports: stdlib, external libs, internal modules:
```nim
import
std/[options, sequtils], # stdlib
results, chronicles, chronos, # external
libp2p/peerid
import
../node/peer_manager, # internal (separate import block)
../waku_core,
./common
```
### Async Programming
Uses chronos, not stdlib `asyncdispatch`:
```nim
proc handleRequest(
wl: WakuLightPush, peerId: PeerId
): Future[WakuLightPushResult] {.async.} =
let res = await wl.pushHandler(peerId, pubsubTopic, message)
return res
```
### Error Handling
The project uses both Result types and exceptions:
Result types from nim-results are used for protocol and API-level errors:
```nim
proc subscribe(
wf: WakuFilter, peerId: PeerID
): Future[FilterSubscribeResult] {.async.} =
if contentTopics.len > MaxContentTopicsPerRequest:
return err(FilterSubscribeError.badRequest("exceeds maximum"))
# Handle Result with isOkOr
(await wf.subscriptions.addSubscription(peerId, criteria)).isOkOr:
return err(FilterSubscribeError.serviceUnavailable(error))
ok()
```
Exceptions still used for:
- chronos async failures (CancelledError, etc.)
- Database/system errors
- Library interop
Most files start with `{.push raises: [].}`, which declares that procs raise no exceptions by default; try/except blocks are then used where raising calls must be handled.
### Pragma Usage
```nim
{.push raises: [].} # Declare that following procs raise no exceptions (at file top)
proc myProc(): Result[T, E] {.async.} = # Async proc
```
### Protocol Inheritance
Protocols inherit from libp2p's `LPProtocol`:
```nim
type WakuLightPush* = ref object of LPProtocol
rng*: ref rand.HmacDrbgContext
peerManager*: PeerManager
pushHandler*: PushMessageHandler
```
### Type Visibility
- Public exports use `*` suffix: `type WakuFilter* = ...`
- Fields without `*` are module-private
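For example (hypothetical type, shown only to illustrate the `*` marker):
```nim
type PeerScore* = object # exported type, usable from importing modules
  score*: float          # exported field
  lastUpdate: int64      # module-private field, invisible to importers
```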
## Style Guide Essentials
This section summarizes key Nim style guidelines relevant to this project. Full guide: https://status-im.github.io/nim-style-guide/
### Language Features
Import and Export
- Use explicit import paths with std/ prefix for stdlib
- Group imports: stdlib, external, internal (separate blocks)
- Export modules whose types appear in public API
- Avoid include
Macros and Templates
- Avoid macros and templates - prefer simple constructs
- Avoid generating public API with macros
- Put logic in templates, use macros only for glue code
Object Construction
- Prefer Type(field: value) syntax
- Use Type.init(params) convention for constructors
- Default zero-initialization should be valid state
- Avoid using result variable for construction
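A short sketch of the preferred construction style (`RetryPolicy` is a made-up type):
```nim
type RetryPolicy = object
  maxAttempts: int
  backoffMs: int

# Prefer Type(field: value) construction; zero-initialized fields remain a valid state
let aggressive = RetryPolicy(maxAttempts: 10, backoffMs: 50)

# Constructor following the Type.init(params) convention, written as an expression
proc init(T: type RetryPolicy, maxAttempts = 3, backoffMs = 200): T =
  T(maxAttempts: maxAttempts, backoffMs: backoffMs)

let defaultPolicy = RetryPolicy.init()
```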
ref object Types
- Avoid ref object unless needed for:
- Resource handles requiring reference semantics
- Shared ownership
- Reference-based data structures (trees, lists)
- Stable pointer for FFI
- Use explicit ref MyType where possible
- Name ref object types with Ref suffix: XxxRef
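Illustrative sketch (both types are invented for the example):
```nim
type
  Limits = object          # default to plain objects with value semantics
    maxPeers: int

  ConnPoolRef = ref object # shared, mutable handle -> ref object with Ref suffix
    limits: Limits

proc describe(limits: ref Limits): string =
  # explicit `ref MyType` at the point where reference semantics are actually needed
  "max peers: " & $limits.maxPeers
```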
Memory Management
- Prefer stack-based and statically sized types in core code
- Use heap allocation in glue layers
- Avoid alloca
- For FFI: use create/dealloc or createShared/deallocShared
Variable Usage
- Use most restrictive of const, let, var (prefer const over let over var)
- Prefer expressions for initialization over var then assignment
- Avoid result variable - use explicit return or expression-based returns
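For instance (hypothetical helper):
```nim
const DefaultTimeoutMs = 5_000 # most restrictive choice: known at compile time

proc resolveTimeout(requestedMs: int): int =
  # expression-based initialization instead of `var` plus later assignment,
  # and an expression-based return instead of the implicit `result` variable
  let timeoutMs =
    if requestedMs > 0: requestedMs
    else: DefaultTimeoutMs
  timeoutMs
```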
Functions
- Prefer func over proc
- Avoid public (*) symbols not part of intended API
- Prefer openArray over seq for function parameters
Methods (runtime polymorphism)
- Avoid method keyword for dynamic dispatch
- Prefer manual vtable with proc closures for polymorphism
- Methods lack support for generics
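A minimal sketch of the closure-based alternative to `method` (the `Greeter` shape is invented for illustration):
```nim
type Greeter = object
  # hand-rolled "vtable": behaviour is injected as a closure field
  greet: proc(name: string): string {.raises: [], gcsafe.}

proc englishGreeter(): Greeter =
  Greeter(greet: proc(name: string): string = "hello, " & name)

proc terseGreeter(): Greeter =
  Greeter(greet: proc(name: string): string = name)

echo englishGreeter().greet("waku") # runtime polymorphism without `method`
```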
Miscellaneous
- Annotate callback proc types with {.raises: [], gcsafe.}
- Avoid explicit {.inline.} pragma
- Avoid converters
- Avoid finalizers
### Type Guidelines
Binary Data
- Use byte for binary data
- Use seq[byte] for dynamic arrays
- Convert string to seq[byte] early if stdlib returns binary as string
Integers
- Prefer signed (int, int64) for counting, lengths, indexing
- Use unsigned with explicit size (uint8, uint64) for binary data, bit ops
- Avoid Natural
- Check ranges before converting to int
- Avoid casting pointers to int
- Avoid range types
Strings
- Use string for text
- Use seq[byte] for binary data instead of string
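A self-contained sketch of converting early (only system helpers are used; stew/byteutils also provides conversion helpers):
```nim
proc toByteSeq(s: string): seq[byte] =
  # convert binary data handed over as string to seq[byte] as early as possible;
  # downstream code then deals only in bytes
  @(s.toOpenArrayByte(0, s.high))

let frame: seq[byte] = toByteSeq("\x01\x02payload")
```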
### Error Handling
Philosophy
- Prefer Result, Opt for explicit error handling
- Use Exceptions only for legacy code compatibility
Result Types
- Use Result[T, E] for operations that can fail
- Use cstring for simple error messages: Result[T, cstring]
- Use enum for errors needing differentiation: Result[T, SomeErrorEnum]
- Use Opt[T] for simple optional values
- Annotate all modules: {.push raises: [].} at top
Exceptions (when unavoidable)
- Inherit from CatchableError, name XxxError
- Use Defect for panics/logic errors, name XxxDefect
- Annotate functions explicitly: {.raises: [SpecificError].}
- Catch specific error types, avoid catching CatchableError
- Use expression-based try blocks
- Isolate legacy exception code with try/except, convert to Result
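A sketch of isolating a raising stdlib call behind a Result (`parseInt` from std/strutils stands in for any legacy, exception-based code):
```nim
import std/strutils
import results

proc parsePort(raw: string): Result[int, cstring] =
  # expression-based try block converts the exception into a Result error
  let port =
    try:
      parseInt(raw)
    except ValueError:
      return err(cstring("not a number"))
  if port < 1 or port > 65535:
    return err(cstring("port out of range"))
  ok(port)
```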
Common Defect Sources
- Overflow in signed arithmetic
- Array/seq indexing with []
- Implicit range type conversions
Status Codes
- Avoid status code pattern
- Use Result instead
### Library Usage
Standard Library
- Use judiciously, prefer focused packages
- Prefer these replacements:
- async: chronos
- bitops: stew/bitops2
- endians: stew/endians2
- exceptions: results
- io: stew/io2
Results Library
- Use cstring errors for diagnostics without differentiation
- Use enum errors when caller needs to act on specific errors
- Use complex types when additional error context needed
- Use isOkOr pattern for chaining
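A sketch of the enum-error and chaining patterns (types and procs are made up; `isOkOr`/`valueOr` come from nim-results):
```nim
import results

type StoreError = enum
  KeyNotFound
  Corrupted

proc lookup(key: string): Result[string, StoreError] =
  if key.len == 0:
    return err(KeyNotFound)
  ok("value-for-" & key)

proc handle(key: string): Result[void, StoreError] =
  # valueOr keeps the happy path flat while propagating the enum error
  let value = lookup(key).valueOr:
    return err(error)
  doAssert value.len > 0
  ok()

proc ensurePresent(key: string): Result[void, StoreError] =
  # isOkOr when only the error matters
  lookup(key).isOkOr:
    return err(error)
  ok()
```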
Wrappers (C/FFI)
- Prefer native Nim when available
- For C libraries: use {.compile.} to build from source
- Create xxx_abi.nim for raw ABI wrapper
- Avoid C++ libraries
Miscellaneous
- Print hex output in lowercase, accept both cases
### Common Pitfalls
- Defects lack tracking by {.raises.}
- nil ref causes runtime crashes
- result variable disables branch checking
- Exception hierarchy unclear between Nim versions
- Range types have compiler bugs
- Finalizers infect all instances of type
## Common Workflows
### Adding a New Protocol
1. Create directory: `waku/waku_myprotocol/`
2. Define core files:
- `rpc.nim` - Message types
- `rpc_codec.nim` - Protobuf encoding
- `protocol.nim` - Protocol handler
- `client.nim` - Client API
- `common.nim` - Shared types
3. Define protocol type in `protocol.nim`:
```nim
type WakuMyProtocol* = ref object of LPProtocol
peerManager: PeerManager
# ... fields
```
4. Implement request handler
5. Mount in WakuNode (`waku/node/waku_node.nim`)
6. Add tests in `tests/waku_myprotocol/`
7. Export module via `waku/waku_myprotocol.nim`
### Adding a REST API Endpoint
1. Define handler in `waku/rest_api/endpoint/myprotocol/`
2. Implement endpoint following pattern:
```nim
proc installMyProtocolApiHandlers*(
router: var RestRouter, node: WakuNode
) =
router.api(MethodGet, "/waku/v2/myprotocol/endpoint") do () -> RestApiResponse:
# Implementation
return RestApiResponse.jsonResponse(data, status = Http200)
```
3. Register in `waku/rest_api/handlers.nim`
### Adding Database Migration
For message_store (SQLite):
1. Create `migrations/message_store/NNNNN_description.up.sql`
2. Create corresponding `.down.sql` for rollback
3. Increment version number sequentially
4. Test migration locally before committing
For PostgreSQL: add in `migrations/message_store_postgres/`
### Running Single Test During Development
```bash
# Build test binary
make test tests/waku_filter_v2/test_waku_client.nim
# Binary location
./build/tests/waku_filter_v2/test_waku_client.nim.bin
# Or combine
make test tests/waku_filter_v2/test_waku_client.nim "specific test name"
```
### Debugging with Chronicles
Set log level and filter topics:
```bash
nim c -r \
-d:chronicles_log_level=TRACE \
-d:chronicles_disabled_topics="eth,dnsdisc" \
tests/mytest.nim
```
## Key Constraints
### Vendor Directory
- Never edit files directly in vendor - it is auto-generated from git submodules
- Always run `make update` after pulling changes
- Managed by `nimbus-build-system`
### Chronicles Performance
- Log levels are configured at compile time for performance
- Runtime filtering is available but should be used sparingly: `-d:chronicles_runtime_filtering=on`
- Default sinks are optimized for production
### Memory Management
- Uses `refc` (reference counting with cycle collection)
- Automatically enforced by the build system (hardcoded in `waku.nimble`)
- Do not override unless absolutely necessary, as it breaks compatibility
### RLN Dependencies
- RLN code requires a Rust toolchain, which explains Rust imports in some modules
- Pre-built `librln` libraries are checked into the repository
## Quick Reference
Language: Nim 2.x | License: MIT or Apache 2.0
### Important Files
- `Makefile` - Primary build interface
- `waku.nimble` - Package definition and build tasks (called via nimbus-build-system)
- `vendor/nimbus-build-system/` - Status's build framework
- `waku/node/waku_node.nim` - Core node implementation
- `apps/wakunode2/wakunode2.nim` - Main CLI application
- `waku/factory/waku_conf.nim` - Configuration types
- `library/libwaku.nim` - C bindings entry point
### Testing Entry Points
- `tests/all_tests_waku.nim` - All Waku protocol tests
- `tests/all_tests_wakunode2.nim` - Node application tests
- `tests/all_tests_common.nim` - Common utilities tests
### Key Dependencies
- `chronos` - Async framework
- `nim-results` - Result type for error handling
- `chronicles` - Logging
- `libp2p` - P2P networking
- `confutils` - CLI argument parsing
- `presto` - REST server
- `nimcrypto` - Cryptographic primitives
Note: For specific version requirements, check `waku.nimble`.

File diff suppressed because it is too large.


@ -1,22 +1,28 @@
# BUILD NIM APP ----------------------------------------------------------------
# alpine:edge supports building rust binaries, alpine:3.16 doesn't for some reason
FROM alpine@sha256:880fafbab5a7602db21ac37f0d17088a29a9a48f98d581f01ce17312c22ccbb5 AS nim-build
FROM rustlang/rust:nightly-alpine3.19 AS nim-build
ARG NIMFLAGS
ARG MAKE_TARGET=wakunode2
ARG NIM_COMMIT
ARG LOG_LEVEL=TRACE
ARG HEAPTRACK_BUILD=0
# Get build tools and required header files
RUN apk add --no-cache bash git build-base pcre-dev linux-headers curl jq rust cargo
RUN apk add --no-cache bash git build-base openssl-dev linux-headers curl jq
WORKDIR /app
COPY . .
# workaround for alpine issue: https://github.com/alpinelinux/docker-alpine/issues/383
RUN apk update && apk upgrade
# Ran separately from 'make' to avoid re-doing
RUN git submodule update --init --recursive
RUN if [ "$HEAPTRACK_BUILD" = "1" ]; then \
git apply --directory=vendor/nimbus-build-system/vendor/Nim docs/tutorial/nim.2.2.4_heaptracker_addon.patch; \
fi
# Slowest build step for the sake of caching layers
RUN make -j$(nproc) deps QUICK_AND_DIRTY_COMPILER=1 ${NIM_COMMIT}
@ -26,7 +32,7 @@ RUN make -j$(nproc) ${NIM_COMMIT} $MAKE_TARGET LOG_LEVEL=${LOG_LEVEL} NIMFLAGS="
# PRODUCTION IMAGE -------------------------------------------------------------
FROM alpine:3.16 as prod
FROM alpine:3.18 AS prod
ARG MAKE_TARGET=wakunode2
@ -34,15 +40,13 @@ LABEL maintainer="jakub@status.im"
LABEL source="https://github.com/waku-org/nwaku"
LABEL description="Wakunode: Waku client"
LABEL commit="unknown"
LABEL version="unknown"
# DevP2P, LibP2P, and JSON RPC ports
EXPOSE 30303 60000 8545
# Referenced in the binary
RUN apk add --no-cache libgcc pcre-dev libpq-dev
# Fix for 'Error loading shared library libpcre.so.3: No such file or directory'
RUN ln -s /usr/lib/libpcre.so /usr/lib/libpcre.so.3
RUN apk add --no-cache libgcc libpq-dev bind-tools
# Copy to separate location to accommodate different MAKE_TARGET values
COPY --from=nim-build /app/build/$MAKE_TARGET /usr/local/bin/
@ -62,7 +66,7 @@ CMD ["--help"]
# DEBUG IMAGE ------------------------------------------------------------------
# Build debug tools: heaptrack
FROM alpine:3.16 AS heaptrack-build
FROM alpine:3.18 AS heaptrack-build
RUN apk update
RUN apk add -- gdb git g++ make cmake zlib-dev boost-dev libunwind-dev
@ -76,7 +80,7 @@ RUN make -j$(nproc)
# Debug image
FROM prod AS debug
FROM prod AS debug-with-heaptrack
RUN apk add --no-cache gdb libunwind


@ -0,0 +1,56 @@
# BUILD NIM APP ----------------------------------------------------------------
FROM rustlang/rust:nightly-alpine3.19 AS nim-build
ARG NIMFLAGS
ARG MAKE_TARGET=lightpushwithmix
ARG NIM_COMMIT
ARG LOG_LEVEL=TRACE
# Get build tools and required header files
RUN apk add --no-cache bash git build-base openssl-dev linux-headers curl jq
WORKDIR /app
COPY . .
# workaround for alpine issue: https://github.com/alpinelinux/docker-alpine/issues/383
RUN apk update && apk upgrade
# Ran separately from 'make' to avoid re-doing
RUN git submodule update --init --recursive
# Slowest build step for the sake of caching layers
RUN make -j$(nproc) deps QUICK_AND_DIRTY_COMPILER=1 ${NIM_COMMIT}
# Build the final node binary
RUN make -j$(nproc) ${NIM_COMMIT} $MAKE_TARGET LOG_LEVEL=${LOG_LEVEL} NIMFLAGS="${NIMFLAGS}"
# REFERENCE IMAGE as BASE for specialized PRODUCTION IMAGES----------------------------------------
FROM alpine:3.18 AS base_lpt
ARG MAKE_TARGET=lightpushwithmix
LABEL maintainer="prem@waku.org"
LABEL source="https://github.com/waku-org/nwaku"
LABEL description="Lite Push With Mix: Waku light-client"
LABEL commit="unknown"
LABEL version="unknown"
# DevP2P, LibP2P, and JSON RPC ports
EXPOSE 30303 60000 8545
# Referenced in the binary
RUN apk add --no-cache libgcc libpq-dev \
wget \
iproute2 \
python3 \
jq
COPY --from=nim-build /app/build/lightpush_publisher_mix /usr/bin/
RUN chmod +x /usr/bin/lightpush_publisher_mix
# Standalone image to be used manually and in lpt-runner -------------------------------------------
FROM base_lpt AS standalone_lpt
ENTRYPOINT ["/usr/bin/lightpush_publisher_mix"]

Makefile

@ -4,11 +4,10 @@
# - MIT license
# at your option. This file may not be copied, modified, or distributed except
# according to those terms.
BUILD_SYSTEM_DIR := vendor/nimbus-build-system
EXCLUDED_NIM_PACKAGES := vendor/nim-dnsdisc/vendor
export BUILD_SYSTEM_DIR := vendor/nimbus-build-system
export EXCLUDED_NIM_PACKAGES := vendor/nim-dnsdisc/vendor
LINK_PCRE := 0
LOG_LEVEL := TRACE
FORMAT_MSG := "\\x1B[95mFormatting:\\x1B[39m"
# we don't want an error here, so we can handle things later, in the ".DEFAULT" target
-include $(BUILD_SYSTEM_DIR)/makefiles/variables.mk
@ -28,6 +27,26 @@ GIT_SUBMODULE_UPDATE := git submodule update --init --recursive
else # "variables.mk" was included. Business as usual until the end of this file.
# Determine the OS
detected_OS := $(shell uname -s)
ifneq (,$(findstring MINGW,$(detected_OS)))
detected_OS := Windows
endif
ifeq ($(detected_OS),Windows)
# Update MINGW_PATH to standard MinGW location
MINGW_PATH = /mingw64
NIM_PARAMS += --passC:"-I$(MINGW_PATH)/include"
NIM_PARAMS += --passL:"-L$(MINGW_PATH)/lib"
NIM_PARAMS += --passL:"-Lvendor/nim-nat-traversal/vendor/miniupnp/miniupnpc"
NIM_PARAMS += --passL:"-Lvendor/nim-nat-traversal/vendor/libnatpmp-upstream"
LIBS = -lws2_32 -lbcrypt -liphlpapi -luserenv -lntdll -lminiupnpc -lnatpmp -lpq
NIM_PARAMS += $(foreach lib,$(LIBS),--passL:"$(lib)")
export PATH := /c/msys64/usr/bin:/c/msys64/mingw64/bin:/c/msys64/usr/lib:/c/msys64/mingw64/lib:$(PATH)
endif
##########
## Main ##
@ -37,7 +56,21 @@ else # "variables.mk" was included. Business as usual until the end of this file
# default target, because it's the first one that doesn't start with '.'
all: | wakunode2 example2 chat2 chat2bridge libwaku
test: | testcommon testwaku
test_file := $(word 2,$(MAKECMDGOALS))
define test_name
$(shell echo '$(MAKECMDGOALS)' | cut -d' ' -f3-)
endef
test:
ifeq ($(strip $(test_file)),)
$(MAKE) testcommon
$(MAKE) testwaku
else
$(MAKE) compile-test TEST_FILE="$(test_file)" TEST_NAME="$(call test_name)"
endif
# this prevents make from erroring on unknown targets like "Index"
%:
@true
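For reference, a hedged usage sketch of this parameterised `test` target; the file path and test name below are illustrative and mirror the examples given in the README further down:

```bash
# Run the full suite (testcommon + testwaku)
make test
# Build and run a single test file
make test tests/wakunode2/test_all.nim
# Run a single named test from that file; the extra goal words are swallowed by the '%: @true' rule
make test tests/wakunode2/test_all.nim "node setup is successful with default configuration"
```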
waku.nims:
ln -s waku.nimble $@
@ -45,6 +78,7 @@ waku.nims:
update: | update-common
rm -rf waku.nims && \
$(MAKE) waku.nims $(HANDLE_OUTPUT)
$(MAKE) build-nph
clean:
rm -rf build
@ -57,22 +91,24 @@ TARGET ?= prod
## Git version
GIT_VERSION ?= $(shell git describe --abbrev=6 --always --tags)
## Compilation parameters. If defined in the CLI the assignments won't be executed
NIM_PARAMS := $(NIM_PARAMS) -d:git_version=\"$(GIT_VERSION)\"
## Heaptracker options
HEAPTRACKER ?= 0
HEAPTRACKER_INJECT ?= 0
ifeq ($(HEAPTRACKER), 1)
# Needed to make nimbus-build-system use the Nim's 'heaptrack_support' branch
DOCKER_NIM_COMMIT := NIM_COMMIT=heaptrack_support
TARGET := debug
# Assumes Nim's lib/system/alloc.nim is patched!
TARGET := debug-with-heaptrack
ifeq ($(HEAPTRACKER_INJECT), 1)
# the Nim compiler will load 'libheaptrack_inject.so'
HEAPTRACK_PARAMS := -d:heaptracker -d:heaptracker_inject
NIM_PARAMS := $(NIM_PARAMS) -d:heaptracker -d:heaptracker_inject
else
# the Nim compiler will load 'libheaptrack_preload.so'
HEAPTRACK_PARAMS := -d:heaptracker
NIM_PARAMS := $(NIM_PARAMS) -d:heaptracker
endif
endif
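A hedged sketch of how these heaptrack switches are typically driven from the make command line (exact usage may differ; `docker-image` is defined further below and consumes `HEAPTRACK_BUILD`, `DOCKER_NIM_COMMIT` and `TARGET`):

```bash
# Build a heaptrack-enabled image: selects the debug-with-heaptrack Docker target
# and the Nim heaptrack_support branch via DOCKER_NIM_COMMIT
make docker-image HEAPTRACKER=1
# Use libheaptrack_inject.so instead of libheaptrack_preload.so
make docker-image HEAPTRACKER=1 HEAPTRACKER_INJECT=1
```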
@ -83,18 +119,40 @@ endif
##################
.PHONY: deps libbacktrace
FOUNDRY_VERSION := 1.5.0
PNPM_VERSION := 10.23.0
rustup:
ifeq (, $(shell which cargo))
# Install Rustup if it's not installed
# -y: Assume "yes" for all prompts
# --default-toolchain stable: Install the stable toolchain
curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain stable
endif
rln-deps: rustup
./scripts/install_rln_tests_dependencies.sh $(FOUNDRY_VERSION) $(PNPM_VERSION)
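A hedged example of invoking the dependency install target; the explicit versions below simply repeat the pins defined above and should be overridable on the command line:

```bash
# Install foundry (anvil) and pnpm for the RLN tests at the pinned versions
make rln-deps
# Override the pins explicitly (values mirror the defaults above)
make rln-deps FOUNDRY_VERSION=1.5.0 PNPM_VERSION=10.23.0
```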
deps: | deps-common nat-libs waku.nims
### nim-libbacktrace
# "-d:release" implies "--stacktrace:off" and it cannot be added to config.nims
ifeq ($(USE_LIBBACKTRACE), 0)
NIM_PARAMS := $(NIM_PARAMS) -d:debug -d:disable_libbacktrace
else
ifeq ($(DEBUG), 0)
NIM_PARAMS := $(NIM_PARAMS) -d:release
else
NIM_PARAMS := $(NIM_PARAMS) -d:debug
endif
ifeq ($(USE_LIBBACKTRACE), 0)
NIM_PARAMS := $(NIM_PARAMS) -d:disable_libbacktrace
endif
# enable the experimental 'exit is dest' feature in libp2p mix
NIM_PARAMS := $(NIM_PARAMS) -d:libp2p_mix_experimental_exit_is_dest
libbacktrace:
+ $(MAKE) -C vendor/nim-libbacktrace --no-print-directory BUILD_CXX_LIB=0
@ -110,8 +168,18 @@ ifeq ($(POSTGRES), 1)
NIM_PARAMS := $(NIM_PARAMS) -d:postgres -d:nimDebugDlOpen
endif
ifeq ($(DEBUG_DISCV5), 1)
NIM_PARAMS := $(NIM_PARAMS) -d:debugDiscv5
endif
clean: | clean-libbacktrace
### Create nimble links (used when building with Nix)
nimbus-build-system-nimble-dir:
NIMBLE_DIR="$(CURDIR)/$(NIMBLE_DIR)" \
PWD_CMD="$(PWD)" \
$(CURDIR)/scripts/generate_nimble_links.sh
##################
## RLN ##
@ -119,9 +187,9 @@ clean: | clean-libbacktrace
.PHONY: librln
LIBRLN_BUILDDIR := $(CURDIR)/vendor/zerokit
LIBRLN_VERSION := v0.3.4
LIBRLN_VERSION := v0.9.0
ifeq ($(OS),Windows_NT)
ifeq ($(detected_OS),Windows)
LIBRLN_FILE := rln.lib
else
LIBRLN_FILE := librln_$(LIBRLN_VERSION).a
@ -131,7 +199,6 @@ $(LIBRLN_FILE):
echo -e $(BUILD_MSG) "$@" && \
./scripts/build_rln.sh $(LIBRLN_BUILDDIR) $(LIBRLN_VERSION) $(LIBRLN_FILE)
librln: | $(LIBRLN_FILE)
$(eval NIM_PARAMS += --passL:$(LIBRLN_FILE) --passL:-lm)
@ -142,7 +209,6 @@ clean-librln:
# Extend clean target
clean: | clean-librln
#################
## Waku Common ##
#################
@ -156,16 +222,22 @@ testcommon: | build deps
##########
## Waku ##
##########
.PHONY: testwaku wakunode2 testwakunode2 example2 chat2 chat2bridge
.PHONY: testwaku wakunode2 testwakunode2 example2 chat2 chat2bridge liteprotocoltester
testwaku: | build deps librln
# install rln-deps only for the testwaku target
testwaku: | build deps rln-deps librln
echo -e $(BUILD_MSG) "build/$@" && \
$(ENV_SCRIPT) nim test -d:os=$(shell uname) $(NIM_PARAMS) waku.nims
wakunode2: | build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
\
$(ENV_SCRIPT) nim wakunode2 $(NIM_PARAMS) waku.nims
benchmarks: | build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
$(ENV_SCRIPT) nim benchmarks $(NIM_PARAMS) waku.nims
testwakunode2: | build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
$(ENV_SCRIPT) nim testwakunode2 $(NIM_PARAMS) waku.nims
@ -178,9 +250,9 @@ chat2: | build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
$(ENV_SCRIPT) nim chat2 $(NIM_PARAMS) waku.nims
rln-keystore-generator: | build deps librln
chat2mix: | build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
$(ENV_SCRIPT) nim rln_keystore_generator $(NIM_PARAMS) waku.nims
$(ENV_SCRIPT) nim chat2mix $(NIM_PARAMS) waku.nims
rln-db-inspector: | build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
@ -190,6 +262,22 @@ chat2bridge: | build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
$(ENV_SCRIPT) nim chat2bridge $(NIM_PARAMS) waku.nims
liteprotocoltester: | build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
$(ENV_SCRIPT) nim liteprotocoltester $(NIM_PARAMS) waku.nims
lightpushwithmix: | build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
$(ENV_SCRIPT) nim lightpushwithmix $(NIM_PARAMS) waku.nims
build/%: | build deps librln
echo -e $(BUILD_MSG) "build/$*" && \
$(ENV_SCRIPT) nim buildone $(NIM_PARAMS) waku.nims $*
compile-test: | build deps librln
echo -e $(BUILD_MSG) "$(TEST_FILE)" "\"$(TEST_NAME)\"" && \
$(ENV_SCRIPT) nim buildTest $(NIM_PARAMS) waku.nims $(TEST_FILE) && \
$(ENV_SCRIPT) nim execTest $(NIM_PARAMS) waku.nims $(TEST_FILE) "\"$(TEST_NAME)\""; \
################
## Waku tools ##
@ -206,17 +294,60 @@ networkmonitor: | build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
$(ENV_SCRIPT) nim networkmonitor $(NIM_PARAMS) waku.nims
############
## Format ##
############
.PHONY: build-nph install-nph clean-nph print-nph-path
# The default location for the nph binary is next to the nim binary so that it is available on the path.
NPH:=$(shell dirname $(NIM_BINARY))/nph
build-nph: | build deps
ifeq ("$(wildcard $(NPH))","")
$(ENV_SCRIPT) nim c --skipParentCfg:on vendor/nph/src/nph.nim && \
mv vendor/nph/src/nph $(shell dirname $(NPH))
echo "nph utility is available at " $(NPH)
else
echo "nph utility already exists at " $(NPH)
endif
GIT_PRE_COMMIT_HOOK := .git/hooks/pre-commit
install-nph: build-nph
ifeq ("$(wildcard $(GIT_PRE_COMMIT_HOOK))","")
cp ./scripts/git_pre_commit_format.sh $(GIT_PRE_COMMIT_HOOK)
else
echo "$(GIT_PRE_COMMIT_HOOK) already present, will NOT override"
exit 1
endif
nph/%: | build-nph
echo -e $(FORMAT_MSG) "nph/$*" && \
$(NPH) $*
clean-nph:
rm -f $(NPH)
# To avoid hardcoding nph binary location in several places
print-nph-path:
echo "$(NPH)"
clean: | clean-nph
###################
## Documentation ##
###################
.PHONY: docs
.PHONY: docs coverage
# TODO: Remove unused target
docs: | build deps
echo -e $(BUILD_MSG) "build/$@" && \
$(ENV_SCRIPT) nim doc --run --index:on --project --out:.gh-pages waku/waku.nim waku.nims
coverage:
echo -e $(BUILD_MSG) "build/$@" && \
$(ENV_SCRIPT) ./scripts/run_cov.sh -y
#####################
## Container image ##
@ -236,31 +367,201 @@ docker-image:
--build-arg="NIMFLAGS=$(DOCKER_IMAGE_NIMFLAGS)" \
--build-arg="NIM_COMMIT=$(DOCKER_NIM_COMMIT)" \
--build-arg="LOG_LEVEL=$(LOG_LEVEL)" \
--label="commit=$(GIT_VERSION)" \
--build-arg="HEAPTRACK_BUILD=$(HEAPTRACKER)" \
--label="commit=$(shell git rev-parse HEAD)" \
--label="version=$(GIT_VERSION)" \
--target $(TARGET) \
--tag $(DOCKER_IMAGE_NAME) .
docker-quick-image: MAKE_TARGET ?= wakunode2
docker-quick-image: DOCKER_IMAGE_TAG ?= $(MAKE_TARGET)-$(GIT_VERSION)
docker-quick-image: DOCKER_IMAGE_NAME ?= wakuorg/nwaku:$(DOCKER_IMAGE_TAG)
docker-quick-image: NIM_PARAMS := $(NIM_PARAMS) -d:chronicles_colors:none -d:insecure -d:postgres --passL:$(LIBRLN_FILE) --passL:-lm
docker-quick-image: | build deps librln wakunode2
docker build \
--build-arg="MAKE_TARGET=$(MAKE_TARGET)" \
--tag $(DOCKER_IMAGE_NAME) \
--target $(TARGET) \
--file docker/binaries/Dockerfile.bn.local \
.
docker-push:
docker push $(DOCKER_IMAGE_NAME)
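An illustrative build-and-push sequence, assuming `DOCKER_IMAGE_NAME` follows the same `wakuorg/nwaku:<tag>` convention used by the quick-image variant (the tag below is a placeholder):

```bash
# Build the wakunode2 image with the commit/version labels shown above
make docker-image MAKE_TARGET=wakunode2
# Push it, naming the image explicitly (placeholder tag)
make docker-push DOCKER_IMAGE_NAME=wakuorg/nwaku:local-test
```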
####################################
## Container lite-protocol-tester ##
####################################
# -d:insecure - Necessary to enable Prometheus HTTP endpoint for metrics
# -d:chronicles_colors:none - Necessary to disable colors in logs for Docker
DOCKER_LPT_NIMFLAGS ?= -d:chronicles_colors:none -d:insecure
# build a docker image for the fleet
docker-liteprotocoltester: DOCKER_LPT_TAG ?= latest
docker-liteprotocoltester: DOCKER_LPT_NAME ?= wakuorg/liteprotocoltester:$(DOCKER_LPT_TAG)
# --no-cache
docker-liteprotocoltester:
docker build \
--build-arg="MAKE_TARGET=liteprotocoltester" \
--build-arg="NIMFLAGS=$(DOCKER_LPT_NIMFLAGS)" \
--build-arg="NIM_COMMIT=$(DOCKER_NIM_COMMIT)" \
--build-arg="LOG_LEVEL=TRACE" \
--label="commit=$(shell git rev-parse HEAD)" \
--label="version=$(GIT_VERSION)" \
--target $(if $(filter deploy,$(DOCKER_LPT_TAG)),deployment_lpt,standalone_lpt) \
--tag $(DOCKER_LPT_NAME) \
--file apps/liteprotocoltester/Dockerfile.liteprotocoltester.compile \
.
docker-quick-liteprotocoltester: DOCKER_LPT_TAG ?= latest
docker-quick-liteprotocoltester: DOCKER_LPT_NAME ?= wakuorg/liteprotocoltester:$(DOCKER_LPT_TAG)
docker-quick-liteprotocoltester: | liteprotocoltester
docker build \
--tag $(DOCKER_LPT_NAME) \
--file apps/liteprotocoltester/Dockerfile.liteprotocoltester \
.
docker-liteprotocoltester-push:
docker push $(DOCKER_LPT_NAME)
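A hedged usage sketch: `DOCKER_LPT_TAG` both tags the image and, through the `$(filter deploy,...)` expression above, selects which Dockerfile stage gets built:

```bash
# Default: builds the standalone_lpt stage and tags wakuorg/liteprotocoltester:latest
make docker-liteprotocoltester
# 'deploy' switches the build to the deployment_lpt stage
make docker-liteprotocoltester DOCKER_LPT_TAG=deploy
# Reuse an already-built local liteprotocoltester binary instead of compiling in Docker
make docker-quick-liteprotocoltester
```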
################
## C Bindings ##
################
.PHONY: cbindings cwaku_example libwaku
STATIC ?= false
STATIC ?= 0
BUILD_COMMAND ?= libwakuDynamic
ifeq ($(detected_OS),Windows)
LIB_EXT_DYNAMIC = dll
LIB_EXT_STATIC = lib
else ifeq ($(detected_OS),Darwin)
LIB_EXT_DYNAMIC = dylib
LIB_EXT_STATIC = a
else ifeq ($(detected_OS),Linux)
LIB_EXT_DYNAMIC = so
LIB_EXT_STATIC = a
endif
LIB_EXT := $(LIB_EXT_DYNAMIC)
ifeq ($(STATIC), 1)
LIB_EXT = $(LIB_EXT_STATIC)
BUILD_COMMAND = libwakuStatic
endif
libwaku: | build deps librln
rm -f build/libwaku*
ifeq ($(STATIC), true)
echo -e $(BUILD_MSG) "build/$@.a" && \
$(ENV_SCRIPT) nim libwakuStatic $(NIM_PARAMS) waku.nims
echo -e $(BUILD_MSG) "build/$@.$(LIB_EXT)" && $(ENV_SCRIPT) nim $(BUILD_COMMAND) $(NIM_PARAMS) waku.nims $@.$(LIB_EXT)
#####################
## Mobile Bindings ##
#####################
.PHONY: libwaku-android \
libwaku-android-precheck \
libwaku-android-arm64 \
libwaku-android-amd64 \
libwaku-android-x86 \
libwaku-android-arm \
rebuild-nat-libs \
build-libwaku-for-android-arch
ANDROID_TARGET ?= 30
ifeq ($(detected_OS),Darwin)
ANDROID_TOOLCHAIN_DIR := $(ANDROID_NDK_HOME)/toolchains/llvm/prebuilt/darwin-x86_64
else
echo -e $(BUILD_MSG) "build/$@.so" && \
$(ENV_SCRIPT) nim libwakuDynamic $(NIM_PARAMS) waku.nims
ANDROID_TOOLCHAIN_DIR := $(ANDROID_NDK_HOME)/toolchains/llvm/prebuilt/linux-x86_64
endif
rebuild-nat-libs: | clean-cross nat-libs
libwaku-android-precheck:
ifndef ANDROID_NDK_HOME
$(error ANDROID_NDK_HOME is not set)
endif
build-libwaku-for-android-arch:
$(MAKE) rebuild-nat-libs CC=$(ANDROID_TOOLCHAIN_DIR)/bin/$(ANDROID_COMPILER) && \
./scripts/build_rln_android.sh $(CURDIR)/build $(LIBRLN_BUILDDIR) $(LIBRLN_VERSION) $(CROSS_TARGET) $(ABIDIR) && \
CPU=$(CPU) ABIDIR=$(ABIDIR) ANDROID_ARCH=$(ANDROID_ARCH) ANDROID_COMPILER=$(ANDROID_COMPILER) ANDROID_TOOLCHAIN_DIR=$(ANDROID_TOOLCHAIN_DIR) $(ENV_SCRIPT) nim libWakuAndroid $(NIM_PARAMS) waku.nims
libwaku-android-arm64: ANDROID_ARCH=aarch64-linux-android
libwaku-android-arm64: CPU=arm64
libwaku-android-arm64: ABIDIR=arm64-v8a
libwaku-android-arm64: | libwaku-android-precheck build deps
$(MAKE) build-libwaku-for-android-arch ANDROID_ARCH=$(ANDROID_ARCH) CROSS_TARGET=$(ANDROID_ARCH) CPU=$(CPU) ABIDIR=$(ABIDIR) ANDROID_COMPILER=$(ANDROID_ARCH)$(ANDROID_TARGET)-clang
libwaku-android-amd64: ANDROID_ARCH=x86_64-linux-android
libwaku-android-amd64: CPU=amd64
libwaku-android-amd64: ABIDIR=x86_64
libwaku-android-amd64: | libwaku-android-precheck build deps
$(MAKE) build-libwaku-for-android-arch ANDROID_ARCH=$(ANDROID_ARCH) CROSS_TARGET=$(ANDROID_ARCH) CPU=$(CPU) ABIDIR=$(ABIDIR) ANDROID_COMPILER=$(ANDROID_ARCH)$(ANDROID_TARGET)-clang
libwaku-android-x86: ANDROID_ARCH=i686-linux-android
libwaku-android-x86: CPU=i386
libwaku-android-x86: ABIDIR=x86
libwaku-android-x86: | libwaku-android-precheck build deps
$(MAKE) build-libwaku-for-android-arch ANDROID_ARCH=$(ANDROID_ARCH) CROSS_TARGET=$(ANDROID_ARCH) CPU=$(CPU) ABIDIR=$(ABIDIR) ANDROID_COMPILER=$(ANDROID_ARCH)$(ANDROID_TARGET)-clang
libwaku-android-arm: ANDROID_ARCH=armv7a-linux-androideabi
libwaku-android-arm: CPU=arm
libwaku-android-arm: ABIDIR=armeabi-v7a
libwaku-android-arm: | libwaku-android-precheck build deps
# cross-rs target architecture name does not match the one used in android
$(MAKE) build-libwaku-for-android-arch ANDROID_ARCH=$(ANDROID_ARCH) CROSS_TARGET=armv7-linux-androideabi CPU=$(CPU) ABIDIR=$(ABIDIR) ANDROID_COMPILER=$(ANDROID_ARCH)$(ANDROID_TARGET)-clang
libwaku-android:
$(MAKE) libwaku-android-amd64
$(MAKE) libwaku-android-arm64
$(MAKE) libwaku-android-x86
# This target is disabled because recent versions of cross-rs complain with the following error:
# relocation R_ARM_THM_ALU_PREL_11_0 cannot be used against symbol 'stack_init_trampoline_return'; recompile with -fPIC
# This architecture is likely unused, so it may simply remain unsupported.
# $(MAKE) libwaku-android-arm
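A hedged example of cross-building the Android libraries; the NDK path is a placeholder, and `ANDROID_NDK_HOME` must point at a real NDK installation or the precheck target errors out:

```bash
# Placeholder NDK location; adjust to your installation
export ANDROID_NDK_HOME=$HOME/Android/Sdk/ndk/<ndk-version>
# Build a single ABI ...
make libwaku-android-arm64
# ... or all currently supported ABIs (amd64, arm64, x86)
make libwaku-android
# Optionally raise the Android API level (defaults to 30 above)
make libwaku-android-arm64 ANDROID_TARGET=33
```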
#################
## iOS Bindings #
#################
.PHONY: libwaku-ios-precheck \
libwaku-ios-device \
libwaku-ios-simulator \
libwaku-ios
IOS_DEPLOYMENT_TARGET ?= 18.0
# Get SDK paths dynamically using xcrun
define get_ios_sdk_path
$(shell xcrun --sdk $(1) --show-sdk-path 2>/dev/null)
endef
libwaku-ios-precheck:
ifeq ($(detected_OS),Darwin)
@command -v xcrun >/dev/null 2>&1 || { echo "Error: Xcode command line tools not installed"; exit 1; }
else
$(error iOS builds are only supported on macOS)
endif
# Build for iOS architecture
build-libwaku-for-ios-arch:
IOS_SDK=$(IOS_SDK) IOS_ARCH=$(IOS_ARCH) IOS_SDK_PATH=$(IOS_SDK_PATH) $(ENV_SCRIPT) nim libWakuIOS $(NIM_PARAMS) waku.nims
# iOS device (arm64)
libwaku-ios-device: IOS_ARCH=arm64
libwaku-ios-device: IOS_SDK=iphoneos
libwaku-ios-device: IOS_SDK_PATH=$(call get_ios_sdk_path,iphoneos)
libwaku-ios-device: | libwaku-ios-precheck build deps
$(MAKE) build-libwaku-for-ios-arch IOS_ARCH=$(IOS_ARCH) IOS_SDK=$(IOS_SDK) IOS_SDK_PATH=$(IOS_SDK_PATH)
# iOS simulator (arm64 - Apple Silicon Macs)
libwaku-ios-simulator: IOS_ARCH=arm64
libwaku-ios-simulator: IOS_SDK=iphonesimulator
libwaku-ios-simulator: IOS_SDK_PATH=$(call get_ios_sdk_path,iphonesimulator)
libwaku-ios-simulator: | libwaku-ios-precheck build deps
$(MAKE) build-libwaku-for-ios-arch IOS_ARCH=$(IOS_ARCH) IOS_SDK=$(IOS_SDK) IOS_SDK_PATH=$(IOS_SDK_PATH)
# Build all iOS targets
libwaku-ios:
$(MAKE) libwaku-ios-device
$(MAKE) libwaku-ios-simulator
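A hedged sketch for the iOS targets; these only run on macOS with the Xcode command line tools installed, as enforced by the precheck above:

```bash
# Build both the device (iphoneos) and simulator (iphonesimulator) arm64 libraries
make libwaku-ios
# Or build them individually
make libwaku-ios-device
make libwaku-ios-simulator
```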
cwaku_example: | build libwaku
echo -e $(BUILD_MSG) "build/$@" && \
cc -o "build/$@" \

README.md

@ -1,24 +1,35 @@
# Nwaku
# Logos Messaging Nim
## Introduction
The nwaku repository implements Waku, and provides tools related to it.
The logos-messaging-nim repository, a.k.a. lmn or nwaku, implements a set of libp2p protocols aimed at enabling private communications.
- A Nim implementation of the [Waku (v2) protocol](https://specs.vac.dev/specs/waku/v2/waku-v2.html).
- CLI application `wakunode2` that allows you to run a Waku node.
- Examples of Waku usage.
- Nim implementation of [these specs](https://github.com/vacp2p/rfc-index/tree/main/waku).
- C library that exposes the implemented protocols.
- CLI application that allows you to run an lmn node.
- Examples.
- Various tests of above.
For more details see the [source code](waku/v2/README.md)
For more details, see the [source code](waku/README.md).
## How to Build & Run
## How to Build & Run ( Linux, MacOS & WSL )
These instructions are generic. For more detailed instructions, see the Waku source code above.
These instructions are generic. For more detailed instructions, see the source code above.
### Prerequisites
The standard developer tools, including a C compiler, GNU Make, Bash, and Git. More information on these installations can be found [here](https://docs.waku.org/guides/nwaku/build-source#install-dependencies).
> In some distributions (Fedora Linux, for example), you may need to install the `which` utility separately; the Nimbus build system relies on it.
You'll also need an installation of Rust and its toolchain (specifically `rustc` and `cargo`).
The easiest way to install these is with `rustup`:
```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
### Wakunode
```bash
@ -26,6 +37,10 @@ The standard developer tools, including a C compiler, GNU Make, Bash, and Git. M
# You'll run `make update` after each `git pull` in the future to keep those submodules updated.
make wakunode2
# Build with custom compilation flags. Do not use NIM_PARAMS unless you know what you are doing.
# Replace with your own flags
make wakunode2 NIMFLAGS="-d:chronicles_colors:none -d:disableMarchNative"
# Run with DNS bootstrapping
./build/wakunode2 --dns-discovery --dns-discovery-url=DNS_BOOTSTRAP_NODE_URL
@ -36,7 +51,7 @@ To join the network, you need to know the address of at least one bootstrap node
Please refer to the [Waku README](https://github.com/waku-org/nwaku/blob/master/waku/README.md) for more information.
For more on how to run `wakunode2`, refer to:
- [Run using binaries](https://docs.waku.org/guides/run-nwaku-node#download-the-binary)
- [Run using binaries](https://docs.waku.org/guides/nwaku/build-source)
- [Run using docker](https://docs.waku.org/guides/nwaku/run-docker)
- [Run using docker-compose](https://docs.waku.org/guides/nwaku/run-docker-compose)
@ -44,24 +59,109 @@ For more on how to run `wakunode2`, refer to:
##### WSL
If you encounter difficulties building the project on WSL, consider placing the project within WSL's filesystem, avoiding the `/mnt/` directory.
### How to Build & Run ( Windows )
### Windows Build Instructions
#### 1. Install Required Tools
- **Git Bash Terminal**: Download and install from https://git-scm.com/download/win
- **MSYS2**:
a. Download the installer from https://www.msys2.org
b. Install at `C:\` (the default location). Remove or rename the msys64 folder if a previous installation exists.
c. Use the mingw64 terminal from the msys64 directory for package installation.
#### 2. Install Dependencies
Open the MSYS2 mingw64 terminal and run the following commands one by one:
```bash
pacman -Syu --noconfirm
pacman -S --noconfirm --needed mingw-w64-x86_64-toolchain
pacman -S --noconfirm --needed base-devel make cmake upx
pacman -S --noconfirm --needed mingw-w64-x86_64-rust
pacman -S --noconfirm --needed mingw-w64-x86_64-postgresql
pacman -S --noconfirm --needed mingw-w64-x86_64-gcc
pacman -S --noconfirm --needed mingw-w64-x86_64-gcc-libs
pacman -S --noconfirm --needed mingw-w64-x86_64-libwinpthread-git
pacman -S --noconfirm --needed mingw-w64-x86_64-zlib
pacman -S --noconfirm --needed mingw-w64-x86_64-openssl
pacman -S --noconfirm --needed mingw-w64-x86_64-python
```
#### 3. Build Wakunode
- Open Git Bash as administrator
- Clone the nwaku repository and `cd` into it
- Execute: `./scripts/build_windows.sh`
#### 4. Troubleshooting
If `wakunode2.exe` isn't generated:
- **Missing Dependencies**: Verify with:
`which make cmake gcc g++ rustc cargo python3 upx`
If any are missing, revisit Step 2 or ensure MSYS2 is installed at `C:\`
- **Installation Conflicts**: Remove existing MinGW, MSYS2, or Git Bash installations and perform a fresh install
### Developing
#### Nim Runtime
This repository is bundled with a Nim runtime that includes the necessary dependencies for the project.
Before you can utilise the runtime you'll need to build the project, as detailed in a previous section. This will generate a `vendor` directory containing various dependencies, including the `nimbus-build-system` which has the bundled nim runtime.
Before you can utilize the runtime you'll need to build the project, as detailed in a previous section.
This will generate a `vendor` directory containing various dependencies, including the `nimbus-build-system` which has the bundled nim runtime.
After successfully building the project, you may bring the bundled runtime into scope by running:
```bash
source env.sh
```
If everything went well, your should see your prompt suffixed with `[Nimbus env]$`. Now you can run `nim` commands as usual.
If everything went well, you should see your prompt suffixed with `[Nimbus env]$`. Now you can run `nim` commands as usual.
### Waku Protocol Test Suite
### Test Suite
```bash
# Run all the Waku tests
make test
# Run a specific test file
make test <test_file_path>
# e.g. : make test tests/wakunode2/test_all.nim
# Run a specific test name from a specific test file
make test <test_file_path> <test_name>
# e.g. : make test tests/wakunode2/test_all.nim "node setup is successful with default configuration"
```
### Building single test files
During development it is helpful to build and run a single test file.
To support this, make provides two dedicated targets:
- `build/<relative path to your test file.nim>`
- `test/<relative path to your test file.nim>`
The binary will be created as `<path to your test file.nim>.bin` under the `build` directory.
```bash
# Build and run your test file separately
make test/tests/common/test_enr_builder.nim
```
### Testing against `js-waku`
Refer to [js-waku repo](https://github.com/waku-org/js-waku/tree/master/packages/tests) for instructions.
## Formatting
Nim files are expected to be formatted using the [`nph`](https://github.com/arnetheduck/nph) version present in `vendor/nph`.
You can easily format a file with the `make nph/<relative path to nim file>` command.
For example:
```
make nph/waku/waku_core.nim
```
A convenient git hook is provided to automatically format files at commit time.
Run the following command to install it:
```shell
make install-nph
```
### Examples
@ -82,3 +182,7 @@ For bug reports, please [tag your issue with the `bug` label](https://github.com
If you believe the reported issue requires critical attention, please [use the `critical` label](https://github.com/waku-org/nwaku/issues/new?labels=critical,bug) to assist with triaging.
To get help, or participate in the conversation, join the [Waku Discord](https://discord.waku.org/) server.
### Docs
* [REST API Documentation](https://waku-org.github.io/waku-rest-api/)


@ -0,0 +1,73 @@
import
std/[strutils, times, sequtils, osproc], math, results, options, testutils/unittests
import
waku/[
waku_rln_relay/protocol_types,
waku_rln_relay/rln,
waku_rln_relay,
waku_rln_relay/conversion_utils,
waku_rln_relay/group_manager/on_chain/group_manager,
],
tests/waku_rln_relay/utils_onchain
proc benchmark(
manager: OnChainGroupManager, registerCount: int, messageLimit: int
): Future[string] {.async, gcsafe.} =
# Register a new member so that we can later generate proofs
let idCredentials = generateCredentials(registerCount)
var start_time = getTime()
for i in 0 .. registerCount - 1:
try:
await manager.register(idCredentials[i], UserMessageLimit(messageLimit + 1))
except Exception, CatchableError:
assert false, "exception raised: " & getCurrentExceptionMsg()
info "registration finished",
iter = i, elapsed_ms = (getTime() - start_time).inMilliseconds
discard await manager.updateRoots()
manager.merkleProofCache = (await manager.fetchMerkleProofElements()).valueOr:
error "Failed to fetch Merkle proof", error = error
quit(QuitFailure)
let epoch = default(Epoch)
info "epoch in bytes", epochHex = epoch.inHex()
let data: seq[byte] = newSeq[byte](1024)
var proofGenTimes: seq[times.Duration] = @[]
var proofVerTimes: seq[times.Duration] = @[]
start_time = getTime()
for i in 1 .. messageLimit:
var generate_time = getTime()
let proof = manager.generateProof(data, epoch, MessageId(i.uint8)).valueOr:
raiseAssert $error
proofGenTimes.add(getTime() - generate_time)
let verify_time = getTime()
let ok = manager.verifyProof(data, proof).valueOr:
raiseAssert $error
proofVerTimes.add(getTime() - verify_time)
info "iteration finished",
iter = i, elapsed_ms = (getTime() - start_time).inMilliseconds
echo "Proof generation times: ", sum(proofGenTimes) div len(proofGenTimes)
echo "Proof verification times: ", sum(proofVerTimes) div len(proofVerTimes)
proc main() =
# Start a local Ethereum JSON-RPC (Anvil) so that the group-manager setup can connect.
let anvilProc = runAnvil()
defer:
stopAnvil(anvilProc)
# Set up an On-chain group manager (includes contract deployment)
let manager = waitFor setupOnchainGroupManager()
(waitFor manager.init()).isOkOr:
raiseAssert $error
discard waitFor benchmark(manager, 200, 20)
when isMainModule:
main()


@ -1,50 +1,57 @@
## chat2 is an example of usage of Waku v2. For suggested usage options, please
## see dingpu tutorial in docs folder.
when not(compileOption("threads")):
when not (compileOption("threads")):
{.fatal: "Please, compile this program with the --threads:on option!".}
when (NimMajor, NimMinor) < (1, 4):
{.push raises: [Defect].}
else:
{.push raises: [].}
{.push raises: [].}
import std/[strformat, strutils, times, json, options, random]
import confutils, chronicles, chronos, stew/shims/net as stewNet,
eth/keys, bearssl, stew/[byteutils, results],
nimcrypto/pbkdf2,
metrics,
metrics/chronos_httpserver
import libp2p/[switch, # manage transports, a single entry point for dialing and listening
crypto/crypto, # cryptographic functions
stream/connection, # create and close stream read / write connections
multiaddress, # encode different addressing schemes. For example, /ip4/7.7.7.7/tcp/6543 means it is using IPv4 protocol and TCP
peerinfo, # manage the information of a peer, such as peer ID and public / private key
peerid, # Implement how peers interact
protobuf/minprotobuf, # message serialisation/deserialisation from and to protobufs
protocols/secure/secio, # define the protocol of secure input / output, allows encrypted communication that uses public keys to validate signed messages instead of a certificate authority like in TLS
nameresolving/dnsresolver]# define DNS resolution
import std/[strformat, strutils, times, options, random, sequtils]
import
../../waku/waku_core,
../../waku/waku_lightpush,
../../waku/waku_lightpush/rpc,
../../waku/waku_filter,
../../waku/waku_enr,
../../waku/waku_store,
../../waku/waku_dnsdisc,
../../waku/waku_node,
../../waku/node/waku_metrics,
../../waku/node/peer_manager,
../../waku/common/utils/nat,
confutils,
chronicles,
chronos,
eth/keys,
bearssl,
stew/[byteutils, results],
metrics,
metrics/chronos_httpserver
import
libp2p/[
switch, # manage transports, a single entry point for dialing and listening
crypto/crypto, # cryptographic functions
stream/connection, # create and close stream read / write connections
multiaddress,
# encode different addressing schemes. For example, /ip4/7.7.7.7/tcp/6543 means it is using IPv4 protocol and TCP
peerinfo,
# manage the information of a peer, such as peer ID and public / private key
peerid, # Implement how peers interact
protobuf/minprotobuf, # message serialisation/deserialisation from and to protobufs
nameresolving/dnsresolver,
] # define DNS resolution
import
waku/[
waku_core,
waku_lightpush_legacy/common,
waku_lightpush_legacy/rpc,
waku_enr,
discovery/waku_dnsdisc,
waku_store_legacy,
waku_node,
node/waku_metrics,
node/peer_manager,
factory/builder,
common/utils/nat,
waku_relay,
waku_store/common,
],
./config_chat2
import
libp2p/protocols/pubsub/rpc/messages,
libp2p/protocols/pubsub/pubsub
import
../../waku/waku_rln_relay
import libp2p/protocols/pubsub/rpc/messages, libp2p/protocols/pubsub/pubsub
import ../../waku/waku_rln_relay
const Help = """
const Help =
"""
Commands: /[?|help|connect|nick|exit]
help: Prints this help
connect: dials a remote peer
@ -56,14 +63,14 @@ const Help = """
# Could poll connection pool or something here, I suppose
# TODO Ensure connected turns true on incoming connections, or get rid of it
type Chat = ref object
node: WakuNode # waku node for publishing, subscribing, etc
transp: StreamTransport # transport streams between read & write file descriptor
subscribed: bool # indicates if a node is subscribed or not to a topic
connected: bool # if the node is connected to another peer
started: bool # if the node has started
nick: string # nickname for this chat session
prompt: bool # chat prompt is showing
contentTopic: string # default content topic for chat messages
node: WakuNode # waku node for publishing, subscribing, etc
transp: StreamTransport # transport streams between read & write file descriptor
subscribed: bool # indicates if a node is subscribed or not to a topic
connected: bool # if the node is connected to another peer
started: bool # if the node has started
nick: string # nickname for this chat session
prompt: bool # chat prompt is showing
contentTopic: string # default content topic for chat messages
type
PrivateKey* = crypto.PrivateKey
@ -86,11 +93,11 @@ proc init*(T: type Chat2Message, buffer: seq[byte]): ProtoResult[T] =
let pb = initProtoBuffer(buffer)
var timestamp: uint64
discard ? pb.getField(1, timestamp)
discard ?pb.getField(1, timestamp)
msg.timestamp = int64(timestamp)
discard ? pb.getField(2, msg.nick)
discard ? pb.getField(3, msg.payload)
discard ?pb.getField(2, msg.nick)
discard ?pb.getField(3, msg.payload)
ok(msg)
@ -125,19 +132,14 @@ proc showChatPrompt(c: Chat) =
except IOError:
discard
proc getChatLine(c: Chat, msg:WakuMessage): Result[string, string]=
proc getChatLine(payload: seq[byte]): string =
# No payload encoding/encryption from Waku
let
pb = Chat2Message.init(msg.payload)
chatLine = if pb.isOk: pb[].toString()
else: string.fromBytes(msg.payload)
return ok(chatline)
let pb = Chat2Message.init(payload).valueOr:
return string.fromBytes(payload)
return $pb
proc printReceivedMessage(c: Chat, msg: WakuMessage) =
let
pb = Chat2Message.init(msg.payload)
chatLine = if pb.isOk: pb[].toString()
else: string.fromBytes(msg.payload)
let chatLine = getChatLine(msg.payload)
try:
echo &"{chatLine}"
except ValueError:
@ -146,8 +148,8 @@ proc printReceivedMessage(c: Chat, msg: WakuMessage) =
c.prompt = false
showChatPrompt(c)
trace "Printing message", topic=DefaultPubsubTopic, chatLine,
contentTopic = msg.contentTopic
trace "Printing message",
topic = DefaultPubsubTopic, chatLine, contentTopic = msg.contentTopic
proc readNick(transp: StreamTransport): Future[string] {.async.} =
# Chat prompt
@ -155,82 +157,87 @@ proc readNick(transp: StreamTransport): Future[string] {.async.} =
stdout.flushFile()
return await transp.readLine()
proc startMetricsServer(
serverIp: IpAddress, serverPort: Port
): Result[MetricsHttpServerRef, string] =
info "Starting metrics HTTP server", serverIp = $serverIp, serverPort = $serverPort
proc startMetricsServer(serverIp: ValidIpAddress, serverPort: Port): Result[MetricsHttpServerRef, string] =
info "Starting metrics HTTP server", serverIp= $serverIp, serverPort= $serverPort
let server = MetricsHttpServerRef.new($serverIp, serverPort).valueOr:
return err("metrics HTTP server start failed: " & $error)
let metricsServerRes = MetricsHttpServerRef.new($serverIp, serverPort)
if metricsServerRes.isErr():
return err("metrics HTTP server start failed: " & $metricsServerRes.error)
let server = metricsServerRes.value
try:
waitFor server.start()
except CatchableError:
return err("metrics HTTP server start failed: " & getCurrentExceptionMsg())
info "Metrics HTTP server started", serverIp= $serverIp, serverPort= $serverPort
ok(metricsServerRes.value)
info "Metrics HTTP server started", serverIp = $serverIp, serverPort = $serverPort
ok(server)
proc publish(c: Chat, line: string) =
# First create a Chat2Message protobuf with this line of text
let time = getTime().toUnix()
let chat2pb = Chat2Message(timestamp: time,
nick: c.nick,
payload: line.toBytes()).encode()
let chat2pb =
Chat2Message(timestamp: time, nick: c.nick, payload: line.toBytes()).encode()
## @TODO: error handling on failure
proc handler(response: PushResponse) {.gcsafe, closure.} =
trace "lightpush response received", response=response
trace "lightpush response received", response = response
var message = WakuMessage(
payload: chat2pb.buffer,
contentTopic: c.contentTopic,
version: 0,
timestamp: getNanosecondTime(time),
)
var message = WakuMessage(payload: chat2pb.buffer,
contentTopic: c.contentTopic, version: 0, timestamp: getNanosecondTime(time))
if not isNil(c.node.wakuRlnRelay):
# for future version when we support more than one rln protected content topic,
# we should check the message content topic as well
let success = c.node.wakuRlnRelay.appendRLNProof(message, float64(time))
if not success:
debug "could not append rate limit proof to the message", success=success
if c.node.wakuRlnRelay.appendRLNProof(message, float64(time)).isErr():
info "could not append rate limit proof to the message"
else:
debug "rate limit proof is appended to the message", success=success
let decodeRes = RateLimitProof.init(message.proof)
if decodeRes.isErr():
info "rate limit proof is appended to the message"
let proof = RateLimitProof.init(message.proof).valueOr:
error "could not decode the RLN proof"
let proof = decodeRes.get()
return
# TODO move it to log after dogfooding
let msgEpoch = fromEpoch(proof.epoch)
if fromEpoch(c.node.wakuRlnRelay.lastEpoch) == msgEpoch:
echo "--rln epoch: ", msgEpoch, " ⚠️ message rate violation! you are spamming the network!"
echo "--rln epoch: ",
msgEpoch, " ⚠️ message rate violation! you are spamming the network!"
else:
echo "--rln epoch: ", msgEpoch
# update the last epoch
c.node.wakuRlnRelay.lastEpoch = proof.epoch
if not c.node.wakuLightPush.isNil():
# Attempt lightpush
asyncSpawn c.node.lightpushPublish(some(DefaultPubsubTopic), message)
else:
asyncSpawn c.node.publish(some(DefaultPubsubTopic), message)
try:
if not c.node.wakuLegacyLightPush.isNil():
# Attempt lightpush
(waitFor c.node.legacyLightpushPublish(some(DefaultPubsubTopic), message)).isOkOr:
error "failed to publish lightpush message", error = error
else:
(waitFor c.node.publish(some(DefaultPubsubTopic), message)).isOkOr:
error "failed to publish message", error = error
except CatchableError:
error "caught error publishing message: ", error = getCurrentExceptionMsg()
# TODO This should read or be subscribe handler subscribe
proc readAndPrint(c: Chat) {.async.} =
while true:
# while p.connected:
# # TODO: echo &"{p.id} -> "
#
# echo cast[string](await p.conn.readLp(1024))
# while p.connected:
# # TODO: echo &"{p.id} -> "
#
# echo cast[string](await p.conn.readLp(1024))
#echo "readAndPrint subscribe NYI"
await sleepAsync(100.millis)
# TODO Implement
proc writeAndPrint(c: Chat) {.async.} =
while true:
# Connect state not updated on incoming WakuRelay connections
# if not c.connected:
# echo "type an address or wait for a connection:"
# echo "type /[help|?] for help"
# Connect state not updated on incoming WakuRelay connections
# if not c.connected:
# echo "type an address or wait for a connection:"
# echo "type /[help|?] for help"
# Chat prompt
showChatPrompt(c)
@ -240,11 +247,11 @@ proc writeAndPrint(c: Chat) {.async.} =
echo Help
continue
# if line.startsWith("/disconnect"):
# echo "Ending current session"
# if p.connected and p.conn.closed.not:
# await p.conn.close()
# p.connected = false
# if line.startsWith("/disconnect"):
# echo "Ending current session"
# if p.connected and p.conn.closed.not:
# await p.conn.close()
# p.connected = false
elif line.startsWith("/connect"):
# TODO Should be able to connect to multiple peers for Waku chat
if c.connected:
@ -255,23 +262,17 @@ proc writeAndPrint(c: Chat) {.async.} =
let address = await c.transp.readLine()
if address.len > 0:
await c.connectToNodes(@[address])
elif line.startsWith("/nick"):
# Set a new nickname
c.nick = await readNick(c.transp)
echo "You are now known as " & c.nick
elif line.startsWith("/exit"):
if not c.node.wakuFilterLegacy.isNil():
echo "unsubscribing from content filters..."
let peerOpt = c.node.peerManager.selectPeer(WakuLegacyFilterCodec)
if peerOpt.isSome():
await c.node.legacyFilterUnsubscribe(pubsubTopic=some(DefaultPubsubTopic), contentTopics=c.contentTopic, peer=peerOpt.get())
echo "quitting..."
await c.node.stop()
try:
await c.node.stop()
except:
echo "exception happened when stopping: " & getCurrentExceptionMsg()
quit(QuitSuccess)
else:
@ -300,46 +301,53 @@ proc readInput(wfd: AsyncFD) {.thread, raises: [Defect, CatchableError].} =
let line = stdin.readLine()
discard waitFor transp.write(line & "\r\n")
{.pop.} # @TODO confutils.nim(775, 17) Error: can raise an unlisted exception: ref IOError
{.pop.}
# @TODO confutils.nim(775, 17) Error: can raise an unlisted exception: ref IOError
proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
let
transp = fromPipe(rfd)
conf = Chat2Conf.load()
nodekey = if conf.nodekey.isSome(): conf.nodekey.get()
else: PrivateKey.random(Secp256k1, rng[]).tryGet()
nodekey =
if conf.nodekey.isSome():
conf.nodekey.get()
else:
PrivateKey.random(Secp256k1, rng[]).tryGet()
# set log level
if conf.logLevel != LogLevel.NONE:
setLogLevel(conf.logLevel)
let natRes = setupNat(conf.nat, clientId,
Port(uint16(conf.tcpPort) + conf.portsShift),
Port(uint16(conf.udpPort) + conf.portsShift))
if natRes.isErr():
raise newException(ValueError, "setupNat error " & natRes.error)
let (extIp, extTcpPort, extUdpPort) = natRes.get()
let (extIp, extTcpPort, extUdpPort) = setupNat(
conf.nat,
clientId,
Port(uint16(conf.tcpPort) + conf.portsShift),
Port(uint16(conf.udpPort) + conf.portsShift),
).valueOr:
raise newException(ValueError, "setupNat error " & error)
var enrBuilder = EnrBuilder.init(nodeKey)
let recordRes = enrBuilder.build()
let record =
if recordRes.isErr():
error "failed to create enr record", error=recordRes.error
quit(QuitFailure)
else: recordRes.get()
let record = enrBuilder.build().valueOr:
error "failed to create enr record", error = error
quit(QuitFailure)
let node = block:
var builder = WakuNodeBuilder.init()
builder.withNodeKey(nodeKey)
builder.withRecord(record)
builder.withNetworkConfigurationDetails(conf.listenAddress, Port(uint16(conf.tcpPort) + conf.portsShift),
extIp, extTcpPort,
wsBindPort = Port(uint16(conf.websocketPort) + conf.portsShift),
wsEnabled = conf.websocketSupport,
wssEnabled = conf.websocketSecureSupport).tryGet()
builder.build().tryGet()
var builder = WakuNodeBuilder.init()
builder.withNodeKey(nodeKey)
builder.withRecord(record)
builder
.withNetworkConfigurationDetails(
conf.listenAddress,
Port(uint16(conf.tcpPort) + conf.portsShift),
extIp,
extTcpPort,
wsBindPort = Port(uint16(conf.websocketPort) + conf.portsShift),
wsEnabled = conf.websocketSupport,
wssEnabled = conf.websocketSecureSupport,
)
.tryGet()
builder.build().tryGet()
await node.start()
@ -347,21 +355,27 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
raise newException(ConfigurationError, "rln-relay-cred-path MUST be passed")
if conf.relay:
await node.mountRelay(conf.topics.split(" "))
let shards =
conf.shards.mapIt(RelayShard(clusterId: conf.clusterId, shardId: uint16(it)))
(await node.mountRelay()).isOkOr:
echo "failed to mount relay: " & error
return
await node.mountLibp2pPing()
let nick = await readNick(transp)
echo "Welcome, " & nick & "!"
var chat = Chat(node: node,
transp: transp,
subscribed: true,
connected: false,
started: true,
nick: nick,
prompt: false,
contentTopic: conf.contentTopic)
var chat = Chat(
node: node,
transp: transp,
subscribed: true,
connected: false,
started: true,
nick: nick,
prompt: false,
contentTopic: conf.contentTopic,
)
if conf.staticnodes.len > 0:
echo "Connecting to static peers..."
@ -374,41 +388,45 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
echo "Connecting to " & $conf.fleet & " fleet using DNS discovery..."
if conf.fleet == Fleet.test:
dnsDiscoveryUrl = some("enrtree://AO47IDOLBKH72HIZZOXQP6NMRESAN7CHYWIBNXDXWRJRZWLODKII6@test.wakuv2.nodes.status.im")
dnsDiscoveryUrl = some(
"enrtree://AOGYWMBYOUIMOENHXCHILPKY3ZRFEULMFI4DOM442QSZ73TT2A7VI@test.waku.nodes.status.im"
)
else:
# Connect to prod by default
dnsDiscoveryUrl = some("enrtree://ANEDLO25QVUGJOUTQFRYKWX6P4Z4GKVESBMHML7DZ6YK4LGS5FC5O@prod.wakuv2.nodes.status.im")
elif conf.dnsDiscovery and conf.dnsDiscoveryUrl != "":
# Connect to sandbox by default
dnsDiscoveryUrl = some(
"enrtree://AIRVQ5DDA4FFWLRBCHJWUWOO6X6S4ZTZ5B667LQ6AJU6PEYDLRD5O@sandbox.waku.nodes.status.im"
)
elif conf.dnsDiscoveryUrl != "":
# No pre-selected fleet. Discover nodes via DNS using user config
debug "Discovering nodes using Waku DNS discovery", url=conf.dnsDiscoveryUrl
info "Discovering nodes using Waku DNS discovery", url = conf.dnsDiscoveryUrl
dnsDiscoveryUrl = some(conf.dnsDiscoveryUrl)
var discoveredNodes: seq[RemotePeerInfo]
if dnsDiscoveryUrl.isSome:
var nameServers: seq[TransportAddress]
for ip in conf.dnsDiscoveryNameServers:
for ip in conf.dnsAddrsNameServers:
nameServers.add(initTAddress(ip, Port(53))) # Assume all servers use port 53
let dnsResolver = DnsResolver.new(nameServers)
proc resolver(domain: string): Future[string] {.async, gcsafe.} =
trace "resolving", domain=domain
trace "resolving", domain = domain
let resolved = await dnsResolver.resolveTxt(domain)
return resolved[0] # Use only first answer
var wakuDnsDiscovery = WakuDnsDiscovery.init(dnsDiscoveryUrl.get(),
resolver)
let wakuDnsDiscovery = WakuDnsDiscovery.init(dnsDiscoveryUrl.get(), resolver)
if wakuDnsDiscovery.isOk:
let discoveredPeers = wakuDnsDiscovery.get().findPeers()
let discoveredPeers = await wakuDnsDiscovery.get().findPeers()
if discoveredPeers.isOk:
info "Connecting to discovered peers"
discoveredNodes = discoveredPeers.get()
echo "Discovered and connecting to " & $discoveredNodes
waitFor chat.node.connectToNodes(discoveredNodes)
else:
warn "Failed to find peers via DNS discovery", error = discoveredPeers.error
else:
warn "Failed to init Waku DNS discovery"
warn "Failed to init Waku DNS discovery", error = wakuDnsDiscovery.error
let peerInfo = node.switch.peerInfo
let listenStr = $peerInfo.addrs[0] & "/p2p/" & $peerInfo.peerId
@ -425,10 +443,9 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
storenode = some(peerInfo.value)
else:
error "Incorrect conf.storenode", error = peerInfo.error
elif discoveredNodes.len > 0:
echo "Store enabled, but no store nodes configured. Choosing one at random from discovered peers"
storenode = some(discoveredNodes[rand(0..len(discoveredNodes) - 1)])
storenode = some(discoveredNodes[rand(0 .. len(discoveredNodes) - 1)])
if storenode.isSome():
# We have a viable storenode. Let's query it for historical messages.
@ -437,50 +454,55 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
node.mountStoreClient()
node.peerManager.addServicePeer(storenode.get(), WakuStoreCodec)
proc storeHandler(response: HistoryResponse) {.gcsafe.} =
proc storeHandler(response: StoreQueryResponse) {.gcsafe.} =
for msg in response.messages:
let
pb = Chat2Message.init(msg.payload)
chatLine = if pb.isOk: pb[].toString()
else: string.fromBytes(msg.payload)
let payload =
if msg.message.isSome():
msg.message.get().payload
else:
newSeq[byte](0)
let chatLine = getChatLine(payload)
echo &"{chatLine}"
info "Hit store handler"
let queryRes = await node.query(HistoryQuery(contentTopics: @[chat.contentTopic]))
if queryRes.isOk():
storeHandler(queryRes.value)
block storeQueryBlock:
let queryRes = (
await node.query(
StoreQueryRequest(contentTopics: @[chat.contentTopic]), storenode.get()
)
).valueOr:
error "Store query failed", error = error
break storeQueryBlock
storeHandler(queryRes)
# NOTE Must be mounted after relay
if conf.lightpushnode != "":
let peerInfo = parsePeerInfo(conf.lightpushnode)
if peerInfo.isOk():
await mountLightPush(node)
node.mountLightPushClient()
(await node.mountLegacyLightPush()).isOkOr:
error "failed to mount legacy lightpush", error = error
quit(QuitFailure)
node.mountLegacyLightPushClient()
node.peerManager.addServicePeer(peerInfo.value, WakuLightpushCodec)
else:
error "LightPush not mounted. Couldn't parse conf.lightpushnode",
error = peerInfo.error
error = peerInfo.error
if conf.filternode != "":
let peerInfo = parsePeerInfo(conf.filternode)
if peerInfo.isOk():
if (let peerInfo = parsePeerInfo(conf.filternode); peerInfo.isErr()):
error "Filter not mounted. Couldn't parse conf.filternode", error = peerInfo.error
else:
await node.mountFilter()
await node.mountFilterClient()
node.peerManager.addServicePeer(peerInfo.value, WakuLegacyFilterCodec)
proc filterHandler(pubsubTopic: PubsubTopic, msg: WakuMessage) {.async, gcsafe, closure.} =
trace "Hit filter handler", contentTopic=msg.contentTopic
proc filterHandler(
pubsubTopic: PubsubTopic, msg: WakuMessage
) {.async, gcsafe, closure.} =
trace "Hit filter handler", contentTopic = msg.contentTopic
chat.printReceivedMessage(msg)
await node.legacyFilterSubscribe(pubsubTopic=some(DefaultPubsubTopic),
contentTopics=chat.contentTopic,
filterHandler,
peerInfo.value)
# TODO: Here to support FilterV2 relevant subscription, but still
# Legacy Filter is concurrent to V2 untill legacy filter will be removed
else:
error "Filter not mounted. Couldn't parse conf.filternode",
error = peerInfo.error
# TODO: Here to support FilterV2 relevant subscription.
# Subscribe to a topic, if relay is mounted
if conf.relay:
@ -490,39 +512,45 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
if msg.contentTopic == chat.contentTopic:
chat.printReceivedMessage(msg)
node.subscribe((kind: PubsubSub, topic: DefaultPubsubTopic), some(handler))
node.subscribe(
(kind: PubsubSub, topic: DefaultPubsubTopic), WakuRelayHandler(handler)
).isOkOr:
error "failed to subscribe to pubsub topic",
topic = DefaultPubsubTopic, error = error
if conf.rlnRelay:
info "WakuRLNRelay is enabled"
proc spamHandler(wakuMessage: WakuMessage) {.gcsafe, closure.} =
debug "spam handler is called"
let chatLineResult = chat.getChatLine(wakuMessage)
if chatLineResult.isOk():
echo "A spam message is found and discarded : ", chatLineResult.value
else:
echo "A spam message is found and discarded"
info "spam handler is called"
let chatLineResult = getChatLine(wakuMessage.payload)
echo "spam message is found and discarded : " & chatLineResult
chat.prompt = false
showChatPrompt(chat)
echo "rln-relay preparation is in progress..."
let rlnConf = WakuRlnConfig(
rlnRelayDynamic: conf.rlnRelayDynamic,
rlnRelayCredIndex: conf.rlnRelayCredIndex,
rlnRelayEthContractAddress: conf.rlnRelayEthContractAddress,
rlnRelayEthClientAddress: conf.rlnRelayEthClientAddress,
rlnRelayCredPath: conf.rlnRelayCredPath,
rlnRelayCredPassword: conf.rlnRelayCredPassword
dynamic: conf.rlnRelayDynamic,
credIndex: conf.rlnRelayCredIndex,
chainId: UInt256.fromBytesBE(conf.rlnRelayChainId.toBytesBE()),
ethClientUrls: conf.ethClientUrls.mapIt(string(it)),
creds: some(
RlnRelayCreds(
path: conf.rlnRelayCredPath, password: conf.rlnRelayCredPassword
)
),
userMessageLimit: conf.rlnRelayUserMessageLimit,
epochSizeSec: conf.rlnEpochSizeSec,
)
waitFor node.mountRlnRelay(rlnConf,
spamHandler=some(spamHandler))
waitFor node.mountRlnRelay(rlnConf, spamHandler = some(spamHandler))
let membershipIndex = node.wakuRlnRelay.groupManager.membershipIndex.get()
let identityCredential = node.wakuRlnRelay.groupManager.idCredentials.get()
echo "your membership index is: ", membershipIndex
echo "your rln identity commitment key is: ", identityCredential.idCommitment.inHex()
echo "your rln identity commitment key is: ",
identityCredential.idCommitment.inHex()
else:
info "WakuRLNRelay is disabled"
echo "WakuRLNRelay is disabled, please enable it by passing in the --rln-relay flag"
@ -531,16 +559,11 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
if conf.metricsServer:
let metricsServer = startMetricsServer(
conf.metricsServerAddress,
Port(conf.metricsServerPort + conf.portsShift)
conf.metricsServerAddress, Port(conf.metricsServerPort + conf.portsShift)
)
await chat.readWriteLoop()
if conf.keepAlive:
node.startKeepalive()
runForever()
proc main(rng: ref HmacDrbgContext) {.async.} =
@ -557,7 +580,6 @@ proc main(rng: ref HmacDrbgContext) {.async.} =
except ConfigurationError as e:
raise e
when isMainModule: # isMainModule = true when the module is compiled as the main file
let rng = crypto.newRng()
try:


@ -1,270 +1,287 @@
import
std/strutils,
confutils, confutils/defs, confutils/std/net,
chronicles, chronos,
chronicles,
chronos,
confutils,
confutils/defs,
confutils/std/net,
eth/keys,
libp2p/crypto/crypto,
libp2p/crypto/secp,
nimcrypto/utils,
eth/keys
import
../../../waku/waku_core
std/strutils,
regex
import waku/waku_core
type
Fleet* = enum
Fleet* = enum
none
prod
test
Chat2Conf* = object
## General node config
EthRpcUrl* = distinct string
Chat2Conf* = object ## General node config
logLevel* {.
desc: "Sets the log level."
defaultValue: LogLevel.INFO
name: "log-level" }: LogLevel
desc: "Sets the log level.", defaultValue: LogLevel.INFO, name: "log-level"
.}: LogLevel
nodekey* {.
desc: "P2P node private key as 64 char hex string.",
name: "nodekey" }: Option[crypto.PrivateKey]
nodekey* {.desc: "P2P node private key as 64 char hex string.", name: "nodekey".}:
Option[crypto.PrivateKey]
listenAddress* {.
defaultValue: defaultListenAddress(config)
desc: "Listening address for the LibP2P traffic."
name: "listen-address"}: ValidIpAddress
defaultValue: defaultListenAddress(config),
desc: "Listening address for the LibP2P traffic.",
name: "listen-address"
.}: IpAddress
tcpPort* {.
desc: "TCP listening port."
defaultValue: 60000
name: "tcp-port" }: Port
tcpPort* {.desc: "TCP listening port.", defaultValue: 60000, name: "tcp-port".}:
Port
udpPort* {.
desc: "UDP listening port."
defaultValue: 60000
name: "udp-port" }: Port
udpPort* {.desc: "UDP listening port.", defaultValue: 60000, name: "udp-port".}:
Port
portsShift* {.
desc: "Add a shift to all port numbers."
defaultValue: 0
name: "ports-shift" }: uint16
desc: "Add a shift to all port numbers.", defaultValue: 0, name: "ports-shift"
.}: uint16
nat* {.
desc: "Specify method to use for determining public address. " &
"Must be one of: any, none, upnp, pmp, extip:<IP>."
defaultValue: "any" }: string
desc:
"Specify method to use for determining public address. " &
"Must be one of: any, none, upnp, pmp, extip:<IP>.",
defaultValue: "any"
.}: string
## Persistence config
dbPath* {.
desc: "The database path for peristent storage",
defaultValue: ""
name: "db-path" }: string
desc: "The database path for peristent storage", defaultValue: "", name: "db-path"
.}: string
persistPeers* {.
desc: "Enable peer persistence: true|false",
defaultValue: false
name: "persist-peers" }: bool
defaultValue: false,
name: "persist-peers"
.}: bool
persistMessages* {.
desc: "Enable message persistence: true|false",
defaultValue: false
name: "persist-messages" }: bool
defaultValue: false,
name: "persist-messages"
.}: bool
## Relay config
relay* {.
desc: "Enable relay protocol: true|false",
defaultValue: true
name: "relay" }: bool
desc: "Enable relay protocol: true|false", defaultValue: true, name: "relay"
.}: bool
staticnodes* {.
desc: "Peer multiaddr to directly connect with. Argument may be repeated."
name: "staticnode" }: seq[string]
desc: "Peer multiaddr to directly connect with. Argument may be repeated.",
name: "staticnode"
.}: seq[string]
keepAlive* {.
desc: "Enable keep-alive for idle connections: true|false",
defaultValue: false
name: "keep-alive" }: bool
defaultValue: false,
name: "keep-alive"
.}: bool
topics* {.
desc: "Default topics to subscribe to (space separated list)."
defaultValue: "/waku/2/default-waku/proto"
name: "topics" .}: string
clusterId* {.
desc:
"Cluster id that the node is running in. Node in a different cluster id is disconnected.",
defaultValue: 0,
name: "cluster-id"
.}: uint16
shards* {.
desc:
"Shards index to subscribe to [0..NUM_SHARDS_IN_NETWORK-1]. Argument may be repeated.",
defaultValue: @[uint16(0)],
name: "shard"
.}: seq[uint16]
## Store config
store* {.
desc: "Enable store protocol: true|false",
defaultValue: true
name: "store" }: bool
desc: "Enable store protocol: true|false", defaultValue: true, name: "store"
.}: bool
storenode* {.
desc: "Peer multiaddr to query for storage.",
defaultValue: ""
name: "storenode" }: string
desc: "Peer multiaddr to query for storage.", defaultValue: "", name: "storenode"
.}: string
## Filter config
filter* {.
desc: "Enable filter protocol: true|false",
defaultValue: false
name: "filter" }: bool
desc: "Enable filter protocol: true|false", defaultValue: false, name: "filter"
.}: bool
filternode* {.
desc: "Peer multiaddr to request content filtering of messages.",
defaultValue: ""
name: "filternode" }: string
defaultValue: "",
name: "filternode"
.}: string
## Lightpush config
lightpush* {.
desc: "Enable lightpush protocol: true|false",
defaultValue: false
name: "lightpush" }: bool
defaultValue: false,
name: "lightpush"
.}: bool
lightpushnode* {.
desc: "Peer multiaddr to request lightpush of published messages.",
defaultValue: ""
name: "lightpushnode" }: string
## JSON-RPC config
rpc* {.
desc: "Enable Waku JSON-RPC server: true|false",
defaultValue: true
name: "rpc" }: bool
rpcAddress* {.
desc: "Listening address of the JSON-RPC server.",
defaultValue: ValidIpAddress.init("127.0.0.1")
name: "rpc-address" }: ValidIpAddress
rpcPort* {.
desc: "Listening port of the JSON-RPC server.",
defaultValue: 8545
name: "rpc-port" }: uint16
rpcAdmin* {.
desc: "Enable access to JSON-RPC Admin API: true|false",
defaultValue: false
name: "rpc-admin" }: bool
rpcPrivate* {.
desc: "Enable access to JSON-RPC Private API: true|false",
defaultValue: false
name: "rpc-private" }: bool
defaultValue: "",
name: "lightpushnode"
.}: string
## Metrics config
metricsServer* {.
desc: "Enable the metrics server: true|false"
defaultValue: false
name: "metrics-server" }: bool
desc: "Enable the metrics server: true|false",
defaultValue: false,
name: "metrics-server"
.}: bool
metricsServerAddress* {.
desc: "Listening address of the metrics server."
defaultValue: ValidIpAddress.init("127.0.0.1")
name: "metrics-server-address" }: ValidIpAddress
desc: "Listening address of the metrics server.",
defaultValue: parseIpAddress("127.0.0.1"),
name: "metrics-server-address"
.}: IpAddress
metricsServerPort* {.
desc: "Listening HTTP port of the metrics server."
defaultValue: 8008
name: "metrics-server-port" }: uint16
desc: "Listening HTTP port of the metrics server.",
defaultValue: 8008,
name: "metrics-server-port"
.}: uint16
metricsLogging* {.
desc: "Enable metrics logging: true|false"
defaultValue: true
name: "metrics-logging" }: bool
desc: "Enable metrics logging: true|false",
defaultValue: true,
name: "metrics-logging"
.}: bool
## DNS discovery config
dnsDiscovery* {.
desc:
"Deprecated, please set dns-discovery-url instead. Enable discovering nodes via DNS",
defaultValue: false,
name: "dns-discovery"
.}: bool
dnsDiscoveryUrl* {.
desc: "URL for DNS node list in format 'enrtree://<key>@<fqdn>'",
defaultValue: "",
name: "dns-discovery-url"
.}: string
dnsAddrsNameServers* {.
desc:
"DNS name server IPs to query for DNS multiaddrs resolution. Argument may be repeated.",
defaultValue: @[parseIpAddress("1.1.1.1"), parseIpAddress("1.0.0.1")],
name: "dns-addrs-name-server"
.}: seq[IpAddress]
## Chat2 configuration
fleet* {.
desc:
"Select the fleet to connect to. This sets the DNS discovery URL to the selected fleet.",
defaultValue: Fleet.prod,
name: "fleet"
.}: Fleet
contentTopic* {.
desc: "Content topic for chat messages.",
defaultValue: "/toy-chat/2/huilong/proto",
name: "content-topic"
.}: string
## Websocket Configuration
websocketSupport* {.
desc: "Enable websocket: true|false",
defaultValue: false,
name: "websocket-support"
.}: bool
websocketPort* {.
desc: "WebSocket listening port.", defaultValue: 8000, name: "websocket-port"
.}: Port
websocketSecureSupport* {.
desc: "WebSocket Secure Support.",
defaultValue: false,
name: "websocket-secure-support"
.}: bool
## rln-relay configuration
rlnRelay* {.
desc: "Enable spam protection through rln-relay: true|false",
defaultValue: false,
name: "rln-relay"
.}: bool
rlnRelayChainId* {.
desc:
"Chain ID of the provided contract (optional, will fetch from RPC provider if not used)",
defaultValue: 0,
name: "rln-relay-chain-id"
.}: uint
rlnRelayCredPath* {.
desc: "The path for persisting rln-relay credential",
defaultValue: "",
name: "rln-relay-cred-path"
.}: string
rlnRelayCredIndex* {.
desc: "the index of the onchain commitment to use", name: "rln-relay-cred-index"
.}: Option[uint]
rlnRelayDynamic* {.
desc: "Enable waku-rln-relay with on-chain dynamic group management: true|false",
defaultValue: false,
name: "rln-relay-dynamic"
.}: bool
rlnRelayIdKey* {.
desc: "Rln relay identity secret key as a Hex string",
defaultValue: "",
name: "rln-relay-id-key"
.}: string
rlnRelayIdCommitmentKey* {.
desc: "Rln relay identity commitment key as a Hex string",
defaultValue: "",
name: "rln-relay-id-commitment-key"
.}: string
ethClientUrls* {.
desc:
"HTTP address of an Ethereum testnet client e.g., http://localhost:8540/. Argument may be repeated.",
defaultValue: newSeq[EthRpcUrl](0),
name: "rln-relay-eth-client-address"
.}: seq[EthRpcUrl]
rlnRelayEthContractAddress* {.
desc: "Address of membership contract on an Ethereum testnet",
defaultValue: "",
name: "rln-relay-eth-contract-address"
.}: string
rlnRelayCredPassword* {.
desc: "Password for encrypting RLN credentials",
defaultValue: "",
name: "rln-relay-cred-password"
.}: string
rlnRelayUserMessageLimit* {.
desc:
"Set a user message limit for the rln membership registration. Must be a positive integer. Default is 1.",
defaultValue: 1,
name: "rln-relay-user-message-limit"
.}: uint64
rlnEpochSizeSec* {.
desc:
"Epoch size in seconds used to rate limit RLN memberships. Default is 1 second.",
defaultValue: 1,
name: "rln-relay-epoch-sec"
.}: uint64
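# Illustrative example only (not part of this diff): the `name:` fields above map to
# CLI flags, so a chat2 node with store and dynamic RLN enabled could hypothetically
# be started as (binary name and RPC endpoint are assumptions):
#   ./build/chat2 --fleet:test --store:true --rln-relay:true --rln-relay-dynamic:true \
#     --rln-relay-eth-client-address:https://sepolia.example.org --metrics-server:true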
# NOTE: Keys are different in nim-libp2p
proc parseCmdArg*(T: type crypto.PrivateKey, p: string): T =
@ -278,13 +295,13 @@ proc parseCmdArg*(T: type crypto.PrivateKey, p: string): T =
proc completeCmdArg*(T: type crypto.PrivateKey, val: string): seq[string] =
return @[]
proc parseCmdArg*(T: type IpAddress, p: string): T =
try:
result = parseIpAddress(p)
except CatchableError as e:
raise newException(ValueError, "Invalid IP address")
proc completeCmdArg*(T: type IpAddress, val: string): seq[string] =
return @[]
proc parseCmdArg*(T: type Port, p: string): T =
@ -302,7 +319,33 @@ proc parseCmdArg*(T: type Option[uint], p: string): T =
except CatchableError:
raise newException(ValueError, "Invalid unsigned integer")
proc completeCmdArg*(T: type EthRpcUrl, val: string): seq[string] =
return @[]
proc parseCmdArg*(T: type EthRpcUrl, s: string): T =
## allowed patterns:
## http://url:port
## https://url:port
## http://url:port/path
## https://url:port/path
## http://url/with/path
## http://url:port/path?query
## https://url:port/path?query
## disallowed patterns:
## any valid/invalid ws or wss url
var httpPattern =
re2"^(https?):\/\/((localhost)|([\w_-]+(?:(?:\.[\w_-]+)+)))(:[0-9]{1,5})?([\w.,@?^=%&:\/~+#-]*[\w@?^=%&\/~+#-])*"
var wsPattern =
re2"^(wss?):\/\/((localhost)|([\w_-]+(?:(?:\.[\w_-]+)+)))(:[0-9]{1,5})?([\w.,@?^=%&:\/~+#-]*[\w@?^=%&\/~+#-])*"
if regex.match(s, wsPattern):
raise newException(
ValueError, "Websocket RPC URL is not supported, Please use an HTTP URL"
)
if not regex.match(s, httpPattern):
raise newException(ValueError, "Invalid HTTP RPC URL")
return EthRpcUrl(s)
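# Hypothetical usage of the validation above (values invented for illustration):
#   doAssert string(parseCmdArg(EthRpcUrl, "https://rpc.example.org:8545/path")) ==
#     "https://rpc.example.org:8545/path"
#   doAssertRaises(ValueError):
#     discard parseCmdArg(EthRpcUrl, "wss://rpc.example.org") # websocket URLs rejected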
func defaultListenAddress*(conf: Chat2Conf): IpAddress =
# TODO: How should we select between IPv4 and IPv6
# Maybe there should be a config option for this.
(static parseIpAddress("0.0.0.0"))


@ -1,3 +1,4 @@
-d:chronicles_line_numbers
-d:chronicles_runtime_filtering:on
-d:discv5_protocol_id:d5waku
path = "../.."


@ -1,31 +1,37 @@
when (NimMajor, NimMinor) < (1, 4):
{.push raises: [Defect].}
else:
{.push raises: [].}
{.push raises: [].}
import
std/[tables, times, strutils, hashes, sequtils],
chronos, confutils, chronicles, chronicles/topics_registry, chronos/streams/tlsstream,
metrics, metrics/chronos_httpserver,
std/[tables, times, strutils, hashes, sequtils, json],
chronos,
confutils,
chronicles,
chronicles/topics_registry,
chronos/streams/tlsstream,
metrics,
metrics/chronos_httpserver,
stew/byteutils,
stew/shims/net as stewNet, json_rpc/rpcserver,
eth/net/nat,
# Matterbridge client imports
../../waku/common/utils/matterbridge_client,
# Waku v2 imports
libp2p/crypto/crypto,
libp2p/errors,
../../../waku/waku_core,
../../../waku/waku_node,
../../../waku/node/peer_manager,
../../waku/waku_filter,
../../waku/waku_filter_v2,
../../waku/waku_store,
waku/[
waku_core,
waku_node,
node/peer_manager,
waku_filter_v2,
waku_store,
factory/builder,
common/utils/matterbridge_client,
common/rate_limit/setting,
],
# Chat 2 imports
../chat2/chat2,
# Common cli config
./config_chat2bridge
declarePublicCounter chat2_mb_transfers, "Number of messages transferred between chat2 and Matterbridge", ["type"]
declarePublicCounter chat2_mb_transfers,
"Number of messages transferred between chat2 and Matterbridge", ["type"]
declarePublicCounter chat2_mb_dropped, "Number of messages dropped", ["reason"]
logScope:
@ -35,8 +41,7 @@ logScope:
# Default values #
##################
const
DeduplQSize = 20 # Maximum number of seen messages to keep in deduplication queue
const DeduplQSize = 20 # Maximum number of seen messages to keep in deduplication queue
#########
# Types #
@ -51,10 +56,10 @@ type
seen: seq[Hash] #FIFO queue
contentTopic: string
MbMessageHandler* = proc (jsonNode: JsonNode) {.gcsafe.}
MbMessageHandler = proc(jsonNode: JsonNode) {.async.}
###################
# Helper functions #
###################
proc containsOrAdd(sequence: var seq[Hash], hash: Hash): bool =
@ -63,25 +68,27 @@ proc containsOrAdd(sequence: var seq[Hash], hash: Hash): bool =
if sequence.len >= DeduplQSize:
trace "Deduplication queue full. Removing oldest item."
sequence.delete 0, 0 # Remove first item in queue
sequence.delete 0, 0 # Remove first item in queue
sequence.add(hash)
return false
proc toWakuMessage(cmb: Chat2MatterBridge, jsonNode: JsonNode): WakuMessage {.raises: [Defect, KeyError]} =
proc toWakuMessage(
cmb: Chat2MatterBridge, jsonNode: JsonNode
): WakuMessage {.raises: [Defect, KeyError].} =
# Translates a Matterbridge API JSON response to a Waku v2 message
let msgFields = jsonNode.getFields()
# @TODO error handling here - verify expected fields
let chat2pb = Chat2Message(timestamp: getTime().toUnix(), # @TODO use provided timestamp
nick: msgFields["username"].getStr(),
payload: msgFields["text"].getStr().toBytes()).encode()
let chat2pb = Chat2Message(
timestamp: getTime().toUnix(), # @TODO use provided timestamp
nick: msgFields["username"].getStr(),
payload: msgFields["text"].getStr().toBytes(),
).encode()
WakuMessage(payload: chat2pb.buffer,
contentTopic: cmb.contentTopic,
version: 0)
WakuMessage(payload: chat2pb.buffer, contentTopic: cmb.contentTopic, version: 0)
proc toChat2(cmb: Chat2MatterBridge, jsonNode: JsonNode) {.async.} =
let msg = cmb.toWakuMessage(jsonNode)
@ -95,9 +102,12 @@ proc toChat2(cmb: Chat2MatterBridge, jsonNode: JsonNode) {.async.} =
chat2_mb_transfers.inc(labelValues = ["mb_to_chat2"])
await cmb.nodev2.publish(some(DefaultPubsubTopic), msg)
(await cmb.nodev2.publish(some(DefaultPubsubTopic), msg)).isOkOr:
error "failed to publish message", error = error
proc toMatterbridge(cmb: Chat2MatterBridge, msg: WakuMessage) {.gcsafe, raises: [Exception].} =
proc toMatterbridge(
cmb: Chat2MatterBridge, msg: WakuMessage
) {.gcsafe, raises: [Exception].} =
if cmb.seen.containsOrAdd(msg.payload.hash()):
# This is a duplicate message. Return.
chat2_mb_dropped.inc(labelValues = ["duplicate"])
@ -116,130 +126,121 @@ proc toMatterbridge(cmb: Chat2MatterBridge, msg: WakuMessage) {.gcsafe, raises:
assert chat2Msg.isOk
let postRes = cmb.mbClient.postMessage(text = string.fromBytes(chat2Msg[].payload),
username = chat2Msg[].nick)
if postRes.isErr() or (postRes[] == false):
if not cmb.mbClient
.postMessage(text = string.fromBytes(chat2Msg[].payload), username = chat2Msg[].nick)
.containsValue(true):
chat2_mb_dropped.inc(labelValues = ["duplicate"])
error "Matterbridge host unreachable. Dropping message."
proc pollMatterbridge(cmb: Chat2MatterBridge, handler: MbMessageHandler) {.async.} =
while cmb.running:
let getRes = cmb.mbClient.getMessages()
if getRes.isOk():
for jsonNode in getRes[]:
handler(jsonNode)
else:
let msg = cmb.mbClient.getMessages().valueOr:
error "Matterbridge host unreachable. Sleeping before retrying."
await sleepAsync(chronos.seconds(10))
continue
for jsonNode in msg:
await handler(jsonNode)
await sleepAsync(cmb.pollPeriod)
##############
# Public API #
##############
proc new*(T: type Chat2MatterBridge,
# Matterbridge initialisation
mbHostUri: string,
mbGateway: string,
# NodeV2 initialisation
nodev2Key: crypto.PrivateKey,
nodev2BindIp: ValidIpAddress, nodev2BindPort: Port,
nodev2ExtIp = none[ValidIpAddress](), nodev2ExtPort = none[Port](),
contentTopic: string): T
{.raises: [Defect, ValueError, KeyError, TLSStreamProtocolError, IOError, LPError].} =
proc new*(
T: type Chat2MatterBridge,
# Matterbridge initialisation
mbHostUri: string,
mbGateway: string,
# NodeV2 initialisation
nodev2Key: crypto.PrivateKey,
nodev2BindIp: IpAddress,
nodev2BindPort: Port,
nodev2ExtIp = none[IpAddress](),
nodev2ExtPort = none[Port](),
contentTopic: string,
): T {.
raises: [Defect, ValueError, KeyError, TLSStreamProtocolError, IOError, LPError]
.} =
# Setup Matterbridge
let
mbClient = MatterbridgeClient.new(mbHostUri, mbGateway)
let mbClient = MatterbridgeClient.new(mbHostUri, mbGateway)
# Let's verify the Matterbridge configuration before continuing
let clientHealth = mbClient.isHealthy()
if clientHealth.isOk() and clientHealth[]:
info "Reached Matterbridge host", host=mbClient.host
if mbClient.isHealthy().valueOr(false):
info "Reached Matterbridge host", host = mbClient.host
else:
raise newException(ValueError, "Matterbridge client not reachable/healthy")
# Setup Waku v2 node
let nodev2 = block:
var builder = WakuNodeBuilder.init()
builder.withNodeKey(nodev2Key)
builder.withNetworkConfigurationDetails(nodev2BindIp, nodev2BindPort, nodev2ExtIp, nodev2ExtPort).tryGet()
builder.build().tryGet()
var builder = WakuNodeBuilder.init()
builder.withNodeKey(nodev2Key)
return Chat2MatterBridge(mbClient: mbClient,
nodev2: nodev2,
running: false,
pollPeriod: chronos.seconds(1),
contentTopic: contentTopic)
builder
.withNetworkConfigurationDetails(
nodev2BindIp, nodev2BindPort, nodev2ExtIp, nodev2ExtPort
)
.tryGet()
builder.build().tryGet()
return Chat2MatterBridge(
mbClient: mbClient,
nodev2: nodev2,
running: false,
pollPeriod: chronos.seconds(1),
contentTopic: contentTopic,
)
proc start*(cmb: Chat2MatterBridge) {.async.} =
info "Starting Chat2MatterBridge"
cmb.running = true
debug "Start polling Matterbridge"
info "Start polling Matterbridge"
# Start Matterbridge polling (@TODO: use streaming interface)
proc mbHandler(jsonNode: JsonNode) {.gcsafe, raises: [Exception].} =
trace "Bridging message from Matterbridge to chat2", jsonNode=jsonNode
proc mbHandler(jsonNode: JsonNode) {.async.} =
trace "Bridging message from Matterbridge to chat2", jsonNode = jsonNode
waitFor cmb.toChat2(jsonNode)
asyncSpawn cmb.pollMatterbridge(mbHandler)
# Start Waku v2 node
debug "Start listening on Waku v2"
info "Start listening on Waku v2"
await cmb.nodev2.start()
# Always mount relay for bridge
# `triggerSelf` is false on a `bridge` to avoid duplicates
await cmb.nodev2.mountRelay()
(await cmb.nodev2.mountRelay()).isOkOr:
error "failed to mount relay", error = error
return
cmb.nodev2.wakuRelay.triggerSelf = false
# Bridging
# Handle messages on Waku v2 and bridge to Matterbridge
proc relayHandler(pubsubTopic: PubsubTopic, msg: WakuMessage): Future[void] {.async, gcsafe.} =
trace "Bridging message from Chat2 to Matterbridge", msg=msg
cmb.toMatterbridge(msg)
proc relayHandler(
pubsubTopic: PubsubTopic, msg: WakuMessage
): Future[void] {.async.} =
trace "Bridging message from Chat2 to Matterbridge", msg = msg
try:
cmb.toMatterbridge(msg)
except:
error "exception in relayHandler: " & getCurrentExceptionMsg()
cmb.nodev2.subscribe((kind: PubsubSub, topic: DefaultPubsubTopic), some(relayHandler))
cmb.nodev2.subscribe((kind: PubsubSub, topic: DefaultPubsubTopic), relayHandler).isOkOr:
error "failed to subscribe to relay", topic = DefaultPubsubTopic, error = error
return
proc stop*(cmb: Chat2MatterBridge) {.async.} =
proc stop*(cmb: Chat2MatterBridge) {.async: (raises: [Exception]).} =
info "Stopping Chat2MatterBridge"
cmb.running = false
await cmb.nodev2.stop()
{.pop.} # @TODO confutils.nim(775, 17) Error: can raise an unlisted exception: ref IOError
{.pop.}
# @TODO confutils.nim(775, 17) Error: can raise an unlisted exception: ref IOError
when isMainModule:
import
../../../waku/common/utils/nat,
../../waku/waku_api/message_cache,
../../waku/waku_api/jsonrpc/debug/handlers as debug_api,
../../waku/waku_api/jsonrpc/filter/handlers as filter_api,
../../waku/waku_api/jsonrpc/relay/handlers as relay_api,
../../waku/waku_api/jsonrpc/store/handlers as store_api
proc startV2Rpc(node: WakuNode, rpcServer: RpcHttpServer, conf: Chat2MatterbridgeConf) {.raises: [Exception].} =
installDebugApiHandlers(node, rpcServer)
# Install enabled API handlers:
if conf.relay:
let cache = MessageCache[string].init(capacity=30)
installRelayApiHandlers(node, rpcServer, cache)
if conf.filter:
let messageCache = filter_api.MessageCache.init(capacity=30)
installFilterApiHandlers(node, rpcServer, messageCache)
if conf.store:
installStoreApiHandlers(node, rpcServer)
rpcServer.start()
import waku/common/utils/nat, waku/rest_api/message_cache
let
rng = newRng()
@ -248,30 +249,32 @@ when isMainModule:
if conf.logLevel != LogLevel.NONE:
setLogLevel(conf.logLevel)
let natRes = setupNat(conf.nat, clientId,
Port(uint16(conf.libp2pTcpPort) + conf.portsShift),
Port(uint16(conf.udpPort) + conf.portsShift))
if natRes.isErr():
error "Error in setupNat", error = natRes.error
let (nodev2ExtIp, nodev2ExtPort, _) = setupNat(
conf.nat,
clientId,
Port(uint16(conf.libp2pTcpPort) + conf.portsShift),
Port(uint16(conf.udpPort) + conf.portsShift),
).valueOr:
raise newException(ValueError, "setupNat error " & error)
# Load address configuration
let
(nodev2ExtIp, nodev2ExtPort, _) = natRes.get()
## The following heuristic assumes that, in absence of manual
## config, the external port is the same as the bind port.
extPort = if nodev2ExtIp.isSome() and nodev2ExtPort.isNone():
some(Port(uint16(conf.libp2pTcpPort) + conf.portsShift))
else:
nodev2ExtPort
## The following heuristic assumes that, in absence of manual
## config, the external port is the same as the bind port.
let extPort =
if nodev2ExtIp.isSome() and nodev2ExtPort.isNone():
some(Port(uint16(conf.libp2pTcpPort) + conf.portsShift))
else:
nodev2ExtPort
let
bridge = Chat2Matterbridge.new(
mbHostUri = "http://" & $initTAddress(conf.mbHostAddress, Port(conf.mbHostPort)),
mbGateway = conf.mbGateway,
nodev2Key = conf.nodekey,
nodev2BindIp = conf.listenAddress, nodev2BindPort = Port(uint16(conf.libp2pTcpPort) + conf.portsShift),
nodev2ExtIp = nodev2ExtIp, nodev2ExtPort = extPort,
contentTopic = conf.contentTopic)
let bridge = Chat2Matterbridge.new(
mbHostUri = "http://" & $initTAddress(conf.mbHostAddress, Port(conf.mbHostPort)),
mbGateway = conf.mbGateway,
nodev2Key = conf.nodekey,
nodev2BindIp = conf.listenAddress,
nodev2BindPort = Port(uint16(conf.libp2pTcpPort) + conf.portsShift),
nodev2ExtIp = nodev2ExtIp,
nodev2ExtPort = extPort,
contentTopic = conf.contentTopic,
)
waitFor bridge.start()
@ -298,20 +301,12 @@ when isMainModule:
if conf.filternode != "":
let filterPeer = parsePeerInfo(conf.filternode)
if filterPeer.isOk():
bridge.nodev2.peerManager.addServicePeer(filterPeer.value, WakuLegacyFilterCodec)
bridge.nodev2.peerManager.addServicePeer(filterPeer.value, WakuFilterSubscribeCodec)
bridge.nodev2.peerManager.addServicePeer(
filterPeer.value, WakuFilterSubscribeCodec
)
else:
error "Error parsing conf.filternode", error = filterPeer.error
if conf.rpc:
let ta = initTAddress(conf.rpcAddress,
Port(conf.rpcPort + conf.portsShift))
var rpcServer = newRpcHttpServer([ta])
# Waku v2 rpc
startV2Rpc(bridge.nodev2, rpcServer, conf)
rpcServer.start()
if conf.metricsServer:
let
address = conf.metricsServerAddress


@ -1,133 +1,119 @@
import
confutils, confutils/defs, confutils/std/net, chronicles, chronos,
confutils,
confutils/defs,
confutils/std/net,
chronicles,
chronos,
libp2p/crypto/[crypto, secp],
eth/keys
type
Chat2MatterbridgeConf* = object
logLevel* {.
desc: "Sets the log level"
defaultValue: LogLevel.INFO
name: "log-level" .}: LogLevel
type Chat2MatterbridgeConf* = object
logLevel* {.
desc: "Sets the log level", defaultValue: LogLevel.INFO, name: "log-level"
.}: LogLevel
listenAddress* {.
defaultValue: defaultListenAddress(config)
desc: "Listening address for the LibP2P traffic"
name: "listen-address"}: ValidIpAddress
listenAddress* {.
defaultValue: defaultListenAddress(config),
desc: "Listening address for the LibP2P traffic",
name: "listen-address"
.}: IpAddress
libp2pTcpPort* {.
desc: "Libp2p TCP listening port (for Waku v2)"
defaultValue: 9000
name: "libp2p-tcp-port" .}: uint16
libp2pTcpPort* {.
desc: "Libp2p TCP listening port (for Waku v2)",
defaultValue: 9000,
name: "libp2p-tcp-port"
.}: uint16
udpPort* {.
desc: "UDP listening port"
defaultValue: 9000
name: "udp-port" .}: uint16
udpPort* {.desc: "UDP listening port", defaultValue: 9000, name: "udp-port".}: uint16
portsShift* {.
desc: "Add a shift to all default port numbers"
defaultValue: 0
name: "ports-shift" .}: uint16
portsShift* {.
desc: "Add a shift to all default port numbers",
defaultValue: 0,
name: "ports-shift"
.}: uint16
nat* {.
desc: "Specify method to use for determining public address. " &
"Must be one of: any, none, upnp, pmp, extip:<IP>"
defaultValue: "any" .}: string
nat* {.
desc:
"Specify method to use for determining public address. " &
"Must be one of: any, none, upnp, pmp, extip:<IP>",
defaultValue: "any"
.}: string
rpc* {.
desc: "Enable Waku RPC server"
defaultValue: false
name: "rpc" .}: bool
metricsServer* {.
desc: "Enable the metrics server", defaultValue: false, name: "metrics-server"
.}: bool
rpcAddress* {.
desc: "Listening address of the RPC server",
defaultValue: ValidIpAddress.init("127.0.0.1")
name: "rpc-address" }: ValidIpAddress
metricsServerAddress* {.
desc: "Listening address of the metrics server",
defaultValue: parseIpAddress("127.0.0.1"),
name: "metrics-server-address"
.}: IpAddress
rpcPort* {.
desc: "Listening port of the RPC server"
defaultValue: 8545
name: "rpc-port" .}: uint16
metricsServerPort* {.
desc: "Listening HTTP port of the metrics server",
defaultValue: 8008,
name: "metrics-server-port"
.}: uint16
metricsServer* {.
desc: "Enable the metrics server"
defaultValue: false
name: "metrics-server" .}: bool
### Waku v2 options
staticnodes* {.
desc: "Multiaddr of peer to directly connect with. Argument may be repeated",
name: "staticnode"
.}: seq[string]
metricsServerAddress* {.
desc: "Listening address of the metrics server"
defaultValue: ValidIpAddress.init("127.0.0.1")
name: "metrics-server-address" }: ValidIpAddress
nodekey* {.
desc: "P2P node private key as hex",
defaultValue: crypto.PrivateKey.random(Secp256k1, newRng()[]).tryGet(),
name: "nodekey"
.}: crypto.PrivateKey
metricsServerPort* {.
desc: "Listening HTTP port of the metrics server"
defaultValue: 8008
name: "metrics-server-port" .}: uint16
store* {.
desc: "Flag whether to start store protocol", defaultValue: true, name: "store"
.}: bool
### Waku v2 options
staticnodes* {.
desc: "Multiaddr of peer to directly connect with. Argument may be repeated"
name: "staticnode" }: seq[string]
filter* {.
desc: "Flag whether to start filter protocol", defaultValue: false, name: "filter"
.}: bool
nodekey* {.
desc: "P2P node private key as hex"
defaultValue: crypto.PrivateKey.random(Secp256k1, newRng()[]).tryGet()
name: "nodekey" }: crypto.PrivateKey
relay* {.
desc: "Flag whether to start relay protocol", defaultValue: true, name: "relay"
.}: bool
topics* {.
desc: "Default topics to subscribe to (space separated list)"
defaultValue: "/waku/2/default-waku/proto"
name: "topics" .}: string
storenode* {.
desc: "Multiaddr of peer to connect with for waku store protocol",
defaultValue: "",
name: "storenode"
.}: string
store* {.
desc: "Flag whether to start store protocol",
defaultValue: true
name: "store" }: bool
filternode* {.
desc: "Multiaddr of peer to connect with for waku filter protocol",
defaultValue: "",
name: "filternode"
.}: string
filter* {.
desc: "Flag whether to start filter protocol",
defaultValue: false
name: "filter" }: bool
# Matterbridge options
mbHostAddress* {.
desc: "Listening address of the Matterbridge host",
defaultValue: parseIpAddress("127.0.0.1"),
name: "mb-host-address"
.}: IpAddress
relay* {.
desc: "Flag whether to start relay protocol",
defaultValue: true
name: "relay" }: bool
mbHostPort* {.
desc: "Listening port of the Matterbridge host",
defaultValue: 4242,
name: "mb-host-port"
.}: uint16
storenode* {.
desc: "Multiaddr of peer to connect with for waku store protocol"
defaultValue: ""
name: "storenode" }: string
mbGateway* {.
desc: "Matterbridge gateway", defaultValue: "gateway1", name: "mb-gateway"
.}: string
filternode* {.
desc: "Multiaddr of peer to connect with for waku filter protocol"
defaultValue: ""
name: "filternode" }: string
# Matterbridge options
mbHostAddress* {.
desc: "Listening address of the Matterbridge host",
defaultValue: ValidIpAddress.init("127.0.0.1")
name: "mb-host-address" }: ValidIpAddress
mbHostPort* {.
desc: "Listening port of the Matterbridge host",
defaultValue: 4242
name: "mb-host-port" }: uint16
mbGateway* {.
desc: "Matterbridge gateway"
defaultValue: "gateway1"
name: "mb-gateway" }: string
## Chat2 options
contentTopic* {.
desc: "Content topic to bridge chat messages to."
defaultValue: "/toy-chat/2/huilong/proto"
name: "content-topic" }: string
## Chat2 options
contentTopic* {.
desc: "Content topic to bridge chat messages to.",
defaultValue: "/toy-chat/2/huilong/proto",
name: "content-topic"
.}: string
proc parseCmdArg*(T: type keys.KeyPair, p: string): T =
try:
@ -140,23 +126,21 @@ proc completeCmdArg*(T: type keys.KeyPair, val: string): seq[string] =
return @[]
proc parseCmdArg*(T: type crypto.PrivateKey, p: string): T =
let key = SkPrivateKey.init(p)
if key.isOk():
crypto.PrivateKey(scheme: Secp256k1, skkey: key.get())
else:
let key = SkPrivateKey.init(p).valueOr:
raise newException(ValueError, "Invalid private key")
return crypto.PrivateKey(scheme: Secp256k1, skkey: key)
proc completeCmdArg*(T: type crypto.PrivateKey, val: string): seq[string] =
return @[]
proc parseCmdArg*(T: type ValidIpAddress, p: string): T =
proc parseCmdArg*(T: type IpAddress, p: string): T =
try:
result = ValidIpAddress.init(p)
result = parseIpAddress(p)
except CatchableError:
raise newException(ValueError, "Invalid IP address")
proc completeCmdArg*(T: type ValidIpAddress, val: string): seq[string] =
proc completeCmdArg*(T: type IpAddress, val: string): seq[string] =
return @[]
func defaultListenAddress*(conf: Chat2MatterbridgeConf): ValidIpAddress =
(static ValidIpAddress.init("0.0.0.0"))
func defaultListenAddress*(conf: Chat2MatterbridgeConf): IpAddress =
(parseIpAddress("0.0.0.0"))


@ -1,3 +1,4 @@
-d:chronicles_line_numbers
-d:chronicles_runtime_filtering:on
-d:discv5_protocol_id:d5waku
path = "../.."

apps/chat2mix/chat2mix.nim Normal file

@ -0,0 +1,663 @@
## chat2 is an example of usage of Waku v2. For suggested usage options, please
## see dingpu tutorial in docs folder.
when not (compileOption("threads")):
{.fatal: "Please, compile this program with the --threads:on option!".}
{.push raises: [].}
import std/[strformat, strutils, times, options, random, sequtils]
import
confutils,
chronicles,
chronos,
eth/keys,
bearssl,
results,
stew/[byteutils],
metrics,
metrics/chronos_httpserver
import
libp2p/[
switch, # manage transports, a single entry point for dialing and listening
crypto/crypto, # cryptographic functions
stream/connection, # create and close stream read / write connections
multiaddress,
# encode different addressing schemes. For example, /ip4/7.7.7.7/tcp/6543 means it is using IPv4 protocol and TCP
peerinfo,
# manage the information of a peer, such as peer ID and public / private key
peerid, # Implement how peers interact
protobuf/minprotobuf, # message serialisation/deserialisation from and to protobufs
nameresolving/dnsresolver,
protocols/mix/curve25519,
] # define DNS resolution
import
waku/[
waku_core,
waku_lightpush/common,
waku_lightpush/rpc,
waku_enr,
discovery/waku_dnsdisc,
waku_node,
node/waku_metrics,
node/peer_manager,
factory/builder,
common/utils/nat,
waku_store/common,
waku_filter_v2/client,
common/logging,
],
./config_chat2mix
import libp2p/protocols/pubsub/rpc/messages, libp2p/protocols/pubsub/pubsub
import ../../waku/waku_rln_relay
logScope:
topics = "chat2 mix"
const Help =
"""
Commands: /[?|help|connect|nick|exit]
help: Prints this help
connect: dials a remote peer
nick: change nickname for current chat session
exit: exits chat session
"""
# XXX Connected is a bit annoying, because incoming connections don't trigger state change
# Could poll connection pool or something here, I suppose
# TODO Ensure connected turns true on incoming connections, or get rid of it
type Chat = ref object
node: WakuNode # waku node for publishing, subscribing, etc
transp: StreamTransport # transport streams between read & write file descriptor
subscribed: bool # indicates if a node is subscribed or not to a topic
connected: bool # if the node is connected to another peer
started: bool # if the node has started
nick: string # nickname for this chat session
prompt: bool # chat prompt is showing
contentTopic: string # default content topic for chat messages
conf: Chat2Conf # configuration for chat2
type
PrivateKey* = crypto.PrivateKey
Topic* = waku_core.PubsubTopic
const MinMixNodePoolSize = 4
#####################
## chat2 protobufs ##
#####################
type
SelectResult*[T] = Result[T, string]
Chat2Message* = object
timestamp*: int64
nick*: string
payload*: seq[byte]
proc getPubsubTopic*(
conf: Chat2Conf, node: WakuNode, contentTopic: string
): PubsubTopic =
let shard = node.wakuAutoSharding.get().getShard(contentTopic).valueOr:
echo "Could not parse content topic: " & error
return "" #TODO: fix this.
return $RelayShard(clusterId: conf.clusterId, shardId: shard.shardId)
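# For illustration (values assumed, not part of this change): with clusterId = 1 and
# autosharding enabled, a content topic such as "/toy-chat-mix/2/huilong/proto" resolves
# to a pubsub topic of the form "/waku/2/rs/1/<shardId>", where <shardId> is derived
# from the content topic.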
proc init*(T: type Chat2Message, buffer: seq[byte]): ProtoResult[T] =
var msg = Chat2Message()
let pb = initProtoBuffer(buffer)
var timestamp: uint64
discard ?pb.getField(1, timestamp)
msg.timestamp = int64(timestamp)
discard ?pb.getField(2, msg.nick)
discard ?pb.getField(3, msg.payload)
ok(msg)
proc encode*(message: Chat2Message): ProtoBuffer =
var serialised = initProtoBuffer()
serialised.write(1, uint64(message.timestamp))
serialised.write(2, message.nick)
serialised.write(3, message.payload)
return serialised
proc `$`*(message: Chat2Message): string =
# Get message date and timestamp in local time
let time = message.timestamp.fromUnix().local().format("'<'MMM' 'dd,' 'HH:mm'>'")
return time & " " & message.nick & ": " & string.fromBytes(message.payload)
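# Minimal round-trip sketch for the protobuf helpers above (illustrative values only):
#   let original = Chat2Message(
#     timestamp: getTime().toUnix(), nick: "alice", payload: "hello".toBytes()
#   )
#   let decoded = Chat2Message.init(original.encode().buffer).expect("valid protobuf")
#   doAssert decoded.nick == "alice" and decoded.payload == "hello".toBytes()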
#####################
proc connectToNodes(c: Chat, nodes: seq[string]) {.async.} =
echo "Connecting to nodes"
await c.node.connectToNodes(nodes)
c.connected = true
proc showChatPrompt(c: Chat) =
if not c.prompt:
try:
stdout.write(">> ")
stdout.flushFile()
c.prompt = true
except IOError:
discard
proc getChatLine(payload: seq[byte]): string =
# No payload encoding/encryption from Waku
let pb = Chat2Message.init(payload).valueOr:
return string.fromBytes(payload)
return $pb
proc printReceivedMessage(c: Chat, msg: WakuMessage) =
let chatLine = getChatLine(msg.payload)
try:
echo &"{chatLine}"
except ValueError:
# Formatting fail. Print chat line in any case.
echo chatLine
c.prompt = false
showChatPrompt(c)
trace "Printing message", chatLine, contentTopic = msg.contentTopic
proc readNick(transp: StreamTransport): Future[string] {.async.} =
# Chat prompt
stdout.write("Choose a nickname >> ")
stdout.flushFile()
return await transp.readLine()
proc startMetricsServer(
serverIp: IpAddress, serverPort: Port
): Result[MetricsHttpServerRef, string] =
info "Starting metrics HTTP server", serverIp = $serverIp, serverPort = $serverPort
let server = MetricsHttpServerRef.new($serverIp, serverPort).valueOr:
return err("metrics HTTP server start failed: " & $error)
try:
waitFor server.start()
except CatchableError:
return err("metrics HTTP server start failed: " & getCurrentExceptionMsg())
info "Metrics HTTP server started", serverIp = $serverIp, serverPort = $serverPort
ok(server)
proc publish(c: Chat, line: string) {.async.} =
# First create a Chat2Message protobuf with this line of text
let time = getTime().toUnix()
let chat2pb =
Chat2Message(timestamp: time, nick: c.nick, payload: line.toBytes()).encode()
## @TODO: error handling on failure
proc handler(response: LightPushResponse) {.gcsafe, closure.} =
trace "lightpush response received", response = response
var message = WakuMessage(
payload: chat2pb.buffer,
contentTopic: c.contentTopic,
version: 0,
timestamp: getNanosecondTime(time),
)
try:
if not c.node.wakuLightpushClient.isNil():
# Attempt lightpush with mix
(
waitFor c.node.lightpushPublish(
some(c.conf.getPubsubTopic(c.node, c.contentTopic)),
message,
none(RemotePeerInfo),
true,
)
).isOkOr:
error "failed to publish lightpush message", error = error
else:
error "failed to publish message as lightpush client is not initialized"
except CatchableError:
error "caught error publishing message: ", error = getCurrentExceptionMsg()
# TODO This should read or be subscribe handler subscribe
proc readAndPrint(c: Chat) {.async.} =
while true:
# while p.connected:
# # TODO: echo &"{p.id} -> "
#
# echo cast[string](await p.conn.readLp(1024))
#echo "readAndPrint subscribe NYI"
await sleepAsync(100)
# TODO Implement
proc writeAndPrint(c: Chat) {.async.} =
while true:
# Connect state not updated on incoming WakuRelay connections
# if not c.connected:
# echo "type an address or wait for a connection:"
# echo "type /[help|?] for help"
# Chat prompt
showChatPrompt(c)
let line = await c.transp.readLine()
if line.startsWith("/help") or line.startsWith("/?") or not c.started:
echo Help
continue
# if line.startsWith("/disconnect"):
# echo "Ending current session"
# if p.connected and p.conn.closed.not:
# await p.conn.close()
# p.connected = false
elif line.startsWith("/connect"):
# TODO Should be able to connect to multiple peers for Waku chat
if c.connected:
echo "already connected to at least one peer"
continue
echo "enter address of remote peer"
let address = await c.transp.readLine()
if address.len > 0:
await c.connectToNodes(@[address])
elif line.startsWith("/nick"):
# Set a new nickname
c.nick = await readNick(c.transp)
echo "You are now known as " & c.nick
elif line.startsWith("/exit"):
echo "quitting..."
try:
await c.node.stop()
except:
echo "exception happened when stopping: " & getCurrentExceptionMsg()
quit(QuitSuccess)
else:
# XXX connected state problematic
if c.started:
echo "publishing message: " & line
await c.publish(line)
# TODO Connect to peer logic?
else:
try:
if line.startsWith("/") and "p2p" in line:
await c.connectToNodes(@[line])
except:
echo &"unable to dial remote peer {line}"
echo getCurrentExceptionMsg()
proc readWriteLoop(c: Chat) {.async.} =
asyncSpawn c.writeAndPrint() # execute the async function but does not block
asyncSpawn c.readAndPrint()
proc readInput(wfd: AsyncFD) {.thread, raises: [Defect, CatchableError].} =
## This procedure performs reading from `stdin` and sends data over
## pipe to main thread.
let transp = fromPipe(wfd)
while true:
let line = stdin.readLine()
discard waitFor transp.write(line & "\r\n")
var alreadyUsedServicePeers {.threadvar.}: seq[RemotePeerInfo]
proc selectRandomServicePeer*(
pm: PeerManager, actualPeer: Option[RemotePeerInfo], codec: string
): Result[RemotePeerInfo, void] =
if actualPeer.isSome():
alreadyUsedServicePeers.add(actualPeer.get())
let supportivePeers = pm.switch.peerStore.getPeersByProtocol(codec).filterIt(
it notin alreadyUsedServicePeers
)
if supportivePeers.len == 0:
return err()
let rndPeerIndex = rand(0 .. supportivePeers.len - 1)
return ok(supportivePeers[rndPeerIndex])
proc maintainSubscription(
wakuNode: WakuNode,
filterPubsubTopic: PubsubTopic,
filterContentTopic: ContentTopic,
filterPeer: RemotePeerInfo,
preventPeerSwitch: bool,
) {.async.} =
var actualFilterPeer = filterPeer
const maxFailedSubscribes = 3
const maxFailedServiceNodeSwitches = 10
var noFailedSubscribes = 0
var noFailedServiceNodeSwitches = 0
# Use chronos.Duration explicitly to avoid mismatch with std/times.Duration
let RetryWait = chronos.seconds(2) # Quick retry interval
let SubscriptionMaintenance = chronos.seconds(30) # Subscription maintenance interval
while true:
info "maintaining subscription at", peer = constructMultiaddrStr(actualFilterPeer)
# First use filter-ping to check if we have an active subscription
let pingErr = (await wakuNode.wakuFilterClient.ping(actualFilterPeer)).errorOr:
await sleepAsync(SubscriptionMaintenance)
info "subscription is live."
continue
# No subscription found. Let's subscribe.
error "ping failed.", error = pingErr
trace "no subscription found. Sending subscribe request"
let subscribeErr = (
await wakuNode.filterSubscribe(
some(filterPubsubTopic), filterContentTopic, actualFilterPeer
)
).errorOr:
await sleepAsync(SubscriptionMaintenance)
if noFailedSubscribes > 0:
noFailedSubscribes -= 1
notice "subscribe request successful."
continue
noFailedSubscribes += 1
error "Subscribe request failed.",
error = subscribeErr, peer = actualFilterPeer, failCount = noFailedSubscribes
# TODO: disconnect from failed actualFilterPeer
# asyncSpawn(wakuNode.peerManager.switch.disconnect(p))
# wakunode.peerManager.peerStore.delete(actualFilterPeer)
if noFailedSubscribes < maxFailedSubscribes:
await sleepAsync(RetryWait) # Wait a bit before retrying
elif not preventPeerSwitch:
# try again with new peer without delay
let actualFilterPeer = selectRandomServicePeer(
wakuNode.peerManager, some(actualFilterPeer), WakuFilterSubscribeCodec
).valueOr:
error "Failed to find new service peer. Exiting."
noFailedServiceNodeSwitches += 1
break
info "Found new peer for codec",
codec = filterPubsubTopic, peer = constructMultiaddrStr(actualFilterPeer)
noFailedSubscribes = 0
else:
await sleepAsync(SubscriptionMaintenance)
{.pop.}
# @TODO confutils.nim(775, 17) Error: can raise an unlisted exception: ref IOError
proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
let
transp = fromPipe(rfd)
conf = Chat2Conf.load()
nodekey =
if conf.nodekey.isSome():
conf.nodekey.get()
else:
PrivateKey.random(Secp256k1, rng[]).tryGet()
# set log level
if conf.logLevel != LogLevel.NONE:
setLogLevel(conf.logLevel)
let (extIp, extTcpPort, extUdpPort) = setupNat(
conf.nat,
clientId,
Port(uint16(conf.tcpPort) + conf.portsShift),
Port(uint16(conf.udpPort) + conf.portsShift),
).valueOr:
raise newException(ValueError, "setupNat error " & error)
var enrBuilder = EnrBuilder.init(nodeKey)
enrBuilder.withWakuRelaySharding(
RelayShards(clusterId: conf.clusterId, shardIds: conf.shards)
).isOkOr:
error "failed to add sharded topics to ENR", error = error
quit(QuitFailure)
let record = enrBuilder.build().valueOr:
error "failed to create enr record", error = error
quit(QuitFailure)
let node = block:
var builder = WakuNodeBuilder.init()
builder.withNodeKey(nodeKey)
builder.withRecord(record)
builder
.withNetworkConfigurationDetails(
conf.listenAddress,
Port(uint16(conf.tcpPort) + conf.portsShift),
extIp,
extTcpPort,
wsBindPort = Port(uint16(conf.websocketPort) + conf.portsShift),
wsEnabled = conf.websocketSupport,
wssEnabled = conf.websocketSecureSupport,
)
.tryGet()
builder.build().tryGet()
node.mountAutoSharding(conf.clusterId, conf.numShardsInNetwork).isOkOr:
error "failed to mount waku sharding: ", error = error
quit(QuitFailure)
node.mountMetadata(conf.clusterId, conf.shards).isOkOr:
error "failed to mount waku metadata protocol: ", err = error
quit(QuitFailure)
let (mixPrivKey, mixPubKey) = generateKeyPair().valueOr:
error "failed to generate mix key pair", error = error
return
(await node.mountMix(conf.clusterId, mixPrivKey, conf.mixnodes)).isOkOr:
error "failed to mount waku mix protocol: ", error = $error
quit(QuitFailure)
await node.mountRendezvousClient(conf.clusterId)
await node.start()
node.peerManager.start()
await node.mountLibp2pPing()
await node.mountPeerExchangeClient()
let pubsubTopic = conf.getPubsubTopic(node, conf.contentTopic)
echo "pubsub topic is: " & pubsubTopic
let nick = await readNick(transp)
echo "Welcome, " & nick & "!"
var chat = Chat(
node: node,
transp: transp,
subscribed: true,
connected: false,
started: true,
nick: nick,
prompt: false,
contentTopic: conf.contentTopic,
conf: conf,
)
var dnsDiscoveryUrl = none(string)
if conf.fleet != Fleet.none:
# Use DNS discovery to connect to selected fleet
echo "Connecting to " & $conf.fleet & " fleet using DNS discovery..."
if conf.fleet == Fleet.test:
dnsDiscoveryUrl = some(
"enrtree://AOGYWMBYOUIMOENHXCHILPKY3ZRFEULMFI4DOM442QSZ73TT2A7VI@test.waku.nodes.status.im"
)
else:
# Connect to sandbox by default
dnsDiscoveryUrl = some(
"enrtree://AIRVQ5DDA4FFWLRBCHJWUWOO6X6S4ZTZ5B667LQ6AJU6PEYDLRD5O@sandbox.waku.nodes.status.im"
)
elif conf.dnsDiscoveryUrl != "":
# No pre-selected fleet. Discover nodes via DNS using user config
info "Discovering nodes using Waku DNS discovery", url = conf.dnsDiscoveryUrl
dnsDiscoveryUrl = some(conf.dnsDiscoveryUrl)
var discoveredNodes: seq[RemotePeerInfo]
if dnsDiscoveryUrl.isSome:
var nameServers: seq[TransportAddress]
for ip in conf.dnsDiscoveryNameServers:
nameServers.add(initTAddress(ip, Port(53))) # Assume all servers use port 53
let dnsResolver = DnsResolver.new(nameServers)
proc resolver(domain: string): Future[string] {.async, gcsafe.} =
trace "resolving", domain = domain
let resolved = await dnsResolver.resolveTxt(domain)
return resolved[0] # Use only first answer
let wakuDnsDiscovery = WakuDnsDiscovery.init(dnsDiscoveryUrl.get(), resolver)
if wakuDnsDiscovery.isOk:
let discoveredPeers = await wakuDnsDiscovery.get().findPeers()
if discoveredPeers.isOk:
info "Connecting to discovered peers"
discoveredNodes = discoveredPeers.get()
echo "Discovered and connecting to " & $discoveredNodes
waitFor chat.node.connectToNodes(discoveredNodes)
else:
warn "Failed to find peers via DNS discovery", error = discoveredPeers.error
else:
warn "Failed to init Waku DNS discovery", error = wakuDnsDiscovery.error
let peerInfo = node.switch.peerInfo
let listenStr = $peerInfo.addrs[0] & "/p2p/" & $peerInfo.peerId
echo &"Listening on\n {listenStr}"
if (conf.storenode != "") or (conf.store == true):
await node.mountStore()
var storenode: Option[RemotePeerInfo]
if conf.storenode != "":
let peerInfo = parsePeerInfo(conf.storenode)
if peerInfo.isOk():
storenode = some(peerInfo.value)
else:
error "Incorrect conf.storenode", error = peerInfo.error
elif discoveredNodes.len > 0:
echo "Store enabled, but no store nodes configured. Choosing one at random from discovered peers"
storenode = some(discoveredNodes[rand(0 .. len(discoveredNodes) - 1)])
if storenode.isSome():
# We have a viable storenode. Let's query it for historical messages.
echo "Connecting to storenode: " & $(storenode.get())
node.mountStoreClient()
node.peerManager.addServicePeer(storenode.get(), WakuStoreCodec)
proc storeHandler(response: StoreQueryResponse) {.gcsafe.} =
for msg in response.messages:
let payload =
if msg.message.isSome():
msg.message.get().payload
else:
newSeq[byte](0)
let chatLine = getChatLine(payload)
echo &"{chatLine}"
info "Hit store handler"
let queryRes = await node.query(
StoreQueryRequest(contentTopics: @[chat.contentTopic]), storenode.get()
)
if queryRes.isOk():
storeHandler(queryRes.value)
if conf.edgemode: #Mount light protocol clients
node.mountLightPushClient()
await node.mountFilterClient()
let filterHandler = proc(
pubsubTopic: PubsubTopic, msg: WakuMessage
): Future[void] {.async, closure.} =
trace "Hit filter handler", contentTopic = msg.contentTopic
chat.printReceivedMessage(msg)
node.wakuFilterClient.registerPushHandler(filterHandler)
var servicePeerInfo: RemotePeerInfo
if conf.serviceNode != "":
servicePeerInfo = parsePeerInfo(conf.serviceNode).valueOr:
error "Couldn't parse conf.serviceNode", error = error
RemotePeerInfo()
if servicePeerInfo == nil or $servicePeerInfo.peerId == "":
# Assuming that service node supports all services
servicePeerInfo = selectRandomServicePeer(
node.peerManager, none(RemotePeerInfo), WakuLightpushCodec
).valueOr:
error "Couldn't find any service peer"
quit(QuitFailure)
node.peerManager.addServicePeer(servicePeerInfo, WakuLightpushCodec)
node.peerManager.addServicePeer(servicePeerInfo, WakuPeerExchangeCodec)
#node.peerManager.addServicePeer(servicePeerInfo, WakuRendezVousCodec)
# Start maintaining subscription
asyncSpawn maintainSubscription(
node, pubsubTopic, conf.contentTopic, servicePeerInfo, false
)
echo "waiting for mix nodes to be discovered..."
while true:
if node.getMixNodePoolSize() >= MinMixNodePoolSize:
break
discard await node.fetchPeerExchangePeers()
await sleepAsync(1000)
while node.getMixNodePoolSize() < MinMixNodePoolSize:
info "waiting for mix nodes to be discovered",
currentpoolSize = node.getMixNodePoolSize()
await sleepAsync(1000)
notice "ready to publish with mix node pool size ",
currentpoolSize = node.getMixNodePoolSize()
echo "ready to publish messages now"
# Once min mixnodes are discovered loop as per default setting
node.startPeerExchangeLoop()
if conf.metricsLogging:
startMetricsLog()
if conf.metricsServer:
let metricsServer = startMetricsServer(
conf.metricsServerAddress, Port(conf.metricsServerPort + conf.portsShift)
)
await chat.readWriteLoop()
runForever()
proc main(rng: ref HmacDrbgContext) {.async.} =
let (rfd, wfd) = createAsyncPipe()
if rfd == asyncInvalidPipe or wfd == asyncInvalidPipe:
raise newException(ValueError, "Could not initialize pipe!")
var thread: Thread[AsyncFD]
thread.createThread(readInput, wfd)
try:
await processInput(rfd, rng)
# Handle only ConfigurationError for now
# TODO: Throw other errors from the mounting procedure
except ConfigurationError as e:
raise e
when isMainModule: # isMainModule = true when the module is compiled as the main file
let rng = crypto.newRng()
try:
waitFor(main(rng))
except CatchableError as e:
raise e
## Dump of things that can be improved:
##
## - Incoming dialed peer does not change connected state (not relying on it for now)
## - Unclear if staticnode argument works (can enter manually)
## - Don't trigger self / double publish own messages
## - Test/default to cluster node connection (diff protocol version)
## - Redirect logs to separate file
## - Expose basic publish/subscribe etc commands with /syntax
## - Show part of peerid to know who sent message
## - Deal with protobuf messages (e.g. other chat protocol, or encrypted)


@ -0,0 +1,315 @@
import chronicles, chronos, std/strutils, regex
import
eth/keys,
libp2p/crypto/crypto,
libp2p/crypto/secp,
libp2p/crypto/curve25519,
libp2p/multiaddress,
libp2p/multicodec,
nimcrypto/utils,
confutils,
confutils/defs,
confutils/std/net
import waku/waku_core, waku/waku_mix
type
Fleet* = enum
none
sandbox
test
EthRpcUrl* = distinct string
Chat2Conf* = object ## General node config
edgemode* {.
defaultValue: true, desc: "Run the app in edge mode", name: "edge-mode"
.}: bool
logLevel* {.
desc: "Sets the log level.", defaultValue: LogLevel.INFO, name: "log-level"
.}: LogLevel
nodekey* {.desc: "P2P node private key as 64 char hex string.", name: "nodekey".}:
Option[crypto.PrivateKey]
listenAddress* {.
defaultValue: defaultListenAddress(config),
desc: "Listening address for the LibP2P traffic.",
name: "listen-address"
.}: IpAddress
tcpPort* {.desc: "TCP listening port.", defaultValue: 60000, name: "tcp-port".}:
Port
udpPort* {.desc: "UDP listening port.", defaultValue: 60000, name: "udp-port".}:
Port
portsShift* {.
desc: "Add a shift to all port numbers.", defaultValue: 0, name: "ports-shift"
.}: uint16
nat* {.
desc:
"Specify method to use for determining public address. " &
"Must be one of: any, none, upnp, pmp, extip:<IP>.",
defaultValue: "any"
.}: string
## Persistence config
dbPath* {.
desc: "The database path for peristent storage", defaultValue: "", name: "db-path"
.}: string
persistPeers* {.
desc: "Enable peer persistence: true|false",
defaultValue: false,
name: "persist-peers"
.}: bool
persistMessages* {.
desc: "Enable message persistence: true|false",
defaultValue: false,
name: "persist-messages"
.}: bool
## Relay config
relay* {.
desc: "Enable relay protocol: true|false", defaultValue: true, name: "relay"
.}: bool
staticnodes* {.
desc: "Peer multiaddr to directly connect with. Argument may be repeated.",
name: "staticnode",
defaultValue: @[]
.}: seq[string]
mixnodes* {.
desc:
"Multiaddress and mix-key of mix node to be statically specified in format multiaddr:mixPubKey. Argument may be repeated.",
name: "mixnode"
.}: seq[MixNodePubInfo]
keepAlive* {.
desc: "Enable keep-alive for idle connections: true|false",
defaultValue: false,
name: "keep-alive"
.}: bool
clusterId* {.
desc:
"Cluster id that the node is running in. Node in a different cluster id is disconnected.",
defaultValue: 1,
name: "cluster-id"
.}: uint16
numShardsInNetwork* {.
desc: "Number of shards in the network",
defaultValue: 8,
name: "num-shards-in-network"
.}: uint32
shards* {.
desc:
"Shards index to subscribe to [0..NUM_SHARDS_IN_NETWORK-1]. Argument may be repeated.",
defaultValue:
@[
uint16(0),
uint16(1),
uint16(2),
uint16(3),
uint16(4),
uint16(5),
uint16(6),
uint16(7),
],
name: "shard"
.}: seq[uint16]
## Store config
store* {.
desc: "Enable store protocol: true|false", defaultValue: false, name: "store"
.}: bool
storenode* {.
desc: "Peer multiaddr to query for storage.", defaultValue: "", name: "storenode"
.}: string
## Filter config
filter* {.
desc: "Enable filter protocol: true|false", defaultValue: false, name: "filter"
.}: bool
## Lightpush config
lightpush* {.
desc: "Enable lightpush protocol: true|false",
defaultValue: false,
name: "lightpush"
.}: bool
servicenode* {.
desc: "Peer multiaddr to request lightpush and filter services",
defaultValue: "",
name: "servicenode"
.}: string
## Metrics config
metricsServer* {.
desc: "Enable the metrics server: true|false",
defaultValue: false,
name: "metrics-server"
.}: bool
metricsServerAddress* {.
desc: "Listening address of the metrics server.",
defaultValue: parseIpAddress("127.0.0.1"),
name: "metrics-server-address"
.}: IpAddress
metricsServerPort* {.
desc: "Listening HTTP port of the metrics server.",
defaultValue: 8008,
name: "metrics-server-port"
.}: uint16
metricsLogging* {.
desc: "Enable metrics logging: true|false",
defaultValue: true,
name: "metrics-logging"
.}: bool
## DNS discovery config
dnsDiscovery* {.
desc:
"Deprecated, please set dns-discovery-url instead. Enable discovering nodes via DNS",
defaultValue: false,
name: "dns-discovery"
.}: bool
dnsDiscoveryUrl* {.
desc: "URL for DNS node list in format 'enrtree://<key>@<fqdn>'",
defaultValue: "",
name: "dns-discovery-url"
.}: string
dnsDiscoveryNameServers* {.
desc: "DNS name server IPs to query. Argument may be repeated.",
defaultValue: @[parseIpAddress("1.1.1.1"), parseIpAddress("1.0.0.1")],
name: "dns-discovery-name-server"
.}: seq[IpAddress]
## Chat2 configuration
fleet* {.
desc:
"Select the fleet to connect to. This sets the DNS discovery URL to the selected fleet.",
defaultValue: Fleet.test,
name: "fleet"
.}: Fleet
contentTopic* {.
desc: "Content topic for chat messages.",
defaultValue: "/toy-chat-mix/2/huilong/proto",
name: "content-topic"
.}: string
## Websocket Configuration
websocketSupport* {.
desc: "Enable websocket: true|false",
defaultValue: false,
name: "websocket-support"
.}: bool
websocketPort* {.
desc: "WebSocket listening port.", defaultValue: 8000, name: "websocket-port"
.}: Port
websocketSecureSupport* {.
desc: "WebSocket Secure Support.",
defaultValue: false,
name: "websocket-secure-support"
.}: bool ## rln-relay configuration
proc parseCmdArg*(T: type MixNodePubInfo, p: string): T =
let elements = p.split(":")
if elements.len != 2:
raise newException(
ValueError, "Invalid format for mix node expected multiaddr:mixPublicKey"
)
let multiaddr = MultiAddress.init(elements[0]).valueOr:
raise newException(ValueError, "Invalid multiaddress format")
if not multiaddr.contains(multiCodec("ip4")).get():
raise newException(
ValueError, "Invalid format for ip address, expected a ipv4 multiaddress"
)
return MixNodePubInfo(
multiaddr: elements[0], pubKey: intoCurve25519Key(ncrutils.fromHex(elements[1]))
)
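# Hypothetical --mixnode value matching the "multiaddr:mixPublicKey" format parsed
# above (address and key are placeholders, not real nodes):
#   --mixnode="/ip4/192.0.2.10/tcp/60000:<64-hex-char Curve25519 mix public key>"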
# NOTE: Keys are different in nim-libp2p
proc parseCmdArg*(T: type crypto.PrivateKey, p: string): T =
try:
let key = SkPrivateKey.init(utils.fromHex(p)).tryGet()
# XXX: Here at the moment
result = crypto.PrivateKey(scheme: Secp256k1, skkey: key)
except CatchableError as e:
raise newException(ValueError, "Invalid private key")
proc completeCmdArg*(T: type crypto.PrivateKey, val: string): seq[string] =
return @[]
proc parseCmdArg*(T: type IpAddress, p: string): T =
try:
result = parseIpAddress(p)
except CatchableError as e:
raise newException(ValueError, "Invalid IP address")
proc completeCmdArg*(T: type IpAddress, val: string): seq[string] =
return @[]
proc parseCmdArg*(T: type Port, p: string): T =
try:
result = Port(parseInt(p))
except CatchableError as e:
raise newException(ValueError, "Invalid Port number")
proc completeCmdArg*(T: type Port, val: string): seq[string] =
return @[]
proc parseCmdArg*(T: type Option[uint], p: string): T =
try:
some(parseUint(p))
except CatchableError:
raise newException(ValueError, "Invalid unsigned integer")
proc completeCmdArg*(T: type EthRpcUrl, val: string): seq[string] =
return @[]
proc parseCmdArg*(T: type EthRpcUrl, s: string): T =
## allowed patterns:
## http://url:port
## https://url:port
## http://url:port/path
## https://url:port/path
## http://url/with/path
## http://url:port/path?query
## https://url:port/path?query
## disallowed patterns:
## any valid/invalid ws or wss url
var httpPattern =
re2"^(https?):\/\/((localhost)|([\w_-]+(?:(?:\.[\w_-]+)+)))(:[0-9]{1,5})?([\w.,@?^=%&:\/~+#-]*[\w@?^=%&\/~+#-])*"
var wsPattern =
re2"^(wss?):\/\/((localhost)|([\w_-]+(?:(?:\.[\w_-]+)+)))(:[0-9]{1,5})?([\w.,@?^=%&:\/~+#-]*[\w@?^=%&\/~+#-])*"
if regex.match(s, wsPattern):
raise newException(
ValueError, "Websocket RPC URL is not supported, Please use an HTTP URL"
)
if not regex.match(s, httpPattern):
raise newException(ValueError, "Invalid HTTP RPC URL")
return EthRpcUrl(s)
func defaultListenAddress*(conf: Chat2Conf): IpAddress =
# TODO: How should we select between IPv4 and IPv6
# Maybe there should be a config option for this.
(static parseIpAddress("0.0.0.0"))

apps/chat2mix/nim.cfg Normal file

@ -0,0 +1,4 @@
-d:chronicles_line_numbers
-d:chronicles_runtime_filtering:on
-d:discv5_protocol_id:d5waku
path = "../.."


@ -0,0 +1,27 @@
START_PUBLISHING_AFTER_SECS=45
# optional delay in seconds before the SENDER starts publishing
NUM_MESSAGES=0
# 0 for infinite number of messages
MESSAGE_INTERVAL_MILLIS=8000
# ms delay between messages
MIN_MESSAGE_SIZE=15Kb
MAX_MESSAGE_SIZE=145Kb
## for wakusim
#SHARD=0
#CONTENT_TOPIC=/tester/2/light-pubsub-test/wakusim
#CLUSTER_ID=66
## for status.prod
#SHARDS=32
CONTENT_TOPIC=/tester/2/light-pubsub-test/fleet
CLUSTER_ID=16
## for TWN
#SHARD=4
#CONTENT_TOPIC=/tester/2/light-pubsub-test/twn
#CLUSTER_ID=1


@ -0,0 +1,33 @@
# TESTING IMAGE --------------------------------------------------------------
## NOTICE: This is a short cut build file for ubuntu users who compiles nwaku in ubuntu distro.
## This is used for faster turnaround time for testing the compiled binary.
## Prerequisites: compiled liteprotocoltester binary in build/ directory
FROM ubuntu:noble AS prod
LABEL maintainer="zoltan@status.im"
LABEL source="https://github.com/waku-org/nwaku"
LABEL description="Lite Protocol Tester: Waku light-client"
LABEL commit="unknown"
LABEL version="unknown"
# DevP2P, LibP2P, and JSON RPC ports
EXPOSE 30303 60000 8545
# Referenced in the binary
RUN apt-get update && apt-get install -y --no-install-recommends \
libgcc1 \
libpq-dev \
wget \
iproute2 \
&& rm -rf /var/lib/apt/lists/*
COPY build/liteprotocoltester /usr/bin/
COPY apps/liteprotocoltester/run_tester_node.sh /usr/bin/
COPY apps/liteprotocoltester/run_tester_node_on_fleet.sh /usr/bin/
ENTRYPOINT ["/usr/bin/run_tester_node.sh", "/usr/bin/liteprotocoltester"]
# # By default just show help if called without arguments
CMD ["--help"]


@ -0,0 +1,73 @@
# BUILD NIM APP ----------------------------------------------------------------
FROM rust:1.77.1-alpine3.18 AS nim-build
ARG NIMFLAGS
ARG MAKE_TARGET=liteprotocoltester
ARG NIM_COMMIT
ARG LOG_LEVEL=TRACE
# Get build tools and required header files
RUN apk add --no-cache bash git build-base openssl-dev linux-headers curl jq
WORKDIR /app
COPY . .
# workaround for alpine issue: https://github.com/alpinelinux/docker-alpine/issues/383
RUN apk update && apk upgrade
# Ran separately from 'make' to avoid re-doing
RUN git submodule update --init --recursive
# Slowest build step for the sake of caching layers
RUN make -j$(nproc) deps QUICK_AND_DIRTY_COMPILER=1 ${NIM_COMMIT}
# Build the final node binary
RUN make -j$(nproc) ${NIM_COMMIT} $MAKE_TARGET LOG_LEVEL=${LOG_LEVEL} NIMFLAGS="${NIMFLAGS}"
# REFERENCE IMAGE as BASE for specialized PRODUCTION IMAGES----------------------------------------
FROM alpine:3.18 AS base_lpt
ARG MAKE_TARGET=liteprotocoltester
LABEL maintainer="zoltan@status.im"
LABEL source="https://github.com/waku-org/nwaku"
LABEL description="Lite Protocol Tester: Waku light-client"
LABEL commit="unknown"
LABEL version="unknown"
# DevP2P, LibP2P, and JSON RPC ports
EXPOSE 30303 60000 8545
# Referenced in the binary
RUN apk add --no-cache libgcc libpq-dev \
wget \
iproute2 \
python3
COPY --from=nim-build /app/build/liteprotocoltester /usr/bin/
RUN chmod +x /usr/bin/liteprotocoltester
# Standalone image to be used manually and in lpt-runner -------------------------------------------
FROM base_lpt AS standalone_lpt
COPY --from=nim-build /app/apps/liteprotocoltester/run_tester_node.sh /usr/bin/
COPY --from=nim-build /app/apps/liteprotocoltester/run_tester_node_on_fleet.sh /usr/bin/
RUN chmod +x /usr/bin/run_tester_node.sh
ENTRYPOINT ["/usr/bin/run_tester_node.sh", "/usr/bin/liteprotocoltester"]
# Image for infra deployment -------------------------------------------
FROM base_lpt AS deployment_lpt
# let supervisor python script flush logs immediately
ENV PYTHONUNBUFFERED="1"
COPY --from=nim-build /app/apps/liteprotocoltester/run_tester_node_at_infra.sh /usr/bin/
COPY --from=nim-build /app/apps/liteprotocoltester/infra.env /usr/bin/
COPY --from=nim-build /app/apps/liteprotocoltester/lpt_supervisor.py /usr/bin/
RUN chmod +x /usr/bin/run_tester_node_at_infra.sh
RUN chmod +x /usr/bin/lpt_supervisor.py
ENTRYPOINT ["/usr/bin/lpt_supervisor.py"]

View File

@ -0,0 +1,329 @@
# Waku - Lite Protocol Tester
## Aim
Testing the reliability of light client protocols at different scales.
Measure message delivery reliability and latency between lightpush client node(s) and filter client node(s).
## Concept of testing
A tester node is configured as either 'publisher' or 'receiver' and connects to a selected service node.
All service protocols are disabled except the lightpush client or the filter client. This way we simulate
a light client application.
Each publisher pumps messages into the network in a preconfigured way (number of messages, frequency), while on the receiver side
we track and measure message losses, mis-ordered receives, late-arriving messages and latencies.
Ideally the tester nodes connect to different edges of the network, where we can gather more results from multiple publishers
and multiple receivers.
Publishers fill all message payloads with information about the test message and the sender, helping the receiver side to calculate results.
## Usage
### Using lpt-runner
For ease of use, you can clone the lpt-runner repository, which uses the previously pushed liteprotocoltester docker image.
This method is recommended for fleet testing.
```bash
git clone https://github.com/waku-org/lpt-runner.git
cd lpt-runner
# check README.md for more information
# edit .env file to your needs
docker compose up -d
# navigate to localhost:3033 to see the lite-protocol-tester dashboard
```
> See more detailed examples below.
### Integration with waku-simulator!
- For convenience, integration is done in cooperation with the waku-simulator repository, but nothing is tightly coupled.
- waku-simulator must be started separately with its own configuration.
- To run waku-simulator without RLN, a separate branch must currently be used.
- Once waku-simulator is configured and up and running, the lite-protocol-tester composite docker setup can be started.
```bash
# Start waku-simulator
git clone https://github.com/waku-org/waku-simulator.git ../waku-simulator
cd ../waku-simulator
git checkout chore-integrate-liteprotocoltester
# optionally edit .env file
docker compose -f docker-compose-norln.yml up -d
# navigate to localhost:30001 to see the waku-simulator dashboard
cd ../{your-repository}
make LOG_LEVEL=DEBUG liteprotocoltester
cd apps/liteprotocoltester
# optionally edit .env file
docker compose -f docker-compose-on-simularor.yml build
docker compose -f docker-compose-on-simularor.yml up -d
docker compose -f docker-compose-on-simularor.yml logs -f receivernode
```
#### Current setup
- waku-simulator is configured to run with 25 full nodes
- liteprotocoltester is configured to run with 3 publishers and 1 receiver
- liteprotocoltester is configured to run 1 lightpush service node and 1 filter service node
- light clients are connected accordingly
- publishers will send 250 messages every 200 ms, with sizes between 1 KiB and 120 KiB
- Notice there is a configurable wait before publishing starts, as the service nodes need some time to connect to full nodes from the simulator
- light clients will print a report on their own and the connected service node's connectivity to the network every 20 seconds.
#### Test monitoring
Navigate to http://localhost:3033 to see the lite-protocol-tester dashboard.
### Run independently on a chosen waku fleet
This option simply runs the built liteprotocoltester binary with the run_tester_node.sh script.
Syntax:
`./run_tester_node.sh <path-to-liteprotocoltester-binary> <SENDER|RECEIVER> <service-node-address>`
How to run from your nwaku repository:
```bash
cd ../{your-repository}
make LOG_LEVEL=DEBUG liteprotocoltester
cd apps/liteprotocoltester
# optionally edit .env file
# run publisher side
./run_tester_node.sh ../../build/liteprotocoltester SENDER [chosen service node address that support lightpush]
# or run receiver side
./run_tester_node.sh ../../build/liteprotocoltester RECEIVER [chosen service node address that support filter service]
```
#### Recommendations
In order to run on any kind of network, it is recommended to deploy the built `liteprotocoltester` binary together with the `.env` file and the `run_tester_node.sh` script to the desired machine.
Select a lightpush service node and a filter service node from the targeted network, or run your own. Note down the selected peers' peer_id.
Run one liteprotocoltester in the SENDER role and one in the RECEIVER role in different terminals. Depending on the aim of the test, you may want to redirect the output to a file.
> The RECEIVER side will periodically print statistics to standard output.
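A minimal deployment sketch is shown below; the host `user@test-host`, the target directory and the service peer addresses are placeholders, adjust them to your environment:

```bash
# create a working directory on the target machine (placeholder host)
ssh user@test-host "mkdir -p ~/lpt"

# copy the prebuilt binary, the env file and the runner script
scp build/liteprotocoltester \
    apps/liteprotocoltester/.env \
    apps/liteprotocoltester/run_tester_node.sh \
    user@test-host:~/lpt/

# on the target machine, run the two roles in separate terminals
cd ~/lpt
./run_tester_node.sh ./liteprotocoltester SENDER <lightpush-service-peer-address> > sender.log
./run_tester_node.sh ./liteprotocoltester RECEIVER <filter-service-peer-address> > receiver.log
```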
## Configuration
### Environment variables for docker compose runs
| Variable | Description | Default |
| ---: | :--- | :--- |
| NUM_MESSAGES | Number of messages to publish, 0 means infinite | 120 |
| MESSAGE_INTERVAL_MILLIS | Interval between messages in milliseconds | 1000 |
| SHARD | Used shard for testing | 0 |
| CONTENT_TOPIC | content_topic for testing | /tester/1/light-pubsub-example/proto |
| CLUSTER_ID | cluster_id of the network | 16 |
| START_PUBLISHING_AFTER_SECS | Delay in seconds before publishing starts, to let the service node connect | 5 |
| MIN_MESSAGE_SIZE | Minimum message size in bytes | 1KiB |
| MAX_MESSAGE_SIZE | Maximum message size in bytes | 120KiB |
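For example, you can override some of the variables above directly on the command line when starting the composite setup; the values below are purely illustrative:

```bash
# shell environment variables take precedence over the .env file for docker compose substitution
NUM_MESSAGES=300 MESSAGE_INTERVAL_MILLIS=2000 MAX_MESSAGE_SIZE=50Kb \
  docker compose -f docker-compose-on-simularor.yml up -d
```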
### Lite Protocol Tester application cli options
| Option | Description | Default |
| :--- | :--- | :--- |
| --test_func | Selects SENDER (publisher) or RECEIVER mode | RECEIVER |
| --service-node| Address of the service node to use for lightpush and/or filter service | - |
| --bootstrap-node| Address of the fleet's bootstrap node, used to pick a service peer randomly from the network. The `--service-node` switch takes precedence over this | - |
| --num-messages | Number of messages to publish | 120 |
| --message-interval | Interval between messages in milliseconds | 1000 |
| --min-message-size | Minimum message size in bytes | 1KiB |
| --max-message-size | Maximum message size in bytes | 120KiB |
| --start-publishing-after | Delay in seconds before publishing starts, to let the service node connect | 5 |
| --pubsub-topic | Used pubsub_topic for testing | /waku/2/default-waku/proto |
| --content_topic | content_topic for testing | /tester/1/light-pubsub-example/proto |
| --cluster-id | Cluster id for the test | 0 |
| --config-file | TOML configuration file to fine-tune the light waku node.<br>Note that some configurations (full node services) are not taken into account | - |
| --nat | Same as the wakunode `nat` configuration, appears here to ease test setup | any |
| --rest-address | For convenience, REST configuration can be done here | 127.0.0.1 |
| --rest-port | For convenience, REST configuration can be done here | 8654 |
| --rest-allow-origin | For convenience, REST configuration can be done here | * |
| --log-level | Log level for the application | DEBUG |
| --log-format | Logging output format (TEXT or JSON) | TEXT |
| --metrics-port | Metrics scrape port | 8003 |
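As an illustration, a publisher-side invocation combining a few of the options above could look like the sketch below; the service node address is a placeholder and the values are only examples:

```bash
./build/liteprotocoltester \
  --test-func=SENDER \
  --service-node=<lightpush-service-peer-multiaddr-or-enr> \
  --num-messages=300 \
  --message-interval=1000 \
  --min-message-size=15Kb \
  --max-message-size=145Kb \
  --cluster-id=16 \
  --log-level=INFO
```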
### Specifying peer addresses
Service node or bootstrap addresses can be specified in multiaddress or ENR form.
### Using bootstrap nodes
There are multiple benefits to using bootstrap nodes. With them, liteprotocoltester uses the Peer Exchange protocol to gather peers from the network that are capable of serving as service peers for testing. Additionally it will test-dial them to verify their connectivity - this is reported in the logs and on the dashboard metrics.
By using a bootstrap node and peer exchange discovery, liteprotocoltester is also able to simulate switching service peers in case of failures. There is a built-in threshold of service peer failures (3) after which the service peer is switched during the test, and a maximum of 10 peer switches before the test is declared failed and quits.
These service peer failures are reported, thus extending the network reliability measures. A sketch of a bootstrap-based run follows.
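The sketch below uses only options from the table above; the bootstrap address is a placeholder:

```bash
# receiver side, discovering a filter service peer via the bootstrap node's Peer Exchange
./build/liteprotocoltester \
  --test-func=RECEIVER \
  --bootstrap-node=<bootstrap-peer-multiaddr-or-enr> \
  --cluster-id=16 \
  --log-level=INFO
```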
### Building docker image
The easiest way to build the docker image is to use the provided Makefile target.
```bash
cd <your-repository>
make docker-liteprotocoltester
```
This will build liteprotocoltester from the ground up and create a docker image containing the binary under the image name and tag `wakuorg/liteprotocoltester:latest`.
#### Building public image
If you want to push the image to a public registry, you can use the jenkins job to do so.
The job is available at https://ci.status.im/job/waku/job/liteprotocoltester/job/build-liteprotocoltester-image
#### Building and deployment for infra testing
For specific and continuous testing purposes we have a deployment of the `liteprotocoltester` test suite to our infra appliances.
This has its own configuration, constraints and requirements. To ease this job, the image shall be built and pushed with the `deploy` tag.
This can be done by the jenkins job mentioned above,
or manually by:
```bash
cd <your-repository>
make DOCKER_LPT_TAG=deploy docker-liteprotocoltester
```
The image created with this method differs from the image under any other tag: it is prepared to run a preconfigured test suite continuously.
It also lacks the prometheus metrics scraping endpoint and grafana, thus it is not recommended for general testing.
#### Manually building for docker compose runs on simulator or standalone
Please note that currently, to ease testing and development, the tester application docker image is based on ubuntu and uses an externally pre-built 'liteprotocoltester' binary.
This speeds up image creation. Another docker build file is provided for a proper build of a bundled image.
> `Dockerfile.liteprotocoltester` will create an ubuntu based image with the binary copied from the build directory.
> `Dockerfile.liteprotocoltester.compile` will create an ubuntu based image completely compiled from source. This can be slow.
#### Creating standalone runner docker image
To ease the work with lite-protocol-tester, a docker image can be built.
With that image it is easy to run the application in a container.
> `Dockerfile.liteprotocoltester` will create an ubuntu image with the binary copied from the build directory. You need to pre-build the application.
Here is how to build and run:
```bash
cd <your-repository>
make liteprotocoltester
cd apps/liteprotocoltester
docker build -t liteprotocoltester:latest -f Dockerfile.liteprotocoltester ../..
# alternatively you can push it to a registry
# edit and adjust .env file to your needs and for the network configuration
docker run --env-file .env liteprotocoltester:latest RECEIVER <service-node-peer-address>
docker run --env-file .env liteprotocoltester:latest SENDER <service-node-peer-address>
```
#### Run test with auto service peer selection from a fleet using bootstrap node
```bash
docker run --env-file .env liteprotocoltester:latest RECEIVER <bootstrap-node-peer-address> BOOTSTRAP
docker run --env-file .env liteprotocoltester:latest SENDER <bootstrap-node-peer-address> BOOTSTRAP
```
> Notice that official image is also available at harbor.status.im/wakuorg/liteprotocoltester:latest
## Examples
### Bootstrap or Service node selection
The easiest way to get proper bootstrap nodes for the tests is from the https://fleets.status.im page.
Choose the fleets on which you would like to run the tests.
> Please note that not all of them are configured to support the Peer Exchange protocol; those cannot be used as bootstrap nodes for `liteprotocoltester`.
### Environment variables
You do not necessarily need to use a .env file, although it can be more convenient.
You can always override all or part of the environment variables defined in the .env file.
### Run standalone
Example of running the liteprotocoltester in standalone mode on the status.staging network.
Testing includes using bootstrap nodes to gather service peers from the network via Peer Exchange protocol.
Both parties will test-dial all the peers retrieved with the corresponding protocol.
Sender will start publishing messages after 60 seconds, sending 200 messages with 1 second delay between them.
Message size will be between 15KiB and 145KiB.
Cluster id and Pubsub-topic must be accurately set according to the network configuration.
The example shows that either the multiaddress or the ENR form is accepted.
```bash
export START_PUBLISHING_AFTER_SECS=60
export NUM_MESSAGES=200
export MESSAGE_INTERVAL_MILLIS=1000
export MIN_MESSAGE_SIZE=15Kb
export MAX_MESSAGE_SIZE=145Kb
export SHARD=32
export CONTENT_TOPIC=/tester/2/light-pubsub-test/fleet
export CLUSTER_ID=16
docker run harbor.status.im/wakuorg/liteprotocoltester:latest RECEIVER /dns4/boot-01.do-ams3.status.staging.status.im/tcp/30303/p2p/16Uiu2HAmQE7FXQc6iZHdBzYfw3qCSDa9dLc1wsBJKoP4aZvztq2d BOOTSTRAP
# in different terminal session, repeat the exports and run the other party of the test.
docker run harbor.status.im/wakuorg/liteprotocoltester:latest SENDER enr:-QEiuECJPv2vL00Jp5sTEMAFyW7qXkK2cFgphlU_G8-FJuJqoW_D5aWIy3ylGdv2K8DkiG7PWgng4Ql_VI7Qc2RhBdwfAYJpZIJ2NIJpcIQvTKi6im11bHRpYWRkcnO4cgA2NjFib290LTAxLmFjLWNuLWhvbmdrb25nLWMuc3RhdHVzLnN0YWdpbmcuc3RhdHVzLmltBnZfADg2MWJvb3QtMDEuYWMtY24taG9uZ2tvbmctYy5zdGF0dXMuc3RhZ2luZy5zdGF0dXMuaW0GAbveA4Jyc40AEAUAAQAgAEAAgAEAiXNlY3AyNTZrMaEDkbgV7oqPNmFtX5FzSPi9WH8kkmrPB1R3n9xRXge91M-DdGNwgnZfg3VkcIIjKIV3YWt1Mg0 BOOTSTRAP
```
### Use of lpt-runner
Another method is to use the [lpt-runner repository](https://github.com/waku-org/lpt-runner/tree/master).
This extends testing with a grafana dashboard and eases the test setup.
Please read the corresponding [README](https://github.com/waku-org/lpt-runner/blob/master/README.md) there as well.
In this example we will run a similar test as above, but with 3 publisher node instances and 1 receiver node.
This test uses the waku.sandbox fleet, which is connected to TWN. This implies lower message rates due to the RLN rate limitation.
Also leave a gap of 120 seconds before starting to publish messages, to let the receiver side fully finish peer test-dialing.
For the TWN network it is always wise to use bootstrap nodes with Peer Exchange support.
> Theoretically we could use the same bootstrap node for both parties, but it is recommended to use different ones to simulate different network edges, thus getting more meaningful results.
```bash
git clone https://github.com/waku-org/lpt-runner.git
cd lpt-runner
export NUM_PUBLISHER_NODES=3
export NUM_RECEIVER_NODES=1
export START_PUBLISHING_AFTER_SECS=120
export NUM_MESSAGES=300
export MESSAGE_INTERVAL_MILLIS=7000
export MIN_MESSAGE_SIZE=15Kb
export MAX_MESSAGE_SIZE=145Kb
export SHARD=4
export CONTENT_TOPIC=/tester/2/light-pubsub-test/twn
export CLUSTER_ID=1
export FILTER_BOOTSTRAP=/dns4/node-01.ac-cn-hongkong-c.waku.sandbox.status.im/tcp/30303/p2p/16Uiu2HAmQYiojgZ8APsh9wqbWNyCstVhnp9gbeNrxSEQnLJchC92
export LIGHTPUSH_BOOTSTRAP=/dns4/node-01.do-ams3.waku.sandbox.status.im/tcp/30303/p2p/16Uiu2HAmNaeL4p3WEYzC9mgXBmBWSgWjPHRvatZTXnp8Jgv3iKsb
docker compose up -d
# we can check logs from one or all SENDER
docker compose logs -f --index 1 publishernode
# for checking receiver side performance
docker compose logs -f receivernode
# when test completed
docker compose down
```
For the dashboard, navigate to http://localhost:3033

View File

@ -0,0 +1,62 @@
when (NimMajor, NimMinor) < (1, 4):
{.push raises: [Defect].}
else:
{.push raises: [].}
import
std/[options, net, strformat],
chronicles,
chronos,
metrics,
libbacktrace,
libp2p/crypto/crypto,
confutils,
libp2p/wire
import
tools/confutils/cli_args,
waku/[
node/peer_manager,
waku_lightpush/common,
waku_relay,
waku_filter_v2,
waku_peer_exchange/protocol,
waku_core/multiaddrstr,
waku_enr/capabilities,
]
logScope:
topics = "diagnose connections"
proc allPeers(pm: PeerManager): string =
var allStr: string = ""
for idx, peer in pm.switch.peerStore.peers():
allStr.add(
" " & $idx & ". | " & constructMultiaddrStr(peer) & " | agent: " &
peer.getAgent() & " | protos: " & $peer.protocols & " | caps: " &
$peer.enr.map(getCapabilities) & "\n"
)
return allStr
proc logSelfPeers*(pm: PeerManager) =
let selfLighpushPeers = pm.switch.peerStore.getPeersByProtocol(WakuLightPushCodec)
let selfRelayPeers = pm.switch.peerStore.getPeersByProtocol(WakuRelayCodec)
let selfFilterPeers = pm.switch.peerStore.getPeersByProtocol(WakuFilterSubscribeCodec)
let selfPxPeers = pm.switch.peerStore.getPeersByProtocol(WakuPeerExchangeCodec)
let printable = catch:
"""*------------------------------------------------------------------------------------------*
| Self ({constructMultiaddrStr(pm.switch.peerInfo)}) peers:
*------------------------------------------------------------------------------------------*
| Lightpush peers({selfLighpushPeers.len()}): ${selfLighpushPeers}
*------------------------------------------------------------------------------------------*
| Filter peers({selfFilterPeers.len()}): ${selfFilterPeers}
*------------------------------------------------------------------------------------------*
| Relay peers({selfRelayPeers.len()}): ${selfRelayPeers}
*------------------------------------------------------------------------------------------*
| PX peers({selfPxPeers.len()}): ${selfPxPeers}
*------------------------------------------------------------------------------------------*
| All peers with protocol support:
{allPeers(pm)}
*------------------------------------------------------------------------------------------*""".fmt()
echo printable.valueOr("Error while printing statistics: " & error.msg)

View File

@ -0,0 +1,227 @@
version: "3.7"
x-logging: &logging
logging:
driver: json-file
options:
max-size: 1000m
# Environment variable definitions
x-eth-client-address: &eth_client_address ${ETH_CLIENT_ADDRESS:-} # Add your ETH_CLIENT_ADDRESS after the "-"
x-rln-environment: &rln_env
RLN_RELAY_CONTRACT_ADDRESS: ${RLN_RELAY_CONTRACT_ADDRESS:-0xF471d71E9b1455bBF4b85d475afb9BB0954A29c4}
RLN_RELAY_CRED_PATH: ${RLN_RELAY_CRED_PATH:-} # Optional: Add your RLN_RELAY_CRED_PATH after the "-"
RLN_RELAY_CRED_PASSWORD: ${RLN_RELAY_CRED_PASSWORD:-} # Optional: Add your RLN_RELAY_CRED_PASSWORD after the "-"
x-test-running-conditions: &test_running_conditions
NUM_MESSAGES: ${NUM_MESSAGES:-120}
MESSAGE_INTERVAL_MILLIS: "${MESSAGE_INTERVAL_MILLIS:-1000}"
SHARD: ${SHARD:-0}
CONTENT_TOPIC: ${CONTENT_TOPIC:-/tester/2/light-pubsub-test/wakusim}
CLUSTER_ID: ${CLUSTER_ID:-66}
MIN_MESSAGE_SIZE: ${MIN_MESSAGE_SIZE:-1Kb}
MAX_MESSAGE_SIZE: ${MAX_MESSAGE_SIZE:-150Kb}
START_PUBLISHING_AFTER_SECS: ${START_PUBLISHING_AFTER_SECS:-5} # seconds
# Services definitions
services:
lightpush-service:
image: ${NWAKU_IMAGE:-harbor.status.im/wakuorg/nwaku:latest-release}
# ports:
# - 30304:30304/tcp
# - 30304:30304/udp
# - 9005:9005/udp
# - 127.0.0.1:8003:8003
# - 80:80 #Let's Encrypt
# - 8000:8000/tcp #WSS
# - 127.0.0.1:8645:8645
<<:
- *logging
environment:
DOMAIN: ${DOMAIN}
RLN_RELAY_CRED_PASSWORD: "${RLN_RELAY_CRED_PASSWORD}"
ETH_CLIENT_ADDRESS: *eth_client_address
EXTRA_ARGS: ${EXTRA_ARGS}
<<:
- *rln_env
- *test_running_conditions
volumes:
- ./run_service_node.sh:/opt/run_service_node.sh:Z
- ${CERTS_DIR:-./certs}:/etc/letsencrypt/:Z
- ./rln_tree:/etc/rln_tree/:Z
- ./keystore:/keystore:Z
entrypoint: sh
command:
- /opt/run_service_node.sh
- LIGHTPUSH
networks:
- waku-simulator_simulation
publishernode:
image: waku.liteprotocoltester:latest
build:
context: ../..
dockerfile: ./apps/liteprotocoltester/Dockerfile.liteprotocoltester
deploy:
replicas: ${NUM_PUBLISHER_NODES:-3}
# ports:
# - 30304:30304/tcp
# - 30304:30304/udp
# - 9005:9005/udp
# - 127.0.0.1:8003:8003
# - 80:80 #Let's Encrypt
# - 8000:8000/tcp #WSS
# - 127.0.0.1:8646:8646
<<:
- *logging
environment:
DOMAIN: ${DOMAIN}
RLN_RELAY_CRED_PASSWORD: "${RLN_RELAY_CRED_PASSWORD}"
ETH_CLIENT_ADDRESS: *eth_client_address
EXTRA_ARGS: ${EXTRA_ARGS}
<<:
- *rln_env
- *test_running_conditions
volumes:
- ${CERTS_DIR:-./certs}:/etc/letsencrypt/:Z
- ./rln_tree:/etc/rln_tree/:Z
- ./keystore:/keystore:Z
entrypoint: sh
command:
- /usr/bin/run_tester_node.sh
- /usr/bin/liteprotocoltester
- SENDER
- waku-sim
depends_on:
- lightpush-service
configs:
- source: cfg_tester_node.toml
target: config.toml
networks:
- waku-simulator_simulation
filter-service:
image: ${NWAKU_IMAGE:-harbor.status.im/wakuorg/nwaku:latest-release}
# ports:
# - 30304:30305/tcp
# - 30304:30305/udp
# - 9005:9005/udp
# - 127.0.0.1:8003:8003
# - 80:80 #Let's Encrypt
# - 8000:8000/tcp #WSS
# - 127.0.0.1:8645:8645
<<:
- *logging
environment:
DOMAIN: ${DOMAIN}
RLN_RELAY_CRED_PASSWORD: "${RLN_RELAY_CRED_PASSWORD}"
ETH_CLIENT_ADDRESS: *eth_client_address
EXTRA_ARGS: ${EXTRA_ARGS}
<<:
- *rln_env
- *test_running_conditions
volumes:
- ./run_service_node.sh:/opt/run_service_node.sh:Z
- ${CERTS_DIR:-./certs}:/etc/letsencrypt/:Z
- ./rln_tree:/etc/rln_tree/:Z
- ./keystore:/keystore:Z
entrypoint: sh
command:
- /opt/run_service_node.sh
- FILTER
networks:
- waku-simulator_simulation
receivernode:
image: waku.liteprotocoltester:latest
build:
context: ../..
dockerfile: ./apps/liteprotocoltester/Dockerfile.liteprotocoltester
deploy:
replicas: ${NUM_RECEIVER_NODES:-1}
# ports:
# - 30304:30304/tcp
# - 30304:30304/udp
# - 9005:9005/udp
# - 127.0.0.1:8003:8003
# - 80:80 #Let's Encrypt
# - 8000:8000/tcp #WSS
# - 127.0.0.1:8647:8647
<<:
- *logging
environment:
DOMAIN: ${DOMAIN}
RLN_RELAY_CRED_PASSWORD: "${RLN_RELAY_CRED_PASSWORD}"
ETH_CLIENT_ADDRESS: *eth_client_address
EXTRA_ARGS: ${EXTRA_ARGS}
<<:
- *rln_env
- *test_running_conditions
volumes:
- ${CERTS_DIR:-./certs}:/etc/letsencrypt/:Z
- ./rln_tree:/etc/rln_tree/:Z
- ./keystore:/keystore:Z
entrypoint: sh
command:
- /usr/bin/run_tester_node.sh
- /usr/bin/liteprotocoltester
- RECEIVER
- waku-sim
depends_on:
- filter-service
- publishernode
configs:
- source: cfg_tester_node.toml
target: config.toml
networks:
- waku-simulator_simulation
# We have prometheus and grafana defined in waku-simulator already
prometheus:
image: docker.io/prom/prometheus:latest
volumes:
- ./monitoring/prometheus-config.yml:/etc/prometheus/prometheus.yml:Z
command:
- --config.file=/etc/prometheus/prometheus.yml
- --web.listen-address=:9099
# ports:
# - 127.0.0.1:9090:9090
restart: on-failure:5
depends_on:
- filter-service
- lightpush-service
- publishernode
- receivernode
networks:
- waku-simulator_simulation
grafana:
image: docker.io/grafana/grafana:latest
env_file:
- ./monitoring/configuration/grafana-plugins.env
volumes:
- ./monitoring/configuration/grafana.ini:/etc/grafana/grafana.ini:Z
- ./monitoring/configuration/dashboards.yaml:/etc/grafana/provisioning/dashboards/dashboards.yaml:Z
- ./monitoring/configuration/datasources.yaml:/etc/grafana/provisioning/datasources/datasources.yaml:Z
- ./monitoring/configuration/dashboards:/var/lib/grafana/dashboards/:Z
- ./monitoring/configuration/customizations/custom-logo.svg:/usr/share/grafana/public/img/grafana_icon.svg:Z
- ./monitoring/configuration/customizations/custom-logo.svg:/usr/share/grafana/public/img/grafana_typelogo.svg:Z
- ./monitoring/configuration/customizations/custom-logo.png:/usr/share/grafana/public/img/fav32.png:Z
ports:
- 0.0.0.0:3033:3033
restart: on-failure:5
depends_on:
- prometheus
networks:
- waku-simulator_simulation
configs:
cfg_tester_node.toml:
content: |
max-connections = 100
networks:
waku-simulator_simulation:
external: true

View File

@ -0,0 +1,172 @@
version: "3.7"
x-logging: &logging
logging:
driver: json-file
options:
max-size: 1000m
# Environment variable definitions
x-eth-client-address: &eth_client_address ${ETH_CLIENT_ADDRESS:-} # Add your ETH_CLIENT_ADDRESS after the "-"
x-rln-environment: &rln_env
RLN_RELAY_CONTRACT_ADDRESS: ${RLN_RELAY_CONTRACT_ADDRESS:-0xB9cd878C90E49F797B4431fBF4fb333108CB90e6}
RLN_RELAY_CRED_PATH: ${RLN_RELAY_CRED_PATH:-} # Optional: Add your RLN_RELAY_CRED_PATH after the "-"
RLN_RELAY_CRED_PASSWORD: ${RLN_RELAY_CRED_PASSWORD:-} # Optional: Add your RLN_RELAY_CRED_PASSWORD after the "-"
x-test-running-conditions: &test_running_conditions
NUM_MESSAGES: ${NUM_MESSAGES:-120}
MESSAGE_INTERVAL_MILLIS: "${MESSAGE_INTERVAL_MILLIS:-1000}"
SHARD: ${SHARD:-0}
CONTENT_TOPIC: ${CONTENT_TOPIC:-/tester/2/light-pubsub-test/wakusim}
CLUSTER_ID: ${CLUSTER_ID:-66}
MIN_MESSAGE_SIZE: ${MIN_MESSAGE_SIZE:-1Kb}
MAX_MESSAGE_SIZE: ${MAX_MESSAGE_SIZE:-150Kb}
START_PUBLISHING_AFTER_SECS: ${START_PUBLISHING_AFTER_SECS:-5} # seconds
STANDALONE: ${STANDALONE:-1}
RECEIVER_METRICS_PORT: 8003
PUBLISHER_METRICS_PORT: 8003
# Services definitions
services:
servicenode:
image: ${NWAKU_IMAGE:-harbor.status.im/wakuorg/nwaku:latest-release}
ports:
- 30304:30304/tcp
- 30304:30304/udp
- 9005:9005/udp
- 127.0.0.1:8003:8003
- 80:80 #Let's Encrypt
- 8000:8000/tcp #WSS
- 127.0.0.1:8645:8645
<<:
- *logging
environment:
DOMAIN: ${DOMAIN}
RLN_RELAY_CRED_PASSWORD: "${RLN_RELAY_CRED_PASSWORD}"
ETH_CLIENT_ADDRESS: *eth_client_address
EXTRA_ARGS: ${EXTRA_ARGS}
<<:
- *rln_env
- *test_running_conditions
volumes:
- ./run_service_node.sh:/opt/run_service_node.sh:Z
- ${CERTS_DIR:-./certs}:/etc/letsencrypt/:Z
- ./rln_tree:/etc/rln_tree/:Z
- ./keystore:/keystore:Z
entrypoint: sh
command:
- /opt/run_service_node.sh
publishernode:
image: waku.liteprotocoltester:latest
build:
context: ../..
dockerfile: ./apps/liteprotocoltester/Dockerfile.liteprotocoltester
ports:
# - 30304:30304/tcp
# - 30304:30304/udp
# - 9005:9005/udp
# - 127.0.0.1:8003:8003
# - 80:80 #Let's Encrypt
# - 8000:8000/tcp #WSS
- 127.0.0.1:8646:8646
<<:
- *logging
environment:
DOMAIN: ${DOMAIN}
RLN_RELAY_CRED_PASSWORD: "${RLN_RELAY_CRED_PASSWORD}"
ETH_CLIENT_ADDRESS: *eth_client_address
EXTRA_ARGS: ${EXTRA_ARGS}
<<:
- *rln_env
- *test_running_conditions
volumes:
- ${CERTS_DIR:-./certs}:/etc/letsencrypt/:Z
- ./rln_tree:/etc/rln_tree/:Z
- ./keystore:/keystore:Z
entrypoint: sh
command:
- /usr/bin/run_tester_node.sh
- /usr/bin/liteprotocoltester
- SENDER
- servicenode
depends_on:
- servicenode
configs:
- source: cfg_tester_node.toml
target: config.toml
receivernode:
image: waku.liteprotocoltester:latest
build:
context: ../..
dockerfile: ./apps/liteprotocoltester/Dockerfile.liteprotocoltester
ports:
# - 30304:30304/tcp
# - 30304:30304/udp
# - 9005:9005/udp
# - 127.0.0.1:8003:8003
# - 80:80 #Let's Encrypt
# - 8000:8000/tcp #WSS
- 127.0.0.1:8647:8647
<<:
- *logging
environment:
DOMAIN: ${DOMAIN}
RLN_RELAY_CRED_PASSWORD: "${RLN_RELAY_CRED_PASSWORD}"
ETH_CLIENT_ADDRESS: *eth_client_address
EXTRA_ARGS: ${EXTRA_ARGS}
<<:
- *rln_env
- *test_running_conditions
volumes:
- ./run_tester_node.sh:/opt/run_tester_node.sh:Z
- ${CERTS_DIR:-./certs}:/etc/letsencrypt/:Z
- ./rln_tree:/etc/rln_tree/:Z
- ./keystore:/keystore:Z
entrypoint: sh
command:
- /usr/bin/run_tester_node.sh
- /usr/bin/liteprotocoltester
- RECEIVER
- servicenode
depends_on:
- servicenode
- publishernode
configs:
- source: cfg_tester_node.toml
target: config.toml
prometheus:
image: docker.io/prom/prometheus:latest
volumes:
- ./monitoring/prometheus-config.yml:/etc/prometheus/prometheus.yml:Z
command:
- --config.file=/etc/prometheus/prometheus.yml
ports:
- 127.0.0.1:9090:9090
depends_on:
- servicenode
grafana:
image: docker.io/grafana/grafana:latest
env_file:
- ./monitoring/configuration/grafana-plugins.env
volumes:
- ./monitoring/configuration/grafana.ini:/etc/grafana/grafana.ini:Z
- ./monitoring/configuration/dashboards.yaml:/etc/grafana/provisioning/dashboards/dashboards.yaml:Z
- ./monitoring/configuration/datasources.yaml:/etc/grafana/provisioning/datasources/datasources.yaml:Z
- ./monitoring/configuration/dashboards:/var/lib/grafana/dashboards/:Z
- ./monitoring/configuration/customizations/custom-logo.svg:/usr/share/grafana/public/img/grafana_icon.svg:Z
- ./monitoring/configuration/customizations/custom-logo.svg:/usr/share/grafana/public/img/grafana_typelogo.svg:Z
- ./monitoring/configuration/customizations/custom-logo.png:/usr/share/grafana/public/img/fav32.png:Z
ports:
- 0.0.0.0:3000:3000
depends_on:
- prometheus
configs:
cfg_tester_node.toml:
content: |
max-connections = 100

View File

@ -0,0 +1,11 @@
TEST_INTERVAL_MINUTES=180
START_PUBLISHING_AFTER_SECS=120
NUM_MESSAGES=300
MESSAGE_INTERVAL_MILLIS=1000
MIN_MESSAGE_SIZE=15Kb
MAX_MESSAGE_SIZE=145Kb
SHARD=32
CONTENT_TOPIC=/tester/2/light-pubsub-test-at-infra/status-prod
CLUSTER_ID=16
LIGHTPUSH_BOOTSTRAP=enr:-QEKuED9AJm2HGgrRpVaJY2nj68ao_QiPeUT43sK-aRM7sMJ6R4G11OSDOwnvVacgN1sTw-K7soC5dzHDFZgZkHU0u-XAYJpZIJ2NIJpcISnYxMvim11bHRpYWRkcnO4WgAqNiVib290LTAxLmRvLWFtczMuc3RhdHVzLnByb2Quc3RhdHVzLmltBnZfACw2JWJvb3QtMDEuZG8tYW1zMy5zdGF0dXMucHJvZC5zdGF0dXMuaW0GAbveA4Jyc40AEAUAAQAgAEAAgAEAiXNlY3AyNTZrMaEC3rRtFQSgc24uWewzXaxTY8hDAHB8sgnxr9k8Rjb5GeSDdGNwgnZfg3VkcIIjKIV3YWt1Mg0
FILTER_BOOTSTRAP=enr:-QEcuED7ww5vo2rKc1pyBp7fubBUH-8STHEZHo7InjVjLblEVyDGkjdTI9VdqmYQOn95vuQH-Htku17WSTzEufx-Wg4mAYJpZIJ2NIJpcIQihw1Xim11bHRpYWRkcnO4bAAzNi5ib290LTAxLmdjLXVzLWNlbnRyYWwxLWEuc3RhdHVzLnByb2Quc3RhdHVzLmltBnZfADU2LmJvb3QtMDEuZ2MtdXMtY2VudHJhbDEtYS5zdGF0dXMucHJvZC5zdGF0dXMuaW0GAbveA4Jyc40AEAUAAQAgAEAAgAEAiXNlY3AyNTZrMaECxjqgDQ0WyRSOilYU32DA5k_XNlDis3m1VdXkK9xM6kODdGNwgnZfg3VkcIIjKIV3YWt1Mg0

View File

@ -0,0 +1,24 @@
import chronos, results, options
import waku/[waku_node, waku_core]
import publisher_base
type LegacyPublisher* = ref object of PublisherBase
proc new*(T: type LegacyPublisher, wakuNode: WakuNode): T =
if isNil(wakuNode.wakuLegacyLightpushClient):
wakuNode.mountLegacyLightPushClient()
return LegacyPublisher(wakuNode: wakuNode)
method send*(
self: LegacyPublisher,
topic: PubsubTopic,
message: WakuMessage,
servicePeer: RemotePeerInfo,
): Future[Result[void, string]] {.async.} =
# On error this must return the original error description, because the text is used to distinguish error types in metrics.
discard (
await self.wakuNode.legacyLightpushPublish(some(topic), message, servicePeer)
).valueOr:
return err(error)
return ok()

View File

@ -0,0 +1,214 @@
{.push raises: [].}
import
std/[options, strutils, os, sequtils, net],
chronicles,
chronos,
metrics,
libbacktrace,
system/ansi_c,
libp2p/crypto/crypto,
confutils
import
tools/confutils/cli_args,
waku/[
common/enr,
common/logging,
factory/waku as waku_factory,
waku_node,
node/waku_metrics,
node/peer_manager,
waku_lightpush/common,
waku_filter_v2,
waku_peer_exchange/protocol,
waku_core/peers,
waku_core/multiaddrstr,
],
./tester_config,
./publisher,
./receiver,
./diagnose_connections,
./service_peer_management
logScope:
topics = "liteprotocoltester main"
proc logConfig(conf: LiteProtocolTesterConf) =
info "Configuration: Lite protocol tester", conf = $conf
{.pop.}
when isMainModule:
## Node setup happens in 6 phases:
## 1. Set up storage
## 2. Initialize node
## 3. Mount and initialize configured protocols
## 4. Start node and mounted protocols
## 5. Start monitoring tools and external interfaces
## 6. Setup graceful shutdown hooks
const versionString = "version / git commit hash: " & waku_factory.git_version
let conf = LiteProtocolTesterConf.load(version = versionString).valueOr:
error "failure while loading the configuration", error = error
quit(QuitFailure)
## Logging setup
logging.setupLog(conf.logLevel, conf.logFormat)
info "Running Lite Protocol Tester node", version = waku_factory.git_version
logConfig(conf)
##Prepare Waku configuration
## - load from config file
## - override according to tester functionality
##
var wakuNodeConf: WakuNodeConf
if conf.configFile.isSome():
try:
var configFile {.threadvar.}: InputFile
configFile = conf.configFile.get()
wakuNodeConf = WakuNodeConf.load(
version = versionString,
printUsage = false,
secondarySources = proc(
wnconf: WakuNodeConf, sources: auto
) {.gcsafe, raises: [ConfigurationError].} =
echo "Loading secondary configuration file into WakuNodeConf"
sources.addConfigFile(Toml, configFile),
)
except CatchableError:
error "Loading Waku configuration failed", error = getCurrentExceptionMsg()
quit(QuitFailure)
wakuNodeConf.logLevel = conf.logLevel
wakuNodeConf.logFormat = conf.logFormat
wakuNodeConf.nat = conf.nat
wakuNodeConf.maxConnections = 500
wakuNodeConf.restAddress = conf.restAddress
wakuNodeConf.restPort = conf.restPort
wakuNodeConf.restAllowOrigin = conf.restAllowOrigin
wakuNodeConf.dnsAddrsNameServers =
@[parseIpAddress("8.8.8.8"), parseIpAddress("1.1.1.1")]
wakuNodeConf.shards = @[conf.shard]
wakuNodeConf.contentTopics = conf.contentTopics
wakuNodeConf.clusterId = conf.clusterId
## TODO: Depending on the tester needs we might extend here with shards, clusterId, etc...
wakuNodeConf.metricsServer = true
wakuNodeConf.metricsServerAddress = parseIpAddress("0.0.0.0")
wakuNodeConf.metricsServerPort = conf.metricsPort
# If the bootstrap option is chosen, we expect our clients are not mounted,
# so we mount PeerExchange manually to gather possible service peers;
# if we get some, we mount the client protocols afterwards.
wakuNodeConf.peerExchange = false
wakuNodeConf.relay = false
wakuNodeConf.filter = false
wakuNodeConf.lightpush = false
wakuNodeConf.store = false
wakuNodeConf.rest = false
wakuNodeConf.relayServiceRatio = "40:60"
let wakuConf = wakuNodeConf.toWakuConf().valueOr:
error "Issue converting toWakuConf", error = $error
quit(QuitFailure)
var waku = (waitFor Waku.new(wakuConf)).valueOr:
error "Waku initialization failed", error = error
quit(QuitFailure)
(waitFor startWaku(addr waku)).isOkOr:
error "Starting waku failed", error = error
quit(QuitFailure)
info "Setting up shutdown hooks"
proc asyncStopper(waku: Waku) {.async: (raises: [Exception]).} =
await waku.stop()
quit(QuitSuccess)
# Handle Ctrl-C SIGINT
proc handleCtrlC() {.noconv.} =
when defined(windows):
# workaround for https://github.com/nim-lang/Nim/issues/4057
setupForeignThreadGc()
notice "Shutting down after receiving SIGINT"
asyncSpawn asyncStopper(waku)
setControlCHook(handleCtrlC)
# Handle SIGTERM
when defined(posix):
proc handleSigterm(signal: cint) {.noconv.} =
notice "Shutting down after receiving SIGTERM"
asyncSpawn asyncStopper(waku)
c_signal(ansi_c.SIGTERM, handleSigterm)
# Handle SIGSEGV
when defined(posix):
proc handleSigsegv(signal: cint) {.noconv.} =
# Require --debugger:native
fatal "Shutting down after receiving SIGSEGV", stacktrace = getBacktrace()
# Not available in -d:release mode
writeStackTrace()
waitFor waku.stop()
quit(QuitFailure)
c_signal(ansi_c.SIGSEGV, handleSigsegv)
info "Node setup complete"
var codec = WakuLightPushCodec
# mounting relevant client, for PX filter client must be mounted ahead
if conf.testFunc == TesterFunctionality.SENDER:
codec = WakuLightPushCodec
else:
codec = WakuFilterSubscribeCodec
var lookForServiceNode = false
var serviceNodePeerInfo: RemotePeerInfo
if conf.serviceNode.len == 0:
if conf.bootstrapNode.len > 0:
info "Bootstrapping with PeerExchange to gather random service node"
let futForServiceNode = pxLookupServiceNode(waku.node, conf)
if not (waitFor futForServiceNode.withTimeout(20.minutes)):
error "Service node not found in time via PX"
quit(QuitFailure)
futForServiceNode.read().isOkOr:
error "Service node for test not found via PX"
quit(QuitFailure)
serviceNodePeerInfo = selectRandomServicePeer(
waku.node.peerManager, none(RemotePeerInfo), codec
).valueOr:
error "Service node selection failed"
quit(QuitFailure)
else:
error "No service or bootstrap node provided"
quit(QuitFailure)
else:
# support for both ENR and URI formatted service node addresses
serviceNodePeerInfo = translateToRemotePeerInfo(conf.serviceNode).valueOr:
error "failed to parse service-node", node = conf.serviceNode
quit(QuitFailure)
info "Service node to be used", serviceNode = $serviceNodePeerInfo
logSelfPeers(waku.node.peerManager)
if conf.testFunc == TesterFunctionality.SENDER:
setupAndPublish(waku.node, conf, serviceNodePeerInfo)
else:
setupAndListen(waku.node, conf, serviceNodePeerInfo)
runForever()

View File

@ -0,0 +1,56 @@
## Metrics declarations for the Lite Protocol Tester publisher and receiver
import metrics
export metrics
declarePublicGauge lpt_receiver_sender_peer_count, "count of sender peers"
declarePublicCounter lpt_receiver_received_messages_count,
"number of messages received per peer", ["peer"]
declarePublicCounter lpt_receiver_received_bytes,
"number of received bytes per peer", ["peer"]
declarePublicGauge lpt_receiver_missing_messages_count,
"number of missing messages per peer", ["peer"]
declarePublicCounter lpt_receiver_duplicate_messages_count,
"number of duplicate messages per peer", ["peer"]
declarePublicGauge lpt_receiver_distinct_duplicate_messages_count,
"number of distinct duplicate messages per peer", ["peer"]
declarePublicGauge lpt_receiver_latencies,
"Message delivery latency per peer (min-avg-max)", ["peer", "latency"]
declarePublicCounter lpt_receiver_lost_subscription_count,
"number of filter service peer failed PING requests - lost subscription"
declarePublicCounter lpt_publisher_sent_messages_count, "number of messages published"
declarePublicCounter lpt_publisher_failed_messages_count,
"number of messages failed to publish per failure cause", ["cause"]
declarePublicCounter lpt_publisher_sent_bytes, "number of total bytes sent"
declarePublicCounter lpt_service_peer_failure_count,
"number of failure during using service peer [publisher/receiever]", ["role", "agent"]
declarePublicCounter lpt_change_service_peer_count,
"number of times [publisher/receiver] had to change service peer", ["role"]
declarePublicGauge lpt_px_peers,
"Number of peers PeerExchange discovered and can be dialed"
declarePublicGauge lpt_dialed_peers, "Number of peers successfully dialed", ["agent"]
declarePublicGauge lpt_dial_failures, "Number of dial failures by cause", ["agent"]
declarePublicHistogram lpt_publish_duration_seconds,
"duration to lightpush messages",
buckets = [
0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1.0, 2.5, 5.0, 7.5, 10.0,
15.0, 20.0, 30.0, Inf,
]

View File

@ -0,0 +1,54 @@
#!/usr/bin/env python3
import os
import time
from subprocess import Popen
import sys
def load_env(file_path):
predefined_test_env = {}
with open(file_path) as f:
for line in f:
if line.strip() and not line.startswith('#'):
key, value = line.strip().split('=', 1)
predefined_test_env[key] = value
return predefined_test_env
def run_tester_node(predefined_test_env):
role = sys.argv[1]
# override incoming environment variables with the ones from the file to prefer predefined testing environment.
for key, value in predefined_test_env.items():
os.environ[key] = value
script_cmd = "/usr/bin/run_tester_node_at_infra.sh /usr/bin/liteprotocoltester {role}".format(role=role)
return os.system(script_cmd)
if __name__ == "__main__":
if len(sys.argv) < 2 or sys.argv[1] not in ["RECEIVER", "SENDER", "SENDERV3"]:
print("Error: First argument must be either 'RECEIVER' or 'SENDER' or 'SENDERV3'")
sys.exit(1)
predefined_test_env_file = '/usr/bin/infra.env'
predefined_test_env = load_env(predefined_test_env_file)
test_interval_minutes = int(predefined_test_env.get('TEST_INTERVAL_MINUTES', 60)) # Default to 60 minutes if not set
print(f"supervisor: Start testing loop. Interval is {test_interval_minutes} minutes")
counter = 0
while True:
counter += 1
start_time = time.time()
print(f"supervisor: Run #{counter} started at {time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(start_time))}")
print(f"supervisor: with arguments: {predefined_test_env}")
exit_code = run_tester_node(predefined_test_env)
end_time = time.time()
run_time = end_time - start_time
sleep_time = max(5 * 60, (test_interval_minutes * 60) - run_time)
print(f"supervisor: Tester node finished at {time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(end_time))}")
print(f"supervisor: Runtime was {run_time:.2f} seconds")
print(f"supervisor: Next run scheduled in {sleep_time // 60:.2f} minutes")
time.sleep(sleep_time)

Binary file not shown (11 KiB image).

File diff suppressed because one or more lines are too long (13 KiB image).

View File

@ -0,0 +1,9 @@
apiVersion: 1
providers:
- name: 'Prometheus'
orgId: 1
folder: ''
type: file
options:
path: /var/lib/grafana/dashboards

View File

@ -0,0 +1,11 @@
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
access: proxy
org_id: 1
url: http://prometheus:9099
is_default: true
version: 1
editable: true

View File

@ -0,0 +1,2 @@
#GF_INSTALL_PLUGINS=grafana-worldmap-panel,grafana-piechart-panel,digrich-bubblechart-panel,yesoreyeram-boomtheme-panel,briangann-gauge-panel,jdbranham-diagram-panel,agenty-flowcharting-panel,citilogics-geoloop-panel,savantly-heatmap-panel,mtanda-histogram-panel,pierosavi-imageit-panel,michaeldmoore-multistat-panel,zuburqan-parity-report-panel,natel-plotly-panel,bessler-pictureit-panel,grafana-polystat-panel,corpglory-progresslist-panel,snuids-radar-panel,fzakaria-simple-config.config.annotations-datasource,vonage-status-panel,snuids-trafficlights-panel,pr0ps-trackmap-panel,alexandra-trackmap-panel,btplc-trend-box-panel
GF_INSTALL_PLUGINS=grafana-worldmap-panel,grafana-piechart-panel,yesoreyeram-boomtheme-panel,briangann-gauge-panel,pierosavi-imageit-panel,bessler-pictureit-panel,vonage-status-panel

View File

@ -0,0 +1,53 @@
instance_name = liteprotocoltester dashboard
;[dashboards.json]
;enabled = true
;path = /home/git/grafana/grafana-dashboards/dashboards
[server]
http_port = 3033
#################################### Auth ##########################
[auth]
disable_login_form = false
#################################### Anonymous Auth ##########################
[auth.anonymous]
# enable anonymous access
enabled = true
# specify organization name that should be used for unauthenticated users
;org_name = Public
# specify role for unauthenticated users
org_role = Admin
; org_role = Viewer
;[security]
;admin_user = ocr
;admin_password = ocr
;[users]
# disable user signup / registration
;allow_sign_up = false
# Set to true to automatically assign new users to the default organization (id 1)
;auto_assign_org = true
# Default role new users will be automatically assigned (if disabled above is set to true)
;auto_assign_org_role = Viewer
#################################### SMTP / Emailing ##########################
;[smtp]
;enabled = false
;host = localhost:25
;user =
;password =
;cert_file =
;key_file =
;skip_verify = false
;from_address = admin@grafana.localhost
;[emails]
;welcome_email_on_sign_up = false

View File

@ -0,0 +1,35 @@
global:
scrape_interval: 15s
evaluation_interval: 15s
external_labels:
monitor: "Monitoring"
scrape_configs:
- job_name: "liteprotocoltester"
static_configs:
- targets: ["liteprotocoltester-publishernode-1:8003",
"liteprotocoltester-publishernode-2:8003",
"liteprotocoltester-publishernode-3:8003",
"liteprotocoltester-publishernode-4:8003",
"liteprotocoltester-publishernode-5:8003",
"liteprotocoltester-publishernode-6:8003",
"liteprotocoltester-receivernode-1:8003",
"liteprotocoltester-receivernode-2:8003",
"liteprotocoltester-receivernode-3:8003",
"liteprotocoltester-receivernode-4:8003",
"liteprotocoltester-receivernode-5:8003",
"liteprotocoltester-receivernode-6:8003",
"publishernode:8003",
"publishernode-1:8003",
"publishernode-2:8003",
"publishernode-3:8003",
"publishernode-4:8003",
"publishernode-5:8003",
"publishernode-6:8003",
"receivernode:8003",
"receivernode-1:8003",
"receivernode-2:8003",
"receivernode-3:8003",
"receivernode-4:8003",
"receivernode-5:8003",
"receivernode-6:8003",]

View File

@ -0,0 +1,4 @@
-d:chronicles_line_numbers
-d:chronicles_runtime_filtering:on
-d:discv5_protocol_id:d5waku
path = "../.."

View File

@ -0,0 +1,266 @@
import
std/[strformat, sysrand, random, strutils, sequtils],
system/ansi_c,
chronicles,
chronos,
chronos/timer as chtimer,
stew/byteutils,
results,
json_serialization as js
import
waku/[
common/logging,
waku_node,
node/peer_manager,
waku_core,
waku_lightpush/client,
waku_lightpush/common,
common/utils/parse_size_units,
],
./tester_config,
./tester_message,
./lpt_metrics,
./diagnose_connections,
./service_peer_management,
./publisher_base,
./legacy_publisher,
./v3_publisher
randomize()
type SizeRange* = tuple[min: uint64, max: uint64]
var RANDOM_PAYLOAD {.threadvar.}: seq[byte]
RANDOM_PAYLOAD = urandom(1024 * 1024)
# 1MiB of random payload to be used to extend message
proc prepareMessage(
sender: string,
messageIndex, numMessages: uint32,
startedAt: TimeStamp,
prevMessageAt: var Timestamp,
contentTopic: ContentTopic,
size: SizeRange,
): (WakuMessage, uint64) =
var renderSize = rand(size.min .. size.max)
let current = getNowInNanosecondTime()
let payload = ProtocolTesterMessage(
sender: sender,
index: messageIndex,
count: numMessages,
startedAt: startedAt,
sinceStart: current - startedAt,
sincePrev: current - prevMessageAt,
size: renderSize,
)
prevMessageAt = current
let text = js.Json.encode(payload)
let contentPayload = toBytes(text & " \0")
if renderSize < len(contentPayload).uint64:
renderSize = len(contentPayload).uint64
let finalPayload =
concat(contentPayload, RANDOM_PAYLOAD[0 .. renderSize - len(contentPayload).uint64])
let message = WakuMessage(
payload: finalPayload, # content of the message
contentTopic: contentTopic, # content topic to publish to
ephemeral: true, # tell store nodes to not store it
timestamp: current, # current timestamp
)
return (message, renderSize)
var sentMessages {.threadvar.}: OrderedTable[uint32, tuple[hash: string, relayed: bool]]
var failedToSendCause {.threadvar.}: Table[string, uint32]
var failedToSendCount {.threadvar.}: uint32
var numMessagesToSend {.threadvar.}: uint32
var messagesSent {.threadvar.}: uint32
var noOfServicePeerSwitches {.threadvar.}: uint32
proc reportSentMessages() =
let report = catch:
"""*----------------------------------------*
| Service Peer Switches: {noOfServicePeerSwitches:>15} |
*----------------------------------------*
| Expected | Sent | Failed |
|{numMessagesToSend+failedToSendCount:>11} |{messagesSent:>11} |{failedToSendCount:>11} |
*----------------------------------------*""".fmt()
echo report.valueOr("Error while printing statistics")
echo "*--------------------------------------------------------------------------------------------------*"
echo "| Failure cause | count |"
for (cause, count) in failedToSendCause.pairs:
echo fmt"|{cause:<87}|{count:>10}|"
echo "*--------------------------------------------------------------------------------------------------*"
echo "*--------------------------------------------------------------------------------------------------*"
echo "| Index | Relayed | Hash |"
for (index, info) in sentMessages.pairs:
echo fmt"|{index+1:>10}|{info.relayed:<9}| {info.hash:<76}|"
echo "*--------------------------------------------------------------------------------------------------*"
# every sent message hash should be logged only once
sentMessages.clear()
proc publishMessages(
wakuNode: WakuNode,
publisher: PublisherBase,
servicePeer: RemotePeerInfo,
lightpushPubsubTopic: PubsubTopic,
lightpushContentTopic: ContentTopic,
numMessages: uint32,
messageSizeRange: SizeRange,
messageInterval: Duration,
preventPeerSwitch: bool,
) {.async.} =
var actualServicePeer = servicePeer
let startedAt = getNowInNanosecondTime()
var prevMessageAt = startedAt
var renderMsgSize = messageSizeRange
# set defaults for min/max message size to avoid conflict with the meaningful payload size
renderMsgSize.min = max(1024.uint64, renderMsgSize.min) # do not use less than 1KB
renderMsgSize.max = max(2048.uint64, renderMsgSize.max) # minimum of max is 2KB
renderMsgSize.min = min(renderMsgSize.min, renderMsgSize.max)
renderMsgSize.max = max(renderMsgSize.min, renderMsgSize.max)
const maxFailedPush = 3
var noFailedPush = 0
var noFailedServiceNodeSwitches = 0
let selfPeerId = $wakuNode.switch.peerInfo.peerId
failedToSendCount = 0
numMessagesToSend = if numMessages == 0: uint32.high else: numMessages
messagesSent = 0
while messagesSent < numMessagesToSend:
let (message, msgSize) = prepareMessage(
selfPeerId,
messagesSent + 1,
numMessagesToSend,
startedAt,
prevMessageAt,
lightpushContentTopic,
renderMsgSize,
)
let publishStartTime = Moment.now()
let wlpRes = await publisher.send(lightpushPubsubTopic, message, actualServicePeer)
let publishDuration = Moment.now() - publishStartTime
let msgHash = computeMessageHash(lightpushPubsubTopic, message).to0xHex
if wlpRes.isOk():
lpt_publish_duration_seconds.observe(publishDuration.milliseconds.float / 1000)
sentMessages[messagesSent] = (hash: msgHash, relayed: true)
notice "published message using lightpush",
index = messagesSent + 1,
count = numMessagesToSend,
size = msgSize,
pubsubTopic = lightpushPubsubTopic,
hash = msgHash
inc(messagesSent)
lpt_publisher_sent_messages_count.inc()
lpt_publisher_sent_bytes.inc(amount = msgSize.int64)
if noFailedPush > 0:
noFailedPush -= 1
else:
sentMessages[messagesSent] = (hash: msgHash, relayed: false)
failedToSendCause.mgetOrPut(wlpRes.error, 1).inc()
error "failed to publish message using lightpush",
err = wlpRes.error, hash = msgHash
inc(failedToSendCount)
lpt_publisher_failed_messages_count.inc(labelValues = [wlpRes.error])
if not wlpRes.error.toLower().contains("dial"):
# retry sending after shorter wait
await sleepAsync(2.seconds)
continue
else:
noFailedPush += 1
lpt_service_peer_failure_count.inc(
labelValues = ["publisher", actualServicePeer.getAgent()]
)
if not preventPeerSwitch and noFailedPush > maxFailedPush:
info "Max push failure limit reached, Try switching peer."
actualServicePeer = selectRandomServicePeer(
wakuNode.peerManager, some(actualServicePeer), WakuLightPushCodec
).valueOr:
error "Failed to find new service peer. Exiting."
noFailedServiceNodeSwitches += 1
break
info "New service peer in use",
codec = lightpushPubsubTopic,
peer = constructMultiaddrStr(actualServicePeer)
noFailedPush = 0
noOfServicePeerSwitches += 1
lpt_change_service_peer_count.inc(labelValues = ["publisher"])
continue # try again with new peer without delay
await sleepAsync(messageInterval)
proc setupAndPublish*(
wakuNode: WakuNode, conf: LiteProtocolTesterConf, servicePeer: RemotePeerInfo
) =
var publisher: PublisherBase
if conf.lightpushVersion == LightpushVersion.LEGACY:
info "Using legacy lightpush protocol for publishing messages"
publisher = LegacyPublisher.new(wakuNode)
else:
info "Using lightpush v3 protocol for publishing messages"
publisher = V3Publisher.new(wakuNode)
# give some time to receiver side to set up
let waitTillStartTesting = conf.startPublishingAfter.seconds
let parsedMinMsgSize = parseMsgSize(conf.minTestMessageSize).valueOr:
error "failed to parse 'min-test-msg-size' param: ", error = error
return
let parsedMaxMsgSize = parseMsgSize(conf.maxTestMessageSize).valueOr:
error "failed to parse 'max-test-msg-size' param: ", error = error
return
info "Sending test messages in", wait = waitTillStartTesting
waitFor sleepAsync(waitTillStartTesting)
info "Start sending messages to service node using lightpush"
sentMessages.sort(system.cmp)
let interval = secs(60)
var printStats: CallbackFunc
printStats = CallbackFunc(
proc(udata: pointer) {.gcsafe.} =
reportSentMessages()
if messagesSent >= numMessagesToSend:
info "All messages are sent. Exiting."
## for graceful shutdown through signal hooks
discard c_raise(ansi_c.SIGTERM)
else:
discard setTimer(Moment.fromNow(interval), printStats)
)
discard setTimer(Moment.fromNow(interval), printStats)
# Start maintaining subscription
asyncSpawn publishMessages(
wakuNode,
publisher,
servicePeer,
conf.getPubsubTopic(),
conf.contentTopics[0],
conf.numMessages,
(min: parsedMinMsgSize, max: parsedMaxMsgSize),
conf.messageInterval.milliseconds,
conf.fixedServicePeer,
)

View File

@ -0,0 +1,14 @@
import chronos, results
import waku/[waku_node, waku_core]
type PublisherBase* = ref object of RootObj
wakuNode*: WakuNode
method send*(
self: PublisherBase,
topic: PubsubTopic,
message: WakuMessage,
servicePeer: RemotePeerInfo,
): Future[Result[void, string]] {.base, async.} =
discard
# On error this must return the original error description, because the text is used to distinguish error types in metrics.

View File

@ -0,0 +1,180 @@
## Example showing how a resource restricted client may
## subscribe to messages without relay
import
std/options,
system/ansi_c,
chronicles,
chronos,
chronos/timer as chtimer,
stew/byteutils,
results,
serialization,
json_serialization as js
import
waku/[
common/logging,
node/peer_manager,
waku_node,
waku_core,
waku_filter_v2/client,
waku_filter_v2/common,
waku_core/multiaddrstr,
],
./tester_config,
./tester_message,
./statistics,
./diagnose_connections,
./service_peer_management,
./lpt_metrics
var actualFilterPeer {.threadvar.}: RemotePeerInfo
proc unsubscribe(
wakuNode: WakuNode, filterPubsubTopic: PubsubTopic, filterContentTopic: ContentTopic
) {.async.} =
notice "unsubscribing from filter"
let unsubscribeRes = await wakuNode.wakuFilterClient.unsubscribe(
actualFilterPeer, filterPubsubTopic, @[filterContentTopic]
)
if unsubscribeRes.isErr:
notice "unsubscribe request failed", err = unsubscribeRes.error
else:
notice "unsubscribe request successful"
proc maintainSubscription(
wakuNode: WakuNode,
filterPubsubTopic: PubsubTopic,
filterContentTopic: ContentTopic,
preventPeerSwitch: bool,
) {.async.} =
const maxFailedSubscribes = 3
const maxFailedServiceNodeSwitches = 10
var noFailedSubscribes = 0
var noFailedServiceNodeSwitches = 0
var isFirstPingOnNewPeer = true
const RetryWaitMs = 2.seconds # Quick retry interval
const SubscriptionMaintenanceMs = 30.seconds # Subscription maintenance interval
while true:
info "maintaining subscription at", peer = constructMultiaddrStr(actualFilterPeer)
# First use filter-ping to check if we have an active subscription
let pingErr = (await wakuNode.wakuFilterClient.ping(actualFilterPeer)).errorOr:
await sleepAsync(SubscriptionMaintenanceMs)
info "subscription is live."
continue
if isFirstPingOnNewPeer == false:
# Very first ping expected to fail as we have not yet subscribed at all
lpt_receiver_lost_subscription_count.inc()
isFirstPingOnNewPeer = false
# No subscription found. Let's subscribe.
error "ping failed.", error = pingErr
trace "no subscription found. Sending subscribe request"
let subscribeErr = (
await wakuNode.filterSubscribe(
some(filterPubsubTopic), filterContentTopic, actualFilterPeer
)
).errorOr:
await sleepAsync(SubscriptionMaintenanceMs)
if noFailedSubscribes > 0:
noFailedSubscribes -= 1
notice "subscribe request successful."
continue
noFailedSubscribes += 1
lpt_service_peer_failure_count.inc(
labelValues = ["receiver", actualFilterPeer.getAgent()]
)
error "Subscribe request failed.",
err = subscribeErr, peer = actualFilterPeer, failCount = noFailedSubscribes
# TODO: disconnect from failed actualFilterPeer
# asyncSpawn(wakuNode.peerManager.switch.disconnect(p))
# wakunode.peerManager.peerStore.delete(actualFilterPeer)
if noFailedSubscribes < maxFailedSubscribes:
await sleepAsync(RetryWaitMs) # Wait a bit before retrying
elif not preventPeerSwitch:
# try again with new peer without delay
actualFilterPeer = selectRandomServicePeer(
wakuNode.peerManager, some(actualFilterPeer), WakuFilterSubscribeCodec
).valueOr:
error "Failed to find new service peer. Exiting."
noFailedServiceNodeSwitches += 1
break
info "Found new peer for codec",
codec = filterPubsubTopic, peer = constructMultiaddrStr(actualFilterPeer)
noFailedSubscribes = 0
lpt_change_service_peer_count.inc(labelValues = ["receiver"])
isFirstPingOnNewPeer = true
else:
await sleepAsync(SubscriptionMaintenanceMs)
proc setupAndListen*(
wakuNode: WakuNode, conf: LiteProtocolTesterConf, servicePeer: RemotePeerInfo
) =
if isNil(wakuNode.wakuFilterClient):
# if we have not yet initialized the filter client, then do it, as the only way we can get here is
# by having a service peer discovered.
waitFor wakuNode.mountFilterClient()
info "Start receiving messages to service node using filter",
servicePeer = servicePeer
var stats: PerPeerStatistics
actualFilterPeer = servicePeer
let pushHandler = proc(
pubsubTopic: PubsubTopic, message: WakuMessage
): Future[void] {.async, closure.} =
let payloadStr = string.fromBytes(message.payload)
let testerMessage = js.Json.decode(payloadStr, ProtocolTesterMessage)
let msgHash = computeMessageHash(pubsubTopic, message).to0xHex
stats.addMessage(testerMessage.sender, testerMessage, msgHash)
notice "message received",
index = testerMessage.index,
count = testerMessage.count,
startedAt = $testerMessage.startedAt,
sinceStart = $testerMessage.sinceStart,
sincePrev = $testerMessage.sincePrev,
size = $testerMessage.size,
pubsubTopic = pubsubTopic,
hash = msgHash
wakuNode.wakuFilterClient.registerPushHandler(pushHandler)
let interval = millis(20000)
var printStats: CallbackFunc
# calculate the max wait after the last known message arrived before exiting:
# 20% of the expected message count times the expected interval, capped at 10 minutes
let maxWaitForLastMessage: Duration =
min(conf.messageInterval.milliseconds * (conf.numMessages div 5), 10.minutes)
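# For example, with the documented defaults (message-interval = 1000 ms, num-messages = 120)
# this is 1000 ms * (120 div 5) = 24 s, well below the 10-minute cap, so the receiver
# gives up roughly 24 s after the last message arrived.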
printStats = CallbackFunc(
proc(udata: pointer) {.gcsafe.} =
stats.echoStats()
if conf.numMessages > 0 and
waitFor stats.checkIfAllMessagesReceived(maxWaitForLastMessage):
waitFor unsubscribe(wakuNode, conf.getPubsubTopic(), conf.contentTopics[0])
info "All messages received. Exiting."
## for graceful shutdown through signal hooks
discard c_raise(ansi_c.SIGTERM)
else:
discard setTimer(Moment.fromNow(interval), printStats)
)
discard setTimer(Moment.fromNow(interval), printStats)
# Start maintaining subscription
asyncSpawn maintainSubscription(
wakuNode, conf.getPubsubTopic(), conf.contentTopics[0], conf.fixedServicePeer
)

View File

@ -0,0 +1,63 @@
#!/bin/sh
echo "I am a service node"
IP=$(ip a | grep "inet " | grep -Fv 127.0.0.1 | sed 's/.*inet \([^/]*\).*/\1/')
echo "Service node IP: ${IP}"
if [ -n "${SHARD}" ]; then
SHARD=--shard="${SHARD}"
else
SHARD=--shard="0"
fi
if [ -n "${CLUSTER_ID}" ]; then
CLUSTER_ID=--cluster-id="${CLUSTER_ID}"
fi
echo "STANDALONE: ${STANDALONE}"
if [ -z "${STANDALONE}" ]; then
RETRIES=${RETRIES:=20}
while [ -z "${BOOTSTRAP_ENR}" ] && [ ${RETRIES} -ge 0 ]; do
BOOTSTRAP_ENR=$(wget -qO- http://bootstrap:8645/debug/v1/info --header='Content-Type:application/json' 2> /dev/null | sed 's/.*"enrUri":"\([^"]*\)".*/\1/');
echo "Bootstrap node not ready, retrying (retries left: ${RETRIES})"
sleep 3
RETRIES=$(( $RETRIES - 1 ))
done
if [ -z "${BOOTSTRAP_ENR}" ]; then
echo "Could not get BOOTSTRAP_ENR and none provided. Failing"
exit 1
fi
echo "Using bootstrap node: ${BOOTSTRAP_ENR}"
fi
exec /usr/bin/wakunode\
--relay=true\
--filter=true\
--lightpush=true\
--store=false\
--rest=true\
--rest-admin=true\
--rest-private=true\
--rest-address=0.0.0.0\
--rest-allow-origin="*"\
--keep-alive=true\
--max-connections=300\
--dns-discovery=true\
--discv5-discovery=true\
--discv5-enr-auto-update=True\
--discv5-bootstrap-node=${BOOTSTRAP_ENR}\
--log-level=INFO\
--metrics-server=True\
--metrics-server-port=8003\
--metrics-server-address=0.0.0.0\
--nat=extip:${IP}\
${SHARD}\
${CLUSTER_ID}

View File

@ -0,0 +1,161 @@
#!/bin/sh
#set -x
if test -f .env; then
echo "Using .env file"
. $(pwd)/.env
fi
echo "I am a lite-protocol-tester node"
BINARY_PATH=$1
if [ ! -x "${BINARY_PATH}" ]; then
echo "Invalid binary path '${BINARY_PATH}'. Failing"
exit 1
fi
if [ "${2}" = "--help" ]; then
echo "You might want to check nwaku/apps/liteprotocoltester/README.md"
exec "${BINARY_PATH}" --help
exit 0
fi
FUNCTION=$2
if [ "${FUNCTION}" = "SENDER" ]; then
FUNCTION="--test-func=SENDER --lightpush-version=LEGACY"
SERVICENAME=lightpush-service
fi
if [ "${FUNCTION}" = "SENDERV3" ]; then
FUNCTION="--test-func=SENDER --lightpush-version=V3"
SERVICENAME=lightpush-service
fi
if [ "${FUNCTION}" = "RECEIVER" ]; then
FUNCTION=--test-func=RECEIVER
SERVICENAME=filter-service
fi
SERIVCE_NODE_ADDR=$3
if [ -z "${SERIVCE_NODE_ADDR}" ]; then
echo "Service node peer_id provided. Failing"
exit 1
fi
SELECTOR=$4
if [ -z "${SELECTOR}" ] || [ "${SELECTOR}" = "SERVICE" ]; then
SERVICE_NODE_DIRECT=true
elif [ "${SELECTOR}" = "BOOTSTRAP" ]; then
SERVICE_NODE_DIRECT=false
else
echo "Invalid selector '${SELECTOR}'. Failing"
exit 1
fi
DO_DETECT_SERVICENODE=0
if [ "${SERIVCE_NODE_ADDR}" = "servicenode" ]; then
DO_DETECT_SERVICENODE=1
SERIVCE_NODE_ADDR=""
SERVICENAME=servicenode
fi
if [ "${SERIVCE_NODE_ADDR}" = "waku-sim" ]; then
DO_DETECT_SERVICENODE=1
SERIVCE_NODE_ADDR=""
MY_EXT_IP=$(ip a | grep "inet " | grep -Fv 127.0.0.1 | sed 's/.*inet \([^/]*\).*/\1/')
else
MY_EXT_IP=$(wget -qO- --no-check-certificate https://api4.ipify.org)
fi
if [ $DO_DETECT_SERVICENODE -eq 1 ]; then
RETRIES=${RETRIES:=20}
while [ -z "${SERIVCE_NODE_ADDR}" ] && [ ${RETRIES} -ge 0 ]; do
SERVICE_DEBUG_INFO=$(wget -qO- http://${SERVICENAME}:8645/debug/v1/info --header='Content-Type:application/json' 2> /dev/null);
echo "SERVICE_DEBUG_INFO: ${SERVICE_DEBUG_INFO}"
SERIVCE_NODE_ADDR=$(wget -qO- http://${SERVICENAME}:8645/debug/v1/info --header='Content-Type:application/json' 2> /dev/null | sed 's/.*"listenAddresses":\["\([^"]*\)".*/\1/');
echo "Service node not ready, retrying (retries left: ${RETRIES})"
sleep 3
RETRIES=$(( $RETRIES - 1 ))
done
fi
if [ -z "${SERIVCE_NODE_ADDR}" ]; then
echo "Could not get SERIVCE_NODE_ADDR and none provided. Failing"
exit 1
fi
if $SERVICE_NODE_DIRECT; then
FULL_NODE="--service-node=${SERIVCE_NODE_ADDR} --fixed-service-peer"
else
FULL_NODE=--bootstrap-node="${SERIVCE_NODE_ADDR}"
fi
if [ -n "${SHARD}" ]; then
SHARD=--shard="${SHARD}"
else
SHARD=--shard="0"
fi
if [ -n "${CONTENT_TOPIC}" ]; then
CONTENT_TOPIC=--content-topic="${CONTENT_TOPIC}"
fi
if [ -n "${CLUSTER_ID}" ]; then
CLUSTER_ID=--cluster-id="${CLUSTER_ID}"
fi
if [ -n "${START_PUBLISHING_AFTER_SECS}" ]; then
START_PUBLISHING_AFTER_SECS=--start-publishing-after="${START_PUBLISHING_AFTER_SECS}"
fi
if [ -n "${MIN_MESSAGE_SIZE}" ]; then
MIN_MESSAGE_SIZE=--min-test-msg-size="${MIN_MESSAGE_SIZE}"
fi
if [ -n "${MAX_MESSAGE_SIZE}" ]; then
MAX_MESSAGE_SIZE=--max-test-msg-size="${MAX_MESSAGE_SIZE}"
fi
if [ -n "${NUM_MESSAGES}" ]; then
NUM_MESSAGES=--num-messages="${NUM_MESSAGES}"
fi
if [ -n "${MESSAGE_INTERVAL_MILLIS}" ]; then
MESSAGE_INTERVAL_MILLIS=--message-interval="${MESSAGE_INTERVAL_MILLIS}"
fi
if [ -n "${LOG_LEVEL}" ]; then
LOG_LEVEL=--log-level=${LOG_LEVEL}
else
LOG_LEVEL=--log-level=INFO
fi
echo "Running binary: ${BINARY_PATH}"
echo "Tester node: ${FUNCTION}"
echo "Using service node: ${SERIVCE_NODE_ADDR}"
echo "My external IP: ${MY_EXT_IP}"
exec "${BINARY_PATH}"\
--nat=extip:${MY_EXT_IP}\
--test-peers\
${LOG_LEVEL}\
${FULL_NODE}\
${MESSAGE_INTERVAL_MILLIS}\
${NUM_MESSAGES}\
${SHARD}\
${CONTENT_TOPIC}\
${CLUSTER_ID}\
${FUNCTION}\
${START_PUBLISHING_AFTER_SECS}\
${MIN_MESSAGE_SIZE}\
${MAX_MESSAGE_SIZE}
# --config-file=config.toml\

View File

@ -0,0 +1,119 @@
#!/bin/sh
#set -x
#echo "$@"
if test -f .env; then
echo "Using .env file"
. $(pwd)/.env
fi
echo "I am a lite-protocol-tester node"
BINARY_PATH=$1
if [ ! -x "${BINARY_PATH}" ]; then
echo "Invalid binary path '${BINARY_PATH}'. Failing"
exit 1
fi
if [ "${2}" = "--help" ]; then
echo "You might want to check nwaku/apps/liteprotocoltester/README.md"
exec "${BINARY_PATH}" --help
exit 0
fi
FUNCTION=$2
if [ "${FUNCTION}" = "SENDER" ]; then
FUNCTION="--test-func=SENDER --lightpush-version=LEGACY"
SERIVCE_NODE_ADDR=${LIGHTPUSH_SERVICE_PEER:-${LIGHTPUSH_BOOTSTRAP:-}}
NODE_ARG=${LIGHTPUSH_SERVICE_PEER:+--service-node="${LIGHTPUSH_SERVICE_PEER}"}
NODE_ARG=${NODE_ARG:---bootstrap-node="${LIGHTPUSH_BOOTSTRAP}"}
METRICS_PORT=--metrics-port="${PUBLISHER_METRICS_PORT:-8003}"
fi
if [ "${FUNCTION}" = "SENDERV3" ]; then
FUNCTION="--test-func=SENDER --lightpush-version=V3"
SERIVCE_NODE_ADDR=${LIGHTPUSH_SERVICE_PEER:-${LIGHTPUSH_BOOTSTRAP:-}}
NODE_ARG=${LIGHTPUSH_SERVICE_PEER:+--service-node="${LIGHTPUSH_SERVICE_PEER}"}
NODE_ARG=${NODE_ARG:---bootstrap-node="${LIGHTPUSH_BOOTSTRAP}"}
METRICS_PORT=--metrics-port="${PUBLISHER_METRICS_PORT:-8003}"
fi
if [ "${FUNCTION}" = "RECEIVER" ]; then
FUNCTION=--test-func=RECEIVER
SERIVCE_NODE_ADDR=${FILTER_SERVICE_PEER:-${FILTER_BOOTSTRAP:-}}
NODE_ARG=${FILTER_SERVICE_PEER:+--service-node="${FILTER_SERVICE_PEER}"}
NODE_ARG=${NODE_ARG:---bootstrap-node="${FILTER_BOOTSTRAP}"}
METRICS_PORT=--metrics-port="${RECEIVER_METRICS_PORT:-8003}"
fi
if [ -z "${SERIVCE_NODE_ADDR}" ]; then
echo "Service/Bootsrap node peer_id or enr is not provided. Failing"
exit 1
fi
MY_EXT_IP=$(wget -qO- --no-check-certificate https://api4.ipify.org)
if [ -n "${SHARD}" ]; then
SHARD=--shard="${SHARD}"
else
SHARD=--shard="0"
fi
if [ -n "${CONTENT_TOPIC}" ]; then
CONTENT_TOPIC=--content-topic="${CONTENT_TOPIC}"
fi
if [ -n "${CLUSTER_ID}" ]; then
CLUSTER_ID=--cluster-id="${CLUSTER_ID}"
fi
if [ -n "${START_PUBLISHING_AFTER_SECS}" ]; then
START_PUBLISHING_AFTER_SECS=--start-publishing-after="${START_PUBLISHING_AFTER_SECS}"
fi
if [ -n "${MIN_MESSAGE_SIZE}" ]; then
MIN_MESSAGE_SIZE=--min-test-msg-size="${MIN_MESSAGE_SIZE}"
fi
if [ -n "${MAX_MESSAGE_SIZE}" ]; then
MAX_MESSAGE_SIZE=--max-test-msg-size="${MAX_MESSAGE_SIZE}"
fi
if [ -n "${NUM_MESSAGES}" ]; then
NUM_MESSAGES=--num-messages="${NUM_MESSAGES}"
fi
if [ -n "${MESSAGE_INTERVAL_MILLIS}" ]; then
MESSAGE_INTERVAL_MILLIS=--message-interval="${MESSAGE_INTERVAL_MILLIS}"
fi
if [ -n "${LOG_LEVEL}" ]; then
LOG_LEVEL=--log-level=${LOG_LEVEL}
else
LOG_LEVEL=--log-level=INFO
fi
echo "Running binary: ${BINARY_PATH}"
echo "Node function is: ${FUNCTION}"
echo "Using service/bootstrap node as: ${NODE_ARG}"
echo "My external IP: ${MY_EXT_IP}"
exec "${BINARY_PATH}"\
--nat=extip:${MY_EXT_IP}\
--test-peers\
${LOG_LEVEL}\
${NODE_ARG}\
${MESSAGE_INTERVAL_MILLIS}\
${NUM_MESSAGES}\
${SHARD}\
${CONTENT_TOPIC}\
${CLUSTER_ID}\
${FUNCTION}\
${START_PUBLISHING_AFTER_SECS}\
${MIN_MESSAGE_SIZE}\
${MAX_MESSAGE_SIZE}\
${METRICS_PORT}

View File

@ -0,0 +1,118 @@
#!/bin/sh
#set -x
#echo "$@"
if test -f .env; then
echo "Using .env file"
. $(pwd)/.env
fi
echo "I am a lite-protocol-tester node"
BINARY_PATH=$1
if [ ! -x "${BINARY_PATH}" ]; then
echo "Invalid binary path '${BINARY_PATH}'. Failing"
exit 1
fi
if [ "${2}" = "--help" ]; then
echo "You might want to check nwaku/apps/liteprotocoltester/README.md"
exec "${BINARY_PATH}" --help
exit 0
fi
FUNCTION=$2
if [ "${FUNCTION}" = "SENDER" ]; then
FUNCTION="--test-func=SENDER --lightpush-version=LEGACY"
SERIVCE_NODE_ADDR=${LIGHTPUSH_SERVICE_PEER:-${LIGHTPUSH_BOOTSTRAP:-}}
NODE_ARG=${LIGHTPUSH_SERVICE_PEER:+--service-node="${LIGHTPUSH_SERVICE_PEER}"}
NODE_ARG=${NODE_ARG:---bootstrap-node="${LIGHTPUSH_BOOTSTRAP}"}
METRICS_PORT=--metrics-port="${PUBLISHER_METRICS_PORT:-8003}"
fi
if [ "${FUNCTION}" = "SENDERV3" ]; then
FUNCTION="--test-func=SENDER --lightpush-version=V3"
SERIVCE_NODE_ADDR=${LIGHTPUSH_SERVICE_PEER:-${LIGHTPUSH_BOOTSTRAP:-}}
NODE_ARG=${LIGHTPUSH_SERVICE_PEER:+--service-node="${LIGHTPUSH_SERVICE_PEER}"}
NODE_ARG=${NODE_ARG:---bootstrap-node="${LIGHTPUSH_BOOTSTRAP}"}
METRICS_PORT=--metrics-port="${PUBLISHER_METRICS_PORT:-8003}"
fi
if [ "${FUNCTION}" = "RECEIVER" ]; then
FUNCTION=--test-func=RECEIVER
SERIVCE_NODE_ADDR=${FILTER_SERVICE_PEER:-${FILTER_BOOTSTRAP:-}}
NODE_ARG=${FILTER_SERVICE_PEER:+--service-node="${FILTER_SERVICE_PEER}"}
NODE_ARG=${NODE_ARG:---bootstrap-node="${FILTER_BOOTSTRAP}"}
METRICS_PORT=--metrics-port="${RECEIVER_METRICS_PORT:-8003}"
fi
if [ -z "${SERIVCE_NODE_ADDR}" ]; then
echo "Service/Bootsrap node peer_id or enr is not provided. Failing"
exit 1
fi
MY_EXT_IP=$(wget -qO- --no-check-certificate https://api4.ipify.org)
if [ -n "${SHARD}" ]; then
SHARD=--shard=${SHARD}
else
SHARD=--shard=0
fi
if [ -n "${CONTENT_TOPIC}" ]; then
CONTENT_TOPIC=--content-topic="${CONTENT_TOPIC}"
fi
if [ -n "${CLUSTER_ID}" ]; then
CLUSTER_ID=--cluster-id="${CLUSTER_ID}"
fi
if [ -n "${START_PUBLISHING_AFTER}" ]; then
START_PUBLISHING_AFTER=--start-publishing-after="${START_PUBLISHING_AFTER}"
fi
if [ -n "${MIN_MESSAGE_SIZE}" ]; then
MIN_MESSAGE_SIZE=--min-test-msg-size="${MIN_MESSAGE_SIZE}"
fi
if [ -n "${MAX_MESSAGE_SIZE}" ]; then
MAX_MESSAGE_SIZE=--max-test-msg-size="${MAX_MESSAGE_SIZE}"
fi
if [ -n "${NUM_MESSAGES}" ]; then
NUM_MESSAGES=--num-messages="${NUM_MESSAGES}"
fi
if [ -n "${MESSAGE_INTERVAL_MILLIS}" ]; then
MESSAGE_INTERVAL_MILLIS=--message-interval="${MESSAGE_INTERVAL_MILLIS}"
fi
if [ -n "${LOG_LEVEL}" ]; then
LOG_LEVEL=--log-level=${LOG_LEVEL}
else
LOG_LEVEL=--log-level=INFO
fi
echo "Running binary: ${BINARY_PATH}"
echo "Node function is: ${FUNCTION}"
echo "Using service/bootstrap node as: ${NODE_ARG}"
echo "My external IP: ${MY_EXT_IP}"
exec "${BINARY_PATH}"\
--nat=extip:${MY_EXT_IP}\
${LOG_LEVEL}\
${NODE_ARG}\
${MESSAGE_INTERVAL_MILLIS}\
${NUM_MESSAGES}\
${SHARD}\
${CONTENT_TOPIC}\
${CLUSTER_ID}\
${FUNCTION}\
${START_PUBLISHING_AFTER}\
${MIN_MESSAGE_SIZE}\
${MAX_MESSAGE_SIZE}\
${METRICS_PORT}

View File

@ -0,0 +1,223 @@
{.push raises: [].}
import
std/[options, net, sysrand, random, strformat, strutils, sequtils],
chronicles,
chronos,
metrics,
libbacktrace,
libp2p/crypto/crypto,
confutils,
libp2p/wire
import
tools/confutils/cli_args,
waku/[
common/enr,
waku_node,
node/peer_manager,
waku_lightpush/common,
waku_relay,
waku_filter_v2,
waku_peer_exchange/protocol,
waku_core/multiaddrstr,
waku_core/topics/pubsub_topic,
waku_enr/capabilities,
waku_enr/sharding,
],
./tester_config,
./diagnose_connections,
./lpt_metrics
logScope:
topics = "service peer mgmt"
randomize()
proc translateToRemotePeerInfo*(peerAddress: string): Result[RemotePeerInfo, void] =
var peerInfo: RemotePeerInfo
var enrRec: enr.Record
if enrRec.fromURI(peerAddress):
trace "Parsed ENR", enrRec = $enrRec
peerInfo = enrRec.toRemotePeerInfo().valueOr:
error "failed to convert ENR to RemotePeerInfo", error = error
return err()
else:
peerInfo = parsePeerInfo(peerAddress).valueOr:
error "failed to parse node waku peer-exchange peerId", error = error
return err()
return ok(peerInfo)
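# Usage sketch (the addresses below are hypothetical): the helper accepts either an ENR URI
# or a plain multiaddr string and yields a RemotePeerInfo:
#   discard translateToRemotePeerInfo("enr:-IS4Q...")                                 # ENR path
#   discard translateToRemotePeerInfo("/ip4/127.0.0.1/tcp/60000/p2p/16Uiu2HAm...")    # multiaddr path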
## Retrieve peers from the PeerExchange partner and return one randomly selected peer
## among those successfully dialed.
## Note: This is kept for future use.
proc selectRandomCapablePeer*(
pm: PeerManager, codec: string, pubsubTopic: PubsubTopic
): Future[Option[RemotePeerInfo]] {.async.} =
var cap = Capabilities.Filter
if codec.contains("lightpush"):
cap = Capabilities.Lightpush
elif codec.contains("filter"):
cap = Capabilities.Filter
var supportivePeers = pm.switch.peerStore.getPeersByCapability(cap)
trace "Found supportive peers count", count = supportivePeers.len()
trace "Found supportive peers", supportivePeers = $supportivePeers
if supportivePeers.len == 0:
return none(RemotePeerInfo)
var found = none(RemotePeerInfo)
while found.isNone() and supportivePeers.len > 0:
let rndPeerIndex = rand(0 .. supportivePeers.len - 1)
let randomPeer = supportivePeers[rndPeerIndex]
info "Dialing random peer",
idx = $rndPeerIndex, peer = constructMultiaddrStr(randomPeer)
supportivePeers.delete(rndPeerIndex .. rndPeerIndex)
let connOpt = pm.dialPeer(randomPeer, codec)
if (await connOpt.withTimeout(10.seconds)):
if connOpt.value().isSome():
found = some(randomPeer)
info "Dialing successful",
peer = constructMultiaddrStr(randomPeer), codec = codec
else:
info "Dialing failed", peer = constructMultiaddrStr(randomPeer), codec = codec
else:
info "Timeout dialing service peer",
peer = constructMultiaddrStr(randomPeer), codec = codec
return found
# Debugging PX gathered peers connectivity
proc tryCallAllPxPeers*(
pm: PeerManager, codec: string, pubsubTopic: PubsubTopic
): Future[Option[seq[RemotePeerInfo]]] {.async.} =
var capability = Capabilities.Filter
if codec.contains("lightpush"):
capability = Capabilities.Lightpush
elif codec.contains("filter"):
capability = Capabilities.Filter
var supportivePeers = pm.switch.peerStore.getPeersByCapability(capability)
lpt_px_peers.set(supportivePeers.len)
info "Found supportive peers count", count = supportivePeers.len()
info "Found supportive peers", supportivePeers = $supportivePeers
if supportivePeers.len == 0:
return none(seq[RemotePeerInfo])
var okPeers: seq[RemotePeerInfo] = @[]
while supportivePeers.len > 0:
let rndPeerIndex = rand(0 .. supportivePeers.len - 1)
let randomPeer = supportivePeers[rndPeerIndex]
info "Dialing random peer",
idx = $rndPeerIndex, peer = constructMultiaddrStr(randomPeer)
supportivePeers.delete(rndPeerIndex, rndPeerIndex)
let connOpt = pm.dialPeer(randomPeer, codec)
if (await connOpt.withTimeout(10.seconds)):
if connOpt.value().isSome():
okPeers.add(randomPeer)
info "Dialing successful",
peer = constructMultiaddrStr(randomPeer),
agent = randomPeer.getAgent(),
codec = codec
lpt_dialed_peers.inc(labelValues = [randomPeer.getAgent()])
else:
lpt_dial_failures.inc(labelValues = [randomPeer.getAgent()])
error "Dialing failed",
peer = constructMultiaddrStr(randomPeer),
agent = randomPeer.getAgent(),
codec = codec
else:
lpt_dial_failures.inc(labelValues = [randomPeer.getAgent()])
error "Timeout dialing service peer",
peer = constructMultiaddrStr(randomPeer),
agent = randomPeer.getAgent(),
codec = codec
var okPeersStr: string = ""
for idx, peer in okPeers:
okPeersStr.add(
" " & $idx & ". | " & constructMultiaddrStr(peer) & " | agent: " &
peer.getAgent() & " | protos: " & $peer.protocols & " | caps: " &
$peer.enr.map(getCapabilities) & "\n"
)
echo "PX returned peers found callable for " & codec & " / " & $capability & ":\n"
echo okPeersStr
return some(okPeers)
proc pxLookupServiceNode*(
node: WakuNode, conf: LiteProtocolTesterConf
): Future[Result[bool, void]] {.async.} =
let codec: string = conf.getCodec()
if node.wakuPeerExchange.isNil():
let peerExchangeNode = translateToRemotePeerInfo(conf.bootstrapNode).valueOr:
error "Failed to parse bootstrap node - cannot use PeerExchange.",
node = conf.bootstrapNode
return err()
info "PeerExchange node", peer = constructMultiaddrStr(peerExchangeNode)
node.peerManager.addServicePeer(peerExchangeNode, WakuPeerExchangeCodec)
try:
await node.mountPeerExchange(some(conf.clusterId))
except CatchableError:
error "failed to mount waku peer-exchange protocol",
error = getCurrentExceptionMsg()
return err()
var trialCount = 5
while trialCount > 0:
let futPeers = node.fetchPeerExchangePeers(conf.reqPxPeers)
if not await futPeers.withTimeout(30.seconds):
notice "Cannot get peers from PX", round = 5 - trialCount
else:
futPeers.value().isOkOr:
info "PeerExchange reported error", error = futPeers.read().error
return err()
if conf.testPeers:
let peersOpt =
await tryCallAllPxPeers(node.peerManager, codec, conf.getPubsubTopic())
if peersOpt.isSome():
info "Found service peers for codec",
codec = codec, peer_count = peersOpt.get().len()
return ok(peersOpt.get().len > 0)
else:
let peerOpt =
await selectRandomCapablePeer(node.peerManager, codec, conf.getPubsubTopic())
if peerOpt.isSome():
info "Found service peer for codec", codec = codec, peer = peerOpt.get()
return ok(true)
await sleepAsync(5.seconds)
trialCount -= 1
return err()
var alreadyUsedServicePeers {.threadvar.}: seq[RemotePeerInfo]
## Select service peers by codec from peer store randomly.
proc selectRandomServicePeer*(
pm: PeerManager, actualPeer: Option[RemotePeerInfo], codec: string
): Result[RemotePeerInfo, void] =
if actualPeer.isSome():
alreadyUsedServicePeers.add(actualPeer.get())
let supportivePeers = pm.switch.peerStore.getPeersByProtocol(codec).filterIt(
it notin alreadyUsedServicePeers
)
if supportivePeers.len == 0:
return err()
let rndPeerIndex = rand(0 .. supportivePeers.len - 1)
return ok(supportivePeers[rndPeerIndex])
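# Usage sketch (node and currentPeer are illustrative names): pick a fresh filter service
# peer while excluding the one currently in use, mirroring how maintainSubscription
# switches peers after repeated subscribe failures:
#   let newPeer = selectRandomServicePeer(
#     node.peerManager, some(currentPeer), WakuFilterSubscribeCodec
#   ).valueOr:
#     error "no alternative service peer available"
#     return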

View File

@ -0,0 +1,328 @@
{.push raises: [].}
import
std/[sets, tables, sequtils, options, strformat],
chronos/timer as chtimer,
chronicles,
chronos,
results,
libp2p/peerid
from std/sugar import `=>`
import ./tester_message, ./lpt_metrics
type
ArrivalInfo = object
arrivedAt: Moment
prevArrivedAt: Moment
prevIndex: uint32
MessageInfo = tuple[msg: ProtocolTesterMessage, info: ArrivalInfo]
DupStat = tuple[hash: string, dupCount: int, size: uint64]
StatHelper = object
prevIndex: uint32
prevArrivedAt: Moment
lostIndices: HashSet[uint32]
seenIndices: HashSet[uint32]
maxIndex: uint32
duplicates: OrderedTable[uint32, DupStat]
Statistics* = object
received: Table[uint32, MessageInfo]
firstReceivedIdx*: uint32
allMessageCount*: uint32
receivedMessages*: uint32
misorderCount*: uint32
lateCount*: uint32
duplicateCount*: uint32
helper: StatHelper
PerPeerStatistics* = Table[string, Statistics]
func `$`*(a: Duration): string {.inline.} =
## Original stringify implementation from chronos/timer.nim is not capable of printing 0ns
## Returns string representation of Duration ``a`` as nanoseconds value.
if a.isZero:
return "0ns"
return chtimer.`$`(a)
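# For instance, with this override `$nanos(0)` renders as "0ns", which the default
# chronos formatting does not produce.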
proc init*(T: type Statistics, expectedMessageCount: int = 1000): T =
result.helper.prevIndex = 0
result.helper.maxIndex = 0
result.helper.seenIndices.init(expectedMessageCount)
result.received = initTable[uint32, MessageInfo](expectedMessageCount)
return result
proc addMessage*(
self: var Statistics, sender: string, msg: ProtocolTesterMessage, msgHash: string
) =
if self.allMessageCount == 0:
self.allMessageCount = msg.count
self.firstReceivedIdx = msg.index
elif self.allMessageCount != msg.count:
error "Message count mismatch at message",
index = msg.index, expected = self.allMessageCount, got = msg.count
let currentArrived: MessageInfo = (
msg: msg,
info: ArrivalInfo(
arrivedAt: Moment.now(),
prevArrivedAt: self.helper.prevArrivedAt,
prevIndex: self.helper.prevIndex,
),
)
lpt_receiver_received_bytes.inc(labelValues = [sender], amount = msg.size.int64)
if self.received.hasKeyOrPut(msg.index, currentArrived):
inc(self.duplicateCount)
self.helper.duplicates.mgetOrPut(msg.index, (msgHash, 0, msg.size)).dupCount.inc()
warn "Duplicate message",
index = msg.index,
hash = msgHash,
times_duplicated = self.helper.duplicates[msg.index].dupCount
lpt_receiver_duplicate_messages_count.inc(labelValues = [sender])
lpt_receiver_distinct_duplicate_messages_count.set(
labelValues = [sender], value = self.helper.duplicates.len()
)
return
## detect misorder arrival and possible lost messages
if self.helper.prevIndex + 1 < msg.index:
inc(self.misorderCount)
warn "Misordered message arrival",
index = msg.index, expected = self.helper.prevIndex + 1
elif self.helper.prevIndex > msg.index:
inc(self.lateCount)
warn "Late message arrival", index = msg.index, expected = self.helper.prevIndex + 1
self.helper.maxIndex = max(self.helper.maxIndex, msg.index)
self.helper.prevIndex = msg.index
self.helper.prevArrivedAt = currentArrived.info.arrivedAt
inc(self.receivedMessages)
lpt_receiver_received_messages_count.inc(labelValues = [sender])
lpt_receiver_missing_messages_count.set(
labelValues = [sender], value = (self.helper.maxIndex - self.receivedMessages).int64
)
proc addMessage*(
self: var PerPeerStatistics,
peerId: string,
msg: ProtocolTesterMessage,
msgHash: string,
) =
if not self.contains(peerId):
self[peerId] = Statistics.init()
let shortSenderId = PeerId.init(msg.sender).map(p => p.shortLog()).valueOr(msg.sender)
discard catch:
self[peerId].addMessage(shortSenderId, msg, msgHash)
lpt_receiver_sender_peer_count.set(value = self.len)
proc lastMessageArrivedAt*(self: Statistics): Option[Moment] =
if self.receivedMessages > 0:
return some(self.helper.prevArrivedAt)
return none(Moment)
proc lossCount*(self: Statistics): uint32 =
self.helper.maxIndex - self.receivedMessages
proc calcLatency*(self: Statistics): tuple[min, max, avg: Duration] =
var
minLatency = nanos(0)
maxLatency = nanos(0)
avgLatency = nanos(0)
if self.receivedMessages > 2:
try:
var prevArrivedAt = self.received[self.firstReceivedIdx].info.arrivedAt
for idx, (msg, arrival) in self.received.pairs:
if idx <= 1:
continue
let expectedDelay = nanos(msg.sincePrev)
## latency will be 0 if the message arrived in a shorter time than expected
var latency = arrival.arrivedAt - arrival.prevArrivedAt - expectedDelay
## we do not measure zero latency; it is unlikely to happen, but if it did it could
## distort the min-latency calculation, as we want the feasible minimum.
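# Example: if the sender scheduled 1000 ms between messages (sincePrev) and the observed
# gap between arrivals is 1150 ms, this message contributes 150 ms of latency.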
if latency > nanos(0):
if minLatency == nanos(0):
minLatency = latency
else:
minLatency = min(minLatency, latency)
maxLatency = max(maxLatency, latency)
avgLatency += latency
avgLatency = avgLatency div (self.receivedMessages - 1)
except KeyError:
error "Error while calculating latency: " & getCurrentExceptionMsg()
return (minLatency, maxLatency, avgLatency)
proc missingIndices*(self: Statistics): seq[uint32] =
var missing: seq[uint32] = @[]
for idx in 1 .. self.helper.maxIndex:
if not self.received.hasKey(idx):
missing.add(idx)
return missing
proc distinctDupCount(self: Statistics): int {.inline.} =
return self.helper.duplicates.len()
proc allDuplicates(self: Statistics): int {.inline.} =
var total = 0
for _, (_, dupCount, _) in self.helper.duplicates.pairs:
total += dupCount
return total
proc dupMsgs(self: Statistics): string =
var dupMsgs: string = ""
for idx, (hash, dupCount, size) in self.helper.duplicates.pairs:
dupMsgs.add(
" index: " & $idx & " | hash: " & hash & " | count: " & $dupCount & " | size: " &
$size & "\n"
)
return dupMsgs
proc echoStat*(self: Statistics, peerId: string) =
let (minL, maxL, avgL) = self.calcLatency()
lpt_receiver_latencies.set(labelValues = [peerId, "min"], value = minL.nanos())
lpt_receiver_latencies.set(labelValues = [peerId, "avg"], value = avgL.nanos())
lpt_receiver_latencies.set(labelValues = [peerId, "max"], value = maxL.nanos())
let printable = catch:
"""*------------------------------------------------------------------------------------------*
| Expected | Received | Target | Loss | Misorder | Late | |
|{self.helper.maxIndex:>11} |{self.receivedMessages:>11} |{self.allMessageCount:>11} |{self.lossCount():>11} |{self.misorderCount:>11} |{self.lateCount:>11} | |
*------------------------------------------------------------------------------------------*
| Latency stat: |
| min latency: {$minL:<73}|
| avg latency: {$avgL:<73}|
| max latency: {$maxL:<73}|
*------------------------------------------------------------------------------------------*
| Duplicate stat: |
| distinct duplicate messages: {$self.distinctDupCount():<57}|
| sum duplicates : {$self.allDuplicates():<57}|
Duplicated messages:
{self.dupMsgs()}
*------------------------------------------------------------------------------------------*
| Lost indices: |
| {self.missingIndices()} |
*------------------------------------------------------------------------------------------*""".fmt()
echo printable.valueOr("Error while printing statistics: " & error.msg)
proc jsonStat*(self: Statistics): string =
let (minL, maxL, avgL) = self.calcLatency()
let json = catch:
"""{{"expected":{self.helper.maxIndex},
"received": {self.receivedMessages},
"target": {self.allMessageCount},
"loss": {self.lossCount()},
"misorder": {self.misorderCount},
"late": {self.lateCount},
"duplicate": {self.duplicateCount},
"latency":
{{"avg": "{avgL}",
"min": "{minL}",
"max": "{maxL}"
}},
"lostIndices": {self.missingIndices()}
}}""".fmt()
return json.valueOr("{\"result:\": \"" & error.msg & "\"}")
proc echoStats*(self: var PerPeerStatistics) =
for peerId, stats in self.pairs:
let peerLine = catch:
"Receiver statistics from peer {peerId}".fmt()
peerLine.isOkOr:
echo "Error while printing statistics"
continue
echo peerLine.get()
stats.echoStat(peerId)
proc jsonStats*(self: PerPeerStatistics): string =
try:
#!fmt: off
var json = "{\"statistics\": ["
var first = true
for peerId, stats in self.pairs:
if first:
first = false
else:
json.add(", ")
json.add("{{\"sender\": \"{peerId}\", \"stat\":".fmt())
json.add(stats.jsonStat())
json.add("}")
json.add("]}")
return json
#!fmt: on
except CatchableError:
return
"{\"result:\": \"Error while generating json stats: " & getCurrentExceptionMsg() &
"\"}"
proc lastMessageArrivedAt*(self: PerPeerStatistics): Option[Moment] =
var lastArrivedAt = Moment.init(0, Millisecond)
for stat in self.values:
let lastMsgFromPeerAt = stat.lastMessageArrivedAt().valueOr:
continue
if lastMsgFromPeerAt > lastArrivedAt:
lastArrivedAt = lastMsgFromPeerAt
if lastArrivedAt == Moment.init(0, Millisecond):
return none(Moment)
return some(lastArrivedAt)
proc checkIfAllMessagesReceived*(
self: PerPeerStatistics, maxWaitForLastMessage: Duration
): Future[bool] {.async.} =
# if no peers have sent messages yet, assume we have just started.
if self.len == 0:
return false
# check whether, numerically, all messages have been received.
# this suggests we have already received at least one message from one peer
var isAllMessagesReceived = true
for stat in self.values:
if (stat.allMessageCount == 0 and stat.receivedMessages == 0) or
stat.helper.maxIndex < stat.allMessageCount:
isAllMessagesReceived = false
break
if not isAllMessagesReceived:
# if not all messages were received, we still need to check whether the last message arrived within the time frame
# to avoid waiting endlessly when the publishers have already quit.
let lastMessageAt = self.lastMessageArrivedAt()
if lastMessageAt.isNone():
return false
# the last message must have arrived within the time limit
if Moment.now() - lastMessageAt.get() < maxWaitForLastMessage:
return false
else:
info "No message since max wait time", maxWait = $maxWaitForLastMessage
## OK, the last message has arrived from all peers;
## let's check whether every message has been received,
## and if not, wait another 20 secs to give the system a chance to deliver them.
var shallWait = false
for stat in self.values:
if stat.receivedMessages < stat.allMessageCount:
shallWait = true
if shallWait:
await sleepAsync(20.seconds)
return true

View File

@ -0,0 +1,208 @@
import
results,
chronos,
confutils,
confutils/defs,
confutils/std/net,
confutils/toml/defs as confTomlDefs,
confutils/toml/std/net as confTomlNet,
libp2p/crypto/crypto,
libp2p/crypto/secp,
libp2p/multiaddress,
secp256k1
import
../../tools/confutils/
[cli_args, envvar as confEnvvarDefs, envvar_net as confEnvvarNet],
waku/[common/logging, waku_core, waku_core/topics/pubsub_topic]
export confTomlDefs, confTomlNet, confEnvvarDefs, confEnvvarNet
const
LitePubsubTopic* = PubsubTopic("/waku/2/rs/66/0")
LiteContentTopic* = ContentTopic("/tester/1/light-pubsub-example/proto")
DefaultMinTestMessageSizeStr* = "1KiB"
DefaultMaxTestMessageSizeStr* = "150KiB"
type TesterFunctionality* = enum
SENDER # pumps messages to the network
RECEIVER # gather and analyze messages from the network
type LightpushVersion* = enum
LEGACY # legacy lightpush protocol
V3 # lightpush v3 protocol
type LiteProtocolTesterConf* = object
configFile* {.
desc:
"Loads configuration from a TOML file (cmd-line parameters take precedence) for the light waku node",
name: "config-file"
.}: Option[InputFile]
## Log configuration
logLevel* {.
desc:
"Sets the log level for process. Supported levels: TRACE, DEBUG, INFO, NOTICE, WARN, ERROR or FATAL",
defaultValue: logging.LogLevel.DEBUG,
name: "log-level"
.}: logging.LogLevel
logFormat* {.
desc:
"Specifies what kind of logs should be written to stdout. Supported formats: TEXT, JSON",
defaultValue: logging.LogFormat.TEXT,
name: "log-format"
.}: logging.LogFormat
## Test configuration
serviceNode* {.
desc: "Peer multiaddr of the service node.", defaultValue: "", name: "service-node"
.}: string
bootstrapNode* {.
desc:
"Peer multiaddr of the bootstrap node. If `service-node` not set, it is used to retrieve potential service nodes of the network.",
defaultValue: "",
name: "bootstrap-node"
.}: string
nat* {.
desc:
"Specify method to use for determining public address. " &
"Must be one of: any, none, upnp, pmp, extip:<IP>.",
defaultValue: "any"
.}: string
testFunc* {.
desc: "Specifies the lite protocol tester side. Supported values: sender, receiver.",
defaultValue: TesterFunctionality.RECEIVER,
name: "test-func"
.}: TesterFunctionality
lightpushVersion* {.
desc: "Version of the sender to use. Supported values: legacy, v3.",
defaultValue: LightpushVersion.LEGACY,
name: "lightpush-version"
.}: LightpushVersion
numMessages* {.
desc: "Number of messages to send.", defaultValue: 120, name: "num-messages"
.}: uint32
startPublishingAfter* {.
desc: "Wait number of seconds before start publishing messages.",
defaultValue: 5,
name: "start-publishing-after"
.}: uint32
messageInterval* {.
desc: "Delay between messages in milliseconds.",
defaultValue: 1000,
name: "message-interval"
.}: uint32
shard* {.desc: "Shards index to subscribe to. ", defaultValue: 0, name: "shard".}:
uint16
contentTopics* {.
desc: "Default content topic to subscribe to. Argument may be repeated.",
defaultValue: @[LiteContentTopic],
name: "content-topic"
.}: seq[ContentTopic]
clusterId* {.
desc:
"Cluster id that the node is running in. Node in a different cluster id is disconnected.",
defaultValue: 0,
name: "cluster-id"
.}: uint16
minTestMessageSize* {.
desc:
"Minimum message size. Accepted units: KiB, KB, and B. e.g. 1024KiB; 1500 B; etc.",
defaultValue: DefaultMinTestMessageSizeStr,
name: "min-test-msg-size"
.}: string
maxTestMessageSize* {.
desc:
"Maximum message size. Accepted units: KiB, KB, and B. e.g. 1024KiB; 1500 B; etc.",
defaultValue: DefaultMaxTestMessageSizeStr,
name: "max-test-msg-size"
.}: string
## Tester REST service configuration
restAddress* {.
desc: "Listening address of the REST HTTP server.",
defaultValue: parseIpAddress("127.0.0.1"),
name: "rest-address"
.}: IpAddress
testPeers* {.
desc: "Run dial test on gathered PeerExchange peers.",
defaultValue: false,
name: "test-peers"
.}: bool
reqPxPeers* {.
desc: "Number of peers to request on PeerExchange.",
defaultValue: 100,
name: "req-px-peers"
.}: uint16
restPort* {.
desc: "Listening port of the REST HTTP server.",
defaultValue: 8654,
name: "rest-port"
.}: uint16
fixedServicePeer* {.
desc:
"Prevent changing the service peer in case of failures, the full test will stict to the first service peer in use.",
defaultValue: false,
name: "fixed-service-peer"
.}: bool
restAllowOrigin* {.
desc:
"Allow cross-origin requests from the specified origin." &
"Argument may be repeated." & "Wildcards: * or ? allowed." &
"Ex.: \"localhost:*\" or \"127.0.0.1:8080\"",
defaultValue: @["*"],
name: "rest-allow-origin"
.}: seq[string]
metricsPort* {.
desc: "Listening port of the Metrics HTTP server.",
defaultValue: 8003,
name: "metrics-port"
.}: uint16
{.push warning[ProveInit]: off.}
proc load*(T: type LiteProtocolTesterConf, version = ""): ConfResult[T] =
try:
let conf = LiteProtocolTesterConf.load(
version = version,
secondarySources = proc(
conf: LiteProtocolTesterConf, sources: auto
) {.gcsafe, raises: [ConfigurationError].} =
sources.addConfigFile(Envvar, InputFile("liteprotocoltester")),
)
ok(conf)
except CatchableError:
err(getCurrentExceptionMsg())
proc getPubsubTopic*(conf: LiteProtocolTesterConf): PubsubTopic =
return $RelayShard(clusterId: conf.clusterId, shardId: conf.shard)
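# For example, clusterId = 66 with shard = 0 yields "/waku/2/rs/66/0" (the LitePubsubTopic
# constant above), while the defaults (cluster 0, shard 0) yield "/waku/2/rs/0/0".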
proc getCodec*(conf: LiteProtocolTesterConf): string =
return
if conf.testFunc == TesterFunctionality.RECEIVER:
WakuFilterSubscribeCodec
else:
if conf.lightpushVersion == LightpushVersion.LEGACY:
WakuLegacyLightPushCodec
else:
WakuLightPushCodec
{.pop.}

View File

@ -0,0 +1,121 @@
{.push raises: [].}
import
chronicles,
json_serialization,
json_serialization/std/options,
json_serialization/lexer
import waku/rest_api/endpoint/serdes
type ProtocolTesterMessage* = object
sender*: string
index*: uint32
count*: uint32
startedAt*: int64
sinceStart*: int64
sincePrev*: int64
size*: uint64
proc writeValue*(
writer: var JsonWriter[RestJson], value: ProtocolTesterMessage
) {.raises: [IOError].} =
writer.beginRecord()
writer.writeField("sender", value.sender)
writer.writeField("index", value.index)
writer.writeField("count", value.count)
writer.writeField("startedAt", value.startedAt)
writer.writeField("sinceStart", value.sinceStart)
writer.writeField("sincePrev", value.sincePrev)
writer.writeField("size", value.size)
writer.endRecord()
proc readValue*(
reader: var JsonReader[RestJson], value: var ProtocolTesterMessage
) {.gcsafe, raises: [SerializationError, IOError].} =
var
sender: Option[string]
index: Option[uint32]
count: Option[uint32]
startedAt: Option[int64]
sinceStart: Option[int64]
sincePrev: Option[int64]
size: Option[uint64]
for fieldName in readObjectFields(reader):
case fieldName
of "sender":
if sender.isSome():
reader.raiseUnexpectedField(
"Multiple `sender` fields found", "ProtocolTesterMessage"
)
sender = some(reader.readValue(string))
of "index":
if index.isSome():
reader.raiseUnexpectedField(
"Multiple `index` fields found", "ProtocolTesterMessage"
)
index = some(reader.readValue(uint32))
of "count":
if count.isSome():
reader.raiseUnexpectedField(
"Multiple `count` fields found", "ProtocolTesterMessage"
)
count = some(reader.readValue(uint32))
of "startedAt":
if startedAt.isSome():
reader.raiseUnexpectedField(
"Multiple `startedAt` fields found", "ProtocolTesterMessage"
)
startedAt = some(reader.readValue(int64))
of "sinceStart":
if sinceStart.isSome():
reader.raiseUnexpectedField(
"Multiple `sinceStart` fields found", "ProtocolTesterMessage"
)
sinceStart = some(reader.readValue(int64))
of "sincePrev":
if sincePrev.isSome():
reader.raiseUnexpectedField(
"Multiple `sincePrev` fields found", "ProtocolTesterMessage"
)
sincePrev = some(reader.readValue(int64))
of "size":
if size.isSome():
reader.raiseUnexpectedField(
"Multiple `size` fields found", "ProtocolTesterMessage"
)
size = some(reader.readValue(uint64))
else:
unrecognizedFieldWarning(value)
if sender.isNone():
reader.raiseUnexpectedValue("Field `sender` is missing")
if index.isNone():
reader.raiseUnexpectedValue("Field `index` is missing")
if count.isNone():
reader.raiseUnexpectedValue("Field `count` is missing")
if startedAt.isNone():
reader.raiseUnexpectedValue("Field `startedAt` is missing")
if sinceStart.isNone():
reader.raiseUnexpectedValue("Field `sinceStart` is missing")
if sincePrev.isNone():
reader.raiseUnexpectedValue("Field `sincePrev` is missing")
if size.isNone():
reader.raiseUnexpectedValue("Field `size` is missing")
value = ProtocolTesterMessage(
sender: sender.get(),
index: index.get(),
count: count.get(),
startedAt: startedAt.get(),
sinceStart: sinceStart.get(),
sincePrev: sincePrev.get(),
size: size.get(),
)
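# For reference, the serialized form is a flat JSON object with the fields above;
# the values here are purely illustrative:
#   {"sender":"16Uiu2HAm...","index":1,"count":120,"startedAt":1700000000000,
#    "sinceStart":1000,"sincePrev":1000,"size":1024}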

View File

@ -0,0 +1,29 @@
import results, options, chronos
import waku/[waku_node, waku_core, waku_lightpush, waku_lightpush/common]
import publisher_base
type V3Publisher* = ref object of PublisherBase
proc new*(T: type V3Publisher, wakuNode: WakuNode): T =
if isNil(wakuNode.wakuLightpushClient):
wakuNode.mountLightPushClient()
return V3Publisher(wakuNode: wakuNode)
method send*(
self: V3Publisher,
topic: PubsubTopic,
message: WakuMessage,
servicePeer: RemotePeerInfo,
): Future[Result[void, string]] {.async.} =
# On error this must return the original error description, as the text is used to distinguish error types in metrics.
discard (
await self.wakuNode.lightpushPublish(some(topic), message, some(servicePeer))
).valueOr:
if error.code == LightPushErrorCode.NO_PEERS_TO_RELAY and
error.desc != some("No peers for topic, skipping publish"):
# TODO: We need better separation of errors happening on the client side or the server side.
return err("dial_failure")
else:
return err($error.code)
return ok()

View File

@ -18,13 +18,26 @@ networkmonitor [OPTIONS]...
The following options are available:
-l, --log-level Sets the log level [=LogLevel.DEBUG].
-l, --log-level Sets the log level [=LogLevel.INFO].
-t, --timeout Timeout to consider that the connection failed [=chronos.seconds(10)].
-b, --bootstrap-node Bootstrap ENR node. Argument may be repeated. [=@[""]].
--dns-discovery-url URL for DNS node list in format 'enrtree://<key>@<fqdn>'.
--pubsub-topic Default pubsub topic to subscribe to. Argument may be repeated..
-r, --refresh-interval How often new peers are discovered and connected to (in seconds) [=5].
--cluster-id Cluster id that the node is running in. Node in a different cluster id is
disconnected. [=1].
--rln-relay Enable spam protection through rln-relay: true|false [=true].
--rln-relay-dynamic Enable waku-rln-relay with on-chain dynamic group management: true|false
[=true].
--rln-relay-eth-client-address HTTP address of an Ethereum testnet client e.g., http://localhost:8540/
[=http://localhost:8540/].
--rln-relay-eth-contract-address Address of membership contract on an Ethereum testnet.
--rln-relay-epoch-sec Epoch size in seconds used to rate limit RLN memberships. Default is 1 second.
[=1].
--rln-relay-user-message-limit Set a user message limit for the rln membership registration. Must be a positive
integer. Default is 1. [=1].
--metrics-server Enable the metrics server: true|false [=true].
--metrics-server-address Listening address of the metrics server. [=ValidIpAddress.init("127.0.0.1")].
--metrics-server-address Listening address of the metrics server. [=parseIpAddress("127.0.0.1")].
--metrics-server-port Listening HTTP port of the metrics server. [=8008].
--metrics-rest-address Listening address of the metrics rest server. [=127.0.0.1].
--metrics-rest-port Listening HTTP port of the metrics rest server. [=8009].
@ -35,7 +48,7 @@ The following options are available:
Connect to the network through a given bootstrap node, with default parameters. See metrics section for the data that it exposes.
```console
./build/networkmonitor --log-level=INFO --b="enr:-Nm4QOdTOKZJKTUUZ4O_W932CXIET-M9NamewDnL78P5u9DOGnZlK0JFZ4k0inkfe6iY-0JAaJVovZXc575VV3njeiABgmlkgnY0gmlwhAjS3ueKbXVsdGlhZGRyc7g6ADg2MW5vZGUtMDEuYWMtY24taG9uZ2tvbmctYy53YWt1djIucHJvZC5zdGF0dXNpbS5uZXQGH0DeA4lzZWNwMjU2azGhAo0C-VvfgHiXrxZi3umDiooXMGY9FvYj5_d1Q4EeS7eyg3RjcIJ2X4N1ZHCCIyiFd2FrdTIP"
./build/networkmonitor --log-level=INFO --b="enr:-QEkuEB3WHNS-xA3RDpfu9A2Qycr3bN3u7VoArMEiDIFZJ66F1EB3d4wxZN1hcdcOX-RfuXB-MQauhJGQbpz3qUofOtLAYJpZIJ2NIJpcIQI2SVcim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmFjLWNuLWhvbmdrb25nLWMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwA2Ni9ub2RlLTAxLmFjLWNuLWhvbmdrb25nLWMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQPK35Nnz0cWUtSAhBp7zvHEhyU_AqeQUlqzLiLxfP2L4oN0Y3CCdl-DdWRwgiMohXdha3UyDw"
```
```console
@ -54,14 +67,14 @@ Metrics are divided into two categories:
The following metrics are available. See `http://localhost:8008/metrics`
* `peer_type_as_per_enr`: Number of peers supporting each capability according the the ENR (Relay, Store, Lightpush, Filter)
* `peer_type_as_per_enr`: Number of peers supporting each capability according to the ENR (Relay, Store, Lightpush, Filter)
* `peer_type_as_per_protocol`: Number of peers supporting each protocol, after a successful connection)
* `peer_user_agents`: List of useragents found in the network and their count
Other relevant metrics reused from `nim-eth`:
* `routing_table_nodes`: Inherited from nim-eth, number of nodes in the routing table
* `discovery_message_requests_outgoing_total`: Inherited from nim-eth, number of outging discovery requests, useful to know if the node is actiely looking for new peers
* `discovery_message_requests_outgoing_total`: Inherited from nim-eth, number of outgoing discovery requests, useful to know if the node is actively looking for new peers
### Custom Metrics

View File

@ -0,0 +1,34 @@
version: '3.8'
networks:
monitoring:
driver: bridge
volumes:
prometheus-data:
driver: local
grafana-data:
driver: local
# Services definitions
services:
prometheus:
image: docker.io/prom/prometheus:latest
container_name: prometheus
ports:
- 9090:9090
command:
- '--config.file=/etc/prometheus/prometheus.yaml'
volumes:
- ./prometheus.yaml:/etc/prometheus/prometheus.yaml:ro
- ./data:/prometheus
restart: unless-stopped
grafana:
image: grafana/grafana-oss:latest
container_name: grafana
ports:
- '3000:3000'
volumes:
- grafana-data:/var/lib/grafana
restart: unless-stopped

View File

@ -1,12 +1,8 @@
when (NimMajor, NimMinor) < (1, 4):
{.push raises: [Defect].}
else:
{.push raises: [].}
{.push raises: [].}
import
std/[tables,strutils,times,sequtils],
stew/results,
stew/shims/net,
std/[net, tables, strutils, times, sequtils, random, sugar],
results,
chronicles,
chronicles/topics_registry,
chronos,
@ -21,12 +17,18 @@ import
metrics/chronos_httpserver,
presto/[route, server, client]
import
../../waku/waku_core,
../../waku/node/peer_manager,
../../waku/waku_node,
../../waku/waku_enr,
../../waku/waku_discv5,
../../waku/waku_dnsdisc,
waku/[
waku_core,
node/peer_manager,
waku_node,
waku_enr,
discovery/waku_discv5,
discovery/waku_dnsdisc,
waku_relay,
waku_rln_relay,
factory/builder,
factory/networks_config,
],
./networkmonitor_metrics,
./networkmonitor_config,
./networkmonitor_utils
@ -37,23 +39,44 @@ logScope:
const ReconnectTime = 60
const MaxConnectionRetries = 5
const ResetRetriesAfter = 1200
const AvgPingWindow = 10.0
const PingSmoothing = 0.3
const MaxConnectedPeers = 150
const git_version* {.strdefine.} = "n/a"
proc setDiscoveredPeersCapabilities(
routingTableNodes: seq[Node]) =
proc setDiscoveredPeersCapabilities(routingTableNodes: seq[waku_enr.Record]) =
for capability in @[Relay, Store, Filter, Lightpush]:
let nOfNodesWithCapability = routingTableNodes.countIt(it.record.supportsCapability(capability))
info "capabilities as per ENR waku flag", capability=capability, amount=nOfNodesWithCapability
networkmonitor_peer_type_as_per_enr.set(int64(nOfNodesWithCapability), labelValues = [$capability])
let nOfNodesWithCapability =
routingTableNodes.countIt(it.supportsCapability(capability))
info "capabilities as per ENR waku flag",
capability = capability, amount = nOfNodesWithCapability
networkmonitor_peer_type_as_per_enr.set(
int64(nOfNodesWithCapability), labelValues = [$capability]
)
proc setDiscoveredPeersCluster(routingTableNodes: seq[Node]) =
var clusters: CountTable[uint16]
for node in routingTableNodes:
let typedRec = node.record.toTyped().valueOr:
clusters.inc(0)
continue
let relayShard = typedRec.relaySharding().valueOr:
clusters.inc(0)
continue
clusters.inc(relayShard.clusterId)
for (key, value) in clusters.pairs:
networkmonitor_peer_cluster_as_per_enr.set(int64(value), labelValues = [$key])
proc analyzePeer(
customPeerInfo: CustomPeerInfoRef,
peerInfo: RemotePeerInfo,
node: WakuNode,
timeout: chronos.Duration
): Future[Result[string, string]] {.async.} =
customPeerInfo: CustomPeerInfoRef,
peerInfo: RemotePeerInfo,
node: WakuNode,
timeout: chronos.Duration,
): Future[Result[string, string]] {.async.} =
var pingDelay: chronos.Duration
proc ping(): Future[Result[void, string]] {.async, gcsafe.} =
@ -61,12 +84,11 @@ proc analyzePeer(
let conn = await node.switch.dial(peerInfo.peerId, peerInfo.addrs, PingCodec)
pingDelay = await node.libp2pPing.ping(conn)
return ok()
except CatchableError:
var msg = getCurrentExceptionMsg()
if msg == "Future operation cancelled!":
msg = "timedout"
warn "failed to ping the peer", peer=peerInfo, err=msg
warn "failed to ping the peer", peer = peerInfo, err = msg
customPeerInfo.connError = msg
return err("could not ping peer: " & msg)
@ -78,36 +100,45 @@ proc analyzePeer(
return err(customPeerInfo.connError)
customPeerInfo.connError = ""
info "successfully pinged peer", peer=peerInfo, duration=pingDelay.millis
info "successfully pinged peer", peer = peerInfo, duration = pingDelay.millis
networkmonitor_peer_ping.observe(pingDelay.millis)
if customPeerInfo.avgPingDuration == 0.millis:
customPeerInfo.avgPingDuration = pingDelay
# We are using a smoothed moving average
customPeerInfo.avgPingDuration =
if customPeerInfo.avgPingDuration.millis == 0:
pingDelay
else:
let newAvg =
(float64(pingDelay.millis) * PingSmoothing) +
float64(customPeerInfo.avgPingDuration.millis) * (1.0 - PingSmoothing)
int64(newAvg).millis
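# For example, with PingSmoothing = 0.3, a previous average of 100 ms and a new ping of
# 200 ms yield 0.3 * 200 + 0.7 * 100 = 130 ms as the new smoothed average.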
# TODO: check why the calculation ends up losing precision
customPeerInfo.avgPingDuration = int64((float64(customPeerInfo.avgPingDuration.millis) * (AvgPingWindow - 1.0) + float64(pingDelay.millis)) / AvgPingWindow).millis
customPeerInfo.lastPingDuration = pingDelay
return ok(customPeerInfo.peerId)
proc shouldReconnect(customPeerInfo: CustomPeerInfoRef): bool =
let reconnetIntervalCheck = getTime().toUnix() >= customPeerInfo.lastTimeConnected + ReconnectTime
let reconnetIntervalCheck =
getTime().toUnix() >= customPeerInfo.lastTimeConnected + ReconnectTime
var retriesCheck = customPeerInfo.retries < MaxConnectionRetries
if not retriesCheck and getTime().toUnix() >= customPeerInfo.lastTimeConnected + ResetRetriesAfter:
if not retriesCheck and
getTime().toUnix() >= customPeerInfo.lastTimeConnected + ResetRetriesAfter:
customPeerInfo.retries = 0
retriesCheck = true
info "resetting retries counter", peerId=customPeerInfo.peerId
info "resetting retries counter", peerId = customPeerInfo.peerId
return reconnetIntervalCheck and retriesCheck
# TODO: Split in discover, connect
proc setConnectedPeersMetrics(discoveredNodes: seq[Node],
node: WakuNode,
timeout: chronos.Duration,
restClient: RestClientRef,
allPeers: CustomPeersTableRef) {.async.} =
proc setConnectedPeersMetrics(
discoveredNodes: seq[waku_enr.Record],
node: WakuNode,
timeout: chronos.Duration,
restClient: RestClientRef,
allPeers: CustomPeersTableRef,
) {.async.} =
let currentTime = getTime().toUnix()
var newPeers = 0
@ -115,22 +146,22 @@ proc setConnectedPeersMetrics(discoveredNodes: seq[Node],
var analyzeFuts: seq[Future[Result[string, string]]]
var (inConns, outConns) = node.peer_manager.connectedPeers(WakuRelayCodec)
info "connected peers", inConns = inConns.len, outConns = outConns.len
shuffle(outConns)
if outConns.len >= toInt(MaxConnectedPeers / 2):
for p in outConns[0 ..< toInt(outConns.len / 2)]:
trace "Pruning Peer", Peer = $p
asyncSpawn(node.switch.disconnect(p))
# iterate all newly discovered nodes
for discNode in discoveredNodes:
let typedRecord = discNode.record.toTypedRecord()
if not typedRecord.isOk():
warn "could not convert record to typed record", record=discNode.record
continue
let peerRes = toRemotePeerInfo(discNode)
let secp256k1 = typedRecord.get().secp256k1
if not secp256k1.isSome():
warn "could not get secp256k1 key", typedRecord=typedRecord.get()
continue
let peerRes = toRemotePeerInfo(discNode.record)
let peerInfo = peerRes.valueOr():
warn "error converting record to remote peer info", record=discNode.record
let peerInfo = peerRes.valueOr:
warn "error converting record to remote peer info", record = discNode
continue
# create new entry if new peerId found
@ -140,26 +171,32 @@ proc setConnectedPeersMetrics(discoveredNodes: seq[Node],
allPeers[peerId] = CustomPeerInfoRef(peerId: peerId)
newPeers += 1
else:
info "already seen", peerId=peerId
info "already seen", peerId = peerId
let customPeerInfo = allPeers[peerId]
customPeerInfo.lastTimeDiscovered = currentTime
customPeerInfo.enr = discNode.record.toURI()
customPeerInfo.enrCapabilities = discNode.record.getCapabilities().mapIt($it)
customPeerInfo.enr = discNode.toURI()
customPeerInfo.enrCapabilities = discNode.getCapabilities().mapIt($it)
customPeerInfo.discovered += 1
if not typedRecord.get().ip.isSome():
warn "ip field is not set", record=typedRecord.get()
for maddr in peerInfo.addrs:
if $maddr notin customPeerInfo.maddrs:
customPeerInfo.maddrs.add $maddr
let typedRecord = discNode.toTypedRecord().valueOr:
warn "could not convert record to typed record", record = discNode
continue
let ipAddr = typedRecord.ip.valueOr:
warn "ip field is not set", record = typedRecord
continue
let ip = $typedRecord.get().ip.get().join(".")
customPeerInfo.ip = ip
customPeerInfo.ip = $ipAddr.join(".")
# try to ping the peer
if shouldReconnect(customPeerInfo):
if customPeerInfo.retries > 0:
warn "trying to dial failed peer again", peerId=peerId, retry=customPeerInfo.retries
warn "trying to dial failed peer again",
peerId = peerId, retry = customPeerInfo.retries
analyzeFuts.add(analyzePeer(customPeerInfo, peerInfo, node, timeout))
# Wait for all connection attempts to finish
@ -167,16 +204,16 @@ proc setConnectedPeersMetrics(discoveredNodes: seq[Node],
for peerIdFut in analyzedPeers:
let peerIdRes = await peerIdFut
let peerIdStr = peerIdRes.valueOr():
let peerIdStr = peerIdRes.valueOr:
continue
successfulConnections += 1
let peerId = PeerId.init(peerIdStr).valueOr():
warn "failed to parse peerId", peerId=peerIdStr
let peerId = PeerId.init(peerIdStr).valueOr:
warn "failed to parse peerId", peerId = peerIdStr
continue
var customPeerInfo = allPeers[peerIdStr]
debug "connected to peer", peer=customPeerInfo[]
info "connected to peer", peer = customPeerInfo[]
# after connection, get supported protocols
let lp2pPeerStore = node.switch.peerStore
@ -188,9 +225,9 @@ proc setConnectedPeersMetrics(discoveredNodes: seq[Node],
let nodeUserAgent = lp2pPeerStore[AgentBook][peerId]
customPeerInfo.userAgent = nodeUserAgent
info "number of newly discovered peers", amount=newPeers
info "number of newly discovered peers", amount = newPeers
# inform the total connections that we did in this round
info "number of successful connections", amount=successfulConnections
info "number of successful connections", amount = successfulConnections
proc updateMetrics(allPeersRef: CustomPeersTableRef) {.gcsafe.} =
var allProtocols: Table[string, int]
@ -204,8 +241,9 @@ proc updateMetrics(allPeersRef: CustomPeersTableRef) {.gcsafe.} =
for protocol in peerInfo.supportedProtocols:
allProtocols[protocol] = allProtocols.mgetOrPut(protocol, 0) + 1
# store available user-agents in the network
allAgentStrings[peerInfo.userAgent] = allAgentStrings.mgetOrPut(peerInfo.userAgent, 0) + 1
# store available user-agents in the network
allAgentStrings[peerInfo.userAgent] =
allAgentStrings.mgetOrPut(peerInfo.userAgent, 0) + 1
if peerInfo.country != "":
countries[peerInfo.country] = countries.mgetOrPut(peerInfo.country, 0) + 1
@ -216,25 +254,32 @@ proc updateMetrics(allPeersRef: CustomPeersTableRef) {.gcsafe.} =
networkmonitor_peer_count.set(int64(connectedPeers), labelValues = ["true"])
networkmonitor_peer_count.set(int64(failedPeers), labelValues = ["false"])
# update count on each protocol
# update count on each protocol
for protocol in allProtocols.keys():
let countOfProtocols = allProtocols.mgetOrPut(protocol, 0)
networkmonitor_peer_type_as_per_protocol.set(int64(countOfProtocols), labelValues = [protocol])
info "supported protocols in the network", protocol=protocol, count=countOfProtocols
networkmonitor_peer_type_as_per_protocol.set(
int64(countOfProtocols), labelValues = [protocol]
)
info "supported protocols in the network",
protocol = protocol, count = countOfProtocols
# update count on each user-agent
for userAgent in allAgentStrings.keys():
let countOfUserAgent = allAgentStrings.mgetOrPut(userAgent, 0)
networkmonitor_peer_user_agents.set(int64(countOfUserAgent), labelValues = [userAgent])
info "user agents participating in the network", userAgent=userAgent, count=countOfUserAgent
networkmonitor_peer_user_agents.set(
int64(countOfUserAgent), labelValues = [userAgent]
)
info "user agents participating in the network",
userAgent = userAgent, count = countOfUserAgent
for country in countries.keys():
let peerCount = countries.mgetOrPut(country, 0)
networkmonitor_peer_country_count.set(int64(peerCount), labelValues = [country])
info "number of peers per country", country=country, count=peerCount
info "number of peers per country", country = country, count = peerCount
proc populateInfoFromIp(allPeersRef: CustomPeersTableRef,
restClient: RestClientRef) {.async.} =
proc populateInfoFromIp(
allPeersRef: CustomPeersTableRef, restClient: RestClientRef
) {.async.} =
for peer in allPeersRef.keys():
if allPeersRef[peer].country != "" and allPeersRef[peer].city != "":
continue
@ -249,7 +294,7 @@ proc populateInfoFromIp(allPeersRef: CustomPeersTableRef,
let response = await restClient.ipToLocation(allPeersRef[peer].ip)
location = response.data
except CatchableError:
warn "could not get location", ip=allPeersRef[peer].ip
warn "could not get location", ip = allPeersRef[peer].ip
continue
allPeersRef[peer].country = location.country
allPeersRef[peer].city = location.city
@ -257,38 +302,44 @@ proc populateInfoFromIp(allPeersRef: CustomPeersTableRef,
# TODO: Split in discovery, connections, and ip2location
# crawls the network discovering peers and trying to connect to them
# metrics are processed and exposed
proc crawlNetwork(node: WakuNode,
wakuDiscv5: WakuDiscoveryV5,
restClient: RestClientRef,
conf: NetworkMonitorConf,
allPeersRef: CustomPeersTableRef) {.async.} =
proc crawlNetwork(
node: WakuNode,
wakuDiscv5: WakuDiscoveryV5,
restClient: RestClientRef,
conf: NetworkMonitorConf,
allPeersRef: CustomPeersTableRef,
) {.async.} =
let crawlInterval = conf.refreshInterval * 1000
while true:
let startTime = Moment.now()
# discover new random nodes
let discoveredNodes = await wakuDiscv5.protocol.queryRandom()
let discoveredNodes = await wakuDiscv5.findRandomPeers()
# nodes are nested into bucket, flat it
let flatNodes = wakuDiscv5.protocol.routingTable.buckets.mapIt(it.nodes).flatten()
# populate metrics related to capabilities as advertised by the ENR (see waku field)
setDiscoveredPeersCapabilities(flatNodes)
setDiscoveredPeersCapabilities(discoveredNodes)
# populate cluster metrics as advertised by the ENR
setDiscoveredPeersCluster(flatNodes)
# tries to connect to all newly discovered nodes
# and populates metrics related to peers we could connect
# note random discovered nodes can be already known
await setConnectedPeersMetrics(discoveredNodes, node, conf.timeout, restClient, allPeersRef)
await setConnectedPeersMetrics(
discoveredNodes, node, conf.timeout, restClient, allPeersRef
)
updateMetrics(allPeersRef)
# populate info from ip addresses
await populateInfoFromIp(allPeersRef, restClient)
let totalNodes = flatNodes.len
let seenNodes = flatNodes.countIt(it.seen)
let totalNodes = discoveredNodes.len
#let seenNodes = totalNodes
info "discovered nodes: ", total=totalNodes, seen=seenNodes
info "discovered nodes: ", total = totalNodes #, seen = seenNodes
# Notes:
# we dont run ipMajorityLoop
@ -296,43 +347,47 @@ proc crawlNetwork(node: WakuNode,
let endTime = Moment.now()
let elapsed = (endTime - startTime).nanos
info "crawl duration", time=elapsed.millis
info "crawl duration", time = elapsed.millis
await sleepAsync(crawlInterval.millis - elapsed.millis)
proc retrieveDynamicBootstrapNodes(dnsDiscovery: bool, dnsDiscoveryUrl: string, dnsDiscoveryNameServers: seq[ValidIpAddress]): Result[seq[RemotePeerInfo], string] =
if dnsDiscovery and dnsDiscoveryUrl != "":
proc retrieveDynamicBootstrapNodes(
dnsDiscoveryUrl: string, dnsAddrsNameServers: seq[IpAddress]
): Future[Result[seq[RemotePeerInfo], string]] {.async.} =
## Retrieve dynamic bootstrap nodes (DNS discovery)
if dnsDiscoveryUrl != "":
# DNS discovery
debug "Discovering nodes using Waku DNS discovery", url=dnsDiscoveryUrl
info "Discovering nodes using Waku DNS discovery", url = dnsDiscoveryUrl
var nameServers: seq[TransportAddress]
for ip in dnsDiscoveryNameServers:
for ip in dnsAddrsNameServers:
nameServers.add(initTAddress(ip, Port(53))) # Assume all servers use port 53
let dnsResolver = DnsResolver.new(nameServers)
proc resolver(domain: string): Future[string] {.async, gcsafe.} =
trace "resolving", domain=domain
trace "resolving", domain = domain
let resolved = await dnsResolver.resolveTxt(domain)
return resolved[0] # Use only first answer
if resolved.len > 0:
return resolved[0] # Use only first answer
var wakuDnsDiscovery = WakuDnsDiscovery.init(dnsDiscoveryUrl, resolver)
if wakuDnsDiscovery.isOk():
return wakuDnsDiscovery.get().findPeers()
.mapErr(proc (e: cstring): string = $e)
else:
warn "Failed to init Waku DNS discovery"
var wakuDnsDiscovery = WakuDnsDiscovery.init(dnsDiscoveryUrl, resolver).errorOr:
return (await value.findPeers()).mapErr(e => $e)
warn "Failed to init Waku DNS discovery"
debug "No method for retrieving dynamic bootstrap nodes specified."
info "No method for retrieving dynamic bootstrap nodes specified."
ok(newSeq[RemotePeerInfo]()) # Return an empty seq by default
proc getBootstrapFromDiscDns(conf: NetworkMonitorConf): Result[seq[enr.Record], string] =
proc getBootstrapFromDiscDns(
conf: NetworkMonitorConf
): Future[Result[seq[enr.Record], string]] {.async.} =
try:
let dnsNameServers = @[ValidIpAddress.init("1.1.1.1"), ValidIpAddress.init("1.0.0.1")]
let dynamicBootstrapNodesRes = retrieveDynamicBootstrapNodes(true, conf.dnsDiscoveryUrl, dnsNameServers)
if not dynamicBootstrapNodesRes.isOk():
error("failed discovering peers from DNS")
let dynamicBootstrapNodes = dynamicBootstrapNodesRes.get()
let dnsNameServers = @[parseIpAddress("1.1.1.1"), parseIpAddress("1.0.0.1")]
let dynamicBootstrapNodes = (
await retrieveDynamicBootstrapNodes(conf.dnsDiscoveryUrl, dnsNameServers)
).valueOr:
return err("Failed retrieving dynamic bootstrap nodes: " & $error)
# select dynamic bootstrap nodes that have an ENR containing a udp port.
# Discv5 only supports UDP https://github.com/ethereum/devp2p/blob/master/discv5/discv5-theory.md)
@ -342,22 +397,28 @@ proc getBootstrapFromDiscDns(conf: NetworkMonitorConf): Result[seq[enr.Record],
let
enr = n.enr.get()
tenrRes = enr.toTypedRecord()
if tenrRes.isOk() and (tenrRes.get().udp.isSome() or tenrRes.get().udp6.isSome()):
if tenrRes.isOk() and (
tenrRes.get().udp.isSome() or tenrRes.get().udp6.isSome()
):
discv5BootstrapEnrs.add(enr)
return ok(discv5BootstrapEnrs)
except CatchableError:
error("failed discovering peers from DNS")
error("failed discovering peers from DNS: " & getCurrentExceptionMsg())
proc initAndStartApp(conf: NetworkMonitorConf): Result[(WakuNode, WakuDiscoveryV5), string] =
let bindIp = try:
ValidIpAddress.init("0.0.0.0")
except CatchableError:
return err("could not start node: " & getCurrentExceptionMsg())
proc initAndStartApp(
conf: NetworkMonitorConf
): Future[Result[(WakuNode, WakuDiscoveryV5), string]] {.async.} =
let bindIp =
try:
parseIpAddress("0.0.0.0")
except CatchableError:
return err("could not start node: " & getCurrentExceptionMsg())
let extIp = try:
ValidIpAddress.init("127.0.0.1")
except CatchableError:
return err("could not start node: " & getCurrentExceptionMsg())
let extIp =
try:
parseIpAddress("127.0.0.1")
except CatchableError:
return err("could not start node: " & getCurrentExceptionMsg())
let
# some hardcoded parameters
@ -365,41 +426,46 @@ proc initAndStartApp(conf: NetworkMonitorConf): Result[(WakuNode, WakuDiscoveryV
key = crypto.PrivateKey.random(Secp256k1, rng[])[]
nodeTcpPort = Port(60000)
nodeUdpPort = Port(9000)
flags = CapabilitiesBitfield.init(lightpush = false, filter = false, store = false, relay = true)
flags = CapabilitiesBitfield.init(
lightpush = false, filter = false, store = false, relay = true
)
var builder = EnrBuilder.init(key)
builder.withIpAddressAndPorts(
ipAddr = some(extIp),
tcpPort = some(nodeTcpPort),
udpPort = some(nodeUdpPort),
ipAddr = some(extIp), tcpPort = some(nodeTcpPort), udpPort = some(nodeUdpPort)
)
builder.withWakuCapabilities(flags)
let recordRes = builder.build()
let record =
if recordRes.isErr():
return err("cannot build record: " & $recordRes.error)
else: recordRes.get()
builder.withWakuRelaySharding(
RelayShards(clusterId: conf.clusterId, shardIds: conf.shards)
).isOkOr:
error "failed to add sharded topics to ENR", error = error
return err("failed to add sharded topics to ENR: " & $error)
let record = builder.build().valueOr:
return err("cannot build record: " & $error)
var nodeBuilder = WakuNodeBuilder.init()
nodeBuilder.withNodeKey(key)
nodeBuilder.withRecord(record)
let res = nodeBuilder.withNetworkConfigurationDetails(bindIp, nodeTcpPort)
if res.isErr():
return err("node building error" & $res.error)
nodeBuilder.withSwitchConfiguration(maxConnections = some(MaxConnectedPeers))
let nodeRes = nodeBuilder.build()
let node =
if nodeRes.isErr():
return err("node building error" & $res.error)
else: nodeRes.get()
nodeBuilder.withPeerManagerConfig(
maxConnections = MaxConnectedPeers,
relayServiceRatio = "13.33:86.67",
shardAware = true,
)
nodeBuilder.withNetworkConfigurationDetails(bindIp, nodeTcpPort).isOkOr:
return err("node building error" & $error)
var discv5BootstrapEnrsRes = getBootstrapFromDiscDns(conf)
if discv5BootstrapEnrsRes.isErr():
let node = nodeBuilder.build().valueOr:
return err("node building error" & $error)
var discv5BootstrapEnrs = (await getBootstrapFromDiscDns(conf)).valueOr:
error("failed discovering peers from DNS")
var discv5BootstrapEnrs = discv5BootstrapEnrsRes.get()
quit(QuitFailure)
# parse enrURIs from the configuration and add the resulting ENRs to the discv5BootstrapEnrs seq
for enrUri in conf.bootstrapNodes:
@ -412,7 +478,7 @@ proc initAndStartApp(conf: NetworkMonitorConf): Result[(WakuNode, WakuDiscoveryV
port: nodeUdpPort,
privateKey: keys.PrivateKey(key.skkey),
bootstrapRecords: discv5BootstrapEnrs,
autoupdateRecord: false
autoupdateRecord: false,
)
let wakuDiscv5 = WakuDiscoveryV5.new(node.rng, discv5Conf, some(record))
@ -424,15 +490,17 @@ proc initAndStartApp(conf: NetworkMonitorConf): Result[(WakuNode, WakuDiscoveryV
ok((node, wakuDiscv5))
proc startRestApiServer(conf: NetworkMonitorConf,
allPeersInfo: CustomPeersTableRef,
numMessagesPerContentTopic: ContentTopicMessageTableRef
): Result[void, string] =
proc startRestApiServer(
conf: NetworkMonitorConf,
allPeersInfo: CustomPeersTableRef,
numMessagesPerContentTopic: ContentTopicMessageTableRef,
): Result[void, string] =
try:
let serverAddress = initTAddress(conf.metricsRestAddress & ":" & $conf.metricsRestPort)
let serverAddress =
initTAddress(conf.metricsRestAddress & ":" & $conf.metricsRestPort)
proc validate(pattern: string, value: string): int =
if pattern.startsWith("{") and pattern.endsWith("}"): 0
else: 1
if pattern.startsWith("{") and pattern.endsWith("}"): 0 else: 1
var router = RestRouter.init(validate)
router.installHandler(allPeersInfo, numMessagesPerContentTopic)
var sres = RestServerRef.new(router, serverAddress)
@ -444,13 +512,16 @@ proc startRestApiServer(conf: NetworkMonitorConf,
# handles rx of messages over a topic (see subscribe)
# counts the number of messages per content topic
proc subscribeAndHandleMessages(node: WakuNode,
pubsubTopic: PubsubTopic,
msgPerContentTopic: ContentTopicMessageTableRef) =
proc subscribeAndHandleMessages(
node: WakuNode,
pubsubTopic: PubsubTopic,
msgPerContentTopic: ContentTopicMessageTableRef,
) =
# handle function
proc handler(pubsubTopic: PubsubTopic, msg: WakuMessage): Future[void] {.async, gcsafe.} =
trace "rx message", pubsubTopic=pubsubTopic, contentTopic=msg.contentTopic
proc handler(
pubsubTopic: PubsubTopic, msg: WakuMessage
): Future[void] {.async, gcsafe.} =
trace "rx message", pubsubTopic = pubsubTopic, contentTopic = msg.contentTopic
# If we reach a table limit size, remove c topics with the least messages.
let tableSize = 100
@ -465,18 +536,32 @@ proc subscribeAndHandleMessages(node: WakuNode,
else:
msgPerContentTopic[msg.contentTopic] = 1
node.subscribe((kind: PubsubSub, topic: pubsubTopic), some(handler))
node.subscribe((kind: PubsubSub, topic: pubsubTopic), WakuRelayHandler(handler)).isOkOr:
error "failed to subscribe to pubsub topic", pubsubTopic, error
quit(1)
when isMainModule:
# known issue: confutils.nim(775, 17) Error: can raise an unlisted exception: ref IOError
{.pop.}
let confRes = NetworkMonitorConf.loadConfig()
if confRes.isErr():
error "could not load cli variables", err=confRes.error
quit(1)
var conf = NetworkMonitorConf.loadConfig().valueOr:
error "could not load cli variables", error = error
quit(QuitFailure)
let conf = confRes.get()
info "cli flags", conf=conf
info "cli flags", conf = conf
if conf.clusterId == 1:
let twnNetworkConf = NetworkConf.TheWakuNetworkConf()
conf.bootstrapNodes = twnNetworkConf.discv5BootstrapNodes
conf.rlnRelayDynamic = twnNetworkConf.rlnRelayDynamic
conf.rlnRelayEthContractAddress = twnNetworkConf.rlnRelayEthContractAddress
conf.rlnEpochSizeSec = twnNetworkConf.rlnEpochSizeSec
conf.rlnRelayUserMessageLimit = twnNetworkConf.rlnRelayUserMessageLimit
conf.numShardsInNetwork = twnNetworkConf.shardingConf.numShardsInCluster
if conf.shards.len == 0:
conf.shards =
toSeq(uint16(0) .. uint16(twnNetworkConf.shardingConf.numShardsInCluster - 1))
if conf.logLevel != LogLevel.NONE:
setLogLevel(conf.logLevel)
@ -489,38 +574,65 @@ when isMainModule:
# start metrics server
if conf.metricsServer:
let res = startMetricsServer(conf.metricsServerAddress, Port(conf.metricsServerPort))
if res.isErr():
error "could not start metrics server", err=res.error
quit(1)
startMetricsServer(conf.metricsServerAddress, Port(conf.metricsServerPort)).isOkOr:
error "could not start metrics server", error = error
quit(QuitFailure)
# start rest server for custom metrics
let res = startRestApiServer(conf, allPeersInfo, msgPerContentTopic)
if res.isErr():
error "could not start rest api server", err=res.error
quit(1)
startRestApiServer(conf, allPeersInfo, msgPerContentTopic).isOkOr:
error "could not start rest api server", error = error
quit(QuitFailure)
# create a rest client
let clientRest = RestClientRef.new(url="http://ip-api.com",
connectTimeout=ctime.seconds(2))
if clientRest.isErr():
error "could not start rest api client", err=res.error
quit(1)
let restClient = clientRest.get()
let restClient = RestClientRef.new(
url = "http://ip-api.com", connectTimeout = ctime.seconds(2)
).valueOr:
error "could not start rest api client", error = error
quit(QuitFailure)
# start waku node
let nodeRes = initAndStartApp(conf)
if nodeRes.isErr():
error "could not start node"
quit 1
let (node, discv5) = (waitFor initAndStartApp(conf)).valueOr:
error "could not start node", error = error
quit(QuitFailure)
let (node, discv5) = nodeRes.get()
(waitFor node.mountRelay()).isOkOr:
error "failed to mount waku relay protocol: ", error = error
quit(QuitFailure)
waitFor node.mountRelay()
waitFor node.mountLibp2pPing()
# Subscribe the node to the default pubsubtopic, to count messages
subscribeAndHandleMessages(node, DefaultPubsubTopic, msgPerContentTopic)
var onFatalErrorAction = proc(msg: string) {.gcsafe, closure.} =
## Action to be taken when an internal error occurs during the node run.
## e.g. the connection with the database is lost and not recovered.
error "Unrecoverable error occurred", error = msg
quit(QuitFailure)
if conf.rlnRelay and conf.rlnRelayEthContractAddress != "":
let rlnConf = WakuRlnConfig(
dynamic: conf.rlnRelayDynamic,
credIndex: some(uint(0)),
ethContractAddress: conf.rlnRelayEthContractAddress,
ethClientUrls: conf.ethClientUrls.mapIt(string(it)),
epochSizeSec: conf.rlnEpochSizeSec,
creds: none(RlnRelayCreds),
onFatalErrorAction: onFatalErrorAction,
)
try:
waitFor node.mountRlnRelay(rlnConf)
except CatchableError:
error "failed to setup RLN", error = getCurrentExceptionMsg()
quit(QuitFailure)
node.mountMetadata(conf.clusterId, conf.shards).isOkOr:
error "failed to mount waku metadata protocol: ", error = error
quit(QuitFailure)
for shard in conf.shards:
# Subscribe the node to the shards, to count messages
subscribeAndHandleMessages(
node, $RelayShard(shardId: shard, clusterId: conf.clusterId), msgPerContentTopic
)
# spawn the routine that crawls the network
# TODO: split into 3 routines (discovery, connections, ip2location)


@ -1,76 +1,150 @@
import
std/strutils,
chronicles,
chronicles/topics_registry,
chronos,
confutils,
stew/results,
stew/shims/net
chronos,
std/strutils,
results,
regex
type
NetworkMonitorConf* = object
logLevel* {.
desc: "Sets the log level",
defaultValue: LogLevel.INFO,
name: "log-level",
abbr: "l" .}: LogLevel
const git_version* {.strdefine.} = "n/a"
timeout* {.
desc: "Timeout to consider that the connection failed",
defaultValue: chronos.seconds(10),
name: "timeout",
abbr: "t" }: chronos.Duration
type EthRpcUrl* = distinct string
bootstrapNodes* {.
desc: "Bootstrap ENR node. Argument may be repeated.",
defaultValue: @[""],
name: "bootstrap-node",
abbr: "b" }: seq[string]
proc `$`*(u: EthRpcUrl): string =
string(u)
dnsDiscoveryUrl* {.
desc: "URL for DNS node list in format 'enrtree://<key>@<fqdn>'",
defaultValue: ""
name: "dns-discovery-url" }: string
type NetworkMonitorConf* = object
logLevel* {.
desc: "Sets the log level",
defaultValue: LogLevel.INFO,
name: "log-level",
abbr: "l"
.}: LogLevel
refreshInterval* {.
desc: "How often new peers are discovered and connected to (in seconds)",
defaultValue: 5,
name: "refresh-interval",
abbr: "r" }: int
timeout* {.
desc: "Timeout to consider that the connection failed",
defaultValue: chronos.seconds(10),
name: "timeout",
abbr: "t"
.}: chronos.Duration
## Prometheus metrics config
metricsServer* {.
desc: "Enable the metrics server: true|false"
defaultValue: true
name: "metrics-server" }: bool
bootstrapNodes* {.
desc: "Bootstrap ENR node. Argument may be repeated.",
defaultValue: @[""],
name: "bootstrap-node",
abbr: "b"
.}: seq[string]
metricsServerAddress* {.
desc: "Listening address of the metrics server."
defaultValue: ValidIpAddress.init("127.0.0.1")
name: "metrics-server-address" }: ValidIpAddress
dnsDiscoveryUrl* {.
desc: "URL for DNS node list in format 'enrtree://<key>@<fqdn>'",
defaultValue: "",
name: "dns-discovery-url"
.}: string
metricsServerPort* {.
desc: "Listening HTTP port of the metrics server."
defaultValue: 8008
name: "metrics-server-port" }: uint16
shards* {.
desc:
"Shards index to subscribe to [0..NUM_SHARDS_IN_NETWORK-1]. Argument may be repeated.",
name: "shard"
.}: seq[uint16]
## Custom metrics rest server
metricsRestAddress* {.
desc: "Listening address of the metrics rest server.",
defaultValue: "127.0.0.1",
name: "metrics-rest-address" }: string
metricsRestPort* {.
desc: "Listening HTTP port of the metrics rest server.",
defaultValue: 8009,
name: "metrics-rest-port" }: uint16
numShardsInNetwork* {.
desc: "Number of shards in the network",
name: "num-shards-in-network",
defaultValue: 8
.}: uint32
proc parseCmdArg*(T: type ValidIpAddress, p: string): T =
refreshInterval* {.
desc: "How often new peers are discovered and connected to (in seconds)",
defaultValue: 5,
name: "refresh-interval",
abbr: "r"
.}: int
clusterId* {.
desc:
"Cluster id that the node is running in. Node in a different cluster id is disconnected.",
defaultValue: 1,
name: "cluster-id"
.}: uint16
rlnRelay* {.
desc: "Enable spam protection through rln-relay: true|false",
defaultValue: true,
name: "rln-relay"
.}: bool
rlnRelayDynamic* {.
desc: "Enable waku-rln-relay with on-chain dynamic group management: true|false",
defaultValue: true,
name: "rln-relay-dynamic"
.}: bool
ethClientUrls* {.
desc:
"HTTP address of an Ethereum testnet client e.g., http://localhost:8540/. Argument may be repeated.",
defaultValue: newSeq[EthRpcUrl](0),
name: "rln-relay-eth-client-address"
.}: seq[EthRpcUrl]
rlnRelayEthContractAddress* {.
desc: "Address of membership contract on an Ethereum testnet",
defaultValue: "",
name: "rln-relay-eth-contract-address"
.}: string
rlnEpochSizeSec* {.
desc:
"Epoch size in seconds used to rate limit RLN memberships. Default is 1 second.",
defaultValue: 1,
name: "rln-relay-epoch-sec"
.}: uint64
rlnRelayUserMessageLimit* {.
desc:
"Set a user message limit for the rln membership registration. Must be a positive integer. Default is 1.",
defaultValue: 1,
name: "rln-relay-user-message-limit"
.}: uint64
## Prometheus metrics config
metricsServer* {.
desc: "Enable the metrics server: true|false",
defaultValue: true,
name: "metrics-server"
.}: bool
metricsServerAddress* {.
desc: "Listening address of the metrics server.",
defaultValue: parseIpAddress("127.0.0.1"),
name: "metrics-server-address"
.}: IpAddress
metricsServerPort* {.
desc: "Listening HTTP port of the metrics server.",
defaultValue: 8008,
name: "metrics-server-port"
.}: uint16
## Custom metrics rest server
metricsRestAddress* {.
desc: "Listening address of the metrics rest server.",
defaultValue: "127.0.0.1",
name: "metrics-rest-address"
.}: string
metricsRestPort* {.
desc: "Listening HTTP port of the metrics rest server.",
defaultValue: 8009,
name: "metrics-rest-port"
.}: uint16
proc parseCmdArg*(T: type IpAddress, p: string): T =
try:
result = ValidIpAddress.init(p)
result = parseIpAddress(p)
except CatchableError as e:
raise newException(ValueError, "Invalid IP address")
proc completeCmdArg*(T: type ValidIpAddress, val: string): seq[string] =
proc completeCmdArg*(T: type IpAddress, val: string): seq[string] =
return @[]
proc parseCmdArg*(T: type chronos.Duration, p: string): T =
@ -82,9 +156,35 @@ proc parseCmdArg*(T: type chronos.Duration, p: string): T =
proc completeCmdArg*(T: type chronos.Duration, val: string): seq[string] =
return @[]
proc completeCmdArg*(T: type EthRpcUrl, val: string): seq[string] =
return @[]
proc parseCmdArg*(T: type EthRpcUrl, s: string): T =
## allowed patterns:
## http://url:port
## https://url:port
## http://url:port/path
## https://url:port/path
## http://url/with/path
## http://url:port/path?query
## https://url:port/path?query
## disallowed patterns:
## any valid/invalid ws or wss url
var httpPattern =
re2"^(https?):\/\/((localhost)|([\w_-]+(?:(?:\.[\w_-]+)+)))(:[0-9]{1,5})?([\w.,@?^=%&:\/~+#-]*[\w@?^=%&\/~+#-])*"
var wsPattern =
re2"^(wss?):\/\/((localhost)|([\w_-]+(?:(?:\.[\w_-]+)+)))(:[0-9]{1,5})?([\w.,@?^=%&:\/~+#-]*[\w@?^=%&\/~+#-])*"
if regex.match(s, wsPattern):
raise newException(
ValueError, "Websocket RPC URL is not supported, Please use an HTTP URL"
)
if not regex.match(s, httpPattern):
raise newException(ValueError, "Invalid HTTP RPC URL")
return EthRpcUrl(s)
proc loadConfig*(T: type NetworkMonitorConf): Result[T, string] =
try:
let conf = NetworkMonitorConf.load(version=git_version)
let conf = NetworkMonitorConf.load(version = git_version)
ok(conf)
except CatchableError:
err(getCurrentExceptionMsg())


@ -1,10 +1,7 @@
when (NimMajor, NimMinor) < (1, 4):
{.push raises: [Defect].}
else:
{.push raises: [].}
{.push raises: [].}
import
std/[json,tables,sequtils],
std/[net, json, tables, sequtils],
chronicles,
chronicles/topics_registry,
chronos,
@ -13,8 +10,7 @@ import
metrics/chronos_httpserver,
presto/route,
presto/server,
stew/results,
stew/shims/net
results
logScope:
topics = "networkmonitor_metrics"
@ -26,32 +22,31 @@ logScope:
#discovery_message_requests_outgoing_total{response="no_response"}
declarePublicGauge networkmonitor_peer_type_as_per_enr,
"Number of peers supporting each capability according the the ENR",
labels = ["capability"]
"Number of peers supporting each capability according to the ENR",
labels = ["capability"]
declarePublicGauge networkmonitor_peer_cluster_as_per_enr,
"Number of peers on each cluster according to the ENR", labels = ["cluster"]
declarePublicGauge networkmonitor_peer_type_as_per_protocol,
"Number of peers supporting each protocol, after a successful connection) ",
labels = ["protocols"]
"Number of peers supporting each protocol, after a successful connection) ",
labels = ["protocols"]
declarePublicGauge networkmonitor_peer_user_agents,
"Number of peers with each user agent",
labels = ["user_agent"]
"Number of peers with each user agent", labels = ["user_agent"]
declarePublicHistogram networkmonitor_peer_ping,
"Histogram tracking ping durations for discovered peers",
buckets = [100.0, 200.0, 300.0, 400.0, 500.0, 600.0, 700.0, 800.0, 900.0, 1000.0, 2000.0, Inf]
"Histogram tracking ping durations for discovered peers",
buckets = [10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 500.0, 800.0, 1000.0, 2000.0, Inf]
declarePublicGauge networkmonitor_peer_count,
"Number of discovered peers",
labels = ["connected"]
"Number of discovered peers", labels = ["connected"]
declarePublicGauge networkmonitor_peer_country_count,
"Number of peers per country",
labels = ["country"]
"Number of peers per country", labels = ["country"]
type
CustomPeerInfo* = object
# populated after discovery
CustomPeerInfo* = object # populated after discovery
lastTimeDiscovered*: int64
discovered*: int64
peerId*: string
@ -60,6 +55,7 @@ type
enrCapabilities*: seq[string]
country*: string
city*: string
maddrs*: seq[string]
# only after ok connection
lastTimeConnected*: int64
@ -80,23 +76,32 @@ type
# stores the content topic and the count of rx messages
ContentTopicMessageTableRef* = TableRef[string, int]
proc installHandler*(router: var RestRouter,
allPeers: CustomPeersTableRef,
numMessagesPerContentTopic: ContentTopicMessageTableRef) =
router.api(MethodGet, "/allpeersinfo") do () -> RestApiResponse:
proc installHandler*(
router: var RestRouter,
allPeers: CustomPeersTableRef,
numMessagesPerContentTopic: ContentTopicMessageTableRef,
) =
router.api(MethodGet, "/allpeersinfo") do() -> RestApiResponse:
let values = toSeq(allPeers.values())
return RestApiResponse.response(values.toJson(), contentType="application/json")
router.api(MethodGet, "/contenttopics") do () -> RestApiResponse:
return RestApiResponse.response(values.toJson(), contentType = "application/json")
router.api(MethodGet, "/contenttopics") do() -> RestApiResponse:
# TODO: toJson() includes the hash
return RestApiResponse.response($(%numMessagesPerContentTopic), contentType="application/json")
return RestApiResponse.response(
$(%numMessagesPerContentTopic), contentType = "application/json"
)
proc startMetricsServer*(serverIp: ValidIpAddress, serverPort: Port): Result[void, string] =
info "Starting metrics HTTP server", serverIp, serverPort
proc startMetricsServer*(serverIp: IpAddress, serverPort: Port): Result[void, string] =
info "Starting metrics HTTP server", serverIp, serverPort
try:
startMetricsHttpServer($serverIp, serverPort)
except Exception as e:
error("Failed to start metrics HTTP server", serverIp=serverIp, serverPort=serverPort, msg=e.msg)
try:
startMetricsHttpServer($serverIp, serverPort)
except Exception as e:
error(
"Failed to start metrics HTTP server",
serverIp = serverIp,
serverPort = serverPort,
msg = e.msg,
)
info "Metrics HTTP server started", serverIp, serverPort
ok()
info "Metrics HTTP server started", serverIp, serverPort
ok()


@ -1,24 +1,19 @@
when (NimMajor, NimMinor) < (1, 4):
{.push raises: [Defect].}
else:
{.push raises: [].}
{.push raises: [].}
import
std/json,
stew/results,
stew/shims/net,
results,
chronicles,
chronicles/topics_registry,
chronos,
presto/[client,common]
presto/[client, common]
type
NodeLocation* = object
country*: string
city*: string
lat*: string
long*: string
isp*: string
type NodeLocation* = object
country*: string
city*: string
lat*: string
long*: string
isp*: string
proc flatten*[T](a: seq[seq[T]]): seq[T] =
var aFlat = newSeq[T](0)
@ -26,8 +21,9 @@ proc flatten*[T](a: seq[seq[T]]): seq[T] =
aFlat &= subseq
return aFlat
proc decodeBytes*(t: typedesc[NodeLocation], value: openArray[byte],
contentType: Opt[ContentTypeData]): RestResult[NodeLocation] =
proc decodeBytes*(
t: typedesc[NodeLocation], value: openArray[byte], contentType: Opt[ContentTypeData]
): RestResult[NodeLocation] =
var res: string
if len(value) > 0:
res = newString(len(value))
@ -35,19 +31,23 @@ proc decodeBytes*(t: typedesc[NodeLocation], value: openArray[byte],
try:
let jsonContent = parseJson(res)
if $jsonContent["status"].getStr() != "success":
error "query failed", result=jsonContent
error "query failed", result = $jsonContent
return err("query failed")
return ok(NodeLocation(
country: jsonContent["country"].getStr(),
city: jsonContent["city"].getStr(),
lat: $jsonContent["lat"].getFloat(),
long: $jsonContent["lon"].getFloat(),
isp: jsonContent["isp"].getStr()
))
return ok(
NodeLocation(
country: jsonContent["country"].getStr(),
city: jsonContent["city"].getStr(),
lat: $jsonContent["lat"].getFloat(),
long: $jsonContent["lon"].getFloat(),
isp: jsonContent["isp"].getStr(),
)
)
except Exception:
return err("failed to get the location: " & getCurrentExceptionMsg())
proc encodeString*(value: string): RestResult[string] =
ok(value)
proc ipToLocation*(ip: string): RestResponse[NodeLocation] {.rest, endpoint: "json/{ip}", meth: MethodGet.}
proc ipToLocation*(
ip: string
): RestResponse[NodeLocation] {.rest, endpoint: "json/{ip}", meth: MethodGet.}


@ -1,3 +1,4 @@
-d:chronicles_line_numbers
-d:chronicles_runtime_filtering:on
-d:discv5_protocol_id:d5waku
-d:discv5_protocol_id:d5waku
path = "../.."


@ -0,0 +1,9 @@
global:
scrape_interval: 15s
scrape_configs:
- job_name: 'prometheus'
scrape_interval: 5s
static_configs:
- targets: ['host.docker.internal:8008']
metrics_path: '/metrics'

apps/sonda/.env.example Normal file

@ -0,0 +1,44 @@
# RPC URL for accessing testnet via HTTP.
# e.g. https://linea-sepolia.infura.io/v3/123aa110320f4aec179150fba1e1b1b1
RLN_RELAY_ETH_CLIENT_ADDRESS=
# Address of the testnet account holding Linea Sepolia ETH to be staked in the RLN contract.
ETH_TESTNET_ACCOUNT=
# Private key of the testnet account holding Linea Sepolia ETH to be staked in the RLN contract.
# Note: make sure you don't use the '0x' prefix.
# e.g. 0116196e9a8abed42dd1a22eb63fa2a5a17b0c27d716b87ded2c54f1bf192a0b
ETH_TESTNET_KEY=
# Address of the RLN contract on Linea Sepolia.
RLN_CONTRACT_ADDRESS=0xB9cd878C90E49F797B4431fBF4fb333108CB90e6
# Address of the RLN Membership Token contract on Linea Sepolia used to pay for membership.
TOKEN_CONTRACT_ADDRESS=0x185A0015aC462a0aECb81beCc0497b649a64B9ea
# Password you would like to use to protect your RLN membership.
RLN_RELAY_CRED_PASSWORD=
# Advanced. Can be left empty in normal use cases.
NWAKU_IMAGE=
NODEKEY=
DOMAIN=
EXTRA_ARGS=
STORAGE_SIZE=
# -------------------- SONDA CONFIG ------------------
METRICS_PORT=8004
NODE_REST_ADDRESS="http://nwaku:8645"
CLUSTER_ID=16
SHARD=32
# Comma separated list of store nodes to poll
STORE_NODES="/dns4/store-01.do-ams3.shards.test.status.im/tcp/30303/p2p/16Uiu2HAmAUdrQ3uwzuE4Gy4D56hX6uLKEeerJAnhKEHZ3DxF1EfT,
/dns4/store-02.do-ams3.shards.test.status.im/tcp/30303/p2p/16Uiu2HAm9aDJPkhGxc2SFcEACTFdZ91Q5TJjp76qZEhq9iF59x7R,
/dns4/store-01.gc-us-central1-a.shards.test.status.im/tcp/30303/p2p/16Uiu2HAmMELCo218hncCtTvC2Dwbej3rbyHQcR8erXNnKGei7WPZ,
/dns4/store-02.gc-us-central1-a.shards.test.status.im/tcp/30303/p2p/16Uiu2HAmJnVR7ZzFaYvciPVafUXuYGLHPzSUigqAmeNw9nJUVGeM,
/dns4/store-01.ac-cn-hongkong-c.shards.test.status.im/tcp/30303/p2p/16Uiu2HAm2M7xs7cLPc3jamawkEqbr7cUJX11uvY7LxQ6WFUdUKUT,
/dns4/store-02.ac-cn-hongkong-c.shards.test.status.im/tcp/30303/p2p/16Uiu2HAm9CQhsuwPR54q27kNj9iaQVfyRzTGKrhFmr94oD8ujU6P"
# Wait time in seconds between two consecutive queries
QUERY_DELAY=60
# Consecutive successful store requests to consider a store node healthy
HEALTH_THRESHOLD=5

apps/sonda/.gitignore vendored Normal file

@ -0,0 +1,4 @@
.env
keystore
rln_tree
.env


@ -0,0 +1,23 @@
FROM python:3.9.18-alpine3.18
ENV METRICS_PORT=8004
ENV NODE_REST_ADDRESS="http://nwaku:8645"
ENV QUERY_DELAY=60
ENV STORE_NODES=""
ENV CLUSTER_ID=1
ENV SHARD=1
ENV HEALTH_THRESHOLD=5
WORKDIR /opt
COPY sonda.py /opt/sonda.py
RUN pip install requests argparse prometheus_client
CMD python -u /opt/sonda.py \
--metrics-port=$METRICS_PORT \
--node-rest-address="${NODE_REST_ADDRESS}" \
--delay-seconds=$QUERY_DELAY \
--pubsub-topic="/waku/2/rs/${CLUSTER_ID}/${SHARD}" \
--store-nodes="${STORE_NODES}" \
--health-threshold=$HEALTH_THRESHOLD
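In the Sonda setup this image is built and started by `docker-compose` (see the compose file and README below), so a manual build is optional. As a rough sketch only, with an illustrative tag and placeholder values, a standalone build and run could look like:

```
docker build -f Dockerfile.sonda -t sonda:local .
docker run --rm -p 8004:8004 \
  -e NODE_REST_ADDRESS="http://<nwaku-rest-host>:8645" \
  -e CLUSTER_ID=16 -e SHARD=32 \
  -e STORE_NODES="<comma-separated store node multiaddrs>" \
  -e QUERY_DELAY=60 -e HEALTH_THRESHOLD=5 \
  sonda:local
```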

apps/sonda/README.md Normal file

@ -0,0 +1,52 @@
# Sonda
Sonda is a tool to monitor store nodes and measure their performance.
It works by running a `nwaku` node, publishing a message from it at a fixed interval, and then sending a store query to every monitored store node to check that it responds with the last message published.
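For reference, each Sonda cycle boils down to two REST calls against the local `nwaku` node. The sketch below is illustrative only: it assumes the defaults from `.env.example` (cluster `16`, shard `32`), the node's REST API reachable on `localhost:8645`, and placeholder timestamp and store-node values. The payload is the base64 encoding of the fixed Sonda message.

```
# 1. Publish the Sonda message on the configured pubsub topic (relay REST API)
curl -X POST "http://localhost:8645/relay/v1/messages/%2Fwaku%2F2%2Frs%2F16%2F32" \
  -H "content-type: application/json" \
  -d '{"payload":"SGksIEknbSBTb25kYQ==","contentTopic":"/sonda/2/polls/proto","version":1,"timestamp":<ns-timestamp>}'

# 2. Ask a monitored store node whether it holds that message (store v3 REST API)
curl "http://localhost:8645/store/v3/messages?peerAddr=<store-node-multiaddr>&pubsubTopic=%2Fwaku%2F2%2Frs%2F16%2F32&contentTopics=%2Fsonda%2F2%2Fpolls%2Fproto&includeData=true&startTime=<ns-timestamp>"
```

A store node is then reported as healthy once it has returned the published message for `HEALTH_THRESHOLD` consecutive queries (see `sonda.py` further down).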
## Instructions
1. Create an `.env` file that will contain the configuration parameters.
You can start by copying `.env.example` and adapting it to your use case:
```
cp .env.example .env
${EDITOR} .env
```
The variables that must be set for Sonda are the following (an example with filled-in values is shown after the list):
```
CLUSTER_ID=
SHARD=
# Comma separated list of store nodes to poll
STORE_NODES=
# Wait time in seconds between two consecutive queries
QUERY_DELAY=
# Consecutive successful store requests to consider a store node healthy
HEALTH_THRESHOLD=
```
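For illustration only, a filled-in configuration could look like the following; the store node shown is just the first entry from `.env.example`, so list whichever nodes you actually want to monitor:

```
CLUSTER_ID=16
SHARD=32
STORE_NODES="/dns4/store-01.do-ams3.shards.test.status.im/tcp/30303/p2p/16Uiu2HAmAUdrQ3uwzuE4Gy4D56hX6uLKEeerJAnhKEHZ3DxF1EfT"
QUERY_DELAY=60
HEALTH_THRESHOLD=5
```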
2. If you want to query nodes in `cluster-id` 1, you have to register an RLN membership by following the steps below. Otherwise, you can skip this step.
For it, you need:
* An Ethereum Linea Sepolia HTTP RPC endpoint. You can get one for free from [Infura](https://linea-sepolia.infura.io/).
* An Ethereum Linea Sepolia account with a minimum of 0.01 ETH. Get some [here](https://docs.metamask.io/developer-tools/faucet/).
* A password to protect your RLN membership.
Fill the `RLN_RELAY_ETH_CLIENT_ADDRESS`, `ETH_TESTNET_KEY` and `RLN_RELAY_CRED_PASSWORD` env variables and run
```
./register_rln.sh
```
3. Start Sonda by running
```
docker-compose up -d
```
4. Browse to http://localhost:3000/dashboards to monitor the performance.
There are two Grafana dashboards: `nwaku-monitoring`, which tracks the stats of the node that publishes messages and performs queries, and `sonda-monitoring`, which monitors the responses of the store nodes.


@ -0,0 +1,114 @@
x-logging: &logging
logging:
driver: json-file
options:
max-size: 1000m
# Environment variable definitions
x-rln-relay-eth-client-address: &rln_relay_eth_client_address ${RLN_RELAY_ETH_CLIENT_ADDRESS:-} # Add your RLN_RELAY_ETH_CLIENT_ADDRESS after the "-"
x-rln-environment: &rln_env
RLN_RELAY_CONTRACT_ADDRESS: ${RLN_RELAY_CONTRACT_ADDRESS:-0xB9cd878C90E49F797B4431fBF4fb333108CB90e6}
RLN_RELAY_CRED_PATH: ${RLN_RELAY_CRED_PATH:-} # Optional: Add your RLN_RELAY_CRED_PATH after the "-"
RLN_RELAY_CRED_PASSWORD: ${RLN_RELAY_CRED_PASSWORD:-} # Optional: Add your RLN_RELAY_CRED_PASSWORD after the "-"
x-sonda-env: &sonda_env
METRICS_PORT: ${METRICS_PORT:-8004}
NODE_REST_ADDRESS: ${NODE_REST_ADDRESS:-"http://nwaku:8645"}
CLUSTER_ID: ${CLUSTER_ID:-1}
SHARD: ${SHARD:-0}
STORE_NODES: ${STORE_NODES:-}
QUERY_DELAY: ${QUERY_DELAY-60}
HEALTH_THRESHOLD: ${HEALTH_THRESHOLD-5}
# Services definitions
services:
nwaku:
image: ${NWAKU_IMAGE:-harbor.status.im/wakuorg/nwaku:deploy-status-prod}
container_name: nwaku
restart: on-failure
ports:
- 30304:30304/tcp
- 30304:30304/udp
- 9005:9005/udp
- 127.0.0.1:8003:8003
- 80:80 #Let's Encrypt
- 8000:8000/tcp #WSS
- 127.0.0.1:8645:8645
<<:
- *logging
environment:
DOMAIN: ${DOMAIN}
NODEKEY: ${NODEKEY}
RLN_RELAY_CRED_PASSWORD: "${RLN_RELAY_CRED_PASSWORD}"
RLN_RELAY_ETH_CLIENT_ADDRESS: *rln_relay_eth_client_address
EXTRA_ARGS: ${EXTRA_ARGS}
STORAGE_SIZE: ${STORAGE_SIZE}
<<:
- *rln_env
- *sonda_env
volumes:
- ./run_node.sh:/opt/run_node.sh:Z
- ${CERTS_DIR:-./certs}:/etc/letsencrypt/:Z
- ./rln_tree:/etc/rln_tree/:Z
- ./keystore:/keystore:Z
entrypoint: sh
command:
- /opt/run_node.sh
networks:
- nwaku-sonda
sonda:
build:
context: .
dockerfile: Dockerfile.sonda
container_name: sonda
ports:
- 127.0.0.1:${METRICS_PORT}:${METRICS_PORT}
environment:
<<:
- *sonda_env
depends_on:
- nwaku
networks:
- nwaku-sonda
prometheus:
image: docker.io/prom/prometheus:latest
container_name: prometheus
volumes:
- ./monitoring/prometheus-config.yml:/etc/prometheus/prometheus.yml:Z
command:
- --config.file=/etc/prometheus/prometheus.yml
# ports:
# - 127.0.0.1:9090:9090
restart: on-failure:5
depends_on:
- nwaku
networks:
- nwaku-sonda
grafana:
image: docker.io/grafana/grafana:latest
container_name: grafana
env_file:
- ./monitoring/configuration/grafana-plugins.env
volumes:
- ./monitoring/configuration/grafana.ini:/etc/grafana/grafana.ini:Z
- ./monitoring/configuration/dashboards.yaml:/etc/grafana/provisioning/dashboards/dashboards.yaml:Z
- ./monitoring/configuration/datasources.yaml:/etc/grafana/provisioning/datasources/datasources.yaml:Z
- ./monitoring/configuration/dashboards:/var/lib/grafana/dashboards/:Z
- ./monitoring/configuration/customizations/custom-logo.svg:/usr/share/grafana/public/img/grafana_icon.svg:Z
- ./monitoring/configuration/customizations/custom-logo.svg:/usr/share/grafana/public/img/grafana_typelogo.svg:Z
- ./monitoring/configuration/customizations/custom-logo.png:/usr/share/grafana/public/img/fav32.png:Z
ports:
- 0.0.0.0:3000:3000
restart: on-failure:5
depends_on:
- prometheus
networks:
- nwaku-sonda
networks:
nwaku-sonda:

(Binary and oversized image assets added; file contents not shown. Sizes: 11 KiB and 13 KiB.)


@ -0,0 +1,9 @@
apiVersion: 1
providers:
- name: 'Prometheus'
orgId: 1
folder: ''
type: file
options:
path: /var/lib/grafana/dashboards

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -0,0 +1,11 @@
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
access: proxy
org_id: 1
url: http://prometheus:9090
is_default: true
version: 1
editable: true


@ -0,0 +1,2 @@
#GF_INSTALL_PLUGINS=grafana-worldmap-panel,grafana-piechart-panel,digrich-bubblechart-panel,yesoreyeram-boomtheme-panel,briangann-gauge-panel,jdbranham-diagram-panel,agenty-flowcharting-panel,citilogics-geoloop-panel,savantly-heatmap-panel,mtanda-histogram-panel,pierosavi-imageit-panel,michaeldmoore-multistat-panel,zuburqan-parity-report-panel,natel-plotly-panel,bessler-pictureit-panel,grafana-polystat-panel,corpglory-progresslist-panel,snuids-radar-panel,fzakaria-simple-config.config.annotations-datasource,vonage-status-panel,snuids-trafficlights-panel,pr0ps-trackmap-panel,alexandra-trackmap-panel,btplc-trend-box-panel
GF_INSTALL_PLUGINS=grafana-worldmap-panel,grafana-piechart-panel,yesoreyeram-boomtheme-panel,briangann-gauge-panel,pierosavi-imageit-panel,bessler-pictureit-panel,vonage-status-panel


@ -0,0 +1,51 @@
instance_name = nwaku dashboard
;[dashboards.json]
;enabled = true
;path = /home/git/grafana/grafana-dashboards/dashboards
#################################### Auth ##########################
[auth]
disable_login_form = false
#################################### Anonymous Auth ##########################
[auth.anonymous]
# enable anonymous access
enabled = true
# specify organization name that should be used for unauthenticated users
;org_name = Public
# specify role for unauthenticated users
org_role = Admin
; org_role = Viewer
;[security]
;admin_user = ocr
;admin_password = ocr
;[users]
# disable user signup / registration
;allow_sign_up = false
# Set to true to automatically assign new users to the default organization (id 1)
;auto_assign_org = true
# Default role new users will be automatically assigned (if disabled above is set to true)
;auto_assign_org_role = Viewer
#################################### SMTP / Emailing ##########################
;[smtp]
;enabled = false
;host = localhost:25
;user =
;password =
;cert_file =
;key_file =
;skip_verify = false
;from_address = admin@grafana.localhost
;[emails]
;welcome_email_on_sign_up = false


@ -0,0 +1,10 @@
global:
scrape_interval: 15s
evaluation_interval: 15s
external_labels:
monitor: "Monitoring"
scrape_configs:
- job_name: "nwaku"
static_configs:
- targets: ["nwaku:8003", "sonda:8004"]

apps/sonda/register_rln.sh Executable file

@ -0,0 +1,31 @@
#!/bin/sh
if test -f ./keystore/keystore.json; then
echo "keystore/keystore.json already exists. Use it instead of creating a new one."
echo "Exiting"
exit 1
fi
if test -f .env; then
echo "Using .env file"
. $(pwd)/.env
fi
# TODO: Set nwaku release when ready instead of quay
if test -n "${ETH_CLIENT_ADDRESS}"; then
echo "ETH_CLIENT_ADDRESS variable was renamed to RLN_RELAY_ETH_CLIENT_ADDRESS"
echo "Please update your .env file"
exit 1
fi
docker run -v $(pwd)/keystore:/keystore/:Z harbor.status.im/wakuorg/nwaku:v0.30.1 generateRlnKeystore \
--rln-relay-eth-client-address=${RLN_RELAY_ETH_CLIENT_ADDRESS} \
--rln-relay-eth-private-key=${ETH_TESTNET_KEY} \
--rln-relay-eth-contract-address=0xB9cd878C90E49F797B4431fBF4fb333108CB90e6 \
--rln-relay-cred-path=/keystore/keystore.json \
--rln-relay-cred-password="${RLN_RELAY_CRED_PASSWORD}" \
--rln-relay-user-message-limit=20 \
--execute

apps/sonda/run_node.sh Normal file

@ -0,0 +1,110 @@
#!/bin/sh
echo "I am a nwaku node"
if test -n "${ETH_CLIENT_ADDRESS}" -o ; then
echo "ETH_CLIENT_ADDRESS variable was renamed to RLN_RELAY_ETH_CLIENT_ADDRESS"
echo "Please update your .env file"
exit 1
fi
if [ -z "${RLN_RELAY_ETH_CLIENT_ADDRESS}" ] && [ "${CLUSTER_ID}" -eq 1 ]; then
echo "Missing Eth client address, please refer to README.md for detailed instructions"
exit 1
fi
if [ "${CLUSTER_ID}" -ne 1 ]; then
echo "CLUSTER_ID is not equal to 1, clearing RLN configurations"
RLN_RELAY_CRED_PATH=""
RLN_RELAY_ETH_CLIENT_ADDRESS=""
RLN_RELAY_CRED_PASSWORD=""
fi
MY_EXT_IP=$(wget -qO- https://api4.ipify.org)
DNS_WSS_CMD=
if [ -n "${DOMAIN}" ]; then
LETSENCRYPT_PATH=/etc/letsencrypt/live/${DOMAIN}
if ! [ -d "${LETSENCRYPT_PATH}" ]; then
apk add --no-cache certbot
certbot certonly\
--non-interactive\
--agree-tos\
--no-eff-email\
--no-redirect\
--email admin@${DOMAIN}\
-d ${DOMAIN}\
--standalone
fi
if ! [ -e "${LETSENCRYPT_PATH}/privkey.pem" ]; then
echo "The certificate does not exist"
sleep 60
exit 1
fi
WS_SUPPORT="--websocket-support=true"
WSS_SUPPORT="--websocket-secure-support=true"
WSS_KEY="--websocket-secure-key-path=${LETSENCRYPT_PATH}/privkey.pem"
WSS_CERT="--websocket-secure-cert-path=${LETSENCRYPT_PATH}/cert.pem"
DNS4_DOMAIN="--dns4-domain-name=${DOMAIN}"
DNS_WSS_CMD="${WS_SUPPORT} ${WSS_SUPPORT} ${WSS_CERT} ${WSS_KEY} ${DNS4_DOMAIN}"
fi
if [ -n "${NODEKEY}" ]; then
NODEKEY=--nodekey=${NODEKEY}
fi
if [ "${CLUSTER_ID}" -eq 1 ]; then
RLN_RELAY_CRED_PATH=--rln-relay-cred-path=${RLN_RELAY_CRED_PATH:-/keystore/keystore.json}
fi
if [ -n "${RLN_RELAY_CRED_PASSWORD}" ]; then
RLN_RELAY_CRED_PASSWORD=--rln-relay-cred-password="${RLN_RELAY_CRED_PASSWORD}"
fi
if [ -n "${RLN_RELAY_ETH_CLIENT_ADDRESS}" ]; then
RLN_RELAY_ETH_CLIENT_ADDRESS=--rln-relay-eth-client-address="${RLN_RELAY_ETH_CLIENT_ADDRESS}"
fi
# TO DO: configure bootstrap nodes in env
exec /usr/bin/wakunode\
--relay=true\
--filter=false\
--lightpush=false\
--keep-alive=true\
--max-connections=150\
--cluster-id="${CLUSTER_ID}"\
--discv5-discovery=true\
--discv5-udp-port=9005\
--discv5-enr-auto-update=True\
--log-level=DEBUG\
--tcp-port=30304\
--metrics-server=True\
--metrics-server-port=8003\
--metrics-server-address=0.0.0.0\
--rest=true\
--rest-admin=true\
--rest-address=0.0.0.0\
--rest-port=8645\
--rest-allow-origin="waku-org.github.io"\
--rest-allow-origin="localhost:*"\
--nat=extip:"${MY_EXT_IP}"\
--store=false\
--pubsub-topic="/waku/2/rs/${CLUSTER_ID}/${SHARD}"\
--discv5-bootstrap-node="enr:-QEKuECA0zhRJej2eaOoOPddNcYr7-5NdRwuoLCe2EE4wfEYkAZhFotg6Kkr8K15pMAGyUyt0smHkZCjLeld0BUzogNtAYJpZIJ2NIJpcISnYxMvim11bHRpYWRkcnO4WgAqNiVib290LTAxLmRvLWFtczMuc2hhcmRzLnRlc3Quc3RhdHVzLmltBnZfACw2JWJvb3QtMDEuZG8tYW1zMy5zaGFyZHMudGVzdC5zdGF0dXMuaW0GAbveA4Jyc40AEAUAAQAgAEAAgAEAiXNlY3AyNTZrMaEC3rRtFQSgc24uWewzXaxTY8hDAHB8sgnxr9k8Rjb5GeSDdGNwgnZfg3VkcIIjKIV3YWt1Mg0"\
--discv5-bootstrap-node="enr:-QEcuEAgXDqrYd_TrpUWtn3zmxZ9XPm7O3GS6lV7aMJJOTsbOAAeQwSd_eoHcCXqVzTUtwTyB4855qtbd8DARnExyqHPAYJpZIJ2NIJpcIQihw1Xim11bHRpYWRkcnO4bAAzNi5ib290LTAxLmdjLXVzLWNlbnRyYWwxLWEuc2hhcmRzLnRlc3Quc3RhdHVzLmltBnZfADU2LmJvb3QtMDEuZ2MtdXMtY2VudHJhbDEtYS5zaGFyZHMudGVzdC5zdGF0dXMuaW0GAbveA4Jyc40AEAUAAQAgAEAAgAEAiXNlY3AyNTZrMaECxjqgDQ0WyRSOilYU32DA5k_XNlDis3m1VdXkK9xM6kODdGNwgnZfg3VkcIIjKIV3YWt1Mg0"\
--discv5-bootstrap-node="enr:-QEcuEAX6Qk-vVAoJLxR4A_4UVogGhvQrqKW4DFKlf8MA1PmCjgowL-LBtSC9BLjXbb8gf42FdDHGtSjEvvWKD10erxqAYJpZIJ2NIJpcIQI2hdMim11bHRpYWRkcnO4bAAzNi5ib290LTAxLmFjLWNuLWhvbmdrb25nLWMuc2hhcmRzLnRlc3Quc3RhdHVzLmltBnZfADU2LmJvb3QtMDEuYWMtY24taG9uZ2tvbmctYy5zaGFyZHMudGVzdC5zdGF0dXMuaW0GAbveA4Jyc40AEAUAAQAgAEAAgAEAiXNlY3AyNTZrMaEDP7CbRk-YKJwOFFM4Z9ney0GPc7WPJaCwGkpNRyla7mCDdGNwgnZfg3VkcIIjKIV3YWt1Mg0"\
${RLN_RELAY_CRED_PATH}\
${RLN_RELAY_CRED_PASSWORD}\
${RLN_RELAY_TREE_PATH}\
${RLN_RELAY_ETH_CLIENT_ADDRESS}\
${DNS_WSS_CMD}\
${NODEKEY}\
${EXTRA_ARGS}

apps/sonda/sonda.py Normal file

@ -0,0 +1,207 @@
import requests
import time
import json
import os
import base64
import sys
import urllib.parse
import argparse
from datetime import datetime
from prometheus_client import Counter, Gauge, start_http_server
# Content topic to which Sonda messages are sent
SONDA_CONTENT_TOPIC = '/sonda/2/polls/proto'
# Prometheus metrics
successful_sonda_msgs = Counter('successful_sonda_msgs', 'Number of successful Sonda messages sent')
failed_sonda_msgs = Counter('failed_sonda_msgs', 'Number of failed Sonda messages attempts')
successful_store_queries = Counter('successful_store_queries', 'Number of successful store queries', ['node'])
failed_store_queries = Counter('failed_store_queries', 'Number of failed store queries', ['node', 'error'])
empty_store_responses = Counter('empty_store_responses', "Number of store responses without the latest Sonda message", ['node'])
store_query_latency = Gauge('store_query_latency', 'Latency of the last store query in seconds', ['node'])
consecutive_successful_responses = Gauge('consecutive_successful_responses', 'Consecutive successful store responses', ['node'])
node_health = Gauge('node_health', "Binary indicator of a node's health. 1 is healthy, 0 is not", ['node'])
# Argparser configuration
parser = argparse.ArgumentParser(description='')
parser.add_argument('-m', '--metrics-port', type=int, default=8004, help='Port to expose prometheus metrics.')
parser.add_argument('-a', '--node-rest-address', type=str, default="http://nwaku:8645", help='Address of the waku node to send messages to.')
parser.add_argument('-p', '--pubsub-topic', type=str, default='/waku/2/rs/1/0', help='PubSub topic.')
parser.add_argument('-d', '--delay-seconds', type=int, default=60, help='Delay in seconds between messages.')
parser.add_argument('-n', '--store-nodes', type=str, required=True, help='Comma separated list of store nodes to query.')
parser.add_argument('-t', '--health-threshold', type=int, default=5, help='Consecutive successful store requests to consider a store node healthy.')
args = parser.parse_args()
# Logs message including current UTC time
def log_with_utc(message):
utc_time = datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S")
print(f"[{utc_time} UTC] {message}")
# Sends Sonda message. Returns True if successful, False otherwise
def send_sonda_msg(rest_address, pubsub_topic, content_topic, timestamp):
message = "Hi, I'm Sonda"
base64_message = base64.b64encode(message.encode('utf-8')).decode('ascii')
body = {
'payload': base64_message,
'contentTopic': content_topic,
'version': 1,
'timestamp': timestamp
}
encoded_pubsub_topic = urllib.parse.quote(pubsub_topic, safe='')
url = f'{rest_address}/relay/v1/messages/{encoded_pubsub_topic}'
headers = {'content-type': 'application/json'}
log_with_utc(f'Sending Sonda message via REST: {url} PubSubTopic: {pubsub_topic}, ContentTopic: {content_topic}, timestamp: {timestamp}')
try:
start_time = time.time()
response = requests.post(url, json=body, headers=headers, timeout=10)
elapsed_seconds = time.time() - start_time
log_with_utc(f'Response from {rest_address}: status:{response.status_code} content:{response.text} [{elapsed_seconds:.4f} s.]')
if response.status_code == 200:
successful_sonda_msgs.inc()
return True
else:
response.raise_for_status()
except requests.RequestException as e:
log_with_utc(f'Error sending request: {e}')
failed_sonda_msgs.inc()
return False
# We return true if both our node and the queried Store node returned a 200
# If our message isn't found but we did get a store 200 response, this function still returns true
def check_store_response(json_response, store_node, timestamp):
# Check for the store node status code
if json_response.get('statusCode') != 200:
error = f"{json_response.get('statusCode')} {json_response.get('statusDesc')}"
log_with_utc(f'Failed performing store query {error}')
failed_store_queries.labels(node=store_node, error=error).inc()
consecutive_successful_responses.labels(node=store_node).set(0)
return False
messages = json_response.get('messages')
# If there's no message in the response, increase counters and return
if not messages:
log_with_utc("No messages in store response")
empty_store_responses.labels(node=store_node).inc()
consecutive_successful_responses.labels(node=store_node).set(0)
return True
# Search for the Sonda message in the returned messages
for message in messages:
# If message field is missing in current message, continue
if not message.get("message"):
log_with_utc("Could not retrieve message")
continue
# If a message is found with the same timestamp as sonda message, increase counters and return
if timestamp == message.get('message').get('timestamp'):
log_with_utc(f'Found Sonda message in store response node={store_node}')
successful_store_queries.labels(node=store_node).inc()
consecutive_successful_responses.labels(node=store_node).inc()
return True
# If our message wasn't found in the returned messages, increase counter and return
empty_store_responses.labels(node=store_node).inc()
consecutive_successful_responses.labels(node=store_node).set(0)
return True
def send_store_query(rest_address, store_node, encoded_pubsub_topic, encoded_content_topic, timestamp):
url = f'{rest_address}/store/v3/messages'
params = {
'peerAddr': urllib.parse.quote(store_node, safe=''),
'pubsubTopic': encoded_pubsub_topic,
'contentTopics': encoded_content_topic,
'includeData': 'true',
'startTime': timestamp
}
s_time = time.time()
try:
log_with_utc(f'Sending store request to {store_node}')
response = requests.get(url, params=params)
except Exception as e:
log_with_utc(f'Error sending request: {e}')
failed_store_queries.labels(node=store_node, error=str(e)).inc()
consecutive_successful_responses.labels(node=store_node).set(0)
return False
elapsed_seconds = time.time() - s_time
log_with_utc(f'Response from {rest_address}: status:{response.status_code} [{elapsed_seconds:.4f} s.]')
if response.status_code != 200:
failed_store_queries.labels(node=store_node, error=f'{response.status_code} {response.content}').inc()
consecutive_successful_responses.labels(node=store_node).set(0)
return False
# Parse REST response into JSON
try:
json_response = response.json()
except Exception as e:
log_with_utc(f'Error parsing response JSON: {e}')
failed_store_queries.labels(node=store_node, error="JSON parse error").inc()
consecutive_successful_responses.labels(node=store_node).set(0)
return False
# Analyze Store response. Return false if response is incorrect or has an error status
if not check_store_response(json_response, store_node, timestamp):
return False
store_query_latency.labels(node=store_node).set(elapsed_seconds)
return True
def send_store_queries(rest_address, store_nodes, pubsub_topic, content_topic, timestamp):
log_with_utc(f'Sending store queries. nodes = {store_nodes} timestamp = {timestamp}')
encoded_pubsub_topic = urllib.parse.quote(pubsub_topic, safe='')
encoded_content_topic = urllib.parse.quote(content_topic, safe='')
for node in store_nodes:
send_store_query(rest_address, node, encoded_pubsub_topic, encoded_content_topic, timestamp)
def main():
log_with_utc(f'Running Sonda with args={args}')
store_nodes = []
if args.store_nodes is not None:
store_nodes = [s.strip() for s in args.store_nodes.split(",")]
log_with_utc(f'Store nodes to query: {store_nodes}')
# Start the Prometheus HTTP server on the port set by the CLI (default 8004)
start_http_server(args.metrics_port)
while True:
timestamp = time.time_ns()
# Send Sonda message
res = send_sonda_msg(args.node_rest_address, args.pubsub_topic, SONDA_CONTENT_TOPIC, timestamp)
log_with_utc(f'sleeping: {args.delay_seconds} seconds')
time.sleep(args.delay_seconds)
# Only send store query if message was successfully published
if res:
send_store_queries(args.node_rest_address, store_nodes, args.pubsub_topic, SONDA_CONTENT_TOPIC, timestamp)
# Update node health metrics
for store_node in store_nodes:
if consecutive_successful_responses.labels(node=store_node)._value.get() >= args.health_threshold:
node_health.labels(node=store_node).set(1)
else:
node_health.labels(node=store_node).set(0)
main()


@ -15,9 +15,11 @@ The following options are available:
-p, --protocol Protocol required to be supported: store,relay,lightpush,filter (can be used
multiple times).
-l, --log-level Sets the log level [=LogLevel.DEBUG].
-np, --node-port Listening port for waku node [=60000].
-np, --node-port Listening port for waku node [=60000].
--websocket-secure-key-path Secure websocket key path: '/path/to/key.txt' .
--websocket-secure-cert-path Secure websocket Certificate path: '/path/to/cert.txt' .
-c, --cluster-id Cluster ID of the fleet node to check status [Default=1]
-s, --shard Shards index to subscribe to topics [ Argument may be repeated ]
```
@ -30,21 +32,31 @@ $ make wakucanary
And used as follows. A reachable node that supports both `store` and `filter` protocols.
```console
$ ./build/wakucanary --address=/ip4/8.210.222.231/tcp/30303/p2p/16Uiu2HAm4v86W3bmT1BiH6oSPzcsSr24iDQpSN5Qa992BCjjwgrD --protocol=store --protocol=filter
$ ./build/wakucanary \
--address=/dns4/store-01.do-ams3.status.staging.status.im/tcp/30303/p2p/16Uiu2HAm3xVDaz6SRJ6kErwC21zBJEZjavVXg7VSkoWzaV1aMA3F \
--protocol=store \
--protocol=filter \
--cluster-id=16 \
--shard=64
$ echo $?
0
```
A node that can't be reached.
```console
$ ./build/wakucanary --address=/ip4/8.210.222.231/tcp/1000/p2p/16Uiu2HAm4v86W3bmT1BiH6oSPzcsSr24iDQpSN5Qa992BCjjwgrD --protocol=store --protocol=filter
$ ./build/wakucanary \
--address=/dns4/store-01.do-ams3.status.staging.status.im/tcp/1000/p2p/16Uiu2HAm3xVDaz6SRJ6kErwC21zBJEZjavVXg7VSkoWzaV1aMA3F \
--protocol=store \
--protocol=filter \
--cluster-id=16 \
--shard=64
$ echo $?
1
```
Note that a domain name can also be used.
```console
$ ./build/wakucanary --address=/dns4/node-01.do-ams3.status.test.statusim.net/tcp/30303/p2p/16Uiu2HAkukebeXjTQ9QDBeNDWuGfbaSg79wkkhK4vPocLgR6QFDf --protocol=store --protocol=filter
--- not defined yet
$ echo $?
0
```
@ -53,4 +65,4 @@ Websockets are also supported. The websocket port openned by waku canary is calc
```console
$ ./build/wakucanary --address=/ip4/127.0.0.1/tcp/7777/ws/p2p/16Uiu2HAm4ng2DaLPniRoZtMQbLdjYYWnXjrrJkGoXWCoBWAdn1tu --protocol=store --protocol=filter
$ ./build/wakucanary --address=/ip4/127.0.0.1/tcp/7777/wss/p2p/16Uiu2HAmB6JQpewXScGoQ2syqmimbe4GviLxRwfsR8dCpwaGBPSE --protocol=store --websocket-secure-key-path=MyKey.key --websocket-secure-cert-path=MyCertificate.crt
```
```


@ -0,0 +1,37 @@
import osproc, os, httpclient, strutils
proc getPublicIP(): string =
let client = newHttpClient()
try:
let response = client.get("http://api.ipify.org")
return response.body
except Exception as e:
echo "Could not fetch public IP: " & e.msg
return "127.0.0.1"
# Function to generate a self-signed certificate
proc generateSelfSignedCertificate*(certPath: string, keyPath: string): int =
# Ensure OpenSSL is installed and available in PATH
if findExe("openssl") == "":
echo "OpenSSL is not installed or not in the PATH."
return 1
let publicIP = getPublicIP()
if publicIP != "127.0.0.1":
echo "Your public IP address is: ", publicIP
# Command to generate private key and cert
let
cmd =
"openssl req -x509 -newkey rsa:4096 -keyout " & keyPath & " -out " & certPath &
" -sha256 -days 3650 -nodes -subj '/C=XX/ST=StateName/L=CityName/O=CompanyName/OU=CompanySectionName/CN=" &
publicIP & "'"
res = execCmd(cmd)
if res == 0:
echo "Successfully generated self-signed certificate and key."
else:
echo "Failed to generate certificate and key."
return res


@ -1,3 +1,4 @@
-d:chronicles_line_numbers
-d:chronicles_runtime_filtering:on
-d:discv5_protocol_id:d5waku
path = "../.."


@ -0,0 +1,50 @@
#!/bin/bash
# This script builds the canary app and performs a basic run that connects to a well-known peer via TCP.
set -e
PEER_ADDRESS="/dns4/store-01.do-ams3.status.staging.status.im/tcp/30303/p2p/16Uiu2HAm3xVDaz6SRJ6kErwC21zBJEZjavVXg7VSkoWzaV1aMA3F"
PROTOCOL="relay"
LOG_DIR="logs"
CLUSTER="16"
SHARD="64"
TIMESTAMP=$(date +"%Y-%m-%d_%H-%M-%S")
LOG_FILE="$LOG_DIR/canary_run_$TIMESTAMP.log"
mkdir -p "$LOG_DIR"
echo "Building Waku Canary app..."
( cd ../../../ && make wakucanary ) >> "$LOG_FILE" 2>&1
echo "Running Waku Canary against:"
echo " Peer : $PEER_ADDRESS"
echo " Protocol: $PROTOCOL"
echo "Log file : $LOG_FILE"
echo "-----------------------------------"
{
echo "=== Canary Run: $TIMESTAMP ==="
echo "Peer : $PEER_ADDRESS"
echo "Protocol : $PROTOCOL"
echo "LogLevel : DEBUG"
echo "-----------------------------------"
../../../build/wakucanary \
--address="$PEER_ADDRESS" \
--protocol="$PROTOCOL" \
--cluster-id="$CLUSTER"\
--shard="$SHARD"\
--log-level=DEBUG
echo "-----------------------------------"
echo "Exit code: $?"
} 2>&1 | tee "$LOG_FILE"
EXIT_CODE=${PIPESTATUS[0]}
if [ $EXIT_CODE -eq 0 ]; then
echo "SUCCESS: Connected to peer and protocol '$PROTOCOL' is supported."
else
echo "FAILURE: Could not connect or protocol '$PROTOCOL' is unsupported."
fi
exit $EXIT_CODE


@ -0,0 +1,46 @@
#!/bin/bash
# === Configuration ===
WAKUCANARY_BINARY="../../../build/wakucanary"
PEER_ADDRESS="/dns4/store-01.do-ams3.status.staging.status.im/tcp/30303/p2p/16Uiu2HAm3xVDaz6SRJ6kErwC21zBJEZjavVXg7VSkoWzaV1aMA3F"
TIMEOUT=5
LOG_LEVEL="info"
PROTOCOLS=("store" "relay" "lightpush" "filter")
# === Logging Setup ===
LOG_DIR="logs"
mkdir -p "$LOG_DIR"
TIMESTAMP=$(date +"%Y-%m-%d_%H-%M-%S")
LOG_FILE="$LOG_DIR/ping_test_$TIMESTAMP.log"
echo "Building Waku Canary app..."
( cd ../../../ && make wakucanary ) >> "$LOG_FILE" 2>&1
echo "Protocol Support Test - $TIMESTAMP" | tee -a "$LOG_FILE"
echo "Peer: $PEER_ADDRESS" | tee -a "$LOG_FILE"
echo "---------------------------------------" | tee -a "$LOG_FILE"
# === Protocol Testing Loop ===
for PROTOCOL in "${PROTOCOLS[@]}"; do
TIMESTAMP=$(date +"%Y-%m-%d_%H-%M-%S")
LOG_FILE="$LOG_DIR/ping_test_${PROTOCOL}_$TIMESTAMP.log"
{
echo "=== Canary Run: $TIMESTAMP ==="
echo "Peer : $PEER_ADDRESS"
echo "Protocol : $PROTOCOL"
echo "LogLevel : DEBUG"
echo "-----------------------------------"
$WAKUCANARY_BINARY \
--address="$PEER_ADDRESS" \
--protocol="$PROTOCOL" \
--log-level=DEBUG
echo "-----------------------------------"
echo "Exit code: $?"
} 2>&1 | tee "$LOG_FILE"
echo "✅ Log saved to: $LOG_FILE"
echo ""
done
echo "All protocol checks completed. Log saved to: $LOG_FILE"


@ -0,0 +1,51 @@
#!/bin/bash
# This script builds the canary app and performs a basic run that connects to a well-known peer over WebSocket.
set -e
PEER_ADDRESS="/ip4/127.0.0.1/tcp/7777/ws/p2p/16Uiu2HAm4ng2DaLPniRoZtMQbLdjYYWnXjrrJkGoXWCoBWAdn1tu"
PROTOCOL="relay"
LOG_DIR="logs"
CLUSTER="16"
SHARD="64"
TIMESTAMP=$(date +"%Y-%m-%d_%H-%M-%S")
LOG_FILE="$LOG_DIR/canary_run_$TIMESTAMP.log"
mkdir -p "$LOG_DIR"
echo "Building Waku Canary app..."
( cd ../../../ && make wakucanary ) >> "$LOG_FILE" 2>&1
echo "Running Waku Canary against:"
echo " Peer : $PEER_ADDRESS"
echo " Protocol: $PROTOCOL"
echo "Log file : $LOG_FILE"
echo "-----------------------------------"
{
echo "=== Canary Run: $TIMESTAMP ==="
echo "Peer : $PEER_ADDRESS"
echo "Protocol : $PROTOCOL"
echo "LogLevel : DEBUG"
echo "-----------------------------------"
../../../build/wakucanary \
--address="$PEER_ADDRESS" \
--protocol="$PROTOCOL" \
--cluster-id="$CLUSTER"\
--shard="$SHARD"\
--log-level=DEBUG
echo "-----------------------------------"
echo "Exit code: $?"
} 2>&1 | tee "$LOG_FILE"
EXIT_CODE=${PIPESTATUS[0]}
if [ $EXIT_CODE -eq 0 ]; then
echo "SUCCESS: Connected to peer and protocol '$PROTOCOL' is supported."
else
echo "FAILURE: Could not connect or protocol '$PROTOCOL' is unsupported."
fi
exit $EXIT_CODE

View File

@ -0,0 +1,43 @@
#!/bin/bash
WAKUCANARY_BINARY="../../../build/wakucanary"
NODE_PORT=60000
WSS_PORT=$((NODE_PORT + 1000))
PEER_ID="16Uiu2HAmB6JQpewXScGoQ2syqmimbe4GviLxRwfsR8dCpwaGBPSE"
PROTOCOL="relay"
KEY_PATH="./certs/client.key"
CERT_PATH="./certs/client.crt"
LOG_DIR="logs"
mkdir -p "$LOG_DIR"
PEER_ADDRESS="/ip4/127.0.0.1/tcp/$WSS_PORT/wss/p2p/$PEER_ID"
TIMESTAMP=$(date +"%Y-%m-%d_%H-%M-%S")
LOG_FILE="$LOG_DIR/wss_cert_test_$TIMESTAMP.log"
echo "Building Waku Canary app..."
( cd ../../../ && make wakucanary ) >> "$LOG_FILE" 2>&1
{
echo "=== Canary WSS + Cert Test ==="
echo "Timestamp : $TIMESTAMP"
echo "Node Port : $NODE_PORT"
echo "WSS Port : $WSS_PORT"
echo "Peer ID : $PEER_ID"
echo "Protocol : $PROTOCOL"
echo "Key Path : $KEY_PATH"
echo "Cert Path : $CERT_PATH"
echo "Address : $PEER_ADDRESS"
echo "------------------------------------------"
"$WAKUCANARY_BINARY" \
--address="$PEER_ADDRESS" \
--protocol="$PROTOCOL" \
--log-level=DEBUG \
--websocket-secure-key-path="$KEY_PATH" \
--websocket-secure-cert-path="$CERT_PATH"
CANARY_STATUS=$?
echo "------------------------------------------"
echo "Exit code: $CANARY_STATUS"
} 2>&1 | tee -a "$LOG_FILE"
echo "✅ Log saved to: $LOG_FILE"

View File

@ -1,77 +1,108 @@
import
std/[strutils, sequtils, tables],
std/[strutils, sequtils, tables, strformat],
confutils,
chronos,
stew/shims/net,
chronicles/topics_registry
chronicles/topics_registry,
os
import
libp2p/protocols/ping,
libp2p/crypto/[crypto, secp],
libp2p/nameresolving/dnsresolver,
libp2p/multicodec
import
../../waku/waku_enr,
../../waku/node/peer_manager,
../../waku/waku_core,
../../waku/waku_node
./certsgenerator,
waku/[waku_enr, node/peer_manager, waku_core, waku_node, factory/builder]
# protocols and their tag
const ProtocolsTable = {
"store": "/vac/waku/store/",
"storev3": "/vac/waku/store-query/3",
"relay": "/vac/waku/relay/",
"lightpush": "/vac/waku/lightpush/",
"filter": "/vac/waku/filter/",
"filter": "/vac/waku/filter-subscribe/2",
"filter-push": "/vac/waku/filter-push/",
"ipfs-id": "/ipfs/id/",
"autonat": "/libp2p/autonat/",
"circuit-relay": "/libp2p/circuit/relay/",
"metadata": "/vac/waku/metadata/",
"rendezvous": "/rendezvous/",
"ipfs-ping": "/ipfs/ping/",
"peer-exchange": "/vac/waku/peer-exchange/",
"mix": "mix/1.0.0",
}.toTable
const WebSocketPortOffset = 1000
const CertsDirectory = "./certs"
# cli flags
type
WakuCanaryConf* = object
address* {.
desc: "Multiaddress of the peer node to attempt to dial",
defaultValue: "",
name: "address",
abbr: "a".}: string
type WakuCanaryConf* = object
address* {.
desc: "Multiaddress of the peer node to attempt to dial",
defaultValue: "",
name: "address",
abbr: "a"
.}: string
timeout* {.
desc: "Timeout to consider that the connection failed",
defaultValue: chronos.seconds(10),
name: "timeout",
abbr: "t".}: chronos.Duration
timeout* {.
desc: "Timeout to consider that the connection failed",
defaultValue: chronos.seconds(10),
name: "timeout",
abbr: "t"
.}: chronos.Duration
protocols* {.
desc: "Protocol required to be supported: store,relay,lightpush,filter (can be used multiple times)",
name: "protocol",
abbr: "p".}: seq[string]
protocols* {.
desc:
"Protocol required to be supported: store,relay,lightpush,filter (can be used multiple times)",
name: "protocol",
abbr: "p"
.}: seq[string]
logLevel* {.
desc: "Sets the log level",
defaultValue: LogLevel.INFO,
name: "log-level",
abbr: "l".}: LogLevel
logLevel* {.
desc: "Sets the log level",
defaultValue: LogLevel.INFO,
name: "log-level",
abbr: "l"
.}: LogLevel
nodePort* {.
desc: "Listening port for waku node",
defaultValue: 60000,
name: "node-port",
abbr: "np".}: uint16
nodePort* {.
desc: "Listening port for waku node",
defaultValue: 60000,
name: "node-port",
abbr: "np"
.}: uint16
## websocket secure config
websocketSecureKeyPath* {.
desc: "Secure websocket key path: '/path/to/key.txt' ",
defaultValue: ""
name: "websocket-secure-key-path".}: string
## websocket secure config
websocketSecureKeyPath* {.
desc: "Secure websocket key path: '/path/to/key.txt' ",
defaultValue: "",
name: "websocket-secure-key-path"
.}: string
websocketSecureCertPath* {.
desc: "Secure websocket Certificate path: '/path/to/cert.txt' ",
defaultValue: ""
name: "websocket-secure-cert-path".}: string
websocketSecureCertPath* {.
desc: "Secure websocket Certificate path: '/path/to/cert.txt' ",
defaultValue: "",
name: "websocket-secure-cert-path"
.}: string
ping* {.
desc: "Ping the peer node to measure latency",
defaultValue: true,
name: "ping" .}: bool
ping* {.
desc: "Ping the peer node to measure latency", defaultValue: true, name: "ping"
.}: bool
shards* {.
desc:
"Shards index to subscribe to [0..NUM_SHARDS_IN_NETWORK-1]. Argument may be repeated.",
defaultValue: @[],
name: "shard",
abbr: "s"
.}: seq[uint16]
clusterId* {.
desc:
"Cluster id that the node is running in. Node in a different cluster id is disconnected.",
defaultValue: 1,
name: "cluster-id",
abbr: "c"
.}: uint16
proc parseCmdArg*(T: type chronos.Duration, p: string): T =
try:
@ -82,47 +113,59 @@ proc parseCmdArg*(T: type chronos.Duration, p: string): T =
proc completeCmdArg*(T: type chronos.Duration, val: string): seq[string] =
return @[]
# checks if the requested protocols (skipping version) are supported in nodeProtocols
proc areProtocolsSupported(
rawProtocols: seq[string],
nodeProtocols: seq[string]): bool =
toValidateProtocols: seq[string], nodeProtocols: seq[string]
): bool =
## Checks if all toValidateProtocols are contained in nodeProtocols.
## nodeProtocols contains the full list of protocols currently advertised by the node under analysis.
## toValidateProtocols contains the protocols, without version number, whose support we want to verify on the node.
var numOfSupportedProt: int = 0
for nodeProtocol in nodeProtocols:
for rawProtocol in rawProtocols:
let protocolTag = ProtocolsTable[rawProtocol]
for rawProtocol in toValidateProtocols:
let protocolTag = ProtocolsTable[rawProtocol]
info "Checking if protocol is supported", expected_protocol_tag = protocolTag
var protocolSupported = false
for nodeProtocol in nodeProtocols:
if nodeProtocol.startsWith(protocolTag):
info "Supported protocol ok", expected = protocolTag,
supported = nodeProtocol
info "The node supports the protocol", supported_protocol = nodeProtocol
numOfSupportedProt += 1
protocolSupported = true
break
if numOfSupportedProt == rawProtocols.len:
if not protocolSupported:
error "The node does not support the protocol", expected_protocol = protocolTag
if numOfSupportedProt == toValidateProtocols.len:
return true
return false
proc pingNode(node: WakuNode, peerInfo: RemotePeerInfo): Future[void] {.async, gcsafe.} =
proc pingNode(
node: WakuNode, peerInfo: RemotePeerInfo
): Future[bool] {.async, gcsafe.} =
try:
let conn = await node.switch.dial(peerInfo.peerId, peerInfo.addrs, PingCodec)
let pingDelay = await node.libp2pPing.ping(conn)
info "Peer response time (ms)", peerId = peerInfo.peerId, ping=pingDelay.millis
info "Peer response time (ms)", peerId = peerInfo.peerId, ping = pingDelay.millis
return true
except CatchableError:
var msg = getCurrentExceptionMsg()
if msg == "Future operation cancelled!":
msg = "timedout"
error "Failed to ping the peer", peer=peerInfo, err=msg
error "Failed to ping the peer", peer = peerInfo, err = msg
return false
proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
let conf: WakuCanaryConf = WakuCanaryConf.load()
# create dns resolver
let
nameServers = @[
initTAddress(ValidIpAddress.init("1.1.1.1"), Port(53)),
initTAddress(ValidIpAddress.init("1.0.0.1"), Port(53))]
nameServers =
@[
initTAddress(parseIpAddress("1.1.1.1"), Port(53)),
initTAddress(parseIpAddress("1.0.0.1"), Port(53)),
]
resolver: DnsResolver = DnsResolver.new(nameServers)
if conf.logLevel != LogLevel.NONE:
@ -140,20 +183,27 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
protocols = conf.protocols,
logLevel = conf.logLevel
let peerRes = parsePeerInfo(conf.address)
if peerRes.isErr():
error "Couldn't parse 'conf.address'", error = peerRes.error
return 1
let peer = peerRes.value
let peer = parsePeerInfo(conf.address).valueOr:
error "Couldn't parse 'conf.address'", error = error
quit(QuitFailure)
let
nodeKey = crypto.PrivateKey.random(Secp256k1, rng[])[]
bindIp = ValidIpAddress.init("0.0.0.0")
bindIp = parseIpAddress("0.0.0.0")
wsBindPort = Port(conf.nodePort + WebSocketPortOffset)
nodeTcpPort = Port(conf.nodePort)
isWs = peer.addrs[0].contains(multiCodec("ws")).get()
isWss = peer.addrs[0].contains(multiCodec("wss")).get()
keyPath =
if conf.websocketSecureKeyPath.len > 0:
conf.websocketSecureKeyPath
else:
CertsDirectory & "/key.pem"
certPath =
if conf.websocketSecureCertPath.len > 0:
conf.websocketSecureCertPath
else:
CertsDirectory & "/cert.pem"
var builder = WakuNodeBuilder.init()
builder.withNodeKey(nodeKey)
@ -161,31 +211,36 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
let netConfig = NetConfig.init(
bindIp = bindIp,
bindPort = nodeTcpPort,
wsBindPort = wsBindPort,
wsBindPort = some(wsBindPort),
wsEnabled = isWs,
wssEnabled = isWss,
)
var enrBuilder = EnrBuilder.init(nodeKey)
let recordRes = enrBuilder.build()
let record =
if recordRes.isErr():
error "failed to create enr record", error=recordRes.error
quit(QuitFailure)
else: recordRes.get()
enrBuilder.withWakuRelaySharding(
RelayShards(clusterId: conf.clusterId, shardIds: conf.shards)
).isOkOr:
error "could not initialize ENR with shards", error
quit(QuitFailure)
if isWss and (conf.websocketSecureKeyPath.len == 0 or
conf.websocketSecureCertPath.len == 0):
error "WebSocket Secure requires key and certificate, see --help"
return 1
let record = enrBuilder.build().valueOr:
error "failed to create enr record", error = error
quit(QuitFailure)
if isWss and
(conf.websocketSecureKeyPath.len == 0 or conf.websocketSecureCertPath.len == 0):
info "WebSocket Secure requires key and certificate. Generating them"
if not dirExists(CertsDirectory):
createDir(CertsDirectory)
if generateSelfSignedCertificate(certPath, keyPath) != 0:
error "Error generating key and certificate"
quit(QuitFailure)
builder.withRecord(record)
builder.withNetworkConfiguration(netConfig.tryGet())
builder.withSwitchConfiguration(
secureKey = some(conf.websocketSecureKeyPath),
secureCert = some(conf.websocketSecureCertPath),
nameResolver = resolver,
secureKey = some(keyPath), secureCert = some(certPath), nameResolver = resolver
)
let node = builder.build().tryGet()
@ -195,34 +250,49 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
await mountLibp2pPing(node)
except CatchableError:
error "failed to mount libp2p ping protocol: " & getCurrentExceptionMsg()
return 1
quit(QuitFailure)
node.mountMetadata(conf.clusterId, conf.shards).isOkOr:
error "failed to mount metadata protocol", error
quit(QuitFailure)
await node.start()
var pingFut:Future[bool]
var pingFut: Future[bool]
if conf.ping:
pingFut = pingNode(node, peer).withTimeout(conf.timeout)
let timedOut = not await node.connectToNodes(@[peer]).withTimeout(conf.timeout)
if timedOut:
error "Timedout after", timeout = conf.timeout
return 1
quit(QuitFailure)
let lp2pPeerStore = node.switch.peerStore
let conStatus = node.peerManager.peerStore[ConnectionBook][peer.peerId]
let conStatus = node.peerManager.switch.peerStore[ConnectionBook][peer.peerId]
var pingSuccess = true
if conf.ping:
discard await pingFut
try:
pingSuccess = await pingFut
except CatchableError as exc:
pingSuccess = false
error "Ping operation failed or timed out", error = exc.msg
if conStatus in [Connected, CanConnect]:
let nodeProtocols = lp2pPeerStore[ProtoBook][peer.peerId]
if not areProtocolsSupported(conf.protocols, nodeProtocols):
error "Not all protocols are supported", expected = conf.protocols,
supported = nodeProtocols
return 1
error "Not all protocols are supported",
expected = conf.protocols, supported = nodeProtocols
quit(QuitFailure)
# Check ping result if ping was enabled
if conf.ping and not pingSuccess:
error "Node is reachable and supports protocols but ping failed - connection may be unstable"
quit(QuitFailure)
elif conStatus == CannotConnect:
error "Could not connect", peerId = peer.peerId
return 1
quit(QuitFailure)
return 0
when isMainModule:
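Taken together with the scripts above, the canary's exit status is the integration point: it returns 0 when the peer is reachable, all requested protocols are supported and, if enabled, the ping succeeds; otherwise it quits with a failure code. A minimal caller, with the address value assumed to be set elsewhere:
if ./build/wakucanary --address="$PEER_ADDRESS" --protocol=relay --cluster-id=16 --shard=64; then
echo "canary check passed"
else
echo "canary check failed"
fi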

View File

@ -1,732 +0,0 @@
when (NimMajor, NimMinor) < (1, 4):
{.push raises: [Defect].}
else:
{.push raises: [].}
import
std/[options, strutils, sequtils],
stew/results,
chronicles,
chronos,
libp2p/crypto/crypto,
libp2p/nameresolving/dnsresolver,
libp2p/protocols/pubsub/gossipsub,
libp2p/peerid,
eth/keys,
json_rpc/rpcserver,
presto,
metrics,
metrics/chronos_httpserver
import
../../waku/common/utils/nat,
../../waku/common/databases/db_sqlite,
../../waku/waku_archive/driver/builder,
../../waku/waku_archive/retention_policy/builder,
../../waku/waku_core,
../../waku/waku_node,
../../waku/node/waku_metrics,
../../waku/node/peer_manager,
../../waku/node/peer_manager/peer_store/waku_peer_storage,
../../waku/node/peer_manager/peer_store/migrations as peer_store_sqlite_migrations,
../../waku/waku_api/message_cache,
../../waku/waku_api/cache_handlers,
../../waku/waku_api/rest/server,
../../waku/waku_api/rest/debug/handlers as rest_debug_api,
../../waku/waku_api/rest/relay/handlers as rest_relay_api,
../../waku/waku_api/rest/filter/legacy_handlers as rest_legacy_filter_api,
../../waku/waku_api/rest/filter/handlers as rest_filter_api,
../../waku/waku_api/rest/lightpush/handlers as rest_lightpush_api,
../../waku/waku_api/rest/store/handlers as rest_store_api,
../../waku/waku_api/rest/health/handlers as rest_health_api,
../../waku/waku_api/rest/admin/handlers as rest_admin_api,
../../waku/waku_api/jsonrpc/admin/handlers as rpc_admin_api,
../../waku/waku_api/jsonrpc/debug/handlers as rpc_debug_api,
../../waku/waku_api/jsonrpc/filter/handlers as rpc_filter_api,
../../waku/waku_api/jsonrpc/relay/handlers as rpc_relay_api,
../../waku/waku_api/jsonrpc/store/handlers as rpc_store_api,
../../waku/waku_archive,
../../waku/waku_dnsdisc,
../../waku/waku_enr,
../../waku/waku_discv5,
../../waku/waku_peer_exchange,
../../waku/waku_rln_relay,
../../waku/waku_store,
../../waku/waku_lightpush,
../../waku/waku_filter,
../../waku/waku_filter_v2,
./wakunode2_validator_signed,
./internal_config,
./external_config
logScope:
topics = "wakunode app"
# Git version in git describe format (defined at compile time)
const git_version* {.strdefine.} = "n/a"
type
App* = object
version: string
conf: WakuNodeConf
netConf: NetConfig
rng: ref HmacDrbgContext
key: crypto.PrivateKey
record: Record
wakuDiscv5: Option[WakuDiscoveryV5]
peerStore: Option[WakuPeerStorage]
dynamicBootstrapNodes: seq[RemotePeerInfo]
node: WakuNode
rpcServer: Option[RpcHttpServer]
restServer: Option[RestServerRef]
metricsServer: Option[MetricsHttpServerRef]
AppResult*[T] = Result[T, string]
func node*(app: App): WakuNode =
app.node
func version*(app: App): string =
app.version
## Initialisation
proc init*(T: type App, rng: ref HmacDrbgContext, conf: WakuNodeConf): T =
let key =
if conf.nodeKey.isSome():
conf.nodeKey.get()
else:
let keyRes = crypto.PrivateKey.random(Secp256k1, rng[])
if keyRes.isErr():
error "failed to generate key", error=keyRes.error
quit(QuitFailure)
keyRes.get()
let netConfigRes = networkConfiguration(conf, clientId)
let netConfig =
if netConfigRes.isErr():
error "failed to create internal config", error=netConfigRes.error
quit(QuitFailure)
else: netConfigRes.get()
var enrBuilder = EnrBuilder.init(key)
enrBuilder.withIpAddressAndPorts(
netConfig.enrIp,
netConfig.enrPort,
netConfig.discv5UdpPort
)
if netConfig.wakuFlags.isSome():
enrBuilder.withWakuCapabilities(netConfig.wakuFlags.get())
enrBuilder.withMultiaddrs(netConfig.enrMultiaddrs)
let topics =
if conf.pubsubTopics.len > 0 or conf.contentTopics.len > 0:
let shardsRes = conf.contentTopics.mapIt(getShard(it))
for res in shardsRes:
if res.isErr():
error "failed to shard content topic", error=res.error
quit(QuitFailure)
let shards = shardsRes.mapIt(it.get())
conf.pubsubTopics & shards
else:
conf.topics
let addShardedTopics = enrBuilder.withShardedTopics(topics)
if addShardedTopics.isErr():
error "failed to add sharded topics to ENR", error=addShardedTopics.error
quit(QuitFailure)
let recordRes = enrBuilder.build()
let record =
if recordRes.isErr():
error "failed to create record", error=recordRes.error
quit(QuitFailure)
else: recordRes.get()
App(
version: git_version,
conf: conf,
netConf: netConfig,
rng: rng,
key: key,
record: record,
node: nil
)
## Peer persistence
const PeerPersistenceDbUrl = "peers.db"
proc setupPeerStorage(): AppResult[Option[WakuPeerStorage]] =
let db = ? SqliteDatabase.new(PeerPersistenceDbUrl)
? peer_store_sqlite_migrations.migrate(db)
let res = WakuPeerStorage.new(db)
if res.isErr():
return err("failed to init peer store" & res.error)
ok(some(res.value))
proc setupPeerPersistence*(app: var App): AppResult[void] =
if not app.conf.peerPersistence:
return ok()
let peerStoreRes = setupPeerStorage()
if peerStoreRes.isErr():
return err("failed to setup peer store" & peerStoreRes.error)
app.peerStore = peerStoreRes.get()
ok()
## Retrieve dynamic bootstrap nodes (DNS discovery)
proc retrieveDynamicBootstrapNodes*(dnsDiscovery: bool, dnsDiscoveryUrl: string, dnsDiscoveryNameServers: seq[ValidIpAddress]): AppResult[seq[RemotePeerInfo]] =
if dnsDiscovery and dnsDiscoveryUrl != "":
# DNS discovery
debug "Discovering nodes using Waku DNS discovery", url=dnsDiscoveryUrl
var nameServers: seq[TransportAddress]
for ip in dnsDiscoveryNameServers:
nameServers.add(initTAddress(ip, Port(53))) # Assume all servers use port 53
let dnsResolver = DnsResolver.new(nameServers)
proc resolver(domain: string): Future[string] {.async, gcsafe.} =
trace "resolving", domain=domain
let resolved = await dnsResolver.resolveTxt(domain)
return resolved[0] # Use only first answer
var wakuDnsDiscovery = WakuDnsDiscovery.init(dnsDiscoveryUrl, resolver)
if wakuDnsDiscovery.isOk():
return wakuDnsDiscovery.get().findPeers()
.mapErr(proc (e: cstring): string = $e)
else:
warn "Failed to init Waku DNS discovery"
debug "No method for retrieving dynamic bootstrap nodes specified."
ok(newSeq[RemotePeerInfo]()) # Return an empty seq by default
proc setupDyamicBootstrapNodes*(app: var App): AppResult[void] =
let dynamicBootstrapNodesRes = retrieveDynamicBootstrapNodes(app.conf.dnsDiscovery,
app.conf.dnsDiscoveryUrl,
app.conf.dnsDiscoveryNameServers)
if dynamicBootstrapNodesRes.isOk():
app.dynamicBootstrapNodes = dynamicBootstrapNodesRes.get()
else:
warn "2/7 Retrieving dynamic bootstrap nodes failed. Continuing without dynamic bootstrap nodes.", error=dynamicBootstrapNodesRes.error
ok()
## Setup DiscoveryV5
proc setupDiscoveryV5*(app: App): WakuDiscoveryV5 =
let dynamicBootstrapEnrs = app.dynamicBootstrapNodes
.filterIt(it.hasUdpPort())
.mapIt(it.enr.get())
var discv5BootstrapEnrs: seq[enr.Record]
# parse enrURIs from the configuration and add the resulting ENRs to the discv5BootstrapEnrs seq
for enrUri in app.conf.discv5BootstrapNodes:
addBootstrapNode(enrUri, discv5BootstrapEnrs)
discv5BootstrapEnrs.add(dynamicBootstrapEnrs)
let discv5Config = DiscoveryConfig.init(app.conf.discv5TableIpLimit,
app.conf.discv5BucketIpLimit,
app.conf.discv5BitsPerHop)
let discv5UdpPort = Port(uint16(app.conf.discv5UdpPort) + app.conf.portsShift)
let discv5Conf = WakuDiscoveryV5Config(
discv5Config: some(discv5Config),
address: app.conf.listenAddress,
port: discv5UdpPort,
privateKey: keys.PrivateKey(app.key.skkey),
bootstrapRecords: discv5BootstrapEnrs,
autoupdateRecord: app.conf.discv5EnrAutoUpdate,
)
WakuDiscoveryV5.new(app.rng, discv5Conf, some(app.record))
## Init waku node instance
proc initNode(conf: WakuNodeConf,
netConfig: NetConfig,
rng: ref HmacDrbgContext,
nodeKey: crypto.PrivateKey,
record: enr.Record,
peerStore: Option[WakuPeerStorage],
dynamicBootstrapNodes: openArray[RemotePeerInfo] = @[]): AppResult[WakuNode] =
## Setup a basic Waku v2 node based on a supplied configuration
## file. Optionally include persistent peer storage.
## No protocols are mounted yet.
var dnsResolver: DnsResolver
if conf.dnsAddrs:
# Support for DNS multiaddrs
var nameServers: seq[TransportAddress]
for ip in conf.dnsAddrsNameServers:
nameServers.add(initTAddress(ip, Port(53))) # Assume all servers use port 53
dnsResolver = DnsResolver.new(nameServers)
var node: WakuNode
let pStorage = if peerStore.isNone(): nil
else: peerStore.get()
# Build waku node instance
var builder = WakuNodeBuilder.init()
builder.withRng(rng)
builder.withNodeKey(nodekey)
builder.withRecord(record)
builder.withNetworkConfiguration(netConfig)
builder.withPeerStorage(pStorage, capacity = conf.peerStoreCapacity)
builder.withSwitchConfiguration(
maxConnections = some(conf.maxConnections.int),
secureKey = some(conf.websocketSecureKeyPath),
secureCert = some(conf.websocketSecureCertPath),
nameResolver = dnsResolver,
sendSignedPeerRecord = conf.relayPeerExchange, # We send our own signed peer record when peer exchange enabled
agentString = some(conf.agentString)
)
builder.withPeerManagerConfig(maxRelayPeers = conf.maxRelayPeers)
node = ? builder.build().mapErr(proc (err: string): string = "failed to create waku node instance: " & err)
ok(node)
proc setupWakuApp*(app: var App): AppResult[void] =
## Discv5
if app.conf.discv5Discovery:
app.wakuDiscV5 = some(app.setupDiscoveryV5())
## Waku node
let initNodeRes = initNode(app.conf, app.netConf, app.rng, app.key, app.record, app.peerStore, app.dynamicBootstrapNodes)
if initNodeRes.isErr():
return err("failed to init node: " & initNodeRes.error)
app.node = initNodeRes.get()
ok()
## Mount protocols
proc setupProtocols(node: WakuNode,
conf: WakuNodeConf,
nodeKey: crypto.PrivateKey):
Future[AppResult[void]] {.async.} =
## Setup configured protocols on an existing Waku v2 node.
## Optionally include persistent message storage.
## No protocols are started yet.
# Mount relay on all nodes
var peerExchangeHandler = none(RoutingRecordsHandler)
if conf.relayPeerExchange:
proc handlePeerExchange(peer: PeerId, topic: string,
peers: seq[RoutingRecordsPair]) {.gcsafe.} =
## Handle peers received via gossipsub peer exchange
# TODO: Only consider peers on pubsub topics we subscribe to
let exchangedPeers = peers.filterIt(it.record.isSome()) # only peers with populated records
.mapIt(toRemotePeerInfo(it.record.get()))
debug "connecting to exchanged peers", src=peer, topic=topic, numPeers=exchangedPeers.len
# asyncSpawn, as we don't want to block here
asyncSpawn node.connectToNodes(exchangedPeers, "peer exchange")
peerExchangeHandler = some(handlePeerExchange)
if conf.relay:
let pubsubTopics =
if conf.pubsubTopics.len > 0 or conf.contentTopics.len > 0:
# TODO autoshard content topics only once.
# Already checked for errors in app.init
let shards = conf.contentTopics.mapIt(getShard(it).expect("Valid Shard"))
conf.pubsubTopics & shards
else:
conf.topics
try:
await mountRelay(node, pubsubTopics, peerExchangeHandler = peerExchangeHandler)
except CatchableError:
return err("failed to mount waku relay protocol: " & getCurrentExceptionMsg())
# Add validation keys to protected topics
for topicKey in conf.protectedTopics:
if topicKey.topic notin pubsubTopics:
warn "protected topic not in subscribed pubsub topics, skipping adding validator",
protectedTopic=topicKey.topic, subscribedTopics=pubsubTopics
continue
notice "routing only signed traffic", protectedTopic=topicKey.topic, publicKey=topicKey.key
node.wakuRelay.addSignedTopicValidator(Pubsubtopic(topicKey.topic), topicKey.key)
# Enable Rendezvous Discovery protocol when Relay is enabled
try:
await mountRendezvous(node)
except CatchableError:
return err("failed to mount waku rendezvous protocol: " & getCurrentExceptionMsg())
# Keepalive mounted on all nodes
try:
await mountLibp2pPing(node)
except CatchableError:
return err("failed to mount libp2p ping protocol: " & getCurrentExceptionMsg())
if conf.rlnRelay:
let rlnConf = WakuRlnConfig(
rlnRelayDynamic: conf.rlnRelayDynamic,
rlnRelayCredIndex: conf.rlnRelayCredIndex,
rlnRelayEthContractAddress: conf.rlnRelayEthContractAddress,
rlnRelayEthClientAddress: conf.rlnRelayEthClientAddress,
rlnRelayCredPath: conf.rlnRelayCredPath,
rlnRelayCredPassword: conf.rlnRelayCredPassword,
rlnRelayTreePath: conf.rlnRelayTreePath,
)
try:
waitFor node.mountRlnRelay(rlnConf)
except CatchableError:
return err("failed to mount waku RLN relay protocol: " & getCurrentExceptionMsg())
if conf.store:
var onErrAction = proc(msg: string) {.gcsafe, closure.} =
## Action to be taken when an internal error occurs during the node run.
## e.g. the connection with the database is lost and not recovered.
error "Unrecoverable error occurred", error = msg
quit(QuitFailure)
# Archive setup
let archiveDriverRes = ArchiveDriver.new(conf.storeMessageDbUrl,
conf.storeMessageDbVacuum,
conf.storeMessageDbMigration,
onErrAction)
if archiveDriverRes.isErr():
return err("failed to setup archive driver: " & archiveDriverRes.error)
let retPolicyRes = RetentionPolicy.new(conf.storeMessageRetentionPolicy)
if retPolicyRes.isErr():
return err("failed to create retention policy: " & retPolicyRes.error)
let mountArcRes = node.mountArchive(archiveDriverRes.get(),
retPolicyRes.get())
if mountArcRes.isErr():
return err("failed to mount waku archive protocol: " & mountArcRes.error)
# Store setup
try:
await mountStore(node)
except CatchableError:
return err("failed to mount waku store protocol: " & getCurrentExceptionMsg())
mountStoreClient(node)
if conf.storenode != "":
let storeNode = parsePeerInfo(conf.storenode)
if storeNode.isOk():
node.peerManager.addServicePeer(storeNode.value, WakuStoreCodec)
else:
return err("failed to set node waku store peer: " & storeNode.error)
# NOTE Must be mounted after relay
if conf.lightpush:
try:
await mountLightPush(node)
except CatchableError:
return err("failed to mount waku lightpush protocol: " & getCurrentExceptionMsg())
if conf.lightpushnode != "":
let lightPushNode = parsePeerInfo(conf.lightpushnode)
if lightPushNode.isOk():
mountLightPushClient(node)
node.peerManager.addServicePeer(lightPushNode.value, WakuLightPushCodec)
else:
return err("failed to set node waku lightpush peer: " & lightPushNode.error)
# Filter setup. NOTE Must be mounted after relay
if conf.filter:
try:
await mountFilter(node, filterTimeout = chronos.seconds(conf.filterTimeout))
except CatchableError:
return err("failed to mount waku filter protocol: " & getCurrentExceptionMsg())
if conf.filternode != "":
let filterNode = parsePeerInfo(conf.filternode)
if filterNode.isOk():
try:
await node.mountFilterClient()
node.peerManager.addServicePeer(filterNode.value, WakuLegacyFilterCodec)
node.peerManager.addServicePeer(filterNode.value, WakuFilterSubscribeCodec)
except CatchableError:
return err("failed to mount waku filter client protocol: " & getCurrentExceptionMsg())
else:
return err("failed to set node waku filter peer: " & filterNode.error)
# waku peer exchange setup
if conf.peerExchangeNode != "" or conf.peerExchange:
try:
await mountPeerExchange(node)
except CatchableError:
return err("failed to mount waku peer-exchange protocol: " & getCurrentExceptionMsg())
if conf.peerExchangeNode != "":
let peerExchangeNode = parsePeerInfo(conf.peerExchangeNode)
if peerExchangeNode.isOk():
node.peerManager.addServicePeer(peerExchangeNode.value, WakuPeerExchangeCodec)
else:
return err("failed to set node waku peer-exchange peer: " & peerExchangeNode.error)
return ok()
proc setupAndMountProtocols*(app: App): Future[AppResult[void]] {.async.} =
return await setupProtocols(
app.node,
app.conf,
app.key
)
## Start node
proc startNode(node: WakuNode, conf: WakuNodeConf,
dynamicBootstrapNodes: seq[RemotePeerInfo] = @[]): Future[AppResult[void]] {.async.} =
## Start a configured node and all mounted protocols.
## Connect to static nodes and start
## keep-alive, if configured.
# Start Waku v2 node
try:
await node.start()
except CatchableError:
return err("failed to start waku node: " & getCurrentExceptionMsg())
# Connect to configured static nodes
if conf.staticnodes.len > 0:
try:
await connectToNodes(node, conf.staticnodes, "static")
except CatchableError:
return err("failed to connect to static nodes: " & getCurrentExceptionMsg())
if dynamicBootstrapNodes.len > 0:
info "Connecting to dynamic bootstrap peers"
try:
await connectToNodes(node, dynamicBootstrapNodes, "dynamic bootstrap")
except CatchableError:
return err("failed to connect to dynamic bootstrap nodes: " & getCurrentExceptionMsg())
# retrieve px peers and add them to the peer store
if conf.peerExchangeNode != "":
let desiredOutDegree = node.wakuRelay.parameters.d.uint64()
await node.fetchPeerExchangePeers(desiredOutDegree)
# Start keepalive, if enabled
if conf.keepAlive:
node.startKeepalive()
# Maintain relay connections
if conf.relay:
node.peerManager.start()
return ok()
proc startApp*(app: App): Future[AppResult[void]] {.async.} =
if app.wakuDiscv5.isSome():
let wakuDiscv5 = app.wakuDiscv5.get()
let res = wakuDiscv5.start()
if res.isErr():
return err("failed to start waku discovery v5: " & $res.error)
asyncSpawn wakuDiscv5.searchLoop(app.node.peerManager)
asyncSpawn wakuDiscv5.subscriptionsListener(app.node.topicSubscriptionQueue)
return await startNode(
app.node,
app.conf,
app.dynamicBootstrapNodes
)
## Monitoring and external interfaces
proc startRestServer(app: App, address: ValidIpAddress, port: Port, conf: WakuNodeConf): AppResult[RestServerRef] =
let server = ? newRestHttpServer(address, port)
## Admin REST API
installAdminApiHandlers(server.router, app.node)
## Debug REST API
installDebugApiHandlers(server.router, app.node)
## Health REST API
installHealthApiHandler(server.router, app.node)
## Relay REST API
if conf.relay:
let cache = MessageCache[string].init(capacity=conf.restRelayCacheCapacity)
let handler = messageCacheHandler(cache)
let autoHandler = autoMessageCacheHandler(cache)
for pubsubTopic in conf.pubsubTopics:
cache.subscribe(pubsubTopic)
app.node.subscribe((kind: PubsubSub, topic: pubsubTopic), some(handler))
for contentTopic in conf.contentTopics:
cache.subscribe(contentTopic)
app.node.subscribe((kind: ContentSub, topic: contentTopic), some(autoHandler))
installRelayApiHandlers(server.router, app.node, cache)
## Filter REST API
if conf.filternode != "" and
app.node.wakuFilterClient != nil and
app.node.wakuFilterClientLegacy != nil:
let legacyFilterCache = rest_legacy_filter_api.MessageCache.init()
rest_legacy_filter_api.installLegacyFilterRestApiHandlers(server.router, app.node, legacyFilterCache)
let filterCache = rest_filter_api.MessageCache.init()
rest_filter_api.installFilterRestApiHandlers(server.router, app.node, filterCache)
## Store REST API
installStoreApiHandlers(server.router, app.node)
## Light push API
if conf.lightpushnode != "" and
app.node.wakuLightpushClient != nil:
rest_lightpush_api.installLightPushRequestHandler(server.router, app.node)
server.start()
info "Starting REST HTTP server", url = "http://" & $address & ":" & $port & "/"
ok(server)
proc startRpcServer(app: App, address: ValidIpAddress, port: Port, conf: WakuNodeConf): AppResult[RpcHttpServer] =
let ta = initTAddress(address, port)
var server: RpcHttpServer
try:
server = newRpcHttpServer([ta])
except CatchableError:
return err("failed to init JSON-RPC server: " & getCurrentExceptionMsg())
installDebugApiHandlers(app.node, server)
if conf.relay:
let cache = MessageCache[string].init(capacity=30)
let handler = messageCacheHandler(cache)
let autoHandler = autoMessageCacheHandler(cache)
for pubsubTopic in conf.pubsubTopics:
cache.subscribe(pubsubTopic)
app.node.subscribe((kind: PubsubSub, topic: pubsubTopic), some(handler))
for contentTopic in conf.contentTopics:
cache.subscribe(contentTopic)
app.node.subscribe((kind: ContentSub, topic: contentTopic), some(autoHandler))
installRelayApiHandlers(app.node, server, cache)
if conf.filternode != "":
let filterMessageCache = rpc_filter_api.MessageCache.init(capacity=30)
installFilterApiHandlers(app.node, server, filterMessageCache)
installStoreApiHandlers(app.node, server)
if conf.rpcAdmin:
installAdminApiHandlers(app.node, server)
server.start()
info "RPC Server started", address=ta
ok(server)
proc startMetricsServer(serverIp: ValidIpAddress, serverPort: Port): AppResult[MetricsHttpServerRef] =
info "Starting metrics HTTP server", serverIp= $serverIp, serverPort= $serverPort
let metricsServerRes = MetricsHttpServerRef.new($serverIp, serverPort)
if metricsServerRes.isErr():
return err("metrics HTTP server start failed: " & $metricsServerRes.error)
let server = metricsServerRes.value
try:
waitFor server.start()
except CatchableError:
return err("metrics HTTP server start failed: " & getCurrentExceptionMsg())
info "Metrics HTTP server started", serverIp= $serverIp, serverPort= $serverPort
ok(server)
proc startMetricsLogging(): AppResult[void] =
startMetricsLog()
ok()
proc setupMonitoringAndExternalInterfaces*(app: var App): AppResult[void] =
if app.conf.rpc:
let startRpcServerRes = startRpcServer(app, app.conf.rpcAddress, Port(app.conf.rpcPort + app.conf.portsShift), app.conf)
if startRpcServerRes.isErr():
error "6/7 Starting JSON-RPC server failed. Continuing in current state.", error=startRpcServerRes.error
else:
app.rpcServer = some(startRpcServerRes.value)
if app.conf.rest:
let startRestServerRes = startRestServer(app, app.conf.restAddress, Port(app.conf.restPort + app.conf.portsShift), app.conf)
if startRestServerRes.isErr():
error "6/7 Starting REST server failed. Continuing in current state.", error=startRestServerRes.error
else:
app.restServer = some(startRestServerRes.value)
if app.conf.metricsServer:
let startMetricsServerRes = startMetricsServer(app.conf.metricsServerAddress, Port(app.conf.metricsServerPort + app.conf.portsShift))
if startMetricsServerRes.isErr():
error "6/7 Starting metrics server failed. Continuing in current state.", error=startMetricsServerRes.error
else:
app.metricsServer = some(startMetricsServerRes.value)
if app.conf.metricsLogging:
let startMetricsLoggingRes = startMetricsLogging()
if startMetricsLoggingRes.isErr():
error "6/7 Starting metrics console logging failed. Continuing in current state.", error=startMetricsLoggingRes.error
ok()
# App shutdown
proc stop*(app: App): Future[void] {.async.} =
if app.restServer.isSome():
await app.restServer.get().stop()
if app.rpcServer.isSome():
await app.rpcServer.get().stop()
if app.metricsServer.isSome():
await app.metricsServer.get().stop()
if app.wakuDiscv5.isSome():
await app.wakuDiscv5.get().stop()
if not app.node.isNil():
await app.node.stop()

View File

@ -1,560 +0,0 @@
import
std/strutils,
stew/results,
chronos,
regex,
confutils,
confutils/defs,
confutils/std/net,
confutils/toml/defs as confTomlDefs,
confutils/toml/std/net as confTomlNet,
libp2p/crypto/crypto,
libp2p/crypto/secp,
libp2p/multiaddress,
nimcrypto/utils,
secp256k1
import
../../waku/common/confutils/envvar/defs as confEnvvarDefs,
../../waku/common/confutils/envvar/std/net as confEnvvarNet,
../../waku/common/logging,
../../waku/waku_enr
export
confTomlDefs,
confTomlNet,
confEnvvarDefs,
confEnvvarNet
type ConfResult*[T] = Result[T, string]
type ProtectedTopic* = object
topic*: string
key*: secp256k1.SkPublicKey
type
WakuNodeConf* = object
configFile* {.
desc: "Loads configuration from a TOML file (cmd-line parameters take precedence)"
name: "config-file" }: Option[InputFile]
## Application-level configuration
protectedTopics* {.
desc: "Topics and its public key to be used for message validation, topic:pubkey. Argument may be repeated."
defaultValue: newSeq[ProtectedTopic](0)
name: "protected-topic" .}: seq[ProtectedTopic]
## Log configuration
logLevel* {.
desc: "Sets the log level for process. Supported levels: TRACE, DEBUG, INFO, NOTICE, WARN, ERROR or FATAL",
defaultValue: logging.LogLevel.INFO,
name: "log-level" .}: logging.LogLevel
logFormat* {.
desc: "Specifies what kind of logs should be written to stdout. Suported formats: TEXT, JSON",
defaultValue: logging.LogFormat.TEXT,
name: "log-format" .}: logging.LogFormat
## General node config
clusterId* {.
desc: "Cluster id that the node is running in. Node in a different cluster id is disconnected."
defaultValue: 0
name: "cluster-id" }: uint32
agentString* {.
defaultValue: "nwaku",
desc: "Node agent string which is used as identifier in network"
name: "agent-string" .}: string
nodekey* {.
desc: "P2P node private key as 64 char hex string.",
name: "nodekey" }: Option[PrivateKey]
listenAddress* {.
defaultValue: defaultListenAddress()
desc: "Listening address for LibP2P (and Discovery v5, if enabled) traffic."
name: "listen-address"}: ValidIpAddress
tcpPort* {.
desc: "TCP listening port."
defaultValue: 60000
name: "tcp-port" }: Port
portsShift* {.
desc: "Add a shift to all port numbers."
defaultValue: 0
name: "ports-shift" }: uint16
nat* {.
desc: "Specify method to use for determining public address. " &
"Must be one of: any, none, upnp, pmp, extip:<IP>."
defaultValue: "any" }: string
extMultiAddrs* {.
desc: "External multiaddresses to advertise to the network. Argument may be repeated."
name: "ext-multiaddr" }: seq[string]
maxConnections* {.
desc: "Maximum allowed number of libp2p connections."
defaultValue: 50
name: "max-connections" }: uint16
maxRelayPeers* {.
desc: "Maximum allowed number of relay peers."
name: "max-relay-peers" }: Option[int]
peerStoreCapacity* {.
desc: "Maximum stored peers in the peerstore."
name: "peer-store-capacity" }: Option[int]
peerPersistence* {.
desc: "Enable peer persistence.",
defaultValue: false,
name: "peer-persistence" }: bool
## DNS addrs config
dnsAddrs* {.
desc: "Enable resolution of `dnsaddr`, `dns4` or `dns6` multiaddrs"
defaultValue: true
name: "dns-addrs" }: bool
dnsAddrsNameServers* {.
desc: "DNS name server IPs to query for DNS multiaddrs resolution. Argument may be repeated."
defaultValue: @[ValidIpAddress.init("1.1.1.1"), ValidIpAddress.init("1.0.0.1")]
name: "dns-addrs-name-server" }: seq[ValidIpAddress]
dns4DomainName* {.
desc: "The domain name resolving to the node's public IPv4 address",
defaultValue: ""
name: "dns4-domain-name" }: string
## Relay config
relay* {.
desc: "Enable relay protocol: true|false",
defaultValue: true
name: "relay" }: bool
relayPeerExchange* {.
desc: "Enable gossipsub peer exchange in relay protocol: true|false",
defaultValue: false
name: "relay-peer-exchange" }: bool
rlnRelay* {.
desc: "Enable spam protection through rln-relay: true|false",
defaultValue: false
name: "rln-relay" }: bool
rlnRelayCredPath* {.
desc: "The path for peristing rln-relay credential",
defaultValue: ""
name: "rln-relay-cred-path" }: string
rlnRelayCredIndex* {.
desc: "the index of the onchain commitment to use",
name: "rln-relay-membership-index" }: Option[uint]
rlnRelayDynamic* {.
desc: "Enable waku-rln-relay with on-chain dynamic group management: true|false",
defaultValue: false
name: "rln-relay-dynamic" }: bool
rlnRelayIdKey* {.
desc: "Rln relay identity secret key as a Hex string",
defaultValue: ""
name: "rln-relay-id-key" }: string
rlnRelayIdCommitmentKey* {.
desc: "Rln relay identity commitment key as a Hex string",
defaultValue: ""
name: "rln-relay-id-commitment-key" }: string
rlnRelayEthClientAddress* {.
desc: "WebSocket address of an Ethereum testnet client e.g., ws://localhost:8540/",
defaultValue: "ws://localhost:8540/"
name: "rln-relay-eth-client-address" }: string
rlnRelayEthContractAddress* {.
desc: "Address of membership contract on an Ethereum testnet",
defaultValue: ""
name: "rln-relay-eth-contract-address" }: string
rlnRelayCredPassword* {.
desc: "Password for encrypting RLN credentials",
defaultValue: ""
name: "rln-relay-cred-password" }: string
rlnRelayTreePath* {.
desc: "Path to the RLN merkle tree sled db (https://github.com/spacejam/sled)",
defaultValue: ""
name: "rln-relay-tree-path" }: string
rlnRelayBandwidthThreshold* {.
desc: "Message rate in bytes/sec after which verification of proofs should happen",
defaultValue: 0 # to maintain backwards compatibility
name: "rln-relay-bandwidth-threshold" }: int
staticnodes* {.
desc: "Peer multiaddr to directly connect with. Argument may be repeated."
name: "staticnode" }: seq[string]
keepAlive* {.
desc: "Enable keep-alive for idle connections: true|false",
defaultValue: false
name: "keep-alive" }: bool
topics* {.
desc: "Default topic to subscribe to. Argument may be repeated. Deprecated! Please use pubsub-topic and/or content-topic instead."
defaultValue: @["/waku/2/default-waku/proto"]
name: "topic" .}: seq[string]
pubsubTopics* {.
desc: "Default pubsub topic to subscribe to. Argument may be repeated."
name: "pubsub-topic" .}: seq[string]
contentTopics* {.
desc: "Default content topic to subscribe to. Argument may be repeated."
name: "content-topic" .}: seq[string]
## Store and message store config
store* {.
desc: "Enable/disable waku store protocol",
defaultValue: false,
name: "store" }: bool
storenode* {.
desc: "Peer multiaddress to query for storage",
defaultValue: "",
name: "storenode" }: string
storeMessageRetentionPolicy* {.
desc: "Message store retention policy. Time retention policy: 'time:<seconds>'. Capacity retention policy: 'capacity:<count>'. Size retention policy: 'size:<xMB/xGB>'. Set to 'none' to disable.",
defaultValue: "time:" & $2.days.seconds,
name: "store-message-retention-policy" }: string
storeMessageDbUrl* {.
desc: "The database connection URL for peristent storage.",
defaultValue: "sqlite://store.sqlite3",
name: "store-message-db-url" }: string
storeMessageDbVacuum* {.
desc: "Enable database vacuuming at start. Only supported by SQLite database engine.",
defaultValue: false,
name: "store-message-db-vacuum" }: bool
storeMessageDbMigration* {.
desc: "Enable database migration at start.",
defaultValue: true,
name: "store-message-db-migration" }: bool
## Filter config
filter* {.
desc: "Enable filter protocol: true|false",
defaultValue: false
name: "filter" }: bool
filternode* {.
desc: "Peer multiaddr to request content filtering of messages.",
defaultValue: ""
name: "filternode" }: string
filterTimeout* {.
desc: "Timeout for filter node in seconds.",
defaultValue: 14400 # 4 hours
name: "filter-timeout" }: int64
## Lightpush config
lightpush* {.
desc: "Enable lightpush protocol: true|false",
defaultValue: false
name: "lightpush" }: bool
lightpushnode* {.
desc: "Peer multiaddr to request lightpush of published messages.",
defaultValue: ""
name: "lightpushnode" }: string
## JSON-RPC config
rpc* {.
desc: "Enable Waku JSON-RPC server: true|false",
defaultValue: true
name: "rpc" }: bool
rpcAddress* {.
desc: "Listening address of the JSON-RPC server.",
defaultValue: ValidIpAddress.init("127.0.0.1")
name: "rpc-address" }: ValidIpAddress
rpcPort* {.
desc: "Listening port of the JSON-RPC server.",
defaultValue: 8545
name: "rpc-port" }: uint16
rpcAdmin* {.
desc: "Enable access to JSON-RPC Admin API: true|false",
defaultValue: false
name: "rpc-admin" }: bool
rpcPrivate* {.
desc: "Enable access to JSON-RPC Private API: true|false",
defaultValue: false
name: "rpc-private" }: bool
## REST HTTP config
rest* {.
desc: "Enable Waku REST HTTP server: true|false",
defaultValue: false
name: "rest" }: bool
restAddress* {.
desc: "Listening address of the REST HTTP server.",
defaultValue: ValidIpAddress.init("127.0.0.1")
name: "rest-address" }: ValidIpAddress
restPort* {.
desc: "Listening port of the REST HTTP server.",
defaultValue: 8645
name: "rest-port" }: uint16
restRelayCacheCapacity* {.
desc: "Capacity of the Relay REST API message cache.",
defaultValue: 30
name: "rest-relay-cache-capacity" }: uint32
restAdmin* {.
desc: "Enable access to REST HTTP Admin API: true|false",
defaultValue: false
name: "rest-admin" }: bool
restPrivate* {.
desc: "Enable access to REST HTTP Private API: true|false",
defaultValue: false
name: "rest-private" }: bool
## Metrics config
metricsServer* {.
desc: "Enable the metrics server: true|false"
defaultValue: false
name: "metrics-server" }: bool
metricsServerAddress* {.
desc: "Listening address of the metrics server."
defaultValue: ValidIpAddress.init("127.0.0.1")
name: "metrics-server-address" }: ValidIpAddress
metricsServerPort* {.
desc: "Listening HTTP port of the metrics server."
defaultValue: 8008
name: "metrics-server-port" }: uint16
metricsLogging* {.
desc: "Enable metrics logging: true|false"
defaultValue: true
name: "metrics-logging" }: bool
## DNS discovery config
dnsDiscovery* {.
desc: "Enable discovering nodes via DNS"
defaultValue: false
name: "dns-discovery" }: bool
dnsDiscoveryUrl* {.
desc: "URL for DNS node list in format 'enrtree://<key>@<fqdn>'",
defaultValue: ""
name: "dns-discovery-url" }: string
dnsDiscoveryNameServers* {.
desc: "DNS name server IPs to query. Argument may be repeated."
defaultValue: @[ValidIpAddress.init("1.1.1.1"), ValidIpAddress.init("1.0.0.1")]
name: "dns-discovery-name-server" }: seq[ValidIpAddress]
## Discovery v5 config
discv5Discovery* {.
desc: "Enable discovering nodes via Node Discovery v5"
defaultValue: false
name: "discv5-discovery" }: bool
discv5UdpPort* {.
desc: "Listening UDP port for Node Discovery v5."
defaultValue: 9000
name: "discv5-udp-port" }: Port
discv5BootstrapNodes* {.
desc: "Text-encoded ENR for bootstrap node. Used when connecting to the network. Argument may be repeated."
name: "discv5-bootstrap-node" }: seq[string]
discv5EnrAutoUpdate* {.
desc: "Discovery can automatically update its ENR with the IP address " &
"and UDP port as seen by other nodes it communicates with. " &
"This option allows to enable/disable this functionality"
defaultValue: false
name: "discv5-enr-auto-update" .}: bool
discv5TableIpLimit* {.
hidden
desc: "Maximum amount of nodes with the same IP in discv5 routing tables"
defaultValue: 10
name: "discv5-table-ip-limit" .}: uint
discv5BucketIpLimit* {.
hidden
desc: "Maximum amount of nodes with the same IP in discv5 routing table buckets"
defaultValue: 2
name: "discv5-bucket-ip-limit" .}: uint
discv5BitsPerHop* {.
hidden
desc: "Kademlia's b variable, increase for less hops per lookup"
defaultValue: 1
name: "discv5-bits-per-hop" .}: int
## waku peer exchange config
peerExchange* {.
desc: "Enable waku peer exchange protocol (responder side): true|false",
defaultValue: false
name: "peer-exchange" }: bool
peerExchangeNode* {.
desc: "Peer multiaddr to send peer exchange requests to. (enables peer exchange protocol requester side)",
defaultValue: ""
name: "peer-exchange-node" }: string
## websocket config
websocketSupport* {.
desc: "Enable websocket: true|false",
defaultValue: false
name: "websocket-support"}: bool
websocketPort* {.
desc: "WebSocket listening port."
defaultValue: 8000
name: "websocket-port" }: Port
websocketSecureSupport* {.
desc: "Enable secure websocket: true|false",
defaultValue: false
name: "websocket-secure-support"}: bool
websocketSecureKeyPath* {.
desc: "Secure websocket key path: '/path/to/key.txt' ",
defaultValue: ""
name: "websocket-secure-key-path"}: string
websocketSecureCertPath* {.
desc: "Secure websocket Certificate path: '/path/to/cert.txt' ",
defaultValue: ""
name: "websocket-secure-cert-path"}: string
## Parsing
# NOTE: Keys are different in nim-libp2p
proc parseCmdArg*(T: type crypto.PrivateKey, p: string): T =
try:
let key = SkPrivateKey.init(utils.fromHex(p)).tryGet()
crypto.PrivateKey(scheme: Secp256k1, skkey: key)
except CatchableError:
raise newException(ValueError, "Invalid private key")
proc completeCmdArg*(T: type crypto.PrivateKey, val: string): seq[string] =
return @[]
proc parseCmdArg*(T: type ProtectedTopic, p: string): T =
let elements = p.split(":")
if elements.len != 2:
raise newException(ValueError, "Invalid format for protected topic expected topic:publickey")
let publicKey = secp256k1.SkPublicKey.fromHex(elements[1])
if publicKey.isErr:
raise newException(ValueError, "Invalid public key")
return ProtectedTopic(topic: elements[0], key: publicKey.get())
proc completeCmdArg*(T: type ProtectedTopic, val: string): seq[string] =
return @[]
proc parseCmdArg*(T: type ValidIpAddress, p: string): T =
try:
ValidIpAddress.init(p)
except CatchableError:
raise newException(ValueError, "Invalid IP address")
proc completeCmdArg*(T: type ValidIpAddress, val: string): seq[string] =
return @[]
proc defaultListenAddress*(): ValidIpAddress =
# TODO: How should we select between IPv4 and IPv6
# Maybe there should be a config option for this.
(static ValidIpAddress.init("0.0.0.0"))
proc parseCmdArg*(T: type Port, p: string): T =
try:
Port(parseInt(p))
except CatchableError:
raise newException(ValueError, "Invalid Port number")
proc completeCmdArg*(T: type Port, val: string): seq[string] =
return @[]
proc parseCmdArg*(T: type Option[int], p: string): T =
try:
some(parseInt(p))
except CatchableError:
raise newException(ValueError, "Invalid number")
proc parseCmdArg*(T: type Option[uint], p: string): T =
try:
some(parseUint(p))
except CatchableError:
raise newException(ValueError, "Invalid unsigned integer")
## Load
proc readValue*(r: var TomlReader, value: var crypto.PrivateKey) {.raises: [SerializationError].} =
try:
value = parseCmdArg(crypto.PrivateKey, r.readValue(string))
except CatchableError:
raise newException(SerializationError, getCurrentExceptionMsg())
proc readValue*(r: var EnvvarReader, value: var crypto.PrivateKey) {.raises: [SerializationError].} =
try:
value = parseCmdArg(crypto.PrivateKey, r.readValue(string))
except CatchableError:
raise newException(SerializationError, getCurrentExceptionMsg())
proc readValue*(r: var TomlReader, value: var ProtectedTopic) {.raises: [SerializationError].} =
try:
value = parseCmdArg(ProtectedTopic, r.readValue(string))
except CatchableError:
raise newException(SerializationError, getCurrentExceptionMsg())
proc readValue*(r: var EnvvarReader, value: var ProtectedTopic) {.raises: [SerializationError].} =
try:
value = parseCmdArg(ProtectedTopic, r.readValue(string))
except CatchableError:
raise newException(SerializationError, getCurrentExceptionMsg())
{.push warning[ProveInit]: off.}
proc load*(T: type WakuNodeConf, version=""): ConfResult[T] =
try:
let conf = WakuNodeConf.load(
version=version,
secondarySources = proc (conf: WakuNodeConf, sources: auto)
{.gcsafe, raises: [ConfigurationError].} =
sources.addConfigFile(Envvar, InputFile("wakunode2"))
if conf.configFile.isSome():
sources.addConfigFile(Toml, conf.configFile.get())
)
ok(conf)
except CatchableError:
err(getCurrentExceptionMsg())
{.pop.}

Some files were not shown because too many files have changed in this diff.