Compare commits


152 Commits

Author SHA1 Message Date
NagyZoltanPeter
75864a705e
Fix websock nimble dependency version restriction to match lock file. (#3829) 2026-04-30 14:20:11 +02:00
Ivan FB
587014e34f
add event_loop_accumulates_lag_secs (#3833) 2026-04-30 00:27:38 +02:00
NagyZoltanPeter
300f584efc
Removed duplicates of announcedAddresses, extMultiaddresses (#3831)
Remove duplicate multiaddresses for the ENR.
Build the ENR record safely.

Co-authored-by: Copilot <copilot@github.com>
2026-04-29 15:10:21 +02:00
osmaczko
5034086fef
Chore/make nix build phase configurable (#3826)
* nix: parameterize build flags with named args

Expose `enablePostgres`, `enableNimDebugDlOpen`, and `chroniclesLogLevel`
as arguments on `nix/default.nix`. Defaults preserve today's hardcoded
behavior, so `nix build .#liblogosdelivery` with no overrides is a
no-op change.

Consume the package via `callPackage` in `flake.nix` so consumers can
use `.override { ... }` without extra wrapping.

* nix: link libstdc++ on Linux so consumers don't need patchelf

Append `stdenv.cc.cc.lib` to `buildInputs` on Linux and add `-lstdc++`
to the Nim `--passL` flags. Nix stdenv's fixupPhase will auto-inject
`${stdenv.cc.cc.lib}/lib` into the output's RUNPATH, so downstream
consumers can drop their patchelf step.

macOS resolves the C++ stdlib via dyld/libc++ and is unaffected.

* nix: bundle librln into the output for a self-contained package

Copy the librln shared library (`librln.so` / `librln.dylib`) from the
zerokit input into `$out/lib` and rewrite the internal reference in
`liblogosdelivery`:

- Darwin: set librln's install name to `@rpath/librln.dylib`, change the
  consumer's reference to match, and add `@loader_path` as an rpath.
- Linux: add `$ORIGIN` to the rpath so `librln.so` resolves from the
  sibling directory, preserving the gcc-lib entry injected by the stdenv
  fixupPhase for libstdc++.

The installed `liblogosdelivery` no longer carries a `/nix/store/...`
absolute path to zerokit, so downstream consumers can ship the bundle
as-is.
2026-04-27 12:51:39 +02:00
Darshan
324048430b
fix: restore -d:postgres in nimble task and propagate NIMFLAGS (#3830) 2026-04-25 00:03:46 +05:30
Fabiana Cecin
ff98d85313
fix: relay validator registration and sync filter (#3823)
* reuse stored validator in relay
* fix skip check in store sync
* increase sync tolerance in test (matches similar test)
2026-04-23 21:02:34 +02:00
NagyZoltanPeter
820ccc6e10
Add CI support for liblogosdelivery, build and artifacts (#3746) 2026-04-23 18:24:55 +02:00
Fabiana Cecin
bb8a7e8782
Fix redundant start/stop calls (#3817)
* remove redundant proto start/stop calls from node start/stop
* fix WakuRelay start/stop not overriding GossipSub start/stop
* replace startRelay with reconnectRelayPeers
2026-04-22 09:52:57 -03:00
Igor Sirotin
4394843299
fix: make update and wakunode2 build on arm64 after Nimble migration (#3814)
Rebuild nat libs (miniupnpc, libnatpmp) for the host architecture during
nimble deps setup. The prebuilt libs from the nimble cache are x86_64 and
fail to link on arm64 (Apple Silicon).

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-21 23:20:53 +02:00
Ivan FB
260def68ad
use EWMA to show main loop lag information (#3808) 2026-04-20 18:05:44 +02:00
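The EWMA smoothing referenced here can be sketched as follows; the function name and alpha value are illustrative assumptions, not taken from the codebase:

```python
# Minimal sketch of EWMA smoothing as a way to report main-loop lag.
def ewma_update(prev: float, sample: float, alpha: float = 0.2) -> float:
    """Exponentially weighted moving average: recent samples weigh more."""
    return alpha * sample + (1.0 - alpha) * prev

# Feed a burst of per-iteration lag samples (seconds) through the filter.
lag = 0.0
for sample in [0.01, 0.01, 0.50, 0.01]:
    lag = ewma_update(lag, sample)
```

A single spike raises the smoothed value but decays geometrically over the following iterations, making the reported lag less noisy than raw per-iteration measurements.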
Ivan FB
cda0197168
use nimble 0.22.3 and more appropriate nimble.lock (#3809) 2026-04-20 13:54:34 +02:00
Fabiana Cecin
9cbb4e7338
fix: prefer --num-shards-in-network over preset (#3816)
* fill numShardsInCluster from preset when builder slot is none
* add regression tests
2026-04-20 13:48:27 +02:00
Fabiana Cecin
9ae108b4a7
Fix peer stats endpoint (#3815) 2026-04-20 08:16:01 -03:00
Fabiana Cecin
ca4dbb19e0
Improve logging of content topic on server (#3818) 2026-04-20 13:05:54 +02:00
NagyZoltanPeter
509c875533
chore: enable postgres support in nix liblogosdelivery build (#3813)
Add -d:postgres and -d:nimDebugDlOpen to both the dynamic and static
nim c invocations in nix/default.nix, matching the POSTGRES=1 flag
already used in the Make-based build path.
2026-04-15 16:12:52 +02:00
Darshan
ecd3758580
Merge pull request #3760 from logos-messaging/release/v0.38 2026-04-14 18:17:49 +05:30
darshankabariya
04b8e8c2a8
chore: update remaining changelog 2026-04-14 18:07:02 +05:30
Gabriel Cruz
166dc69c39
chore: bump nim-jwt version (#3812) 2026-04-13 21:44:30 +02:00
darshankabariya
a4db8895e4
chore: resolving lint 2026-04-10 17:03:25 +05:30
Fabiana Cecin
c04df751db
Fix BearSSL and NAT lib build reproducibility (#3806)
* pass -mssse3 on x86_64 to BearSSL and NAT C lib builds
* add BearSSL.mk and Nat.mk to nimbledeps cache key
2026-04-10 07:38:02 -03:00
Fabiana Cecin
494ea94606
fix: recv_service delivers store-recovered messages (#3805)
* recv_service now delivers store-recovered messages via MessageReceivedEvent
* add regression test_api_receive to prove store recovery actually delivers messages
* fix confusing "UNSUCCESSFUL / Missed message" log message
* removed some dead/duplicated code

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
Co-authored-by: Zoltan Nagy
2026-04-09 14:29:17 -03:00
Ivan FB
ca7ec3de05
add main loop lag monitor (#3803)
* add loop lagging as health status
2026-04-09 16:51:46 +02:00
Ivan FB
4d314b376d
setting num-shards-in-network to 0 by default (#3748)
Co-authored-by: darshankabariya <darshan@status.im>
2026-04-09 11:52:48 +05:30
NagyZoltanPeter
5503529531
chore: Add pre-check of options used in the config JSON for liblogosdelivery before createNode; treat unrecognized options as errors (#3801)
* Add pre-check of options used in the config JSON for logos-delivery-api before createNode; treat unrecognized options as errors
* Collect all unrecognized options and report them at once.
* Refactor json config parsing and error detection
2026-04-09 07:17:17 +02:00
Ivan FB
59bd365c16
setting num-shards-in-network to 0 by default (#3748)
Co-authored-by: darshankabariya <darshan@status.im>
2026-04-08 15:33:16 +02:00
Ivan FB
f5762af4c4
Start using nimble and deprecate vendor dependencies (#3798)
Co-authored-by: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com>
Co-authored-by: Darshan K <35736874+darshankabariya@users.noreply.github.com>
2026-04-08 12:42:14 +02:00
darshankabariya
b2e46b6e91
Merge branch 'master' into release/v0.38 2026-04-08 00:55:39 +05:30
Darshan
9a344553e7
chore: update master changelog after v0.37.4 (#3802) 2026-04-07 18:00:57 +02:00
Danish Arora
549bf8bc43
fix(nix): fetch git submodules automatically via inputs.self (#3738)
The Nix build fails when consumers use `nix build github:logos-messaging/logos-delivery#liblogosdelivery`
without appending `?submodules=1` — vendor/nimbus-build-system is missing,
causing patchShebangs and substituteInPlace to fail.

Two fixes:
1. Add `inputs.self.submodules = true` to flake.nix (Nix >= 2.27) so
   submodules are fetched automatically without requiring callers to
   pass `?submodules=1`.
2. Fix the assertion in nix/default.nix: `(src.submodules or true)`
   always evaluates to true, silently masking the missing-submodules
   error. Changed to `builtins.pathExists` check on the actual
   submodule directory so it fails with a helpful message when
   submodules are genuinely absent.
2026-04-07 13:14:32 +05:30
Fabiana Cecin
56359e49ed
prefer reusing service peers across shards in edge filter reconciliation (#3789)
* selectFilterCandidates prefers peers already serving other shards
* restructure edgeFilterSubLoop (plan all dials then execute) for safety
2026-04-06 11:08:47 -03:00
Darshan
39719e1247
increase default timeout to 20s and add debug logging (#3792) 2026-04-06 15:53:45 +05:30
Fabiana Cecin
b0c0e0b637
chore: optimize release builds for speed (#3735) (#3777)
* Add -flto (lto_incremental, link-time optimization) for release builds
* Add -s (strip symbols) for release builds
* Switch library builds from --opt:size to --opt:speed
* Change the -d:marchOptimized target from broadwell to x86-64-v2
* Remove obsolete chronicles_colors=off for Windows
* Remove obsolete withoutPCRE define
2026-04-02 12:10:02 +02:00
Fabiana Cecin
dc026bbff1
feat: active filter subscription management for edge nodes (#3773)

## Subscription Manager
* edgeFilterSubLoop reconciles desired vs actual filter subscriptions
* edgeFilterHealthLoop pings filter peers, evicts stale ones
* EdgeFilterSubState per-shard tracking of confirmed peers and health
* best-effort unsubscribe on peer removal
* RequestEdgeShardHealth and RequestEdgeFilterPeerCount broker providers

## WakuNode
* Remove old edge health loop (loopEdgeHealth, edgeHealthEvent, calculateEdgeTopicHealth)
* Register MessageSeenEvent push handler on filter client during start
* startDeliveryService now returns `Result[void, string]` and propagates errors

## Health Monitor
* getFilterClientHealth queries RequestEdgeFilterPeerCount via broker
* Shard/content health providers fall back to RequestEdgeShardHealth when relay inactive
* Listen to EventShardTopicHealthChange for health recalculation
* Add missing return p.notReady() on failed edge filter peer count request
* HealthyThreshold constant moved to `connection_status.nim`

## Broker types
* RequestEdgeShardHealth, RequestEdgeFilterPeerCount request types
* EventShardTopicHealthChange event type

## Filter Client
* Add timeout parameter to ping proc

## Tests
* Health monitor event tests with per-node lockNewGlobalBrokerContext
* Edge (light client) health update test
* Edge health driven by confirmed filter subscriptions test
* API subscription tests: sub/receive, failover, peer replacement

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
Co-authored-by: Zoltan Nagy
2026-03-30 08:30:34 -03:00
Ivan FB
0623c10635
completely remove storev2 (#3781) 2026-03-30 00:08:08 +02:00
Ivan FB
5c335c2002
address leftover comments (#3782) 2026-03-27 13:55:27 +01:00
Ivan FB
b1e1c87534
update changelog for v0.37.3 (#3783)
Co-authored-by: darshankabariya <darshan@status.im>
2026-03-27 13:54:06 +01:00
Ivan FB
0b86093247
allow override user-message-rate-limit (#3778) 2026-03-25 13:23:20 +01:00
Ivan Folgueira Bande
6749144739
update change log for v0.37.2 2026-03-24 12:03:59 +01:00
Ivan Folgueira Bande
37f587f057
set default retention policy in archive.nim 2026-03-24 12:03:21 +01:00
Ivan Folgueira Bande
4b5f91c0ce
fix compilation issue in test_node_conf.nim 2026-03-24 12:02:49 +01:00
Ivan Folgueira Bande
a0f134aadb
update changelog for v0.37.2 2026-03-24 09:24:12 +01:00
Ivan FB
d2fdd6ff36
allow union of several retention policies (#3766)
* refactor retention policy to allow union of several retention policies
* bug fix time retention policy
* add removal of orphan partitions if any
* use nim-http-utils 0.4.1
2026-03-24 09:22:43 +01:00
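The "union of several retention policies" semantics can be sketched as follows: a message is deleted if any configured policy marks it for deletion. The policy shapes and names below are illustrative assumptions, not the actual archive API:

```python
# Sketch of union-of-retention-policies semantics (#3766).
from typing import Callable, List, Tuple

Message = Tuple[int, int]  # (id, timestamp in seconds) - illustrative shape

def time_policy(max_age: int, now: int) -> Callable[[Message], bool]:
    # Delete messages older than max_age seconds.
    return lambda m: now - m[1] > max_age

def size_policy(max_count: int, all_msgs: List[Message]) -> Callable[[Message], bool]:
    # Delete everything except the max_count newest messages.
    newest = set(sorted(all_msgs, key=lambda m: m[1])[-max_count:])
    return lambda m: m not in newest

def apply_union(msgs: List[Message],
                policies: List[Callable[[Message], bool]]) -> List[Message]:
    # Keep only messages that no policy wants to delete.
    return [m for m in msgs if not any(p(m) for p in policies)]

msgs = [(1, 100), (2, 900), (3, 990)]
kept = apply_union(msgs, [time_policy(max_age=200, now=1000),
                          size_policy(max_count=2, all_msgs=msgs)])
```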
Ivan FB
6a20ee9c83
Merge pull request #3771 from logos-messaging/merge-v037-branch-into-master
This aims to bring recent fixes added into release/v0.37 branch into master
2026-03-23 12:31:38 +01:00
Ivan Folgueira Bande
de3143e351
set default retention policy in archive.nim 2026-03-20 21:05:42 +01:00
Ivan Folgueira Bande
d9aa46e22f
fix compilation issue in test_node_conf.nim 2026-03-20 16:54:42 +01:00
Ivan Folgueira Bande
d1ac84a80f
Merge branch 'master' into merge-v037-branch-into-master 2026-03-20 11:26:35 +01:00
Ivan Folgueira Bande
4e17776f71
update change log for v0.37.2 2026-03-20 00:18:26 +01:00
Ivan FB
3e7aa18a42
force FINALIZE partition detach after detecting shorter error (#3728) 2026-03-20 00:16:16 +01:00
Ivan Folgueira Bande
84e0af8dc0
update changelog for v0.37.2 2026-03-19 23:09:22 +01:00
Ivan FB
11461aed44
allow union of several retention policies (#3766)
* refactor retention policy to allow union of several retention policies
* bug fix time retention policy
* add removal of orphan partitions if any
* use nim-http-utils 0.4.1
2026-03-19 21:37:04 +01:00
Copilot
85a7bf3322
chore: add .nph.toml to exclude vendor and nimbledeps from nph formatting (#3762)
Co-authored-by: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com>
2026-03-17 14:39:00 +01:00
Fabiana Cecin
614f171626
nim nph 0.7.0 formatting (#3759) 2026-03-17 14:15:35 +01:00
darshankabariya
6030983a83
chore: add v0.38.0 changelog 2026-03-16 14:27:43 +05:30
Ivan Folgueira Bande
fdf4e839ff
use fix branch from nim-http-utils 2026-03-13 23:22:53 +01:00
Ivan FB
96f1c40ab3
simple refactor in waku mix protocol mostly to rm duplicated log (#3752) 2026-03-13 14:33:24 +01:00
Ivan FB
9901e6e368
Merge pull request #3753 from logos-messaging/update-changelog-v0.37.1-master
patch release v0.37.1
2026-03-13 12:46:49 +01:00
Ivan Folgueira Bande
69ede0b7df
rm leftovers in CHANGELOG.md 2026-03-13 12:41:03 +01:00
Ivan FB
6c6a0b503d
Merge branch 'master' into update-changelog-v0.37.1-master 2026-03-13 12:38:50 +01:00
Ivan Folgueira Bande
6a1cf578ef
Revert "Release : patch release v0.37.1-beta (#3661)"
We are going to update the CHANGELOG with another PR today

This reverts commit 868d43164e9b5ad0c3a856e872448e9e80531e0c.
2026-03-13 12:10:40 +01:00
Tanya S
bc9454db5e
Chore: Simplify on chain group manager error handling (#3678) 2026-03-13 01:57:50 +05:30
Ivan FB
03249df715
Add deployment process (#3751) 2026-03-12 19:13:40 +01:00
Ivan FB
a77870782a
Change release process (#3750)
* Simplify release process and leave the DST validation for deployment process
* Rename prepare_full_release.md to prepare_release.md
* Remove release-process.md as it duplicates info and causes confusion
2026-03-12 19:13:09 +01:00
Darshan
1ace0154d3
chore: correct dynamic library extension on mac and update OS detection (#3754) 2026-03-12 23:17:47 +05:30
Ivan Folgueira Bande
ff723a80c5
adapt CHANGELOG to the actual recent releases 2026-03-12 16:03:17 +01:00
Ivan FB
f5fe74f9fe
Merge branch 'master' into update-changelog-v0.37.1-master 2026-03-12 15:51:13 +01:00
Ivan Folgueira Bande
4654975e66
update changelog to avoid IndexDefect exception 2026-03-12 09:31:10 +01:00
Ivan FB
dedc2501db
fix avoid IndexDefect if DB error message is short (#3725) 2026-03-12 09:24:58 +01:00
NagyZoltanPeter
0ad55159b3
liblogosdelivery supports MessageReceivedEvent propagation over FFI. Adjusted example. (#3747) 2026-03-04 09:48:48 +01:00
NagyZoltanPeter
4a6ad73235
Chore: adapt debugapi to wakunodeconf (#3745)
* logosdelivery_get_available_configs collects and formats WakuNodeConf options
* simplify debug config output
2026-03-03 23:48:00 +01:00
NagyZoltanPeter
1f9c4cb8cc
Chore: adapt cli args for delivery api (#3744)
* LogosDeliveryAPI: NodeConfig -> WakuNodeConf + mode selector and logos.dev preset

* Adjustment made on test, logos.dev preset

* change default agentString from nwaku to logos-delivery

* Add p2pReliability switch to presets and make it default to true.

* Borrow the entryNode idea from NodeConfig into WakuNodeConf as an easy shortcut among the different bootstrap node lists, which all need different formats

* Fix rateLimit assignment for builder

* Remove the Core mode default as we already have a default; the user must override

* Removed obsolete API createNode with NodeConfig - tests are refactored for WakuNodeConf usage

* Fix failing test: the twn-clusterid(1) default overwrites maxMessageSize. Fix README.
2026-03-03 19:17:54 +01:00
Ivan FB
09618a2656
Add debug API in liblogosdelivery (#3742) 2026-03-03 13:32:45 +01:00
Fabiana Cecin
7e36e26867
Fix NodeHealthMonitor logspam (#3743) 2026-03-03 12:11:16 +01:00
NagyZoltanPeter
db19da9254
move destroy api to node_api, add some security checks and fix a possible resource leak (#3736) 2026-03-02 18:56:39 +01:00
Fabiana Cecin
51ec09c39d
Implement stateful SubscriptionService for Core mode (#3732)
* SubscriptionManager tracks shard and content topic interest
* RecvService emits MessageReceivedEvent on subscribed content topics
* Route MAPI through old Kernel API relay unique-handler infra to avoid code duplication
* Encode current gen-zero network policy: on Core node boot, subscribe to all pubsub topics (all shards)
* Add test_api_subscriptions.nim (basic relay/core testing only)
* Removed any MAPI Edge sub/unsub/receive support code that was there (will add in next PR)
* Hook MessageSeenEvent to Kernel API bus
* Fix MAPI vs Kernel API unique relay handler support
* RecvService delegating topic subs to SubscriptionManager
* RecvService emits MessageReceivedEvent (fully filtered)
* Rename old SubscriptionManager to LegacySubscriptionManager
2026-03-02 14:52:36 -03:00
NagyZoltanPeter
ba85873f03
health status event support for liblogosdelivery (#3737) 2026-02-26 17:55:31 +01:00
Ivan FB
c7e0cc0eaa
bump nim-metrics to proper tagged commit v0.2.1 (#3734) 2026-02-21 16:24:26 +01:00
Ivan Folgueira Bande
a7872d59d1
add POSTGRES support in nix 2026-02-20 00:19:51 +01:00
Ivan FB
b23e722cb4
bump nim-metrics to latest master (#3730) 2026-02-19 18:11:33 +01:00
Prem Chaitanya Prathi
335600ebcb
feat: waku kademlia integration and mix updates (#3722)
* feat: integrate mix protocol with extended kademlia discovery

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2026-02-19 10:26:17 +05:30
Darshan
2f3f56898f
fix: update release process (#3727) 2026-02-18 11:51:15 +05:30
Ivan FB
f208cb79ed
adjust Dockerfile.lightpushWithMix.compile (#3724) 2026-02-17 20:00:51 +01:00
Ivan FB
895f3e2d36
update after rename to logos-delivery (#3729) 2026-02-17 19:59:45 +01:00
NagyZoltanPeter
3603b838b9
feat: liblogosdelivery FFI library of new API (#3714)
* Initial version of the liblogosdelivery library (static & dynamic), based on the current state of the API.
* nix build support added.
* logosdelivery_example
* Added support for missing logLevel/logFormat in new API create_node
* Added full JSON to NodeConfig support
* Added ctx and ctx.myLib checks to avoid uninitialized calls and crashes. Adjusted logosdelivery_example with proper error handling and JSON config format
* target aware install phase
* Fix base64 decode of payload
2026-02-17 10:38:35 +01:00
Ivan FB
b38b5aaea1
force FINALIZE partition detach after detecting shorter error (#3728) 2026-02-17 00:18:46 +01:00
Ivan FB
8f29070dcf
fix avoid IndexDefect if DB error message is short (#3725) 2026-02-16 11:49:35 +01:00
Ivan FB
eb0c34c553
Adjust docker file to bsd (#3720)
* add libbsd-dev into Dockerfile
* add libstdc++ in Dockerfile
  to avoid runtime error loading shared library libstdc++.so.6:
  No such file or directory (needed by /usr/bin/wakunode)
2026-02-13 12:55:31 +01:00
NagyZoltanPeter
84f791100f
fix: peer selection by shard and rendezvous/metadata sharding initialization (#3718)
* Fix peer selection for cases where the ENR is not yet advertised but metadata exchange has already adjusted the supported shards. Fix rendezvous protocol initialization with configured shards and autoshards so nodes can connect to relay nodes without already having a valid subscribed shard. This lets autoshard nodes connect ahead of subscribing.
* Extend peer selection, rendezvous and metadata tests
* Fix rendezvous test; fix metadata test failing due to wrong setup; added it to all_tests
2026-02-13 11:23:21 +01:00
Fabiana Cecin
1fb4d1eab0
feat: implement Waku API Health spec (#3689)
* Fix protocol strength metric to consider connected peers only
* Remove polling loop; event-driven node connection health updates
* Remove 10s WakuRelay topic health polling loop; now event-driven
* Change NodeHealthStatus to ConnectionStatus
* Change new nodeState (rest API /health) field to connectionStatus
* Add getSyncProtocolHealthInfo and getSyncNodeHealthReport
* Add ConnectionStatusChangeEvent
* Add RequestHealthReport
* Refactor sync/async protocol health queries in the health monitor
* Add EventRelayTopicHealthChange
* Add EventWakuPeer emitted by PeerManager
* Add Edge support for topics health requests and events
* Rename "RelayTopic" -> "Topic"
* Add RequestContentTopicsHealth sync request
* Add EventContentTopicHealthChange
* Rename RequestTopicsHealth -> RequestShardTopicsHealth
* Remove health check gating from checkApiAvailability
* Add basic health smoke tests
* Other misc improvements, refactors, fixes

Co-authored-by: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com>
Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2026-02-12 14:52:39 -03:00
Ivan FB
dd8dc7429d
canary exits with error if ping fails (#3711) 2026-02-11 16:19:58 +01:00
Fabiana Cecin
a8bdbca98a
Simplify NodeHealthMonitor creation (#3716)

* Force NodeHealthMonitor.new() to set up a WakuNode
* Remove all checks for isNil(node) in NodeHealthMonitor
* Fix tests to use the new NodeHealthMonitor.new()

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2026-02-11 10:36:37 -03:00
Darshan
6421685eca
chore: bump v0.38.0 (#3712) 2026-02-10 22:30:57 +01:00
Igor Sirotin
2c2d8e1c15
chore: update license files to comply with Logos licensing requirements 2026-02-05 00:30:16 +00:00
Ivan FB
09034837e6
fix build_rln.sh script (#3704) 2026-01-30 15:35:37 +01:00
Igor Sirotin
77f6bc6d72
docs: update licenses 2026-01-30 11:52:25 +00:00
Ivan FB
beb1dde1b5
force epoll is used in chronos for Android (#3705) 2026-01-30 09:06:51 +01:00
NagyZoltanPeter
1fd25355e0
feat: waku api send (#3669)
* Introduce api/send
Added events and requests for support.
Reworked delivery_monitor into a featured devlivery_service, that
- supports relay publish and lightpush depending on configuration but with fallback options
- if available and configured it utilizes store api to confirm message delivery
- emits message delivery events accordingly

prepare for use in api_example

* Fix edge mode config and test added
* Fix some import issues; start and stop waku shall not throw exceptions but return a result properly
* Utilize sync RequestBroker, adapt to non-async broker usage and gcsafe where appropriate, removed leftovers
* add api_example app to examples2
* Adapt after merge from master
* Adapt code for using broker context
* Fix brokerCtx settings for all used brokers, cover locked node init
* Various fixes upon test failures. Added initial subscribe API and auto-subscribe for the send API
* More tests added
* Fix multi propagate event emit, fix fail send test case
* Fix rebase
* Fix PushMessageHandlers in tests
* adapt libwaku to api changes
* Fix relay test by adapting publish to return an error in the NoPeersToPublish case
* Addressing all remaining review findings. Removed leftovers. Fixed loggings and typos
* Fix rln relay broker, missed brokerCtx
* Fix failing REST relay test, due to publish failing if no peer is available
* ignore anvil test state file
* Make test_wakunode_rln_relay broker context aware to fix it
* Fix waku rln tests by having them broker context aware
* fix typo in test_app.nim
2026-01-30 01:06:00 +01:00
538b279b94
nix: drop unnecessary assert for Android SDK on macOS
Newer nixpkgs should have Android SDK for aarch64.

Signed-off-by: Jakub Sokołowski <jakub@status.im>
2026-01-29 17:26:32 +01:00
361d914f87
nix: pin nixpkgs commit to same as status-go
This avoids fetching different nixpkgs versions.

Signed-off-by: Jakub Sokołowski <jakub@status.im>
2026-01-29 17:26:31 +01:00
Ivan FB
74b19c05d1
simple refactor to reduce PRs CI load (#3701)
* add discord notification in ci-daily
2026-01-29 15:48:34 +01:00
Ivan FB
a02aaab53c
bump nim-ffi to v0.1.3 (#3696) 2026-01-26 16:32:07 +01:00
a561ec3a38
nix: add libwaku target, fix compiling Nim using NBS
Use Nim built by NBS otherwise it doesn't work for both libwaku and
wakucanary.

Referenced issue:
* https://github.com/status-im/status-go/issues/7152
2026-01-20 09:29:07 +01:00
Darshan
91b4c5f52e
fix: store protocol issue in v0.37.0 (#3657) 2026-01-17 17:05:25 +05:30
NagyZoltanPeter
c27405b19c
chore: context aware brokers (#3674)
* Refactor RequestBroker to support context aware use - introduction of BrokerContext
* Context aware extension for EventBroker, EventBroker support for native or external types
* Enhance MultiRequestBroker - similar to RequestBroker and EventBroker - with support for native and external types and context aware execution.
* Move duplicated and common code into broker_utils from event- request- and multi_request_brokers
* Change BrokerContext from random number to counter
* Apply suggestion from @Ivansete-status
Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
* Adjust naming in broker tests
* Follow up adjustment from send_api use

---------

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2026-01-12 15:55:25 +01:00
NagyZoltanPeter
284a0816cc
chore: use chronos' TokenBucket (#3670)
* Adapt to using chronos' TokenBucket. Removed our own TokenBucket and its test. Bump nim-chronos -> nim-libp2p/nim-lsquic/nim-jwt -> adapt to latest libp2p changes
* Fix libp2p/utility reporting an unlisted exception that can occur from closing the socket in waitForService; the -d:ssl compile flag caused it
* Adapt request_limiter to the new chronos TokenBucket replenish algorithm to keep the original intent of use
* Fix filter dos protection test
* Fix peer manager tests due change caused by new libp2p
* Adjust store test rate limit to eliminate CI test flakiness caused by timing
* Adjust store test rate limit to eliminate CI test flakiness caused by timing - lightpush/legacy_lightpush/filter
* Rework filter dos protection test to avoid crazy CI timing causing flakiness in test results compared to local runs
* Rework lightpush dos protection test to avoid crazy CI timing causing flakiness in test results compared to local runs
* Rework lightpush and legacy lightpush rate limit tests to eliminate timing effects in CI that cause longer awaits and thus mint new tokens, unlike local runs
2026-01-07 17:48:19 +01:00
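The token-bucket model being adopted here works roughly as below. Class and parameter names, and the replenish maths, are simplified assumptions for illustration, not the chronos API:

```python
# Illustrative token-bucket sketch of the rate-limiting model (#3670).
class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float) -> None:
        self.capacity = capacity
        self.tokens = float(capacity)  # start full
        self.refill_per_sec = refill_per_sec
        self.last = 0.0  # virtual clock, seconds

    def try_consume(self, n: int, now: float) -> bool:
        # Replenish proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

bucket = TokenBucket(capacity=2, refill_per_sec=1.0)
first = bucket.try_consume(1, now=0.0)   # bucket starts full
second = bucket.try_consume(2, now=0.0)  # only 1 token left
third = bucket.try_consume(2, now=2.0)   # refilled after 2 s
```

The last call also shows why CI timing matters for the test reworks listed above: a longer-than-expected await mints extra tokens, so timing-sensitive assertions behave differently in CI than in local runs.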
Tanya S
a4e44dbe05
chore: Update anvil config (#3662)
* Use anvil config disable-min-priority-fee to prevent gas price doubling

* remove gas limit set in utils->deployContract
2026-01-06 11:35:16 +02:00
Sasha
a865ff72c8
update js-waku repo reference (#3684) 2026-01-06 10:19:37 +01:00
Ivan Folgueira Bande
dafdee9f5f
small refactor README to start using Logos Messaging Nim term 2025-12-29 23:04:24 +01:00
Pablo Lopez
96196ab8bc
feat: compilation for iOS WIP (#3668)
* feat: compilation for iOS WIP

* fix: nim ios version 18
2025-12-22 15:40:09 +02:00
Ivan FB
e3dd6203ae
Start using nim-ffi to implement libwaku (#3656)
* deep changes in libwaku to adapt to nim-ffi
* start using ffi pragma in library
* update some binding examples
* add missing declare_lib.nim file
* properly rename api files in library folder
2025-12-19 17:00:43 +01:00
Tanya S
834eea945d
chore: pin rln dependencies to specific version (#3649)
* Add foundry version in makefile and install scripts

* revert to older version of Anvil for rln tests and anvil_install fix

* pin pnpm version to be installed as rln dep

* source pnpm after new install

* Add to github path

* use npm to install pnpm for rln ci

* Update foundry and pnpm versions in Makefile
2025-12-19 10:55:53 +02:00
Arseniy Klempner
2d40cb9d62
fix: hash inputs for external nullifier, remove length prefix for sha256 (#3660)
* fix: hash inputs for external nullifier, remove length prefix for sha256

* feat: use nimcrypto keccak instead of sha256 ffi

* feat: wrapper function to generate external nullifier
2025-12-17 18:51:10 -08:00
Ivan FB
7c24a15459
simple cleanup rm unused DiscoveryManager from waku.nim (#3671) 2025-12-18 00:07:29 +01:00
Fabiana Cecin
bc5059083e
chore: pin logos-messaging-interop-tests to SMOKE_TEST_STABLE (#3667)
* pin to interop-tests SMOKE_TEST_STABLE
2025-12-16 17:49:03 +01:00
NagyZoltanPeter
3323325526
chore: extend RequestBroker to support native and external types and add the possibility of defining non-async (aka sync) requests for simplicity and performance (#3665)
* chore: extend RequestBroker to support native and external types and add the possibility of defining non-async (aka sync) requests for simplicity and performance
* Adapt gcsafe pragma for RequestBroker sync requests and provider signatures as a requirement
---------

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2025-12-16 02:52:20 +01:00
Fabiana Cecin
2477c4980f
chore: update ci container-image.yml ref to a commit in master (#3666) 2025-12-15 10:33:39 -03:00
Fabiana Cecin
10dc3d3eb4
chore: misc CI fixes (#3664)
* add make update to CI workflow
* add a nwaku -> logos-messaging-nim workflow rename
* pin local container-image.yml workflow to a commit
2025-12-15 09:15:33 -03:00
Ivan FB
9e2b3830e9
Distribute libwaku (#3612)
* allow create libwaku pkg
* fix Makefile create library extension libwaku
* make sure libwaku is built as part of assets
* Makefile: avoid rm libwaku before building it
* properly format debian pkg in gh release workflow
* waku.nimble set dylib extension correctly
* properly pass lib name and ext to waku.nimble
2025-12-15 12:11:11 +01:00
Sergei Tikhomirov
7d1c6abaac
chore: do not mount lightpush without relay (fixes #2808) (#3540)
* chore: do not mount lightpush without relay (fixes #2808)

- Change mountLightPush signature to return Result[void, string]
- Return error when relay is not mounted
- Update all call sites to handle Result return type
- Add test verifying mounting fails without relay
- Only advertise lightpush capability when relay is enabled

* chore: don't mount legacy lightpush without relay
2025-12-11 10:51:47 +01:00
Darshan K
868d43164e
Release : patch release v0.37.1-beta (#3661) 2025-12-10 17:40:42 +05:30
darshankabariya
bbc089f20e
chore: update CHANGELOG for v0.37.1-beta 2025-12-10 17:19:59 +05:30
Fabiana Cecin
4664b96a7a
fix: remove ENR cache from peer exchange (#3652)
* remove WakuPeerExchange.enrCache
* add forEnrPeers to support fast PeerStore search
* add getEnrsFromStore
* fix peer exchange tests
2025-12-10 17:18:56 +05:30
Sergei Tikhomirov
12952d070f
Add text file for coding LLMs with high-level nwaku info and style guide advice (#3624)
* add CLAUDE.md first version

* extract style guide advice

* use AGENTS.md instead of CLAUDE.md for neutrality

* chore: update AGENTS.md w.r.t. master developments

* Apply suggestions from code review

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

* remove project tree from AGENTS.md; minor edits

* Apply suggestions from code review

Co-authored-by: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com>

---------

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
Co-authored-by: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com>
2025-12-09 10:45:06 +01:00
Fabiana Cecin
7920368a36
fix: remove ENR cache from peer exchange (#3652)
* remove WakuPeerExchange.enrCache
* add forEnrPeers to support fast PeerStore search
* add getEnrsFromStore
* fix peer exchange tests
2025-12-08 06:34:57 -03:00
Tanya S
2cf4fe559a
Chore: bump waku-rlnv2-contract-repo commit (#3651)
* Bump commit for vendor wakurlnv2contract

* Update RLN registration proc for contract updates

* add option to runAnvil for state dump or load with optional contract deployment on setup

* Code clean up

* Update rln relay tests to use cached anvil state

* Minor updates to utils and new test for anvil state dump

* stopAnvil needs to wait for graceful shutdown

* configure runAnvil to use load state in other tests

* reduce ci timeout

* Allow for RunAnvil load state file to be compressed

* Fix linting

* Change return type of sendMintCall to Future[void]

* Update naming of ci path for interop tests
2025-12-08 08:29:48 +02:00
Tanya S
a8590a0a7d
chore: Add gasprice overflow check (#3636)
* Check for gasPrice overflow

* use trace for logging and update comments

* Update log level for gas price logs
2025-12-04 10:26:18 +02:00
Ivan FB
8c30a8e1bb
Rest store api constraints default page size to 20 and max to 100 (#3602)
Co-authored-by: Vishwanath Martur <64204611+vishwamartur@users.noreply.github.com>
2025-12-03 11:55:34 +01:00
Fabiana Cecin
54f4ad8fa2
fix: fix .github waku-org/ --> logos-messaging/ (#3653)
* fix: fix .github waku-org/ --> logos-messaging/
* bump CI tests timeout 45 --> 90 minutes
* fix .gitmodules waku-org --> logos-messaging
2025-12-02 11:00:26 -03:00
NagyZoltanPeter
ae74b9018a
chore: Introduce EventBroker, RequestBroker and MultiRequestBroker (#3644)
* Introduce EventBroker and RequestBroker as decoupling helpers that represent reactive (event-driven) and proactive (request/response) patterns without tight coupling between modules
* Address copilot observation: log an error if a listener call raises an exception; handle listener overuse (running out of IDs)
* Address review observations: no exception to leak, listeners must raise no exception, adding listener now reports error with Result.
* Added MultiRequestBroker utility to collect results from many providers
* Support an arbitrary number of arguments for RequestBroker's request/provider signature
* MultiRequestBroker allows provider procs to throw exceptions, which will be handled during request processing.
* MultiRequestBroker supports one zero arg signature and/or multi arg signature
* test no exception leaks from RequestBroker and MultiRequestBroker
* Embed MultiRequestBroker tests into common
* EventBroker: removed all ...Broker typed public procs to simplify the EventBroker interface; forget is renamed to dropListener
* Make Request's broker type private
* MultiRequestBroker: Use explicit returns in generated procs
* Updated descriptions of EventBroker and RequestBroker, updated RequestBroker.setProvider, returns error if already set.
* Better description for MultiRequestBroker and its usage
* Add EventBroker support for ref objects, fix emit variant with event object ctor
* Add RequestBroker support for ref objects
* Add MultiRequestBroker support for ref objects
* Move brokers under waku/common
2025-12-02 00:24:46 +01:00
Darshan K
7eb1fdb0ac
chore: new release process ( beta and full ) (#3647) 2025-12-01 19:03:59 +05:30
Fabiana Cecin
c6cf34df06
feat(tests): robustify waku_rln_relay test utils (#3650) 2025-11-28 14:20:36 -03:00
Sergei Tikhomirov
1e73213a36
chore: Lightpush minor refactor (#3538)
* chore: refactor Lightpush (more DRY)

* chore: apply review suggestions

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>

---------

Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
2025-11-28 10:41:20 +01:00
Ivan FB
c0a7debfd1
Adapt makefile for libwaku windows (#3648) 2025-11-25 10:05:40 +01:00
Ivan FB
454b098ac5
new metric in postgres_driver to estimate payload stats (#3596) 2025-11-24 10:16:37 +01:00
Prem Chaitanya Prathi
088e3108c8
use exit==dest approach for mix (#3642) 2025-11-22 08:11:05 +05:30
Prem Chaitanya Prathi
b0cd75f4cb
feat: update rendezvous to broadcast and discover WakuPeerRecords (#3617)
* update rendezvous to work with WakuPeerRecord and use the updated libp2p version

* split rendezvous client and service implementation

* mount rendezvous client by default
2025-11-21 23:15:12 +05:30
Ivan FB
31e1a81552
nix: add wakucanary Flake package (#3599)
Signed-off-by: Jakub Sokołowski <jakub@status.im>
Co-authored-by: Jakub Sokołowski <jakub@status.im>
2025-11-20 13:40:08 +01:00
Ivan FB
e54851d9d6
fix: admin API peer shards field from metadata protocol (#3594)
* fix: admin API peer shards field from metadata protocol
   Store and return peer shard info from metadata protocol exchange instead of only checking ENR records.
* peer_manager set shard info and extend rest test to validate it

Co-authored-by: MorganaFuture <andrewmochalskyi@gmail.com>
2025-11-20 13:12:16 +01:00
Ivan FB
adeb1a928e
fix: wakucanary now fails correctly when ping fails (#3595)
* wakucanary add some more detail if exception

Co-authored-by: MorganaFuture <andrewmochalskyi@gmail.com>
2025-11-20 08:44:15 +01:00
Darshan K
cd5909fafe
chore: first beta release v0.37.0 (#3607) 2025-11-19 18:53:23 +05:30
Darshan K
ff93643ae9
Merge branch 'master' into release/v0.37 2025-11-17 14:18:05 +05:30
NagyZoltanPeter
1762548741
chore: clarify api folders (#3637)
* Rename waku_api to rest_api and the underlying rest to endpoint for clarity
* Rename node/api to node/kernel_api to suggest that it is an internal accessor to the node interface + make everything compile after renaming
* make waku api a top level import
* fix use of relative path imports; default to root-relative imports for the waku and tools modules
2025-11-15 23:31:09 +01:00
Simon-Pierre Vivier
262d33e394
Disable flaky test (#3585) 2025-10-30 10:53:45 -04:00
Fabiana Cecin
7b580dbf39
chore(refactoring): replace some isErr usage with better alternatives (#3615)
* Closes "apply isOkOr || valueOr approach" (#1969)
2025-10-27 14:07:06 -03:00
Ivan FB
9a341a68e5
use nightly docker rust image to allow release creation (#3628) 2025-10-27 15:06:42 +05:30
36bc01ac0d
ci: move builds to a container
Referenced issue:
* https://github.com/status-im/infra-ci/issues/188
2025-10-23 11:23:55 +02:00
Prem Chaitanya Prathi
8be45180aa
removing mix repo as dependency and using mix from libp2p repo (#3632)
* use released version of libp2p 1.14.2
2025-10-23 10:00:11 +05:30
Ivan FB
9808e205af
use nightly docker rust image to allow release creation (#3628) 2025-10-18 19:08:57 +05:30
Darshan K
59045461d3
Merge branch 'master' into release/v0.37 2025-10-17 15:27:22 +05:30
darshankabariya
9223fac807
chore: update changelog 2025-10-17 15:26:10 +05:30
Darshan K
7a009c8b27
bump libp2p ( v1.14.0 ) (#3627) 2025-10-17 11:49:28 +02:00
Darshan K
9d53867ec2
Merge branch 'master' into release/v0.37 2025-10-17 14:47:13 +05:30
Darshan K
deebee45d7
feat: stateless RLN ( bump v0.9.0 ) (#3621) 2025-10-15 19:08:46 +05:30
530 changed files with 25411 additions and 26800 deletions


@ -1,7 +1,7 @@
 ---
 name: Bump dependencies
-about: Bump vendor dependencies for release
+about: Bump dependencies for release
-title: 'Bump vendor dependencies for release 0.0.0'
+title: 'Bump dependencies for release 0.X.0'
 labels: dependencies
 assignees: ''
@ -9,40 +9,10 @@ assignees: ''
 <!-- Add appropriate release number to title! -->
-Update `nwaku` "vendor" dependencies.
-
-### Items to bump
-- [ ] dnsclient.nim ( update to the latest tag version )
-- [ ] nim-bearssl
-- [ ] nimbus-build-system
-- [ ] nim-chronicles
-- [ ] nim-chronos
-- [ ] nim-confutils
-- [ ] nimcrypto
-- [ ] nim-dnsdisc
-- [ ] nim-eth
-- [ ] nim-faststreams
-- [ ] nim-http-utils
-- [ ] nim-json-rpc
-- [ ] nim-json-serialization
-- [ ] nim-libbacktrace
-- [ ] nim-libp2p ( update to the latest tag version )
-- [ ] nim-metrics
-- [ ] nim-nat-traversal
-- [ ] nim-presto
-- [ ] nim-regex ( update to the latest tag version )
-- [ ] nim-results
-- [ ] nim-secp256k1
-- [ ] nim-serialization
-- [ ] nim-sqlite3-abi ( update to the latest tag version )
-- [ ] nim-stew
-- [ ] nim-stint
-- [ ] nim-taskpools ( update to the latest tag version )
-- [ ] nim-testutils ( update to the latest tag version )
-- [ ] nim-toml-serialization
-- [ ] nim-unicodedb
-- [ ] nim-unittest2 ( update to the latest tag version )
-- [ ] nim-web3 ( update to the latest tag version )
-- [ ] nim-websock ( update to the latest tag version )
-- [ ] nim-zlib
-- [ ] zerokit ( this should be kept in version `v0.7.0` )
+### Bumped items
+- [ ] Update nimble dependencies
+  1. Edit manually waku.nimble. For some dependencies, we want to bump versions manually and use a pinned version, f.e., nim-libp2p and all its dependencies.
+  2. Run `nimble lock` (make sure `nimble --version` shows the Nimble version pinned in waku.nimble)
+  3. Run `./tools/gen-nix-deps.sh nimble.lock nix/deps.nix` to update nix deps
+- [ ] Update vendor/zerokit dependency.
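The nimble-bump steps in the template above can be sketched as a dry-run shell sequence. The `run` wrapper here is hypothetical (it only echoes each command); the command names themselves come from the template:

```shell
# Dry-run sketch of the dependency-bump steps from the template above.
# `run` is a hypothetical wrapper that only echoes; swap its body for
# eval "$@" (with the tools installed) to execute for real.
run() { echo "+ $*"; }

# Step 1 is manual: pin versions by hand in waku.nimble (e.g. nim-libp2p).
run nimble --version                                   # must match the version pinned in waku.nimble
run nimble lock                                        # regenerate nimble.lock
run ./tools/gen-nix-deps.sh nimble.lock nix/deps.nix   # refresh nix/deps.nix from the lock file
```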


@ -0,0 +1,55 @@
---
name: Deploy Release
about: Execute tasks for deploying a new version in a fleet
title: 'Deploy release vX.X.X in waku.sandbox and/or status.prod fleet'
labels: deploy-release
assignees: ''
---
<!--
Add appropriate release number and adjust the target fleet in the title!
-->
### Link to the Release PR
<!--
Kindly add a link to the release PR where we have a sign-off from QA. At this time, that release PR should be already merged.
-->
### Items to complete, in order
<!--
You can release into waku.sandbox, status.prod, or both. Both cases require coordination with the Infra Team.
waku.sandbox must be considered a prod fleet as it is used by external parties.
For status.prod it is crucial to coordinate such a deployment with the Status Team.
The following points should be followed in order.
-->
- [ ] Receive sign-off from DST.
- [ ] Inform the DST team about the expectations for this release; for example, whether we expect higher, the same, or lower bandwidth consumption, or whether a new protocol appears.
- [ ] Ask DST to add a comment approving this deployment and add a link to the analysis report.
- [ ] Deploy to waku.sandbox
- [ ] Coordinate with Infra Team about possible changes in CI behavior
- [ ] Update waku.sandbox with [this deployment job](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-sandbox/).
- [ ] Deploy to status.prod
- [ ] Coordinate with Infra Team about possible changes in CI behavior
- [ ] Ask a Status admin to add a comment approving this deployment to happen now.
- [ ] Update status.prod with [this deployment job](https://ci.infra.status.im/job/nim-waku/job/deploy-status-prod/).
- [ ] Update infra config
- [ ] Submit PRs into infra repos to adjust deprecated or changed arguments (review CHANGELOG.md for that release), and confirm the fleet can run after that. This requires coordination with the infra team.
### Reference Links
- [Release process](https://github.com/logos-messaging/logos-delivery/blob/master/docs/contributors/release-process.md)
- [Release notes](https://github.com/logos-messaging/logos-delivery/blob/master/CHANGELOG.md)
- [Infra-role-nim-waku](https://github.com/status-im/infra-role-nim-waku)
- [Infra-waku](https://github.com/status-im/infra-waku)
- [Infra-Status](https://github.com/status-im/infra-status)
- [Jenkins](https://ci.infra.status.im/job/nim-waku/)
- [Fleets](https://fleets.waku.org/)
- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab)
- [Kibana](https://kibana.infra.status.im/app/)


@ -1,8 +1,8 @@
 ---
-name: Prepare release
+name: Prepare Release
-about: Execute tasks for the creation and publishing of a new release
+about: Execute tasks for the creation and publishing of a new full release
 title: 'Prepare release 0.0.0'
-labels: release
+labels: full-release
 assignees: ''
 ---
@ -10,63 +10,70 @@ assignees: ''
 <!--
 Add appropriate release number to title!
-For detailed info on the release process refer to https://github.com/waku-org/nwaku/blob/master/docs/contributors/release-process.md
+For detailed info on the release process refer to https://github.com/logos-messaging/logos-delivery/blob/master/docs/contributors/release-process.md
 -->
 ### Items to complete
 All items below are to be completed by the owner of the given release.
-- [ ] Create release branch
-- [ ] Assign release candidate tag to the release branch HEAD. e.g. v0.30.0-rc.0
-- [ ] Generate and edit releases notes in CHANGELOG.md
-- [ ] Review possible update of [config-options](https://github.com/waku-org/docs.waku.org/blob/develop/docs/guides/nwaku/config-options.md)
+- [ ] Create release branch with major and minor only ( e.g. release/v0.X ) if it doesn't exist.
+- [ ] Update the `version` field in `waku.nimble` to match the release version (e.g. `version = "0.X.0"`).
+- [ ] Assign release candidate tag to the release branch HEAD (e.g. `v0.X.0-rc.0`, `v0.X.0-rc.1`, ... `v0.X.0-rc.N`).
+- [ ] Generate and edit release notes in CHANGELOG.md.
 - [ ] _End user impact_: Summarize impact of changes on Status end users (can be a comment in this issue).
-- [ ] **Validate release candidate**
-  - [ ] Bump nwaku dependency in [waku-rust-bindings](https://github.com/waku-org/waku-rust-bindings) and make sure all examples and tests work
-  - [ ] Automated testing
-    - [ ] Ensures js-waku tests are green against release candidate
-  - [ ] Ask Vac-QA and Vac-DST to perform available tests against release candidate
-    - [ ] Vac-QA
-    - [ ] Vac-DST (we need additional report. see [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f))
-  - [ ] **On Waku fleets**
-    - [ ] Lock `waku.test` fleet to release candidate version
-    - [ ] Continuously stress `waku.test` fleet for a week (e.g. from `wakudev`)
-    - [ ] Search _Kibana_ logs from the previous month (since last release was deployed), for possible crashes or errors in `waku.test` and `waku.sandbox`.
-      - Most relevant logs are `(fleet: "waku.test" OR fleet: "waku.sandbox") AND message: "SIGSEGV"`
-    - [ ] Run release candidate with `waku-simulator`, ensure that nodes connected to each other
-    - [ ] Unlock `waku.test` to resume auto-deployment of latest `master` commit
-  - [ ] **On Status fleet**
-    - [ ] Deploy release candidate to `status.staging`
+- [ ] **Validation of release candidate**
+  - [ ] **Automated testing**
+    - [ ] Ensure all the unit tests (specifically logos-messaging-js tests) are green against the release candidate.
+  - [ ] **QA testing**
+    - [ ] Ask QA to run their available tests against the release candidate.
+  - [ ] **Waku fleet testing**
+    - [ ] Deploy the release candidate to `waku.test` fleet.
+      - Start the [deployment job](https://ci.infra.status.im/job/nim-waku/) and wait for it to finish (Jenkins access required; ask the infra team if you don't have it).
+      - After completion, disable fleet so that daily CI does not override your release candidate.
+      - Verify at https://fleets.waku.org/ that the fleet is locked to the release candidate image.
+      - Confirm the container image exists on [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab).
+    - [ ] Search [Kibana logs](https://kibana.infra.status.im/app/discover) from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test`.
+      - Set time range to "Last 30 days" (or since last release).
+      - Most relevant search query: `(fleet: "waku.test" AND message: "SIGSEGV")`, `(fleet: "waku.test" AND message: "exception")`, `(fleet: "waku.test" AND message: "error")`.
+      - Document any crashes or errors found.
+    - [ ] Ask QA to perform tests against `waku.test`, if any. Then, after that, review Kibana for possible issues or unexpected restart.
+    - [ ] Enable the `waku.test` fleet again to resume auto-deployment of the latest `master` commit.
+  - [ ] **Status testing**
+    - [ ] Get QA approval to deploy a new version in `status.staging`.
+    - [ ] Deploy release candidate to `status.staging`.
     - [ ] Perform [sanity check](https://www.notion.so/How-to-test-Nwaku-on-Status-12c6e4b9bf06420ca868bd199129b425) and log results as comments in this issue.
-      - [ ] Connect 2 instances to `status.staging` fleet, one in relay mode, the other one in light client.
-        - [ ] 1:1 Chats with each other
-        - [ ] Send and receive messages in a community
-        - [ ] Close one instance, send messages with second instance, reopen first instance and confirm messages sent while offline are retrieved from store
-    - [ ] Perform checks based _end user impact_
-    - [ ] Inform other (Waku and Status) CCs to point their instance to `status.staging` for a few days. Ping Status colleagues from their Discord server or [Status community](https://status.app/c/G3kAAMSQtb05kog3aGbr3kiaxN4tF5xy4BAGEkkLwILk2z3GcoYlm5hSJXGn7J3laft-tnTwDWmYJ18dP_3bgX96dqr_8E3qKAvxDf3NrrCMUBp4R9EYkQez9XSM4486mXoC3mIln2zc-TNdvjdfL9eHVZ-mGgs=#zQ3shZeEJqTC1xhGUjxuS4rtHSrhJ8vUYp64v6qWkLpvdy9L9) (not blocking point.)
-    - [ ] Ask Status-QA to perform sanity checks (as described above) + checks based on _end user impact_; do specify the version being tested
-    - [ ] Ask Status-QA or infra to run the automated Status e2e tests against `status.staging`
-    - [ ] Get other CCs sign-off: they comment on this PR "used app for a week, no problem", or problem reported, resolved and new RC
-  - [ ] **Get Status-QA sign-off**. Ensuring that `status.test` update will not disturb ongoing activities.
+      - [ ] Connect 2 instances to `status.staging` fleet, one in relay mode, the other one in light client mode.
+        - 1:1 Chats with each other.
+        - Send and receive messages in a community.
+        - Close one instance, send messages with second instance, reopen first instance and confirm messages sent while offline are retrieved from store.
+    - [ ] Perform checks based on _end user impact_
+    - [ ] Inform other (Waku and Status) CCs to point their instances to `status.staging` for a few days. Ping Status colleagues on their Discord server or in the [Status community](https://status.app/c/G3kAAMSQtb05kog3aGbr3kiaxN4tF5xy4BAGEkkLwILk2z3GcoYlm5hSJXGn7J3laft-tnTwDWmYJ18dP_3bgX96dqr_8E3qKAvxDf3NrrCMUBp4R9EYkQez9XSM4486mXoC3mIln2zc-TNdvjdfL9eHVZ-mGgs=#zQ3shZeEJqTC1xhGUjxuS4rtHSrhJ8vUYp64v6qWkLpvdy9L9) (this is not a blocking point.)
+    - [ ] Ask QA to perform sanity checks (as described above) and checks based on _end user impact_; specify the version being tested
+    - [ ] Ask QA or infra to run the automated Status e2e tests against `status.staging`
+    - [ ] Get other CCs' sign-off: they should comment on this PR, e.g., "Used the app for a week, no problem." If problems are reported, resolve them and create a new RC.
 - [ ] **Proceed with release**
-  - [ ] Assign a release tag to the same commit that contains the validated release-candidate tag
-  - [ ] Create GitHub release
-  - [ ] Deploy the release to DockerHub
-  - [ ] Announce the release
-- [ ] **Promote release to fleets**.
-  - [ ] Update infra config with any deprecated arguments or changed options
-  - [ ] [Deploy final release to `waku.sandbox` fleet](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-sandbox)
-  - [ ] [Deploy final release to `status.staging` fleet](https://ci.infra.status.im/job/nim-waku/job/deploy-shards-staging/)
-  - [ ] [Deploy final release to `status.prod` fleet](https://ci.infra.status.im/job/nim-waku/job/deploy-shards-test/)
-- [ ] **Post release**
-  - [ ] Submit a PR from the release branch to master. Important to commit the PR with "create a merge commit" option.
-  - [ ] Update waku-org/nwaku-compose with the new release version.
-  - [ ] Update version in js-waku repo. [update only this](https://github.com/waku-org/js-waku/blob/7c0ce7b2eca31cab837da0251e1e4255151be2f7/.github/workflows/ci.yml#L135) by submitting a PR.
+  - [ ] Assign a final release tag (`v0.X.0`) to the same commit that contains the validated release-candidate tag (e.g. `git tag -as v0.X.0 -m "final release."`).
+  - [ ] Update [logos-delivery-compose](https://github.com/logos-messaging/logos-delivery-compose) and [logos-delivery-simulator](https://github.com/logos-messaging/logos-delivery-simulator) according to the new release.
+  - [ ] Bump logos-delivery dependency in [logos-delivery-rust-bindings](https://github.com/logos-messaging/logos-delivery-rust-bindings) and make sure all examples and tests work.
+  - [ ] Bump logos-delivery dependency in [logos-delivery-go-bindings](https://github.com/logos-messaging/logos-delivery-go-bindings) and make sure all tests work.
+  - [ ] Create GitHub release (https://github.com/logos-messaging/logos-delivery/releases).
+  - [ ] Submit a PR to merge the release branch back to `master`. Make sure you use the option "Merge pull request (Create a merge commit)" to perform the merge. Ping repo admin if this option is not available.
+  - [ ] Create a deployment issue with the recently created release.
+### Links
+- [Release process](https://github.com/logos-messaging/logos-delivery/blob/master/docs/contributors/release-process.md)
+- [Release notes](https://github.com/logos-messaging/logos-delivery/blob/master/CHANGELOG.md)
+- [Fleet ownership](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64)
+- [Infra-nim-waku](https://github.com/status-im/infra-nim-waku)
+- [Jenkins](https://ci.infra.status.im/job/nim-waku/)
+- [Fleets](https://fleets.waku.org/)
+- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab)
+- [Kibana](https://kibana.infra.status.im/app/)
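The tag scheme in the checklist above (release branch `release/v0.X`, candidates `v0.X.0-rc.N`, then a final `v0.X.0` on the validated RC commit) can be sketched as follows; `MINOR` and `RC` are invented example values, not taken from any real release:

```shell
# Hypothetical example of the release tag naming described above.
MINOR=38            # invented example minor version
RC=1                # invented example RC number
BRANCH="release/v0.${MINOR}"
RC_TAG="v0.${MINOR}.0-rc.${RC}"
FINAL_TAG="v0.${MINOR}.0"
echo "$BRANCH"      # → release/v0.38
echo "$RC_TAG"      # → v0.38.0-rc.1
echo "$FINAL_TAG"   # → v0.38.0
# The final tag goes on the validated RC commit, as in the checklist item:
# git tag -as "$FINAL_TAG" -m "final release."
```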

.github/workflows/ci-daily.yml (new file, 79 lines)

@ -0,0 +1,79 @@
name: Daily logos-delivery CI

on:
  schedule:
    - cron: '30 6 * * *'

env:
  NPROC: 2
  MAKEFLAGS: "-j${NPROC}"
  NIMFLAGS: "--parallelBuild:${NPROC} --colors:off -d:chronicles_colors:none"

jobs:
  build:
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-22.04, macos-15]
    runs-on: ${{ matrix.os }}
    timeout-minutes: 45
    name: build-${{ matrix.os }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Get submodules hash
        id: submodules
        run: |
          echo "hash=$(git submodule status | awk '{print $1}' | sort | shasum -a 256 | sed 's/[ -]*//g')" >> $GITHUB_OUTPUT

      - name: Cache submodules
        uses: actions/cache@v3
        with:
          path: |
            vendor/
            .git/modules
          key: ${{ runner.os }}-vendor-modules-${{ steps.submodules.outputs.hash }}

      - name: Make update
        run: make update

      - name: Build binaries
        run: make V=1 examples tools

      - name: Notify Discord
        if: always()
        env:
          DISCORD_WEBHOOK_URL: ${{ secrets.DISCORD_WEBHOOK_URL }}
        run: |
          STATUS="${{ job.status }}"
          OS="${{ matrix.os }}"
          REPO="${{ github.repository }}"
          RUN_URL="https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}"
          if [ "$STATUS" = "success" ]; then
            COLOR=3066993
            TITLE="✅ CI Success"
          else
            COLOR=15158332
            TITLE="❌ CI Failed"
          fi
          curl -H "Content-Type: application/json" \
            -X POST \
            -d "{
              \"embeds\": [{
                \"title\": \"$TITLE\",
                \"color\": $COLOR,
                \"fields\": [
                  {\"name\": \"Repository\", \"value\": \"$REPO\", \"inline\": true},
                  {\"name\": \"OS\", \"value\": \"$OS\", \"inline\": true},
                  {\"name\": \"Status\", \"value\": \"$STATUS\", \"inline\": true}
                ],
                \"url\": \"$RUN_URL\",
                \"footer\": {\"text\": \"Daily logos-delivery CI\"}
              }]
            }" \
            "$DISCORD_WEBHOOK_URL"

.github/workflows/ci-nix.yml (new file, 39 lines)

@ -0,0 +1,39 @@
name: ci / nix

permissions:
  contents: read
  pull-requests: read
  checks: write

on:
  pull_request:
    branches: [master]

jobs:
  build:
    strategy:
      fail-fast: false
      matrix:
        system:
          - aarch64-darwin
          - x86_64-linux
        nixpkg:
          - liblogosdelivery
        include:
          - system: aarch64-darwin
            runs_on: [self-hosted, macOS, ARM64]
          - system: x86_64-linux
            runs_on: [self-hosted, Linux, X64]
    name: '${{ matrix.system }} / ${{ matrix.nixpkg }}'
    runs-on: ${{ matrix.runs_on }}
    steps:
      - uses: actions/checkout@v4

      - name: 'Run Nix build for ${{ matrix.nixpkg }}'
        shell: bash
        run: nix build -L '.#${{ matrix.nixpkg }}'

      - name: 'Show result contents'
        shell: bash
        run: find result -type f


@ -14,6 +14,8 @@ env:
   NPROC: 2
   MAKEFLAGS: "-j${NPROC}"
   NIMFLAGS: "--parallelBuild:${NPROC} --colors:off -d:chronicles_colors:none"
+  NIM_VERSION: '2.2.4'
+  NIMBLE_VERSION: '0.22.3'
 jobs:
   changes: # changes detection
@ -30,10 +32,11 @@ jobs:
         filters: |
           common:
             - '.github/workflows/**'
-            - 'vendor/**'
+            - 'nimble.lock'
+            - 'Makefile'
             - 'waku.nimble'
-            - 'Makefile'
             - 'library/**'
+            - 'liblogosdelivery/**'
           v2:
             - 'waku/**'
             - 'apps/**'
@ -63,21 +66,36 @@ jobs:
       - name: Checkout code
         uses: actions/checkout@v4
-      - name: Get submodules hash
-        id: submodules
-        run: |
-          echo "hash=$(git submodule status | awk '{print $1}' | sort | shasum -a 256 | sed 's/[ -]*//g')" >> $GITHUB_OUTPUT
-      - name: Cache submodules
+      - name: Install Nim ${{ env.NIM_VERSION }}
+        uses: jiro4989/setup-nim-action@v2
+        with:
+          nim-version: ${{ env.NIM_VERSION }}
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+      - name: Install Nimble ${{ env.NIMBLE_VERSION }}
+        run: |
+          cd /tmp && nimble install "nimble@${{ env.NIMBLE_VERSION }}" -y
+          echo "$HOME/.nimble/bin" >> $GITHUB_PATH
+      - name: Cache nimble deps
+        id: cache-nimbledeps
         uses: actions/cache@v3
         with:
           path: |
-            vendor/
-            .git/modules
-          key: ${{ runner.os }}-vendor-modules-${{ steps.submodules.outputs.hash }}
+            nimbledeps/
+            nimble.paths
+          key: ${{ runner.os }}-nimbledeps-nimble${{ env.NIMBLE_VERSION }}-${{ hashFiles('nimble.lock', 'BearSSL.mk', 'Nat.mk') }}
+      - name: Install nimble deps
+        if: steps.cache-nimbledeps.outputs.cache-hit != 'true'
+        run: |
+          nimble setup --localdeps -y
+          make rebuild-nat-libs-nimbledeps
+          make rebuild-bearssl-nimbledeps
+          touch nimbledeps/.nimble-setup
       - name: Build binaries
-        run: make V=1 QUICK_AND_DIRTY_COMPILER=1 all tools
+        run: make V=1 all
   build-windows:
     needs: changes
@ -101,18 +119,33 @@ jobs:
       - name: Checkout code
         uses: actions/checkout@v4
-      - name: Get submodules hash
-        id: submodules
-        run: |
-          echo "hash=$(git submodule status | awk '{print $1}' | sort | shasum -a 256 | sed 's/[ -]*//g')" >> $GITHUB_OUTPUT
-      - name: Cache submodules
+      - name: Install Nim ${{ env.NIM_VERSION }}
+        uses: jiro4989/setup-nim-action@v2
+        with:
+          nim-version: ${{ env.NIM_VERSION }}
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+      - name: Install Nimble ${{ env.NIMBLE_VERSION }}
+        run: |
+          cd /tmp && nimble install "nimble@${{ env.NIMBLE_VERSION }}" -y
+          echo "$HOME/.nimble/bin" >> $GITHUB_PATH
+      - name: Cache nimble deps
+        id: cache-nimbledeps
         uses: actions/cache@v3
         with:
           path: |
-            vendor/
-            .git/modules
-          key: ${{ runner.os }}-vendor-modules-${{ steps.submodules.outputs.hash }}
+            nimbledeps/
+            nimble.paths
+          key: ${{ runner.os }}-nimbledeps-nimble${{ env.NIMBLE_VERSION }}-${{ hashFiles('nimble.lock', 'BearSSL.mk', 'Nat.mk') }}
+      - name: Install nimble deps
+        if: steps.cache-nimbledeps.outputs.cache-hit != 'true'
+        run: |
+          nimble setup --localdeps -y
+          make rebuild-nat-libs-nimbledeps
+          make rebuild-bearssl-nimbledeps
+          touch nimbledeps/.nimble-setup
       - name: Run tests
         run: |
@ -126,18 +159,18 @@ jobs:
           export NIMFLAGS="--colors:off -d:chronicles_colors:none"
           export USE_LIBBACKTRACE=0
-          make V=1 LOG_LEVEL=DEBUG QUICK_AND_DIRTY_COMPILER=1 POSTGRES=$postgres_enabled test
-          make V=1 LOG_LEVEL=DEBUG QUICK_AND_DIRTY_COMPILER=1 POSTGRES=$postgres_enabled testwakunode2
+          make V=1 POSTGRES=$postgres_enabled test
+          make V=1 POSTGRES=$postgres_enabled testwakunode2
   build-docker-image:
     needs: changes
     if: ${{ needs.changes.outputs.v2 == 'true' || needs.changes.outputs.common == 'true' || needs.changes.outputs.docker == 'true' }}
-    uses: waku-org/nwaku/.github/workflows/container-image.yml@master
+    uses: ./.github/workflows/container-image.yml
     secrets: inherit
   nwaku-nwaku-interop-tests:
     needs: build-docker-image
-    uses: waku-org/waku-interop-tests/.github/workflows/nim_waku_PR.yml@SMOKE_TEST_0.0.1
+    uses: logos-messaging/logos-delivery-interop-tests/.github/workflows/nim_waku_PR.yml@SMOKE_TEST_STABLE
     with:
       node_nwaku: ${{ needs.build-docker-image.outputs.image }}
@ -145,14 +178,14 @@ jobs:
   js-waku-node:
     needs: build-docker-image
-    uses: waku-org/js-waku/.github/workflows/test-node.yml@master
+    uses: logos-messaging/logos-delivery-js/.github/workflows/test-node.yml@master
     with:
       nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
       test_type: node
   js-waku-node-optional:
     needs: build-docker-image
-    uses: waku-org/js-waku/.github/workflows/test-node.yml@master
+    uses: logos-messaging/logos-delivery-js/.github/workflows/test-node.yml@master
     with:
       nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
       test_type: node-optional
@ -165,18 +198,33 @@ jobs:
       - name: Checkout code
         uses: actions/checkout@v4
-      - name: Get submodules hash
-        id: submodules
-        run: |
-          echo "hash=$(git submodule status | awk '{print $1}' | sort | shasum -a 256 | sed 's/[ -]*//g')" >> $GITHUB_OUTPUT
-      - name: Cache submodules
+      - name: Install Nim ${{ env.NIM_VERSION }}
+        uses: jiro4989/setup-nim-action@v2
+        with:
+          nim-version: ${{ env.NIM_VERSION }}
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+      - name: Install Nimble ${{ env.NIMBLE_VERSION }}
+        run: |
+          cd /tmp && nimble install "nimble@${{ env.NIMBLE_VERSION }}" -y
+          echo "$HOME/.nimble/bin" >> $GITHUB_PATH
+      - name: Cache nimble deps
+        id: cache-nimbledeps
         uses: actions/cache@v3
         with:
           path: |
-            vendor/
-            .git/modules
-          key: ${{ runner.os }}-vendor-modules-${{ steps.submodules.outputs.hash }}
+            nimbledeps/
+            nimble.paths
+          key: ${{ runner.os }}-nimbledeps-nimble${{ env.NIMBLE_VERSION }}-${{ hashFiles('nimble.lock', 'BearSSL.mk', 'Nat.mk') }}
+      - name: Install nimble deps
+        if: steps.cache-nimbledeps.outputs.cache-hit != 'true'
+        run: |
+          nimble setup --localdeps -y
+          make rebuild-nat-libs-nimbledeps
+          make rebuild-bearssl-nimbledeps
+          touch nimbledeps/.nimble-setup
       - name: Build nph
         run: |
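The new cache key in the steps above combines the runner OS, the pinned Nimble version, and a hash of the lock file plus the two `.mk` files. A hypothetical local recomputation, approximating GitHub's `hashFiles()` with `sha256sum` over the concatenated files (missing files are treated as empty here, so this is only an illustration of the key's shape):

```shell
# Hypothetical sketch of the nimbledeps cache-key composition shown above:
#   <runner OS>-nimbledeps-nimble<NIMBLE_VERSION>-<hash of nimble.lock + .mk files>
RUNNER_OS=Linux            # on CI: ${{ runner.os }}
NIMBLE_VERSION=0.22.3      # on CI: ${{ env.NIMBLE_VERSION }}
# Approximate hashFiles('nimble.lock', 'BearSSL.mk', 'Nat.mk'); absent files count as empty.
FILES_HASH=$(cat nimble.lock BearSSL.mk Nat.mk 2>/dev/null | sha256sum | cut -d' ' -f1)
KEY="${RUNNER_OS}-nimbledeps-nimble${NIMBLE_VERSION}-${FILES_HASH}"
echo "$KEY"
```

Because the hash covers `nimble.lock`, any dependency bump invalidates the cache and forces a fresh `nimble setup --localdeps`.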


@ -15,6 +15,8 @@ env:
   NPROC: 2
   MAKEFLAGS: "-j${NPROC}"
   NIMFLAGS: "--parallelBuild:${NPROC}"
+  NIM_VERSION: '2.2.4'
+  NIMBLE_VERSION: '0.22.3'
 # This workflow should not run for outside contributors
 # If org secrets are not available, we'll avoid building and publishing the docker image and we'll pass the workflow
@ -46,27 +48,42 @@ jobs:
         if: ${{ steps.secrets.outcome == 'success' }}
         uses: actions/checkout@v4
-      - name: Get submodules hash
-        id: submodules
+      - name: Install Nim ${{ env.NIM_VERSION }}
+        if: ${{ steps.secrets.outcome == 'success' }}
+        uses: jiro4989/setup-nim-action@v2
+        with:
+          nim-version: ${{ env.NIM_VERSION }}
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+      - name: Install Nimble ${{ env.NIMBLE_VERSION }}
         if: ${{ steps.secrets.outcome == 'success' }}
         run: |
-          echo "hash=$(git submodule status | awk '{print $1}' | sort | shasum -a 256 | sed 's/[ -]*//g')" >> $GITHUB_OUTPUT
+          cd /tmp && nimble install "nimble@${{ env.NIMBLE_VERSION }}" -y
+          echo "$HOME/.nimble/bin" >> $GITHUB_PATH
-      - name: Cache submodules
+      - name: Cache nimble deps
         if: ${{ steps.secrets.outcome == 'success' }}
+        id: cache-nimbledeps
         uses: actions/cache@v3
         with:
           path: |
-            vendor/
-            .git/modules
-          key: ${{ runner.os }}-vendor-modules-${{ steps.submodules.outputs.hash }}
+            nimbledeps/
+            nimble.paths
+          key: ${{ runner.os }}-nimbledeps-nimble${{ env.NIMBLE_VERSION }}-${{ hashFiles('nimble.lock', 'BearSSL.mk', 'Nat.mk') }}
+      - name: Install nimble deps
+        if: ${{ steps.secrets.outcome == 'success' && steps.cache-nimbledeps.outputs.cache-hit != 'true' }}
+        run: |
+          nimble setup --localdeps -y
+          make rebuild-nat-libs-nimbledeps
+          make rebuild-bearssl-nimbledeps
+          touch nimbledeps/.nimble-setup
       - name: Build binaries
         id: build
         if: ${{ steps.secrets.outcome == 'success' }}
         run: |
-          make -j${NPROC} V=1 QUICK_AND_DIRTY_COMPILER=1 NIMFLAGS="-d:disableMarchNative -d:postgres -d:chronicles_colors:none" wakunode2
+          make -j${NPROC} V=1 POSTGRES=1 NIMFLAGS="-d:disableMarchNative -d:chronicles_colors:none" wakunode2
           SHORT_REF=$(git rev-parse --short HEAD)


@@ -63,11 +63,11 @@ jobs:
        run: |
          OS=$([[ "${{runner.os}}" == "macOS" ]] && echo "macosx" || echo "linux")
-          make QUICK_AND_DIRTY_COMPILER=1 V=1 CI=false NIMFLAGS="-d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}}" \
+          make V=1 CI=false NIMFLAGS="-d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}}" \
            update
-          make QUICK_AND_DIRTY_COMPILER=1 V=1 CI=false\
+          make V=1 CI=false POSTGRES=1\
-            NIMFLAGS="-d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}} -d:postgres" \
+            NIMFLAGS="-d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}}" \
            wakunode2\
            chat2\
            tools
@@ -91,14 +91,14 @@ jobs:
  build-docker-image:
    needs: tag-name
-    uses: waku-org/nwaku/.github/workflows/container-image.yml@master
+    uses: logos-messaging/logos-delivery/.github/workflows/container-image.yml@master
    with:
      image_tag: ${{ needs.tag-name.outputs.tag }}
    secrets: inherit
  js-waku-node:
    needs: build-docker-image
-    uses: waku-org/js-waku/.github/workflows/test-node.yml@master
+    uses: logos-messaging/logos-delivery-js/.github/workflows/test-node.yml@master
    with:
      nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
      test_type: node
@@ -106,7 +106,7 @@ jobs:
  js-waku-node-optional:
    needs: build-docker-image
-    uses: waku-org/js-waku/.github/workflows/test-node.yml@master
+    uses: logos-messaging/logos-delivery-js/.github/workflows/test-node.yml@master
    with:
      nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
      test_type: node-optional
@@ -150,7 +150,7 @@ jobs:
            -u $(id -u) \
            docker.io/wakuorg/sv4git:latest \
              release-notes ${RELEASE_NOTES_TAG} --previous $(git tag -l --sort -creatordate | grep -e "^v[0-9]*\.[0-9]*\.[0-9]*$") |\
-              sed -E 's@#([0-9]+)@[#\1](https://github.com/waku-org/nwaku/issues/\1)@g' > release_notes.md
+              sed -E 's@#([0-9]+)@[#\1](https://github.com/logos-messaging/logos-delivery/issues/\1)@g' > release_notes.md
          sed -i "s/^## .*/Generated at $(date)/" release_notes.md


@@ -41,25 +41,130 @@ jobs:
            .git/modules
          key: ${{ runner.os }}-${{matrix.arch}}-submodules-${{ steps.submodules.outputs.hash }}
-      - name: prep variables
+      - name: Get tag
+        id: version
+        run: |
+          # Use full tag, e.g., v0.37.0
+          echo "version=${GITHUB_REF_NAME}" >> $GITHUB_OUTPUT
+      - name: Prep variables
        id: vars
        run: |
-          NWAKU_ARTIFACT_NAME=$(echo "nwaku-${{matrix.arch}}-${{runner.os}}.tar.gz" | tr "[:upper:]" "[:lower:]")
-          echo "nwaku=${NWAKU_ARTIFACT_NAME}" >> $GITHUB_OUTPUT
+          VERSION=${{ steps.version.outputs.version }}
+          NWAKU_ARTIFACT_NAME=$(echo "waku-${{matrix.arch}}-${{runner.os}}.tar.gz" | tr "[:upper:]" "[:lower:]")
+          echo "waku=${NWAKU_ARTIFACT_NAME}" >> $GITHUB_OUTPUT
-      - name: Install dependencies
+          if [[ "${{ runner.os }}" == "Linux" ]]; then
+            LIBWAKU_ARTIFACT_NAME=$(echo "libwaku-${VERSION}-${{matrix.arch}}-${{runner.os}}-linux.deb" | tr "[:upper:]" "[:lower:]")
+          fi
+          if [[ "${{ runner.os }}" == "macOS" ]]; then
+            LIBWAKU_ARTIFACT_NAME=$(echo "libwaku-${VERSION}-${{matrix.arch}}-macos.tar.gz" | tr "[:upper:]" "[:lower:]")
+          fi
+          echo "libwaku=${LIBWAKU_ARTIFACT_NAME}" >> $GITHUB_OUTPUT
+          if [[ "${{ runner.os }}" == "Linux" ]]; then
+            LIBLOGOSDELIVERY_ARTIFACT_NAME=$(echo "liblogosdelivery-${VERSION}-${{matrix.arch}}-${{runner.os}}-linux.deb" | tr "[:upper:]" "[:lower:]")
+          fi
+          if [[ "${{ runner.os }}" == "macOS" ]]; then
+            LIBLOGOSDELIVERY_ARTIFACT_NAME=$(echo "liblogosdelivery-${VERSION}-${{matrix.arch}}-macos.tar.gz" | tr "[:upper:]" "[:lower:]")
+          fi
+          echo "liblogosdelivery=${LIBLOGOSDELIVERY_ARTIFACT_NAME}" >> $GITHUB_OUTPUT
+      - name: Install build dependencies
+        run: |
+          if [[ "${{ runner.os }}" == "Linux" ]]; then
+            sudo apt-get update && sudo apt-get install -y build-essential dpkg-dev
+          fi
+      - name: Build Waku artifacts
        run: |
          OS=$([[ "${{runner.os}}" == "macOS" ]] && echo "macosx" || echo "linux")
          make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}}" V=1 update
-          make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}} -d:postgres" CI=false wakunode2
+          make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}}" POSTGRES=1 CI=false wakunode2
          make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}}" CI=false chat2
-          tar -cvzf ${{steps.vars.outputs.nwaku}} ./build/
+          tar -cvzf ${{steps.vars.outputs.waku}} ./build/
-      - name: Upload asset
+          make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}}" POSTGRES=1 CI=false libwaku
+          make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}}" POSTGRES=1 CI=false STATIC=1 libwaku
+          make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}}" POSTGRES=1 CI=false liblogosdelivery
+          make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}}" POSTGRES=1 CI=false STATIC=1 liblogosdelivery
+      - name: Create distributable libwaku package
+        run: |
+          VERSION=${{ steps.version.outputs.version }}
+          if [[ "${{ runner.os }}" == "Linux" ]]; then
+            rm -rf pkg
+            mkdir -p pkg/DEBIAN pkg/usr/local/lib pkg/usr/local/include
+            cp build/libwaku.so pkg/usr/local/lib/
+            cp build/libwaku.a pkg/usr/local/lib/
+            cp library/libwaku.h pkg/usr/local/include/
+            echo "Package: waku" >> pkg/DEBIAN/control
+            echo "Version: ${VERSION}" >> pkg/DEBIAN/control
+            echo "Priority: optional" >> pkg/DEBIAN/control
+            echo "Section: libs" >> pkg/DEBIAN/control
+            echo "Architecture: ${{matrix.arch}}" >> pkg/DEBIAN/control
+            echo "Maintainer: Waku Team <ivansete@status.im>" >> pkg/DEBIAN/control
+            echo "Description: Waku library" >> pkg/DEBIAN/control
+            dpkg-deb --build pkg ${{steps.vars.outputs.libwaku}}
+          fi
+          if [[ "${{ runner.os }}" == "macOS" ]]; then
+            tar -cvzf ${{steps.vars.outputs.libwaku}} ./build/libwaku.dylib ./build/libwaku.a ./library/libwaku.h
+          fi
+      - name: Create distributable liblogosdelivery package
+        run: |
+          VERSION=${{ steps.version.outputs.version }}
+          if [[ "${{ runner.os }}" == "Linux" ]]; then
+            rm -rf pkg
+            mkdir -p pkg/DEBIAN pkg/usr/local/lib pkg/usr/local/include
+            cp build/liblogosdelivery.so pkg/usr/local/lib/
+            cp build/liblogosdelivery.a pkg/usr/local/lib/
+            cp liblogosdelivery/liblogosdelivery.h pkg/usr/local/include/
+            echo "Package: logosdelivery" >> pkg/DEBIAN/control
+            echo "Version: ${VERSION}" >> pkg/DEBIAN/control
+            echo "Priority: optional" >> pkg/DEBIAN/control
+            echo "Section: libs" >> pkg/DEBIAN/control
+            echo "Architecture: ${{matrix.arch}}" >> pkg/DEBIAN/control
+            echo "Maintainer: Logos Messaging Team" >> pkg/DEBIAN/control
+            echo "Description: Logos Delivery library" >> pkg/DEBIAN/control
+            dpkg-deb --build pkg ${{steps.vars.outputs.liblogosdelivery}}
+          fi
+          if [[ "${{ runner.os }}" == "macOS" ]]; then
+            tar -cvzf ${{steps.vars.outputs.liblogosdelivery}} ./build/liblogosdelivery.dylib ./build/liblogosdelivery.a ./liblogosdelivery/liblogosdelivery.h
+          fi
-      - name: Upload asset
+      - name: Upload waku artifact
        uses: actions/upload-artifact@v4.4.0
        with:
-          name: ${{steps.vars.outputs.nwaku}}
-          path: ${{steps.vars.outputs.nwaku}}
+          name: waku-${{ steps.version.outputs.version }}-${{ matrix.arch }}-${{ runner.os }}
+          path: ${{ steps.vars.outputs.waku }}
+          if-no-files-found: error
+      - name: Upload libwaku artifact
+        uses: actions/upload-artifact@v4.4.0
+        with:
+          name: libwaku-${{ steps.version.outputs.version }}-${{ matrix.arch }}-${{ runner.os }}
+          path: ${{ steps.vars.outputs.libwaku }}
+          if-no-files-found: error
+      - name: Upload liblogosdelivery artifact
+        uses: actions/upload-artifact@v4.4.0
+        with:
+          name: liblogosdelivery-${{ steps.version.outputs.version }}-${{ matrix.arch }}-${{ runner.os }}
+          path: ${{ steps.vars.outputs.liblogosdelivery }}
          if-no-files-found: error


@@ -7,6 +7,11 @@ on:
        required: true
        type: string
+env:
+  NPROC: 4
+  NIM_VERSION: '2.2.4'
+  NIMBLE_VERSION: '0.22.3'
jobs:
  build:
    runs-on: windows-latest
@@ -20,7 +25,7 @@ jobs:
    steps:
      - name: Checkout code
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
      - name: Setup MSYS2
        uses: msys2/setup-msys2@v2
@@ -33,6 +38,7 @@ jobs:
            make
            cmake
            upx
+            unzip
            mingw-w64-x86_64-rust
            mingw-w64-x86_64-postgresql
            mingw-w64-x86_64-gcc
@@ -44,6 +50,12 @@ jobs:
            mingw-w64-x86_64-cmake
            mingw-w64-x86_64-llvm
            mingw-w64-x86_64-clang
+            mingw-w64-x86_64-nasm
+      - name: Manually install nasm
+        run: |
+          bash scripts/install_nasm_in_windows.sh
+          source $HOME/.bashrc
      - name: Add UPX to PATH
        run: |
@@ -54,39 +66,49 @@ jobs:
      - name: Verify dependencies
        run: |
-          which upx gcc g++ make cmake cargo rustc python
+          which upx gcc g++ make cmake cargo rustc python nasm
-      - name: Updating submodules
-        run: git submodule update --init --recursive
+      - name: Install Nim ${{ env.NIM_VERSION }}
+        uses: jiro4989/setup-nim-action@v2
+        with:
+          nim-version: ${{ env.NIM_VERSION }}
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+      - name: Install Nimble ${{ env.NIMBLE_VERSION }}
+        run: |
+          export PATH="$GITHUB_WORKSPACE/.nim_runtime/bin:$PATH"
+          cd /tmp && nimble install "nimble@${{ env.NIMBLE_VERSION }}" -y
+          echo "$HOME/.nimble/bin" >> $GITHUB_PATH
+      - name: Patch nimble.lock for Windows nim checksum
+        # nimble.exe uses Windows Git (core.autocrlf=true by default), which converts LF→CRLF
+        # on checkout. This changes the SHA1 of the nim package source tree relative to the
+        # Linux-computed checksum stored in nimble.lock. Patch the lock file with the
+        # Windows-computed checksum before nimble reads it.
+        run: |
+          sed -i 's/68bb85cbfb1832ce4db43943911b046c3af3caab/a092a045d3a427d127a5334a6e59c76faff54686/g' nimble.lock
+      - name: Install nimble deps
+        if: steps.cache-nimbledeps.outputs.cache-hit != 'true'
+        run: |
+          export PATH="$GITHUB_WORKSPACE/.nim_runtime/bin:$HOME/.nimble/bin:$PATH"
+          nimble setup --localdeps -y
+          make rebuild-nat-libs-nimbledeps CC=gcc
+          make rebuild-bearssl-nimbledeps CC=gcc
+          touch nimbledeps/.nimble-setup
      - name: Creating tmp directory
        run: mkdir -p tmp
-      - name: Building Nim
-        run: |
-          cd vendor/nimbus-build-system/vendor/Nim
-          ./build_all.bat
-          cd ../../../..
-      - name: Building miniupnpc
-        run: |
-          cd vendor/nim-nat-traversal/vendor/miniupnp/miniupnpc
-          make -f Makefile.mingw CC=gcc CXX=g++ libminiupnpc.a V=1
-          cd ../../../../..
-      - name: Building libnatpmp
-        run: |
-          cd ./vendor/nim-nat-traversal/vendor/libnatpmp-upstream
-          make CC="gcc -fPIC -D_WIN32_WINNT=0x0600 -DNATPMP_STATICLIB" libnatpmp.a V=1
-          cd ../../../../
      - name: Building wakunode2.exe
        run: |
-          make wakunode2 LOG_LEVEL=DEBUG V=3 -j8
+          export PATH="$GITHUB_WORKSPACE/.nim_runtime/bin:$HOME/.nimble/bin:$PATH"
+          make wakunode2 V=3 -j${{ env.NPROC }}
      - name: Building libwaku.dll
        run: |
-          make libwaku STATIC=0 LOG_LEVEL=DEBUG V=1 -j
+          export PATH="$GITHUB_WORKSPACE/.nim_runtime/bin:$HOME/.nimble/bin:$PATH"
+          make libwaku STATIC=0 V=1 -j
      - name: Check Executable
        run: |

.gitignore

@@ -3,9 +3,6 @@
# Executables shall be put in an ignored build/ directory
/build
-# Nimble packages
-/vendor/.nimble
# Generated Files
*.generated.nim
@@ -45,9 +42,6 @@ node_modules/
rlnKeystore.json
*.tar.gz
-# Nimbus Build System
-nimbus-build-system.paths
# sqlite db
*.db
*.db-shm
@@ -59,6 +53,10 @@ nimbus-build-system.paths
/examples/nodejs/build/
/examples/rust/target/
+# Xcode user data
+xcuserdata/
+*.xcuserstate
# Coverage
coverage_html_report/
@@ -79,3 +77,11 @@ waku_handler.moc.cpp
# Nix build result
result
+# llms
+AGENTS.md
+nimble.develop
+nimble.paths
+nimbledeps
+**/anvil_state/state-deployed-contracts-mint-and-approved.json

.gitmodules

@@ -1,190 +1,10 @@
[submodule "vendor/nim-eth"]
path = vendor/nim-eth
url = https://github.com/status-im/nim-eth.git
ignore = dirty
branch = master
[submodule "vendor/nim-secp256k1"]
path = vendor/nim-secp256k1
url = https://github.com/status-im/nim-secp256k1.git
ignore = dirty
branch = master
[submodule "vendor/nim-libp2p"]
path = vendor/nim-libp2p
url = https://github.com/vacp2p/nim-libp2p.git
ignore = dirty
branch = master
[submodule "vendor/nim-stew"]
path = vendor/nim-stew
url = https://github.com/status-im/nim-stew.git
ignore = dirty
branch = master
[submodule "vendor/nimbus-build-system"]
path = vendor/nimbus-build-system
url = https://github.com/status-im/nimbus-build-system.git
ignore = dirty
branch = master
[submodule "vendor/nim-nat-traversal"]
path = vendor/nim-nat-traversal
url = https://github.com/status-im/nim-nat-traversal.git
ignore = dirty
branch = master
[submodule "vendor/nim-libbacktrace"]
path = vendor/nim-libbacktrace
url = https://github.com/status-im/nim-libbacktrace.git
ignore = dirty
branch = master
[submodule "vendor/nim-confutils"]
path = vendor/nim-confutils
url = https://github.com/status-im/nim-confutils.git
ignore = dirty
branch = master
[submodule "vendor/nim-chronicles"]
path = vendor/nim-chronicles
url = https://github.com/status-im/nim-chronicles.git
ignore = dirty
branch = master
[submodule "vendor/nim-faststreams"]
path = vendor/nim-faststreams
url = https://github.com/status-im/nim-faststreams.git
ignore = dirty
branch = master
[submodule "vendor/nim-chronos"]
path = vendor/nim-chronos
url = https://github.com/status-im/nim-chronos.git
ignore = dirty
branch = master
[submodule "vendor/nim-json-serialization"]
path = vendor/nim-json-serialization
url = https://github.com/status-im/nim-json-serialization.git
ignore = dirty
branch = master
[submodule "vendor/nim-serialization"]
path = vendor/nim-serialization
url = https://github.com/status-im/nim-serialization.git
ignore = dirty
branch = master
[submodule "vendor/nimcrypto"]
path = vendor/nimcrypto
url = https://github.com/cheatfate/nimcrypto.git
ignore = dirty
branch = master
[submodule "vendor/nim-metrics"]
path = vendor/nim-metrics
url = https://github.com/status-im/nim-metrics.git
ignore = dirty
branch = master
[submodule "vendor/nim-stint"]
path = vendor/nim-stint
url = https://github.com/status-im/nim-stint.git
ignore = dirty
branch = master
[submodule "vendor/nim-json-rpc"]
path = vendor/nim-json-rpc
url = https://github.com/status-im/nim-json-rpc.git
ignore = dirty
branch = master
[submodule "vendor/nim-http-utils"]
path = vendor/nim-http-utils
url = https://github.com/status-im/nim-http-utils.git
ignore = dirty
branch = master
[submodule "vendor/nim-bearssl"]
path = vendor/nim-bearssl
url = https://github.com/status-im/nim-bearssl.git
ignore = dirty
branch = master
[submodule "vendor/nim-sqlite3-abi"]
path = vendor/nim-sqlite3-abi
url = https://github.com/arnetheduck/nim-sqlite3-abi.git
ignore = dirty
branch = master
[submodule "vendor/nim-web3"]
path = vendor/nim-web3
url = https://github.com/status-im/nim-web3.git
[submodule "vendor/nim-testutils"]
path = vendor/nim-testutils
url = https://github.com/status-im/nim-testutils.git
ignore = untracked
branch = master
[submodule "vendor/nim-unittest2"]
path = vendor/nim-unittest2
url = https://github.com/status-im/nim-unittest2.git
ignore = untracked
branch = master
[submodule "vendor/nim-websock"]
path = vendor/nim-websock
url = https://github.com/status-im/nim-websock.git
ignore = untracked
branch = main
[submodule "vendor/nim-zlib"]
path = vendor/nim-zlib
url = https://github.com/status-im/nim-zlib.git
ignore = untracked
branch = master
[submodule "vendor/nim-dnsdisc"]
path = vendor/nim-dnsdisc
url = https://github.com/status-im/nim-dnsdisc.git
ignore = untracked
branch = main
[submodule "vendor/dnsclient.nim"]
path = vendor/dnsclient.nim
url = https://github.com/ba0f3/dnsclient.nim.git
ignore = untracked
branch = master
[submodule "vendor/nim-toml-serialization"]
path = vendor/nim-toml-serialization
url = https://github.com/status-im/nim-toml-serialization.git
[submodule "vendor/nim-presto"]
path = vendor/nim-presto
url = https://github.com/status-im/nim-presto.git
ignore = untracked
branch = master
[submodule "vendor/zerokit"]
	path = vendor/zerokit
	url = https://github.com/vacp2p/zerokit.git
	ignore = dirty
	branch = v0.5.1
[submodule "vendor/nim-regex"]
path = vendor/nim-regex
url = https://github.com/nitely/nim-regex.git
ignore = untracked
branch = master
[submodule "vendor/nim-unicodedb"]
path = vendor/nim-unicodedb
url = https://github.com/nitely/nim-unicodedb.git
ignore = untracked
branch = master
[submodule "vendor/nim-taskpools"]
path = vendor/nim-taskpools
url = https://github.com/status-im/nim-taskpools.git
ignore = untracked
branch = stable
[submodule "vendor/nim-results"]
ignore = untracked
branch = master
path = vendor/nim-results
url = https://github.com/arnetheduck/nim-results.git
[submodule "vendor/db_connector"]
path = vendor/db_connector
url = https://github.com/nim-lang/db_connector.git
ignore = untracked
branch = devel
[submodule "vendor/nph"]
ignore = untracked
branch = master
path = vendor/nph
url = https://github.com/arnetheduck/nph.git
[submodule "vendor/nim-minilru"]
path = vendor/nim-minilru
url = https://github.com/status-im/nim-minilru.git
ignore = untracked
branch = master
[submodule "vendor/waku-rlnv2-contract"]
	path = vendor/waku-rlnv2-contract
-	url = https://github.com/waku-org/waku-rlnv2-contract.git
+	url = https://github.com/logos-messaging/waku-rlnv2-contract.git
	ignore = untracked
	branch = master
[submodule "vendor/mix"]
path = vendor/mix
url = https://github.com/vacp2p/mix/
branch = main

.nph.toml (new file)

@@ -0,0 +1,4 @@
extend-exclude = [
"vendor",
"nimbledeps",
]

AGENTS.md (new file)

@@ -0,0 +1,509 @@
# AGENTS.md - AI Coding Context
This file provides essential context for LLMs assisting with Logos Messaging development.
## Project Identity
Logos Messaging is designed as a shared public network for generalized messaging, not application-specific infrastructure.
This project is a Nim implementation of a libp2p protocol suite for private, censorship-resistant P2P messaging. It targets resource-restricted devices and privacy-preserving communication.
Logos Messaging was formerly known as Waku. Waku-related terminology remains within the codebase for historical reasons.
### Design Philosophy
Key architectural decisions:
Resource-restricted first: Protocols differentiate between full nodes (relay) and light clients (filter, lightpush, store). Light clients can participate without maintaining full message history or relay capabilities. This explains the client/server split in protocol implementations.
Privacy through unlinkability: RLN (Rate Limiting Nullifier) provides DoS protection while preserving sender anonymity. Messages are routed through pubsub topics with automatic sharding across 8 shards. Code prioritizes metadata privacy alongside content encryption.
Scalability via sharding: The network uses automatic content-topic-based sharding to distribute traffic. This is why you'll see sharding logic throughout the codebase and why pubsub topic selection is protocol-level, not application-level.
See [documentation](https://docs.waku.org/learn/) for architectural details.
### Core Protocols
- Relay: Pub/sub message routing using GossipSub
- Store: Historical message retrieval and persistence
- Filter: Lightweight message filtering for resource-restricted clients
- Lightpush: Lightweight message publishing for clients
- Peer Exchange: Peer discovery mechanism
- RLN Relay: Rate limiting nullifier for spam protection
- Metadata: Cluster and shard metadata exchange between peers
- Mix: Mixnet protocol for enhanced privacy through onion routing
- Rendezvous: Alternative peer discovery mechanism
### Key Terminology
- ENR (Ethereum Node Record): Node identity and capability advertisement
- Multiaddr: libp2p addressing format (e.g., `/ip4/127.0.0.1/tcp/60000/p2p/16Uiu2...`)
- PubsubTopic: Gossipsub topic for message routing (e.g., `/waku/2/default-waku/proto`)
- ContentTopic: Application-level message categorization (e.g., `/my-app/1/chat/proto`)
- Sharding: Partitioning network traffic across topics (static or auto-sharding)
- RLN (Rate Limiting Nullifier): Zero-knowledge proof system for spam prevention
### Specifications
All specs are at [rfc.vac.dev/waku](https://rfc.vac.dev/waku). RFCs use `WAKU2-XXX` format (not legacy `WAKU-XXX`).
## Architecture
### Protocol Module Pattern
Each protocol typically follows this structure:
```
waku_<protocol>/
├── protocol.nim # Main protocol type and handler logic
├── client.nim # Client-side API
├── rpc.nim # RPC message types
├── rpc_codec.nim # Protobuf encoding/decoding
├── common.nim # Shared types and constants
└── protocol_metrics.nim # Prometheus metrics
```
### WakuNode Architecture
- WakuNode (`waku/node/waku_node.nim`) is the central orchestrator
- Protocols are "mounted" onto the node's switch (libp2p component)
- PeerManager handles peer selection and connection management
- Switch provides libp2p transport, security, and multiplexing
Example protocol type definition:
```nim
type WakuFilter* = ref object of LPProtocol
subscriptions*: FilterSubscriptions
peerManager: PeerManager
messageCache: TimedCache[string]
```
## Development Essentials
### Build Requirements
- Nim 2.x (check `waku.nimble` for minimum version)
- Rust toolchain (required for RLN dependencies)
- Build system: Make with nimbus-build-system
### Build System
The project uses Makefile with nimbus-build-system (Status's Nim build framework):
```bash
# Initial build (updates submodules)
make wakunode2
# After git pull, update submodules
make update
# Build with custom flags
make wakunode2 NIMFLAGS="-d:chronicles_log_level=DEBUG"
```
Note: The build system uses `--mm:refc` memory management (automatically enforced). Only relevant if compiling outside the standard build system.
### Common Make Targets
```bash
make wakunode2 # Build main node binary
make test # Run all tests
make testcommon # Run common tests only
make libwakuStatic # Build static C library
make chat2 # Build chat example
make install-nph # Install git hook for auto-formatting
```
### Testing
```bash
# Run all tests
make test
# Run specific test file
make test tests/test_waku_enr.nim
# Run specific test case from file
make test tests/test_waku_enr.nim "check capabilities support"
# Build and run test separately (for development iteration)
make test tests/test_waku_enr.nim
```
Test structure uses `testutils/unittests`:
```nim
import testutils/unittests
suite "Waku ENR - Capabilities":
test "check capabilities support":
## Given
let bitfield: CapabilitiesBitfield = 0b0000_1101u8
## Then
check:
bitfield.supportsCapability(Capabilities.Relay)
not bitfield.supportsCapability(Capabilities.Store)
```
### Code Formatting
Mandatory: All code must be formatted with `nph` (vendored in `vendor/nph`)
```bash
# Format specific file
make nph/waku/waku_core.nim
# Install git pre-commit hook (auto-formats on commit)
make install-nph
```
The nph formatter handles all formatting details automatically, especially with the pre-commit hook installed. Focus on semantic correctness.
### Logging
Uses `chronicles` library with compile-time configuration:
```nim
import chronicles
logScope:
topics = "waku lightpush"
info "handling request", peerId = peerId, topic = pubsubTopic
error "request failed", error = msg
```
Compile with log level:
```bash
nim c -d:chronicles_log_level=TRACE myfile.nim
```
## Code Conventions
Common pitfalls:
- Always handle Result types explicitly
- Avoid global mutable state: Pass state through parameters
- Keep functions focused: Under 50 lines when possible
- Prefer compile-time checks (`static assert`) over runtime checks
### Naming
- Files/Directories: `snake_case` (e.g., `waku_lightpush`, `peer_manager`)
- Procedures: `camelCase` (e.g., `handleRequest`, `pushMessage`)
- Types: `PascalCase` (e.g., `WakuFilter`, `PubsubTopic`)
- Constants: `PascalCase` (e.g., `MaxContentTopicsPerRequest`)
- Constructors: `func init(T: type Xxx, params): T`
- For ref types: `func new(T: type Xxx, params): ref T`
- Exceptions: `XxxError` for CatchableError, `XxxDefect` for Defect
- ref object types: `XxxRef` suffix
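A minimal sketch of these conventions (the `Config`/`ConfigRef` names are hypothetical, not types from this codebase):
```nim
type
  Config = object         # PascalCase type, value semantics
    maxPeers: int
  ConfigRef = ref Config  # ref type carries the Ref suffix

func init(T: type Config, maxPeers: int): T =
  ## `init` constructor convention for value types
  Config(maxPeers: maxPeers)

func new(T: type ConfigRef, maxPeers: int): T =
  ## `new` constructor convention for ref types
  ConfigRef(maxPeers: maxPeers)
```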
### Imports Organization
Group imports: stdlib, external libs, internal modules:
```nim
import
std/[options, sequtils], # stdlib
results, chronicles, chronos, # external
libp2p/peerid
import
../node/peer_manager, # internal (separate import block)
../waku_core,
./common
```
### Async Programming
Uses chronos, not stdlib `asyncdispatch`:
```nim
proc handleRequest(
wl: WakuLightPush, peerId: PeerId
): Future[WakuLightPushResult] {.async.} =
let res = await wl.pushHandler(peerId, pubsubTopic, message)
return res
```
### Error Handling
The project uses both Result types and exceptions:
Result types from nim-results are used for protocol and API-level errors:
```nim
proc subscribe(
wf: WakuFilter, peerId: PeerID
): Future[FilterSubscribeResult] {.async.} =
if contentTopics.len > MaxContentTopicsPerRequest:
return err(FilterSubscribeError.badRequest("exceeds maximum"))
# Handle Result with isOkOr
(await wf.subscriptions.addSubscription(peerId, criteria)).isOkOr:
return err(FilterSubscribeError.serviceUnavailable(error))
ok()
```
Exceptions still used for:
- chronos async failures (CancelledError, etc.)
- Database/system errors
- Library interop
Most files start with `{.push raises: [].}`, which requires that procs raise no unlisted exceptions; legacy raising code is then wrapped in try/except blocks where needed.
### Pragma Usage
```nim
{.push raises: [].} # Require explicit exception annotations (at file top)
proc myProc(): Result[T, E] {.async.} = # Async proc
```
### Protocol Inheritance
Protocols inherit from libp2p's `LPProtocol`:
```nim
type WakuLightPush* = ref object of LPProtocol
rng*: ref rand.HmacDrbgContext
peerManager*: PeerManager
pushHandler*: PushMessageHandler
```
### Type Visibility
- Public exports use `*` suffix: `type WakuFilter* = ...`
- Fields without `*` are module-private
## Style Guide Essentials
This section summarizes key Nim style guidelines relevant to this project. Full guide: https://status-im.github.io/nim-style-guide/
### Language Features
Import and Export
- Use explicit import paths with std/ prefix for stdlib
- Group imports: stdlib, external, internal (separate blocks)
- Export modules whose types appear in public API
- Avoid include
Macros and Templates
- Avoid macros and templates - prefer simple constructs
- Avoid generating public API with macros
- Put logic in templates, use macros only for glue code
Object Construction
- Prefer Type(field: value) syntax
- Use Type.init(params) convention for constructors
- Default zero-initialization should be valid state
- Avoid using result variable for construction
ref object Types
- Avoid ref object unless needed for:
- Resource handles requiring reference semantics
- Shared ownership
- Reference-based data structures (trees, lists)
- Stable pointer for FFI
- Use explicit ref MyType where possible
- Name ref object types with Ref suffix: XxxRef
Memory Management
- Prefer stack-based and statically sized types in core code
- Use heap allocation in glue layers
- Avoid alloca
- For FFI: use create/dealloc or createShared/deallocShared
Variable Usage
- Use most restrictive of const, let, var (prefer const over let over var)
- Prefer expressions for initialization over var then assignment
- Avoid result variable - use explicit return or expression-based returns
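For example, a sketch of preferring `const`/`let` and expression-based initialization (names are illustrative):
```nim
const MaxRetries = 3  # most restrictive: known at compile time

let retries = 1       # runtime value that never changes: `let`, not `var`

# Initialize via an expression instead of declaring `var mode`
# and assigning in each branch.
let mode =
  if retries < MaxRetries: "retry"
  else: "fail"
```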
Functions
- Prefer func over proc
- Avoid public (*) symbols not part of intended API
- Prefer openArray over seq for function parameters
Methods (runtime polymorphism)
- Avoid method keyword for dynamic dispatch
- Prefer manual vtable with proc closures for polymorphism
- Methods lack support for generics
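A manual vtable of proc closures can be sketched like this (the `Transport` type and its fields are illustrative, not an existing API in this repo):
```nim
type Transport = object
  # "vtable": fields hold closures instead of using `method` dispatch
  send: proc(data: seq[byte]) {.raises: [], gcsafe.}
  close: proc() {.raises: [], gcsafe.}

proc newNoopTransport(): Transport =
  ## Each concrete "implementation" fills in its own closures.
  Transport(
    send: proc(data: seq[byte]) {.raises: [], gcsafe.} = discard,
    close: proc() {.raises: [], gcsafe.} = discard,
  )
```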
Miscellaneous
- Annotate callback proc types with {.raises: [], gcsafe.}
- Avoid explicit {.inline.} pragma
- Avoid converters
- Avoid finalizers
Type Guidelines
Binary Data
- Use byte for binary data
- Use seq[byte] for dynamic arrays
- Convert string to seq[byte] early if stdlib returns binary as string
Integers
- Prefer signed (int, int64) for counting, lengths, indexing
- Use unsigned with explicit size (uint8, uint64) for binary data, bit ops
- Avoid Natural
- Check ranges before converting to int
- Avoid casting pointers to int
- Avoid range types
Strings
- Use string for text
- Use seq[byte] for binary data instead of string
### Error Handling
Philosophy
- Prefer Result, Opt for explicit error handling
- Use Exceptions only for legacy code compatibility
Result Types
- Use Result[T, E] for operations that can fail
- Use cstring for simple error messages: Result[T, cstring]
- Use enum for errors needing differentiation: Result[T, SomeErrorEnum]
- Use Opt[T] for simple optional values
- Annotate all modules: {.push raises: [].} at top
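As an illustration of an enum-typed error the caller can act on (the `StoreError` enum is hypothetical):
```nim
{.push raises: [].}

import results

type StoreError = enum
  seNotFound
  seCorrupted

proc lookup(key: string): Result[string, StoreError] =
  ## enum error lets the caller branch on the specific failure
  if key.len == 0:
    return err(seNotFound)
  ok("value")
```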
Exceptions (when unavoidable)
- Inherit from CatchableError, name XxxError
- Use Defect for panics/logic errors, name XxxDefect
- Annotate functions explicitly: {.raises: [SpecificError].}
- Catch specific error types, avoid catching CatchableError
- Use expression-based try blocks
- Isolate legacy exception code with try/except, convert to Result
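A sketch of isolating raising stdlib code behind a Result boundary, using an expression-based try block (`parsePort` is a hypothetical helper):
```nim
import std/strutils
import results

proc parsePort(s: string): Result[int, cstring] =
  try:
    ok(parseInt(s))  # parseInt raises ValueError on bad input
  except ValueError:
    err(cstring("invalid port"))
```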
Common Defect Sources
- Overflow in signed arithmetic
- Array/seq indexing with []
- Implicit range type conversions
Status Codes
- Avoid status code pattern
- Use Result instead
### Library Usage
Standard Library
- Use judiciously, prefer focused packages
- Prefer these replacements:
- async: chronos
- bitops: stew/bitops2
- endians: stew/endians2
- exceptions: results
- io: stew/io2
Results Library
- Use cstring errors for diagnostics without differentiation
- Use enum errors when caller needs to act on specific errors
- Use complex types when additional error context needed
- Use isOkOr pattern for chaining
Wrappers (C/FFI)
- Prefer native Nim when available
- For C libraries: use {.compile.} to build from source
- Create xxx_abi.nim for raw ABI wrapper
- Avoid C++ libraries
Miscellaneous
- Print hex output in lowercase, accept both cases
### Common Pitfalls
- Defects lack tracking by {.raises.}
- nil ref causes runtime crashes
- result variable disables branch checking
- Exception hierarchy unclear between Nim versions
- Range types have compiler bugs
- Finalizers infect all instances of type
## Common Workflows
### Adding a New Protocol
1. Create directory: `waku/waku_myprotocol/`
2. Define core files:
- `rpc.nim` - Message types
- `rpc_codec.nim` - Protobuf encoding
- `protocol.nim` - Protocol handler
- `client.nim` - Client API
- `common.nim` - Shared types
3. Define protocol type in `protocol.nim`:
```nim
type WakuMyProtocol* = ref object of LPProtocol
  peerManager: PeerManager
  # ... fields
```
4. Implement request handler
5. Mount in WakuNode (`waku/node/waku_node.nim`)
6. Add tests in `tests/waku_myprotocol/`
7. Export module via `waku/waku_myprotocol.nim`
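Steps 3 and 4 together might look like the following sketch. `MyRequest`, `MyResponse`, their codecs, and `MaxRpcSize` are hypothetical placeholders following the file layout above, not actual project APIs:

```nim
# protocol.nim (sketch) -- hypothetical types and constants
import chronos, chronicles, libp2p, results

const
  WakuMyProtocolCodec* = "/vac/waku/myprotocol/1.0.0"
  MaxRpcSize = 10 * 1024 * 1024

type WakuMyProtocol* = ref object of LPProtocol
  peerManager: PeerManager

proc new*(T: type WakuMyProtocol, peerManager: PeerManager): T =
  let wp = WakuMyProtocol(peerManager: peerManager)
  wp.handler = proc(conn: Connection, proto: string) {.async.} =
    # Read a length-prefixed request, decode it, and reply
    let buf = await conn.readLp(MaxRpcSize)
    let req = MyRequest.decode(buf).valueOr:
      error "failed to decode request", remote = conn.peerId
      return
    await conn.writeLp(MyResponse(ok: true).encode().buffer)
  wp.codec = WakuMyProtocolCodec
  wp
```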
### Adding a REST API Endpoint
1. Define handler in `waku/rest_api/endpoint/myprotocol/`
2. Implement endpoint following pattern:
```nim
proc installMyProtocolApiHandlers*(
    router: var RestRouter, node: WakuNode
) =
  router.api(MethodGet, "/waku/v2/myprotocol/endpoint") do () -> RestApiResponse:
    # Implementation
    return RestApiResponse.jsonResponse(data, status = Http200)
```
3. Register in `waku/rest_api/handlers.nim`
### Adding Database Migration
For message_store (SQLite):
1. Create `migrations/message_store/NNNNN_description.up.sql`
2. Create corresponding `.down.sql` for rollback
3. Increment version number sequentially
4. Test migration locally before committing
For PostgreSQL: add in `migrations/message_store_postgres/`
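For example, a hypothetical up/down pair for steps 1-2 (file names and schema are illustrative only):

```sql
-- migrations/message_store/00010_add_meta.up.sql  (hypothetical)
ALTER TABLE message ADD COLUMN meta BLOB;

-- migrations/message_store/00010_add_meta.down.sql  (rollback)
ALTER TABLE message DROP COLUMN meta;
```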
### Running Single Test During Development
```bash
# Build test binary
make test tests/waku_filter_v2/test_waku_client.nim
# Binary location
./build/tests/waku_filter_v2/test_waku_client.nim.bin
# Or combine
make test tests/waku_filter_v2/test_waku_client.nim "specific test name"
```
### Debugging with Chronicles
Set log level and filter topics:
```bash
nim c -r \
  -d:chronicles_log_level=TRACE \
  -d:chronicles_disabled_topics="eth,dnsdisc" \
  tests/mytest.nim
```
## Key Constraints
### Vendor Directory
- Never edit files directly in `vendor/`; it is auto-generated from git submodules
- Always run `make update` after pulling changes
- Managed by `nimbus-build-system`
### Chronicles Performance
- Log levels are configured at compile time for performance
- Runtime filtering is available but should be used sparingly: `-d:chronicles_runtime_filtering=on`
- Default sinks are optimized for production
### Memory Management
- Uses `refc` (reference counting with cycle collection)
- Automatically enforced by the build system (hardcoded in `waku.nimble`)
- Do not override unless absolutely necessary, as it breaks compatibility
### RLN Dependencies
- RLN code requires a Rust toolchain, which explains Rust imports in some modules
- Pre-built `librln` libraries are checked into the repository
## Quick Reference
Language: Nim 2.x | License: MIT or Apache 2.0
### Important Files
- `Makefile` - Primary build interface
- `waku.nimble` - Package definition and build tasks (called via nimbus-build-system)
- `vendor/nimbus-build-system/` - Status's build framework
- `waku/node/waku_node.nim` - Core node implementation
- `apps/wakunode2/wakunode2.nim` - Main CLI application
- `waku/factory/waku_conf.nim` - Configuration types
- `library/libwaku.nim` - C bindings entry point
### Testing Entry Points
- `tests/all_tests_waku.nim` - All Waku protocol tests
- `tests/all_tests_wakunode2.nim` - Node application tests
- `tests/all_tests_common.nim` - Common utilities tests
### Key Dependencies
- `chronos` - Async framework
- `nim-results` - Result type for error handling
- `chronicles` - Logging
- `libp2p` - P2P networking
- `confutils` - CLI argument parsing
- `presto` - REST server
- `nimcrypto` - Cryptographic primitives
Note: For specific version requirements, check `waku.nimble`.

BearSSL.mk (new file):
# Copyright (c) 2022 Status Research & Development GmbH. Licensed under
# either of:
# - Apache License, version 2.0
# - MIT license
# at your option. This file may not be copied, modified, or distributed except
# according to those terms.
###########################
## bearssl (nimbledeps) ##
###########################
# Rebuilds libbearssl.a from the package installed by nimble under
# nimbledeps/pkgs2/. Used by `make update` / $(NIMBLEDEPS_STAMP).
#
# BEARSSL_NIMBLEDEPS_DIR is evaluated at parse time, so targets that
# depend on it must be invoked via a recursive $(MAKE) call so the sub-make
# re-evaluates the variable after nimble setup has populated nimbledeps/.
#
# `ls -dt` (sort by modification time, newest first) is used to pick the
# latest installed version and is portable across Linux, macOS, and
# Windows (MSYS/MinGW).
BEARSSL_NIMBLEDEPS_DIR := $(shell ls -dt $(CURDIR)/nimbledeps/pkgs2/bearssl-* 2>/dev/null | head -1)
BEARSSL_CSOURCES_DIR := $(BEARSSL_NIMBLEDEPS_DIR)/bearssl/csources
BEARSSL_UNAME_M := $(shell uname -m)
ifeq ($(BEARSSL_UNAME_M),x86_64)
PORTABLE_BEARSSL_CFLAGS := -W -Wall -Os -fPIC -mssse3
else
PORTABLE_BEARSSL_CFLAGS := -W -Wall -Os -fPIC
endif
.PHONY: clean-bearssl-nimbledeps rebuild-bearssl-nimbledeps
clean-bearssl-nimbledeps:
ifeq ($(BEARSSL_NIMBLEDEPS_DIR),)
$(error No bearssl package found under nimbledeps/pkgs2/ — run 'make update' first)
endif
+ [ -e "$(BEARSSL_CSOURCES_DIR)/build" ] && \
"$(MAKE)" -C "$(BEARSSL_CSOURCES_DIR)" clean || true
rebuild-bearssl-nimbledeps: | clean-bearssl-nimbledeps
ifeq ($(BEARSSL_NIMBLEDEPS_DIR),)
$(error No bearssl package found under nimbledeps/pkgs2/ — run 'make update' first)
endif
@echo "Rebuilding bearssl from $(BEARSSL_CSOURCES_DIR)"
+ "$(MAKE)" -C "$(BEARSSL_CSOURCES_DIR)" CFLAGS="$(PORTABLE_BEARSSL_CFLAGS)" lib


@ -1,3 +1,108 @@
## v0.38.0 (2026-03-16)
### Notes
- **liblogosdelivery**: Major new FFI API with debug API, health status events, message received events, stateful SubscriptionService, and improved resource management.
- Waku Kademlia discovery integrated with Mix protocol.
- Context-aware and event-driven broker architecture introduced.
- REST Store API now defaults to page size 20 with max 100.
- Lightpush no longer mounts without relay enabled.
- Repository renamed from `logos-messaging-nim` to `logos-delivery`.
### Features
- liblogosdelivery: FFI library of new API ([#3714](https://github.com/logos-messaging/logos-delivery/pull/3714)) ([3603b838](https://github.com/logos-messaging/logos-delivery/commit/3603b838))
- liblogosdelivery: health status event support ([#3737](https://github.com/logos-messaging/logos-delivery/pull/3737)) ([ba85873f](https://github.com/logos-messaging/logos-delivery/commit/ba85873f))
- liblogosdelivery: MessageReceivedEvent propagation over FFI ([#3747](https://github.com/logos-messaging/logos-delivery/pull/3747)) ([0ad55159](https://github.com/logos-messaging/logos-delivery/commit/0ad55159))
- liblogosdelivery: add debug API ([#3742](https://github.com/logos-messaging/logos-delivery/pull/3742)) ([09618a26](https://github.com/logos-messaging/logos-delivery/commit/09618a26))
- liblogosdelivery: implement stateful SubscriptionService for Core mode ([#3732](https://github.com/logos-messaging/logos-delivery/pull/3732)) ([51ec09c3](https://github.com/logos-messaging/logos-delivery/commit/51ec09c3))
- Waku Kademlia integration and Mix protocol updates ([#3722](https://github.com/logos-messaging/logos-delivery/pull/3722)) ([335600eb](https://github.com/logos-messaging/logos-delivery/commit/335600eb))
- Waku API: implement Health spec ([#3689](https://github.com/logos-messaging/logos-delivery/pull/3689)) ([1fb4d1ea](https://github.com/logos-messaging/logos-delivery/commit/1fb4d1ea))
- Waku API: send ([#3669](https://github.com/logos-messaging/logos-delivery/pull/3669)) ([1fd25355](https://github.com/logos-messaging/logos-delivery/commit/1fd25355))
- iOS compilation support (WIP) ([#3668](https://github.com/logos-messaging/logos-delivery/pull/3668)) ([96196ab8](https://github.com/logos-messaging/logos-delivery/commit/96196ab8))
- Distribute libwaku binaries ([#3612](https://github.com/logos-messaging/logos-delivery/pull/3612)) ([9e2b3830](https://github.com/logos-messaging/logos-delivery/commit/9e2b3830))
- Rendezvous: broadcast and discover WakuPeerRecords ([#3617](https://github.com/logos-messaging/logos-delivery/pull/3617)) ([b0cd75f4](https://github.com/logos-messaging/logos-delivery/commit/b0cd75f4))
- New postgres metric to estimate payload stats ([#3596](https://github.com/logos-messaging/logos-delivery/pull/3596)) ([454b098a](https://github.com/logos-messaging/logos-delivery/commit/454b098a))
### Bug Fixes
- Fix NodeHealthMonitor logspam ([#3743](https://github.com/logos-messaging/logos-delivery/pull/3743)) ([7e36e268](https://github.com/logos-messaging/logos-delivery/commit/7e36e268))
- Fix peer selection by shard and rendezvous/metadata sharding initialization ([#3718](https://github.com/logos-messaging/logos-delivery/pull/3718)) ([84f79110](https://github.com/logos-messaging/logos-delivery/commit/84f79110))
- Correct dynamic library extension on mac and update OS detection ([#3754](https://github.com/logos-messaging/logos-delivery/pull/3754)) ([1ace0154](https://github.com/logos-messaging/logos-delivery/commit/1ace0154))
- Force FINALIZE partition detach after detecting shorter error ([#3728](https://github.com/logos-messaging/logos-delivery/pull/3728)) ([b38b5aae](https://github.com/logos-messaging/logos-delivery/commit/b38b5aae))
- Fix store protocol issue in v0.37.0 ([#3657](https://github.com/logos-messaging/logos-delivery/pull/3657)) ([91b4c5f5](https://github.com/logos-messaging/logos-delivery/commit/91b4c5f5))
- Fix hash inputs for external nullifier, remove length prefix for sha256 ([#3660](https://github.com/logos-messaging/logos-delivery/pull/3660)) ([2d40cb9d](https://github.com/logos-messaging/logos-delivery/commit/2d40cb9d))
- Fix admin API peer shards field from metadata protocol ([#3594](https://github.com/logos-messaging/logos-delivery/pull/3594)) ([e54851d9](https://github.com/logos-messaging/logos-delivery/commit/e54851d9))
- Wakucanary exits with error if ping fails ([#3595](https://github.com/logos-messaging/logos-delivery/pull/3595), [#3711](https://github.com/logos-messaging/logos-delivery/pull/3711))
- Force epoll in chronos for Android ([#3705](https://github.com/logos-messaging/logos-delivery/pull/3705)) ([beb1dde1](https://github.com/logos-messaging/logos-delivery/commit/beb1dde1))
- Fix build_rln.sh script ([#3704](https://github.com/logos-messaging/logos-delivery/pull/3704)) ([09034837](https://github.com/logos-messaging/logos-delivery/commit/09034837))
- liblogosdelivery: move destroy API to node_api, add security checks and fix possible resource leak ([#3736](https://github.com/logos-messaging/logos-delivery/pull/3736)) ([db19da92](https://github.com/logos-messaging/logos-delivery/commit/db19da92))
### Changes
- Context-aware brokers architecture ([#3674](https://github.com/logos-messaging/logos-delivery/pull/3674)) ([c27405b1](https://github.com/logos-messaging/logos-delivery/commit/c27405b1))
- Introduce EventBroker, RequestBroker and MultiRequestBroker ([#3644](https://github.com/logos-messaging/logos-delivery/pull/3644)) ([ae74b901](https://github.com/logos-messaging/logos-delivery/commit/ae74b901))
- Use chronos' TokenBucket ([#3670](https://github.com/logos-messaging/logos-delivery/pull/3670)) ([284a0816](https://github.com/logos-messaging/logos-delivery/commit/284a0816))
- REST Store API constraints: default page size 20, max 100 ([#3602](https://github.com/logos-messaging/logos-delivery/pull/3602)) ([8c30a8e1](https://github.com/logos-messaging/logos-delivery/commit/8c30a8e1))
- Do not mount lightpush without relay ([#3540](https://github.com/logos-messaging/logos-delivery/pull/3540)) ([7d1c6aba](https://github.com/logos-messaging/logos-delivery/commit/7d1c6aba))
- Mix: use exit==dest approach ([#3642](https://github.com/logos-messaging/logos-delivery/pull/3642)) ([088e3108](https://github.com/logos-messaging/logos-delivery/commit/088e3108))
- Mix: simple refactor to reduce duplicated logs ([#3752](https://github.com/logos-messaging/logos-delivery/pull/3752)) ([96f1c40a](https://github.com/logos-messaging/logos-delivery/commit/96f1c40a))
- Simplify NodeHealthMonitor creation ([#3716](https://github.com/logos-messaging/logos-delivery/pull/3716)) ([a8bdbca9](https://github.com/logos-messaging/logos-delivery/commit/a8bdbca9))
- Adapt CLI args for delivery API ([#3744](https://github.com/logos-messaging/logos-delivery/pull/3744)) ([1f9c4cb8](https://github.com/logos-messaging/logos-delivery/commit/1f9c4cb8))
- Adapt debugapi to WakoNodeConf ([#3745](https://github.com/logos-messaging/logos-delivery/pull/3745)) ([4a6ad732](https://github.com/logos-messaging/logos-delivery/commit/4a6ad732))
- Bump nim-ffi to v0.1.3 ([#3696](https://github.com/logos-messaging/logos-delivery/pull/3696)) ([a02aaab5](https://github.com/logos-messaging/logos-delivery/commit/a02aaab5))
- Bump nim-metrics to v0.2.1 ([#3734](https://github.com/logos-messaging/logos-delivery/pull/3734)) ([c7e0cc0e](https://github.com/logos-messaging/logos-delivery/commit/c7e0cc0e))
- Add gasprice overflow check ([#3636](https://github.com/logos-messaging/logos-delivery/pull/3636)) ([a8590a0a](https://github.com/logos-messaging/logos-delivery/commit/a8590a0a))
- Pin RLN dependencies to specific version ([#3649](https://github.com/logos-messaging/logos-delivery/pull/3649)) ([834eea94](https://github.com/logos-messaging/logos-delivery/commit/834eea94))
- Update CI/README references after repository rename to logos-delivery ([#3729](https://github.com/logos-messaging/logos-delivery/pull/3729)) ([895f3e2d](https://github.com/logos-messaging/logos-delivery/commit/895f3e2d))
- Simplify on chain group manager error handling ([#3678](https://github.com/logos-messaging/logos-delivery/pull/3678)) ([bc9454db](https://github.com/logos-messaging/logos-delivery/commit/bc9454db))
- Extend RequestBroker with support for native/external types and sync requests ([#3665](https://github.com/logos-messaging/logos-delivery/pull/3665)) ([33233255](https://github.com/logos-messaging/logos-delivery/commit/33233255))
### This release supports the following [libp2p protocols](https://docs.libp2p.io/concepts/protocols/):
| Protocol | Spec status | Protocol id |
| ---: | :---: | :--- |
| [`11/WAKU2-RELAY`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/11/relay.md) | `stable` | `/vac/waku/relay/2.0.0` |
| [`12/WAKU2-FILTER`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/12/filter.md) | `draft` | `/vac/waku/filter/2.0.0-beta1` <br />`/vac/waku/filter-subscribe/2.0.0-beta1` <br />`/vac/waku/filter-push/2.0.0-beta1` |
| [`13/WAKU2-STORE`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/13/store.md) | `draft` | `/vac/waku/store/2.0.0-beta4` |
| [`19/WAKU2-LIGHTPUSH`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/19/lightpush.md) | `draft` | `/vac/waku/lightpush/2.0.0-beta1` |
| [`WAKU2-LIGHTPUSH v3`](https://github.com/waku-org/specs/blob/master/standards/core/lightpush.md) | `draft` | `/vac/waku/lightpush/3.0.0` |
| [`66/WAKU2-METADATA`](https://github.com/waku-org/specs/blob/master/standards/core/metadata.md) | `raw` | `/vac/waku/metadata/1.0.0` |
| [`WAKU-SYNC`](https://github.com/waku-org/specs/blob/master/standards/core/sync.md) | `draft` | `/vac/waku/sync/1.0.0` |
## v0.37.4 (2026-04-03)
### Changes
- Optimize release builds for speed ([#3735](https://github.com/logos-messaging/logos-delivery/pull/3735)) ([#3777](https://github.com/logos-messaging/logos-delivery/pull/3777))
### Bug Fixes
- Properly add DEBUG flag into Dockerfile
## v0.37.3 (2026-03-25)
### Features
- Allow override user-message-rate-limit ([#3778](https://github.com/logos-messaging/logos-delivery/pull/3778))
## v0.37.2 (2026-03-19)
### Features
- Allow union of several retention policies ([#3766](https://github.com/logos-messaging/logos-delivery/pull/3766))
### Bug Fixes
- Bump nim-http-utils to v0.4.1 to allow accepting <:><space><(> as a valid header and tests to validate html rfc7230 ([#43](https://github.com/status-im/nim-http-utils/pull/43))
## v0.37.1 (2026-03-12)
### Bug Fixes
- Avoid IndexDefect if DB error message is short ([#3725](https://github.com/logos-messaging/logos-delivery/pull/3725))
- Remove ENR cache from peer exchange ([#3652](https://github.com/logos-messaging/logos-messaging-nim/pull/3652)) ([7920368a](https://github.com/logos-messaging/logos-messaging-nim/commit/7920368a36687cd5f12afa52d59866792d8457ca))
## v0.37.0 (2025-10-01)
### Notes
@ -19,6 +124,8 @@
- Rendezvous: add request interval option ([#3569](https://github.com/waku-org/nwaku/pull/3569)) ([cc7a6406](https://github.com/waku-org/nwaku/commit/cc7a6406))
- Shard-specific metrics tracking ([#3520](https://github.com/waku-org/nwaku/pull/3520)) ([c3da29fd](https://github.com/waku-org/nwaku/commit/c3da29fd))
- Libwaku: build Windows DLL for Status-go ([#3460](https://github.com/waku-org/nwaku/pull/3460)) ([5c38a53f](https://github.com/waku-org/nwaku/commit/5c38a53f))
- RLN: add Stateless RLN support ([#3621](https://github.com/waku-org/nwaku/pull/3621))
- LOG: Reduce log level of messages from debug to info for better visibility ([#3622](https://github.com/waku-org/nwaku/pull/3622))
### Bug Fixes
@ -30,6 +137,7 @@
- Metrics: switched to counter instead of gauge ([#3355](https://github.com/waku-org/nwaku/pull/3355)) ([a27eec90](https://github.com/waku-org/nwaku/commit/a27eec90))
- Fixed lightpush metrics and diagnostics ([#3486](https://github.com/waku-org/nwaku/pull/3486)) ([0ed3fc80](https://github.com/waku-org/nwaku/commit/0ed3fc80))
- Misc sync, dashboard, and CI fixes ([#3434](https://github.com/waku-org/nwaku/pull/3434), [#3508](https://github.com/waku-org/nwaku/pull/3508), [#3464](https://github.com/waku-org/nwaku/pull/3464))
- Raise log level of numerous operational messages from debug to info for better visibility ([#3622](https://github.com/waku-org/nwaku/pull/3622))
### Changes


@ -1,14 +1,14 @@
# BUILD NIM APP ----------------------------------------------------------------
-FROM rust:1.81.0-alpine3.19 AS nim-build
+FROM rustlang/rust:nightly-alpine3.19 AS nim-build
ARG NIMFLAGS
ARG MAKE_TARGET=wakunode2
ARG NIM_COMMIT
-ARG LOG_LEVEL=TRACE
ARG HEAPTRACK_BUILD=0
+ARG POSTGRES=0
# Get build tools and required header files
-RUN apk add --no-cache bash git build-base openssl-dev linux-headers curl jq
+RUN apk add --no-cache bash git build-base openssl-dev linux-headers curl jq libbsd-dev
WORKDIR /app
COPY . .
@ -27,7 +27,7 @@ RUN if [ "$HEAPTRACK_BUILD" = "1" ]; then \
RUN make -j$(nproc) deps QUICK_AND_DIRTY_COMPILER=1 ${NIM_COMMIT}
# Build the final node binary
-RUN make -j$(nproc) ${NIM_COMMIT} $MAKE_TARGET LOG_LEVEL=${LOG_LEVEL} NIMFLAGS="${NIMFLAGS}"
+RUN make -j$(nproc) ${NIM_COMMIT} $MAKE_TARGET NIMFLAGS="${NIMFLAGS}" POSTGRES=${POSTGRES}
# PRODUCTION IMAGE -------------------------------------------------------------
@ -46,7 +46,7 @@ LABEL version="unknown"
EXPOSE 30303 60000 8545
# Referenced in the binary
-RUN apk add --no-cache libgcc libpq-dev bind-tools
+RUN apk add --no-cache libgcc libpq-dev bind-tools libstdc++
# Copy to separate location to accomodate different MAKE_TARGET values
COPY --from=nim-build /app/build/$MAKE_TARGET /usr/local/bin/


@ -1,5 +1,5 @@
# BUILD NIM APP ----------------------------------------------------------------
-FROM rust:1.81.0-alpine3.19 AS nim-build
+FROM rustlang/rust:nightly-alpine3.19 AS nim-build
ARG NIMFLAGS
ARG MAKE_TARGET=lightpushwithmix
@ -7,7 +7,7 @@ ARG NIM_COMMIT
ARG LOG_LEVEL=TRACE
# Get build tools and required header files
-RUN apk add --no-cache bash git build-base openssl-dev linux-headers curl jq
+RUN apk add --no-cache bash git build-base openssl-dev linux-headers curl jq libbsd-dev
WORKDIR /app
COPY . .
@ -24,7 +24,6 @@ RUN make -j$(nproc) deps QUICK_AND_DIRTY_COMPILER=1 ${NIM_COMMIT}
# Build the final node binary
RUN make -j$(nproc) ${NIM_COMMIT} $MAKE_TARGET LOG_LEVEL=${LOG_LEVEL} NIMFLAGS="${NIMFLAGS}"
# REFERENCE IMAGE as BASE for specialized PRODUCTION IMAGES----------------------------------------
FROM alpine:3.18 AS base_lpt
@ -44,8 +43,8 @@ RUN apk add --no-cache libgcc libpq-dev \
    wget \
    iproute2 \
    python3 \
-    jq
+    jq \
+    libstdc++
COPY --from=nim-build /app/build/lightpush_publisher_mix /usr/bin/
RUN chmod +x /usr/bin/lightpush_publisher_mix


@ -1,6 +1,3 @@
-nim-waku is licensed under the Apache License version 2
-Copyright (c) 2018 Status Research & Development GmbH
------------------------------------------------------
Apache License
Version 2.0, January 2004
@ -190,7 +187,7 @@ Copyright (c) 2018 Status Research & Development GmbH
same "printed page" as the copyright notice for easier
identification within third-party archives.
-Copyright 2018 Status Research & Development GmbH
+Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.


@ -1,25 +1,21 @@
-nim-waku is licensed under the MIT License
-Copyright (c) 2018 Status Research & Development GmbH
------------------------------------------------------
The MIT License (MIT)
-Copyright (c) 2018 Status Research & Development GmbH
+Copyright © 2025-2026 Logos
Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
+of this software and associated documentation files (the “Software”), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.

Makefile:

@ -4,28 +4,13 @@
# - MIT license
# at your option. This file may not be copied, modified, or distributed except
# according to those terms.
-export BUILD_SYSTEM_DIR := vendor/nimbus-build-system
-export EXCLUDED_NIM_PACKAGES := vendor/nim-dnsdisc/vendor
+include Nat.mk
+include BearSSL.mk
LINK_PCRE := 0
FORMAT_MSG := "\\x1B[95mFormatting:\\x1B[39m"
-# we don't want an error here, so we can handle things later, in the ".DEFAULT" target
+BUILD_MSG := "Building:"
--include $(BUILD_SYSTEM_DIR)/makefiles/variables.mk
-ifeq ($(NIM_PARAMS),)
-# "variables.mk" was not included, so we update the submodules.
-GIT_SUBMODULE_UPDATE := git submodule update --init --recursive
-.DEFAULT:
-	+@ echo -e "Git submodules not found. Running '$(GIT_SUBMODULE_UPDATE)'.\n"; \
-	$(GIT_SUBMODULE_UPDATE); \
-	echo
-# Now that the included *.mk files appeared, and are newer than this file, Make will restart itself:
-# https://www.gnu.org/software/make/manual/make.html#Remaking-Makefiles
-#
-# After restarting, it will execute its original goal, so we don't have to start a child Make here
-# with "$(MAKE) $(MAKECMDGOALS)". Isn't hidden control flow great?
-else # "variables.mk" was included. Business as usual until the end of this file.
# Determine the OS
detected_OS := $(shell uname -s)
@ -33,25 +18,36 @@ ifneq (,$(findstring MINGW,$(detected_OS)))
detected_OS := Windows
endif
+# Ensure the nim/nimble installed by install-nim/install-nimble are found first
+export PATH := $(HOME)/.nimble/bin:$(PATH)
+# NIM binary location
+NIM_BINARY := $(shell which nim 2>/dev/null)
+NPH := $(HOME)/.nimble/bin/nph
+NIMBLEDEPS_STAMP := nimbledeps/.nimble-setup
+# Compilation parameters
+NIM_PARAMS ?=
ifeq ($(detected_OS),Windows)
+# Update MINGW_PATH to standard MinGW location
MINGW_PATH = /mingw64
NIM_PARAMS += --passC:"-I$(MINGW_PATH)/include"
NIM_PARAMS += --passL:"-L$(MINGW_PATH)/lib"
-NIM_PARAMS += --passL:"-Lvendor/nim-nat-traversal/vendor/miniupnp/miniupnpc"
-NIM_PARAMS += --passL:"-Lvendor/nim-nat-traversal/vendor/libnatpmp-upstream"
-LIBS = -lws2_32 -lbcrypt -liphlpapi -luserenv -lntdll -lminiupnpc -lnatpmp -lpq
+LIBS = -lws2_32 -lbcrypt -liphlpapi -luserenv -lntdll -lpq
NIM_PARAMS += $(foreach lib,$(LIBS),--passL:"$(lib)")
+NIM_PARAMS += --passL:"-Wl,--allow-multiple-definition"
+export PATH := /c/msys64/usr/bin:/c/msys64/mingw64/bin:/c/msys64/usr/lib:/c/msys64/mingw64/lib:$(PATH)
endif
##########
## Main ##
##########
-.PHONY: all test update clean
+.PHONY: all test update clean examples deps nimble install-nim install-nimble
-# default target, because it's the first one that doesn't start with '.'
+# default target
-all: | wakunode2 example2 chat2 chat2bridge libwaku
+all: | wakunode2 libwaku liblogosdelivery
+examples: | example2 chat2 chat2bridge
test_file := $(word 2,$(MAKECMDGOALS))
define test_name
@ -65,94 +61,85 @@ ifeq ($(strip $(test_file)),)
else else
$(MAKE) compile-test TEST_FILE="$(test_file)" TEST_NAME="$(call test_name)" $(MAKE) compile-test TEST_FILE="$(test_file)" TEST_NAME="$(call test_name)"
endif endif
# this prevents make from erroring on unknown targets like "Index"
# this prevents make from erroring on unknown targets
%: %:
@true @true
waku.nims: waku.nims:
ln -s waku.nimble $@ ln -s waku.nimble $@
update: | update-common $(NIMBLEDEPS_STAMP): nimble.lock | waku.nims
rm -rf waku.nims && \ $(MAKE) install-nimble
$(MAKE) waku.nims $(HANDLE_OUTPUT) nimble setup --localdeps
$(MAKE) build-nph $(MAKE) build-nph
$(MAKE) rebuild-bearssl-nimbledeps
$(MAKE) rebuild-nat-libs-nimbledeps
touch $@
update:
rm -f $(NIMBLEDEPS_STAMP)
$(MAKE) $(NIMBLEDEPS_STAMP)
nimble lock
clean: clean:
rm -rf build rm -rf build 2> /dev/null || true
rm -rf nimbledeps 2> /dev/null || true
rm -fr nimcache 2> /dev/null || true
rm nimble.paths 2> /dev/null || true
nimble clean
# must be included after the default target REQUIRED_NIM_VERSION := $(shell grep -E '^const RequiredNimVersion\s*=' waku.nimble | grep -oE '"[0-9]+\.[0-9]+\.[0-9]+"' | tr -d '"')
-include $(BUILD_SYSTEM_DIR)/makefiles/targets.mk REQUIRED_NIMBLE_VERSION := $(shell grep -E '^const RequiredNimbleVersion\s*=' waku.nimble | grep -oE '"[0-9]+\.[0-9]+\.[0-9]+"' | tr -d '"')
install-nim:
scripts/install_nim.sh $(REQUIRED_NIM_VERSION)
install-nimble: install-nim
@nimble_ver=$$(nimble --version 2>/dev/null | head -1 | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -1); \
if [ "$$nimble_ver" = "$(REQUIRED_NIMBLE_VERSION)" ]; then \
echo "nimble $(REQUIRED_NIMBLE_VERSION) already installed, skipping."; \
else \
cd $$(mktemp -d) && nimble install "nimble@$(REQUIRED_NIMBLE_VERSION)" -y; \
fi
build:
mkdir -p build
nimble: install-nimble
## Possible values: prod; debug ## Possible values: prod; debug
TARGET ?= prod TARGET ?= prod
## Git version ## Git version
GIT_VERSION ?= $(shell git describe --abbrev=6 --always --tags) GIT_VERSION ?= $(shell git describe --abbrev=6 --always --tags)
## Compilation parameters. If defined in the CLI the assignments won't be executed
NIM_PARAMS := $(NIM_PARAMS) -d:git_version=\"$(GIT_VERSION)\" NIM_PARAMS := $(NIM_PARAMS) -d:git_version=\"$(GIT_VERSION)\"
## Heaptracker options ## Heaptracker options
HEAPTRACKER ?= 0 HEAPTRACKER ?= 0
HEAPTRACKER_INJECT ?= 0 HEAPTRACKER_INJECT ?= 0
ifeq ($(HEAPTRACKER), 1) ifeq ($(HEAPTRACKER), 1)
# Assumes Nim's lib/system/alloc.nim is patched!
TARGET := debug-with-heaptrack TARGET := debug-with-heaptrack
ifeq ($(HEAPTRACKER_INJECT), 1) ifeq ($(HEAPTRACKER_INJECT), 1)
# the Nim compiler will load 'libheaptrack_inject.so'
HEAPTRACK_PARAMS := -d:heaptracker -d:heaptracker_inject HEAPTRACK_PARAMS := -d:heaptracker -d:heaptracker_inject
NIM_PARAMS := $(NIM_PARAMS) -d:heaptracker -d:heaptracker_inject NIM_PARAMS := $(NIM_PARAMS) -d:heaptracker -d:heaptracker_inject
else else
# the Nim compiler will load 'libheaptrack_preload.so'
HEAPTRACK_PARAMS := -d:heaptracker
NIM_PARAMS := $(NIM_PARAMS) -d:heaptracker
endif
endif
## end of Heaptracker options
##################
## Dependencies ##
##################
.PHONY: deps libbacktrace
rustup:
ifeq (, $(shell which cargo))
# Install Rustup if it's not installed
# -y: Assume "yes" for all prompts
# --default-toolchain stable: Install the stable toolchain
curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain stable
endif
rln-deps: rustup
+# Debug/Release mode
./scripts/install_rln_tests_dependencies.sh
deps: | deps-common nat-libs waku.nims
### nim-libbacktrace
# "-d:release" implies "--stacktrace:off" and it cannot be added to config.nims
ifeq ($(DEBUG), 0)
NIM_PARAMS := $(NIM_PARAMS) -d:release
else
NIM_PARAMS := $(NIM_PARAMS) -d:debug
endif
ifeq ($(USE_LIBBACKTRACE), 0)
NIM_PARAMS := $(NIM_PARAMS) -d:disable_libbacktrace
endif
libbacktrace:
+ $(MAKE) -C vendor/nim-libbacktrace --no-print-directory BUILD_CXX_LIB=0
# enable experimental exit is dest feature in libp2p mix
NIM_PARAMS := $(NIM_PARAMS) -d:libp2p_mix_experimental_exit_is_dest
clean-libbacktrace:
+ $(MAKE) -C vendor/nim-libbacktrace clean $(HANDLE_OUTPUT)
# Extend deps and clean targets
ifneq ($(USE_LIBBACKTRACE), 0)
deps: | libbacktrace
endif
ifeq ($(POSTGRES), 1)
NIM_PARAMS := $(NIM_PARAMS) -d:postgres -d:nimDebugDlOpen
@@ -162,14 +149,26 @@ ifeq ($(DEBUG_DISCV5), 1)
NIM_PARAMS := $(NIM_PARAMS) -d:debugDiscv5
endif
-clean: | clean-libbacktrace
-### Create nimble links (used when building with Nix)
-nimbus-build-system-nimble-dir:
-NIMBLE_DIR="$(CURDIR)/$(NIMBLE_DIR)" \
-PWD_CMD="$(PWD)" \
-$(CURDIR)/scripts/generate_nimble_links.sh
+# Export NIM_PARAMS so nimble can access it
+export NIM_PARAMS
+##################
+## Dependencies ##
+##################
+.PHONY: deps
+FOUNDRY_VERSION := 1.5.0
+PNPM_VERSION := 10.23.0
+rustup:
ifeq (, $(shell which cargo))
curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain stable
endif
rln-deps: rustup
./scripts/install_rln_tests_dependencies.sh $(FOUNDRY_VERSION) $(PNPM_VERSION)
deps: | nimble
##################
## RLN ##
@@ -177,17 +176,18 @@ nimbus-build-system-nimble-dir:
.PHONY: librln
LIBRLN_BUILDDIR := $(CURDIR)/vendor/zerokit
-LIBRLN_VERSION := v0.8.0
+LIBRLN_VERSION := v0.9.0
ifeq ($(detected_OS),Windows)
-LIBRLN_FILE := rln.lib
+LIBRLN_FILE ?= rln.lib
else
-LIBRLN_FILE := librln_$(LIBRLN_VERSION).a
+LIBRLN_FILE ?= librln_$(LIBRLN_VERSION).a
endif
$(LIBRLN_FILE):
+git submodule update --init vendor/zerokit
echo -e $(BUILD_MSG) "$@" && \
-./scripts/build_rln.sh $(LIBRLN_BUILDDIR) $(LIBRLN_VERSION) $(LIBRLN_FILE)
+bash scripts/build_rln.sh $(LIBRLN_BUILDDIR) $(LIBRLN_VERSION) $(LIBRLN_FILE)
librln: | $(LIBRLN_FILE)
$(eval NIM_PARAMS += --passL:$(LIBRLN_FILE) --passL:-lm)
@@ -196,7 +196,6 @@ clean-librln:
cargo clean --manifest-path vendor/zerokit/rln/Cargo.toml
rm -f $(LIBRLN_FILE)
-# Extend clean target
clean: | clean-librln
#################
@@ -204,70 +203,71 @@ clean: | clean-librln
#################
.PHONY: testcommon
-testcommon: | build deps
+testcommon: | $(NIMBLEDEPS_STAMP) build
echo -e $(BUILD_MSG) "build/$@" && \
-$(ENV_SCRIPT) nim testcommon $(NIM_PARAMS) waku.nims
+nimble testcommon
##########
## Waku ##
##########
.PHONY: testwaku wakunode2 testwakunode2 example2 chat2 chat2bridge liteprotocoltester
-# install rln-deps only for the testwaku target
-testwaku: | build deps rln-deps librln
+testwaku: | $(NIMBLEDEPS_STAMP) build rln-deps librln
echo -e $(BUILD_MSG) "build/$@" && \
-$(ENV_SCRIPT) nim test -d:os=$(shell uname) $(NIM_PARAMS) waku.nims
+nimble test
-wakunode2: | build deps librln
+wakunode2: | $(NIMBLEDEPS_STAMP) build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
-$(ENV_SCRIPT) nim wakunode2 $(NIM_PARAMS) waku.nims
+nimble wakunode2
-benchmarks: | build deps librln
+benchmarks: | $(NIMBLEDEPS_STAMP) build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
-$(ENV_SCRIPT) nim benchmarks $(NIM_PARAMS) waku.nims
+nimble benchmarks
-testwakunode2: | build deps librln
+testwakunode2: | $(NIMBLEDEPS_STAMP) build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
-$(ENV_SCRIPT) nim testwakunode2 $(NIM_PARAMS) waku.nims
+nimble testwakunode2
-example2: | build deps librln
+example2: | $(NIMBLEDEPS_STAMP) build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
-$(ENV_SCRIPT) nim example2 $(NIM_PARAMS) waku.nims
+nimble example2
-chat2: | build deps librln
+chat2: | $(NIMBLEDEPS_STAMP) build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
-$(ENV_SCRIPT) nim chat2 $(NIM_PARAMS) waku.nims
+nimble chat2
-chat2mix: | build deps librln
+chat2mix: | $(NIMBLEDEPS_STAMP) build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
-$(ENV_SCRIPT) nim chat2mix $(NIM_PARAMS) waku.nims
+nimble chat2mix
-rln-db-inspector: | build deps librln
+rln-db-inspector: | $(NIMBLEDEPS_STAMP) build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
-$(ENV_SCRIPT) nim rln_db_inspector $(NIM_PARAMS) waku.nims
+nimble rln_db_inspector
-chat2bridge: | build deps librln
+chat2bridge: | $(NIMBLEDEPS_STAMP) build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
-$(ENV_SCRIPT) nim chat2bridge $(NIM_PARAMS) waku.nims
+nimble chat2bridge
-liteprotocoltester: | build deps librln
+liteprotocoltester: | $(NIMBLEDEPS_STAMP) build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
-$(ENV_SCRIPT) nim liteprotocoltester $(NIM_PARAMS) waku.nims
+nimble liteprotocoltester
-lightpushwithmix: | build deps librln
+lightpushwithmix: | $(NIMBLEDEPS_STAMP) build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
-$(ENV_SCRIPT) nim lightpushwithmix $(NIM_PARAMS) waku.nims
+nimble lightpushwithmix
+api_example: | $(NIMBLEDEPS_STAMP) build deps librln
+echo -e $(BUILD_MSG) "build/$@" && \
+$(ENV_SCRIPT) nim api_example $(NIM_PARAMS) waku.nims
-build/%: | build deps librln
+build/%: | $(NIMBLEDEPS_STAMP) build deps librln
echo -e $(BUILD_MSG) "build/$*" && \
-$(ENV_SCRIPT) nim buildone $(NIM_PARAMS) waku.nims $*
+nimble buildone $*
-compile-test: | build deps librln
+compile-test: | $(NIMBLEDEPS_STAMP) build deps librln
echo -e $(BUILD_MSG) "$(TEST_FILE)" "\"$(TEST_NAME)\"" && \
-$(ENV_SCRIPT) nim buildTest $(NIM_PARAMS) waku.nims $(TEST_FILE) && \
-$(ENV_SCRIPT) nim execTest $(NIM_PARAMS) waku.nims $(TEST_FILE) "\"$(TEST_NAME)\""; \
+nimble buildTest $(TEST_FILE) && \
+nimble execTest $(TEST_FILE) "\"$(TEST_NAME)\""
################
## Waku tools ##
@@ -276,29 +276,30 @@ compile-test: | build deps librln
tools: networkmonitor wakucanary
-wakucanary: | build deps librln
+wakucanary: | $(NIMBLEDEPS_STAMP) build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
-$(ENV_SCRIPT) nim wakucanary $(NIM_PARAMS) waku.nims
+nimble wakucanary
-networkmonitor: | build deps librln
+networkmonitor: | $(NIMBLEDEPS_STAMP) build deps librln
echo -e $(BUILD_MSG) "build/$@" && \
-$(ENV_SCRIPT) nim networkmonitor $(NIM_PARAMS) waku.nims
+nimble networkmonitor
############
## Format ##
############
-.PHONY: build-nph install-nph clean-nph print-nph-path
+.PHONY: build-nph install-nph print-nph-path
# Default location for nph binary shall be next to nim binary to make it available on the path.
NPH:=$(shell dirname $(NIM_BINARY))/nph
build-nph: | build deps
-ifeq ("$(wildcard $(NPH))","")
-$(ENV_SCRIPT) nim c --skipParentCfg:on vendor/nph/src/nph.nim && \
-mv vendor/nph/src/nph $(shell dirname $(NPH))
-echo "nph utility is available at " $(NPH)
+ifneq ($(detected_OS),Windows)
+if command -v nph > /dev/null 2>&1; then \
+echo "nph already installed, skipping"; \
+else \
+echo "Installing nph globally"; \
+(cd /tmp && nimble install nph@0.7.0 --accept -g); \
+fi
+command -v nph
else
-echo "nph utility already exists at " $(NPH)
+echo "Skipping nph build on Windows (nph is only used on Unix-like systems)"
endif
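The nph recipe follows the usual install-if-missing shell idiom: probe PATH with `command -v`, and only run the installer when the probe fails. A minimal generic sketch of the same pattern (the `ensure_tool` helper name and the placeholder install step are illustrative, not part of the Makefile):

```shell
# Install a tool only when it is not already on PATH.
ensure_tool() {
  if command -v "$1" > /dev/null 2>&1; then
    echo "$1 already installed, skipping"
  else
    echo "installing $1"
    # placeholder install step; the Makefile's real one is:
    #   (cd /tmp && nimble install nph@0.7.0 --accept -g)
  fi
}

ensure_tool sh                            # present on any POSIX system
ensure_tool some-tool-that-does-not-exist # triggers the install branch
```

Pinning the version in the install command (`nph@0.7.0`) keeps formatting reproducible across developer machines, since a formatter upgrade can otherwise reflow the whole tree.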
GIT_PRE_COMMIT_HOOK := .git/hooks/pre-commit
@@ -315,39 +316,30 @@ nph/%: | build-nph
echo -e $(FORMAT_MSG) "nph/$*" && \
$(NPH) $*
-clean-nph:
-rm -f $(NPH)
# To avoid hardcoding nph binary location in several places
print-nph-path:
-echo "$(NPH)"
+@echo "$(NPH)"
-clean: | clean-nph
+clean:
###################
## Documentation ##
###################
.PHONY: docs coverage
-# TODO: Remove unused target
docs: | build deps
echo -e $(BUILD_MSG) "build/$@" && \
-$(ENV_SCRIPT) nim doc --run --index:on --project --out:.gh-pages waku/waku.nim waku.nims
+nimble doc --run --index:on --project --out:.gh-pages waku/waku.nim waku.nims
coverage:
echo -e $(BUILD_MSG) "build/$@" && \
-$(ENV_SCRIPT) ./scripts/run_cov.sh -y
+./scripts/run_cov.sh -y
#####################
## Container image ##
#####################
# -d:insecure - Necessary to enable Prometheus HTTP endpoint for metrics
# -d:chronicles_colors:none - Necessary to disable colors in logs for Docker
DOCKER_IMAGE_NIMFLAGS ?= -d:chronicles_colors:none -d:insecure -d:postgres
DOCKER_IMAGE_NIMFLAGS := $(DOCKER_IMAGE_NIMFLAGS) $(HEAPTRACK_PARAMS)
# build a docker image for the fleet
docker-image: MAKE_TARGET ?= wakunode2
docker-image: DOCKER_IMAGE_TAG ?= $(MAKE_TARGET)-$(GIT_VERSION)
docker-image: DOCKER_IMAGE_NAME ?= wakuorg/nwaku:$(DOCKER_IMAGE_TAG)
@@ -355,8 +347,6 @@ docker-image:
docker build \
--build-arg="MAKE_TARGET=$(MAKE_TARGET)" \
--build-arg="NIMFLAGS=$(DOCKER_IMAGE_NIMFLAGS)" \
---build-arg="NIM_COMMIT=$(DOCKER_NIM_COMMIT)" \
---build-arg="LOG_LEVEL=$(LOG_LEVEL)" \
--build-arg="HEAPTRACK_BUILD=$(HEAPTRACKER)" \
--label="commit=$(shell git rev-parse HEAD)" \
--label="version=$(GIT_VERSION)" \
@@ -367,7 +357,7 @@ docker-quick-image: MAKE_TARGET ?= wakunode2
docker-quick-image: DOCKER_IMAGE_TAG ?= $(MAKE_TARGET)-$(GIT_VERSION)
docker-quick-image: DOCKER_IMAGE_NAME ?= wakuorg/nwaku:$(DOCKER_IMAGE_TAG)
docker-quick-image: NIM_PARAMS := $(NIM_PARAMS) -d:chronicles_colors:none -d:insecure -d:postgres --passL:$(LIBRLN_FILE) --passL:-lm
-docker-quick-image: | build deps librln wakunode2
+docker-quick-image: | build librln wakunode2
docker build \
--build-arg="MAKE_TARGET=$(MAKE_TARGET)" \
--tag $(DOCKER_IMAGE_NAME) \
@@ -381,20 +371,14 @@ docker-push:
####################################
## Container lite-protocol-tester ##
####################################
-# -d:insecure - Necessary to enable Prometheus HTTP endpoint for metrics
-# -d:chronicles_colors:none - Necessary to disable colors in logs for Docker
DOCKER_LPT_NIMFLAGS ?= -d:chronicles_colors:none -d:insecure
-# build a docker image for the fleet
docker-liteprotocoltester: DOCKER_LPT_TAG ?= latest
docker-liteprotocoltester: DOCKER_LPT_NAME ?= wakuorg/liteprotocoltester:$(DOCKER_LPT_TAG)
-# --no-cache
docker-liteprotocoltester:
docker build \
--build-arg="MAKE_TARGET=liteprotocoltester" \
--build-arg="NIMFLAGS=$(DOCKER_LPT_NIMFLAGS)" \
---build-arg="NIM_COMMIT=$(DOCKER_NIM_COMMIT)" \
---build-arg="LOG_LEVEL=TRACE" \
--label="commit=$(shell git rev-parse HEAD)" \
--label="version=$(GIT_VERSION)" \
--target $(if $(filter deploy,$(DOCKER_LPT_TAG)),deployment_lpt,standalone_lpt) \
@@ -413,37 +397,96 @@ docker-quick-liteprotocoltester: | liteprotocoltester
docker-liteprotocoltester-push:
docker push $(DOCKER_LPT_NAME)
################
## C Bindings ##
################
-.PHONY: cbindings cwaku_example libwaku
+.PHONY: cbindings cwaku_example libwaku liblogosdelivery liblogosdelivery_example
-STATIC ?= 0
-libwaku: | build deps librln
-rm -f build/libwaku*
-ifeq ($(STATIC), 1)
-echo -e $(BUILD_MSG) "build/$@.a" && $(ENV_SCRIPT) nim libwakuStatic $(NIM_PARAMS) waku.nims
-else ifeq ($(detected_OS),Windows)
-echo -e $(BUILD_MSG) "build/$@.dll" && $(ENV_SCRIPT) nim libwakuDynamic $(NIM_PARAMS) waku.nims
-else
-echo -e $(BUILD_MSG) "build/$@.so" && $(ENV_SCRIPT) nim libwakuDynamic $(NIM_PARAMS) waku.nims
-endif
+detected_OS ?= Linux
+ifeq ($(OS),Windows_NT)
+detected_OS := Windows
+else
+detected_OS := $(shell uname -s)
+endif
BUILD_COMMAND ?= Dynamic
STATIC ?= 0
ifeq ($(STATIC), 1)
BUILD_COMMAND = Static
endif
ifeq ($(detected_OS),Windows)
BUILD_COMMAND := $(BUILD_COMMAND)Windows
else ifeq ($(detected_OS),Darwin)
BUILD_COMMAND := $(BUILD_COMMAND)Mac
export IOS_SDK_PATH := $(shell xcrun --sdk iphoneos --show-sdk-path)
else ifeq ($(detected_OS),Linux)
BUILD_COMMAND := $(BUILD_COMMAND)Linux
endif
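The BUILD_COMMAND logic above is pure string composition: a `Static`/`Dynamic` prefix plus an OS suffix selects which nimble task name gets invoked (e.g. `libwakuDynamicLinux`). A shell sketch of the same selection, for following the control flow outside Make (the task names mirror the Makefile; the Windows fallback branch is an assumption of this sketch):

```shell
STATIC=0
detected_OS=$(uname -s)   # Linux / Darwin; the Makefile detects Windows via $OS instead

BUILD_COMMAND=Dynamic
if [ "$STATIC" = 1 ]; then BUILD_COMMAND=Static; fi

case "$detected_OS" in
  Darwin) BUILD_COMMAND="${BUILD_COMMAND}Mac" ;;
  Linux)  BUILD_COMMAND="${BUILD_COMMAND}Linux" ;;
  *)      BUILD_COMMAND="${BUILD_COMMAND}Windows" ;;
esac

# The Makefile then runs: nimble --verbose libwaku$(BUILD_COMMAND) waku.nimble
echo "libwaku${BUILD_COMMAND}"
```

Encoding the platform in the task name keeps all per-OS compiler flags inside waku.nimble rather than spreading them across the Makefile.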
libwaku: | $(NIMBLEDEPS_STAMP) librln
nimble --verbose libwaku$(BUILD_COMMAND) waku.nimble
liblogosdelivery: | $(NIMBLEDEPS_STAMP) librln
nimble --verbose liblogosdelivery$(BUILD_COMMAND) waku.nimble
logosdelivery_example: | build liblogosdelivery
@echo -e $(BUILD_MSG) "build/$@"
ifeq ($(detected_OS),Darwin)
gcc -o build/$@ \
liblogosdelivery/examples/logosdelivery_example.c \
liblogosdelivery/examples/json_utils.c \
-I./liblogosdelivery \
-L./build \
-llogosdelivery \
-Wl,-rpath,./build
else ifeq ($(detected_OS),Linux)
gcc -o build/$@ \
liblogosdelivery/examples/logosdelivery_example.c \
liblogosdelivery/examples/json_utils.c \
-I./liblogosdelivery \
-L./build \
-llogosdelivery \
-Wl,-rpath,'$$ORIGIN'
else ifeq ($(detected_OS),Windows)
gcc -o build/$@.exe \
liblogosdelivery/examples/logosdelivery_example.c \
liblogosdelivery/examples/json_utils.c \
-I./liblogosdelivery \
-L./build \
-llogosdelivery \
-lws2_32
endif
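A note on the `-Wl,-rpath,'$$ORIGIN'` flag in the Linux branch above: `$$` is Make's escape for a literal `$`, and the single quotes stop the shell from expanding `$ORIGIN`, so the literal token reaches the linker and is later expanded by the dynamic loader to the directory containing the binary. A quick illustration of the quoting layer only (the variable name is just for the demo):

```shell
# What the shell receives after Make rewrites $$ORIGIN -> $ORIGIN.
# Single quotes keep $ORIGIN literal instead of expanding an (unset) variable.
flag=-Wl,-rpath,'$ORIGIN'

echo "$flag"
```

Without the quoting, the shell would substitute an empty string and the produced binary would have no usable rpath, so `liblogosdelivery` next to the executable would not be found at run time.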
cwaku_example: | build libwaku
echo -e $(BUILD_MSG) "build/$@" && \
cc -o "build/$@" \
./examples/cbindings/waku_example.c \
./examples/cbindings/base64.c \
-lwaku -Lbuild/ \
-pthread -ldl -lm
cppwaku_example: | build libwaku
echo -e $(BUILD_MSG) "build/$@" && \
g++ -o "build/$@" \
./examples/cpp/waku.cpp \
./examples/cpp/base64.cpp \
-lwaku -Lbuild/ \
-pthread -ldl -lm
nodejswaku: | build deps
echo -e $(BUILD_MSG) "build/$@" && \
node-gyp build --directory=examples/nodejs/
#####################
## Mobile Bindings ##
#####################
.PHONY: libwaku-android \
libwaku-android-precheck \
libwaku-android-arm64 \
libwaku-android-amd64 \
libwaku-android-x86 \
-libwaku-android-arm \
-rebuild-nat-libs \
-build-libwaku-for-android-arch
+libwaku-android-arm
ANDROID_TARGET ?= 30
ifeq ($(detected_OS),Darwin)
@@ -452,17 +495,19 @@ else
ANDROID_TOOLCHAIN_DIR := $(ANDROID_NDK_HOME)/toolchains/llvm/prebuilt/linux-x86_64
endif
-rebuild-nat-libs: | clean-cross nat-libs
libwaku-android-precheck:
ifndef ANDROID_NDK_HOME
$(error ANDROID_NDK_HOME is not set)
endif
build-libwaku-for-android-arch:
-$(MAKE) rebuild-nat-libs CC=$(ANDROID_TOOLCHAIN_DIR)/bin/$(ANDROID_COMPILER) && \
-./scripts/build_rln_android.sh $(CURDIR)/build $(LIBRLN_BUILDDIR) $(LIBRLN_VERSION) $(CROSS_TARGET) $(ABIDIR) && \
-CPU=$(CPU) ABIDIR=$(ABIDIR) ANDROID_ARCH=$(ANDROID_ARCH) ANDROID_COMPILER=$(ANDROID_COMPILER) ANDROID_TOOLCHAIN_DIR=$(ANDROID_TOOLCHAIN_DIR) $(ENV_SCRIPT) nim libWakuAndroid $(NIM_PARAMS) waku.nims
+ifneq ($(findstring /nix/store,$(LIBRLN_FILE)),)
+mkdir -p $(CURDIR)/build/android/$(ABIDIR)/
+CPU=$(CPU) ABIDIR=$(ABIDIR) ANDROID_ARCH=$(ANDROID_ARCH) ANDROID_COMPILER=$(ANDROID_COMPILER) ANDROID_TOOLCHAIN_DIR=$(ANDROID_TOOLCHAIN_DIR) nimble libWakuAndroid
else
./scripts/build_rln_android.sh $(CURDIR)/build $(LIBRLN_BUILDDIR) $(LIBRLN_VERSION) $(CROSS_TARGET) $(ABIDIR)
endif
$(MAKE) rebuild-nat-libs-nimbledeps CC=$(ANDROID_TOOLCHAIN_DIR)/bin/$(ANDROID_COMPILER)
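The `$(findstring /nix/store,$(LIBRLN_FILE))` guard above skips rebuilding librln for Android when the library already comes from a Nix store path. The same substring dispatch in shell, for reference (the paths are illustrative, and the `librln_origin` helper name is invented for this sketch):

```shell
# Return "skip" for a Nix-provided librln, "build" otherwise.
librln_origin() {
  case "$1" in
    /nix/store/*) echo skip ;;   # prebuilt by the Nix derivation
    *)            echo build ;;  # fall through to build_rln_android.sh
  esac
}

librln_origin /nix/store/abc123-zerokit/lib/librln.so  # illustrative store path
librln_origin ./build/librln_v0.9.0.a
```

Make's `findstring` matches anywhere in the string, so the shell analog uses a leading-wildcard-free `/nix/store/*` pattern only because store paths are always absolute; the effect is the same here.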
libwaku-android-arm64: ANDROID_ARCH=aarch64-linux-android
libwaku-android-arm64: CPU=arm64
@@ -486,47 +531,52 @@ libwaku-android-arm: ANDROID_ARCH=armv7a-linux-androideabi
libwaku-android-arm: CPU=arm
libwaku-android-arm: ABIDIR=armeabi-v7a
libwaku-android-arm: | libwaku-android-precheck build deps
# cross-rs target architecture name does not match the one used in android
$(MAKE) build-libwaku-for-android-arch ANDROID_ARCH=$(ANDROID_ARCH) CROSS_TARGET=armv7-linux-androideabi CPU=$(CPU) ABIDIR=$(ABIDIR) ANDROID_COMPILER=$(ANDROID_ARCH)$(ANDROID_TARGET)-clang
libwaku-android:
$(MAKE) libwaku-android-amd64
$(MAKE) libwaku-android-arm64
$(MAKE) libwaku-android-x86
# This target is disabled because on recent versions of cross-rs complain with the following error
# relocation R_ARM_THM_ALU_PREL_11_0 cannot be used against symbol 'stack_init_trampoline_return'; recompile with -fPIC
# It's likely this architecture is not used so we might just not support it.
# $(MAKE) libwaku-android-arm
-cwaku_example: | build libwaku
-echo -e $(BUILD_MSG) "build/$@" && \
-cc -o "build/$@" \
-./examples/cbindings/waku_example.c \
-./examples/cbindings/base64.c \
--lwaku -Lbuild/ \
--pthread -ldl -lm \
--lminiupnpc -Lvendor/nim-nat-traversal/vendor/miniupnp/miniupnpc/build/ \
--lnatpmp -Lvendor/nim-nat-traversal/vendor/libnatpmp-upstream/ \
-vendor/nim-libbacktrace/libbacktrace_wrapper.o \
-vendor/nim-libbacktrace/install/usr/lib/libbacktrace.a
-cppwaku_example: | build libwaku
-echo -e $(BUILD_MSG) "build/$@" && \
-g++ -o "build/$@" \
-./examples/cpp/waku.cpp \
-./examples/cpp/base64.cpp \
--lwaku -Lbuild/ \
--pthread -ldl -lm \
--lminiupnpc -Lvendor/nim-nat-traversal/vendor/miniupnp/miniupnpc/build/ \
--lnatpmp -Lvendor/nim-nat-traversal/vendor/libnatpmp-upstream/ \
-vendor/nim-libbacktrace/libbacktrace_wrapper.o \
-vendor/nim-libbacktrace/install/usr/lib/libbacktrace.a
-nodejswaku: | build deps
-echo -e $(BUILD_MSG) "build/$@" && \
-node-gyp build --directory=examples/nodejs/
-endif # "variables.mk" was not included
+#################
+## iOS Bindings #
+#################
+.PHONY: libwaku-ios-precheck \
+libwaku-ios-device \
+libwaku-ios-simulator \
+libwaku-ios
+IOS_DEPLOYMENT_TARGET ?= 18.0
+define get_ios_sdk_path
+$(shell xcrun --sdk $(1) --show-sdk-path 2>/dev/null)
+endef
+libwaku-ios-precheck:
ifeq ($(detected_OS),Darwin)
@command -v xcrun >/dev/null 2>&1 || { echo "Error: Xcode command line tools not installed"; exit 1; }
else
$(error iOS builds are only supported on macOS)
endif
build-libwaku-for-ios-arch:
IOS_SDK=$(IOS_SDK) IOS_ARCH=$(IOS_ARCH) IOS_SDK_PATH=$(IOS_SDK_PATH) nimble libWakuIOS
libwaku-ios-device: IOS_ARCH=arm64
libwaku-ios-device: IOS_SDK=iphoneos
libwaku-ios-device: IOS_SDK_PATH=$(call get_ios_sdk_path,iphoneos)
libwaku-ios-device: | libwaku-ios-precheck build deps
$(MAKE) build-libwaku-for-ios-arch IOS_ARCH=$(IOS_ARCH) IOS_SDK=$(IOS_SDK) IOS_SDK_PATH=$(IOS_SDK_PATH)
libwaku-ios-simulator: IOS_ARCH=arm64
libwaku-ios-simulator: IOS_SDK=iphonesimulator
libwaku-ios-simulator: IOS_SDK_PATH=$(call get_ios_sdk_path,iphonesimulator)
libwaku-ios-simulator: | libwaku-ios-precheck build deps
$(MAKE) build-libwaku-for-ios-arch IOS_ARCH=$(IOS_ARCH) IOS_SDK=$(IOS_SDK) IOS_SDK_PATH=$(IOS_SDK_PATH)
libwaku-ios:
$(MAKE) libwaku-ios-device
$(MAKE) libwaku-ios-simulator
###################
# Release Targets #
@@ -541,6 +591,3 @@ release-notes:
docker.io/wakuorg/sv4git:latest \
release-notes |\
sed -E 's@#([0-9]+)@[#\1](https://github.com/waku-org/nwaku/issues/\1)@g'
# I could not get the tool to replace issue ids with links, so using sed for now,
# asked here: https://github.com/bvieira/sv4git/discussions/101
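The sed expression in the release-notes pipeline rewrites bare `#123` issue ids into Markdown links. A standalone run of the same expression against an invented input line:

```shell
# Rewrite every bare issue id (#123) into a markdown link, exactly as the
# release-notes target does. The input sentence is made up for the demo.
out=$(echo "Fix relay panic #3829 and docs #101" \
  | sed -E 's@#([0-9]+)@[#\1](https://github.com/waku-org/nwaku/issues/\1)@g')
echo "$out"
```

Using `@` as the `s` delimiter avoids escaping the slashes in the URL, which is why the expression stays readable despite the embedded path.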

Nat.mk (new file, 61 lines)

@@ -0,0 +1,61 @@
# Copyright (c) 2022 Status Research & Development GmbH. Licensed under
# either of:
# - Apache License, version 2.0
# - MIT license
# at your option. This file may not be copied, modified, or distributed except
# according to those terms.
###########################
## nat-libs (nimbledeps) ##
###########################
# Builds miniupnpc and libnatpmp from the package installed by nimble under
# nimbledeps/pkgs2/. Used by `make update` / $(NIMBLEDEPS_STAMP).
#
# NAT_TRAVERSAL_NIMBLEDEPS_DIR is evaluated at parse time, so targets that
# depend on it must be invoked via a recursive $(MAKE) call so the sub-make
# re-evaluates the variable after nimble setup has populated nimbledeps/.
#
# `ls -dt` (sort by modification time, newest first) is used to pick the
# latest installed version and is portable across Linux, macOS, and
# Windows (MSYS/MinGW).
NAT_TRAVERSAL_NIMBLEDEPS_DIR := $(shell ls -dt $(CURDIR)/nimbledeps/pkgs2/nat_traversal-* 2>/dev/null | head -1)
NAT_UNAME_M := $(shell uname -m)
ifeq ($(NAT_UNAME_M),x86_64)
PORTABLE_NAT_MARCH := -mssse3
else
PORTABLE_NAT_MARCH :=
endif
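As the comment above notes, `ls -dt ... | head -1` selects the most recently modified package directory. A self-contained sketch of that selection (the `nat_traversal-*` version names are invented for the demo):

```shell
# Simulate two installed package versions with different mtimes.
base=$(mktemp -d)
mkdir -p "$base/nat_traversal-0.1.0"
sleep 1
mkdir -p "$base/nat_traversal-0.2.0"

# -d: list the directories themselves (not their contents),
# -t: sort by modification time, newest first.
newest=$(ls -dt "$base"/nat_traversal-* 2>/dev/null | head -1)
echo "$newest"
```

Sorting by mtime rather than parsing version strings sidesteps semver comparison entirely; it relies on the assumption that the latest nimble install is also the one most recently written to disk.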
.PHONY: clean-cross-nimbledeps rebuild-nat-libs-nimbledeps
clean-cross-nimbledeps:
ifeq ($(NAT_TRAVERSAL_NIMBLEDEPS_DIR),)
$(error No nat_traversal package found under nimbledeps/pkgs2/ — run 'make update' first)
endif
+ [ -e "$(NAT_TRAVERSAL_NIMBLEDEPS_DIR)/vendor/miniupnp/miniupnpc" ] && \
"$(MAKE)" -C "$(NAT_TRAVERSAL_NIMBLEDEPS_DIR)/vendor/miniupnp/miniupnpc" CC=$(CC) clean $(HANDLE_OUTPUT) || true
+ [ -e "$(NAT_TRAVERSAL_NIMBLEDEPS_DIR)/vendor/libnatpmp-upstream" ] && \
"$(MAKE)" -C "$(NAT_TRAVERSAL_NIMBLEDEPS_DIR)/vendor/libnatpmp-upstream" CC=$(CC) clean $(HANDLE_OUTPUT) || true
rebuild-nat-libs-nimbledeps: | clean-cross-nimbledeps
ifeq ($(NAT_TRAVERSAL_NIMBLEDEPS_DIR),)
$(error No nat_traversal package found under nimbledeps/pkgs2/ — run 'make update' first)
endif
@echo "Rebuilding nat-libs from $(NAT_TRAVERSAL_NIMBLEDEPS_DIR)"
ifeq ($(OS), Windows_NT)
+ [ -e "$(NAT_TRAVERSAL_NIMBLEDEPS_DIR)/vendor/miniupnp/miniupnpc/libminiupnpc.a" ] || \
PATH=".;$${PATH}" "$(MAKE)" -C "$(NAT_TRAVERSAL_NIMBLEDEPS_DIR)/vendor/miniupnp/miniupnpc" \
-f Makefile.mingw CC=$(CC) CFLAGS="-Os -fPIC" libminiupnpc.a $(HANDLE_OUTPUT)
+ "$(MAKE)" -C "$(NAT_TRAVERSAL_NIMBLEDEPS_DIR)/vendor/libnatpmp-upstream" \
OS=mingw CC=$(CC) \
CFLAGS="-Wall -Wno-cpp -Os -fPIC -DWIN32 -DNATPMP_STATICLIB -DENABLE_STRNATPMPERR -DNATPMP_MAX_RETRIES=4 $(CFLAGS)" \
libnatpmp.a $(HANDLE_OUTPUT)
else
+ "$(MAKE)" -C "$(NAT_TRAVERSAL_NIMBLEDEPS_DIR)/vendor/miniupnp/miniupnpc" \
CC=$(CC) CFLAGS="-Os -fPIC $(PORTABLE_NAT_MARCH)" build/libminiupnpc.a $(HANDLE_OUTPUT)
+ "$(MAKE)" CFLAGS="-Wall -Wno-cpp -Os -fPIC $(PORTABLE_NAT_MARCH) -DENABLE_STRNATPMPERR -DNATPMP_MAX_RETRIES=4 $(CFLAGS)" \
-C "$(NAT_TRAVERSAL_NIMBLEDEPS_DIR)/vendor/libnatpmp-upstream" \
CC=$(CC) libnatpmp.a $(HANDLE_OUTPUT)
endif


@@ -1,19 +1,21 @@
-# Nwaku
+# Logos Messaging Nim
## Introduction
-The nwaku repository implements Waku, and provides tools related to it.
+This repository implements a set of libp2p protocols aimed to bring
+private communications.
-- A Nim implementation of the [Waku (v2) protocol](https://specs.vac.dev/specs/waku/v2/waku-v2.html).
+- Nim implementation of [these specs](https://github.com/vacp2p/rfc-index/tree/main/waku).
-- CLI application `wakunode2` that allows you to run a Waku node.
+- C library that exposes the implemented protocols.
-- Examples of Waku usage.
+- CLI application that allows you to run an lmn node.
+- Examples.
- Various tests of above.
For more details see the [source code](waku/README.md)
## How to Build & Run ( Linux, MacOS & WSL )
-These instructions are generic. For more detailed instructions, see the Waku source code above.
+These instructions are generic. For more detailed instructions, see the source code above.
### Prerequisites


@@ -15,7 +15,7 @@ proc benchmark(
manager: OnChainGroupManager, registerCount: int, messageLimit: int
): Future[string] {.async, gcsafe.} =
# Register a new member so that we can later generate proofs
-let idCredentials = generateCredentials(manager.rlnInstance, registerCount)
+let idCredentials = generateCredentials(registerCount)
var start_time = getTime()
for i in 0 .. registerCount - 1:


@@ -36,7 +36,6 @@ import
waku_lightpush_legacy/rpc,
waku_enr,
discovery/waku_dnsdisc,
-waku_store_legacy,
waku_node,
node/waku_metrics,
node/peer_manager,
@@ -50,8 +49,7 @@ import
import libp2p/protocols/pubsub/rpc/messages, libp2p/protocols/pubsub/pubsub
import ../../waku/waku_rln_relay
-const Help =
-"""
+const Help = """
Commands: /[?|help|connect|nick|exit]
help: Prints this help
connect: dials a remote peer
@@ -317,27 +315,19 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
if conf.logLevel != LogLevel.NONE:
setLogLevel(conf.logLevel)
-let natRes = setupNat(
+let (extIp, extTcpPort, extUdpPort) = setupNat(
conf.nat,
clientId,
Port(uint16(conf.tcpPort) + conf.portsShift),
Port(uint16(conf.udpPort) + conf.portsShift),
-)
+).valueOr:
+raise newException(ValueError, "setupNat error " & error)
-if natRes.isErr():
-raise newException(ValueError, "setupNat error " & natRes.error)
-let (extIp, extTcpPort, extUdpPort) = natRes.get()
var enrBuilder = EnrBuilder.init(nodeKey)
-let recordRes = enrBuilder.build()
-let record =
-if recordRes.isErr():
-error "failed to create enr record", error = recordRes.error
-quit(QuitFailure)
-else:
-recordRes.get()
+let record = enrBuilder.build().valueOr:
+error "failed to create enr record", error = error
+quit(QuitFailure)
let node = block:
var builder = WakuNodeBuilder.init()
@@ -345,16 +335,16 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
builder.withRecord(record)
builder
.withNetworkConfigurationDetails(
conf.listenAddress,
Port(uint16(conf.tcpPort) + conf.portsShift),
extIp,
extTcpPort,
wsBindPort = Port(uint16(conf.websocketPort) + conf.portsShift),
wsEnabled = conf.websocketSupport,
wssEnabled = conf.websocketSecureSupport,
)
.tryGet()
builder.build().tryGet()
await node.start()
@@ -488,7 +478,9 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
if conf.lightpushnode != "":
let peerInfo = parsePeerInfo(conf.lightpushnode)
if peerInfo.isOk():
-await mountLegacyLightPush(node)
+(await node.mountLegacyLightPush()).isOkOr:
+error "failed to mount legacy lightpush", error = error
+quit(QuitFailure)
node.mountLegacyLightPushClient()
node.peerManager.addServicePeer(peerInfo.value, WakuLightpushCodec)
else:

View File

@@ -126,23 +126,22 @@ proc toMatterbridge(
assert chat2Msg.isOk
-let postRes = cmb.mbClient.postMessage(
-text = string.fromBytes(chat2Msg[].payload), username = chat2Msg[].nick
-)
-if postRes.isErr() or (postRes[] == false):
+if not cmb.mbClient
+.postMessage(
+text = string.fromBytes(chat2Msg[].payload), username = chat2Msg[].nick
+)
+.containsValue(true):
chat2_mb_dropped.inc(labelValues = ["duplicate"])
error "Matterbridge host unreachable. Dropping message."
proc pollMatterbridge(cmb: Chat2MatterBridge, handler: MbMessageHandler) {.async.} =
while cmb.running:
-if (let getRes = cmb.mbClient.getMessages(); getRes.isOk()):
-for jsonNode in getRes[]:
-await handler(jsonNode)
-else:
+let msg = cmb.mbClient.getMessages().valueOr:
error "Matterbridge host unreachable. Sleeping before retrying."
await sleepAsync(chronos.seconds(10))
+continue
+for jsonNode in msg:
+await handler(jsonNode)
await sleepAsync(cmb.pollPeriod)
############## ##############
@@ -178,10 +177,10 @@ proc new*(
   builder.withNodeKey(nodev2Key)
   builder
   .withNetworkConfigurationDetails(
     nodev2BindIp, nodev2BindPort, nodev2ExtIp, nodev2ExtPort
   )
   .tryGet()
   builder.build().tryGet()

   return Chat2MatterBridge(
@@ -243,7 +242,7 @@ proc stop*(cmb: Chat2MatterBridge) {.async: (raises: [Exception]).} =
 {.pop.}
 # @TODO confutils.nim(775, 17) Error: can raise an unlisted exception: ref IOError
 when isMainModule:
-  import waku/common/utils/nat, waku/waku_api/message_cache
+  import waku/common/utils/nat, waku/rest_api/message_cache

   let
     rng = newRng()
@@ -252,25 +251,21 @@ when isMainModule:
   if conf.logLevel != LogLevel.NONE:
     setLogLevel(conf.logLevel)

-  let natRes = setupNat(
+  let (nodev2ExtIp, nodev2ExtPort, _) = setupNat(
     conf.nat,
     clientId,
     Port(uint16(conf.libp2pTcpPort) + conf.portsShift),
     Port(uint16(conf.udpPort) + conf.portsShift),
-  )
-  if natRes.isErr():
-    error "Error in setupNat", error = natRes.error
+  ).valueOr:
+    raise newException(ValueError, "setupNat error " & error)

-  # Load address configuration
-  let
-    (nodev2ExtIp, nodev2ExtPort, _) = natRes.get()
-    ## The following heuristic assumes that, in absence of manual
-    ## config, the external port is the same as the bind port.
-    extPort =
-      if nodev2ExtIp.isSome() and nodev2ExtPort.isNone():
-        some(Port(uint16(conf.libp2pTcpPort) + conf.portsShift))
-      else:
-        nodev2ExtPort
+  ## The following heuristic assumes that, in absence of manual
+  ## config, the external port is the same as the bind port.
+  let extPort =
+    if nodev2ExtIp.isSome() and nodev2ExtPort.isNone():
+      some(Port(uint16(conf.libp2pTcpPort) + conf.portsShift))
+    else:
+      nodev2ExtPort

   let bridge = Chat2Matterbridge.new(
     mbHostUri = "http://" & $initTAddress(conf.mbHostAddress, Port(conf.mbHostPort)),
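A recurring change in these diffs is replacing manual `isErr()` branching with `valueOr` from the nim-results package, which unwraps the success value or runs an error block with the implicit `error` symbol bound. A minimal standalone sketch of the pattern (`parsePort` is a hypothetical helper, not part of this codebase):

```nim
import std/strutils
import results

# Hypothetical helper: parse a port number, failing on bad input.
proc parsePort(s: string): Result[int, string] =
  try:
    ok(parseInt(s))
  except ValueError:
    err("not a number: " & s)

# `valueOr` yields the ok value, or runs the block with `error` bound
# to the failure; the block must leave the scope (return/quit/continue).
let port = parsePort("8080").valueOr:
  echo "bad port: ", error
  quit(QuitFailure)
echo "listening on ", port
```

This is why many `let xRes = …; if xRes.isErr(): …; let x = xRes.get()` triples collapse into a single `let x = ….valueOr:` binding in the new code.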

View File

@@ -29,8 +29,9 @@ import
     peerid, # Implement how peers interact
     protobuf/minprotobuf, # message serialisation/deserialisation from and to protobufs
     nameresolving/dnsresolver,
+    protocols/mix/curve25519,
+    protocols/mix/mix_protocol,
   ] # define DNS resolution
-import mix/curve25519
 import
   waku/[
     waku_core,
@@ -38,6 +39,7 @@ import
     waku_lightpush/rpc,
     waku_enr,
     discovery/waku_dnsdisc,
+    discovery/waku_kademlia,
     waku_node,
     node/waku_metrics,
     node/peer_manager,
@@ -55,8 +57,7 @@ import ../../waku/waku_rln_relay
 logScope:
   topics = "chat2 mix"

-const Help =
-  """
+const Help = """
   Commands: /[?|help|connect|nick|exit]
   help: Prints this help
   connect: dials a remote peer
@@ -82,6 +83,8 @@ type
   PrivateKey* = crypto.PrivateKey
   Topic* = waku_core.PubsubTopic

+const MinMixNodePoolSize = 4
+
 #####################
 ## chat2 protobufs ##
 #####################
@@ -124,7 +127,7 @@ proc encode*(message: Chat2Message): ProtoBuffer =
   return serialised

-proc toString*(message: Chat2Message): string =
+proc `$`*(message: Chat2Message): string =
   # Get message date and timestamp in local time
   let time = message.timestamp.fromUnix().local().format("'<'MMM' 'dd,' 'HH:mm'>'")
@@ -175,18 +178,16 @@ proc startMetricsServer(
 ): Result[MetricsHttpServerRef, string] =
   info "Starting metrics HTTP server", serverIp = $serverIp, serverPort = $serverPort

-  let metricsServerRes = MetricsHttpServerRef.new($serverIp, serverPort)
-  if metricsServerRes.isErr():
-    return err("metrics HTTP server start failed: " & $metricsServerRes.error)
-
-  let server = metricsServerRes.value
+  let server = MetricsHttpServerRef.new($serverIp, serverPort).valueOr:
+    return err("metrics HTTP server start failed: " & $error)

   try:
     waitFor server.start()
   except CatchableError:
     return err("metrics HTTP server start failed: " & getCurrentExceptionMsg())

   info "Metrics HTTP server started", serverIp = $serverIp, serverPort = $serverPort
-  ok(metricsServerRes.value)
+  ok(server)

 proc publish(c: Chat, line: string) {.async.} =
   # First create a Chat2Message protobuf with this line of text
@@ -333,57 +334,57 @@ proc maintainSubscription(
   const maxFailedServiceNodeSwitches = 10
   var noFailedSubscribes = 0
   var noFailedServiceNodeSwitches = 0
+  # Use chronos.Duration explicitly to avoid mismatch with std/times.Duration
+  let RetryWait = chronos.seconds(2) # Quick retry interval
+  let SubscriptionMaintenance = chronos.seconds(30) # Subscription maintenance interval
   while true:
     info "maintaining subscription at", peer = constructMultiaddrStr(actualFilterPeer)
     # First use filter-ping to check if we have an active subscription
-    let pingRes = await wakuNode.wakuFilterClient.ping(actualFilterPeer)
-    if pingRes.isErr():
-      # No subscription found. Let's subscribe.
-      error "ping failed.", err = pingRes.error
-      trace "no subscription found. Sending subscribe request"
-      let subscribeRes = await wakuNode.filterSubscribe(
-        some(filterPubsubTopic), filterContentTopic, actualFilterPeer
-      )
-      if subscribeRes.isErr():
-        noFailedSubscribes += 1
-        error "Subscribe request failed.",
-          err = subscribeRes.error,
-          peer = actualFilterPeer,
-          failCount = noFailedSubscribes
-        # TODO: disconnet from failed actualFilterPeer
-        # asyncSpawn(wakuNode.peerManager.switch.disconnect(p))
-        # wakunode.peerManager.peerStore.delete(actualFilterPeer)
-        if noFailedSubscribes < maxFailedSubscribes:
-          await sleepAsync(2000) # Wait a bit before retrying
-          continue
-        elif not preventPeerSwitch:
-          let peerOpt = selectRandomServicePeer(
-            wakuNode.peerManager, some(actualFilterPeer), WakuFilterSubscribeCodec
-          )
-          peerOpt.isOkOr:
-            error "Failed to find new service peer. Exiting."
-            noFailedServiceNodeSwitches += 1
-            break
-          actualFilterPeer = peerOpt.get()
-          info "Found new peer for codec",
-            codec = filterPubsubTopic, peer = constructMultiaddrStr(actualFilterPeer)
-          noFailedSubscribes = 0
-          continue # try again with new peer without delay
-      else:
-        if noFailedSubscribes > 0:
-          noFailedSubscribes -= 1
-        notice "subscribe request successful."
-    else:
-      info "subscription is live."
-    await sleepAsync(30000) # Subscription maintenance interval
+    let pingErr = (await wakuNode.wakuFilterClient.ping(actualFilterPeer)).errorOr:
+      await sleepAsync(SubscriptionMaintenance)
+      info "subscription is live."
+      continue
+
+    # No subscription found. Let's subscribe.
+    error "ping failed.", error = pingErr
+    trace "no subscription found. Sending subscribe request"
+    let subscribeErr = (
+      await wakuNode.filterSubscribe(
+        some(filterPubsubTopic), filterContentTopic, actualFilterPeer
+      )
+    ).errorOr:
+      await sleepAsync(SubscriptionMaintenance)
+      if noFailedSubscribes > 0:
+        noFailedSubscribes -= 1
+      notice "subscribe request successful."
+      continue
+
+    noFailedSubscribes += 1
+    error "Subscribe request failed.",
+      error = subscribeErr, peer = actualFilterPeer, failCount = noFailedSubscribes
+    # TODO: disconnet from failed actualFilterPeer
+    # asyncSpawn(wakuNode.peerManager.switch.disconnect(p))
+    # wakunode.peerManager.peerStore.delete(actualFilterPeer)
+    if noFailedSubscribes < maxFailedSubscribes:
+      await sleepAsync(RetryWait) # Wait a bit before retrying
+    elif not preventPeerSwitch:
+      # try again with new peer without delay
+      let actualFilterPeer = selectRandomServicePeer(
+        wakuNode.peerManager, some(actualFilterPeer), WakuFilterSubscribeCodec
+      ).valueOr:
+        error "Failed to find new service peer. Exiting."
+        noFailedServiceNodeSwitches += 1
+        break
+      info "Found new peer for codec",
+        codec = filterPubsubTopic, peer = constructMultiaddrStr(actualFilterPeer)
+      noFailedSubscribes = 0
+    else:
+      await sleepAsync(SubscriptionMaintenance)

 {.pop.}
 # @TODO confutils.nim(775, 17) Error: can raise an unlisted exception: ref IOError
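The subscription-maintenance rewrite above leans on an `errorOr` helper, the dual of results' `valueOr`: when the `Result` is ok the attached block runs (and must leave the scope, e.g. via `continue`), and otherwise the expression yields the error value. A rough, untested sketch of how such a template can be written on top of nim-results semantics (`fetch` is a stand-in proc, not from this codebase):

```nim
import results

# Sketch of an `errorOr` template: on ok(), run the body (which is
# expected to exit the scope); on err(), evaluate to the error value.
template errorOr(res, body: untyped): untyped =
  let r = res
  if r.isOk():
    body
  r.error()

proc fetch(i: int): Result[int, string] =
  if i mod 2 == 0: ok(i) else: err("odd input")

for i in 0 .. 3:
  let e = fetch(i).errorOr:
    echo "ok: ", i       # success path handled inline
    continue
  echo "failed: ", e     # error path falls through with `e` bound
```

The effect in the diffs is that the happy path (sleep, log, `continue`) sits in the guard block, and the rest of the loop body reads as straight-line error handling instead of nested `if/else`.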
@@ -401,17 +402,13 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
   if conf.logLevel != LogLevel.NONE:
     setLogLevel(conf.logLevel)

-  let natRes = setupNat(
+  let (extIp, extTcpPort, extUdpPort) = setupNat(
     conf.nat,
     clientId,
     Port(uint16(conf.tcpPort) + conf.portsShift),
     Port(uint16(conf.udpPort) + conf.portsShift),
-  )
-
-  if natRes.isErr():
-    raise newException(ValueError, "setupNat error " & natRes.error)
-
-  let (extIp, extTcpPort, extUdpPort) = natRes.get()
+  ).valueOr:
+    raise newException(ValueError, "setupNat error " & error)

   var enrBuilder = EnrBuilder.init(nodeKey)
@@ -421,13 +418,9 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
     error "failed to add sharded topics to ENR", error = error
     quit(QuitFailure)

-  let recordRes = enrBuilder.build()
-  let record =
-    if recordRes.isErr():
-      error "failed to create enr record", error = recordRes.error
-      quit(QuitFailure)
-    else:
-      recordRes.get()
+  let record = enrBuilder.build().valueOr:
+    error "failed to create enr record", error = error
+    quit(QuitFailure)

   let node = block:
     var builder = WakuNodeBuilder.init()
@@ -435,16 +428,16 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
     builder.withRecord(record)
     builder
     .withNetworkConfigurationDetails(
       conf.listenAddress,
       Port(uint16(conf.tcpPort) + conf.portsShift),
       extIp,
       extTcpPort,
       wsBindPort = Port(uint16(conf.websocketPort) + conf.portsShift),
       wsEnabled = conf.websocketSupport,
       wssEnabled = conf.websocketSecureSupport,
     )
     .tryGet()
     builder.build().tryGet()

   node.mountAutoSharding(conf.clusterId, conf.numShardsInNetwork).isOkOr:
@@ -461,12 +454,48 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
   (await node.mountMix(conf.clusterId, mixPrivKey, conf.mixnodes)).isOkOr:
     error "failed to mount waku mix protocol: ", error = $error
     quit(QuitFailure)

+  # Setup extended kademlia discovery if bootstrap nodes are provided
+  if conf.kadBootstrapNodes.len > 0:
+    var kadBootstrapPeers: seq[(PeerId, seq[MultiAddress])]
+    for nodeStr in conf.kadBootstrapNodes:
+      let (peerId, ma) = parseFullAddress(nodeStr).valueOr:
+        error "Failed to parse kademlia bootstrap node", node = nodeStr, error = error
+        continue
+      kadBootstrapPeers.add((peerId, @[ma]))
+
+    if kadBootstrapPeers.len > 0:
+      node.wakuKademlia = WakuKademlia.new(
+        node.switch,
+        ExtendedKademliaDiscoveryParams(
+          bootstrapNodes: kadBootstrapPeers,
+          mixPubKey: some(mixPubKey),
+          advertiseMix: false,
+        ),
+        node.peerManager,
+        getMixNodePoolSize = proc(): int {.gcsafe, raises: [].} =
+          if node.wakuMix.isNil():
+            0
+          else:
+            node.getMixNodePoolSize(),
+        isNodeStarted = proc(): bool {.gcsafe, raises: [].} =
+          node.started,
+      ).valueOr:
+        error "failed to setup kademlia discovery", error = error
+        quit(QuitFailure)
+
+  #await node.mountRendezvousClient(conf.clusterId)
+
   await node.start()
   node.peerManager.start()

+  if not node.wakuKademlia.isNil():
+    (await node.wakuKademlia.start(minMixPeers = MinMixNodePoolSize)).isOkOr:
+      error "failed to start kademlia discovery", error = error
+      quit(QuitFailure)
+
   await node.mountLibp2pPing()
-  await node.mountPeerExchangeClient()
+  #await node.mountPeerExchangeClient()

   let pubsubTopic = conf.getPubsubTopic(node, conf.contentTopic)
   echo "pubsub topic is: " & pubsubTopic
   let nick = await readNick(transp)
@@ -598,22 +627,17 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
       error "Couldn't find any service peer"
       quit(QuitFailure)

-    #await mountLegacyLightPush(node)
     node.peerManager.addServicePeer(servicePeerInfo, WakuLightpushCodec)
     node.peerManager.addServicePeer(servicePeerInfo, WakuPeerExchangeCodec)
+    #node.peerManager.addServicePeer(servicePeerInfo, WakuRendezVousCodec)

     # Start maintaining subscription
     asyncSpawn maintainSubscription(
       node, pubsubTopic, conf.contentTopic, servicePeerInfo, false
     )

   echo "waiting for mix nodes to be discovered..."
-  while true:
-    if node.getMixNodePoolSize() >= 3:
-      break
-    discard await node.fetchPeerExchangePeers()
-    await sleepAsync(1000)
-
-  while node.getMixNodePoolSize() < 3:
+  while node.getMixNodePoolSize() < MinMixNodePoolSize:
     info "waiting for mix nodes to be discovered",
       currentpoolSize = node.getMixNodePoolSize()
     await sleepAsync(1000)

View File

@@ -113,17 +113,16 @@ type
   shards* {.
     desc:
       "Shards index to subscribe to [0..NUM_SHARDS_IN_NETWORK-1]. Argument may be repeated.",
-    defaultValue:
-      @[
-        uint16(0),
-        uint16(1),
-        uint16(2),
-        uint16(3),
-        uint16(4),
-        uint16(5),
-        uint16(6),
-        uint16(7),
-      ],
+    defaultValue: @[
+      uint16(0),
+      uint16(1),
+      uint16(2),
+      uint16(3),
+      uint16(4),
+      uint16(5),
+      uint16(6),
+      uint16(7),
+    ],
     name: "shard"
   .}: seq[uint16]

@@ -203,13 +202,13 @@ type
   fleet* {.
     desc:
       "Select the fleet to connect to. This sets the DNS discovery URL to the selected fleet.",
-    defaultValue: Fleet.test,
+    defaultValue: Fleet.none,
     name: "fleet"
   .}: Fleet

   contentTopic* {.
     desc: "Content topic for chat messages.",
-    defaultValue: "/toy-chat-mix/2/huilong/proto",
+    defaultValue: "/toy-chat/2/baixa-chiado/proto",
     name: "content-topic"
   .}: string

@@ -228,7 +227,14 @@ type
     desc: "WebSocket Secure Support.",
     defaultValue: false,
     name: "websocket-secure-support"
-  .}: bool ## rln-relay configuration
+  .}: bool
+
+  ## Kademlia Discovery config
+  kadBootstrapNodes* {.
+    desc:
+      "Peer multiaddr for kademlia discovery bootstrap node (must include /p2p/<peerID>). Argument may be repeated.",
+    name: "kad-bootstrap-node"
+  .}: seq[string]

 proc parseCmdArg*(T: type MixNodePubInfo, p: string): T =
   let elements = p.split(":")
View File

@@ -7,7 +7,7 @@ ARG NIM_COMMIT
 ARG LOG_LEVEL=TRACE

 # Get build tools and required header files
-RUN apk add --no-cache bash git build-base openssl-dev linux-headers curl jq
+RUN apk add --no-cache bash git build-base openssl-dev linux-headers curl jq libbsd-dev

 WORKDIR /app
 COPY . .
@@ -43,7 +43,8 @@ EXPOSE 30303 60000 8545
 RUN apk add --no-cache libgcc libpq-dev \
     wget \
     iproute2 \
-    python3
+    python3 \
+    libstdc++

 COPY --from=nim-build /app/build/liteprotocoltester /usr/bin/
 RUN chmod +x /usr/bin/liteprotocoltester

View File

@@ -14,7 +14,7 @@ import
   libp2p/wire

 import
-  ../../tools/confutils/cli_args,
+  tools/confutils/cli_args,
   waku/[
     node/peer_manager,
     waku_lightpush/common,
@@ -59,7 +59,4 @@ proc logSelfPeers*(pm: PeerManager) =
     {allPeers(pm)}
     *------------------------------------------------------------------------------------------*""".fmt()

-  if printable.isErr():
-    echo "Error while printing statistics: " & printable.error().msg
-  else:
-    echo printable.get()
+  echo printable.valueOr("Error while printing statistics: " & error.msg)

View File

@@ -11,7 +11,7 @@ import
   confutils

 import
-  ../../tools/confutils/cli_args,
+  tools/confutils/cli_args,
   waku/[
     common/enr,
     common/logging,
@@ -49,13 +49,10 @@ when isMainModule:
   const versionString = "version / git commit hash: " & waku_factory.git_version

-  let confRes = LiteProtocolTesterConf.load(version = versionString)
-  if confRes.isErr():
-    error "failure while loading the configuration", error = confRes.error
+  let conf = LiteProtocolTesterConf.load(version = versionString).valueOr:
+    error "failure while loading the configuration", error = error
     quit(QuitFailure)

-  var conf = confRes.get()
-
   ## Logging setup
   logging.setupLog(conf.logLevel, conf.logFormat)
@@ -133,7 +130,8 @@ when isMainModule:
   info "Setting up shutdown hooks"

   proc asyncStopper(waku: Waku) {.async: (raises: [Exception]).} =
-    await waku.stop()
+    (await waku.stop()).isOkOr:
+      error "Waku shutdown failed", error = error
     quit(QuitSuccess)

   # Handle Ctrl-C SIGINT
@@ -163,7 +161,8 @@ when isMainModule:
     # Not available in -d:release mode
     writeStackTrace()
-    waitFor waku.stop()
+    (waitFor waku.stop()).isOkOr:
+      error "Waku shutdown failed", error = error
     quit(QuitFailure)

   c_signal(ansi_c.SIGSEGV, handleSigsegv)
@@ -187,7 +186,7 @@ when isMainModule:
     error "Service node not found in time via PX"
     quit(QuitFailure)

-  if futForServiceNode.read().isErr():
+  futForServiceNode.read().isOkOr:
     error "Service node for test not found via PX"
     quit(QuitFailure)

View File

@@ -89,10 +89,7 @@ proc reportSentMessages() =
       |{numMessagesToSend+failedToSendCount:>11} |{messagesSent:>11} |{failedToSendCount:>11} |
       *----------------------------------------*""".fmt()

-  if report.isErr:
-    echo "Error while printing statistics"
-  else:
-    echo report.get()
+  echo report.valueOr("Error while printing statistics")

   echo "*--------------------------------------------------------------------------------------------------*"
   echo "| Failure cause | count |"
View File

@@ -54,64 +54,65 @@ proc maintainSubscription(
   var noFailedSubscribes = 0
   var noFailedServiceNodeSwitches = 0
   var isFirstPingOnNewPeer = true
+  const RetryWaitMs = 2.seconds # Quick retry interval
+  const SubscriptionMaintenanceMs = 30.seconds # Subscription maintenance interval
   while true:
     info "maintaining subscription at", peer = constructMultiaddrStr(actualFilterPeer)
     # First use filter-ping to check if we have an active subscription
-    let pingRes = await wakuNode.wakuFilterClient.ping(actualFilterPeer)
-    if pingRes.isErr():
-      if isFirstPingOnNewPeer == false:
-        # Very first ping expected to fail as we have not yet subscribed at all
-        lpt_receiver_lost_subscription_count.inc()
-      isFirstPingOnNewPeer = false
-      # No subscription found. Let's subscribe.
-      error "ping failed.", err = pingRes.error
-      trace "no subscription found. Sending subscribe request"
-      let subscribeRes = await wakuNode.filterSubscribe(
-        some(filterPubsubTopic), filterContentTopic, actualFilterPeer
-      )
-      if subscribeRes.isErr():
-        noFailedSubscribes += 1
-        lpt_service_peer_failure_count.inc(
-          labelValues = ["receiver", actualFilterPeer.getAgent()]
-        )
-        error "Subscribe request failed.",
-          err = subscribeRes.error,
-          peer = actualFilterPeer,
-          failCount = noFailedSubscribes
-        # TODO: disconnet from failed actualFilterPeer
-        # asyncSpawn(wakuNode.peerManager.switch.disconnect(p))
-        # wakunode.peerManager.peerStore.delete(actualFilterPeer)
-        if noFailedSubscribes < maxFailedSubscribes:
-          await sleepAsync(2.seconds) # Wait a bit before retrying
-          continue
-        elif not preventPeerSwitch:
-          actualFilterPeer = selectRandomServicePeer(
-            wakuNode.peerManager, some(actualFilterPeer), WakuFilterSubscribeCodec
-          ).valueOr:
-            error "Failed to find new service peer. Exiting."
-            noFailedServiceNodeSwitches += 1
-            break
-          info "Found new peer for codec",
-            codec = filterPubsubTopic, peer = constructMultiaddrStr(actualFilterPeer)
-          noFailedSubscribes = 0
-          lpt_change_service_peer_count.inc(labelValues = ["receiver"])
-          isFirstPingOnNewPeer = true
-          continue # try again with new peer without delay
-      else:
-        if noFailedSubscribes > 0:
-          noFailedSubscribes -= 1
-        notice "subscribe request successful."
-    else:
-      info "subscription is live."
-    await sleepAsync(30.seconds) # Subscription maintenance interval
+    let pingErr = (await wakuNode.wakuFilterClient.ping(actualFilterPeer)).errorOr:
+      await sleepAsync(SubscriptionMaintenanceMs)
+      info "subscription is live."
+      continue
+
+    if isFirstPingOnNewPeer == false:
+      # Very first ping expected to fail as we have not yet subscribed at all
+      lpt_receiver_lost_subscription_count.inc()
+    isFirstPingOnNewPeer = false
+    # No subscription found. Let's subscribe.
+    error "ping failed.", error = pingErr
+    trace "no subscription found. Sending subscribe request"
+    let subscribeErr = (
+      await wakuNode.filterSubscribe(
+        some(filterPubsubTopic), filterContentTopic, actualFilterPeer
+      )
+    ).errorOr:
+      await sleepAsync(SubscriptionMaintenanceMs)
+      if noFailedSubscribes > 0:
+        noFailedSubscribes -= 1
+      notice "subscribe request successful."
+      continue
+
+    noFailedSubscribes += 1
+    lpt_service_peer_failure_count.inc(
+      labelValues = ["receiver", actualFilterPeer.getAgent()]
+    )
+    error "Subscribe request failed.",
+      err = subscribeErr, peer = actualFilterPeer, failCount = noFailedSubscribes
+    # TODO: disconnet from failed actualFilterPeer
+    # asyncSpawn(wakuNode.peerManager.switch.disconnect(p))
+    # wakunode.peerManager.peerStore.delete(actualFilterPeer)
+    if noFailedSubscribes < maxFailedSubscribes:
+      await sleepAsync(RetryWaitMs) # Wait a bit before retrying
+    elif not preventPeerSwitch:
+      # try again with new peer without delay
+      actualFilterPeer = selectRandomServicePeer(
+        wakuNode.peerManager, some(actualFilterPeer), WakuFilterSubscribeCodec
+      ).valueOr:
+        error "Failed to find new service peer. Exiting."
+        noFailedServiceNodeSwitches += 1
+        break
+      info "Found new peer for codec",
+        codec = filterPubsubTopic, peer = constructMultiaddrStr(actualFilterPeer)
+      noFailedSubscribes = 0
+      lpt_change_service_peer_count.inc(labelValues = ["receiver"])
+      isFirstPingOnNewPeer = true
+    else:
+      await sleepAsync(SubscriptionMaintenanceMs)

 proc setupAndListen*(
   wakuNode: WakuNode, conf: LiteProtocolTesterConf, servicePeer: RemotePeerInfo

View File

@@ -11,7 +11,7 @@ import
   libp2p/wire

 import
-  ../wakunode2/cli_args,
+  tools/confutils/cli_args,
   waku/[
     common/enr,
     waku_node,
@@ -181,7 +181,7 @@ proc pxLookupServiceNode*(
     if not await futPeers.withTimeout(30.seconds):
       notice "Cannot get peers from PX", round = 5 - trialCount
     else:
-      if futPeers.value().isErr():
+      futPeers.value().isOkOr:
         info "PeerExchange reported error", error = futPeers.read().error
         return err()

View File

@@ -8,6 +8,8 @@ import
   results,
   libp2p/peerid

+from std/sugar import `=>`
+
 import ./tester_message, ./lpt_metrics

 type
@@ -114,12 +116,7 @@ proc addMessage*(
   if not self.contains(peerId):
     self[peerId] = Statistics.init()

-  let shortSenderId = block:
-    let senderPeer = PeerId.init(msg.sender)
-    if senderPeer.isErr():
-      msg.sender
-    else:
-      senderPeer.get().shortLog()
+  let shortSenderId = PeerId.init(msg.sender).map(p => p.shortLog()).valueOr(msg.sender)

   discard catch:
     self[peerId].addMessage(shortSenderId, msg, msgHash)
@@ -220,10 +217,7 @@ proc echoStat*(self: Statistics, peerId: string) =
       | {self.missingIndices()} |
       *------------------------------------------------------------------------------------------*""".fmt()

-  if printable.isErr():
-    echo "Error while printing statistics: " & printable.error().msg
-  else:
-    echo printable.get()
+  echo printable.valueOr("Error while printing statistics: " & error.msg)

 proc jsonStat*(self: Statistics): string =
   let minL, maxL, avgL = self.calcLatency()
@@ -243,20 +237,18 @@ proc jsonStat*(self: Statistics): string =
       }},
       "lostIndices": {self.missingIndices()}
     }}""".fmt()

-  if json.isErr:
-    return "{\"result:\": \"" & json.error.msg & "\"}"
-
-  return json.get()
+  return json.valueOr("{\"result:\": \"" & error.msg & "\"}")

 proc echoStats*(self: var PerPeerStatistics) =
   for peerId, stats in self.pairs:
     let peerLine = catch:
       "Receiver statistics from peer {peerId}".fmt()
-    if peerLine.isErr:
+    peerLine.isOkOr:
       echo "Error while printing statistics"
-    else:
-      echo peerLine.get()
+      continue
+    echo peerLine.get()
     stats.echoStat(peerId)

 proc jsonStats*(self: PerPeerStatistics): string =
   try:
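The `shortSenderId` one-liner above chains three Result helpers: `map` transforms the ok value, `=>` (from `std/sugar`) is anonymous-proc shorthand, and the one-argument form of `valueOr` supplies a fallback when the Result is an error. A small self-contained sketch of the same chain, using a hypothetical `parseNum` in place of `PeerId.init`:

```nim
from std/sugar import `=>`
import std/strutils
import results

# Hypothetical parser standing in for PeerId.init.
proc parseNum(s: string): Result[int, string] =
  try: ok(parseInt(s))
  except ValueError: err("not a number: " & s)

# On success, shorten the parsed value; on failure, fall back to the
# raw input string, mirroring the shortSenderId construction.
proc shortId(s: string): string =
  parseNum(s).map(n => ($n)[0 .. 2]).valueOr(s)

echo shortId("123456")  # shortened ok value
echo shortId("oops")    # fallback to the raw input
```

Compared to the old explicit `block:` with `isErr()` branching, the chained form keeps the success transform and the fallback on one line at the cost of slightly denser reading.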

View File

@@ -6,7 +6,7 @@ import
   json_serialization/std/options,
   json_serialization/lexer

-import ../../waku/waku_api/rest/serdes
+import waku/rest_api/endpoint/serdes

 type ProtocolTesterMessage* = object
   sender*: string

View File

@ -443,12 +443,8 @@ proc initAndStartApp(
error "failed to add sharded topics to ENR", error = error error "failed to add sharded topics to ENR", error = error
return err("failed to add sharded topics to ENR: " & $error) return err("failed to add sharded topics to ENR: " & $error)
let recordRes = builder.build() let record = builder.build().valueOr:
let record = return err("cannot build record: " & $error)
if recordRes.isErr():
return err("cannot build record: " & $recordRes.error)
else:
recordRes.get()
var nodeBuilder = WakuNodeBuilder.init() var nodeBuilder = WakuNodeBuilder.init()
@ -461,21 +457,15 @@ proc initAndStartApp(
relayServiceRatio = "13.33:86.67", relayServiceRatio = "13.33:86.67",
shardAware = true, shardAware = true,
) )
let res = nodeBuilder.withNetworkConfigurationDetails(bindIp, nodeTcpPort) nodeBuilder.withNetworkConfigurationDetails(bindIp, nodeTcpPort).isOkOr:
if res.isErr(): return err("node building error" & $error)
return err("node building error" & $res.error)
let nodeRes = nodeBuilder.build() let node = nodeBuilder.build().valueOr:
let node = return err("node building error" & $error)
if nodeRes.isErr():
return err("node building error" & $res.error)
else:
nodeRes.get()
var discv5BootstrapEnrsRes = await getBootstrapFromDiscDns(conf) var discv5BootstrapEnrs = (await getBootstrapFromDiscDns(conf)).valueOr:
if discv5BootstrapEnrsRes.isErr():
error("failed discovering peers from DNS") error("failed discovering peers from DNS")
var discv5BootstrapEnrs = discv5BootstrapEnrsRes.get() quit(QuitFailure)
# parse enrURIs from the configuration and add the resulting ENRs to the discv5BootstrapEnrs seq # parse enrURIs from the configuration and add the resulting ENRs to the discv5BootstrapEnrs seq
for enrUri in conf.bootstrapNodes: for enrUri in conf.bootstrapNodes:
@ -553,12 +543,10 @@ proc subscribeAndHandleMessages(
when isMainModule: when isMainModule:
# known issue: confutils.nim(775, 17) Error: can raise an unlisted exception: ref IOError # known issue: confutils.nim(775, 17) Error: can raise an unlisted exception: ref IOError
{.pop.} {.pop.}
let confRes = NetworkMonitorConf.loadConfig() var conf = NetworkMonitorConf.loadConfig().valueOr:
if confRes.isErr(): error "could not load cli variables", error = error
error "could not load cli variables", err = confRes.error quit(QuitFailure)
quit(1)
var conf = confRes.get()
info "cli flags", conf = conf info "cli flags", conf = conf
if conf.clusterId == 1: if conf.clusterId == 1:
@@ -586,37 +574,30 @@ when isMainModule:
   # start metrics server
   if conf.metricsServer:
-    let res =
-      startMetricsServer(conf.metricsServerAddress, Port(conf.metricsServerPort))
-    if res.isErr():
-      error "could not start metrics server", err = res.error
-      quit(1)
+    startMetricsServer(conf.metricsServerAddress, Port(conf.metricsServerPort)).isOkOr:
+      error "could not start metrics server", error = error
+      quit(QuitFailure)

   # start rest server for custom metrics
-  let res = startRestApiServer(conf, allPeersInfo, msgPerContentTopic)
-  if res.isErr():
-    error "could not start rest api server", err = res.error
-    quit(1)
+  startRestApiServer(conf, allPeersInfo, msgPerContentTopic).isOkOr:
+    error "could not start rest api server", error = error
+    quit(QuitFailure)

   # create a rest client
-  let clientRest =
-    RestClientRef.new(url = "http://ip-api.com", connectTimeout = ctime.seconds(2))
-  if clientRest.isErr():
-    error "could not start rest api client", err = res.error
-    quit(1)
-  let restClient = clientRest.get()
+  let restClient = RestClientRef.new(
+    url = "http://ip-api.com", connectTimeout = ctime.seconds(2)
+  ).valueOr:
+    error "could not start rest api client", error = error
+    quit(QuitFailure)

   # start waku node
-  let nodeRes = waitFor initAndStartApp(conf)
-  if nodeRes.isErr():
-    error "could not start node"
-    quit 1
-  let (node, discv5) = nodeRes.get()
+  let (node, discv5) = (waitFor initAndStartApp(conf)).valueOr:
+    error "could not start node", error = error
+    quit(QuitFailure)

   (waitFor node.mountRelay()).isOkOr:
-    error "failed to mount waku relay protocol: ", err = error
-    quit 1
+    error "failed to mount waku relay protocol: ", error = error
+    quit(QuitFailure)

   waitFor node.mountLibp2pPing()

@@ -640,12 +621,12 @@ when isMainModule:
     try:
       waitFor node.mountRlnRelay(rlnConf)
     except CatchableError:
-      error "failed to setup RLN", err = getCurrentExceptionMsg()
-      quit 1
+      error "failed to setup RLN", error = getCurrentExceptionMsg()
+      quit(QuitFailure)

   node.mountMetadata(conf.clusterId, conf.shards).isOkOr:
-    error "failed to mount waku metadata protocol: ", err = error
-    quit 1
+    error "failed to mount waku metadata protocol: ", error = error
+    quit(QuitFailure)

   for shard in conf.shards:
     # Subscribe the node to the shards, to count messages


@@ -6,12 +6,15 @@ import
   os
 import
   libp2p/protocols/ping,
+  libp2p/protocols/protocol,
   libp2p/crypto/[crypto, secp],
   libp2p/nameresolving/dnsresolver,
   libp2p/multicodec
 import
   ./certsgenerator,
-  waku/[waku_enr, node/peer_manager, waku_core, waku_node, factory/builder]
+  waku/[waku_enr, node/peer_manager, waku_core, waku_node, factory/builder],
+  waku/waku_metadata/protocol,
+  waku/common/callbacks
 # protocols and their tag
 const ProtocolsTable = {

@@ -45,7 +48,7 @@ type WakuCanaryConf* = object
   timeout* {.
     desc: "Timeout to consider that the connection failed",
-    defaultValue: chronos.seconds(10),
+    defaultValue: chronos.seconds(20),
     name: "timeout",
     abbr: "t"
   .}: chronos.Duration
@@ -143,27 +146,28 @@ proc areProtocolsSupported(
 proc pingNode(
   node: WakuNode, peerInfo: RemotePeerInfo
-): Future[void] {.async, gcsafe.} =
+): Future[bool] {.async, gcsafe.} =
   try:
     let conn = await node.switch.dial(peerInfo.peerId, peerInfo.addrs, PingCodec)
     let pingDelay = await node.libp2pPing.ping(conn)
     info "Peer response time (ms)", peerId = peerInfo.peerId, ping = pingDelay.millis
+    return true
   except CatchableError:
     var msg = getCurrentExceptionMsg()
     if msg == "Future operation cancelled!":
       msg = "timedout"
     error "Failed to ping the peer", peer = peerInfo, err = msg
+    return false

 proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
   let conf: WakuCanaryConf = WakuCanaryConf.load()

   # create dns resolver
   let
-    nameServers =
-      @[
-        initTAddress(parseIpAddress("1.1.1.1"), Port(53)),
-        initTAddress(parseIpAddress("1.0.0.1"), Port(53)),
-      ]
+    nameServers = @[
+      initTAddress(parseIpAddress("1.1.1.1"), Port(53)),
+      initTAddress(parseIpAddress("1.0.0.1"), Port(53)),
+    ]
     resolver: DnsResolver = DnsResolver.new(nameServers)
   if conf.logLevel != LogLevel.NONE:

@@ -181,13 +185,10 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
     protocols = conf.protocols,
     logLevel = conf.logLevel

-  let peerRes = parsePeerInfo(conf.address)
-  if peerRes.isErr():
-    error "Couldn't parse 'conf.address'", error = peerRes.error
+  let peer = parsePeerInfo(conf.address).valueOr:
+    error "Couldn't parse 'conf.address'", error = error
     quit(QuitFailure)
-  let peer = peerRes.value

   let
     nodeKey = crypto.PrivateKey.random(Secp256k1, rng[])[]
     bindIp = parseIpAddress("0.0.0.0")
@@ -225,13 +226,9 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
     error "could not initialize ENR with shards", error
     quit(QuitFailure)

-  let recordRes = enrBuilder.build()
-  let record =
-    if recordRes.isErr():
-      error "failed to create enr record", error = recordRes.error
-      quit(QuitFailure)
-    else:
-      recordRes.get()
+  let record = enrBuilder.build().valueOr:
+    error "failed to create enr record", error = error
+    quit(QuitFailure)

   if isWss and
       (conf.websocketSecureKeyPath.len == 0 or conf.websocketSecureCertPath.len == 0):
@@ -257,12 +254,26 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
     error "failed to mount libp2p ping protocol: " & getCurrentExceptionMsg()
     quit(QuitFailure)

-  node.mountMetadata(conf.clusterId, conf.shards).isOkOr:
-    error "failed to mount metadata protocol", error
+  # Mount metadata with a custom getter that returns CLI shards directly,
+  # since the canary doesn't mount relay (which is what the default getter reads from).
+  # Without this fix, the canary always sends remoteShards=[] in metadata requests.
+  let cliShards = conf.shards
+  let shardsGetter: GetShards = proc(): seq[uint16] {.closure, gcsafe, raises: [].} =
+    return cliShards
+  let metadata = WakuMetadata.new(conf.clusterId, shardsGetter)
+  node.wakuMetadata = metadata
+  node.peerManager.wakuMetadata = metadata
+  let mountRes = catch:
+    node.switch.mount(metadata, protocolMatcher(WakuMetadataCodec))
+  mountRes.isOkOr:
+    error "failed to mount metadata protocol", error = error.msg
     quit(QuitFailure)

   await node.start()

+  debug "Connecting to peer", peer = peer, timeout = conf.timeout
   var pingFut: Future[bool]
   if conf.ping:
     pingFut = pingNode(node, peer).withTimeout(conf.timeout)
@@ -272,22 +283,42 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
     error "Timedout after", timeout = conf.timeout
     quit(QuitFailure)

+  # Clean disconnect with defer so the remote node doesn't see
+  # "Stream Underlying Connection Closed!" when we exit
+  defer:
+    debug "Cleanly disconnecting from peer", peerId = peer.peerId
+    await node.peerManager.disconnectNode(peer.peerId)
+    await node.stop()
+
+  debug "Connected, checking connection status", peerId = peer.peerId
   let lp2pPeerStore = node.switch.peerStore
   let conStatus = node.peerManager.switch.peerStore[ConnectionBook][peer.peerId]
+  debug "Connection status", peerId = peer.peerId, conStatus = conStatus

+  var pingSuccess = true
   if conf.ping:
-    discard await pingFut
+    try:
+      pingSuccess = await pingFut
+    except CatchableError as exc:
+      pingSuccess = false
+      error "Ping operation failed or timed out", error = exc.msg
+
+  if not pingSuccess:
+    error "Ping to the node failed", peerId = peer.peerId, conStatus = $conStatus
+    quit(QuitFailure)

   if conStatus in [Connected, CanConnect]:
     let nodeProtocols = lp2pPeerStore[ProtoBook][peer.peerId]
+    debug "Peer protocols", peerId = peer.peerId, protocols = nodeProtocols
     if not areProtocolsSupported(conf.protocols, nodeProtocols):
       error "Not all protocols are supported",
         expected = conf.protocols, supported = nodeProtocols
-      quit(QuitFailure)
+      return 1
   elif conStatus == CannotConnect:
     error "Could not connect", peerId = peer.peerId
-    quit(QuitFailure)
+    return 1
   return 0

 when isMainModule:


@@ -5,7 +5,6 @@ import
   chronicles,
   chronos,
   metrics,
-  libbacktrace,
   system/ansi_c,
   libp2p/crypto/crypto
 import
@@ -14,7 +13,7 @@ import
     common/logging,
     factory/waku,
     node/health_monitor,
-    waku_api/rest/builder as rest_server_builder,
+    rest_api/endpoint/builder as rest_server_builder,
     waku_core/message/default_values,
   ]
@@ -62,7 +61,8 @@ when isMainModule:
   info "Setting up shutdown hooks"

   proc asyncStopper(waku: Waku) {.async: (raises: [Exception]).} =
-    await waku.stop()
+    (await waku.stop()).isOkOr:
+      error "Waku shutdown failed", error = error
     quit(QuitSuccess)

   # Handle Ctrl-C SIGINT
@@ -87,12 +87,13 @@ when isMainModule:
   when defined(posix):
     proc handleSigsegv(signal: cint) {.noconv.} =
       # Require --debugger:native
-      fatal "Shutting down after receiving SIGSEGV", stacktrace = getBacktrace()
+      fatal "Shutting down after receiving SIGSEGV"

       # Not available in -d:release mode
       writeStackTrace()

-      waitFor waku.stop()
+      (waitFor waku.stop()).isOkOr:
+        error "Waku shutdown failed", error = error
       quit(QuitFailure)

     c_signal(ansi_c.SIGSEGV, handleSigsegv)


@@ -2,7 +2,14 @@

 library 'status-jenkins-lib@v1.8.17'

 pipeline {
-  agent { label 'linux' }
+  agent {
+    docker {
+      label 'linuxcontainer'
+      image 'harbor.status.im/infra/ci-build-containers:linux-base-1.0.0'
+      args '--volume=/var/run/docker.sock:/var/run/docker.sock ' +
+           '--user jenkins'
+    }
+  }

   options {
     timestamps()


@@ -2,7 +2,14 @@

 library 'status-jenkins-lib@v1.8.17'

 pipeline {
-  agent { label 'linux' }
+  agent {
+    docker {
+      label 'linuxcontainer'
+      image 'harbor.status.im/infra/ci-build-containers:linux-base-1.0.0'
+      args '--volume=/var/run/docker.sock:/var/run/docker.sock ' +
+           '--user jenkins'
+    }
+  }

   options {
     timestamps()
@@ -78,7 +85,8 @@ pipeline {
         "--label=commit='${git.commit()}' " +
         "--label=version='${git.describe('--tags')}' " +
         "--build-arg=MAKE_TARGET='${params.MAKE_TARGET}' " +
-        "--build-arg=NIMFLAGS='${params.NIMFLAGS} -d:postgres -d:heaptracker ' " +
+        "--build-arg=NIMFLAGS='${params.NIMFLAGS} -d:heaptracker ' " +
+        "--build-arg=POSTGRES='1' " +
         "--build-arg=LOG_LEVEL='${params.LOWEST_LOG_LEVEL_ALLOWED}' " +
         "--build-arg=DEBUG='${params.DEBUG ? "1" : "0"} ' " +
         "--build-arg=NIM_COMMIT='NIM_COMMIT=heaptrack_support_v2.0.12' " +
@@ -91,7 +99,8 @@ pipeline {
         "--label=commit='${git.commit()}' " +
         "--label=version='${git.describe('--tags')}' " +
         "--build-arg=MAKE_TARGET='${params.MAKE_TARGET}' " +
-        "--build-arg=NIMFLAGS='${params.NIMFLAGS} -d:postgres ' " +
+        "--build-arg=NIMFLAGS='${params.NIMFLAGS}' " +
+        "--build-arg=POSTGRES='1' " +
        "--build-arg=LOG_LEVEL='${params.LOWEST_LOG_LEVEL_ALLOWED}' " +
         "--build-arg=DEBUG='${params.DEBUG ? "1" : "0"} ' " +
         "--target='prod' ."


@@ -9,12 +9,6 @@ if defined(windows):
     switch("passL", "rln.lib")
   switch("define", "postgres=false")

-  # Automatically add all vendor subdirectories
-  for dir in walkDir("./vendor"):
-    if dir.kind == pcDir:
-      switch("path", dir.path)
-      switch("path", dir.path / "src")
-
   # disable timestamps in Windows PE headers - https://wiki.debian.org/ReproducibleBuilds/TimestampsInPEBinaries
   switch("passL", "-Wl,--no-insert-timestamp")
   # increase stack size
@@ -26,10 +20,6 @@ if defined(windows):
   # set the IMAGE_FILE_LARGE_ADDRESS_AWARE flag so we can use PAE, if enabled, and access more than 2 GiB of RAM
   switch("passL", "-Wl,--large-address-aware")

-  # The dynamic Chronicles output currently prevents us from using colors on Windows
-  # because these require direct manipulations of the stdout File object.
-  switch("define", "chronicles_colors=off")
-
 # https://github.com/status-im/nimbus-eth2/blob/stable/docs/cpu_features.md#ssse3-supplemental-sse3
 # suggests that SHA256 hashing with SSSE3 is 20% faster than without SSSE3, so
 # given its near-ubiquity in the x86 installed base, it renders a distribution
@@ -52,9 +42,10 @@ if defined(disableMarchNative):
     switch("passL", "-march=haswell -mtune=generic")
 else:
   if defined(marchOptimized):
-    # https://github.com/status-im/nimbus-eth2/blob/stable/docs/cpu_features.md#bmi2--adx
-    switch("passC", "-march=broadwell -mtune=generic")
-    switch("passL", "-march=broadwell -mtune=generic")
+    # -march=broadwell: https://github.com/status-im/nimbus-eth2/blob/stable/docs/cpu_features.md#bmi2--adx
+    # Changed to x86-64-v2 for broader support
+    switch("passC", "-march=x86-64-v2 -mtune=generic")
+    switch("passL", "-march=x86-64-v2 -mtune=generic")
   else:
     switch("passC", "-mssse3")
     switch("passL", "-mssse3")
@@ -76,6 +67,7 @@ else:
     on
   --opt:
     speed
   --excessiveStackTrace:
     on

 # enable metric collection
@@ -85,16 +77,15 @@ else:
   --define:
     nimTypeNames

-switch("define", "withoutPCRE")
-
 # the default open files limit is too low on macOS (512), breaking the
 # "--debugger:native" build. It can be increased with `ulimit -n 1024`.
 if not defined(macosx) and not defined(android):
   # add debugging symbols and original files and line numbers
   --debugger:
     native

-if not (defined(windows) and defined(i386)) and not defined(disable_libbacktrace):
+when defined(enable_libbacktrace):
   # light-weight stack traces using libbacktrace and libunwind
+  # opt-in: pass -d:enable_libbacktrace (requires libbacktrace in project deps)
   --define:
     nimStackTraceOverride
   switch("import", "libbacktrace")
@@ -125,3 +116,8 @@ if defined(android):
   switch("passC", "--sysroot=" & sysRoot)
   switch("passL", "--sysroot=" & sysRoot)
   switch("cincludes", sysRoot & "/usr/include/")
+
+# begin Nimble config (version 2)
+when withDir(thisDir(), system.fileExists("nimble.paths")):
+  --noNimblePath
+  include "nimble.paths"
+# end Nimble config


@@ -38,6 +38,9 @@ A particular OpenAPI spec can be easily imported into [Postman](https://www.post
 curl http://localhost:8645/debug/v1/info -s | jq
 ```

+### Store API
+
+The `page_size` flag in the Store API has a default value of 20 and a max value of 100.
+
 ### Node configuration

 Find details [here](https://github.com/waku-org/nwaku/tree/master/docs/operators/how-to/configure-rest-api.md)


@ -1,119 +0,0 @@
# Release Process
How to do releases.
For more context, see https://trunkbaseddevelopment.com/branch-for-release/
## How to do releases
### Before release
Ensure all items in this list are ticked:
- [ ] All issues under the corresponding release [milestone](https://github.com/waku-org/nwaku/milestones) have been closed or, after consultation, deferred to the next release.
- [ ] All submodules are up to date.
> **IMPORTANT:** Updating submodules requires a PR (and very often several "fixes" to maintain compatibility with the changes in submodules). That PR process must be done and merged a couple of days before the release.
> In case the submodules update has a low effort and/or risk for the release, follow the ["Update submodules"](./git-submodules.md) instructions.
> If the effort or risk is too high, consider postponing the submodules upgrade for the subsequent release or delaying the current release until the submodules updates are included in the release candidate.
- [ ] The [js-waku CI tests](https://github.com/waku-org/js-waku/actions/workflows/ci.yml) pass against the release candidate (i.e. nwaku latest `master`).
> **NOTE:** This serves as a basic regression test against typical clients of nwaku.
> The specific job that needs to pass is named `node_with_nwaku_master`.
### Performing the release
1. Checkout a release branch from master
```
git checkout -b release/v0.1.0
```
1. Update `CHANGELOG.md` and ensure it is up to date. Use the helper Make target to get PR based release-notes/changelog update.
```
make release-notes
```
1. Create a release-candidate tag with the same name as the release plus an `-rc.N` suffix a few days before the official release and push it
```
git tag -as v0.1.0-rc.0 -m "Initial release."
git push origin v0.1.0-rc.0
```
This will trigger a [workflow](../../.github/workflows/pre-release.yml) which will build RC artifacts and create and publish a Github release
1. Open a PR from the release branch for others to review the included changes and the release-notes
1. In case additional changes are needed, create a new RC tag
Make sure the new tag is associated with the CHANGELOG update.
```
# Make changes, rebase and create new tag
# Squash to one commit and make a nice commit message
git rebase -i origin/master
git tag -as v0.1.0-rc.1 -m "Initial release."
git push origin v0.1.0-rc.1
```
1. Validate the release. For the release validation process, please refer to the following [guide](https://www.notion.so/Release-Process-61234f335b904cd0943a5033ed8f42b4#47af557e7f9744c68fdbe5240bf93ca9)
1. Once the release-candidate has been validated, create a final release tag and push it.
We also need to merge release branch back to master as a final step.
```
git checkout release/v0.1.0
git tag -as v0.1.0 -m "Initial release."
git push origin v0.1.0
git switch master
git pull
git merge release/v0.1.0
```
1. Create a [Github release](https://github.com/waku-org/nwaku/releases) from the release tag.
* Add binaries produced by the ["Upload Release Asset"](https://github.com/waku-org/nwaku/actions/workflows/release-assets.yml) workflow. Where possible, test the binaries before uploading to the release.
### After the release
1. Announce the release on Twitter, Discord and other channels.
2. Deploy the release image to [Dockerhub](https://hub.docker.com/r/wakuorg/nwaku) by triggering [the manual Jenkins deployment job](https://ci.infra.status.im/job/nim-waku/job/docker-manual/).
> Ensure the following build parameters are set:
> - `MAKE_TARGET`: `wakunode2`
> - `IMAGE_TAG`: the release tag (e.g. `v0.16.0`)
> - `IMAGE_NAME`: `wakuorg/nwaku`
> - `NIMFLAGS`: `--colors:off -d:disableMarchNative -d:chronicles_colors:none -d:postgres`
> - `GIT_REF` the release tag (e.g. `v0.16.0`)
3. Update the default nwaku image in [nwaku-compose](https://github.com/waku-org/nwaku-compose/blob/master/docker-compose.yml)
4. Deploy the release to appropriate fleets:
- Inform clients
> **NOTE:** known clients are currently using some version of js-waku, go-waku, nwaku or waku-rs.
> Clients are reachable via the corresponding channels on the Vac Discord server.
> It should be enough to inform clients on the `#nwaku` and `#announce` channels on Discord.
> Informal conversations with specific repo maintainers are often part of this process.
- Check if nwaku configuration parameters changed. If so [update fleet configuration](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64) in [infra-nim-waku](https://github.com/status-im/infra-nim-waku)
- Deploy release to the `waku.sandbox` fleet from [Jenkins](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-sandbox/).
- Ensure that nodes successfully start up and monitor health using [Grafana](https://grafana.infra.status.im/d/qrp_ZCTGz/nim-waku-v2?orgId=1) and [Kibana](https://kibana.infra.status.im/goto/a7728e70-eb26-11ec-81d1-210eb3022c76).
- If necessary, revert by deploying the previous release. Download logs and open a bug report issue.
5. Submit a PR to merge the release branch back to `master`. Make sure you use the option `Merge pull request (Create a merge commit)` to perform such merge.
### Performing a patch release
1. Cherry-pick the relevant commits from master to the release branch
```
git cherry-pick <commit-hash>
```
2. Create a release-candidate tag with the same name as the release plus an `-rc.N` suffix
3. Update `CHANGELOG.md`. From the release branch, use the helper Make target after having cherry-picked the commits.
```
make release-notes
```
Create a new branch and raise a PR with the changelog updates to master.
4. Once the release-candidate has been validated and changelog PR got merged, cherry-pick the changelog update from master to the release branch. Create a final release tag and push it.
5. Create a [Github release](https://github.com/waku-org/nwaku/releases) from the release tag and follow the same post-release process as usual.


@@ -1,4 +1,3 @@
 # Configure a REST API node
-
 A subset of the node configuration can be used to modify the behaviour of the HTTP REST API.
@@ -21,3 +20,5 @@ Example:
 ```shell
 wakunode2 --rest=true
 ```
+
+The `page_size` flag in the Store API has a default value of 20 and a max value of 100.

env.sh

@ -1,8 +0,0 @@
#!/bin/bash
# We use ${BASH_SOURCE[0]} instead of $0 to allow sourcing this file
# and we fall back to a Zsh-specific special var to also support Zsh.
REL_PATH="$(dirname ${BASH_SOURCE[0]:-${(%):-%x}})"
ABS_PATH="$(cd ${REL_PATH}; pwd)"
source ${ABS_PATH}/vendor/nimbus-build-system/scripts/env.sh


@ -0,0 +1,94 @@
import std/options
import chronos, results, confutils, confutils/defs
import waku
type CliArgs = object
ethRpcEndpoint* {.
defaultValue: "", desc: "ETH RPC Endpoint, if passed, RLN is enabled"
.}: string
proc periodicSender(w: Waku): Future[void] {.async.} =
let sentListener = MessageSentEvent.listen(
proc(event: MessageSentEvent) {.async: (raises: []).} =
echo "Message sent with request ID: ",
event.requestId, " hash: ", event.messageHash
).valueOr:
echo "Failed to listen to message sent event: ", error
return
let errorListener = MessageErrorEvent.listen(
proc(event: MessageErrorEvent) {.async: (raises: []).} =
echo "Message failed to send with request ID: ",
event.requestId, " error: ", event.error
).valueOr:
echo "Failed to listen to message error event: ", error
return
let propagatedListener = MessagePropagatedEvent.listen(
proc(event: MessagePropagatedEvent) {.async: (raises: []).} =
echo "Message propagated with request ID: ",
event.requestId, " hash: ", event.messageHash
).valueOr:
echo "Failed to listen to message propagated event: ", error
return
defer:
MessageSentEvent.dropListener(sentListener)
MessageErrorEvent.dropListener(errorListener)
MessagePropagatedEvent.dropListener(propagatedListener)
## Periodically sends a Waku message every 30 seconds
var counter = 0
while true:
let envelope = MessageEnvelope.init(
contentTopic = "example/content/topic",
payload = "Hello Waku! Message number: " & $counter,
)
let sendRequestId = (await w.send(envelope)).valueOr:
echo "Failed to send message: ", error
quit(QuitFailure)
echo "Sending message with request ID: ", sendRequestId, " counter: ", counter
counter += 1
await sleepAsync(30.seconds)
when isMainModule:
let args = CliArgs.load()
echo "Starting Waku node..."
# Use WakuNodeConf (the CLI configuration type) for node setup
var conf = defaultWakuNodeConf().valueOr:
echo "Failed to create default config: ", error
quit(QuitFailure)
if args.ethRpcEndpoint == "":
# Create a basic configuration for the Waku node
# No RLN as we don't have an ETH RPC Endpoint
conf.mode = Core
conf.preset = "logos.dev"
else:
# Connect to TWN, use ETH RPC Endpoint for RLN
conf.mode = Core
conf.preset = "twn"
conf.ethClientUrls = @[EthRpcUrl(args.ethRpcEndpoint)]
# Create the node using the library API's createNode function
let node = (waitFor createNode(conf)).valueOr:
echo "Failed to create node: ", error
quit(QuitFailure)
echo("Waku node created successfully!")
# Start the node
(waitFor startWaku(addr node)).isOkOr:
echo "Failed to start node: ", error
quit(QuitFailure)
echo "Node started successfully!"
asyncSpawn periodicSender(node)
runForever()


@@ -19,283 +19,309 @@ pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
 pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
 int callback_executed = 0;

-void waitForCallback() {
-  pthread_mutex_lock(&mutex);
-  while (!callback_executed) {
-    pthread_cond_wait(&cond, &mutex);
-  }
-  callback_executed = 0;
-  pthread_mutex_unlock(&mutex);
-}
+void waitForCallback()
+{
+  pthread_mutex_lock(&mutex);
+  while (!callback_executed)
+  {
+    pthread_cond_wait(&cond, &mutex);
+  }
+  callback_executed = 0;
+  pthread_mutex_unlock(&mutex);
+}

-#define WAKU_CALL(call) \
-  do { \
-    int ret = call; \
-    if (ret != 0) { \
-      printf("Failed the call to: %s. Returned code: %d\n", #call, ret); \
-      exit(1); \
-    } \
-    waitForCallback(); \
-  } while (0)
+#define WAKU_CALL(call) \
+  do \
+  { \
+    int ret = call; \
+    if (ret != 0) \
+    { \
+      printf("Failed the call to: %s. Returned code: %d\n", #call, ret); \
+      exit(1); \
+    } \
+    waitForCallback(); \
+  } while (0)
-struct ConfigNode {
-  char host[128];
-  int port;
-  char key[128];
-  int relay;
-  char peers[2048];
-  int store;
-  char storeNode[2048];
-  char storeRetentionPolicy[64];
-  char storeDbUrl[256];
-  int storeVacuum;
-  int storeDbMigration;
-  int storeMaxNumDbConnections;
-};
+struct ConfigNode
+{
+  char host[128];
+  int port;
+  char key[128];
+  int relay;
+  char peers[2048];
+  int store;
+  char storeNode[2048];
+  char storeRetentionPolicy[64];
+  char storeDbUrl[256];
+  int storeVacuum;
+  int storeDbMigration;
+  int storeMaxNumDbConnections;
+};

 // libwaku Context
-void* ctx;
+void *ctx;

 // For the case of C language we don't need to store a particular userData
-void* userData = NULL;
+void *userData = NULL;

 // Arguments parsing
 static char doc[] = "\nC example that shows how to use the waku library.";
 static char args_doc[] = "";

 static struct argp_option options[] = {
-  { "host", 'h', "HOST", 0, "IP to listen for for LibP2P traffic. (default: \"0.0.0.0\")"},
-  { "port", 'p', "PORT", 0, "TCP listening port. (default: \"60000\")"},
-  { "key", 'k', "KEY", 0, "P2P node private key as 64 char hex string."},
-  { "relay", 'r', "RELAY", 0, "Enable relay protocol: 1 or 0. (default: 1)"},
-  { "peers", 'a', "PEERS", 0, "Comma-separated list of peer-multiaddress to connect\
- to. (default: \"\") e.g. \"/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\""},
-  { 0 }
-};
+    {"host", 'h', "HOST", 0, "IP to listen for for LibP2P traffic. (default: \"0.0.0.0\")"},
+    {"port", 'p', "PORT", 0, "TCP listening port. (default: \"60000\")"},
+    {"key", 'k', "KEY", 0, "P2P node private key as 64 char hex string."},
+    {"relay", 'r', "RELAY", 0, "Enable relay protocol: 1 or 0. (default: 1)"},
+    {"peers", 'a', "PEERS", 0, "Comma-separated list of peer-multiaddress to connect\
+ to. (default: \"\") e.g. \"/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\""},
+    {0}};

-static error_t parse_opt(int key, char *arg, struct argp_state *state) {
+static error_t parse_opt(int key, char *arg, struct argp_state *state)
+{
   struct ConfigNode *cfgNode = state->input;
-  switch (key) {
+  switch (key)
+  {
   case 'h':
     snprintf(cfgNode->host, 128, "%s", arg);
     break;
   case 'p':
     cfgNode->port = atoi(arg);
     break;
   case 'k':
     snprintf(cfgNode->key, 128, "%s", arg);
     break;
   case 'r':
     cfgNode->relay = atoi(arg);
     break;
   case 'a':
     snprintf(cfgNode->peers, 2048, "%s", arg);
     break;
   case ARGP_KEY_ARG:
     if (state->arg_num >= 1) /* Too many arguments. */
       argp_usage(state);
     break;
   case ARGP_KEY_END:
     break;
   default:
     return ARGP_ERR_UNKNOWN;
   }
   return 0;
 }

-void signal_cond() {
+void signal_cond()
+{
   pthread_mutex_lock(&mutex);
   callback_executed = 1;
   pthread_cond_signal(&cond);
   pthread_mutex_unlock(&mutex);
 }

-static struct argp argp = { options, parse_opt, args_doc, doc, 0, 0, 0 };
+static struct argp argp = {options, parse_opt, args_doc, doc, 0, 0, 0};
void event_handler(int callerRet, const char* msg, size_t len, void* userData) { void event_handler(int callerRet, const char *msg, size_t len, void *userData)
if (callerRet == RET_ERR) { {
printf("Error: %s\n", msg); if (callerRet == RET_ERR)
exit(1); {
} printf("Error: %s\n", msg);
else if (callerRet == RET_OK) { exit(1);
printf("Receiving event: %s\n", msg); }
} else if (callerRet == RET_OK)
{
printf("Receiving event: %s\n", msg);
}
signal_cond(); signal_cond();
} }
void on_event_received(int callerRet, const char* msg, size_t len, void* userData) { void on_event_received(int callerRet, const char *msg, size_t len, void *userData)
if (callerRet == RET_ERR) { {
printf("Error: %s\n", msg); if (callerRet == RET_ERR)
exit(1); {
} printf("Error: %s\n", msg);
else if (callerRet == RET_OK) { exit(1);
printf("Receiving event: %s\n", msg); }
} else if (callerRet == RET_OK)
{
printf("Receiving event: %s\n", msg);
}
} }
char* contentTopic = NULL; char *contentTopic = NULL;
void handle_content_topic(int callerRet, const char* msg, size_t len, void* userData) { void handle_content_topic(int callerRet, const char *msg, size_t len, void *userData)
if (contentTopic != NULL) { {
free(contentTopic); if (contentTopic != NULL)
} {
free(contentTopic);
}
contentTopic = malloc(len * sizeof(char) + 1); contentTopic = malloc(len * sizeof(char) + 1);
strcpy(contentTopic, msg); strcpy(contentTopic, msg);
signal_cond(); signal_cond();
} }
char* publishResponse = NULL; char *publishResponse = NULL;
void handle_publish_ok(int callerRet, const char* msg, size_t len, void* userData) { void handle_publish_ok(int callerRet, const char *msg, size_t len, void *userData)
printf("Publish Ok: %s %lu\n", msg, len); {
printf("Publish Ok: %s %lu\n", msg, len);
if (publishResponse != NULL) { if (publishResponse != NULL)
free(publishResponse); {
} free(publishResponse);
}
publishResponse = malloc(len * sizeof(char) + 1); publishResponse = malloc(len * sizeof(char) + 1);
strcpy(publishResponse, msg); strcpy(publishResponse, msg);
} }
#define MAX_MSG_SIZE 65535 #define MAX_MSG_SIZE 65535
void publish_message(const char* msg) { void publish_message(const char *msg)
char jsonWakuMsg[MAX_MSG_SIZE]; {
char *msgPayload = b64_encode(msg, strlen(msg)); char jsonWakuMsg[MAX_MSG_SIZE];
char *msgPayload = b64_encode(msg, strlen(msg));
WAKU_CALL( waku_content_topic(ctx, WAKU_CALL(waku_content_topic(ctx,
"appName", handle_content_topic,
1, userData,
"contentTopicName", "appName",
"encoding", 1,
handle_content_topic, "contentTopicName",
userData) ); "encoding"));
snprintf(jsonWakuMsg, snprintf(jsonWakuMsg,
MAX_MSG_SIZE, MAX_MSG_SIZE,
"{\"payload\":\"%s\",\"contentTopic\":\"%s\"}", "{\"payload\":\"%s\",\"contentTopic\":\"%s\"}",
msgPayload, contentTopic); msgPayload, contentTopic);
free(msgPayload); free(msgPayload);
WAKU_CALL( waku_relay_publish(ctx, WAKU_CALL(waku_relay_publish(ctx,
"/waku/2/rs/16/32", event_handler,
jsonWakuMsg, userData,
10000 /*timeout ms*/, "/waku/2/rs/16/32",
event_handler, jsonWakuMsg,
userData) ); 10000 /*timeout ms*/));
} }
void show_help_and_exit() { void show_help_and_exit()
printf("Wrong parameters\n"); {
exit(1); printf("Wrong parameters\n");
exit(1);
} }
void print_default_pubsub_topic(int callerRet, const char* msg, size_t len, void* userData) { void print_default_pubsub_topic(int callerRet, const char *msg, size_t len, void *userData)
printf("Default pubsub topic: %s\n", msg); {
signal_cond(); printf("Default pubsub topic: %s\n", msg);
signal_cond();
} }
void print_waku_version(int callerRet, const char* msg, size_t len, void* userData) { void print_waku_version(int callerRet, const char *msg, size_t len, void *userData)
printf("Git Version: %s\n", msg); {
signal_cond(); printf("Git Version: %s\n", msg);
signal_cond();
} }
// Beginning of UI program logic // Beginning of UI program logic
enum PROGRAM_STATE { enum PROGRAM_STATE
MAIN_MENU, {
SUBSCRIBE_TOPIC_MENU, MAIN_MENU,
CONNECT_TO_OTHER_NODE_MENU, SUBSCRIBE_TOPIC_MENU,
PUBLISH_MESSAGE_MENU CONNECT_TO_OTHER_NODE_MENU,
PUBLISH_MESSAGE_MENU
}; };
enum PROGRAM_STATE current_state = MAIN_MENU; enum PROGRAM_STATE current_state = MAIN_MENU;
void show_main_menu() { void show_main_menu()
printf("\nPlease, select an option:\n"); {
printf("\t1.) Subscribe to topic\n"); printf("\nPlease, select an option:\n");
printf("\t2.) Connect to other node\n"); printf("\t1.) Subscribe to topic\n");
printf("\t3.) Publish a message\n"); printf("\t2.) Connect to other node\n");
printf("\t3.) Publish a message\n");
} }
void handle_user_input() { void handle_user_input()
char cmd[1024]; {
memset(cmd, 0, 1024); char cmd[1024];
int numRead = read(0, cmd, 1024); memset(cmd, 0, 1024);
if (numRead <= 0) { int numRead = read(0, cmd, 1024);
return; if (numRead <= 0)
} {
return;
}
switch (atoi(cmd)) switch (atoi(cmd))
{ {
case SUBSCRIBE_TOPIC_MENU: case SUBSCRIBE_TOPIC_MENU:
{ {
printf("Indicate the Pubsubtopic to subscribe:\n"); printf("Indicate the Pubsubtopic to subscribe:\n");
char pubsubTopic[128]; char pubsubTopic[128];
scanf("%127s", pubsubTopic); scanf("%127s", pubsubTopic);
WAKU_CALL( waku_relay_subscribe(ctx, WAKU_CALL(waku_relay_subscribe(ctx,
pubsubTopic, event_handler,
event_handler, userData,
userData) ); pubsubTopic));
printf("The subscription went well\n"); printf("The subscription went well\n");
show_main_menu(); show_main_menu();
} }
break;
case CONNECT_TO_OTHER_NODE_MENU:
// printf("Connecting to a node. Please indicate the peer Multiaddress:\n");
// printf("e.g.: /ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\n");
// char peerAddr[512];
// scanf("%511s", peerAddr);
// WAKU_CALL(waku_connect(ctx, peerAddr, 10000 /* timeoutMs */, event_handler, userData));
show_main_menu();
break; break;
case CONNECT_TO_OTHER_NODE_MENU: case PUBLISH_MESSAGE_MENU:
printf("Connecting to a node. Please indicate the peer Multiaddress:\n"); {
printf("e.g.: /ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\n"); printf("Type the message to publish:\n");
char peerAddr[512]; char msg[1024];
scanf("%511s", peerAddr); scanf("%1023s", msg);
WAKU_CALL(waku_connect(ctx, peerAddr, 10000 /* timeoutMs */, event_handler, userData));
show_main_menu(); publish_message(msg);
show_main_menu();
}
break;
case MAIN_MENU:
break; break;
}
case PUBLISH_MESSAGE_MENU:
{
printf("Type the message to publish:\n");
char msg[1024];
scanf("%1023s", msg);
publish_message(msg);
show_main_menu();
}
break;
case MAIN_MENU:
break;
}
} }
// End of UI program logic // End of UI program logic
int main(int argc, char** argv) { int main(int argc, char **argv)
struct ConfigNode cfgNode; {
// default values struct ConfigNode cfgNode;
snprintf(cfgNode.host, 128, "0.0.0.0"); // default values
cfgNode.port = 60000; snprintf(cfgNode.host, 128, "0.0.0.0");
cfgNode.relay = 1; cfgNode.port = 60000;
cfgNode.relay = 1;
cfgNode.store = 0; cfgNode.store = 0;
snprintf(cfgNode.storeNode, 2048, ""); snprintf(cfgNode.storeNode, 2048, "");
snprintf(cfgNode.storeRetentionPolicy, 64, "time:6000000"); snprintf(cfgNode.storeRetentionPolicy, 64, "time:6000000");
snprintf(cfgNode.storeDbUrl, 256, "postgres://postgres:test123@localhost:5432/postgres"); snprintf(cfgNode.storeDbUrl, 256, "postgres://postgres:test123@localhost:5432/postgres");
cfgNode.storeVacuum = 0; cfgNode.storeVacuum = 0;
cfgNode.storeDbMigration = 0; cfgNode.storeDbMigration = 0;
cfgNode.storeMaxNumDbConnections = 30; cfgNode.storeMaxNumDbConnections = 30;
if (argp_parse(&argp, argc, argv, 0, 0, &cfgNode) if (argp_parse(&argp, argc, argv, 0, 0, &cfgNode) == ARGP_ERR_UNKNOWN)
== ARGP_ERR_UNKNOWN) { {
show_help_and_exit(); show_help_and_exit();
} }
char jsonConfig[5000]; char jsonConfig[5000];
snprintf(jsonConfig, 5000, "{ \ snprintf(jsonConfig, 5000, "{ \
\"clusterId\": 16, \ \"clusterId\": 16, \
\"shards\": [ 1, 32, 64, 128, 256 ], \ \"shards\": [ 1, 32, 64, 128, 256 ], \
\"numShardsInNetwork\": 257, \ \"numShardsInNetwork\": 257, \
@ -313,54 +339,56 @@ int main(int argc, char** argv) {
\"discv5UdpPort\": 9999, \ \"discv5UdpPort\": 9999, \
\"dnsDiscoveryUrl\": \"enrtree://AMOJVZX4V6EXP7NTJPMAYJYST2QP6AJXYW76IU6VGJS7UVSNDYZG4@boot.prod.status.nodes.status.im\", \ \"dnsDiscoveryUrl\": \"enrtree://AMOJVZX4V6EXP7NTJPMAYJYST2QP6AJXYW76IU6VGJS7UVSNDYZG4@boot.prod.status.nodes.status.im\", \
\"dnsDiscoveryNameServers\": [\"8.8.8.8\", \"1.0.0.1\"] \ \"dnsDiscoveryNameServers\": [\"8.8.8.8\", \"1.0.0.1\"] \
}", cfgNode.host, }",
cfgNode.port, cfgNode.host,
cfgNode.relay ? "true":"false", cfgNode.port,
cfgNode.store ? "true":"false", cfgNode.relay ? "true" : "false",
cfgNode.storeDbUrl, cfgNode.store ? "true" : "false",
cfgNode.storeRetentionPolicy, cfgNode.storeDbUrl,
cfgNode.storeMaxNumDbConnections); cfgNode.storeRetentionPolicy,
cfgNode.storeMaxNumDbConnections);
ctx = waku_new(jsonConfig, event_handler, userData); ctx = waku_new(jsonConfig, event_handler, userData);
waitForCallback(); waitForCallback();
WAKU_CALL( waku_default_pubsub_topic(ctx, print_default_pubsub_topic, userData) ); WAKU_CALL(waku_default_pubsub_topic(ctx, print_default_pubsub_topic, userData));
WAKU_CALL( waku_version(ctx, print_waku_version, userData) ); WAKU_CALL(waku_version(ctx, print_waku_version, userData));
printf("Bind addr: %s:%u\n", cfgNode.host, cfgNode.port); printf("Bind addr: %s:%u\n", cfgNode.host, cfgNode.port);
printf("Waku Relay enabled: %s\n", cfgNode.relay == 1 ? "YES": "NO"); printf("Waku Relay enabled: %s\n", cfgNode.relay == 1 ? "YES" : "NO");
waku_set_event_callback(ctx, on_event_received, userData); set_event_callback(ctx, on_event_received, userData);
waku_start(ctx, event_handler, userData); waku_start(ctx, event_handler, userData);
waitForCallback(); waitForCallback();
WAKU_CALL( waku_listen_addresses(ctx, event_handler, userData) ); WAKU_CALL(waku_listen_addresses(ctx, event_handler, userData));
WAKU_CALL( waku_relay_subscribe(ctx, WAKU_CALL(waku_relay_subscribe(ctx,
"/waku/2/rs/0/0", event_handler,
event_handler, userData,
userData) ); "/waku/2/rs/16/32"));
WAKU_CALL( waku_discv5_update_bootnodes(ctx, WAKU_CALL(waku_discv5_update_bootnodes(ctx,
"[\"enr:-QEkuEBIkb8q8_mrorHndoXH9t5N6ZfD-jehQCrYeoJDPHqT0l0wyaONa2-piRQsi3oVKAzDShDVeoQhy0uwN1xbZfPZAYJpZIJ2NIJpcIQiQlleim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwA2Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQKnGt-GSgqPSf3IAPM7bFgTlpczpMZZLF3geeoNNsxzSoN0Y3CCdl-DdWRwgiMohXdha3UyDw\",\"enr:-QEkuEB3WHNS-xA3RDpfu9A2Qycr3bN3u7VoArMEiDIFZJ66F1EB3d4wxZN1hcdcOX-RfuXB-MQauhJGQbpz3qUofOtLAYJpZIJ2NIJpcIQI2SVcim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmFjLWNuLWhvbmdrb25nLWMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwA2Ni9ub2RlLTAxLmFjLWNuLWhvbmdrb25nLWMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQPK35Nnz0cWUtSAhBp7zvHEhyU_AqeQUlqzLiLxfP2L4oN0Y3CCdl-DdWRwgiMohXdha3UyDw\"]", event_handler,
event_handler, userData,
userData) ); "[\"enr:-QEkuEBIkb8q8_mrorHndoXH9t5N6ZfD-jehQCrYeoJDPHqT0l0wyaONa2-piRQsi3oVKAzDShDVeoQhy0uwN1xbZfPZAYJpZIJ2NIJpcIQiQlleim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwA2Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQKnGt-GSgqPSf3IAPM7bFgTlpczpMZZLF3geeoNNsxzSoN0Y3CCdl-DdWRwgiMohXdha3UyDw\",\"enr:-QEkuEB3WHNS-xA3RDpfu9A2Qycr3bN3u7VoArMEiDIFZJ66F1EB3d4wxZN1hcdcOX-RfuXB-MQauhJGQbpz3qUofOtLAYJpZIJ2NIJpcIQI2SVcim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmFjLWNuLWhvbmdrb25nLWMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwA2Ni9ub2RlLTAxLmFjLWNuLWhvbmdrb25nLWMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQPK35Nnz0cWUtSAhBp7zvHEhyU_AqeQUlqzLiLxfP2L4oN0Y3CCdl-DdWRwgiMohXdha3UyDw\"]"));
WAKU_CALL( waku_get_peerids_from_peerstore(ctx, WAKU_CALL(waku_get_peerids_from_peerstore(ctx,
event_handler, event_handler,
userData) ); userData));
show_main_menu(); show_main_menu();
while(1) { while (1)
handle_user_input(); {
handle_user_input();
// Uncomment the following if need to test the metrics retrieval // Uncomment the following if need to test the metrics retrieval
// WAKU_CALL( waku_get_metrics(ctx, // WAKU_CALL( waku_get_metrics(ctx,
// event_handler, // event_handler,
// userData) ); // userData) );
} }
pthread_mutex_destroy(&mutex); pthread_mutex_destroy(&mutex);
pthread_cond_destroy(&cond); pthread_cond_destroy(&cond);
} }


@@ -21,37 +21,43 @@ pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
int callback_executed = 0;

void waitForCallback()
{
  pthread_mutex_lock(&mutex);
  while (!callback_executed)
  {
    pthread_cond_wait(&cond, &mutex);
  }
  callback_executed = 0;
  pthread_mutex_unlock(&mutex);
}

void signal_cond()
{
  pthread_mutex_lock(&mutex);
  callback_executed = 1;
  pthread_cond_signal(&cond);
  pthread_mutex_unlock(&mutex);
}

#define WAKU_CALL(call)                                                          \
  do                                                                             \
  {                                                                              \
    int ret = call;                                                              \
    if (ret != 0)                                                                \
    {                                                                            \
      std::cout << "Failed the call to: " << #call << ". Code: " << ret << "\n"; \
    }                                                                            \
    waitForCallback();                                                           \
  } while (0)

struct ConfigNode
{
  char host[128];
  int port;
  char key[128];
  int relay;
  char peers[2048];
};

// Arguments parsing
@@ -59,70 +65,76 @@ static char doc[] = "\nC example that shows how to use the waku library.";
static char args_doc[] = "";

static struct argp_option options[] = {
    {"host", 'h', "HOST", 0, "IP to listen for for LibP2P traffic. (default: \"0.0.0.0\")"},
    {"port", 'p', "PORT", 0, "TCP listening port. (default: \"60000\")"},
    {"key", 'k', "KEY", 0, "P2P node private key as 64 char hex string."},
    {"relay", 'r', "RELAY", 0, "Enable relay protocol: 1 or 0. (default: 1)"},
    {"peers", 'a', "PEERS", 0, "Comma-separated list of peer-multiaddress to connect\
to. (default: \"\") e.g. \"/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\""},
    {0}};

static error_t parse_opt(int key, char *arg, struct argp_state *state)
{
  struct ConfigNode *cfgNode = (ConfigNode *)state->input;
  switch (key)
  {
  case 'h':
    snprintf(cfgNode->host, 128, "%s", arg);
    break;
  case 'p':
    cfgNode->port = atoi(arg);
    break;
  case 'k':
    snprintf(cfgNode->key, 128, "%s", arg);
    break;
  case 'r':
    cfgNode->relay = atoi(arg);
    break;
  case 'a':
    snprintf(cfgNode->peers, 2048, "%s", arg);
    break;
  case ARGP_KEY_ARG:
    if (state->arg_num >= 1) /* Too many arguments. */
      argp_usage(state);
    break;
  case ARGP_KEY_END:
    break;
  default:
    return ARGP_ERR_UNKNOWN;
  }
  return 0;
}

void event_handler(const char *msg, size_t len)
{
  printf("Receiving event: %s\n", msg);
}

void handle_error(const char *msg, size_t len)
{
  printf("handle_error: %s\n", msg);
  exit(1);
}

template <class F>
auto cify(F &&f)
{
  static F fn = std::forward<F>(f);
  return [](int callerRet, const char *msg, size_t len, void *userData)
  {
    signal_cond();
    return fn(msg, len);
  };
}

static struct argp argp = {options, parse_opt, args_doc, doc, 0, 0, 0};

// Beginning of UI program logic
enum PROGRAM_STATE
{
  MAIN_MENU,
  SUBSCRIBE_TOPIC_MENU,
  CONNECT_TO_OTHER_NODE_MENU,
@@ -131,18 +143,21 @@ enum PROGRAM_STATE {
enum PROGRAM_STATE current_state = MAIN_MENU;

void show_main_menu()
{
  printf("\nPlease, select an option:\n");
  printf("\t1.) Subscribe to topic\n");
  printf("\t2.) Connect to other node\n");
  printf("\t3.) Publish a message\n");
}

void handle_user_input(void *ctx)
{
  char cmd[1024];
  memset(cmd, 0, 1024);
  int numRead = read(0, cmd, 1024);
  if (numRead <= 0)
  {
    return;
  }
@@ -154,12 +169,11 @@ void handle_user_input(void* ctx) {
    char pubsubTopic[128];
    scanf("%127s", pubsubTopic);
    WAKU_CALL(waku_relay_subscribe(ctx,
                                   cify([&](const char *msg, size_t len)
                                        { event_handler(msg, len); }),
                                   nullptr,
                                   pubsubTopic));
    printf("The subscription went well\n");
    show_main_menu();
@@ -171,15 +185,14 @@ void handle_user_input(void* ctx) {
    printf("e.g.: /ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\n");
    char peerAddr[512];
    scanf("%511s", peerAddr);
    WAKU_CALL(waku_connect(ctx,
                           cify([&](const char *msg, size_t len)
                                { event_handler(msg, len); }),
                           nullptr,
                           peerAddr,
                           10000 /* timeoutMs */));
    show_main_menu();
    break;
  case PUBLISH_MESSAGE_MENU:
  {
@@ -193,28 +206,26 @@ void handle_user_input(void* ctx) {
    std::string contentTopic;
    waku_content_topic(ctx,
                       cify([&contentTopic](const char *msg, size_t len)
                            { contentTopic = msg; }),
                       nullptr,
                       "appName",
                       1,
                       "contentTopicName",
                       "encoding");

    snprintf(jsonWakuMsg,
             2048,
             "{\"payload\":\"%s\",\"contentTopic\":\"%s\"}",
             msgPayload.data(), contentTopic.c_str());

    WAKU_CALL(waku_relay_publish(ctx,
                                 cify([&](const char *msg, size_t len)
                                      { event_handler(msg, len); }),
                                 nullptr,
                                 "/waku/2/rs/16/32",
                                 jsonWakuMsg,
                                 10000 /*timeout ms*/));
    show_main_menu();
  }
@@ -227,12 +238,14 @@ void handle_user_input(void* ctx) {
// End of UI program logic

void show_help_and_exit()
{
  printf("Wrong parameters\n");
  exit(1);
}

int main(int argc, char **argv)
{
  struct ConfigNode cfgNode;
  // default values
  snprintf(cfgNode.host, 128, "0.0.0.0");
@@ -241,8 +254,8 @@ int main(int argc, char** argv) {
  cfgNode.port = 60000;
  cfgNode.relay = 1;

  if (argp_parse(&argp, argc, argv, 0, 0, &cfgNode) == ARGP_ERR_UNKNOWN)
  {
    show_help_and_exit();
  }
@@ -260,72 +273,64 @@ int main(int argc, char** argv) {
        \"discv5UdpPort\": 9999, \
        \"dnsDiscoveryUrl\": \"enrtree://AMOJVZX4V6EXP7NTJPMAYJYST2QP6AJXYW76IU6VGJS7UVSNDYZG4@boot.prod.status.nodes.status.im\", \
        \"dnsDiscoveryNameServers\": [\"8.8.8.8\", \"1.0.0.1\"] \
    }",
           cfgNode.host,
           cfgNode.port);

  void *ctx =
      waku_new(jsonConfig,
               cify([](const char *msg, size_t len)
                    { std::cout << "waku_new feedback: " << msg << std::endl; }),
               nullptr);
  waitForCallback();

  // example on how to retrieve a value from the `libwaku` callback.
  std::string defaultPubsubTopic;
  WAKU_CALL(
      waku_default_pubsub_topic(
          ctx,
          cify([&defaultPubsubTopic](const char *msg, size_t len)
               { defaultPubsubTopic = msg; }),
          nullptr));
  std::cout << "Default pubsub topic: " << defaultPubsubTopic << std::endl;

  WAKU_CALL(waku_version(ctx,
                         cify([&](const char *msg, size_t len)
                              { std::cout << "Git Version: " << msg << std::endl; }),
                         nullptr));

  printf("Bind addr: %s:%u\n", cfgNode.host, cfgNode.port);
  printf("Waku Relay enabled: %s\n", cfgNode.relay == 1 ? "YES" : "NO");

  std::string pubsubTopic;
  WAKU_CALL(waku_pubsub_topic(ctx,
                              cify([&](const char *msg, size_t len)
                                   { pubsubTopic = msg; }),
                              nullptr,
                              "example"));
  std::cout << "Custom pubsub topic: " << pubsubTopic << std::endl;

  set_event_callback(ctx,
                     cify([&](const char *msg, size_t len)
                          { event_handler(msg, len); }),
                     nullptr);

  WAKU_CALL(waku_start(ctx,
                       cify([&](const char *msg, size_t len)
                            { event_handler(msg, len); }),
                       nullptr));

  WAKU_CALL(waku_relay_subscribe(ctx,
                                 cify([&](const char *msg, size_t len)
                                      { event_handler(msg, len); }),
                                 nullptr,
                                 defaultPubsubTopic.c_str()));

  show_main_menu();
  while (1)
  {
    handle_user_input(ctx);
  }
}


@@ -62,13 +62,9 @@ proc setupAndSubscribe(rng: ref HmacDrbgContext) {.async.} =
      "Building ENR with relay sharding failed"
    )

  let record = enrBuilder.build().valueOr:
    error "failed to create enr record", error = error
    quit(QuitFailure)

  var builder = WakuNodeBuilder.init()
  builder.withNodeKey(nodeKey)
@@ -92,20 +88,18 @@ proc setupAndSubscribe(rng: ref HmacDrbgContext) {.async.} =
  while true:
    notice "maintaining subscription"
    # First use filter-ping to check if we have an active subscription
    if (await node.wakuFilterClient.ping(filterPeer)).isErr():
      # No subscription found. Let's subscribe.
      notice "no subscription found. Sending subscribe request"

      (
        await node.wakuFilterClient.subscribe(
          filterPeer, FilterPubsubTopic, @[FilterContentTopic]
        )
      ).isOkOr:
        notice "subscribe request failed. Quitting.", error = error
        break
      notice "subscribe request successful."
    else:
      notice "subscription found."


@@ -71,32 +71,32 @@ package main
static void* cGoWakuNew(const char* configJson, void* resp) {
    // We pass NULL because we are not interested in retrieving data from this callback
    void* ret = waku_new(configJson, (FFICallBack) callback, resp);
    return ret;
}

static void cGoWakuStart(void* wakuCtx, void* resp) {
    WAKU_CALL(waku_start(wakuCtx, (FFICallBack) callback, resp));
}

static void cGoWakuStop(void* wakuCtx, void* resp) {
    WAKU_CALL(waku_stop(wakuCtx, (FFICallBack) callback, resp));
}

static void cGoWakuDestroy(void* wakuCtx, void* resp) {
    WAKU_CALL(waku_destroy(wakuCtx, (FFICallBack) callback, resp));
}

static void cGoWakuStartDiscV5(void* wakuCtx, void* resp) {
    WAKU_CALL(waku_start_discv5(wakuCtx, (FFICallBack) callback, resp));
}

static void cGoWakuStopDiscV5(void* wakuCtx, void* resp) {
    WAKU_CALL(waku_stop_discv5(wakuCtx, (FFICallBack) callback, resp));
}

static void cGoWakuVersion(void* wakuCtx, void* resp) {
    WAKU_CALL(waku_version(wakuCtx, (FFICallBack) callback, resp));
}

static void cGoWakuSetEventCallback(void* wakuCtx) {
@@ -112,7 +112,7 @@ package main
    // This technique is needed because cgo only allows to export Go functions and not methods.
    set_event_callback(wakuCtx, (FFICallBack) globalEventCallback, wakuCtx);
}

static void cGoWakuContentTopic(void* wakuCtx,
@@ -123,20 +123,21 @@ package main
                                void* resp) {
    WAKU_CALL( waku_content_topic(wakuCtx,
                        (FFICallBack) callback,
                        resp,
                        appName,
                        appVersion,
                        contentTopicName,
                        encoding
                        ) );
}

static void cGoWakuPubsubTopic(void* wakuCtx, char* topicName, void* resp) {
    WAKU_CALL( waku_pubsub_topic(wakuCtx, (FFICallBack) callback, resp, topicName) );
}

static void cGoWakuDefaultPubsubTopic(void* wakuCtx, void* resp) {
    WAKU_CALL (waku_default_pubsub_topic(wakuCtx, (FFICallBack) callback, resp));
}

static void cGoWakuRelayPublish(void* wakuCtx,
@@ -146,34 +147,36 @@ package main
                                void* resp) {
    WAKU_CALL (waku_relay_publish(wakuCtx,
                        (FFICallBack) callback,
                        resp,
                        pubSubTopic,
                        jsonWakuMessage,
                        timeoutMs
                        ));
}

static void cGoWakuRelaySubscribe(void* wakuCtx, char* pubSubTopic, void* resp) {
    WAKU_CALL ( waku_relay_subscribe(wakuCtx,
                        (FFICallBack) callback,
                        resp,
                        pubSubTopic) );
}

static void cGoWakuRelayUnsubscribe(void* wakuCtx, char* pubSubTopic, void* resp) {
    WAKU_CALL ( waku_relay_unsubscribe(wakuCtx,
                        (FFICallBack) callback,
                        resp,
                        pubSubTopic) );
}

static void cGoWakuConnect(void* wakuCtx, char* peerMultiAddr, int timeoutMs, void* resp) {
    WAKU_CALL( waku_connect(wakuCtx,
                        (FFICallBack) callback,
                        resp,
                        peerMultiAddr,
                        timeoutMs
                        ) );
}

static void cGoWakuDialPeerById(void* wakuCtx,
@@ -183,42 +186,44 @@ package main
                                void* resp) {
    WAKU_CALL( waku_dial_peer_by_id(wakuCtx,
                        (FFICallBack) callback,
                        resp,
                        peerId,
                        protocol,
                        timeoutMs
                        ) );
}

static void cGoWakuDisconnectPeerById(void* wakuCtx, char* peerId, void* resp) {
    WAKU_CALL( waku_disconnect_peer_by_id(wakuCtx,
                        (FFICallBack) callback,
                        resp,
                        peerId
                        ) );
}

static void cGoWakuListenAddresses(void* wakuCtx, void* resp) {
    WAKU_CALL (waku_listen_addresses(wakuCtx, (FFICallBack) callback, resp) );
}

static void cGoWakuGetMyENR(void* ctx, void* resp) {
    WAKU_CALL (waku_get_my_enr(ctx, (FFICallBack) callback, resp) );
}

static void cGoWakuGetMyPeerId(void* ctx, void* resp) {
    WAKU_CALL (waku_get_my_peerid(ctx, (FFICallBack) callback, resp) );
}

static void cGoWakuListPeersInMesh(void* ctx, char* pubSubTopic, void* resp) {
    WAKU_CALL (waku_relay_get_num_peers_in_mesh(ctx, (FFICallBack) callback, resp, pubSubTopic) );
}

static void cGoWakuGetNumConnectedPeers(void* ctx, char* pubSubTopic, void* resp) {
    WAKU_CALL (waku_relay_get_num_connected_peers(ctx, (FFICallBack) callback, resp, pubSubTopic) );
}
} }
static void cGoWakuGetPeerIdsFromPeerStore(void* wakuCtx, void* resp) { static void cGoWakuGetPeerIdsFromPeerStore(void* wakuCtx, void* resp) {
WAKU_CALL (waku_get_peerids_from_peerstore(wakuCtx, (WakuCallBack) callback, resp) ); WAKU_CALL (waku_get_peerids_from_peerstore(wakuCtx, (FFICallBack) callback, resp) );
} }
static void cGoWakuLightpushPublish(void* wakuCtx, static void cGoWakuLightpushPublish(void* wakuCtx,
@ -227,10 +232,11 @@ package main
void* resp) { void* resp) {
WAKU_CALL (waku_lightpush_publish(wakuCtx, WAKU_CALL (waku_lightpush_publish(wakuCtx,
(FFICallBack) callback,
resp,
pubSubTopic, pubSubTopic,
jsonWakuMessage, jsonWakuMessage
(WakuCallBack) callback, ));
resp));
} }
static void cGoWakuStoreQuery(void* wakuCtx, static void cGoWakuStoreQuery(void* wakuCtx,
@ -240,11 +246,12 @@ package main
void* resp) { void* resp) {
WAKU_CALL (waku_store_query(wakuCtx, WAKU_CALL (waku_store_query(wakuCtx,
(FFICallBack) callback,
resp,
jsonQuery, jsonQuery,
peerAddr, peerAddr,
timeoutMs, timeoutMs
(WakuCallBack) callback, ));
resp));
} }
static void cGoWakuPeerExchangeQuery(void* wakuCtx, static void cGoWakuPeerExchangeQuery(void* wakuCtx,
@ -252,9 +259,10 @@ package main
void* resp) { void* resp) {
WAKU_CALL (waku_peer_exchange_request(wakuCtx, WAKU_CALL (waku_peer_exchange_request(wakuCtx,
numPeers, (FFICallBack) callback,
(WakuCallBack) callback, resp,
resp)); numPeers
));
} }
static void cGoWakuGetPeerIdsByProtocol(void* wakuCtx, static void cGoWakuGetPeerIdsByProtocol(void* wakuCtx,
@ -262,9 +270,10 @@ package main
void* resp) { void* resp) {
WAKU_CALL (waku_get_peerids_by_protocol(wakuCtx, WAKU_CALL (waku_get_peerids_by_protocol(wakuCtx,
protocol, (FFICallBack) callback,
(WakuCallBack) callback, resp,
resp)); protocol
));
} }
*/ */


@@ -0,0 +1,331 @@
// !$*UTF8*$!
{
archiveVersion = 1;
classes = {
};
objectVersion = 63;
objects = {
/* Begin PBXBuildFile section */
45714AF6D1D12AF5C36694FB /* WakuExampleApp.swift in Sources */ = {isa = PBXBuildFile; fileRef = 0671AF6DCB0D788B0C1E9C8B /* WakuExampleApp.swift */; };
6468FA3F5F760D3FCAD6CDBF /* ContentView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 7D8744E36DADC11F38A1CC99 /* ContentView.swift */; };
C4EA202B782038F96336401F /* WakuNode.swift in Sources */ = {isa = PBXBuildFile; fileRef = 638A565C495A63CFF7396FBC /* WakuNode.swift */; };
/* End PBXBuildFile section */
/* Begin PBXFileReference section */
0671AF6DCB0D788B0C1E9C8B /* WakuExampleApp.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = WakuExampleApp.swift; sourceTree = "<group>"; };
31BE20DB2755A11000723420 /* libwaku.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = libwaku.h; sourceTree = "<group>"; };
5C5AAC91E0166D28BFA986DB /* Info.plist */ = {isa = PBXFileReference; lastKnownFileType = text.plist; path = Info.plist; sourceTree = "<group>"; };
638A565C495A63CFF7396FBC /* WakuNode.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = WakuNode.swift; sourceTree = "<group>"; };
7D8744E36DADC11F38A1CC99 /* ContentView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = ContentView.swift; sourceTree = "<group>"; };
A8655016B3DF9B0877631CE5 /* WakuExample-Bridging-Header.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = "WakuExample-Bridging-Header.h"; sourceTree = "<group>"; };
CFBE844B6E18ACB81C65F83B /* WakuExample.app */ = {isa = PBXFileReference; explicitFileType = wrapper.application; includeInIndex = 0; path = WakuExample.app; sourceTree = BUILT_PRODUCTS_DIR; };
/* End PBXFileReference section */
/* Begin PBXGroup section */
34547A6259485BD047D6375C /* Products */ = {
isa = PBXGroup;
children = (
CFBE844B6E18ACB81C65F83B /* WakuExample.app */,
);
name = Products;
sourceTree = "<group>";
};
4F76CB85EC44E951B8E75522 /* WakuExample */ = {
isa = PBXGroup;
children = (
7D8744E36DADC11F38A1CC99 /* ContentView.swift */,
5C5AAC91E0166D28BFA986DB /* Info.plist */,
31BE20DB2755A11000723420 /* libwaku.h */,
A8655016B3DF9B0877631CE5 /* WakuExample-Bridging-Header.h */,
0671AF6DCB0D788B0C1E9C8B /* WakuExampleApp.swift */,
638A565C495A63CFF7396FBC /* WakuNode.swift */,
);
path = WakuExample;
sourceTree = "<group>";
};
D40CD2446F177CAABB0A747A = {
isa = PBXGroup;
children = (
4F76CB85EC44E951B8E75522 /* WakuExample */,
34547A6259485BD047D6375C /* Products */,
);
sourceTree = "<group>";
};
/* End PBXGroup section */
/* Begin PBXNativeTarget section */
F751EF8294AD21F713D47FDA /* WakuExample */ = {
isa = PBXNativeTarget;
buildConfigurationList = 757FA0123629BD63CB254113 /* Build configuration list for PBXNativeTarget "WakuExample" */;
buildPhases = (
D3AFD8C4DA68BF5C4F7D8E10 /* Sources */,
);
buildRules = (
);
dependencies = (
);
name = WakuExample;
packageProductDependencies = (
);
productName = WakuExample;
productReference = CFBE844B6E18ACB81C65F83B /* WakuExample.app */;
productType = "com.apple.product-type.application";
};
/* End PBXNativeTarget section */
/* Begin PBXProject section */
4FF82F0F4AF8E1E34728F150 /* Project object */ = {
isa = PBXProject;
attributes = {
BuildIndependentTargetsInParallel = YES;
LastUpgradeCheck = 1500;
};
buildConfigurationList = B3A4F48294254543E79767C4 /* Build configuration list for PBXProject "WakuExample" */;
compatibilityVersion = "Xcode 14.0";
developmentRegion = en;
hasScannedForEncodings = 0;
knownRegions = (
Base,
en,
);
mainGroup = D40CD2446F177CAABB0A747A;
minimizedProjectReferenceProxies = 1;
projectDirPath = "";
projectRoot = "";
targets = (
F751EF8294AD21F713D47FDA /* WakuExample */,
);
};
/* End PBXProject section */
/* Begin PBXSourcesBuildPhase section */
D3AFD8C4DA68BF5C4F7D8E10 /* Sources */ = {
isa = PBXSourcesBuildPhase;
buildActionMask = 2147483647;
files = (
6468FA3F5F760D3FCAD6CDBF /* ContentView.swift in Sources */,
45714AF6D1D12AF5C36694FB /* WakuExampleApp.swift in Sources */,
C4EA202B782038F96336401F /* WakuNode.swift in Sources */,
);
runOnlyForDeploymentPostprocessing = 0;
};
/* End PBXSourcesBuildPhase section */
/* Begin XCBuildConfiguration section */
36939122077C66DD94082311 /* Release */ = {
isa = XCBuildConfiguration;
buildSettings = {
ASSETCATALOG_COMPILER_APPICON_NAME = AppIcon;
CODE_SIGN_IDENTITY = "iPhone Developer";
DEVELOPMENT_TEAM = 2Q52K2W84K;
HEADER_SEARCH_PATHS = "$(PROJECT_DIR)/WakuExample";
INFOPLIST_FILE = WakuExample/Info.plist;
IPHONEOS_DEPLOYMENT_TARGET = 18.6;
LD_RUNPATH_SEARCH_PATHS = (
"$(inherited)",
"@executable_path/Frameworks",
);
"LIBRARY_SEARCH_PATHS[sdk=iphoneos*]" = "$(PROJECT_DIR)/../../build/ios/iphoneos-arm64";
"LIBRARY_SEARCH_PATHS[sdk=iphonesimulator*]" = "$(PROJECT_DIR)/../../build/ios/iphonesimulator-arm64";
MACOSX_DEPLOYMENT_TARGET = 15.6;
OTHER_LDFLAGS = (
"-lc++",
"-force_load",
"$(PROJECT_DIR)/../../build/ios/iphoneos-arm64/libwaku.a",
"-lsqlite3",
"-lz",
);
PRODUCT_BUNDLE_IDENTIFIER = org.waku.example;
SDKROOT = iphoneos;
SUPPORTED_PLATFORMS = "iphoneos iphonesimulator";
SUPPORTS_MACCATALYST = NO;
SUPPORTS_MAC_DESIGNED_FOR_IPHONE_IPAD = YES;
SUPPORTS_XR_DESIGNED_FOR_IPHONE_IPAD = YES;
SWIFT_OBJC_BRIDGING_HEADER = "WakuExample/WakuExample-Bridging-Header.h";
TARGETED_DEVICE_FAMILY = "1,2";
};
name = Release;
};
9BA833A09EEDB4B3FCCD8F8E /* Release */ = {
isa = XCBuildConfiguration;
buildSettings = {
ALWAYS_SEARCH_USER_PATHS = NO;
CLANG_ANALYZER_NONNULL = YES;
CLANG_ANALYZER_NUMBER_OBJECT_CONVERSION = YES_AGGRESSIVE;
CLANG_CXX_LANGUAGE_STANDARD = "gnu++14";
CLANG_CXX_LIBRARY = "libc++";
CLANG_ENABLE_MODULES = YES;
CLANG_ENABLE_OBJC_ARC = YES;
CLANG_ENABLE_OBJC_WEAK = YES;
CLANG_WARN_BLOCK_CAPTURE_AUTORELEASING = YES;
CLANG_WARN_BOOL_CONVERSION = YES;
CLANG_WARN_COMMA = YES;
CLANG_WARN_CONSTANT_CONVERSION = YES;
CLANG_WARN_DEPRECATED_OBJC_IMPLEMENTATIONS = YES;
CLANG_WARN_DIRECT_OBJC_ISA_USAGE = YES_ERROR;
CLANG_WARN_DOCUMENTATION_COMMENTS = YES;
CLANG_WARN_EMPTY_BODY = YES;
CLANG_WARN_ENUM_CONVERSION = YES;
CLANG_WARN_INFINITE_RECURSION = YES;
CLANG_WARN_INT_CONVERSION = YES;
CLANG_WARN_NON_LITERAL_NULL_CONVERSION = YES;
CLANG_WARN_OBJC_IMPLICIT_RETAIN_SELF = YES;
CLANG_WARN_OBJC_LITERAL_CONVERSION = YES;
CLANG_WARN_OBJC_ROOT_CLASS = YES_ERROR;
CLANG_WARN_QUOTED_INCLUDE_IN_FRAMEWORK_HEADER = YES;
CLANG_WARN_RANGE_LOOP_ANALYSIS = YES;
CLANG_WARN_STRICT_PROTOTYPES = YES;
CLANG_WARN_SUSPICIOUS_MOVE = YES;
CLANG_WARN_UNGUARDED_AVAILABILITY = YES_AGGRESSIVE;
CLANG_WARN_UNREACHABLE_CODE = YES;
CLANG_WARN__DUPLICATE_METHOD_MATCH = YES;
COPY_PHASE_STRIP = NO;
DEBUG_INFORMATION_FORMAT = "dwarf-with-dsym";
ENABLE_NS_ASSERTIONS = NO;
ENABLE_STRICT_OBJC_MSGSEND = YES;
GCC_C_LANGUAGE_STANDARD = gnu11;
GCC_NO_COMMON_BLOCKS = YES;
GCC_WARN_64_TO_32_BIT_CONVERSION = YES;
GCC_WARN_ABOUT_RETURN_TYPE = YES_ERROR;
GCC_WARN_UNDECLARED_SELECTOR = YES;
GCC_WARN_UNINITIALIZED_AUTOS = YES_AGGRESSIVE;
GCC_WARN_UNUSED_FUNCTION = YES;
GCC_WARN_UNUSED_VARIABLE = YES;
IPHONEOS_DEPLOYMENT_TARGET = 18.6;
MTL_ENABLE_DEBUG_INFO = NO;
MTL_FAST_MATH = YES;
PRODUCT_NAME = "$(TARGET_NAME)";
SDKROOT = iphoneos;
SUPPORTED_PLATFORMS = "iphoneos iphonesimulator";
SUPPORTS_MACCATALYST = NO;
SWIFT_COMPILATION_MODE = wholemodule;
SWIFT_OPTIMIZATION_LEVEL = "-O";
SWIFT_VERSION = 5.0;
};
name = Release;
};
A59ABFB792FED8974231E5AC /* Debug */ = {
isa = XCBuildConfiguration;
buildSettings = {
ALWAYS_SEARCH_USER_PATHS = NO;
CLANG_ANALYZER_NONNULL = YES;
CLANG_ANALYZER_NUMBER_OBJECT_CONVERSION = YES_AGGRESSIVE;
CLANG_CXX_LANGUAGE_STANDARD = "gnu++14";
CLANG_CXX_LIBRARY = "libc++";
CLANG_ENABLE_MODULES = YES;
CLANG_ENABLE_OBJC_ARC = YES;
CLANG_ENABLE_OBJC_WEAK = YES;
CLANG_WARN_BLOCK_CAPTURE_AUTORELEASING = YES;
CLANG_WARN_BOOL_CONVERSION = YES;
CLANG_WARN_COMMA = YES;
CLANG_WARN_CONSTANT_CONVERSION = YES;
CLANG_WARN_DEPRECATED_OBJC_IMPLEMENTATIONS = YES;
CLANG_WARN_DIRECT_OBJC_ISA_USAGE = YES_ERROR;
CLANG_WARN_DOCUMENTATION_COMMENTS = YES;
CLANG_WARN_EMPTY_BODY = YES;
CLANG_WARN_ENUM_CONVERSION = YES;
CLANG_WARN_INFINITE_RECURSION = YES;
CLANG_WARN_INT_CONVERSION = YES;
CLANG_WARN_NON_LITERAL_NULL_CONVERSION = YES;
CLANG_WARN_OBJC_IMPLICIT_RETAIN_SELF = YES;
CLANG_WARN_OBJC_LITERAL_CONVERSION = YES;
CLANG_WARN_OBJC_ROOT_CLASS = YES_ERROR;
CLANG_WARN_QUOTED_INCLUDE_IN_FRAMEWORK_HEADER = YES;
CLANG_WARN_RANGE_LOOP_ANALYSIS = YES;
CLANG_WARN_STRICT_PROTOTYPES = YES;
CLANG_WARN_SUSPICIOUS_MOVE = YES;
CLANG_WARN_UNGUARDED_AVAILABILITY = YES_AGGRESSIVE;
CLANG_WARN_UNREACHABLE_CODE = YES;
CLANG_WARN__DUPLICATE_METHOD_MATCH = YES;
COPY_PHASE_STRIP = NO;
DEBUG_INFORMATION_FORMAT = dwarf;
ENABLE_STRICT_OBJC_MSGSEND = YES;
ENABLE_TESTABILITY = YES;
GCC_C_LANGUAGE_STANDARD = gnu11;
GCC_DYNAMIC_NO_PIC = NO;
GCC_NO_COMMON_BLOCKS = YES;
GCC_OPTIMIZATION_LEVEL = 0;
GCC_PREPROCESSOR_DEFINITIONS = (
"$(inherited)",
"DEBUG=1",
);
GCC_WARN_64_TO_32_BIT_CONVERSION = YES;
GCC_WARN_ABOUT_RETURN_TYPE = YES_ERROR;
GCC_WARN_UNDECLARED_SELECTOR = YES;
GCC_WARN_UNINITIALIZED_AUTOS = YES_AGGRESSIVE;
GCC_WARN_UNUSED_FUNCTION = YES;
GCC_WARN_UNUSED_VARIABLE = YES;
IPHONEOS_DEPLOYMENT_TARGET = 18.6;
MTL_ENABLE_DEBUG_INFO = INCLUDE_SOURCE;
MTL_FAST_MATH = YES;
ONLY_ACTIVE_ARCH = YES;
PRODUCT_NAME = "$(TARGET_NAME)";
SDKROOT = iphoneos;
SUPPORTED_PLATFORMS = "iphoneos iphonesimulator";
SUPPORTS_MACCATALYST = NO;
SWIFT_ACTIVE_COMPILATION_CONDITIONS = DEBUG;
SWIFT_OPTIMIZATION_LEVEL = "-Onone";
SWIFT_VERSION = 5.0;
};
name = Debug;
};
AF5ADDAA865B1F6BD4E70A79 /* Debug */ = {
isa = XCBuildConfiguration;
buildSettings = {
ASSETCATALOG_COMPILER_APPICON_NAME = AppIcon;
CODE_SIGN_IDENTITY = "iPhone Developer";
DEVELOPMENT_TEAM = 2Q52K2W84K;
HEADER_SEARCH_PATHS = "$(PROJECT_DIR)/WakuExample";
INFOPLIST_FILE = WakuExample/Info.plist;
IPHONEOS_DEPLOYMENT_TARGET = 18.6;
LD_RUNPATH_SEARCH_PATHS = (
"$(inherited)",
"@executable_path/Frameworks",
);
"LIBRARY_SEARCH_PATHS[sdk=iphoneos*]" = "$(PROJECT_DIR)/../../build/ios/iphoneos-arm64";
"LIBRARY_SEARCH_PATHS[sdk=iphonesimulator*]" = "$(PROJECT_DIR)/../../build/ios/iphonesimulator-arm64";
MACOSX_DEPLOYMENT_TARGET = 15.6;
OTHER_LDFLAGS = (
"-lc++",
"-force_load",
"$(PROJECT_DIR)/../../build/ios/iphoneos-arm64/libwaku.a",
"-lsqlite3",
"-lz",
);
PRODUCT_BUNDLE_IDENTIFIER = org.waku.example;
SDKROOT = iphoneos;
SUPPORTED_PLATFORMS = "iphoneos iphonesimulator";
SUPPORTS_MACCATALYST = NO;
SUPPORTS_MAC_DESIGNED_FOR_IPHONE_IPAD = YES;
SUPPORTS_XR_DESIGNED_FOR_IPHONE_IPAD = YES;
SWIFT_OBJC_BRIDGING_HEADER = "WakuExample/WakuExample-Bridging-Header.h";
TARGETED_DEVICE_FAMILY = "1,2";
};
name = Debug;
};
/* End XCBuildConfiguration section */
/* Begin XCConfigurationList section */
757FA0123629BD63CB254113 /* Build configuration list for PBXNativeTarget "WakuExample" */ = {
isa = XCConfigurationList;
buildConfigurations = (
AF5ADDAA865B1F6BD4E70A79 /* Debug */,
36939122077C66DD94082311 /* Release */,
);
defaultConfigurationIsVisible = 0;
defaultConfigurationName = Debug;
};
B3A4F48294254543E79767C4 /* Build configuration list for PBXProject "WakuExample" */ = {
isa = XCConfigurationList;
buildConfigurations = (
A59ABFB792FED8974231E5AC /* Debug */,
9BA833A09EEDB4B3FCCD8F8E /* Release */,
);
defaultConfigurationIsVisible = 0;
defaultConfigurationName = Debug;
};
/* End XCConfigurationList section */
};
rootObject = 4FF82F0F4AF8E1E34728F150 /* Project object */;
}


@@ -0,0 +1,7 @@
<?xml version="1.0" encoding="UTF-8"?>
<Workspace
version = "1.0">
<FileRef
location = "self:">
</FileRef>
</Workspace>


@@ -0,0 +1,229 @@
//
// ContentView.swift
// WakuExample
//
// Minimal chat PoC using libwaku on iOS
//
import SwiftUI
struct ContentView: View {
@StateObject private var wakuNode = WakuNode()
@State private var messageText = ""
var body: some View {
ZStack {
// Main content
VStack(spacing: 0) {
// Header with status
HStack {
Circle()
.fill(statusColor)
.frame(width: 10, height: 10)
VStack(alignment: .leading, spacing: 2) {
Text(wakuNode.status.rawValue)
.font(.caption)
if wakuNode.status == .running {
HStack(spacing: 4) {
Text(wakuNode.isConnected ? "Connected" : "Discovering...")
Text("")
filterStatusView
}
.font(.caption2)
.foregroundColor(.secondary)
// Subscription maintenance status
if wakuNode.subscriptionMaintenanceActive {
HStack(spacing: 4) {
Image(systemName: "arrow.triangle.2.circlepath")
.foregroundColor(.blue)
Text("Maintenance active")
if wakuNode.failedSubscribeAttempts > 0 {
Text("(\(wakuNode.failedSubscribeAttempts) retries)")
.foregroundColor(.orange)
}
}
.font(.caption2)
.foregroundColor(.secondary)
}
}
}
Spacer()
if wakuNode.status == .stopped {
Button("Start") {
wakuNode.start()
}
.buttonStyle(.borderedProminent)
.controlSize(.small)
} else if wakuNode.status == .running {
if !wakuNode.filterSubscribed {
Button("Resub") {
wakuNode.resubscribe()
}
.buttonStyle(.bordered)
.controlSize(.small)
}
Button("Stop") {
wakuNode.stop()
}
.buttonStyle(.bordered)
.controlSize(.small)
}
}
.padding()
.background(Color.gray.opacity(0.1))
// Messages list
ScrollViewReader { proxy in
ScrollView {
LazyVStack(alignment: .leading, spacing: 8) {
ForEach(wakuNode.receivedMessages.reversed()) { message in
MessageBubble(message: message)
.id(message.id)
}
}
.padding()
}
.onChange(of: wakuNode.receivedMessages.count) { _, newCount in
if let lastMessage = wakuNode.receivedMessages.first {
withAnimation {
proxy.scrollTo(lastMessage.id, anchor: .bottom)
}
}
}
}
Divider()
// Message input
HStack(spacing: 12) {
TextField("Message", text: $messageText)
.textFieldStyle(.roundedBorder)
.disabled(wakuNode.status != .running)
Button(action: sendMessage) {
Image(systemName: "paperplane.fill")
.foregroundColor(.white)
.padding(10)
.background(canSend ? Color.blue : Color.gray)
.clipShape(Circle())
}
.disabled(!canSend)
}
.padding()
.background(Color.gray.opacity(0.1))
}
// Toast overlay for errors
VStack {
ForEach(wakuNode.errorQueue) { error in
ToastView(error: error) {
wakuNode.dismissError(error)
}
.transition(.asymmetric(
insertion: .move(edge: .top).combined(with: .opacity),
removal: .opacity
))
}
Spacer()
}
.padding(.top, 8)
.animation(.easeInOut(duration: 0.3), value: wakuNode.errorQueue)
}
}
private var statusColor: Color {
switch wakuNode.status {
case .stopped: return .gray
case .starting: return .yellow
case .running: return .green
case .error: return .red
}
}
@ViewBuilder
private var filterStatusView: some View {
if wakuNode.filterSubscribed {
Text("Filter OK")
.foregroundColor(.green)
} else if wakuNode.failedSubscribeAttempts > 0 {
Text("Filter retrying (\(wakuNode.failedSubscribeAttempts))")
.foregroundColor(.orange)
} else {
Text("Filter pending")
.foregroundColor(.orange)
}
}
private var canSend: Bool {
wakuNode.status == .running && wakuNode.isConnected && !messageText.trimmingCharacters(in: .whitespaces).isEmpty
}
private func sendMessage() {
let text = messageText.trimmingCharacters(in: .whitespaces)
guard !text.isEmpty else { return }
wakuNode.publish(message: text)
messageText = ""
}
}
// MARK: - Toast View
struct ToastView: View {
let error: TimestampedError
let onDismiss: () -> Void
var body: some View {
HStack(spacing: 12) {
Image(systemName: "exclamationmark.triangle.fill")
.foregroundColor(.white)
Text(error.message)
.font(.subheadline)
.foregroundColor(.white)
.lineLimit(2)
Spacer()
Button(action: onDismiss) {
Image(systemName: "xmark.circle.fill")
.foregroundColor(.white.opacity(0.8))
.font(.title3)
}
.buttonStyle(.plain)
}
.padding(.horizontal, 16)
.padding(.vertical, 12)
.background(
RoundedRectangle(cornerRadius: 12)
.fill(Color.red.opacity(0.9))
.shadow(color: .black.opacity(0.2), radius: 8, x: 0, y: 4)
)
.padding(.horizontal, 16)
.padding(.vertical, 4)
}
}
// MARK: - Message Bubble
struct MessageBubble: View {
let message: WakuMessage
var body: some View {
VStack(alignment: .leading, spacing: 4) {
Text(message.payload)
.padding(10)
.background(Color.blue.opacity(0.1))
.cornerRadius(12)
Text(message.timestamp, style: .time)
.font(.caption2)
.foregroundColor(.secondary)
}
}
}
#Preview {
ContentView()
}


@@ -0,0 +1,36 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>CFBundleDevelopmentRegion</key>
<string>$(DEVELOPMENT_LANGUAGE)</string>
<key>CFBundleDisplayName</key>
<string>Waku Example</string>
<key>CFBundleExecutable</key>
<string>$(EXECUTABLE_NAME)</string>
<key>CFBundleIdentifier</key>
<string>org.waku.example</string>
<key>CFBundleInfoDictionaryVersion</key>
<string>6.0</string>
<key>CFBundleName</key>
<string>WakuExample</string>
<key>CFBundlePackageType</key>
<string>APPL</string>
<key>CFBundleShortVersionString</key>
<string>1.0</string>
<key>CFBundleVersion</key>
<string>1</string>
<key>NSAppTransportSecurity</key>
<dict>
<key>NSAllowsArbitraryLoads</key>
<true/>
</dict>
<key>UILaunchScreen</key>
<dict/>
<key>UISupportedInterfaceOrientations</key>
<array>
<string>UIInterfaceOrientationPortrait</string>
</array>
</dict>
</plist>


@@ -0,0 +1,15 @@
//
// WakuExample-Bridging-Header.h
// WakuExample
//
// Bridging header to expose libwaku C functions to Swift
//
#ifndef WakuExample_Bridging_Header_h
#define WakuExample_Bridging_Header_h
#import "libwaku.h"
#endif /* WakuExample_Bridging_Header_h */


@@ -0,0 +1,19 @@
//
// WakuExampleApp.swift
// WakuExample
//
// SwiftUI app entry point for Waku iOS example
//
import SwiftUI
@main
struct WakuExampleApp: App {
var body: some Scene {
WindowGroup {
ContentView()
}
}
}


@@ -0,0 +1,739 @@
//
// WakuNode.swift
// WakuExample
//
// Swift wrapper around libwaku C API for edge mode (lightpush + filter)
// Uses Swift actors for thread safety and UI responsiveness
//
import Foundation
// MARK: - Data Types
/// Message received from Waku network
struct WakuMessage: Identifiable, Equatable, Sendable {
let id: String // messageHash from Waku - unique identifier for deduplication
let payload: String
let contentTopic: String
let timestamp: Date
}
/// Waku node status
enum WakuNodeStatus: String, Sendable {
case stopped = "Stopped"
case starting = "Starting..."
case running = "Running"
case error = "Error"
}
/// Status updates from WakuActor to WakuNode
enum WakuStatusUpdate: Sendable {
case statusChanged(WakuNodeStatus)
case connectionChanged(isConnected: Bool)
case filterSubscriptionChanged(subscribed: Bool, failedAttempts: Int)
case maintenanceChanged(active: Bool)
case error(String)
}
/// Error with timestamp for toast queue
struct TimestampedError: Identifiable, Equatable {
let id = UUID()
let message: String
let timestamp: Date
static func == (lhs: TimestampedError, rhs: TimestampedError) -> Bool {
lhs.id == rhs.id
}
}
// MARK: - Callback Context for C API
private final class CallbackContext: @unchecked Sendable {
private let lock = NSLock()
private var _continuation: CheckedContinuation<(success: Bool, result: String?), Never>?
private var _resumed = false
var success: Bool = false
var result: String?
var continuation: CheckedContinuation<(success: Bool, result: String?), Never>? {
get {
lock.lock()
defer { lock.unlock() }
return _continuation
}
set {
lock.lock()
defer { lock.unlock() }
_continuation = newValue
}
}
/// Thread-safe resume - ensures continuation is only resumed once
/// Returns true if this call actually resumed, false if already resumed
@discardableResult
func resumeOnce(returning value: (success: Bool, result: String?)) -> Bool {
lock.lock()
defer { lock.unlock() }
guard !_resumed, let cont = _continuation else {
return false
}
_resumed = true
_continuation = nil
cont.resume(returning: value)
return true
}
}
// MARK: - WakuActor
/// Actor that isolates all Waku operations from the main thread
/// All C API calls and mutable state are contained here
actor WakuActor {
// MARK: - State
private var ctx: UnsafeMutableRawPointer?
private var seenMessageHashes: Set<String> = []
private var isSubscribed: Bool = false
private var isSubscribing: Bool = false
private var hasPeers: Bool = false
private var maintenanceTask: Task<Void, Never>?
private var eventProcessingTask: Task<Void, Never>?
// Stream continuations for communicating with UI
private var messageContinuation: AsyncStream<WakuMessage>.Continuation?
private var statusContinuation: AsyncStream<WakuStatusUpdate>.Continuation?
// Event stream from C callbacks
private var eventContinuation: AsyncStream<String>.Continuation?
// Configuration
let defaultPubsubTopic = "/waku/2/rs/1/0"
let defaultContentTopic = "/waku-ios-example/1/chat/proto"
private let staticPeer = "/dns4/node-01.do-ams3.waku.sandbox.status.im/tcp/30303/p2p/16Uiu2HAmPLe7Mzm8TsYUubgCAW1aJoeFScxrLj8ppHFivPo97bUZ"
// Subscription maintenance settings
private let maxFailedSubscribes = 3
private let retryWaitSeconds: UInt64 = 2_000_000_000 // 2 seconds in nanoseconds
private let maintenanceIntervalSeconds: UInt64 = 30_000_000_000 // 30 seconds in nanoseconds
private let maxSeenHashes = 1000
// MARK: - Static callback storage (for C callbacks)
// We need a way for C callbacks to reach the actor
// Using a simple static reference (safe because we only have one instance)
private static var sharedEventContinuation: AsyncStream<String>.Continuation?
private static let eventCallback: WakuCallBack = { ret, msg, len, userData in
guard ret == RET_OK, let msg = msg else { return }
let str = String(cString: msg)
WakuActor.sharedEventContinuation?.yield(str)
}
private static let syncCallback: WakuCallBack = { ret, msg, len, userData in
guard let userData = userData else { return }
let context = Unmanaged<CallbackContext>.fromOpaque(userData).takeUnretainedValue()
let success = (ret == RET_OK)
var resultStr: String? = nil
if let msg = msg {
resultStr = String(cString: msg)
}
context.resumeOnce(returning: (success, resultStr))
}
// MARK: - Stream Setup
func setMessageContinuation(_ continuation: AsyncStream<WakuMessage>.Continuation?) {
self.messageContinuation = continuation
}
func setStatusContinuation(_ continuation: AsyncStream<WakuStatusUpdate>.Continuation?) {
self.statusContinuation = continuation
}
// MARK: - Public API
var isRunning: Bool {
ctx != nil
}
var hasConnectedPeers: Bool {
hasPeers
}
func start() async {
guard ctx == nil else {
print("[WakuActor] Already started")
return
}
statusContinuation?.yield(.statusChanged(.starting))
// Create event stream for C callbacks
let eventStream = AsyncStream<String> { continuation in
self.eventContinuation = continuation
WakuActor.sharedEventContinuation = continuation
}
// Start event processing task
eventProcessingTask = Task { [weak self] in
for await eventJson in eventStream {
await self?.handleEvent(eventJson)
}
}
// Initialize the node
let success = await initializeNode()
if success {
statusContinuation?.yield(.statusChanged(.running))
// Connect to peer
let connected = await connectToPeer()
if connected {
hasPeers = true
statusContinuation?.yield(.connectionChanged(isConnected: true))
// Start maintenance loop
startMaintenanceLoop()
} else {
statusContinuation?.yield(.error("Failed to connect to service peer"))
}
}
}
func stop() async {
guard let context = ctx else { return }
// Stop maintenance loop
maintenanceTask?.cancel()
maintenanceTask = nil
// Stop event processing
eventProcessingTask?.cancel()
eventProcessingTask = nil
// Close event stream
eventContinuation?.finish()
eventContinuation = nil
WakuActor.sharedEventContinuation = nil
statusContinuation?.yield(.statusChanged(.stopped))
statusContinuation?.yield(.connectionChanged(isConnected: false))
statusContinuation?.yield(.filterSubscriptionChanged(subscribed: false, failedAttempts: 0))
statusContinuation?.yield(.maintenanceChanged(active: false))
// Reset state
let ctxToStop = context
ctx = nil
isSubscribed = false
isSubscribing = false
hasPeers = false
seenMessageHashes.removeAll()
// Unsubscribe and stop in background (fire and forget)
Task.detached {
// Unsubscribe
_ = await self.callWakuSync { waku_filter_unsubscribe_all(ctxToStop, WakuActor.syncCallback, $0) }
print("[WakuActor] Unsubscribed from filter")
// Stop
_ = await self.callWakuSync { waku_stop(ctxToStop, WakuActor.syncCallback, $0) }
print("[WakuActor] Node stopped")
// Destroy
_ = await self.callWakuSync { waku_destroy(ctxToStop, WakuActor.syncCallback, $0) }
print("[WakuActor] Node destroyed")
}
}
func publish(message: String, contentTopic: String? = nil) async {
guard let context = ctx else {
print("[WakuActor] Node not started")
return
}
guard hasPeers else {
print("[WakuActor] No peers connected yet")
statusContinuation?.yield(.error("No peers connected yet. Please wait..."))
return
}
let topic = contentTopic ?? defaultContentTopic
guard let payloadData = message.data(using: .utf8) else { return }
let payloadBase64 = payloadData.base64EncodedString()
let timestamp = Int64(Date().timeIntervalSince1970 * 1_000_000_000)
let jsonMessage = """
{"payload":"\(payloadBase64)","contentTopic":"\(topic)","timestamp":\(timestamp)}
"""
let result = await callWakuSync { userData in
waku_lightpush_publish(
context,
self.defaultPubsubTopic,
jsonMessage,
WakuActor.syncCallback,
userData
)
}
if result.success {
print("[WakuActor] Published message")
} else {
print("[WakuActor] Publish error: \(result.result ?? "unknown")")
statusContinuation?.yield(.error("Failed to send message"))
}
}
func resubscribe() async {
print("[WakuActor] Force resubscribe requested")
isSubscribed = false
isSubscribing = false
statusContinuation?.yield(.filterSubscriptionChanged(subscribed: false, failedAttempts: 0))
_ = await subscribe()
}
// MARK: - Private Methods
private func initializeNode() async -> Bool {
let config = """
{
"tcpPort": 60000,
"clusterId": 1,
"shards": [0],
"relay": false,
"lightpush": true,
"filter": true,
"logLevel": "DEBUG",
"discv5Discovery": true,
"discv5BootstrapNodes": [
"enr:-QESuEB4Dchgjn7gfAvwB00CxTA-nGiyk-aALI-H4dYSZD3rUk7bZHmP8d2U6xDiQ2vZffpo45Jp7zKNdnwDUx6g4o6XAYJpZIJ2NIJpcIRA4VDAim11bHRpYWRkcnO4XAArNiZub2RlLTAxLmRvLWFtczMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwAtNiZub2RlLTAxLmRvLWFtczMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQOvD3S3jUNICsrOILlmhENiWAMmMVlAl6-Q8wRB7hidY4N0Y3CCdl-DdWRwgiMohXdha3UyDw",
"enr:-QEkuEBIkb8q8_mrorHndoXH9t5N6ZfD-jehQCrYeoJDPHqT0l0wyaONa2-piRQsi3oVKAzDShDVeoQhy0uwN1xbZfPZAYJpZIJ2NIJpcIQiQlleim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwA2Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQKnGt-GSgqPSf3IAPM7bFgTlpczpMZZLF3geeoNNsxzSoN0Y3CCdl-DdWRwgiMohXdha3UyDw"
],
"discv5UdpPort": 9999,
"dnsDiscovery": true,
"dnsDiscoveryUrl": "enrtree://AOGYWMBYOUIMOENHXCHILPKY3ZRFEULMFI4DOM442QSZ73TT2A7VI@test.waku.nodes.status.im",
"dnsDiscoveryNameServers": ["8.8.8.8", "1.0.0.1"]
}
"""
// Create node - waku_new is special, it returns the context directly
let createResult = await withCheckedContinuation { (continuation: CheckedContinuation<(ctx: UnsafeMutableRawPointer?, success: Bool, result: String?), Never>) in
let callbackCtx = CallbackContext()
let userDataPtr = Unmanaged.passRetained(callbackCtx).toOpaque()
// Set up a simple callback for waku_new
let newCtx = waku_new(config, { ret, msg, len, userData in
guard let userData = userData else { return }
let context = Unmanaged<CallbackContext>.fromOpaque(userData).takeUnretainedValue()
context.success = (ret == RET_OK)
if let msg = msg {
context.result = String(cString: msg)
}
}, userDataPtr)
// Small delay to ensure callback completes
DispatchQueue.global().asyncAfter(deadline: .now() + 0.1) {
Unmanaged<CallbackContext>.fromOpaque(userDataPtr).release()
continuation.resume(returning: (newCtx, callbackCtx.success, callbackCtx.result))
}
}
guard createResult.ctx != nil else {
statusContinuation?.yield(.statusChanged(.error))
statusContinuation?.yield(.error("Failed to create node: \(createResult.result ?? "unknown")"))
return false
}
ctx = createResult.ctx
// Set event callback
waku_set_event_callback(ctx, WakuActor.eventCallback, nil)
// Start node
let startResult = await callWakuSync { userData in
waku_start(self.ctx, WakuActor.syncCallback, userData)
}
guard startResult.success else {
statusContinuation?.yield(.statusChanged(.error))
statusContinuation?.yield(.error("Failed to start node: \(startResult.result ?? "unknown")"))
ctx = nil
return false
}
print("[WakuActor] Node started")
return true
}
private func connectToPeer() async -> Bool {
guard let context = ctx else { return false }
print("[WakuActor] Connecting to static peer...")
let result = await callWakuSync { userData in
waku_connect(context, self.staticPeer, 10000, WakuActor.syncCallback, userData)
}
if result.success {
print("[WakuActor] Connected to peer successfully")
return true
} else {
print("[WakuActor] Failed to connect: \(result.result ?? "unknown")")
return false
}
}
private func subscribe(contentTopic: String? = nil) async -> Bool {
guard let context = ctx else { return false }
guard !isSubscribed && !isSubscribing else { return isSubscribed }
isSubscribing = true
let topic = contentTopic ?? defaultContentTopic
let result = await callWakuSync { userData in
waku_filter_subscribe(
context,
self.defaultPubsubTopic,
topic,
WakuActor.syncCallback,
userData
)
}
isSubscribing = false
if result.success {
print("[WakuActor] Subscribe request successful to \(topic)")
isSubscribed = true
statusContinuation?.yield(.filterSubscriptionChanged(subscribed: true, failedAttempts: 0))
return true
} else {
print("[WakuActor] Subscribe error: \(result.result ?? "unknown")")
isSubscribed = false
return false
}
}
private func pingFilterPeer() async -> Bool {
guard let context = ctx else { return false }
let result = await callWakuSync { userData in
waku_ping_peer(
context,
self.staticPeer,
10000,
WakuActor.syncCallback,
userData
)
}
return result.success
}
// MARK: - Subscription Maintenance
private func startMaintenanceLoop() {
guard maintenanceTask == nil else {
print("[WakuActor] Maintenance loop already running")
return
}
statusContinuation?.yield(.maintenanceChanged(active: true))
print("[WakuActor] Starting subscription maintenance loop")
maintenanceTask = Task { [weak self] in
guard let self = self else { return }
var failedSubscribes = 0
var isFirstPingOnConnection = true
while !Task.isCancelled {
guard await self.isRunning else { break }
print("[WakuActor] Maintaining subscription...")
let pingSuccess = await self.pingFilterPeer()
let currentlySubscribed = await self.isSubscribed
if pingSuccess && currentlySubscribed {
print("[WakuActor] Subscription is live, waiting 30s")
try? await Task.sleep(nanoseconds: self.maintenanceIntervalSeconds)
continue
}
if !isFirstPingOnConnection && !pingSuccess {
print("[WakuActor] Ping failed - subscription may be lost")
await self.statusContinuation?.yield(.filterSubscriptionChanged(subscribed: false, failedAttempts: failedSubscribes))
}
isFirstPingOnConnection = false
print("[WakuActor] No active subscription found. Sending subscribe request...")
await self.resetSubscriptionState()
let subscribeSuccess = await self.subscribe()
if subscribeSuccess {
print("[WakuActor] Subscribe request successful")
failedSubscribes = 0
try? await Task.sleep(nanoseconds: self.maintenanceIntervalSeconds)
continue
}
failedSubscribes += 1
await self.statusContinuation?.yield(.filterSubscriptionChanged(subscribed: false, failedAttempts: failedSubscribes))
print("[WakuActor] Subscribe request failed. Attempt \(failedSubscribes)/\(self.maxFailedSubscribes)")
if failedSubscribes < self.maxFailedSubscribes {
print("[WakuActor] Retrying in 2s...")
try? await Task.sleep(nanoseconds: self.retryWaitSeconds)
} else {
print("[WakuActor] Max subscribe failures reached")
await self.statusContinuation?.yield(.error("Filter subscription failed after \(self.maxFailedSubscribes) attempts"))
failedSubscribes = 0
try? await Task.sleep(nanoseconds: self.maintenanceIntervalSeconds)
}
}
print("[WakuActor] Subscription maintenance loop stopped")
await self.statusContinuation?.yield(.maintenanceChanged(active: false))
}
}
private func resetSubscriptionState() {
isSubscribed = false
isSubscribing = false
}
// MARK: - Event Handling
private func handleEvent(_ eventJson: String) {
guard let data = eventJson.data(using: .utf8),
let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
let eventType = json["eventType"] as? String else {
return
}
if eventType == "connection_change" {
handleConnectionChange(json)
} else if eventType == "message" {
handleMessage(json)
}
}
private func handleConnectionChange(_ json: [String: Any]) {
guard let peerEvent = json["peerEvent"] as? String else { return }
if peerEvent == "Joined" || peerEvent == "Identified" {
hasPeers = true
statusContinuation?.yield(.connectionChanged(isConnected: true))
} else if peerEvent == "Left" {
statusContinuation?.yield(.filterSubscriptionChanged(subscribed: false, failedAttempts: 0))
}
}
private func handleMessage(_ json: [String: Any]) {
guard let messageHash = json["messageHash"] as? String,
let wakuMessage = json["wakuMessage"] as? [String: Any],
let payloadBase64 = wakuMessage["payload"] as? String,
let contentTopic = wakuMessage["contentTopic"] as? String,
let payloadData = Data(base64Encoded: payloadBase64),
let payloadString = String(data: payloadData, encoding: .utf8) else {
return
}
// Deduplicate
guard !seenMessageHashes.contains(messageHash) else {
return
}
seenMessageHashes.insert(messageHash)
// Limit memory usage
if seenMessageHashes.count > maxSeenHashes {
seenMessageHashes.removeAll()
}
let message = WakuMessage(
id: messageHash,
payload: payloadString,
contentTopic: contentTopic,
timestamp: Date()
)
messageContinuation?.yield(message)
}
// MARK: - Helper for synchronous C calls
private func callWakuSync(_ work: @escaping (UnsafeMutableRawPointer) -> Void) async -> (success: Bool, result: String?) {
await withCheckedContinuation { continuation in
let context = CallbackContext()
context.continuation = continuation
let userDataPtr = Unmanaged.passRetained(context).toOpaque()
work(userDataPtr)
// Set a timeout to avoid hanging forever
DispatchQueue.global().asyncAfter(deadline: .now() + 15) {
// Try to resume with timeout - will be ignored if callback already resumed
let didTimeout = context.resumeOnce(returning: (false, "Timeout"))
if didTimeout {
print("[WakuActor] Call timed out")
}
Unmanaged<CallbackContext>.fromOpaque(userDataPtr).release()
}
}
}
}
// MARK: - WakuNode (MainActor UI Wrapper)
/// Main-thread UI wrapper that consumes updates from WakuActor via AsyncStreams
@MainActor
class WakuNode: ObservableObject {
// MARK: - Published Properties (UI State)
@Published var status: WakuNodeStatus = .stopped
@Published var receivedMessages: [WakuMessage] = []
@Published var errorQueue: [TimestampedError] = []
@Published var isConnected: Bool = false
@Published var filterSubscribed: Bool = false
@Published var subscriptionMaintenanceActive: Bool = false
@Published var failedSubscribeAttempts: Int = 0
// Topics (read-only access to actor's config)
var defaultPubsubTopic: String { "/waku/2/rs/1/0" }
var defaultContentTopic: String { "/waku-ios-example/1/chat/proto" }
// MARK: - Private Properties
private let actor = WakuActor()
private var messageTask: Task<Void, Never>?
private var statusTask: Task<Void, Never>?
// MARK: - Initialization
init() {}
deinit {
messageTask?.cancel()
statusTask?.cancel()
}
// MARK: - Public API
func start() {
guard status == .stopped || status == .error else {
print("[WakuNode] Already started or starting")
return
}
// Create message stream
let messageStream = AsyncStream<WakuMessage> { continuation in
Task {
await self.actor.setMessageContinuation(continuation)
}
}
// Create status stream
let statusStream = AsyncStream<WakuStatusUpdate> { continuation in
Task {
await self.actor.setStatusContinuation(continuation)
}
}
// Start consuming messages
messageTask = Task { @MainActor in
for await message in messageStream {
self.receivedMessages.insert(message, at: 0)
if self.receivedMessages.count > 100 {
self.receivedMessages.removeLast()
}
}
}
// Start consuming status updates
statusTask = Task { @MainActor in
for await update in statusStream {
self.handleStatusUpdate(update)
}
}
// Start the actor
Task {
await actor.start()
}
}
func stop() {
messageTask?.cancel()
messageTask = nil
statusTask?.cancel()
statusTask = nil
Task {
await actor.stop()
}
// Immediate UI update
status = .stopped
isConnected = false
filterSubscribed = false
subscriptionMaintenanceActive = false
failedSubscribeAttempts = 0
}
func publish(message: String, contentTopic: String? = nil) {
Task {
await actor.publish(message: message, contentTopic: contentTopic)
}
}
func resubscribe() {
Task {
await actor.resubscribe()
}
}
func dismissError(_ error: TimestampedError) {
errorQueue.removeAll { $0.id == error.id }
}
func dismissAllErrors() {
errorQueue.removeAll()
}
// MARK: - Private Methods
private func handleStatusUpdate(_ update: WakuStatusUpdate) {
switch update {
case .statusChanged(let newStatus):
status = newStatus
case .connectionChanged(let connected):
isConnected = connected
case .filterSubscriptionChanged(let subscribed, let attempts):
filterSubscribed = subscribed
failedSubscribeAttempts = attempts
case .maintenanceChanged(let active):
subscriptionMaintenanceActive = active
case .error(let message):
let error = TimestampedError(message: message, timestamp: Date())
errorQueue.append(error)
// Schedule auto-dismiss after 10 seconds
let errorId = error.id
Task { @MainActor in
try? await Task.sleep(nanoseconds: 10_000_000_000)
self.errorQueue.removeAll { $0.id == errorId }
}
}
}
}


@@ -0,0 +1,253 @@
// Generated manually and inspired by the one generated by the Nim Compiler.
// In order to see the header file generated by Nim just run `make libwaku`
// from the root repo folder and the header should be created in
// nimcache/release/libwaku/libwaku.h
#ifndef __libwaku__
#define __libwaku__
#include <stddef.h>
#include <stdint.h>
// Possible return values for the functions that return int
#define RET_OK 0
#define RET_ERR 1
#define RET_MISSING_CALLBACK 2
#ifdef __cplusplus
extern "C" {
#endif
typedef void (*WakuCallBack) (int callerRet, const char* msg, size_t len, void* userData);
// Creates a new instance of the waku node.
// Sets up the waku node from the given configuration.
// Returns a pointer to the Context needed by the rest of the API functions.
void* waku_new(
const char* configJson,
WakuCallBack callback,
void* userData);
int waku_start(void* ctx,
WakuCallBack callback,
void* userData);
int waku_stop(void* ctx,
WakuCallBack callback,
void* userData);
// Destroys an instance of a waku node created with waku_new
int waku_destroy(void* ctx,
WakuCallBack callback,
void* userData);
int waku_version(void* ctx,
WakuCallBack callback,
void* userData);
// Sets a callback that will be invoked whenever an event occurs.
// The callback must be fast and non-blocking, and should be thread-safe, as it may be invoked from another thread.
void waku_set_event_callback(void* ctx,
WakuCallBack callback,
void* userData);
int waku_content_topic(void* ctx,
const char* appName,
unsigned int appVersion,
const char* contentTopicName,
const char* encoding,
WakuCallBack callback,
void* userData);
int waku_pubsub_topic(void* ctx,
const char* topicName,
WakuCallBack callback,
void* userData);
int waku_default_pubsub_topic(void* ctx,
WakuCallBack callback,
void* userData);
int waku_relay_publish(void* ctx,
const char* pubSubTopic,
const char* jsonWakuMessage,
unsigned int timeoutMs,
WakuCallBack callback,
void* userData);
int waku_lightpush_publish(void* ctx,
const char* pubSubTopic,
const char* jsonWakuMessage,
WakuCallBack callback,
void* userData);
int waku_relay_subscribe(void* ctx,
const char* pubSubTopic,
WakuCallBack callback,
void* userData);
int waku_relay_add_protected_shard(void* ctx,
int clusterId,
int shardId,
char* publicKey,
WakuCallBack callback,
void* userData);
int waku_relay_unsubscribe(void* ctx,
const char* pubSubTopic,
WakuCallBack callback,
void* userData);
int waku_filter_subscribe(void* ctx,
const char* pubSubTopic,
const char* contentTopics,
WakuCallBack callback,
void* userData);
int waku_filter_unsubscribe(void* ctx,
const char* pubSubTopic,
const char* contentTopics,
WakuCallBack callback,
void* userData);
int waku_filter_unsubscribe_all(void* ctx,
WakuCallBack callback,
void* userData);
int waku_relay_get_num_connected_peers(void* ctx,
const char* pubSubTopic,
WakuCallBack callback,
void* userData);
int waku_relay_get_connected_peers(void* ctx,
const char* pubSubTopic,
WakuCallBack callback,
void* userData);
int waku_relay_get_num_peers_in_mesh(void* ctx,
const char* pubSubTopic,
WakuCallBack callback,
void* userData);
int waku_relay_get_peers_in_mesh(void* ctx,
const char* pubSubTopic,
WakuCallBack callback,
void* userData);
int waku_store_query(void* ctx,
const char* jsonQuery,
const char* peerAddr,
int timeoutMs,
WakuCallBack callback,
void* userData);
int waku_connect(void* ctx,
const char* peerMultiAddr,
unsigned int timeoutMs,
WakuCallBack callback,
void* userData);
int waku_disconnect_peer_by_id(void* ctx,
const char* peerId,
WakuCallBack callback,
void* userData);
int waku_disconnect_all_peers(void* ctx,
WakuCallBack callback,
void* userData);
int waku_dial_peer(void* ctx,
const char* peerMultiAddr,
const char* protocol,
int timeoutMs,
WakuCallBack callback,
void* userData);
int waku_dial_peer_by_id(void* ctx,
const char* peerId,
const char* protocol,
int timeoutMs,
WakuCallBack callback,
void* userData);
int waku_get_peerids_from_peerstore(void* ctx,
WakuCallBack callback,
void* userData);
int waku_get_connected_peers_info(void* ctx,
WakuCallBack callback,
void* userData);
int waku_get_peerids_by_protocol(void* ctx,
const char* protocol,
WakuCallBack callback,
void* userData);
int waku_listen_addresses(void* ctx,
WakuCallBack callback,
void* userData);
int waku_get_connected_peers(void* ctx,
WakuCallBack callback,
void* userData);
// Returns a list of multiaddresses given a URL to a DNS-discoverable ENR tree
// Parameters
// char* entTreeUrl: URL pointing to a discoverable ENR tree
// char* nameDnsServer: the nameserver used to resolve the ENR tree URL
// int timeoutMs: timeout for the call, in milliseconds
int waku_dns_discovery(void* ctx,
const char* entTreeUrl,
const char* nameDnsServer,
int timeoutMs,
WakuCallBack callback,
void* userData);
// Updates the bootnode list used for discovering new peers via DiscoveryV5
// bootnodes - JSON array containing the bootnode ENRs i.e. `["enr:...", "enr:..."]`
int waku_discv5_update_bootnodes(void* ctx,
char* bootnodes,
WakuCallBack callback,
void* userData);
int waku_start_discv5(void* ctx,
WakuCallBack callback,
void* userData);
int waku_stop_discv5(void* ctx,
WakuCallBack callback,
void* userData);
// Retrieves the ENR information
int waku_get_my_enr(void* ctx,
WakuCallBack callback,
void* userData);
int waku_get_my_peerid(void* ctx,
WakuCallBack callback,
void* userData);
int waku_get_metrics(void* ctx,
WakuCallBack callback,
void* userData);
int waku_peer_exchange_request(void* ctx,
int numPeers,
WakuCallBack callback,
void* userData);
int waku_ping_peer(void* ctx,
const char* peerAddr,
int timeoutMs,
WakuCallBack callback,
void* userData);
int waku_is_online(void* ctx,
WakuCallBack callback,
void* userData);
#ifdef __cplusplus
}
#endif
#endif /* __libwaku__ */

examples/ios/project.yml Normal file

@@ -0,0 +1,47 @@
name: WakuExample
options:
bundleIdPrefix: org.waku
deploymentTarget:
iOS: "14.0"
xcodeVersion: "15.0"
settings:
SWIFT_VERSION: "5.0"
SUPPORTED_PLATFORMS: "iphoneos iphonesimulator"
SUPPORTS_MACCATALYST: "NO"
targets:
WakuExample:
type: application
platform: iOS
supportedDestinations: [iOS]
sources:
- WakuExample
settings:
INFOPLIST_FILE: WakuExample/Info.plist
PRODUCT_BUNDLE_IDENTIFIER: org.waku.example
SWIFT_OBJC_BRIDGING_HEADER: WakuExample/WakuExample-Bridging-Header.h
HEADER_SEARCH_PATHS:
- "$(PROJECT_DIR)/WakuExample"
"LIBRARY_SEARCH_PATHS[sdk=iphoneos*]":
- "$(PROJECT_DIR)/../../build/ios/iphoneos-arm64"
"LIBRARY_SEARCH_PATHS[sdk=iphonesimulator*]":
- "$(PROJECT_DIR)/../../build/ios/iphonesimulator-arm64"
OTHER_LDFLAGS:
- "-lc++"
- "-lwaku"
IPHONEOS_DEPLOYMENT_TARGET: "14.0"
info:
path: WakuExample/Info.plist
properties:
CFBundleName: WakuExample
CFBundleDisplayName: Waku Example
CFBundleIdentifier: org.waku.example
CFBundleVersion: "1"
CFBundleShortVersionString: "1.0"
UILaunchScreen: {}
UISupportedInterfaceOrientations:
- UIInterfaceOrientationPortrait
NSAppTransportSecurity:
NSAllowsArbitraryLoads: true


@@ -7,14 +7,14 @@ import
   confutils,
   libp2p/crypto/crypto,
   libp2p/crypto/curve25519,
+  libp2p/protocols/mix,
+  libp2p/protocols/mix/curve25519,
   libp2p/multiaddress,
   eth/keys,
   eth/p2p/discoveryv5/enr,
   metrics,
   metrics/chronos_httpserver
-import mix, mix/mix_protocol, mix/curve25519
 import
   waku/[
     common/logging,
@@ -51,7 +51,6 @@ proc splitPeerIdAndAddr(maddr: string): (string, string) =
 proc setupAndPublish(rng: ref HmacDrbgContext, conf: LightPushMixConf) {.async.} =
   # use notice to filter all waku messaging
   setupLog(logging.LogLevel.DEBUG, logging.LogFormat.TEXT)
   notice "starting publisher", wakuPort = conf.port
   let
@@ -114,17 +113,8 @@ proc setupAndPublish(rng: ref HmacDrbgContext, conf: LightPushMixConf) {.async.}
   let dPeerId = PeerId.init(destPeerId).valueOr:
     error "Failed to initialize PeerId", error = error
     return
-  var conn: Connection
-  if not conf.mixDisabled:
-    conn = node.wakuMix.toConnection(
-      MixDestination.init(dPeerId, pxPeerInfo.addrs[0]), # destination lightpush peer
-      WakuLightPushCodec, # protocol codec which will be used over the mix connection
-      Opt.some(MixParameters(expectReply: Opt.some(true), numSurbs: Opt.some(byte(1)))),
-      # mix parameters indicating we expect a single reply
-    ).valueOr:
-      error "failed to create mix connection", error = error
-      return
-  await node.mountRendezvousClient(clusterId)
   await node.start()
   node.peerManager.start()
   node.startPeerExchangeLoop()
@@ -145,20 +135,26 @@ proc setupAndPublish(rng: ref HmacDrbgContext, conf: LightPushMixConf) {.async.}
   var i = 0
   while i < conf.numMsgs:
+    var conn: Connection
     if conf.mixDisabled:
       let connOpt = await node.peerManager.dialPeer(dPeerId, WakuLightPushCodec)
       if connOpt.isNone():
         error "failed to dial peer with WakuLightPushCodec", target_peer_id = dPeerId
         return
       conn = connOpt.get()
+    else:
+      conn = node.wakuMix.toConnection(
+        MixDestination.exitNode(dPeerId), # destination lightpush peer
+        WakuLightPushCodec, # protocol codec which will be used over the mix connection
+        MixParameters(expectReply: Opt.some(true), numSurbs: Opt.some(byte(1))),
+        # mix parameters indicating we expect a single reply
+      ).valueOr:
+        error "failed to create mix connection", error = error
+        return
     i = i + 1
     let text =
       """Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam venenatis magna ut tortor faucibus, in vestibulum nibh commodo. Aenean eget vestibulum augue. Nullam suscipit urna non nunc efficitur, at iaculis nisl consequat. Mauris quis ultrices elit. Suspendisse lobortis odio vitae laoreet facilisis. Cras ornare sem felis, at vulputate magna aliquam ac. Duis quis est ultricies, euismod nulla ac, interdum dui. Maecenas sit amet est vitae enim commodo gravida. Proin vitae elit nulla. Donec tempor dolor lectus, in faucibus velit elementum quis. Donec non mauris eu nibh faucibus cursus ut egestas dolor. Aliquam venenatis ligula id velit pulvinar malesuada. Vestibulum scelerisque, justo non porta gravida, nulla justo tempor purus, at sollicitudin erat erat vel libero.
 Fusce nec eros eu metus tristique aliquet. Sed ut magna sagittis, vulputate diam sit amet, aliquam magna. Aenean sollicitudin velit lacus, eu ultrices magna semper at. Integer vitae felis ligula. In a eros nec risus condimentum tincidunt fermentum sit amet ex. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Nullam vitae justo maximus, fringilla tellus nec, rutrum purus. Etiam efficitur nisi dapibus euismod vestibulum. Phasellus at felis elementum, tristique nulla ac, consectetur neque.
 Maecenas hendrerit nibh eget velit rutrum, in ornare mauris molestie. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia curae; Praesent dignissim efficitur eros, sit amet rutrum justo mattis a. Fusce mollis neque at erat placerat bibendum. Ut fringilla fringilla orci, ut fringilla metus fermentum vel. In hac habitasse platea dictumst. Donec hendrerit porttitor odio. Suspendisse ornare sollicitudin mauris, sodales pulvinar velit finibus vel. Fusce id pulvinar neque. Suspendisse eget tincidunt sapien, ac accumsan turpis.
 Curabitur cursus tincidunt leo at aliquet. Nunc dapibus quam id venenatis varius. Aenean eget augue vel velit dapibus aliquam. Nulla facilisi. Curabitur cursus, turpis vel congue volutpat, tellus eros cursus lacus, eu fringilla turpis orci non ipsum. In hac habitasse platea dictumst. Nulla aliquam nisl a nunc placerat, eget dignissim felis pulvinar. Fusce sed porta mauris. Donec sodales arcu in nisl sodales, quis posuere massa ultricies. Nam feugiat massa eget felis ultricies finibus. Nunc magna nulla, interdum a elit vel, egestas efficitur urna. Ut posuere tincidunt odio in maximus. Sed at dignissim est.
 Morbi accumsan elementum ligula ut fringilla. Praesent in ex metus. Phasellus urna est, tempus sit amet elementum vitae, sollicitudin vel ipsum. Fusce hendrerit eleifend dignissim. Maecenas tempor dapibus dui quis laoreet. Cras tincidunt sed ipsum sed pellentesque. Proin ut tellus nec ipsum varius interdum. Curabitur id velit ligula. Etiam sapien nulla, cursus sodales orci eu, porta lobortis nunc. Nunc at dapibus velit. Nulla et nunc vehicula, condimentum erat quis, elementum dolor. Quisque eu metus fermentum, vestibulum tellus at, sollicitudin odio. Ut vel neque justo.
 Praesent porta porta velit, vel porttitor sem. Donec sagittis at nulla venenatis iaculis. Nullam vel eleifend felis. Nullam a pellentesque lectus. Aliquam tincidunt semper dui sed bibendum. Donec hendrerit, urna et cursus dictum, neque neque convallis magna, id condimentum sem urna quis massa. Fusce non quam vulputate, fermentum mauris at, malesuada ipsum. Mauris id pellentesque libero. Donec vel erat ullamcorper, dapibus quam id, imperdiet urna. Praesent sed ligula ut est pellentesque pharetra quis et diam. Ut placerat lorem eget mi fermentum aliquet.
 This is message #""" &
       $i & """ sent from a publisher using mix. End of transmission."""
     let message = WakuMessage(
@@ -168,25 +164,34 @@ proc setupAndPublish(rng: ref HmacDrbgContext, conf: LightPushMixConf) {.async.}
       timestamp: getNowInNanosecondTime(),
     ) # current timestamp
-    let res = await node.wakuLightpushClient.publishWithConn(
-      LightpushPubsubTopic, message, conn, dPeerId
-    )
-    if res.isOk():
-      lp_mix_success.inc()
-      notice "published message",
-        text = text,
-        timestamp = message.timestamp,
-        psTopic = LightpushPubsubTopic,
-        contentTopic = LightpushContentTopic
-    else:
-      error "failed to publish message", error = $res.error
-      lp_mix_failed.inc(labelValues = ["publish_error"])
+    let res =
+      await node.wakuLightpushClient.publish(some(LightpushPubsubTopic), message, conn)
+
+    let startTime = getNowInNanosecondTime()
+
+    (
+      await node.wakuLightpushClient.publishWithConn(
+        LightpushPubsubTopic, message, conn, dPeerId
+      )
+    ).isOkOr:
+      error "failed to publish message via mix", error = error.desc
+      lp_mix_failed.inc(labelValues = ["publish_error"])
+      return
+
+    let latency = float64(getNowInNanosecondTime() - startTime) / 1_000_000.0
+    lp_mix_latency.observe(latency)
+    lp_mix_success.inc()
+    notice "published message",
+      text = text,
+      timestamp = message.timestamp,
+      latency = latency,
+      psTopic = LightpushPubsubTopic,
+      contentTopic = LightpushContentTopic
     if conf.mixDisabled:
       await conn.close()
     await sleepAsync(conf.msgIntervalMilliseconds)
-  info "###########Sent all messages via mix"
+  info "Sent all messages via mix"
   quit(0)

 when isMainModule:


@@ -6,3 +6,6 @@ declarePublicCounter lp_mix_success, "number of lightpush messages sent via mix"
 declarePublicCounter lp_mix_failed,
   "number of lightpush messages failed via mix", labels = ["error"]
+
+declarePublicHistogram lp_mix_latency,
+  "lightpush publish latency via mix in milliseconds"


@@ -54,13 +54,9 @@ proc setupAndPublish(rng: ref HmacDrbgContext) {.async.} =
     "Building ENR with relay sharding failed"
   )

-  let recordRes = enrBuilder.build()
-  let record =
-    if recordRes.isErr():
-      error "failed to create enr record", error = recordRes.error
-      quit(QuitFailure)
-    else:
-      recordRes.get()
+  let record = enrBuilder.build().valueOr:
+    error "failed to create enr record", error = error
+    quit(QuitFailure)

   var builder = WakuNodeBuilder.init()
   builder.withNodeKey(nodeKey)


@@ -49,13 +49,9 @@ proc setupAndPublish(rng: ref HmacDrbgContext) {.async.} =
   var enrBuilder = EnrBuilder.init(nodeKey)

-  let recordRes = enrBuilder.build()
-  let record =
-    if recordRes.isErr():
-      error "failed to create enr record", error = recordRes.error
-      quit(QuitFailure)
-    else:
-      recordRes.get()
+  let record = enrBuilder.build().valueOr:
+    error "failed to create enr record", error = error
+    quit(QuitFailure)

   var builder = WakuNodeBuilder.init()
   builder.withNodeKey(nodeKey)


@ -1,23 +1,32 @@
from flask import Flask
import ctypes import ctypes
import argparse import argparse
import sys
if sys.platform == "darwin":
_lib_ext = "dylib"
elif sys.platform == "win32":
_lib_ext = "dll"
else:
_lib_ext = "so"
_lib_path = f"build/libwaku.{_lib_ext}"
libwaku = object libwaku = object
try: try:
# This python script should be run from the root repo folder # This python script should be run from the root repo folder
libwaku = ctypes.CDLL("build/libwaku.so") libwaku = ctypes.CDLL(_lib_path)
except Exception as e: except OSError as e:
print("Exception: ", e) print(f"Exception: {e}")
print(""" print(f"""
The 'libwaku.so' library can be created with the next command from The '{_lib_path}' library can be created with the next command from
the repo's root folder: `make libwaku`. the repo's root folder: `make libwaku`.
And it should build the library in 'build/libwaku.so'. And it should build the library in '{_lib_path}'.
Therefore, make sure the LD_LIBRARY_PATH env var points at the location that Therefore, make sure the library path env var points at the location that
contains the 'libwaku.so' library. contains the '{_lib_path}' library.
""") """)
exit(-1) exit(1)
def handle_event(ret, msg, user_data): def handle_event(ret, msg, user_data):
print("Event received: %s" % msg) print("Event received: %s" % msg)
@@ -102,8 +111,8 @@ print("Waku Relay enabled: {}".format(args.relay))
 # Set the event callback
 callback = callback_type(handle_event) # This line is important so that the callback is not gc'ed
-libwaku.waku_set_event_callback.argtypes = [callback_type, ctypes.c_void_p]
-libwaku.waku_set_event_callback(callback, ctypes.c_void_p(0))
+libwaku.set_event_callback.argtypes = [callback_type, ctypes.c_void_p]
+libwaku.set_event_callback(callback, ctypes.c_void_p(0))

 # Start the node
 libwaku.waku_start.argtypes = [ctypes.c_void_p,
@@ -117,32 +126,32 @@ libwaku.waku_start(ctx,
 # Subscribe to the default pubsub topic
 libwaku.waku_relay_subscribe.argtypes = [ctypes.c_void_p,
-                                         ctypes.c_char_p,
                                          callback_type,
-                                         ctypes.c_void_p]
+                                         ctypes.c_void_p,
+                                         ctypes.c_char_p]
 libwaku.waku_relay_subscribe(ctx,
-                             default_pubsub_topic.encode('utf-8'),
                              callback_type(
                                #onErrCb
                                lambda ret, msg, len:
                                print("Error calling waku_relay_subscribe: %s" %
                                      msg.decode('utf-8'))
                              ),
-                             ctypes.c_void_p(0))
+                             ctypes.c_void_p(0),
+                             default_pubsub_topic.encode('utf-8'))

 libwaku.waku_connect.argtypes = [ctypes.c_void_p,
-                                 ctypes.c_char_p,
-                                 ctypes.c_int,
                                  callback_type,
-                                 ctypes.c_void_p]
+                                 ctypes.c_void_p,
+                                 ctypes.c_char_p,
+                                 ctypes.c_int]
 libwaku.waku_connect(ctx,
-                     args.peer.encode('utf-8'),
-                     10000,
                      # onErrCb
                      callback_type(
                        lambda ret, msg, len:
                        print("Error calling waku_connect: %s" % msg.decode('utf-8'))),
-                     ctypes.c_void_p(0))
+                     ctypes.c_void_p(0),
+                     args.peer.encode('utf-8'),
+                     10000)

 # app = Flask(__name__)
 # @app.route("/")


@@ -27,7 +27,7 @@ public:
     void initialize(const QString& jsonConfig, WakuCallBack event_handler, void* userData) {
         ctx = waku_new(jsonConfig.toUtf8().constData(), WakuCallBack(event_handler), userData);
-        waku_set_event_callback(ctx, on_event_received, userData);
+        set_event_callback(ctx, on_event_received, userData);
         qDebug() << "Waku context initialized, ready to start.";
     }


@@ -3,22 +3,22 @@ use std::ffi::CString;
 use std::os::raw::{c_char, c_int, c_void};
 use std::{slice, thread, time};

-pub type WakuCallback = unsafe extern "C" fn(c_int, *const c_char, usize, *const c_void);
+pub type FFICallBack = unsafe extern "C" fn(c_int, *const c_char, usize, *const c_void);

 extern "C" {
     pub fn waku_new(
         config_json: *const u8,
-        cb: WakuCallback,
+        cb: FFICallBack,
         user_data: *const c_void,
     ) -> *mut c_void;

-    pub fn waku_version(ctx: *const c_void, cb: WakuCallback, user_data: *const c_void) -> c_int;
+    pub fn waku_version(ctx: *const c_void, cb: FFICallBack, user_data: *const c_void) -> c_int;

-    pub fn waku_start(ctx: *const c_void, cb: WakuCallback, user_data: *const c_void) -> c_int;
+    pub fn waku_start(ctx: *const c_void, cb: FFICallBack, user_data: *const c_void) -> c_int;

     pub fn waku_default_pubsub_topic(
         ctx: *mut c_void,
-        cb: WakuCallback,
+        cb: FFICallBack,
         user_data: *const c_void,
     ) -> *mut c_void;
 }
@@ -40,7 +40,7 @@ pub unsafe extern "C" fn trampoline<C>(
     closure(return_val, &buffer_utf8);
 }

-pub fn get_trampoline<C>(_closure: &C) -> WakuCallback
+pub fn get_trampoline<C>(_closure: &C) -> FFICallBack
 where
     C: FnMut(i32, &str),
 {


@@ -47,13 +47,9 @@ proc setupAndSubscribe(rng: ref HmacDrbgContext) {.async.} =
   var enrBuilder = EnrBuilder.init(nodeKey)

-  let recordRes = enrBuilder.build()
-  let record =
-    if recordRes.isErr():
-      error "failed to create enr record", error = recordRes.error
-      quit(QuitFailure)
-    else:
-      recordRes.get()
+  let record = enrBuilder.build().valueOr:
+    error "failed to create enr record", error = error
+    quit(QuitFailure)

   var builder = WakuNodeBuilder.init()
   builder.withNodeKey(nodeKey)


@@ -1,40 +0,0 @@
import std/options
import chronos, results, confutils, confutils/defs
import waku
type CliArgs = object
ethRpcEndpoint* {.
defaultValue: "", desc: "ETH RPC Endpoint, if passed, RLN is enabled"
.}: string
when isMainModule:
let args = CliArgs.load()
echo "Starting Waku node..."
let config =
if (args.ethRpcEndpoint == ""):
# Create a basic configuration for the Waku node
# No RLN as we don't have an ETH RPC Endpoint
NodeConfig.init(
protocolsConfig = ProtocolsConfig.init(entryNodes = @[], clusterId = 42)
)
else:
# Connect to TWN, use ETH RPC Endpoint for RLN
NodeConfig.init(ethRpcEndpoints = @[args.ethRpcEndpoint])
# Create the node using the library API's createNode function
let node = (waitFor createNode(config)).valueOr:
echo "Failed to create node: ", error
quit(QuitFailure)
echo("Waku node created successfully!")
# Start the node
(waitFor startWaku(addr node)).isOkOr:
echo "Failed to start node: ", error
quit(QuitFailure)
echo "Node started successfully!"
runForever()


@@ -1,6 +1,6 @@
 {.push raises: [].}

-import ../../apps/wakunode2/cli_args
+import tools/confutils/cli_args
 import waku/[common/logging, factory/[waku, networks_config]]

 import
   std/[options, strutils, os, sequtils],
@@ -18,13 +18,10 @@ proc setup*(): Waku =
   const versionString = "version / git commit hash: " & waku.git_version
   let rng = crypto.newRng()

-  let confRes = WakuNodeConf.load(version = versionString)
-  if confRes.isErr():
-    error "failure while loading the configuration", error = $confRes.error
-    quit(QuitFailure)
-  var conf = confRes.get()
+  let conf = WakuNodeConf.load(version = versionString).valueOr:
+    error "failure while loading the configuration", error = $error
+    quit(QuitFailure)

   let twnNetworkConf = NetworkConf.TheWakuNetworkConf()
   if len(conf.shards) != 0:
     conf.pubsubTopics = conf.shards.mapIt(twnNetworkConf.pubsubTopics[it.uint16])


@@ -95,61 +95,54 @@ proc sendResponse*(

 type SCPHandler* = proc(msg: WakuMessage): Future[void] {.async.}

 proc getSCPHandler(self: StealthCommitmentProtocol): SCPHandler =
   let handler = proc(msg: WakuMessage): Future[void] {.async.} =
-    let decodedRes = WakuStealthCommitmentMsg.decode(msg.payload)
-    if decodedRes.isErr():
-      error "could not decode scp message"
-    let decoded = decodedRes.get()
+    let decoded = WakuStealthCommitmentMsg.decode(msg.payload).valueOr:
+      error "could not decode scp message", error = error
+      quit(QuitFailure)
     if decoded.request == false:
       # check if the generated stealth commitment belongs to the receiver
       # if not, continue
-      let ephemeralPubKeyRes =
-        deserialize(StealthCommitmentFFI.PublicKey, decoded.ephemeralPubKey.get())
-      if ephemeralPubKeyRes.isErr():
-        error "could not deserialize ephemeral public key: ",
-          err = ephemeralPubKeyRes.error()
-      let ephemeralPubKey = ephemeralPubKeyRes.get()
-      let stealthCommitmentPrivateKeyRes = StealthCommitmentFFI.generateStealthPrivateKey(
+      let ephemeralPubKey = deserialize(
+        StealthCommitmentFFI.PublicKey, decoded.ephemeralPubKey.get()
+      ).valueOr:
+        error "could not deserialize ephemeral public key: ", error = error
+        quit(QuitFailure)
+      let stealthCommitmentPrivateKey = StealthCommitmentFFI.generateStealthPrivateKey(
         ephemeralPubKey,
         self.spendingKeyPair.privateKey,
         self.viewingKeyPair.privateKey,
         decoded.viewTag.get(),
-      )
-      if stealthCommitmentPrivateKeyRes.isErr():
-        info "received stealth commitment does not belong to the receiver: ",
-          err = stealthCommitmentPrivateKeyRes.error()
-      let stealthCommitmentPrivateKey = stealthCommitmentPrivateKeyRes.get()
+      ).valueOr:
+        error "received stealth commitment does not belong to the receiver: ",
+          error = error
+        quit(QuitFailure)
       info "received stealth commitment belongs to the receiver: ",
         stealthCommitmentPrivateKey,
         stealthCommitmentPubKey = decoded.stealthCommitment.get()
       return
     # send response
     # deseralize the keys
-    let spendingKeyRes =
-      deserialize(StealthCommitmentFFI.PublicKey, decoded.spendingPubKey.get())
-    if spendingKeyRes.isErr():
-      error "could not deserialize spending key: ", err = spendingKeyRes.error()
-    let spendingKey = spendingKeyRes.get()
-    let viewingKeyRes =
-      (deserialize(StealthCommitmentFFI.PublicKey, decoded.viewingPubKey.get()))
-    if viewingKeyRes.isErr():
-      error "could not deserialize viewing key: ", err = viewingKeyRes.error()
-    let viewingKey = viewingKeyRes.get()
+    let spendingKey = deserialize(
+      StealthCommitmentFFI.PublicKey, decoded.spendingPubKey.get()
+    ).valueOr:
+      error "could not deserialize spending key: ", error = error
+      quit(QuitFailure)
+    let viewingKey = (
+      deserialize(StealthCommitmentFFI.PublicKey, decoded.viewingPubKey.get())
+    ).valueOr:
+      error "could not deserialize viewing key: ", error = error
+      quit(QuitFailure)
     info "received spending key", spendingKey
     info "received viewing key", viewingKey
-    let ephemeralKeyPairRes = StealthCommitmentFFI.generateKeyPair()
-    if ephemeralKeyPairRes.isErr():
-      error "could not generate ephemeral key pair: ", err = ephemeralKeyPairRes.error()
-    let ephemeralKeyPair = ephemeralKeyPairRes.get()
-    let stealthCommitmentRes = StealthCommitmentFFI.generateStealthCommitment(
+    let ephemeralKeyPair = StealthCommitmentFFI.generateKeyPair().valueOr:
+      error "could not generate ephemeral key pair: ", error = error
+      quit(QuitFailure)
+    let stealthCommitment = StealthCommitmentFFI.generateStealthCommitment(
       spendingKey, viewingKey, ephemeralKeyPair.privateKey
-    )
-    if stealthCommitmentRes.isErr():
-      error "could not generate stealth commitment: ",
-        err = stealthCommitmentRes.error()
-    let stealthCommitment = stealthCommitmentRes.get()
+    ).valueOr:
+      error "could not generate stealth commitment: ", error = error
+      quit(QuitFailure)
     (
       await self.sendResponse(
@@ -157,7 +150,7 @@ proc getSCPHandler(self: StealthCommitmentProtocol): SCPHandler =
         stealthCommitment.viewTag,
       )
     ).isOkOr:
-      error "could not send response: ", err = $error
+      error "could not send response: ", error = $error
     return handler

flake.lock (generated)

@@ -2,44 +2,87 @@
   "nodes": {
     "nixpkgs": {
       "locked": {
-        "lastModified": 1740603184,
-        "narHash": "sha256-t+VaahjQAWyA+Ctn2idyo1yxRIYpaDxMgHkgCNiMJa4=",
+        "lastModified": 1770464364,
+        "narHash": "sha256-z5NJPSBwsLf/OfD8WTmh79tlSU8XgIbwmk6qB1/TFzY=",
         "owner": "NixOS",
         "repo": "nixpkgs",
-        "rev": "f44bd8ca21e026135061a0a57dcf3d0775b67a49",
+        "rev": "23d72dabcb3b12469f57b37170fcbc1789bd7457",
         "type": "github"
       },
       "original": {
         "owner": "NixOS",
         "repo": "nixpkgs",
-        "rev": "f44bd8ca21e026135061a0a57dcf3d0775b67a49",
+        "rev": "23d72dabcb3b12469f57b37170fcbc1789bd7457",
         "type": "github"
       }
     },
     "root": {
       "inputs": {
         "nixpkgs": "nixpkgs",
+        "rust-overlay": "rust-overlay",
         "zerokit": "zerokit"
       }
     },
-    "zerokit": {
+    "rust-overlay": {
       "inputs": {
         "nixpkgs": [
           "nixpkgs"
         ]
       },
       "locked": {
-        "lastModified": 1743756626,
-        "narHash": "sha256-SvhfEl0bJcRsCd79jYvZbxQecGV2aT+TXjJ57WVv7Aw=",
+        "lastModified": 1775099554,
+        "narHash": "sha256-3xBsGnGDLOFtnPZ1D3j2LU19wpAlYefRKTlkv648rU0=",
+        "owner": "oxalica",
+        "repo": "rust-overlay",
+        "rev": "8d6387ed6d8e6e6672fd3ed4b61b59d44b124d99",
+        "type": "github"
+      },
+      "original": {
+        "owner": "oxalica",
+        "repo": "rust-overlay",
+        "type": "github"
+      }
+    },
+    "rust-overlay_2": {
+      "inputs": {
+        "nixpkgs": [
+          "zerokit",
+          "nixpkgs"
+        ]
+      },
+      "locked": {
+        "lastModified": 1771211437,
+        "narHash": "sha256-lcNK438i4DGtyA+bPXXyVLHVmJjYpVKmpux9WASa3ro=",
+        "owner": "oxalica",
+        "repo": "rust-overlay",
+        "rev": "c62195b3d6e1bb11e0c2fb2a494117d3b55d410f",
+        "type": "github"
+      },
+      "original": {
+        "owner": "oxalica",
+        "repo": "rust-overlay",
+        "type": "github"
+      }
+    },
+    "zerokit": {
+      "inputs": {
+        "nixpkgs": [
+          "nixpkgs"
+        ],
+        "rust-overlay": "rust-overlay_2"
+      },
+      "locked": {
+        "lastModified": 1771279884,
+        "narHash": "sha256-tzkQPwSl4vPTUo1ixHh6NCENjsBDroMKTjifg2q8QX8=",
         "owner": "vacp2p",
         "repo": "zerokit",
-        "rev": "c60e0c33fc6350a4b1c20e6b6727c44317129582",
+        "rev": "53b18098e6d5d046e3eb1ac338a8f4f651432477",
         "type": "github"
       },
       "original": {
         "owner": "vacp2p",
         "repo": "zerokit",
-        "rev": "c60e0c33fc6350a4b1c20e6b6727c44317129582",
+        "rev": "53b18098e6d5d046e3eb1ac338a8f4f651432477",
         "type": "github"
       }
     }
   }


@@ -1,64 +1,83 @@
 {
-  description = "NWaku build flake";
+  description = "logos-delivery nim build flake";

   nixConfig = {
     extra-substituters = [ "https://nix-cache.status.im/" ];
-    extra-trusted-public-keys = [ "nix-cache.status.im-1:x/93lOfLU+duPplwMSBR+OlY4+mo+dCN7n0mr4oPwgY=" ];
+    extra-trusted-public-keys = [
+      "nix-cache.status.im-1:x/93lOfLU+duPplwMSBR+OlY4+mo+dCN7n0mr4oPwgY="
+    ];
   };

   inputs = {
-    nixpkgs.url = "github:NixOS/nixpkgs?rev=f44bd8ca21e026135061a0a57dcf3d0775b67a49";
+    # Pinning the commit to use same commit across different projects.
+    # A commit from nixpkgs 25.11 release: https://github.com/NixOS/nixpkgs/tree/release-25.11
+    nixpkgs.url = "github:NixOS/nixpkgs?rev=23d72dabcb3b12469f57b37170fcbc1789bd7457";
+    rust-overlay = {
+      url = "github:oxalica/rust-overlay";
+      inputs.nixpkgs.follows = "nixpkgs";
+    };
+    # External flake input: Zerokit pinned to a specific commit.
+    # Update the rev here when a new zerokit version is needed.
     zerokit = {
-      url = "github:vacp2p/zerokit?rev=c60e0c33fc6350a4b1c20e6b6727c44317129582";
+      url = "github:vacp2p/zerokit/53b18098e6d5d046e3eb1ac338a8f4f651432477";
       inputs.nixpkgs.follows = "nixpkgs";
     };
   };

-  outputs = { self, nixpkgs, zerokit }:
+  outputs = { self, nixpkgs, rust-overlay, zerokit }:
     let
-      stableSystems = [
+      systems = [
         "x86_64-linux" "aarch64-linux"
         "x86_64-darwin" "aarch64-darwin"
-        "x86_64-windows" "i686-linux"
-        "i686-windows"
+        "x86_64-windows"
       ];
-      forAllSystems = f: nixpkgs.lib.genAttrs stableSystems (system: f system);
-      pkgsFor = forAllSystems (
-        system: import nixpkgs {
-          inherit system;
-          config = {
-            android_sdk.accept_license = true;
-            allowUnfree = true;
-          };
-          overlays = [
-            (final: prev: {
-              androidEnvCustom = prev.callPackage ./nix/pkgs/android-sdk { };
-              androidPkgs = final.androidEnvCustom.pkgs;
-              androidShell = final.androidEnvCustom.shell;
-            })
-          ];
-        }
-      );
-    in rec {
-      packages = forAllSystems (system: let
-        pkgs = pkgsFor.${system};
-      in rec {
-        libwaku-android-arm64 = pkgs.callPackage ./nix/default.nix {
-          inherit stableSystems;
-          src = self;
-          targets = ["libwaku-android-arm64"];
-          androidArch = "aarch64-linux-android";
-          abidir = "arm64-v8a";
-          zerokitPkg = zerokit.packages.${system}.zerokit-android-arm64;
-        };
-        default = libwaku-android-arm64;
-      });
-      devShells = forAllSystems (system: {
-        default = pkgsFor.${system}.callPackage ./nix/shell.nix {};
-      });
+      forAllSystems = nixpkgs.lib.genAttrs systems;
+      nimbleOverlay = final: prev: {
+        nimble = prev.nimble.overrideAttrs (_: {
+          version = "0.22.3";
+          src = prev.fetchFromGitHub {
+            owner = "nim-lang";
+            repo = "nimble";
+            rev = "v0.22.3";
+            sha256 = "sha256-f7DYpRGVUeSi6basK1lfu5AxZpMFOSJ3oYsy+urYErg=";
+          };
+        });
+      };
+      pkgsFor = system: import nixpkgs {
+        inherit system;
+        overlays = [ (import rust-overlay) nimbleOverlay ];
+      };
+    in {
+      packages = forAllSystems (system:
+        let
+          pkgs = pkgsFor system;
+          liblogosdelivery = pkgs.callPackage ./nix/default.nix {
+            inherit pkgs;
+            src = ./.;
+            zerokitRln = zerokit.packages.${system}.rln;
+          };
+        in {
+          inherit liblogosdelivery;
+          default = liblogosdelivery;
+        }
+      );
+      devShells = forAllSystems (system:
+        let
+          pkgs = pkgsFor system;
+        in {
+          default = pkgs.mkShell {
+            nativeBuildInputs = with pkgs; [
+              nim-2_2
+              nimble
+            ];
+          };
+        }
+      );
     };
 }

liblogosdelivery/BUILD.md

@@ -0,0 +1,123 @@
# Building liblogosdelivery and Examples
## Prerequisites
- Nim 2.x compiler
- Rust toolchain (for RLN dependencies)
- GCC or Clang compiler
- Make
## Building the Library
### Dynamic Library
```bash
make liblogosdelivery
```
This creates `build/liblogosdelivery.dylib` (macOS) or `build/liblogosdelivery.so` (Linux).
### Static Library
```bash
make liblogosdelivery STATIC=1
```
This creates `build/liblogosdelivery.a`.
## Building Examples
### liblogosdelivery Example
Compile the C example that demonstrates all library features:
```bash
# Using Make (recommended)
make liblogosdelivery_example
```
## Running Examples
```bash
./build/liblogosdelivery_example
```
The example will:
1. Create a Logos Messaging node
2. Register event callbacks for message events
3. Start the node
4. Subscribe to a content topic
5. Send a message
6. Show message delivery events (sent, propagated, or error)
7. Unsubscribe and cleanup
## Build Artifacts
After building, you'll have:
```
build/
├── liblogosdelivery.dylib # Dynamic library (34MB)
├── liblogosdelivery.dylib.dSYM/ # Debug symbols
└── liblogosdelivery_example # Compiled example (34KB)
```
## Library Headers
The main header file is:
- `liblogosdelivery/liblogosdelivery.h` - C API declarations
## Troubleshooting
### Library not found at runtime
If you get "library not found" errors when running the example:
**macOS:**
```bash
export DYLD_LIBRARY_PATH=/path/to/build:$DYLD_LIBRARY_PATH
./build/liblogosdelivery_example
```
**Linux:**
```bash
export LD_LIBRARY_PATH=/path/to/build:$LD_LIBRARY_PATH
./build/liblogosdelivery_example
```
## Cross-Compilation
For cross-compilation, you need to:
1. Build the Nim library for the target platform
2. Use the appropriate cross-compiler
3. Link against the target platform's liblogosdelivery
Example for Linux from macOS:
```bash
# Build library for Linux (requires Docker or cross-compilation setup)
# Then compile with cross-compiler
```
## Integration with Your Project
### CMake
```cmake
find_library(LMAPI_LIBRARY NAMES lmapi PATHS ${PROJECT_SOURCE_DIR}/build)
include_directories(${PROJECT_SOURCE_DIR}/liblogosdelivery)
target_link_libraries(your_target ${LMAPI_LIBRARY})
```
### Makefile
```makefile
CFLAGS += -I/path/to/liblogosdelivery
LDFLAGS += -L/path/to/build -llmapi -Wl,-rpath,/path/to/build
your_program: your_program.c
$(CC) $(CFLAGS) $< -o $@ $(LDFLAGS)
```
## API Documentation
See:
- [liblogosdelivery.h](liblogosdelivery/liblogosdelivery.h) - API function declarations
- [MESSAGE_EVENTS.md](liblogosdelivery/MESSAGE_EVENTS.md) - Message event handling guide


@@ -0,0 +1,148 @@
# Message Event Handling in LMAPI
## Overview
The liblogosdelivery library emits three types of message delivery events that clients can listen to by registering an event callback using `logosdelivery_set_event_callback()`.
## Event Types
### 1. message_sent
Emitted when a message is successfully accepted by the send service and queued for delivery.
**JSON Structure:**
```json
{
"eventType": "message_sent",
"requestId": "unique-request-id",
"messageHash": "0x..."
}
```
**Fields:**
- `eventType`: Always "message_sent"
- `requestId`: Request ID returned from the send operation
- `messageHash`: Hash of the message that was sent
### 2. message_propagated
Emitted when a message has been successfully propagated to neighboring nodes on the network.
**JSON Structure:**
```json
{
"eventType": "message_propagated",
"requestId": "unique-request-id",
"messageHash": "0x..."
}
```
**Fields:**
- `eventType`: Always "message_propagated"
- `requestId`: Request ID from the send operation
- `messageHash`: Hash of the message that was propagated
### 3. message_error
Emitted when an error occurs during message sending or propagation.
**JSON Structure:**
```json
{
"eventType": "message_error",
"requestId": "unique-request-id",
"messageHash": "0x...",
"error": "error description"
}
```
**Fields:**
- `eventType`: Always "message_error"
- `requestId`: Request ID from the send operation
- `messageHash`: Hash of the message that failed
- `error`: Description of what went wrong
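The dispatch logic implied by the three event shapes above can be sketched in Python; `handle_event` is an illustrative name, not part of the LMAPI surface:

```python
import json

def handle_event(event_json: str) -> str:
    # Decode the event and branch on its eventType, mirroring the
    # message_sent / message_propagated / message_error shapes above.
    event = json.loads(event_json)
    event_type = event.get("eventType")
    if event_type == "message_sent":
        return "sent %s (request %s)" % (event["messageHash"], event["requestId"])
    if event_type == "message_propagated":
        return "propagated %s" % event["messageHash"]
    if event_type == "message_error":
        return "error for %s: %s" % (event["messageHash"], event["error"])
    return "unknown event: %s" % event_type

print(handle_event('{"eventType": "message_error", '
                   '"requestId": "req-1", "messageHash": "0xabc", '
                   '"error": "no peers"}'))
# error for 0xabc: no peers
```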
## Usage
### 1. Define an Event Callback
```c
void event_callback(int ret, const char *msg, size_t len, void *userData) {
    if (ret != RET_OK || msg == NULL || len == 0) {
        return;
    }
    // Parse the JSON message and extract the eventType field into a
    // buffer (e.g. with a JSON library), then dispatch on it.
    // Note: C strings must be compared with strcmp, not ==.
    if (strcmp(eventType, "message_sent") == 0) {
        // Handle message sent
    } else if (strcmp(eventType, "message_propagated") == 0) {
        // Handle message propagated
    } else if (strcmp(eventType, "message_error") == 0) {
        // Handle message error
    }
}
```
### 2. Register the Callback
```c
void *ctx = logosdelivery_create_node(config, callback, userData);
logosdelivery_set_event_callback(ctx, event_callback, NULL);
```
### 3. Start the Node
Once the node is started, events will be delivered to your callback:
```c
logosdelivery_start_node(ctx, callback, userData);
```
## Event Flow
For a typical successful message send:
1. **send** → Returns request ID
2. **message_sent** → Message accepted and queued
3. **message_propagated** → Message delivered to peers
For a failed message send:
1. **send** → Returns request ID
2. **message_sent** → Message accepted and queued
3. **message_error** → Delivery failed with error description
## Important Notes
1. **Thread Safety**: The event callback is invoked from the FFI worker thread. Ensure your callback is thread-safe if it accesses shared state.
2. **Non-Blocking**: Keep the callback fast and non-blocking. Do not perform long-running operations in the callback.
3. **JSON Parsing**: The example uses a simple string-based parser. For production, use a proper JSON library like:
- [cJSON](https://github.com/DaveGamble/cJSON)
- [json-c](https://github.com/json-c/json-c)
- [Jansson](https://github.com/akheron/jansson)
4. **Memory Management**: The message buffer is owned by the library. Copy any data you need to retain.
5. **Event Order**: Events are delivered in the order they occur, but timing depends on network conditions.
## Example Implementation
See `examples/liblogosdelivery_example.c` for a complete working example that:
- Registers an event callback
- Sends a message
- Receives and prints all three event types
- Properly parses the JSON event structure
## Debugging Events
To see all events during development:
```c
void debug_event_callback(int ret, const char *msg, size_t len, void *userData) {
printf("Event received: %.*s\n", (int)len, msg);
}
```
This will print the raw JSON for all events, helping you understand the event structure.

liblogosdelivery/README.md

@ -0,0 +1,262 @@
# Logos Messaging API (LMAPI) Library
A C FFI library providing a simplified interface to Logos Messaging functionality.
## Overview
This library wraps the high-level API functions from `waku/api/api.nim` and exposes them via a C FFI interface, making them accessible from C, C++, and other languages that support C FFI.
## API Functions
### Node Lifecycle
#### `logosdelivery_create_node`
Creates a new instance of the node from the given configuration JSON.
```c
void *logosdelivery_create_node(
const char *configJson,
FFICallBack callback,
void *userData
);
```
**Parameters:**
- `configJson`: JSON string containing node configuration
- `callback`: Callback function to receive the result
- `userData`: User data passed to the callback
**Returns:** Pointer to the context needed by other API functions, or NULL on error.
**Example configuration JSON:**
```json
{
"mode": "Core",
"preset": "logos.dev",
"listenAddress": "0.0.0.0",
"tcpPort": 60000,
"discv5UdpPort": 9000
}
```
Configuration uses flat field names matching `WakuNodeConf` in `tools/confutils/cli_args.nim`.
Use `"preset"` to select a network preset (e.g., `"twn"`, `"logos.dev"`) which auto-configures
entry nodes, cluster ID, sharding, and other network-specific settings.
#### `logosdelivery_start_node`
Starts the node.
```c
int logosdelivery_start_node(
void *ctx,
FFICallBack callback,
void *userData
);
```
#### `logosdelivery_stop_node`
Stops the node.
```c
int logosdelivery_stop_node(
void *ctx,
FFICallBack callback,
void *userData
);
```
#### `logosdelivery_destroy`
Destroys a node instance and frees resources.
```c
int logosdelivery_destroy(
void *ctx,
FFICallBack callback,
void *userData
);
```
### Messaging
#### `logosdelivery_subscribe`
Subscribe to a content topic to receive messages.
```c
int logosdelivery_subscribe(
void *ctx,
FFICallBack callback,
void *userData,
const char *contentTopic
);
```
**Parameters:**
- `ctx`: Context pointer from `logosdelivery_create_node`
- `callback`: Callback function to receive the result
- `userData`: User data passed to the callback
- `contentTopic`: Content topic string (e.g., "/myapp/1/chat/proto")
#### `logosdelivery_unsubscribe`
Unsubscribe from a content topic.
```c
int logosdelivery_unsubscribe(
void *ctx,
FFICallBack callback,
void *userData,
const char *contentTopic
);
```
#### `logosdelivery_send`
Send a message.
```c
int logosdelivery_send(
void *ctx,
FFICallBack callback,
void *userData,
const char *messageJson
);
```
**Parameters:**
- `messageJson`: JSON string containing the message
**Example message JSON:**
```json
{
"contentTopic": "/myapp/1/chat/proto",
"payload": "SGVsbG8gV29ybGQ=",
"ephemeral": false
}
```
Note: The `payload` field should be base64-encoded.
**Returns:** Request ID in the callback message that can be used to track message delivery.
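As a sketch of how a caller might assemble the `messageJson` argument, here in Python: only the base64 rule and field names above come from the API, the rest is illustrative and does not require the library.

```python
import base64
import json

# The "payload" field must be base64-encoded before it goes into the JSON.
payload = base64.b64encode(b"Hello World").decode("ascii")
message_json = json.dumps({
    "contentTopic": "/myapp/1/chat/proto",
    "payload": payload,
    "ephemeral": False,
})
print(payload)  # SGVsbG8gV29ybGQ=
```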
### Events
#### `logosdelivery_set_event_callback`
Sets a callback that will be invoked whenever an event occurs (e.g., message received).
```c
void logosdelivery_set_event_callback(
void *ctx,
FFICallBack callback,
void *userData
);
```
**Important:** The callback should be fast, non-blocking, and thread-safe.
## Building
The library follows the same build system as the main Logos Messaging project.
### Build the library
```bash
make liblogosdeliveryStatic # Build static library
# or
make liblogosdeliveryDynamic # Build dynamic library
```
## Return Codes
All functions that return `int` use the following return codes:
- `RET_OK` (0): Success
- `RET_ERR` (1): Error
- `RET_MISSING_CALLBACK` (2): Missing callback function
## Callback Function
All API functions use the following callback signature:
```c
typedef void (*FFICallBack)(
int callerRet,
const char *msg,
size_t len,
void *userData
);
```
**Parameters:**
- `callerRet`: Return code (RET_OK, RET_ERR, etc.)
- `msg`: Response message (may be empty for success)
- `len`: Length of the message
- `userData`: User data passed in the original call
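When driving the library from Python via ctypes (as the Python example elsewhere in this repo does), the same signature can be mirrored with `CFUNCTYPE`. This sketch only exercises the callback type itself, without loading the library; note that ctypes delivers the `msg` pointer to the Python callable as `bytes`.

```python
import ctypes

# Python mirror of the FFICallBack signature:
# void (*)(int callerRet, const char *msg, size_t len, void *userData)
FFICallBack = ctypes.CFUNCTYPE(
    None,                 # void return
    ctypes.c_int,         # callerRet
    ctypes.c_char_p,      # msg
    ctypes.c_size_t,      # len
    ctypes.c_void_p,      # userData
)

results = []
cb = FFICallBack(lambda ret, msg, length, user_data:
                 results.append((ret, msg[:length] if msg else b"")))
cb(0, b"ok", 2, None)  # invoke through the C thunk
print(results)  # [(0, b'ok')]
```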
## Example Usage
```c
#include "liblogosdelivery.h"
#include <stdio.h>
void callback(int ret, const char *msg, size_t len, void *userData) {
if (ret == RET_OK) {
printf("Success: %.*s\n", (int)len, msg);
} else {
printf("Error: %.*s\n", (int)len, msg);
}
}
int main() {
const char *config = "{"
"\"logLevel\": \"INFO\","
"\"mode\": \"Core\","
"\"preset\": \"logos.dev\""
"}";
// Create node
void *ctx = logosdelivery_create_node(config, callback, NULL);
if (ctx == NULL) {
return 1;
}
// Start node
logosdelivery_start_node(ctx, callback, NULL);
// Subscribe to a topic
logosdelivery_subscribe(ctx, callback, NULL, "/myapp/1/chat/proto");
// Send a message
const char *msg = "{"
"\"contentTopic\": \"/myapp/1/chat/proto\","
"\"payload\": \"SGVsbG8gV29ybGQ=\","
"\"ephemeral\": false"
"}";
logosdelivery_send(ctx, callback, NULL, msg);
// Clean up
logosdelivery_stop_node(ctx, callback, NULL);
logosdelivery_destroy(ctx, callback, NULL);
return 0;
}
```
## Architecture
The library is structured as follows:
- `liblogosdelivery.h`: C header file with function declarations
- `liblogosdelivery.nim`: Main library entry point
- `declare_lib.nim`: Library declaration and initialization
- `lmapi/node_api.nim`: Node lifecycle API implementation
- `lmapi/messaging_api.nim`: Subscribe/send API implementation
The library uses the nim-ffi framework for FFI infrastructure, which handles:
- Thread-safe request processing
- Async operation management
- Memory management between C and Nim
- Callback marshaling
## See Also
- Main API documentation: `waku/api/api.nim`
- Original libwaku library: `library/libwaku.nim`
- nim-ffi framework: `vendor/nim-ffi/`


@@ -0,0 +1,33 @@
import ffi
import std/locks
import waku/factory/waku
declareLibrary("logosdelivery")
var eventCallbackLock: Lock
initLock(eventCallbackLock)
template requireInitializedNode*(
ctx: ptr FFIContext[Waku], opName: string, onError: untyped
) =
if isNil(ctx):
let errMsg {.inject.} = opName & " failed: invalid context"
onError
elif isNil(ctx.myLib) or isNil(ctx.myLib[]):
let errMsg {.inject.} = opName & " failed: node is not initialized"
onError
proc logosdelivery_set_event_callback(
ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
) {.dynlib, exportc, cdecl.} =
if isNil(ctx):
echo "error: invalid context in logosdelivery_set_event_callback"
return
# prevent race conditions that might happen due to incorrect usage.
eventCallbackLock.acquire()
defer:
eventCallbackLock.release()
ctx[].eventCallback = cast[pointer](callback)
ctx[].eventUserData = userData


@@ -0,0 +1,96 @@
#include "json_utils.h"
#include <stdio.h>
#include <string.h>
const char* extract_json_field(const char *json, const char *field, char *buffer, size_t bufSize) {
char searchStr[256];
snprintf(searchStr, sizeof(searchStr), "\"%s\":\"", field);
const char *start = strstr(json, searchStr);
if (!start) {
return NULL;
}
start += strlen(searchStr);
const char *end = strchr(start, '"');
if (!end) {
return NULL;
}
size_t len = end - start;
if (len >= bufSize) {
len = bufSize - 1;
}
memcpy(buffer, start, len);
buffer[len] = '\0';
return buffer;
}
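The same naive scan (no escape handling, first match wins) can be expressed in a few lines of Python, which may help when prototyping against the event JSON outside C:

```python
# Python analogue of extract_json_field: locate `"field":"value"` and
# return the value, with the same limitations as the C helper.
def extract_json_field(json_text: str, field: str):
    marker = '"%s":"' % field
    start = json_text.find(marker)
    if start == -1:
        return None
    start += len(marker)
    end = json_text.find('"', start)
    if end == -1:
        return None
    return json_text[start:end]

print(extract_json_field('{"eventType":"message_sent"}', "eventType"))
# message_sent
```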
const char* extract_json_object(const char *json, const char *field, size_t *outLen) {
char searchStr[256];
snprintf(searchStr, sizeof(searchStr), "\"%s\":{", field);
const char *start = strstr(json, searchStr);
if (!start) {
return NULL;
}
// Advance to the opening brace
start = strchr(start, '{');
if (!start) {
return NULL;
}
// Find the matching closing brace (handles nested braces)
int depth = 0;
const char *p = start;
while (*p) {
if (*p == '{') depth++;
else if (*p == '}') {
depth--;
if (depth == 0) {
*outLen = (size_t)(p - start + 1);
return start;
}
}
p++;
}
return NULL;
}
int decode_json_byte_array(const char *json, const char *field, char *buffer, size_t bufSize) {
char searchStr[256];
snprintf(searchStr, sizeof(searchStr), "\"%s\":[", field);
const char *start = strstr(json, searchStr);
if (!start) {
return -1;
}
// Advance to the opening bracket
start = strchr(start, '[');
if (!start) {
return -1;
}
start++; // skip '['
size_t pos = 0;
const char *p = start;
while (*p && *p != ']' && pos < bufSize - 1) {
// Skip whitespace and commas
while (*p == ' ' || *p == ',' || *p == '\n' || *p == '\r' || *p == '\t') p++;
if (*p == ']') break;
// Parse integer
int val = 0;
const char *digitsStart = p;
while (*p >= '0' && *p <= '9') {
val = val * 10 + (*p - '0');
p++;
}
if (p == digitsStart) {
return -1; // unexpected non-digit character: avoid an infinite loop
}
buffer[pos++] = (char)val;
}
buffer[pos] = '\0';
return (int)pos;
}
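A Python sketch of the same decoding, for quick experiments outside C (slightly more permissive about whitespace than the helper above):

```python
import re

# Pull the integer list out of `"field":[...]` and turn it into bytes,
# mirroring decode_json_byte_array: [72,101,108,108,111] -> b"Hello".
def decode_json_byte_array(json_text: str, field: str):
    match = re.search(r'"%s"\s*:\s*\[([^\]]*)\]' % re.escape(field), json_text)
    if match is None:
        return None
    digits = re.findall(r"\d+", match.group(1))
    return bytes(int(d) for d in digits)

print(decode_json_byte_array('{"payload":[72,101,108,108,111]}', "payload"))
# b'Hello'
```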


@@ -0,0 +1,21 @@
#ifndef JSON_UTILS_H
#define JSON_UTILS_H
#include <stddef.h>
// Extract a JSON string field value into buffer.
// Returns pointer to buffer on success, NULL on failure.
// Very basic parser - for production use a proper JSON library.
const char* extract_json_field(const char *json, const char *field, char *buffer, size_t bufSize);
// Extract a nested JSON object as a raw string.
// Returns a pointer into `json` at the start of the object, and sets `outLen`.
// Handles nested braces.
const char* extract_json_object(const char *json, const char *field, size_t *outLen);
// Decode a JSON array of integers (byte values) into a buffer.
// Parses e.g. [72,101,108,108,111] into "Hello".
// Returns number of bytes decoded, or -1 on error.
int decode_json_byte_array(const char *json, const char *field, char *buffer, size_t bufSize);
#endif // JSON_UTILS_H


@@ -0,0 +1,227 @@
#include "../liblogosdelivery.h"
#include "json_utils.h"
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <stdlib.h>
static int create_node_ok = -1;
// Flags set by event callback, polled by main thread
static volatile int got_message_sent = 0;
static volatile int got_message_error = 0;
static volatile int got_message_received = 0;
// Event callback that handles message events
void event_callback(int ret, const char *msg, size_t len, void *userData) {
if (ret != RET_OK || msg == NULL || len == 0) {
return;
}
// Create null-terminated string for easier parsing
char *eventJson = malloc(len + 1);
if (!eventJson) {
return;
}
memcpy(eventJson, msg, len);
eventJson[len] = '\0';
// Extract eventType
char eventType[64];
if (!extract_json_field(eventJson, "eventType", eventType, sizeof(eventType))) {
free(eventJson);
return;
}
// Handle different event types
if (strcmp(eventType, "message_sent") == 0) {
char requestId[128];
char messageHash[128];
extract_json_field(eventJson, "requestId", requestId, sizeof(requestId));
extract_json_field(eventJson, "messageHash", messageHash, sizeof(messageHash));
printf("[EVENT] Message sent - RequestID: %s, Hash: %s\n", requestId, messageHash);
got_message_sent = 1;
} else if (strcmp(eventType, "message_error") == 0) {
char requestId[128];
char messageHash[128];
char error[256];
extract_json_field(eventJson, "requestId", requestId, sizeof(requestId));
extract_json_field(eventJson, "messageHash", messageHash, sizeof(messageHash));
extract_json_field(eventJson, "error", error, sizeof(error));
printf("[EVENT] Message error - RequestID: %s, Hash: %s, Error: %s\n",
requestId, messageHash, error);
got_message_error = 1;
} else if (strcmp(eventType, "message_propagated") == 0) {
char requestId[128];
char messageHash[128];
extract_json_field(eventJson, "requestId", requestId, sizeof(requestId));
extract_json_field(eventJson, "messageHash", messageHash, sizeof(messageHash));
printf("[EVENT] Message propagated - RequestID: %s, Hash: %s\n", requestId, messageHash);
} else if (strcmp(eventType, "connection_status_change") == 0) {
char connectionStatus[256];
extract_json_field(eventJson, "connectionStatus", connectionStatus, sizeof(connectionStatus));
printf("[EVENT] Connection status change - Status: %s\n", connectionStatus);
} else if (strcmp(eventType, "message_received") == 0) {
char messageHash[128];
extract_json_field(eventJson, "messageHash", messageHash, sizeof(messageHash));
// Extract the nested "message" object
size_t msgObjLen = 0;
const char *msgObj = extract_json_object(eventJson, "message", &msgObjLen);
if (msgObj) {
// Make a null-terminated copy of the message object
char *msgJson = malloc(msgObjLen + 1);
if (msgJson) {
memcpy(msgJson, msgObj, msgObjLen);
msgJson[msgObjLen] = '\0';
char contentTopic[256];
extract_json_field(msgJson, "contentTopic", contentTopic, sizeof(contentTopic));
// Decode payload from JSON byte array to string
char payload[4096];
int payloadLen = decode_json_byte_array(msgJson, "payload", payload, sizeof(payload));
printf("[EVENT] Message received - Hash: %s, ContentTopic: %s\n", messageHash, contentTopic);
if (payloadLen > 0) {
printf(" Payload (%d bytes): %.*s\n", payloadLen, payloadLen, payload);
} else {
printf(" Payload: (empty or could not decode)\n");
}
free(msgJson);
}
} else {
printf("[EVENT] Message received - Hash: %s (could not parse message)\n", messageHash);
}
got_message_received = 1;
} else {
printf("[EVENT] Unknown event type: %s\n", eventType);
}
free(eventJson);
}
// Simple callback that prints results
void simple_callback(int ret, const char *msg, size_t len, void *userData) {
const char *operation = (const char *)userData;
if (operation != NULL && strcmp(operation, "create_node") == 0) {
create_node_ok = (ret == RET_OK) ? 1 : 0;
}
if (ret == RET_OK) {
if (len > 0) {
printf("[%s] Success: %.*s\n", operation, (int)len, msg);
} else {
printf("[%s] Success\n", operation);
}
} else {
printf("[%s] Error: %.*s\n", operation, (int)len, msg);
}
}
int main() {
printf("=== Logos Messaging API (LMAPI) Example ===\n\n");
// Configuration JSON using WakuNodeConf field names (flat structure).
// Field names match Nim identifiers from WakuNodeConf in tools/confutils/cli_args.nim.
const char *config = "{"
"\"logLevel\": \"INFO\","
"\"mode\": \"Core\","
"\"preset\": \"logos.dev\""
"}";
printf("1. Creating node...\n");
void *ctx = logosdelivery_create_node(config, simple_callback, (void *)"create_node");
if (ctx == NULL) {
printf("Failed to create node\n");
return 1;
}
// Wait a bit for the callback
sleep(1);
if (create_node_ok != 1) {
printf("Create node failed, stopping example early.\n");
logosdelivery_destroy(ctx, simple_callback, (void *)"destroy");
return 1;
}
printf("\n2. Setting up event callback...\n");
logosdelivery_set_event_callback(ctx, event_callback, NULL);
printf("Event callback registered for message events\n");
printf("\n3. Starting node...\n");
logosdelivery_start_node(ctx, simple_callback, (void *)"start_node");
// Wait for node to start
sleep(5);
printf("\n4. Subscribing to content topic...\n");
const char *contentTopic = "/example/1/chat/proto";
logosdelivery_subscribe(ctx, simple_callback, (void *)"subscribe", contentTopic);
// Wait for subscription
sleep(1);
printf("\n5. Retrieving all possible node info ids...\n");
logosdelivery_get_available_node_info_ids(ctx, simple_callback, (void *)"get_available_node_info_ids");
printf("\nRetrieving node info for an invalid ID...\n");
logosdelivery_get_node_info(ctx, simple_callback, (void *)"get_node_info", "WrongNodeInfoId");
printf("\nRetrieving node info for several valid IDs...\n");
logosdelivery_get_node_info(ctx, simple_callback, (void *)"get_node_info", "Version");
// logosdelivery_get_node_info(ctx, simple_callback, (void *)"get_node_info", "Metrics");
logosdelivery_get_node_info(ctx, simple_callback, (void *)"get_node_info", "MyMultiaddresses");
logosdelivery_get_node_info(ctx, simple_callback, (void *)"get_node_info", "MyENR");
logosdelivery_get_node_info(ctx, simple_callback, (void *)"get_node_info", "MyPeerId");
printf("\nRetrieving available configs...\n");
logosdelivery_get_available_configs(ctx, simple_callback, (void *)"get_available_configs");
printf("\n6. Sending a message...\n");
printf("Watch for message events (sent, propagated, or error):\n");
// Create base64-encoded payload: "Hello, Logos Messaging!"
const char *message = "{"
"\"contentTopic\": \"/example/1/chat/proto\","
"\"payload\": \"SGVsbG8sIExvZ29zIE1lc3NhZ2luZyE=\","
"\"ephemeral\": false"
"}";
logosdelivery_send(ctx, simple_callback, (void *)"send", message);
// Poll for terminal message events (sent, error, or received) with timeout
printf("Waiting for message delivery events...\n");
int timeout_sec = 60;
int elapsed_ds = 0; // elapsed time in 100ms ticks
while (!(got_message_sent || got_message_error || got_message_received)
&& elapsed_ds < timeout_sec * 10) {
usleep(100000); // 100ms
elapsed_ds++;
}
if (elapsed_ds >= timeout_sec * 10) {
printf("Timed out waiting for message events after %d seconds\n", timeout_sec);
}
printf("\n7. Unsubscribing from content topic...\n");
logosdelivery_unsubscribe(ctx, simple_callback, (void *)"unsubscribe", contentTopic);
sleep(1);
printf("\n8. Stopping node...\n");
logosdelivery_stop_node(ctx, simple_callback, (void *)"stop_node");
sleep(1);
printf("\n9. Destroying context...\n");
logosdelivery_destroy(ctx, simple_callback, (void *)"destroy");
printf("\n=== Example completed ===\n");
return 0;
}

View File

@ -0,0 +1,27 @@
import std/[json, macros]
type JsonEvent*[T] = ref object
eventType*: string
payload*: T
macro toFlatJson*(event: JsonEvent): JsonNode =
## Serializes JsonEvent[T] to flat JSON with eventType first,
## followed by all fields from T's payload
result = quote:
var jsonObj = newJObject()
jsonObj["eventType"] = %`event`.eventType
# Serialize payload fields into the same object (flattening)
let payloadJson = %`event`.payload
for key, val in payloadJson.pairs:
jsonObj[key] = val
jsonObj
proc `$`*[T](event: JsonEvent[T]): string =
$toFlatJson(event)
proc newJsonEvent*[T](eventType: string, payload: T): JsonEvent[T] =
## Creates a new JsonEvent with the given eventType and payload.
## The payload's fields will be flattened into the JSON output.
JsonEvent[T](eventType: eventType, payload: payload)

View File

@ -0,0 +1,100 @@
// Generated manually and inspired by libwaku.h
// Header file for Logos Messaging API (LMAPI) library
#pragma once
#ifndef __liblogosdelivery__
#define __liblogosdelivery__
#include <stddef.h>
#include <stdint.h>
// The possible returned values for the functions that return int
#define RET_OK 0
#define RET_ERR 1
#define RET_MISSING_CALLBACK 2
#ifdef __cplusplus
extern "C"
{
#endif
typedef void (*FFICallBack)(int callerRet, const char *msg, size_t len, void *userData);
// Creates a new instance of the node from the given configuration JSON.
// Returns a pointer to the Context needed by the rest of the API functions.
// Configuration should be in JSON format using WakuNodeConf field names.
// Field names match Nim identifiers from WakuNodeConf (camelCase).
// Example: {"mode": "Core", "clusterId": 42, "relay": true}
void *logosdelivery_create_node(
const char *configJson,
FFICallBack callback,
void *userData);
// Starts the node.
int logosdelivery_start_node(void *ctx,
FFICallBack callback,
void *userData);
// Stops the node.
int logosdelivery_stop_node(void *ctx,
FFICallBack callback,
void *userData);
// Destroys an instance of a node created with logosdelivery_create_node
int logosdelivery_destroy(void *ctx,
FFICallBack callback,
void *userData);
// Subscribe to a content topic.
// contentTopic: string representing the content topic (e.g., "/myapp/1/chat/proto")
int logosdelivery_subscribe(void *ctx,
FFICallBack callback,
void *userData,
const char *contentTopic);
// Unsubscribe from a content topic.
int logosdelivery_unsubscribe(void *ctx,
FFICallBack callback,
void *userData,
const char *contentTopic);
// Send a message.
// messageJson: JSON string with the following structure:
// {
// "contentTopic": "/myapp/1/chat/proto",
// "payload": "base64-encoded-payload",
// "ephemeral": false
// }
// Returns a request ID that can be used to track the message delivery.
int logosdelivery_send(void *ctx,
FFICallBack callback,
void *userData,
const char *messageJson);
// Sets a callback that will be invoked whenever an event occurs.
// The passed callback must be fast, non-blocking, and thread-safe, as it may be invoked from a different thread.
void logosdelivery_set_event_callback(void *ctx,
FFICallBack callback,
void *userData);
// Retrieves the list of available node info IDs.
int logosdelivery_get_available_node_info_ids(void *ctx,
FFICallBack callback,
void *userData);
// Given a node info ID, retrieves the corresponding info.
int logosdelivery_get_node_info(void *ctx,
FFICallBack callback,
void *userData,
const char *nodeInfoId);
// Retrieves the list of available configurations.
int logosdelivery_get_available_configs(void *ctx,
FFICallBack callback,
void *userData);
#ifdef __cplusplus
}
#endif
#endif /* __liblogosdelivery__ */

View File

@ -0,0 +1,11 @@
import std/[atomics, options]
import chronicles, chronos, chronos/threadsync, ffi
import waku/factory/waku, waku/node/waku_node, ./declare_lib
################################################################################
## Include different APIs, i.e. all procs with {.ffi.} pragma
include
./logos_delivery_api/node_api,
./logos_delivery_api/messaging_api,
./logos_delivery_api/debug_api

View File

@ -0,0 +1,56 @@
import std/[json, strutils]
import waku/factory/waku_state_info
import tools/confutils/[cli_args, config_option_meta]
proc logosdelivery_get_available_node_info_ids(
ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
) {.ffi.} =
## Returns the list of all available node info item ids that
## can be queried with `logosdelivery_get_node_info`.
requireInitializedNode(ctx, "GetNodeInfoIds"):
return err(errMsg)
return ok($ctx.myLib[].stateInfo.getAllPossibleInfoItemIds())
proc logosdelivery_get_node_info(
ctx: ptr FFIContext[Waku],
callback: FFICallBack,
userData: pointer,
nodeInfoId: cstring,
) {.ffi.} =
## Returns the content of the node info item with the given id if it exists.
requireInitializedNode(ctx, "GetNodeInfoItem"):
return err(errMsg)
let infoItemIdEnum =
try:
parseEnum[NodeInfoId]($nodeInfoId)
except ValueError:
return err("Invalid node info id: " & $nodeInfoId)
return ok(ctx.myLib[].stateInfo.getNodeInfoItem(infoItemIdEnum))
proc logosdelivery_get_available_configs(
ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
) {.ffi.} =
## Returns information about the accepted config items.
requireInitializedNode(ctx, "GetAvailableConfigs"):
return err(errMsg)
let optionMetas: seq[ConfigOptionMeta] = extractConfigOptionMeta(WakuNodeConf)
var configOptionDetails = newJArray()
# for confField, confValue in fieldPairs(conf):
# defaultConfig[confField] = $confValue
for meta in optionMetas:
configOptionDetails.add(
%*{
meta.fieldName: meta.typeName & "(" & meta.defaultValue & ")", "desc": meta.desc
}
)
var jsonNode = newJObject()
jsonNode["configOptions"] = configOptionDetails
return ok(pretty(jsonNode))

View File

@ -0,0 +1,91 @@
import std/[json]
import chronos, results, ffi
import stew/byteutils
import
waku/common/base64,
waku/factory/waku,
waku/waku_core/topics/content_topic,
waku/api/[api, types],
../declare_lib
proc logosdelivery_subscribe(
ctx: ptr FFIContext[Waku],
callback: FFICallBack,
userData: pointer,
contentTopicStr: cstring,
) {.ffi.} =
requireInitializedNode(ctx, "Subscribe"):
return err(errMsg)
# ContentTopic is just a string type alias
let contentTopic = ContentTopic($contentTopicStr)
(await api.subscribe(ctx.myLib[], contentTopic)).isOkOr:
let errMsg = $error
return err("Subscribe failed: " & errMsg)
return ok("")
proc logosdelivery_unsubscribe(
ctx: ptr FFIContext[Waku],
callback: FFICallBack,
userData: pointer,
contentTopicStr: cstring,
) {.ffi.} =
requireInitializedNode(ctx, "Unsubscribe"):
return err(errMsg)
# ContentTopic is just a string type alias
let contentTopic = ContentTopic($contentTopicStr)
api.unsubscribe(ctx.myLib[], contentTopic).isOkOr:
let errMsg = $error
return err("Unsubscribe failed: " & errMsg)
return ok("")
proc logosdelivery_send(
ctx: ptr FFIContext[Waku],
callback: FFICallBack,
userData: pointer,
messageJson: cstring,
) {.ffi.} =
requireInitializedNode(ctx, "Send"):
return err(errMsg)
## Parse the message JSON and send the message
var jsonNode: JsonNode
try:
jsonNode = parseJson($messageJson)
except Exception as e:
return err("Failed to parse message JSON: " & e.msg)
# Extract content topic
if not jsonNode.hasKey("contentTopic"):
return err("Missing contentTopic field")
# ContentTopic is just a string type alias
let contentTopic = ContentTopic(jsonNode["contentTopic"].getStr())
# Extract payload (expect base64 encoded string)
if not jsonNode.hasKey("payload"):
return err("Missing payload field")
let payloadStr = jsonNode["payload"].getStr()
let payload = base64.decode(Base64String(payloadStr)).valueOr:
return err("invalid payload format: " & error)
# Extract ephemeral flag
let ephemeral = jsonNode.getOrDefault("ephemeral").getBool(false)
# Create message envelope
let envelope = MessageEnvelope.init(
contentTopic = contentTopic, payload = payload, ephemeral = ephemeral
)
# Send the message
let requestId = (await api.send(ctx.myLib[], envelope)).valueOr:
let errMsg = $error
return err("Send failed: " & errMsg)
return ok($requestId)

View File

@ -0,0 +1,197 @@
import std/[json, strutils, tables]
import chronos, chronicles, results, confutils, confutils/std/net, ffi
import
waku/factory/waku,
waku/node/waku_node,
waku/api/[api, types],
waku/events/[message_events, health_events],
tools/confutils/cli_args,
../declare_lib,
../json_event
# Add JSON serialization for RequestId
proc `%`*(id: RequestId): JsonNode =
%($id)
registerReqFFI(CreateNodeRequest, ctx: ptr FFIContext[Waku]):
proc(configJson: cstring): Future[Result[string, string]] {.async.} =
## Parse the JSON configuration using fieldPairs approach (WakuNodeConf)
var conf = defaultWakuNodeConf().valueOr:
return err("Failed creating default conf: " & error)
var jsonNode: JsonNode
try:
jsonNode = parseJson($configJson)
except Exception:
let exceptionMsg = getCurrentExceptionMsg()
error "Failed to parse config JSON",
error = exceptionMsg, configJson = $configJson
return err(
"Failed to parse config JSON: " & exceptionMsg & " configJson string: " &
$configJson
)
var jsonFields: Table[string, (string, JsonNode)]
for key, value in jsonNode:
let lowerKey = key.toLowerAscii()
if jsonFields.hasKey(lowerKey):
error "Duplicate configuration option found when normalized to lowercase",
key = key
return err(
"Duplicate configuration option found when normalized to lowercase: '" & key &
"'"
)
jsonFields[lowerKey] = (key, value)
for confField, confValue in fieldPairs(conf):
let lowerField = confField.toLowerAscii()
if jsonFields.hasKey(lowerField):
let (jsonKey, jsonValue) = jsonFields[lowerField]
let formattedString = ($jsonValue).strip(chars = {'\"'})
try:
confValue = parseCmdArg(typeof(confValue), formattedString)
except Exception:
return err(
"Failed to parse field '" & confField & "' from JSON key '" & jsonKey & "': " &
getCurrentExceptionMsg() & ". Value: " & formattedString
)
jsonFields.del(lowerField)
if jsonFields.len > 0:
var unknownKeys = newSeq[string]()
for _, (jsonKey, _) in pairs(jsonFields):
unknownKeys.add(jsonKey)
error "Unrecognized configuration option(s) found", option = unknownKeys
return err("Unrecognized configuration option(s) found: " & $unknownKeys)
# Create the node
ctx.myLib[] = (await api.createNode(conf)).valueOr:
let errMsg = $error
chronicles.error "CreateNodeRequest failed", err = errMsg
return err(errMsg)
return ok("")
proc logosdelivery_destroy(
ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
): cint {.dynlib, exportc, cdecl.} =
initializeLibrary()
checkParams(ctx, callback, userData)
ffi.destroyFFIContext(ctx).isOkOr:
let msg = "liblogosdelivery error: " & $error
callback(RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), userData)
return RET_ERR
## The callback must always be invoked, even when no value is returned to the caller
callback(RET_OK, nil, 0, userData)
return RET_OK
proc logosdelivery_create_node(
configJson: cstring, callback: FFICallBack, userData: pointer
): pointer {.dynlib, exportc, cdecl.} =
initializeLibrary()
if isNil(callback):
echo "error: missing callback in logosdelivery_create_node"
return nil
var ctx = ffi.createFFIContext[Waku]().valueOr:
let msg = "Error in createFFIContext: " & $error
callback(RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), userData)
return nil
ctx.userData = userData
ffi.sendRequestToFFIThread(
ctx, CreateNodeRequest.ffiNewReq(callback, userData, configJson)
).isOkOr:
let msg = "error in sendRequestToFFIThread: " & $error
callback(RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), userData)
# free allocated resources as they won't be available
ffi.destroyFFIContext(ctx).isOkOr:
chronicles.error "Error in destroyFFIContext after sendRequestToFFIThread during creation",
err = $error
return nil
return ctx
proc logosdelivery_start_node(
ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
) {.ffi.} =
requireInitializedNode(ctx, "START_NODE"):
return err(errMsg)
# setting up outgoing event listeners
let sentListener = MessageSentEvent.listen(
ctx.myLib[].brokerCtx,
proc(event: MessageSentEvent) {.async: (raises: []).} =
callEventCallback(ctx, "onMessageSent"):
$newJsonEvent("message_sent", event),
).valueOr:
chronicles.error "MessageSentEvent.listen failed", err = $error
return err("MessageSentEvent.listen failed: " & $error)
let errorListener = MessageErrorEvent.listen(
ctx.myLib[].brokerCtx,
proc(event: MessageErrorEvent) {.async: (raises: []).} =
callEventCallback(ctx, "onMessageError"):
$newJsonEvent("message_error", event),
).valueOr:
chronicles.error "MessageErrorEvent.listen failed", err = $error
return err("MessageErrorEvent.listen failed: " & $error)
let propagatedListener = MessagePropagatedEvent.listen(
ctx.myLib[].brokerCtx,
proc(event: MessagePropagatedEvent) {.async: (raises: []).} =
callEventCallback(ctx, "onMessagePropagated"):
$newJsonEvent("message_propagated", event),
).valueOr:
chronicles.error "MessagePropagatedEvent.listen failed", err = $error
return err("MessagePropagatedEvent.listen failed: " & $error)
let receivedListener = MessageReceivedEvent.listen(
ctx.myLib[].brokerCtx,
proc(event: MessageReceivedEvent) {.async: (raises: []).} =
callEventCallback(ctx, "onMessageReceived"):
$newJsonEvent("message_received", event),
).valueOr:
chronicles.error "MessageReceivedEvent.listen failed", err = $error
return err("MessageReceivedEvent.listen failed: " & $error)
let connectionStatusChangeListener = EventConnectionStatusChange.listen(
ctx.myLib[].brokerCtx,
proc(event: EventConnectionStatusChange) {.async: (raises: []).} =
callEventCallback(ctx, "onConnectionStatusChange"):
$newJsonEvent("connection_status_change", event),
).valueOr:
chronicles.error "ConnectionStatusChange.listen failed", err = $error
return err("ConnectionStatusChange.listen failed: " & $error)
(await startWaku(addr ctx.myLib[])).isOkOr:
let errMsg = $error
chronicles.error "START_NODE failed", err = errMsg
return err("failed to start: " & errMsg)
return ok("")
proc logosdelivery_stop_node(
ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
) {.ffi.} =
requireInitializedNode(ctx, "STOP_NODE"):
return err(errMsg)
MessageErrorEvent.dropAllListeners(ctx.myLib[].brokerCtx)
MessageSentEvent.dropAllListeners(ctx.myLib[].brokerCtx)
MessagePropagatedEvent.dropAllListeners(ctx.myLib[].brokerCtx)
MessageReceivedEvent.dropAllListeners(ctx.myLib[].brokerCtx)
EventConnectionStatusChange.dropAllListeners(ctx.myLib[].brokerCtx)
(await ctx.myLib[].stop()).isOkOr:
let errMsg = $error
chronicles.error "STOP_NODE failed", err = errMsg
return err("failed to stop: " & errMsg)
return ok("")

27
liblogosdelivery/nim.cfg Normal file
View File

@ -0,0 +1,27 @@
# Nim configuration for liblogosdelivery
# Ensure correct compiler configuration
--gc:refc
--threads:on
# Include paths
--path:"../vendor/nim-ffi"
--path:"../"
# Optimization and debugging
--opt:speed
--debugger:native
# Export symbols for dynamic library
--app:lib
--noMain
# Enable FFI macro features when needed for debugging
# --define:ffiDumpMacros

View File

@ -1,42 +0,0 @@
## Can be shared safely between threads
type SharedSeq*[T] = tuple[data: ptr UncheckedArray[T], len: int]
proc alloc*(str: cstring): cstring =
# Byte allocation from the given address.
# There should be the corresponding manual deallocation with deallocShared !
if str.isNil():
var ret = cast[cstring](allocShared(1)) # Allocate memory for the null terminator
ret[0] = '\0' # Set the null terminator
return ret
let ret = cast[cstring](allocShared(len(str) + 1))
copyMem(ret, str, len(str) + 1)
return ret
proc alloc*(str: string): cstring =
## Byte allocation from the given address.
## There should be the corresponding manual deallocation with deallocShared !
var ret = cast[cstring](allocShared(str.len + 1))
let s = cast[seq[char]](str)
for i in 0 ..< str.len:
ret[i] = s[i]
ret[str.len] = '\0'
return ret
proc allocSharedSeq*[T](s: seq[T]): SharedSeq[T] =
let data = allocShared(sizeof(T) * s.len)
if s.len != 0:
copyMem(data, unsafeAddr s[0], sizeof(T) * s.len)
return (cast[ptr UncheckedArray[T]](data), s.len)
proc deallocSharedSeq*[T](s: var SharedSeq[T]) =
deallocShared(s.data)
s.len = 0
proc toSeq*[T](s: SharedSeq[T]): seq[T] =
## Creates a seq[T] from a SharedSeq[T]. No explicit dealloc is required
## as seq[T] is a GC-managed type.
var ret = newSeq[T]()
for i in 0 ..< s.len:
ret.add(s.data[i])
return ret

10
library/declare_lib.nim Normal file
View File

@ -0,0 +1,10 @@
import ffi
import waku/factory/waku
declareLibrary("waku")
proc set_event_callback(
ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
) {.dynlib, exportc, cdecl.} =
ctx[].eventCallback = cast[pointer](callback)
ctx[].eventUserData = userData

View File

@ -0,0 +1,15 @@
{.push raises: [].}
import system, std/json
import ./json_base_event
import ../../waku/api/types
type JsonConnectionStatusChangeEvent* = ref object of JsonEvent
status*: ConnectionStatus
proc new*(T: type JsonConnectionStatusChangeEvent, status: ConnectionStatus): T =
return
JsonConnectionStatusChangeEvent(eventType: "node_health_change", status: status)
method `$`*(event: JsonConnectionStatusChangeEvent): string =
$(%*event)

View File

@ -1,9 +0,0 @@
import system, std/json, ./json_base_event
type JsonWakuNotRespondingEvent* = ref object of JsonEvent
proc new*(T: type JsonWakuNotRespondingEvent): T =
return JsonWakuNotRespondingEvent(eventType: "waku_not_responding")
method `$`*(event: JsonWakuNotRespondingEvent): string =
$(%*event)

View File

@ -1,30 +0,0 @@
################################################################################
### Exported types
type WakuCallBack* = proc(
callerRet: cint, msg: ptr cchar, len: csize_t, userData: pointer
) {.cdecl, gcsafe, raises: [].}
const RET_OK*: cint = 0
const RET_ERR*: cint = 1
const RET_MISSING_CALLBACK*: cint = 2
### End of exported types
################################################################################
################################################################################
### FFI utils
template foreignThreadGc*(body: untyped) =
when declared(setupForeignThreadGc):
setupForeignThreadGc()
body
when declared(tearDownForeignThreadGc):
tearDownForeignThreadGc()
type onDone* = proc()
### End of FFI utils
################################################################################

View File

@ -0,0 +1,32 @@
/**
* iOS stubs for BearSSL tools functions not normally included in the library.
* These are typically from the BearSSL tools/ directory which is for CLI tools.
*/
#include <stddef.h>
/* x509_noanchor context - simplified stub */
typedef struct {
void *vtable;
void *inner;
} x509_noanchor_context;
/* Stub for x509_noanchor_init - used to skip anchor validation */
void x509_noanchor_init(x509_noanchor_context *xwc, const void **inner) {
if (xwc && inner) {
xwc->inner = (void*)*inner;
xwc->vtable = NULL;
}
}
/* TAs (Trust Anchors) - empty array stub */
/* This is typically defined by applications with their CA certificates */
typedef struct {
void *dn;
size_t dn_len;
unsigned flags;
void *pkey;
} br_x509_trust_anchor;
const br_x509_trust_anchor TAs[1] = {{0}};
const size_t TAs_NUM = 0;

View File

@ -0,0 +1,14 @@
/**
* iOS stub for getgateway.c functions.
* iOS doesn't have net/route.h, so we provide a stub that returns failure.
* NAT-PMP functionality won't work but the library will link.
*/
#include <stdint.h>
#include <netinet/in.h>
/* getdefaultgateway - returns -1 (failure) on iOS */
int getdefaultgateway(in_addr_t *addr) {
(void)addr; /* unused */
return -1; /* failure - not supported on iOS */
}

View File

@ -0,0 +1,50 @@
import std/json
import
chronicles,
chronos,
results,
eth/p2p/discoveryv5/enr,
strutils,
libp2p/peerid,
metrics,
ffi
import
waku/factory/waku, waku/node/waku_node, waku/node/health_monitor, library/declare_lib
proc getMultiaddresses(node: WakuNode): seq[string] =
return node.info().listenAddresses
proc getMetrics(): string =
{.gcsafe.}:
return defaultRegistry.toText() ## defaultRegistry is {.global.} in metrics module
proc waku_version(
ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
) {.ffi.} =
return ok(WakuNodeVersionString)
proc waku_listen_addresses(
ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
) {.ffi.} =
## returns a comma-separated string of the listen addresses
return ok(ctx.myLib[].node.getMultiaddresses().join(","))
proc waku_get_my_enr(
ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
) {.ffi.} =
return ok(ctx.myLib[].node.enr.toURI())
proc waku_get_my_peerid(
ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
) {.ffi.} =
return ok($ctx.myLib[].node.peerId())
proc waku_get_metrics(
ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
) {.ffi.} =
return ok(getMetrics())
proc waku_is_online(
ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
) {.ffi.} =
return ok($ctx.myLib[].healthMonitor.onlineMonitor.amIOnline())

View File

@ -0,0 +1,96 @@
import std/json
import chronos, chronicles, results, strutils, libp2p/multiaddress, ffi
import
waku/factory/waku,
waku/discovery/waku_dnsdisc,
waku/discovery/waku_discv5,
waku/waku_core/peers,
waku/node/waku_node,
waku/node/kernel_api,
library/declare_lib
proc retrieveBootstrapNodes(
enrTreeUrl: string, ipDnsServer: string
): Future[Result[seq[string], string]] {.async.} =
let dnsNameServers = @[parseIpAddress(ipDnsServer)]
let discoveredPeers: seq[RemotePeerInfo] = (
await retrieveDynamicBootstrapNodes(enrTreeUrl, dnsNameServers)
).valueOr:
return err("failed discovering peers from DNS: " & $error)
var multiAddresses = newSeq[string]()
for discPeer in discoveredPeers:
for address in discPeer.addrs:
multiAddresses.add($address & "/p2p/" & $discPeer.peerId)
return ok(multiAddresses)
proc updateDiscv5BootstrapNodes(nodes: string, waku: Waku): Result[void, string] =
waku.wakuDiscv5.updateBootstrapRecords(nodes).isOkOr:
return err("error in updateDiscv5BootstrapNodes: " & $error)
return ok()
proc performPeerExchangeRequestTo*(
numPeers: uint64, waku: Waku
): Future[Result[int, string]] {.async.} =
let numPeersRecv = (await waku.node.fetchPeerExchangePeers(numPeers)).valueOr:
return err($error)
return ok(numPeersRecv)
proc waku_discv5_update_bootnodes(
ctx: ptr FFIContext[Waku],
callback: FFICallBack,
userData: pointer,
bootnodes: cstring,
) {.ffi.} =
## Updates the bootnode list used for discovering new peers via DiscoveryV5
## bootnodes - JSON array containing the bootnode ENRs, e.g. `["enr:...", "enr:..."]`
updateDiscv5BootstrapNodes($bootnodes, ctx.myLib[]).isOkOr:
error "UPDATE_DISCV5_BOOTSTRAP_NODES failed", error = error
return err($error)
return ok("discovery request processed correctly")
proc waku_dns_discovery(
ctx: ptr FFIContext[Waku],
callback: FFICallBack,
userData: pointer,
enrTreeUrl: cstring,
nameDnsServer: cstring,
timeoutMs: cint,
) {.ffi.} =
let nodes = (await retrieveBootstrapNodes($enrTreeUrl, $nameDnsServer)).valueOr:
error "GET_BOOTSTRAP_NODES failed", error = error
return err($error)
## returns a comma-separated string of bootstrap nodes' multiaddresses
return ok(nodes.join(","))
proc waku_start_discv5(
ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
) {.ffi.} =
(await ctx.myLib[].wakuDiscv5.start()).isOkOr:
error "START_DISCV5 failed", error = error
return err("error starting discv5: " & $error)
return ok("discv5 started correctly")
proc waku_stop_discv5(
ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
) {.ffi.} =
await ctx.myLib[].wakuDiscv5.stop()
return ok("discv5 stopped correctly")
proc waku_peer_exchange_request(
ctx: ptr FFIContext[Waku],
callback: FFICallBack,
userData: pointer,
numPeers: uint64,
) {.ffi.} =
let numValidPeers = (await performPeerExchangeRequestTo(numPeers, ctx.myLib[])).valueOr:
error "waku_peer_exchange_request failed", error = error
return err("failed peer exchange: " & $error)
return ok($numValidPeers)

View File

@ -1,41 +1,14 @@
import std/[options, json, strutils, net] import std/[options, json, strutils, net]
import chronos, chronicles, results, confutils, confutils/std/net import chronos, chronicles, results, confutils, confutils/std/net, ffi
import import
-  ../../../waku/node/peer_manager/peer_manager,
-  ../../../tools/confutils/cli_args,
-  ../../../waku/factory/waku,
-  ../../../waku/factory/node_factory,
-  ../../../waku/factory/networks_config,
-  ../../../waku/factory/app_callbacks,
-  ../../../waku/waku_api/rest/builder,
-  ../../alloc
+  waku/node/peer_manager/peer_manager,
+  tools/confutils/cli_args,
+  waku/factory/waku,
+  waku/factory/node_factory,
+  waku/factory/app_callbacks,
+  waku/rest_api/endpoint/builder,
+  library/declare_lib

-type NodeLifecycleMsgType* = enum
-  CREATE_NODE
-  START_NODE
-  STOP_NODE
-
-type NodeLifecycleRequest* = object
-  operation: NodeLifecycleMsgType
-  configJson: cstring ## Only used in 'CREATE_NODE' operation
-  appCallbacks: AppCallbacks
-
-proc createShared*(
-    T: type NodeLifecycleRequest,
-    op: NodeLifecycleMsgType,
-    configJson: cstring = "",
-    appCallbacks: AppCallbacks = nil,
-): ptr type T =
-  var ret = createShared(T)
-  ret[].operation = op
-  ret[].appCallbacks = appCallbacks
-  ret[].configJson = configJson.alloc()
-  return ret
-
-proc destroyShared(self: ptr NodeLifecycleRequest) =
-  deallocShared(self[].configJson)
-  deallocShared(self)

 proc createWaku(
     configJson: cstring, appCallbacks: AppCallbacks = nil
@@ -85,26 +58,28 @@ proc createWaku(
   return ok(wakuRes)

-proc process*(
-    self: ptr NodeLifecycleRequest, waku: ptr Waku
-): Future[Result[string, string]] {.async.} =
-  defer:
-    destroyShared(self)
-
-  case self.operation
-  of CREATE_NODE:
-    waku[] = (await createWaku(self.configJson, self.appCallbacks)).valueOr:
-      error "CREATE_NODE failed", error = error
-      return err($error)
-  of START_NODE:
-    (await waku.startWaku()).isOkOr:
-      error "START_NODE failed", error = error
-      return err($error)
-  of STOP_NODE:
-    try:
-      await waku[].stop()
-    except Exception:
-      error "STOP_NODE failed", error = getCurrentExceptionMsg()
-      return err(getCurrentExceptionMsg())
+registerReqFFI(CreateNodeRequest, ctx: ptr FFIContext[Waku]):
+  proc(
+      configJson: cstring, appCallbacks: AppCallbacks
+  ): Future[Result[string, string]] {.async.} =
+    ctx.myLib[] = (await createWaku(configJson, cast[AppCallbacks](appCallbacks))).valueOr:
+      error "CreateNodeRequest failed", error = error
+      return err($error)
+    return ok("")
+
+proc waku_start(
+    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
+) {.ffi.} =
+  (await startWaku(ctx[].myLib)).isOkOr:
+    error "START_NODE failed", error = error
+    return err("failed to start: " & $error)
+  return ok("")
+
+proc waku_stop(
+    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
+) {.ffi.} =
+  (await ctx.myLib[].stop()).isOkOr:
+    error "STOP_NODE failed", error = error
+    return err("failed to stop: " & $error)
   return ok("")
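The rewritten entry points drop the old request-queue enum in favor of direct `{.ffi.}` procs that report their outcome through a caller-supplied callback plus a `userData` pointer. A minimal Python sketch of that calling convention; the status codes, the stub name, and the callback shape are illustrative assumptions, not the library's actual ABI:

```python
# Illustrative status codes; the real library defines its own return codes.
RET_OK, RET_ERR = 0, 1

def waku_start_stub(callback, user_data):
    """Stand-in for an {.ffi.} entry point such as waku_start: perform
    the operation, then deliver the result through the callback."""
    try:
        # This stub always succeeds; a real binding would start the node here.
        callback(RET_OK, "", user_data)
    except Exception as exc:
        callback(RET_ERR, "failed to start: " + str(exc), user_data)

# The caller owns the state that the callback writes into.
outcome = {}

def on_done(code, message, user_data):
    user_data["code"] = code
    user_data["message"] = message

waku_start_stub(on_done, outcome)
assert outcome["code"] == RET_OK
```

The `userData` pointer is how a C-ABI caller threads its own state through the callback without globals; the dict above plays that role.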


@@ -0,0 +1,123 @@
import std/[sequtils, strutils, tables]
import chronicles, chronos, results, options, json, ffi
import waku/factory/waku, waku/node/waku_node, waku/node/peer_manager, ../declare_lib

type PeerInfo = object
  protocols: seq[string]
  addresses: seq[string]

proc waku_get_peerids_from_peerstore(
    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
) {.ffi.} =
  ## returns a comma-separated string of peerIDs
  let peerIDs =
    ctx.myLib[].node.peerManager.switch.peerStore.peers().mapIt($it.peerId).join(",")
  return ok(peerIDs)

proc waku_connect(
    ctx: ptr FFIContext[Waku],
    callback: FFICallBack,
    userData: pointer,
    peerMultiAddr: cstring,
    timeoutMs: cuint,
) {.ffi.} =
  let peers = ($peerMultiAddr).split(",").mapIt(strip(it))
  await ctx.myLib[].node.connectToNodes(peers, source = "static")
  return ok("")

proc waku_disconnect_peer_by_id(
    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer, peerId: cstring
) {.ffi.} =
  let pId = PeerId.init($peerId).valueOr:
    error "DISCONNECT_PEER_BY_ID failed", error = $error
    return err($error)
  await ctx.myLib[].node.peerManager.disconnectNode(pId)
  return ok("")

proc waku_disconnect_all_peers(
    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
) {.ffi.} =
  await ctx.myLib[].node.peerManager.disconnectAllPeers()
  return ok("")

proc waku_dial_peer(
    ctx: ptr FFIContext[Waku],
    callback: FFICallBack,
    userData: pointer,
    peerMultiAddr: cstring,
    protocol: cstring,
    timeoutMs: cuint,
) {.ffi.} =
  let remotePeerInfo = parsePeerInfo($peerMultiAddr).valueOr:
    error "DIAL_PEER failed", error = $error
    return err($error)
  let conn = await ctx.myLib[].node.peerManager.dialPeer(remotePeerInfo, $protocol)
  if conn.isNone():
    let msg = "failed dialing peer"
    error "DIAL_PEER failed", error = msg, peerId = $remotePeerInfo.peerId
    return err(msg)
  return ok("")

proc waku_dial_peer_by_id(
    ctx: ptr FFIContext[Waku],
    callback: FFICallBack,
    userData: pointer,
    peerId: cstring,
    protocol: cstring,
    timeoutMs: cuint,
) {.ffi.} =
  let pId = PeerId.init($peerId).valueOr:
    error "DIAL_PEER_BY_ID failed", error = $error
    return err($error)
  let conn = await ctx.myLib[].node.peerManager.dialPeer(pId, $protocol)
  if conn.isNone():
    let msg = "failed dialing peer"
    error "DIAL_PEER_BY_ID failed", error = msg, peerId = $peerId
    return err(msg)
  return ok("")

proc waku_get_connected_peers_info(
    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
) {.ffi.} =
  ## returns a JSON string mapping peerIDs to objects with protocols and addresses
  var peersMap = initTable[string, PeerInfo]()
  let peers = ctx.myLib[].node.peerManager.switch.peerStore.peers().filterIt(
    it.connectedness == Connected
  )
  # Build a map of peer IDs to peer info objects
  for peer in peers:
    let peerIdStr = $peer.peerId
    peersMap[peerIdStr] =
      PeerInfo(protocols: peer.protocols, addresses: peer.addrs.mapIt($it))
  # Convert the map to JSON string
  let jsonObj = %*peersMap
  let jsonStr = $jsonObj
  return ok(jsonStr)

proc waku_get_connected_peers(
    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
) {.ffi.} =
  ## returns a comma-separated string of peerIDs
  let
    (inPeerIds, outPeerIds) = ctx.myLib[].node.peerManager.connectedPeers()
    connectedPeerids = concat(inPeerIds, outPeerIds)
  return ok(connectedPeerids.mapIt($it).join(","))

proc waku_get_peerids_by_protocol(
    ctx: ptr FFIContext[Waku],
    callback: FFICallBack,
    userData: pointer,
    protocol: cstring,
) {.ffi.} =
  ## returns a comma-separated string of peerIDs that mount the given protocol
  let connectedPeers = ctx.myLib[].node.peerManager.switch.peerStore
    .peers($protocol)
    .filterIt(it.connectedness == Connected)
    .mapIt($it.peerId)
    .join(",")
  return ok(connectedPeers)
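These peer calls hand results back as flat strings: comma-separated peer IDs, or, for `waku_get_connected_peers_info`, a JSON object mapping each peer ID to its protocols and addresses. A small Python sketch of the client-side parsing; the helper names and the sample payload are mine, invented for illustration:

```python
import json

def parse_peer_ids(csv: str) -> list[str]:
    # waku_get_connected_peers and waku_get_peerids_from_peerstore
    # return "" when there are no peers, so drop empty fragments.
    return [p for p in csv.split(",") if p]

def parse_peer_info(payload: str) -> dict:
    # waku_get_connected_peers_info returns a JSON object shaped as
    # { "<peerId>": {"protocols": [...], "addresses": [...]} }.
    return json.loads(payload)

assert parse_peer_ids("") == []
assert parse_peer_ids("idA,idB") == ["idA", "idB"]

sample = ('{"idA": {"protocols": ["/vac/waku/relay/2.0.0"],'
          ' "addresses": ["/ip4/127.0.0.1/tcp/60000"]}}')
info = parse_peer_info(sample)
assert info["idA"]["protocols"] == ["/vac/waku/relay/2.0.0"]
```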


@@ -0,0 +1,43 @@
import std/[json, strutils]
import chronos, results, ffi
import libp2p/[protocols/ping, switch, multiaddress, multicodec]
import waku/[factory/waku, waku_core/peers, node/waku_node], library/declare_lib

proc waku_ping_peer(
    ctx: ptr FFIContext[Waku],
    callback: FFICallBack,
    userData: pointer,
    peerAddr: cstring,
    timeoutMs: cuint,
) {.ffi.} =
  let peerInfo = peers.parsePeerInfo(($peerAddr).split(",")).valueOr:
    return err("PingRequest failed to parse peer addr: " & $error)

  let timeout = chronos.milliseconds(timeoutMs)

  proc ping(): Future[Result[Duration, string]] {.async, gcsafe.} =
    try:
      let conn =
        await ctx.myLib[].node.switch.dial(peerInfo.peerId, peerInfo.addrs, PingCodec)
      defer:
        await conn.close()
      let pingRTT = await ctx.myLib[].node.libp2pPing.ping(conn)
      if pingRTT == 0.nanos:
        return err("could not ping peer: rtt-0")
      return ok(pingRTT)
    except CatchableError as exc:
      return err("could not ping peer: " & exc.msg)

  let pingFuture = ping()
  let pingRTT: Duration =
    if timeout == chronos.milliseconds(0): # No timeout expected
      (await pingFuture).valueOr:
        return err("ping failed, no timeout expected: " & error)
    else:
      let timedOut = not (await pingFuture.withTimeout(timeout))
      if timedOut:
        return err("ping timed out")
      pingFuture.read().valueOr:
        return err("failed to read ping future: " & error)

  return ok($(pingRTT.nanos))
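Note the result convention: `waku_ping_peer` returns the round-trip time as a decimal string of nanoseconds (`$(pingRTT.nanos)`), and a `timeoutMs` of 0 means "wait without a timeout". A tiny Python helper for the caller side; the function name is mine:

```python
def rtt_ns_to_ms(rtt_ns: str) -> float:
    # waku_ping_peer hands the RTT back as a string of nanoseconds;
    # convert it to milliseconds for human-readable output.
    return int(rtt_ns) / 1_000_000

assert rtt_ns_to_ms("2500000") == 2.5
assert rtt_ns_to_ms("0") == 0.0
```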


@@ -0,0 +1,109 @@
import options, std/[strutils, sequtils]
import chronicles, chronos, results, ffi
import
  waku/waku_filter_v2/client,
  waku/waku_core/message/message,
  waku/factory/waku,
  waku/waku_relay,
  waku/waku_filter_v2/common,
  waku/waku_core/subscription/push_handler,
  waku/node/peer_manager/peer_manager,
  waku/node/waku_node,
  waku/node/kernel_api,
  waku/waku_core/topics/pubsub_topic,
  waku/waku_core/topics/content_topic,
  library/events/json_message_event,
  library/declare_lib

const FilterOpTimeout = 5.seconds

proc checkFilterClientMounted(waku: Waku): Result[string, string] =
  if waku.node.wakuFilterClient.isNil():
    let errorMsg = "wakuFilterClient is not mounted"
    error "fail filter process", error = errorMsg
    return err(errorMsg)
  return ok("")

proc waku_filter_subscribe(
    ctx: ptr FFIContext[Waku],
    callback: FFICallBack,
    userData: pointer,
    pubSubTopic: cstring,
    contentTopics: cstring,
) {.ffi.} =
  proc onReceivedMessage(ctx: ptr FFIContext): WakuRelayHandler =
    return proc(pubsubTopic: PubsubTopic, msg: WakuMessage) {.async.} =
      callEventCallback(ctx, "onReceivedMessage"):
        $JsonMessageEvent.new(pubsubTopic, msg)

  checkFilterClientMounted(ctx.myLib[]).isOkOr:
    return err($error)

  var filterPushEventCallback = FilterPushHandler(onReceivedMessage(ctx))
  ctx.myLib[].node.wakuFilterClient.registerPushHandler(filterPushEventCallback)

  let peer = ctx.myLib[].node.peerManager.selectPeer(WakuFilterSubscribeCodec).valueOr:
    let errorMsg = "could not find peer with WakuFilterSubscribeCodec when subscribing"
    error "fail filter subscribe", error = errorMsg
    return err(errorMsg)

  let subFut = ctx.myLib[].node.filterSubscribe(
    some(PubsubTopic($pubsubTopic)),
    ($contentTopics).split(",").mapIt(ContentTopic(it)),
    peer,
  )
  if not await subFut.withTimeout(FilterOpTimeout):
    let errorMsg = "filter subscription timed out"
    error "fail filter subscribe", error = errorMsg
    return err(errorMsg)
  return ok("")

proc waku_filter_unsubscribe(
    ctx: ptr FFIContext[Waku],
    callback: FFICallBack,
    userData: pointer,
    pubSubTopic: cstring,
    contentTopics: cstring,
) {.ffi.} =
  checkFilterClientMounted(ctx.myLib[]).isOkOr:
    return err($error)

  let peer = ctx.myLib[].node.peerManager.selectPeer(WakuFilterSubscribeCodec).valueOr:
    let errorMsg =
      "could not find peer with WakuFilterSubscribeCodec when unsubscribing"
    error "fail filter process", error = errorMsg
    return err(errorMsg)

  let subFut = ctx.myLib[].node.filterUnsubscribe(
    some(PubsubTopic($pubsubTopic)),
    ($contentTopics).split(",").mapIt(ContentTopic(it)),
    peer,
  )
  if not await subFut.withTimeout(FilterOpTimeout):
    let errorMsg = "filter un-subscription timed out"
    error "fail filter unsubscribe", error = errorMsg
    return err(errorMsg)
  return ok("")

proc waku_filter_unsubscribe_all(
    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
) {.ffi.} =
  checkFilterClientMounted(ctx.myLib[]).isOkOr:
    return err($error)

  let peer = ctx.myLib[].node.peerManager.selectPeer(WakuFilterSubscribeCodec).valueOr:
    let errorMsg =
      "could not find peer with WakuFilterSubscribeCodec when unsubscribing all"
    error "fail filter unsubscribe all", error = errorMsg
    return err(errorMsg)

  let unsubFut = ctx.myLib[].node.filterUnsubscribeAll(peer)
  if not await unsubFut.withTimeout(FilterOpTimeout):
    let errorMsg = "filter un-subscription all timed out"
    error "fail filter unsubscribe all", error = errorMsg
    return err(errorMsg)
  return ok("")
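Both subscribe and unsubscribe accept the content topics as a single comma-separated cstring, which the Nim side splits with `split(",")` before mapping each piece to a `ContentTopic`. A caller-side sketch of building that argument; the helper name and sample topics are mine:

```python
def content_topics_arg(topics: list[str]) -> str:
    # waku_filter_subscribe/unsubscribe expect one comma-separated
    # string; strip whitespace so the Nim side gets clean topic names.
    return ",".join(t.strip() for t in topics)

assert content_topics_arg(["/app/1/chat/proto", " /app/1/ping/proto "]) == \
    "/app/1/chat/proto,/app/1/ping/proto"
```

Keeping the join on the caller side means the FFI surface stays a flat C string, which is easy to marshal from any language.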

Some files were not shown because too many files have changed in this diff.