From cd5909fafe914014b4602492586e8c2b48b53349 Mon Sep 17 00:00:00 2001 From: Darshan K <35736874+darshankabariya@users.noreply.github.com> Date: Wed, 19 Nov 2025 18:53:23 +0530 Subject: [PATCH 01/70] chore: first beta release v0.37.0 (#3607) --- CHANGELOG.md | 59 ++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 59 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index dc073792c..61e818afd 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,62 @@ +## v0.37.0 (2025-10-01) + +### Notes + +- Deprecated parameters: + - `tree_path` and `rlnDB` (RLN-related storage paths) + - `--dns-discovery` (fully removed, including `--dns-discovery-name-server`) + - `keepAlive` (deprecated, config updated accordingly) +- Legacy `store` protocol is no longer supported by default. +- Improved sharding configuration: sharding is now explicit, and shard-specific metrics have been added. +- Mix nodes are limited to IPv4 addresses only. +- [lightpush legacy](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/19/lightpush.md) is being deprecated. Use [lightpush v3](https://github.com/waku-org/specs/blob/master/standards/core/lightpush.md) instead.
+ +### Features + +- Waku API: create node via API ([#3580](https://github.com/waku-org/nwaku/pull/3580)) ([bc8acf76](https://github.com/waku-org/nwaku/commit/bc8acf76)) +- Waku Sync: full topic support ([#3275](https://github.com/waku-org/nwaku/pull/3275)) ([9327da5a](https://github.com/waku-org/nwaku/commit/9327da5a)) +- Mix PoC implementation ([#3284](https://github.com/waku-org/nwaku/pull/3284)) ([eb7a3d13](https://github.com/waku-org/nwaku/commit/eb7a3d13)) +- Rendezvous: add request interval option ([#3569](https://github.com/waku-org/nwaku/pull/3569)) ([cc7a6406](https://github.com/waku-org/nwaku/commit/cc7a6406)) +- Shard-specific metrics tracking ([#3520](https://github.com/waku-org/nwaku/pull/3520)) ([c3da29fd](https://github.com/waku-org/nwaku/commit/c3da29fd)) +- Libwaku: build Windows DLL for Status-go ([#3460](https://github.com/waku-org/nwaku/pull/3460)) ([5c38a53f](https://github.com/waku-org/nwaku/commit/5c38a53f)) +- RLN: add Stateless RLN support ([#3621](https://github.com/waku-org/nwaku/pull/3621)) +- LOG: Raise log level of messages from debug to info for better visibility ([#3622](https://github.com/waku-org/nwaku/pull/3622)) + +### Bug Fixes + +- Prevent invalid pubsub topic subscription via Relay REST API ([#3559](https://github.com/waku-org/nwaku/pull/3559)) ([a36601ab](https://github.com/waku-org/nwaku/commit/a36601ab)) +- Fixed node crash when RLN is unregistered ([#3573](https://github.com/waku-org/nwaku/pull/3573)) ([3d0c6279](https://github.com/waku-org/nwaku/commit/3d0c6279)) +- REST: fixed sync protocol issues ([#3503](https://github.com/waku-org/nwaku/pull/3503)) ([393e3cce](https://github.com/waku-org/nwaku/commit/393e3cce)) +- Regex pattern fix for `username:password@` in URLs ([#3517](https://github.com/waku-org/nwaku/pull/3517)) ([89a3f735](https://github.com/waku-org/nwaku/commit/89a3f735)) +- Sharding: applied modulus fix ([#3530](https://github.com/waku-org/nwaku/pull/3530))
([f68d7999](https://github.com/waku-org/nwaku/commit/f68d7999)) +- Metrics: switched to counter instead of gauge ([#3355](https://github.com/waku-org/nwaku/pull/3355)) ([a27eec90](https://github.com/waku-org/nwaku/commit/a27eec90)) +- Fixed lightpush metrics and diagnostics ([#3486](https://github.com/waku-org/nwaku/pull/3486)) ([0ed3fc80](https://github.com/waku-org/nwaku/commit/0ed3fc80)) +- Misc sync, dashboard, and CI fixes ([#3434](https://github.com/waku-org/nwaku/pull/3434), [#3508](https://github.com/waku-org/nwaku/pull/3508), [#3464](https://github.com/waku-org/nwaku/pull/3464)) +- Raise log level of numerous operational messages from debug to info for better visibility ([#3622](https://github.com/waku-org/nwaku/pull/3622)) + +### Changes + +- Enable peer-exchange by default ([#3557](https://github.com/waku-org/nwaku/pull/3557)) ([7df526f8](https://github.com/waku-org/nwaku/commit/7df526f8)) +- Refactor peer-exchange client and service implementations ([#3523](https://github.com/waku-org/nwaku/pull/3523)) ([4379f9ec](https://github.com/waku-org/nwaku/commit/4379f9ec)) +- Updated rendezvous to use callback-based shard/capability updates ([#3558](https://github.com/waku-org/nwaku/pull/3558)) ([028bf297](https://github.com/waku-org/nwaku/commit/028bf297)) +- Config updates and explicit sharding setup ([#3468](https://github.com/waku-org/nwaku/pull/3468)) ([994d485b](https://github.com/waku-org/nwaku/commit/994d485b)) +- Bumped libp2p to v1.13.0 ([#3574](https://github.com/waku-org/nwaku/pull/3574)) ([b1616e55](https://github.com/waku-org/nwaku/commit/b1616e55)) +- Removed legacy dependencies (e.g., libpcre in Docker builds) ([#3552](https://github.com/waku-org/nwaku/pull/3552)) ([4db4f830](https://github.com/waku-org/nwaku/commit/4db4f830)) +- Benchmarks for RLN proof generation & verification ([#3567](https://github.com/waku-org/nwaku/pull/3567)) ([794c3a85](https://github.com/waku-org/nwaku/commit/794c3a85)) +- Various CI/CD & infra updates 
([#3515](https://github.com/waku-org/nwaku/pull/3515), [#3505](https://github.com/waku-org/nwaku/pull/3505)) + +### This release supports the following [libp2p protocols](https://docs.libp2p.io/concepts/protocols/): + +| Protocol | Spec status | Protocol id | +| ---: | :---: | :--- | +| [`11/WAKU2-RELAY`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/11/relay.md) | `stable` | `/vac/waku/relay/2.0.0` | +| [`12/WAKU2-FILTER`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/12/filter.md) | `draft` | `/vac/waku/filter/2.0.0-beta1`
`/vac/waku/filter-subscribe/2.0.0-beta1`
`/vac/waku/filter-push/2.0.0-beta1` | +| [`13/WAKU2-STORE`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/13/store.md) | `draft` | `/vac/waku/store/2.0.0-beta4` | +| [`19/WAKU2-LIGHTPUSH`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/19/lightpush.md) | `draft` | `/vac/waku/lightpush/2.0.0-beta1` | +| [`WAKU2-LIGHTPUSH v3`](https://github.com/waku-org/specs/blob/master/standards/core/lightpush.md) | `draft` | `/vac/waku/lightpush/3.0.0` | +| [`66/WAKU2-METADATA`](https://github.com/waku-org/specs/blob/master/standards/core/metadata.md) | `raw` | `/vac/waku/metadata/1.0.0` | +| [`WAKU-SYNC`](https://github.com/waku-org/specs/blob/master/standards/core/sync.md) | `draft` | `/vac/waku/sync/1.0.0` | + ## v0.36.0 (2025-06-20) ### Notes From adeb1a928ec8943a02d5cbd3927fc178ba2f3686 Mon Sep 17 00:00:00 2001 From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> Date: Thu, 20 Nov 2025 08:44:15 +0100 Subject: [PATCH 02/70] fix: wakucanary now fails correctly when ping fails (#3595) * wakucanary add some more detail if exception Co-authored-by: MorganaFuture --- apps/wakucanary/wakucanary.nim | 16 ++++++++++++++-- 1 file changed, 14 insertions(+), 2 deletions(-) diff --git a/apps/wakucanary/wakucanary.nim b/apps/wakucanary/wakucanary.nim index bcff9653e..6e02c2a8f 100644 --- a/apps/wakucanary/wakucanary.nim +++ b/apps/wakucanary/wakucanary.nim @@ -143,16 +143,18 @@ proc areProtocolsSupported( proc pingNode( node: WakuNode, peerInfo: RemotePeerInfo -): Future[void] {.async, gcsafe.} = +): Future[bool] {.async, gcsafe.} = try: let conn = await node.switch.dial(peerInfo.peerId, peerInfo.addrs, PingCodec) let pingDelay = await node.libp2pPing.ping(conn) info "Peer response time (ms)", peerId = peerInfo.peerId, ping = pingDelay.millis + return true except CatchableError: var msg = getCurrentExceptionMsg() if msg == "Future operation cancelled!": msg = "timedout" error "Failed to ping the peer", peer = peerInfo, err = msg + 
return false proc main(rng: ref HmacDrbgContext): Future[int] {.async.} = let conf: WakuCanaryConf = WakuCanaryConf.load() @@ -268,8 +270,13 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} = let lp2pPeerStore = node.switch.peerStore let conStatus = node.peerManager.switch.peerStore[ConnectionBook][peer.peerId] + var pingSuccess = true if conf.ping: - discard await pingFut + try: + pingSuccess = await pingFut + except CatchableError as exc: + pingSuccess = false + error "Ping operation failed or timed out", error = exc.msg if conStatus in [Connected, CanConnect]: let nodeProtocols = lp2pPeerStore[ProtoBook][peer.peerId] @@ -278,6 +285,11 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} = error "Not all protocols are supported", expected = conf.protocols, supported = nodeProtocols quit(QuitFailure) + + # Check ping result if ping was enabled + if conf.ping and not pingSuccess: + error "Node is reachable and supports protocols but ping failed - connection may be unstable" + quit(QuitFailure) elif conStatus == CannotConnect: error "Could not connect", peerId = peer.peerId quit(QuitFailure) From e54851d9d63fd9e7e5e70844364a4464c4dba8f4 Mon Sep 17 00:00:00 2001 From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> Date: Thu, 20 Nov 2025 13:12:16 +0100 Subject: [PATCH 03/70] fix: admin API peer shards field from metadata protocol (#3594) * fix: admin API peer shards field from metadata protocol Store and return peer shard info from metadata protocol exchange instead of only checking ENR records. 
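The fallback described in the commit message above — prefer shard info learned via the metadata protocol, and only consult the ENR when nothing was stored — can be sketched in a few lines of Nim. This is an illustrative sketch only: the type and proc names here are invented for the example, while the real logic lives in `getShards*` in `waku/waku_core/peers.nim` and in the `ShardBook` added to the peer store.

```nim
# Illustrative sketch (hypothetical names): prefer shards learned from a
# metadata-protocol exchange; fall back to whatever the peer's ENR advertises.
type PeerShardInfo = object
  metadataShards: seq[uint16]   # filled from a metadata exchange, if any
  enrShards: seq[uint16]        # decoded from the peer's ENR record

proc effectiveShards(p: PeerShardInfo): seq[uint16] =
  if p.metadataShards.len > 0:
    return p.metadataShards     # metadata is fresher, so it wins
  p.enrShards                   # otherwise fall back to the ENR

when isMainModule:
  # Metadata says shard 5 even though the ENR still says shard 0.
  let p = PeerShardInfo(metadataShards: @[5'u16], enrShards: @[0'u16])
  doAssert p.effectiveShards() == @[5'u16]
```

This mirrors why the REST admin test above polls until `it.shards == @[5.uint16]`: the shard only becomes visible once the metadata exchange has populated the store.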
* peer_manager set shard info and extend rest test to validate it Co-authored-by: MorganaFuture --- tests/wakunode_rest/test_rest_admin.nim | 14 +++++++++++++- waku/node/peer_manager/peer_manager.nim | 5 +++++ waku/node/peer_manager/waku_peer_store.nim | 8 ++++++++ waku/waku_core/peers.nim | 12 +++++++++++- 4 files changed, 37 insertions(+), 2 deletions(-) diff --git a/tests/wakunode_rest/test_rest_admin.nim b/tests/wakunode_rest/test_rest_admin.nim index 6de886f74..ef82b8dfc 100644 --- a/tests/wakunode_rest/test_rest_admin.nim +++ b/tests/wakunode_rest/test_rest_admin.nim @@ -65,7 +65,7 @@ suite "Waku v2 Rest API - Admin": ): Future[void] {.async, gcsafe.} = await sleepAsync(0.milliseconds) - let shard = RelayShard(clusterId: clusterId, shardId: 0) + let shard = RelayShard(clusterId: clusterId, shardId: 5) node1.subscribe((kind: PubsubSub, topic: $shard), simpleHandler).isOkOr: assert false, "Failed to subscribe to topic: " & $error node2.subscribe((kind: PubsubSub, topic: $shard), simpleHandler).isOkOr: @@ -212,6 +212,18 @@ suite "Waku v2 Rest API - Admin": let conn2 = await node1.peerManager.connectPeer(peerInfo2) let conn3 = await node1.peerManager.connectPeer(peerInfo3) + var count = 0 + while count < 20: + ## Wait ~1s at most for the peer store to update shard info + let getRes = await client.getPeers() + if getRes.data.allIt(it.shards == @[5.uint16]): + break + + count.inc() + await sleepAsync(50.milliseconds) + + assert count < 20, "Timeout waiting for shards to be updated in peer store" + # Check successful connections check: conn2 == true diff --git a/waku/node/peer_manager/peer_manager.nim b/waku/node/peer_manager/peer_manager.nim index 72b526aca..1abcc1ac0 100644 --- a/waku/node/peer_manager/peer_manager.nim +++ b/waku/node/peer_manager/peer_manager.nim @@ -658,6 +658,11 @@ proc onPeerMetadata(pm: PeerManager, peerId: PeerId) {.async.} = $clusterId break guardClauses + # Store the shard information from metadata in the peer store + if 
pm.switch.peerStore.peerExists(peerId): + let shards = metadata.shards.mapIt(it.uint16) + pm.switch.peerStore.setShardInfo(peerId, shards) + return info "disconnecting from peer", peerId = peerId, reason = reason diff --git a/waku/node/peer_manager/waku_peer_store.nim b/waku/node/peer_manager/waku_peer_store.nim index 0098c1687..c9e2d4817 100644 --- a/waku/node/peer_manager/waku_peer_store.nim +++ b/waku/node/peer_manager/waku_peer_store.nim @@ -39,6 +39,9 @@ type # Keeps track of the ENR (Ethereum Node Record) of a peer ENRBook* = ref object of PeerBook[enr.Record] + # Keeps track of peer shards + ShardBook* = ref object of PeerBook[seq[uint16]] + proc getPeer*(peerStore: PeerStore, peerId: PeerId): RemotePeerInfo = let addresses = if peerStore[LastSeenBook][peerId].isSome(): @@ -55,6 +58,7 @@ proc getPeer*(peerStore: PeerStore, peerId: PeerId): RemotePeerInfo = else: none(enr.Record), protocols: peerStore[ProtoBook][peerId], + shards: peerStore[ShardBook][peerId], agent: peerStore[AgentBook][peerId], protoVersion: peerStore[ProtoVersionBook][peerId], publicKey: peerStore[KeyBook][peerId], @@ -76,6 +80,7 @@ proc peers*(peerStore: PeerStore): seq[RemotePeerInfo] = toSeq(peerStore[AddressBook].book.keys()), toSeq(peerStore[ProtoBook].book.keys()), toSeq(peerStore[KeyBook].book.keys()), + toSeq(peerStore[ShardBook].book.keys()), ) .toHashSet() @@ -127,6 +132,9 @@ proc addPeer*(peerStore: PeerStore, peer: RemotePeerInfo, origin = UnknownOrigin if peer.enr.isSome(): peerStore[ENRBook][peer.peerId] = peer.enr.get() +proc setShardInfo*(peerStore: PeerStore, peerId: PeerID, shards: seq[uint16]) = + peerStore[ShardBook][peerId] = shards + proc peers*(peerStore: PeerStore, proto: string): seq[RemotePeerInfo] = peerStore.peers().filterIt(it.protocols.contains(proto)) diff --git a/waku/waku_core/peers.nim b/waku/waku_core/peers.nim index 5591699c6..76ff29aa0 100644 --- a/waku/waku_core/peers.nim +++ b/waku/waku_core/peers.nim @@ -48,6 +48,7 @@ type RemotePeerInfo* = ref 
object addrs*: seq[MultiAddress] enr*: Option[enr.Record] protocols*: seq[string] + shards*: seq[uint16] agent*: string protoVersion*: string @@ -73,6 +74,7 @@ proc init*( addrs: seq[MultiAddress] = @[], enr: Option[enr.Record] = none(enr.Record), protocols: seq[string] = @[], + shards: seq[uint16] = @[], publicKey: crypto.PublicKey = crypto.PublicKey(), agent: string = "", protoVersion: string = "", @@ -88,6 +90,7 @@ proc init*( addrs: addrs, enr: enr, protocols: protocols, + shards: shards, publicKey: publicKey, agent: agent, protoVersion: protoVersion, @@ -105,9 +108,12 @@ proc init*( addrs: seq[MultiAddress] = @[], enr: Option[enr.Record] = none(enr.Record), protocols: seq[string] = @[], + shards: seq[uint16] = @[], ): T {.raises: [Defect, ResultError[cstring], LPError].} = let peerId = PeerID.init(peerId).tryGet() - RemotePeerInfo(peerId: peerId, addrs: addrs, enr: enr, protocols: protocols) + RemotePeerInfo( + peerId: peerId, addrs: addrs, enr: enr, protocols: protocols, shards: shards + ) ## Parse @@ -326,6 +332,7 @@ converter toRemotePeerInfo*(peerInfo: PeerInfo): RemotePeerInfo = addrs: peerInfo.listenAddrs, enr: none(enr.Record), protocols: peerInfo.protocols, + shards: @[], agent: peerInfo.agentVersion, protoVersion: peerInfo.protoVersion, publicKey: peerInfo.publicKey, @@ -361,6 +368,9 @@ proc getAgent*(peer: RemotePeerInfo): string = return peer.agent proc getShards*(peer: RemotePeerInfo): seq[uint16] = + if peer.shards.len > 0: + return peer.shards + if peer.enr.isNone(): return @[] From 31e1a81552d72cd07133754e1953639da49ec954 Mon Sep 17 00:00:00 2001 From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> Date: Thu, 20 Nov 2025 13:40:08 +0100 Subject: [PATCH 04/70] nix: add wakucanary Flake package (#3599) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: Jakub Sokołowski Co-authored-by: Jakub Sokołowski --- Makefile | 1 - flake.lock | 32 +++++++++++++++++++++++++++----- flake.nix | 
17 ++++++++++++----- nix/atlas.nix | 12 ------------ nix/checksums.nix | 2 +- nix/default.nix | 28 ++++++++++++---------------- nix/nimble.nix | 4 ++-- nix/sat.nix | 5 +++-- 8 files changed, 57 insertions(+), 44 deletions(-) delete mode 100644 nix/atlas.nix diff --git a/Makefile b/Makefile index 37341792c..029313c99 100644 --- a/Makefile +++ b/Makefile @@ -543,4 +543,3 @@ release-notes: sed -E 's@#([0-9]+)@[#\1](https://github.com/waku-org/nwaku/issues/\1)@g' # I could not get the tool to replace issue ids with links, so using sed for now, # asked here: https://github.com/bvieira/sv4git/discussions/101 - diff --git a/flake.lock b/flake.lock index 359ae2579..0700e6a43 100644 --- a/flake.lock +++ b/flake.lock @@ -22,24 +22,46 @@ "zerokit": "zerokit" } }, - "zerokit": { + "rust-overlay": { "inputs": { "nixpkgs": [ + "zerokit", "nixpkgs" ] }, "locked": { - "lastModified": 1743756626, - "narHash": "sha256-SvhfEl0bJcRsCd79jYvZbxQecGV2aT+TXjJ57WVv7Aw=", + "lastModified": 1748399823, + "narHash": "sha256-kahD8D5hOXOsGbNdoLLnqCL887cjHkx98Izc37nDjlA=", + "owner": "oxalica", + "repo": "rust-overlay", + "rev": "d68a69dc71bc19beb3479800392112c2f6218159", + "type": "github" + }, + "original": { + "owner": "oxalica", + "repo": "rust-overlay", + "type": "github" + } + }, + "zerokit": { + "inputs": { + "nixpkgs": [ + "nixpkgs" + ], + "rust-overlay": "rust-overlay" + }, + "locked": { + "lastModified": 1749115386, + "narHash": "sha256-UexIE2D7zr6aRajwnKongXwCZCeRZDXOL0kfjhqUFSU=", "owner": "vacp2p", "repo": "zerokit", - "rev": "c60e0c33fc6350a4b1c20e6b6727c44317129582", + "rev": "dc0b31752c91e7b4fefc441cfa6a8210ad7dba7b", "type": "github" }, "original": { "owner": "vacp2p", "repo": "zerokit", - "rev": "c60e0c33fc6350a4b1c20e6b6727c44317129582", + "rev": "dc0b31752c91e7b4fefc441cfa6a8210ad7dba7b", "type": "github" } } diff --git a/flake.nix b/flake.nix index 760f49337..72eaebef1 100644 --- a/flake.nix +++ b/flake.nix @@ -9,7 +9,7 @@ inputs = { nixpkgs.url = 
"github:NixOS/nixpkgs?rev=f44bd8ca21e026135061a0a57dcf3d0775b67a49"; zerokit = { - url = "github:vacp2p/zerokit?rev=c60e0c33fc6350a4b1c20e6b6727c44317129582"; + url = "github:vacp2p/zerokit?rev=dc0b31752c91e7b4fefc441cfa6a8210ad7dba7b"; inputs.nixpkgs.follows = "nixpkgs"; }; }; @@ -49,11 +49,18 @@ libwaku-android-arm64 = pkgs.callPackage ./nix/default.nix { inherit stableSystems; src = self; - targets = ["libwaku-android-arm64"]; - androidArch = "aarch64-linux-android"; + targets = ["libwaku-android-arm64"]; abidir = "arm64-v8a"; - zerokitPkg = zerokit.packages.${system}.zerokit-android-arm64; + zerokitRln = zerokit.packages.${system}.rln-android-arm64; }; + + wakucanary = pkgs.callPackage ./nix/default.nix { + inherit stableSystems; + src = self; + targets = ["wakucanary"]; + zerokitRln = zerokit.packages.${system}.rln; + }; + default = libwaku-android-arm64; }); @@ -61,4 +68,4 @@ default = pkgsFor.${system}.callPackage ./nix/shell.nix {}; }); }; -} \ No newline at end of file +} diff --git a/nix/atlas.nix b/nix/atlas.nix deleted file mode 100644 index 43336e07a..000000000 --- a/nix/atlas.nix +++ /dev/null @@ -1,12 +0,0 @@ -{ pkgs ? import { } }: - -let - tools = pkgs.callPackage ./tools.nix {}; - sourceFile = ../vendor/nimbus-build-system/vendor/Nim/koch.nim; -in pkgs.fetchFromGitHub { - owner = "nim-lang"; - repo = "atlas"; - rev = tools.findKeyValue "^ +AtlasStableCommit = \"([a-f0-9]+)\"$" sourceFile; - # WARNING: Requires manual updates when Nim compiler version changes. 
- hash = "sha256-G1TZdgbRPSgxXZ3VsBP2+XFCLHXVb3an65MuQx67o/k="; -} \ No newline at end of file diff --git a/nix/checksums.nix b/nix/checksums.nix index d79345d24..510f2b41a 100644 --- a/nix/checksums.nix +++ b/nix/checksums.nix @@ -6,7 +6,7 @@ let in pkgs.fetchFromGitHub { owner = "nim-lang"; repo = "checksums"; - rev = tools.findKeyValue "^ +ChecksumsStableCommit = \"([a-f0-9]+)\"$" sourceFile; + rev = tools.findKeyValue "^ +ChecksumsStableCommit = \"([a-f0-9]+)\".*$" sourceFile; # WARNING: Requires manual updates when Nim compiler version changes. hash = "sha256-Bm5iJoT2kAvcTexiLMFBa9oU5gf7d4rWjo3OiN7obWQ="; } diff --git a/nix/default.nix b/nix/default.nix index 29eec844d..d78f9935f 100644 --- a/nix/default.nix +++ b/nix/default.nix @@ -9,9 +9,8 @@ stableSystems ? [ "x86_64-linux" "aarch64-linux" ], - androidArch, - abidir, - zerokitPkg, + abidir ? null, + zerokitRln, }: assert pkgs.lib.assertMsg ((src.submodules or true) == true) @@ -51,7 +50,7 @@ in stdenv.mkDerivation rec { cmake which lsb-release - zerokitPkg + zerokitRln nim-unwrapped-2_0 fakeGit fakeCargo @@ -84,27 +83,24 @@ in stdenv.mkDerivation rec { pushd vendor/nimbus-build-system/vendor/Nim mkdir dist cp -r ${callPackage ./nimble.nix {}} dist/nimble - chmod 777 -R dist/nimble - mkdir -p dist/nimble/dist - cp -r ${callPackage ./checksums.nix {}} dist/checksums # need both - cp -r ${callPackage ./checksums.nix {}} dist/nimble/dist/checksums - cp -r ${callPackage ./atlas.nix {}} dist/atlas - chmod 777 -R dist/atlas - mkdir dist/atlas/dist - cp -r ${callPackage ./sat.nix {}} dist/nimble/dist/sat - cp -r ${callPackage ./sat.nix {}} dist/atlas/dist/sat + cp -r ${callPackage ./checksums.nix {}} dist/checksums cp -r ${callPackage ./csources.nix {}} csources_v2 chmod 777 -R dist/nimble csources_v2 popd - mkdir -p vendor/zerokit/target/${androidArch}/release - cp ${zerokitPkg}/librln.so vendor/zerokit/target/${androidArch}/release/ + cp -r ${zerokitRln}/target vendor/zerokit/ + find vendor/zerokit/target + # 
FIXME + cp vendor/zerokit/target/*/release/librln.a librln_v${zerokitRln.version}.a ''; - installPhase = '' + installPhase = if abidir != null then '' mkdir -p $out/jni cp -r ./build/android/${abidir}/* $out/jni/ echo '${androidManifest}' > $out/jni/AndroidManifest.xml cd $out && zip -r libwaku.aar * + '' else '' + mkdir -p $out/bin + cp -r build/* $out/bin ''; meta = with pkgs.lib; { diff --git a/nix/nimble.nix b/nix/nimble.nix index 5bd7b0f32..f9d87da6d 100644 --- a/nix/nimble.nix +++ b/nix/nimble.nix @@ -6,7 +6,7 @@ let in pkgs.fetchFromGitHub { owner = "nim-lang"; repo = "nimble"; - rev = tools.findKeyValue "^ +NimbleStableCommit = \"([a-f0-9]+)\".+" sourceFile; + rev = tools.findKeyValue "^ +NimbleStableCommit = \"([a-f0-9]+)\".*$" sourceFile; # WARNING: Requires manual updates when Nim compiler version changes. hash = "sha256-MVHf19UbOWk8Zba2scj06PxdYYOJA6OXrVyDQ9Ku6Us="; -} \ No newline at end of file +} diff --git a/nix/sat.nix b/nix/sat.nix index 31f264468..92db58a2e 100644 --- a/nix/sat.nix +++ b/nix/sat.nix @@ -6,7 +6,8 @@ let in pkgs.fetchFromGitHub { owner = "nim-lang"; repo = "sat"; - rev = tools.findKeyValue "^ +SatStableCommit = \"([a-f0-9]+)\"$" sourceFile; + rev = tools.findKeyValue "^ +SatStableCommit = \"([a-f0-9]+)\".*$" sourceFile; + # WARNING: Requires manual updates when Nim compiler version changes. # WARNING: Requires manual updates when Nim compiler version changes. 
hash = "sha256-JFrrSV+mehG0gP7NiQ8hYthL0cjh44HNbXfuxQNhq7c="; -} \ No newline at end of file +} From b0cd75f4cb7e98d3f5b74962f12469d6012b0a57 Mon Sep 17 00:00:00 2001 From: Prem Chaitanya Prathi Date: Fri, 21 Nov 2025 23:15:12 +0530 Subject: [PATCH 05/70] feat: update rendezvous to broadcast and discover WakuPeerRecords (#3617) * update rendezvous to work with WakuPeeRecord and use libp2p updated version * split rendezvous client and service implementation * mount rendezvous client by default --- Dockerfile.lightpushWithMix.compile | 2 +- apps/chat2mix/chat2mix.nim | 18 +- .../lightpush_mix/lightpush_publisher_mix.nim | 60 ++-- .../lightpush_publisher_mix_metrics.nim | 3 + simulations/mixnet/run_chat_mix.sh | 3 +- simulations/mixnet/run_chat_mix1.sh | 3 +- tests/test_waku_rendezvous.nim | 36 ++- tests/waku_discv5/test_waku_discv5.nim | 6 +- vendor/nim-libp2p | 2 +- waku.nimble | 2 +- waku/common/callbacks.nim | 4 +- waku/factory/node_factory.nim | 23 +- waku/factory/waku_conf.nim | 3 +- waku/node/peer_manager/waku_peer_store.nim | 19 +- waku/node/waku_node.nim | 43 ++- waku/waku_core/codecs.nim | 1 + waku/waku_core/peers.nim | 4 + waku/waku_lightpush/client.nim | 12 +- waku/waku_mix/protocol.nim | 79 +++-- waku/waku_rendezvous/client.nim | 142 ++++++++ waku/waku_rendezvous/common.nim | 8 + waku/waku_rendezvous/protocol.nim | 306 +++++++----------- waku/waku_rendezvous/waku_peer_record.nim | 74 +++++ 23 files changed, 564 insertions(+), 289 deletions(-) create mode 100644 waku/waku_rendezvous/client.nim create mode 100644 waku/waku_rendezvous/waku_peer_record.nim diff --git a/Dockerfile.lightpushWithMix.compile b/Dockerfile.lightpushWithMix.compile index 381ee60ef..8006ec50b 100644 --- a/Dockerfile.lightpushWithMix.compile +++ b/Dockerfile.lightpushWithMix.compile @@ -1,5 +1,5 @@ # BUILD NIM APP ---------------------------------------------------------------- -FROM rust:1.81.0-alpine3.19 AS nim-build +FROM rustlang/rust:nightly-alpine3.19 AS nim-build ARG NIMFLAGS 
ARG MAKE_TARGET=lightpushwithmix diff --git a/apps/chat2mix/chat2mix.nim b/apps/chat2mix/chat2mix.nim index 5979e2936..3fdd7bc9c 100644 --- a/apps/chat2mix/chat2mix.nim +++ b/apps/chat2mix/chat2mix.nim @@ -124,7 +124,7 @@ proc encode*(message: Chat2Message): ProtoBuffer = return serialised -proc toString*(message: Chat2Message): string = +proc `$`*(message: Chat2Message): string = # Get message date and timestamp in local time let time = message.timestamp.fromUnix().local().format("'<'MMM' 'dd,' 'HH:mm'>'") @@ -331,13 +331,14 @@ proc maintainSubscription( const maxFailedServiceNodeSwitches = 10 var noFailedSubscribes = 0 var noFailedServiceNodeSwitches = 0 - const RetryWaitMs = 2.seconds # Quick retry interval - const SubscriptionMaintenanceMs = 30.seconds # Subscription maintenance interval + # Use chronos.Duration explicitly to avoid mismatch with std/times.Duration + let RetryWait = chronos.seconds(2) # Quick retry interval + let SubscriptionMaintenance = chronos.seconds(30) # Subscription maintenance interval while true: info "maintaining subscription at", peer = constructMultiaddrStr(actualFilterPeer) # First use filter-ping to check if we have an active subscription let pingErr = (await wakuNode.wakuFilterClient.ping(actualFilterPeer)).errorOr: - await sleepAsync(SubscriptionMaintenanceMs) + await sleepAsync(SubscriptionMaintenance) info "subscription is live." continue @@ -350,7 +351,7 @@ proc maintainSubscription( some(filterPubsubTopic), filterContentTopic, actualFilterPeer ) ).errorOr: - await sleepAsync(SubscriptionMaintenanceMs) + await sleepAsync(SubscriptionMaintenance) if noFailedSubscribes > 0: noFailedSubscribes -= 1 notice "subscribe request successful." 
@@ -365,7 +366,7 @@ proc maintainSubscription( # wakunode.peerManager.peerStore.delete(actualFilterPeer) if noFailedSubscribes < maxFailedSubscribes: - await sleepAsync(RetryWaitMs) # Wait a bit before retrying + await sleepAsync(RetryWait) # Wait a bit before retrying elif not preventPeerSwitch: # try again with new peer without delay let actualFilterPeer = selectRandomServicePeer( @@ -380,7 +381,7 @@ proc maintainSubscription( noFailedSubscribes = 0 else: - await sleepAsync(SubscriptionMaintenanceMs) + await sleepAsync(SubscriptionMaintenance) {.pop.} # @TODO confutils.nim(775, 17) Error: can raise an unlisted exception: ref IOError @@ -450,6 +451,8 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} = (await node.mountMix(conf.clusterId, mixPrivKey, conf.mixnodes)).isOkOr: error "failed to mount waku mix protocol: ", error = $error quit(QuitFailure) + await node.mountRendezvousClient(conf.clusterId) + await node.start() node.peerManager.start() @@ -587,7 +590,6 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} = error "Couldn't find any service peer" quit(QuitFailure) - #await mountLegacyLightPush(node) node.peerManager.addServicePeer(servicePeerInfo, WakuLightpushCodec) node.peerManager.addServicePeer(servicePeerInfo, WakuPeerExchangeCodec) diff --git a/examples/lightpush_mix/lightpush_publisher_mix.nim b/examples/lightpush_mix/lightpush_publisher_mix.nim index 1e26daa9b..bb4bb4c4e 100644 --- a/examples/lightpush_mix/lightpush_publisher_mix.nim +++ b/examples/lightpush_mix/lightpush_publisher_mix.nim @@ -51,7 +51,6 @@ proc splitPeerIdAndAddr(maddr: string): (string, string) = proc setupAndPublish(rng: ref HmacDrbgContext, conf: LightPushMixConf) {.async.} = # use notice to filter all waku messaging setupLog(logging.LogLevel.DEBUG, logging.LogFormat.TEXT) - notice "starting publisher", wakuPort = conf.port let @@ -114,17 +113,8 @@ proc setupAndPublish(rng: ref HmacDrbgContext, conf: LightPushMixConf) {.async.} let dPeerId 
= PeerId.init(destPeerId).valueOr: error "Failed to initialize PeerId", error = error return - var conn: Connection - if not conf.mixDisabled: - conn = node.wakuMix.toConnection( - MixDestination.init(dPeerId, pxPeerInfo.addrs[0]), # destination lightpush peer - WakuLightPushCodec, # protocol codec which will be used over the mix connection - MixParameters(expectReply: Opt.some(true), numSurbs: Opt.some(byte(1))), - # mix parameters indicating we expect a single reply - ).valueOr: - error "failed to create mix connection", error = error - return + await node.mountRendezvousClient(clusterId) await node.start() node.peerManager.start() node.startPeerExchangeLoop() @@ -145,20 +135,26 @@ proc setupAndPublish(rng: ref HmacDrbgContext, conf: LightPushMixConf) {.async.} var i = 0 while i < conf.numMsgs: + var conn: Connection if conf.mixDisabled: let connOpt = await node.peerManager.dialPeer(dPeerId, WakuLightPushCodec) if connOpt.isNone(): error "failed to dial peer with WakuLightPushCodec", target_peer_id = dPeerId return conn = connOpt.get() + else: + conn = node.wakuMix.toConnection( + MixDestination.init(dPeerId, pxPeerInfo.addrs[0]), # destination lightpush peer + WakuLightPushCodec, # protocol codec which will be used over the mix connection + MixParameters(expectReply: Opt.some(true), numSurbs: Opt.some(byte(1))), + # mix parameters indicating we expect a single reply + ).valueOr: + error "failed to create mix connection", error = error + return i = i + 1 let text = """Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam venenatis magna ut tortor faucibus, in vestibulum nibh commodo. Aenean eget vestibulum augue. Nullam suscipit urna non nunc efficitur, at iaculis nisl consequat. Mauris quis ultrices elit. Suspendisse lobortis odio vitae laoreet facilisis. Cras ornare sem felis, at vulputate magna aliquam ac. Duis quis est ultricies, euismod nulla ac, interdum dui. Maecenas sit amet est vitae enim commodo gravida. Proin vitae elit nulla. 
 Donec tempor dolor lectus, in faucibus velit elementum quis. Donec non mauris eu nibh faucibus cursus ut egestas dolor. Aliquam venenatis ligula id velit pulvinar malesuada. Vestibulum scelerisque, justo non porta gravida, nulla justo tempor purus, at sollicitudin erat erat vel libero.
-Fusce nec eros eu metus tristique aliquet. Sed ut magna sagittis, vulputate diam sit amet, aliquam magna. Aenean sollicitudin velit lacus, eu ultrices magna semper at. Integer vitae felis ligula. In a eros nec risus condimentum tincidunt fermentum sit amet ex. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Nullam vitae justo maximus, fringilla tellus nec, rutrum purus. Etiam efficitur nisi dapibus euismod vestibulum. Phasellus at felis elementum, tristique nulla ac, consectetur neque.
-Maecenas hendrerit nibh eget velit rutrum, in ornare mauris molestie. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia curae; Praesent dignissim efficitur eros, sit amet rutrum justo mattis a. Fusce mollis neque at erat placerat bibendum. Ut fringilla fringilla orci, ut fringilla metus fermentum vel. In hac habitasse platea dictumst. Donec hendrerit porttitor odio. Suspendisse ornare sollicitudin mauris, sodales pulvinar velit finibus vel. Fusce id pulvinar neque. Suspendisse eget tincidunt sapien, ac accumsan turpis.
-Curabitur cursus tincidunt leo at aliquet. Nunc dapibus quam id venenatis varius. Aenean eget augue vel velit dapibus aliquam. Nulla facilisi. Curabitur cursus, turpis vel congue volutpat, tellus eros cursus lacus, eu fringilla turpis orci non ipsum. In hac habitasse platea dictumst. Nulla aliquam nisl a nunc placerat, eget dignissim felis pulvinar. Fusce sed porta mauris. Donec sodales arcu in nisl sodales, quis posuere massa ultricies. Nam feugiat massa eget felis ultricies finibus.
 Nunc magna nulla, interdum a elit vel, egestas efficitur urna. Ut posuere tincidunt odio in maximus. Sed at dignissim est.
-Morbi accumsan elementum ligula ut fringilla. Praesent in ex metus. Phasellus urna est, tempus sit amet elementum vitae, sollicitudin vel ipsum. Fusce hendrerit eleifend dignissim. Maecenas tempor dapibus dui quis laoreet. Cras tincidunt sed ipsum sed pellentesque. Proin ut tellus nec ipsum varius interdum. Curabitur id velit ligula. Etiam sapien nulla, cursus sodales orci eu, porta lobortis nunc. Nunc at dapibus velit. Nulla et nunc vehicula, condimentum erat quis, elementum dolor. Quisque eu metus fermentum, vestibulum tellus at, sollicitudin odio. Ut vel neque justo.
-Praesent porta porta velit, vel porttitor sem. Donec sagittis at nulla venenatis iaculis. Nullam vel eleifend felis. Nullam a pellentesque lectus. Aliquam tincidunt semper dui sed bibendum. Donec hendrerit, urna et cursus dictum, neque neque convallis magna, id condimentum sem urna quis massa. Fusce non quam vulputate, fermentum mauris at, malesuada ipsum. Mauris id pellentesque libero. Donec vel erat ullamcorper, dapibus quam id, imperdiet urna. Praesent sed ligula ut est pellentesque pharetra quis et diam. Ut placerat lorem eget mi fermentum aliquet.
+Fusce nec eros eu metus tristique aliquet. This is message #""" & $i & """ sent from a publisher using mix.
 End of transmission."""

   let message = WakuMessage(
@@ -168,25 +164,31 @@ proc setupAndPublish(rng: ref HmacDrbgContext, conf: LightPushMixConf) {.async.}
     timestamp: getNowInNanosecondTime(),
   ) # current timestamp

-  let res = await node.wakuLightpushClient.publishWithConn(
-    LightpushPubsubTopic, message, conn, dPeerId
-  )
+  let startTime = getNowInNanosecondTime()

-  if res.isOk():
-    lp_mix_success.inc()
-    notice "published message",
-      text = text,
-      timestamp = message.timestamp,
-      psTopic = LightpushPubsubTopic,
-      contentTopic = LightpushContentTopic
-  else:
-    error "failed to publish message", error = $res.error
+  (
+    await node.wakuLightpushClient.publishWithConn(
+      LightpushPubsubTopic, message, conn, dPeerId
+    )
+  ).isOkOr:
+    error "failed to publish message via mix", error = error.desc
     lp_mix_failed.inc(labelValues = ["publish_error"])
+    return
+
+  let latency = float64(getNowInNanosecondTime() - startTime) / 1_000_000.0
+  lp_mix_latency.observe(latency)
+  lp_mix_success.inc()
+  notice "published message",
+    text = text,
+    timestamp = message.timestamp,
+    latency = latency,
+    psTopic = LightpushPubsubTopic,
+    contentTopic = LightpushContentTopic

   if conf.mixDisabled:
     await conn.close()
   await sleepAsync(conf.msgIntervalMilliseconds)
-  info "###########Sent all messages via mix"
+  info "Sent all messages via mix"
   quit(0)

 when isMainModule:
diff --git a/examples/lightpush_mix/lightpush_publisher_mix_metrics.nim b/examples/lightpush_mix/lightpush_publisher_mix_metrics.nim
index cd06b3e3e..3c467e28c 100644
--- a/examples/lightpush_mix/lightpush_publisher_mix_metrics.nim
+++ b/examples/lightpush_mix/lightpush_publisher_mix_metrics.nim
@@ -6,3 +6,6 @@ declarePublicCounter lp_mix_success, "number of lightpush messages sent via mix"
 declarePublicCounter lp_mix_failed,
   "number of lightpush messages failed via mix", labels = ["error"]
+
+declarePublicHistogram lp_mix_latency,
+  "lightpush publish latency via mix in milliseconds"
diff --git
a/simulations/mixnet/run_chat_mix.sh b/simulations/mixnet/run_chat_mix.sh index 11a28c06b..3dd6f5932 100755 --- a/simulations/mixnet/run_chat_mix.sh +++ b/simulations/mixnet/run_chat_mix.sh @@ -1 +1,2 @@ -../../build/chat2mix --cluster-id=2 --num-shards-in-network=1 --shard=0 --servicenode="/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o" --log-level=TRACE --mixnode="/ip4/127.0.0.1/tcp/60002/p2p/16Uiu2HAmLtKaFaSWDohToWhWUZFLtqzYZGPFuXwKrojFVF6az5UF:9231e86da6432502900a84f867004ce78632ab52cd8e30b1ec322cd795710c2a" --mixnode="/ip4/127.0.0.1/tcp/60003/p2p/16Uiu2HAmTEDHwAziWUSz6ZE23h5vxG2o4Nn7GazhMor4bVuMXTrA:275cd6889e1f29ca48e5b9edb800d1a94f49f13d393a0ecf1a07af753506de6c" --mixnode="/ip4/127.0.0.1/tcp/60004/p2p/16Uiu2HAmPwRKZajXtfb1Qsv45VVfRZgK3ENdfmnqzSrVm3BczF6f:e0ed594a8d506681be075e8e23723478388fb182477f7a469309a25e7076fc18" --mixnode="/ip4/127.0.0.1/tcp/60005/p2p/16Uiu2HAmRhxmCHBYdXt1RibXrjAUNJbduAhzaTHwFCZT4qWnqZAu:8fd7a1a7c19b403d231452a9b1ea40eb1cc76f455d918ef8980e7685f9eeeb1f" +../../build/chat2mix --cluster-id=2 --num-shards-in-network=1 --shard=0 --servicenode="/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o" --log-level=TRACE +#--mixnode="/ip4/127.0.0.1/tcp/60002/p2p/16Uiu2HAmLtKaFaSWDohToWhWUZFLtqzYZGPFuXwKrojFVF6az5UF:9231e86da6432502900a84f867004ce78632ab52cd8e30b1ec322cd795710c2a" --mixnode="/ip4/127.0.0.1/tcp/60003/p2p/16Uiu2HAmTEDHwAziWUSz6ZE23h5vxG2o4Nn7GazhMor4bVuMXTrA:275cd6889e1f29ca48e5b9edb800d1a94f49f13d393a0ecf1a07af753506de6c" --mixnode="/ip4/127.0.0.1/tcp/60004/p2p/16Uiu2HAmPwRKZajXtfb1Qsv45VVfRZgK3ENdfmnqzSrVm3BczF6f:e0ed594a8d506681be075e8e23723478388fb182477f7a469309a25e7076fc18" --mixnode="/ip4/127.0.0.1/tcp/60005/p2p/16Uiu2HAmRhxmCHBYdXt1RibXrjAUNJbduAhzaTHwFCZT4qWnqZAu:8fd7a1a7c19b403d231452a9b1ea40eb1cc76f455d918ef8980e7685f9eeeb1f" diff --git a/simulations/mixnet/run_chat_mix1.sh b/simulations/mixnet/run_chat_mix1.sh index 11a28c06b..7323bb3a9 100755 --- 
a/simulations/mixnet/run_chat_mix1.sh +++ b/simulations/mixnet/run_chat_mix1.sh @@ -1 +1,2 @@ -../../build/chat2mix --cluster-id=2 --num-shards-in-network=1 --shard=0 --servicenode="/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o" --log-level=TRACE --mixnode="/ip4/127.0.0.1/tcp/60002/p2p/16Uiu2HAmLtKaFaSWDohToWhWUZFLtqzYZGPFuXwKrojFVF6az5UF:9231e86da6432502900a84f867004ce78632ab52cd8e30b1ec322cd795710c2a" --mixnode="/ip4/127.0.0.1/tcp/60003/p2p/16Uiu2HAmTEDHwAziWUSz6ZE23h5vxG2o4Nn7GazhMor4bVuMXTrA:275cd6889e1f29ca48e5b9edb800d1a94f49f13d393a0ecf1a07af753506de6c" --mixnode="/ip4/127.0.0.1/tcp/60004/p2p/16Uiu2HAmPwRKZajXtfb1Qsv45VVfRZgK3ENdfmnqzSrVm3BczF6f:e0ed594a8d506681be075e8e23723478388fb182477f7a469309a25e7076fc18" --mixnode="/ip4/127.0.0.1/tcp/60005/p2p/16Uiu2HAmRhxmCHBYdXt1RibXrjAUNJbduAhzaTHwFCZT4qWnqZAu:8fd7a1a7c19b403d231452a9b1ea40eb1cc76f455d918ef8980e7685f9eeeb1f" +../../build/chat2mix --cluster-id=2 --num-shards-in-network=1 --shard=0 --servicenode="/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o" --log-level=TRACE +#--mixnode="/ip4/127.0.0.1/tcp/60002/p2p/16Uiu2HAmLtKaFaSWDohToWhWUZFLtqzYZGPFuXwKrojFVF6az5UF:9231e86da6432502900a84f867004ce78632ab52cd8e30b1ec322cd795710c2a" --mixnode="/ip4/127.0.0.1/tcp/60003/p2p/16Uiu2HAmTEDHwAziWUSz6ZE23h5vxG2o4Nn7GazhMor4bVuMXTrA:275cd6889e1f29ca48e5b9edb800d1a94f49f13d393a0ecf1a07af753506de6c" --mixnode="/ip4/127.0.0.1/tcp/60004/p2p/16Uiu2HAmPwRKZajXtfb1Qsv45VVfRZgK3ENdfmnqzSrVm3BczF6f:e0ed594a8d506681be075e8e23723478388fb182477f7a469309a25e7076fc18" --mixnode="/ip4/127.0.0.1/tcp/60005/p2p/16Uiu2HAmRhxmCHBYdXt1RibXrjAUNJbduAhzaTHwFCZT4qWnqZAu:8fd7a1a7c19b403d231452a9b1ea40eb1cc76f455d918ef8980e7685f9eeeb1f" diff --git a/tests/test_waku_rendezvous.nim b/tests/test_waku_rendezvous.nim index fa2efbd47..d3dd6f920 100644 --- a/tests/test_waku_rendezvous.nim +++ b/tests/test_waku_rendezvous.nim @@ -1,12 +1,20 @@ {.used.} -import std/options, chronos, 
testutils/unittests, libp2p/builders +import + std/options, + chronos, + testutils/unittests, + libp2p/builders, + libp2p/protocols/rendezvous import waku/waku_core/peers, + waku/waku_core/codecs, waku/node/waku_node, waku/node/peer_manager/peer_manager, waku/waku_rendezvous/protocol, + waku/waku_rendezvous/common, + waku/waku_rendezvous/waku_peer_record, ./testlib/[wakucore, wakunode] procSuite "Waku Rendezvous": @@ -50,18 +58,26 @@ procSuite "Waku Rendezvous": node2.peerManager.addPeer(peerInfo3) node3.peerManager.addPeer(peerInfo2) - let namespace = "test/name/space" - - let res = await node1.wakuRendezvous.batchAdvertise( - namespace, 60.seconds, @[peerInfo2.peerId] - ) + let res = await node1.wakuRendezvous.advertiseAll() assert res.isOk(), $res.error + # Rendezvous Request API requires dialing first + let connOpt = + await node3.peerManager.dialPeer(peerInfo2.peerId, WakuRendezVousCodec) + require: + connOpt.isSome - let response = - await node3.wakuRendezvous.batchRequest(namespace, 1, @[peerInfo2.peerId]) - assert response.isOk(), $response.error - let records = response.get() + var records: seq[WakuPeerRecord] + try: + records = await rendezvous.request[WakuPeerRecord]( + node3.wakuRendezvous, + Opt.some(computeMixNamespace(clusterId)), + Opt.some(1), + Opt.some(@[peerInfo2.peerId]), + ) + except CatchableError as e: + assert false, "Request failed with exception: " & e.msg check: records.len == 1 records[0].peerId == peerInfo1.peerId + #records[0].mixPubKey == $node1.wakuMix.pubKey diff --git a/tests/waku_discv5/test_waku_discv5.nim b/tests/waku_discv5/test_waku_discv5.nim index 6685bda32..20a0c6965 100644 --- a/tests/waku_discv5/test_waku_discv5.nim +++ b/tests/waku_discv5/test_waku_discv5.nim @@ -426,7 +426,6 @@ suite "Waku Discovery v5": confBuilder.withNodeKey(libp2p_keys.PrivateKey.random(Secp256k1, myRng[])[]) confBuilder.discv5Conf.withEnabled(true) confBuilder.discv5Conf.withUdpPort(9000.Port) - let conf = confBuilder.build().valueOr: raiseAssert 
error @@ -468,6 +467,9 @@ suite "Waku Discovery v5": # leave some time for discv5 to act await sleepAsync(chronos.seconds(10)) + # Connect peers via peer manager to ensure identify happens + discard await waku0.node.peerManager.connectPeer(waku1.node.switch.peerInfo) + var r = waku0.node.peerManager.selectPeer(WakuPeerExchangeCodec) assert r.isSome(), "could not retrieve peer mounting WakuPeerExchangeCodec" @@ -480,7 +482,7 @@ suite "Waku Discovery v5": r = waku2.node.peerManager.selectPeer(WakuPeerExchangeCodec) assert r.isSome(), "could not retrieve peer mounting WakuPeerExchangeCodec" - r = waku2.node.peerManager.selectPeer(RendezVousCodec) + r = waku2.node.peerManager.selectPeer(WakuRendezVousCodec) assert r.isSome(), "could not retrieve peer mounting RendezVousCodec" asyncTest "Discv5 bootstrap nodes should be added to the peer store": diff --git a/vendor/nim-libp2p b/vendor/nim-libp2p index 0309685cd..e82080f7b 160000 --- a/vendor/nim-libp2p +++ b/vendor/nim-libp2p @@ -1 +1 @@ -Subproject commit 0309685cd27d4bf763c8b3be86a76c33bcfe67ea +Subproject commit e82080f7b1aa61c6d35fa5311b873f41eff4bb52 diff --git a/waku.nimble b/waku.nimble index c63d20246..79fdd9fd6 100644 --- a/waku.nimble +++ b/waku.nimble @@ -24,7 +24,7 @@ requires "nim >= 2.2.4", "stew", "stint", "metrics", - "libp2p >= 1.14.2", + "libp2p >= 1.14.3", "web3", "presto", "regex", diff --git a/waku/common/callbacks.nim b/waku/common/callbacks.nim index 9b8590152..83209ef24 100644 --- a/waku/common/callbacks.nim +++ b/waku/common/callbacks.nim @@ -1,5 +1,7 @@ -import ../waku_enr/capabilities +import waku/waku_enr/capabilities, waku/waku_rendezvous/waku_peer_record type GetShards* = proc(): seq[uint16] {.closure, gcsafe, raises: [].} type GetCapabilities* = proc(): seq[Capabilities] {.closure, gcsafe, raises: [].} + +type GetWakuPeerRecord* = proc(): WakuPeerRecord {.closure, gcsafe, raises: [].} diff --git a/waku/factory/node_factory.nim b/waku/factory/node_factory.nim index 488d07c06..34fc958fe 
100644 --- a/waku/factory/node_factory.nim +++ b/waku/factory/node_factory.nim @@ -163,6 +163,15 @@ proc setupProtocols( error "Unrecoverable error occurred", error = msg quit(QuitFailure) + #mount mix + if conf.mixConf.isSome(): + ( + await node.mountMix( + conf.clusterId, conf.mixConf.get().mixKey, conf.mixConf.get().mixnodes + ) + ).isOkOr: + return err("failed to mount waku mix protocol: " & $error) + if conf.storeServiceConf.isSome(): let storeServiceConf = conf.storeServiceConf.get() if storeServiceConf.supportV2: @@ -327,9 +336,9 @@ proc setupProtocols( protectedShard = shardKey.shard, publicKey = shardKey.key node.wakuRelay.addSignedShardsValidator(subscribedProtectedShards, conf.clusterId) - # Only relay nodes should be rendezvous points. - if conf.rendezvous: - await node.mountRendezvous(conf.clusterId) + if conf.rendezvous: + await node.mountRendezvous(conf.clusterId) + await node.mountRendezvousClient(conf.clusterId) # Keepalive mounted on all nodes try: @@ -414,14 +423,6 @@ proc setupProtocols( if conf.peerExchangeDiscovery: await node.mountPeerExchangeClient() - #mount mix - if conf.mixConf.isSome(): - ( - await node.mountMix( - conf.clusterId, conf.mixConf.get().mixKey, conf.mixConf.get().mixnodes - ) - ).isOkOr: - return err("failed to mount waku mix protocol: " & $error) return ok() ## Start node diff --git a/waku/factory/waku_conf.nim b/waku/factory/waku_conf.nim index 89ffb366c..899008221 100644 --- a/waku/factory/waku_conf.nim +++ b/waku/factory/waku_conf.nim @@ -154,7 +154,8 @@ proc logConf*(conf: WakuConf) = store = conf.storeServiceConf.isSome(), filter = conf.filterServiceConf.isSome(), lightPush = conf.lightPush, - peerExchange = conf.peerExchangeService + peerExchange = conf.peerExchangeService, + rendezvous = conf.rendezvous info "Configuration. 
Network", cluster = conf.clusterId diff --git a/waku/node/peer_manager/waku_peer_store.nim b/waku/node/peer_manager/waku_peer_store.nim index c9e2d4817..9cde53fe1 100644 --- a/waku/node/peer_manager/waku_peer_store.nim +++ b/waku/node/peer_manager/waku_peer_store.nim @@ -6,7 +6,8 @@ import chronicles, eth/p2p/discoveryv5/enr, libp2p/builders, - libp2p/peerstore + libp2p/peerstore, + libp2p/crypto/curve25519 import ../../waku_core, @@ -42,6 +43,9 @@ type # Keeps track of peer shards ShardBook* = ref object of PeerBook[seq[uint16]] + # Keeps track of Mix protocol public keys of peers + MixPubKeyBook* = ref object of PeerBook[Curve25519Key] + proc getPeer*(peerStore: PeerStore, peerId: PeerId): RemotePeerInfo = let addresses = if peerStore[LastSeenBook][peerId].isSome(): @@ -68,6 +72,11 @@ proc getPeer*(peerStore: PeerStore, peerId: PeerId): RemotePeerInfo = direction: peerStore[DirectionBook][peerId], lastFailedConn: peerStore[LastFailedConnBook][peerId], numberFailedConn: peerStore[NumberFailedConnBook][peerId], + mixPubKey: + if peerStore[MixPubKeyBook][peerId] != default(Curve25519Key): + some(peerStore[MixPubKeyBook][peerId]) + else: + none(Curve25519Key), ) proc delete*(peerStore: PeerStore, peerId: PeerId) = @@ -87,6 +96,13 @@ proc peers*(peerStore: PeerStore): seq[RemotePeerInfo] = return allKeys.mapIt(peerStore.getPeer(it)) proc addPeer*(peerStore: PeerStore, peer: RemotePeerInfo, origin = UnknownOrigin) = + ## Storing MixPubKey even if peer is already present as this info might be new + ## or updated. + if peer.mixPubKey.isSome(): + trace "adding mix pub key to peer store", + peer_id = $peer.peerId, mix_pub_key = $peer.mixPubKey.get() + peerStore[MixPubKeyBook].book[peer.peerId] = peer.mixPubKey.get() + ## Notice that the origin parameter is used to manually override the given peer origin. ## At the time of writing, this is used in waku_discv5 or waku_node (peer exchange.) 
if peerStore[AddressBook][peer.peerId] == peer.addrs and @@ -113,6 +129,7 @@ proc addPeer*(peerStore: PeerStore, peer: RemotePeerInfo, origin = UnknownOrigin peerStore[ProtoBook][peer.peerId] = protos ## We don't care whether the item was already present in the table or not. Hence, we always discard the hasKeyOrPut's bool returned value + discard peerStore[AgentBook].book.hasKeyOrPut(peer.peerId, peer.agent) discard peerStore[ProtoVersionBook].book.hasKeyOrPut(peer.peerId, peer.protoVersion) discard peerStore[KeyBook].book.hasKeyOrPut(peer.peerId, peer.publicKey) diff --git a/waku/node/waku_node.nim b/waku/node/waku_node.nim index 114775951..65b2093bb 100644 --- a/waku/node/waku_node.nim +++ b/waku/node/waku_node.nim @@ -22,6 +22,7 @@ import libp2p/transports/tcptransport, libp2p/transports/wstransport, libp2p/utility, + libp2p/utils/offsettedseq, libp2p/protocols/mix, libp2p/protocols/mix/mix_protocol @@ -43,6 +44,8 @@ import ../waku_filter_v2/client as filter_client, ../waku_metadata, ../waku_rendezvous/protocol, + ../waku_rendezvous/client as rendezvous_client, + ../waku_rendezvous/waku_peer_record, ../waku_lightpush_legacy/client as legacy_ligntpuhs_client, ../waku_lightpush_legacy as legacy_lightpush_protocol, ../waku_lightpush/client as ligntpuhs_client, @@ -121,6 +124,7 @@ type libp2pPing*: Ping rng*: ref rand.HmacDrbgContext wakuRendezvous*: WakuRendezVous + wakuRendezvousClient*: rendezvous_client.WakuRendezVousClient announcedAddresses*: seq[MultiAddress] started*: bool # Indicates that node has started listening topicSubscriptionQueue*: AsyncEventQueue[SubscriptionEvent] @@ -148,6 +152,17 @@ proc getCapabilitiesGetter(node: WakuNode): GetCapabilities = return @[] return node.enr.getCapabilities() +proc getWakuPeerRecordGetter(node: WakuNode): GetWakuPeerRecord = + return proc(): WakuPeerRecord {.closure, gcsafe, raises: [].} = + var mixKey: string + if not node.wakuMix.isNil(): + mixKey = node.wakuMix.pubKey.to0xHex() + return WakuPeerRecord.init( + 
peerId = node.switch.peerInfo.peerId, + addresses = node.announcedAddresses, + mixKey = mixKey, + ) + proc new*( T: type WakuNode, netConfig: NetConfig, @@ -257,12 +272,12 @@ proc mountMix*( return err("Failed to convert multiaddress to string.") info "local addr", localaddr = localaddrStr - let nodeAddr = localaddrStr & "/p2p/" & $node.peerId node.wakuMix = WakuMix.new( - nodeAddr, node.peerManager, clusterId, mixPrivKey, mixnodes + localaddrStr, node.peerManager, clusterId, mixPrivKey, mixnodes ).valueOr: error "Waku Mix protocol initialization failed", err = error return + #TODO: should we do the below only for exit node? Also, what if multiple protocols use mix? node.wakuMix.registerDestReadBehavior(WakuLightPushCodec, readLp(int(-1))) let catchRes = catch: node.switch.mount(node.wakuMix) @@ -346,6 +361,18 @@ proc selectRandomPeers*(peers: seq[PeerId], numRandomPeers: int): seq[PeerId] = shuffle(randomPeers) return randomPeers[0 ..< min(len(randomPeers), numRandomPeers)] +proc mountRendezvousClient*(node: WakuNode, clusterId: uint16) {.async: (raises: []).} = + info "mounting rendezvous client" + + node.wakuRendezvousClient = rendezvous_client.WakuRendezVousClient.new( + node.switch, node.peerManager, clusterId + ).valueOr: + error "initializing waku rendezvous client failed", error = error + return + + if node.started: + await node.wakuRendezvousClient.start() + proc mountRendezvous*(node: WakuNode, clusterId: uint16) {.async: (raises: []).} = info "mounting rendezvous discovery protocol" @@ -355,6 +382,7 @@ proc mountRendezvous*(node: WakuNode, clusterId: uint16) {.async: (raises: []).} clusterId, node.getShardsGetter(), node.getCapabilitiesGetter(), + node.getWakuPeerRecordGetter(), ).valueOr: error "initializing waku rendezvous failed", error = error return @@ -362,6 +390,11 @@ proc mountRendezvous*(node: WakuNode, clusterId: uint16) {.async: (raises: []).} if node.started: await node.wakuRendezvous.start() + try: + node.switch.mount(node.wakuRendezvous, 
protocolMatcher(WakuRendezVousCodec)) + except LPError: + error "failed to mount wakuRendezvous", error = getCurrentExceptionMsg() + proc isBindIpWithZeroPort(inputMultiAdd: MultiAddress): bool = let inputStr = $inputMultiAdd if inputStr.contains("0.0.0.0/tcp/0") or inputStr.contains("127.0.0.1/tcp/0"): @@ -438,6 +471,9 @@ proc start*(node: WakuNode) {.async.} = if not node.wakuRendezvous.isNil(): await node.wakuRendezvous.start() + if not node.wakuRendezvousClient.isNil(): + await node.wakuRendezvousClient.start() + if not node.wakuStoreReconciliation.isNil(): node.wakuStoreReconciliation.start() @@ -499,6 +535,9 @@ proc stop*(node: WakuNode) {.async.} = if not node.wakuRendezvous.isNil(): await node.wakuRendezvous.stopWait() + if not node.wakuRendezvousClient.isNil(): + await node.wakuRendezvousClient.stopWait() + node.started = false proc isReady*(node: WakuNode): Future[bool] {.async: (raises: [Exception]).} = diff --git a/waku/waku_core/codecs.nim b/waku/waku_core/codecs.nim index 6dcdfe2f5..0d9394c71 100644 --- a/waku/waku_core/codecs.nim +++ b/waku/waku_core/codecs.nim @@ -10,3 +10,4 @@ const WakuMetadataCodec* = "/vac/waku/metadata/1.0.0" WakuPeerExchangeCodec* = "/vac/waku/peer-exchange/2.0.0-alpha1" WakuLegacyStoreCodec* = "/vac/waku/store/2.0.0-beta4" + WakuRendezVousCodec* = "/vac/waku/rendezvous/1.0.0" diff --git a/waku/waku_core/peers.nim b/waku/waku_core/peers.nim index 76ff29aa0..48c994403 100644 --- a/waku/waku_core/peers.nim +++ b/waku/waku_core/peers.nim @@ -9,6 +9,7 @@ import eth/p2p/discoveryv5/enr, eth/net/utils, libp2p/crypto/crypto, + libp2p/crypto/curve25519, libp2p/crypto/secp, libp2p/errors, libp2p/multiaddress, @@ -49,6 +50,7 @@ type RemotePeerInfo* = ref object enr*: Option[enr.Record] protocols*: seq[string] shards*: seq[uint16] + mixPubKey*: Option[Curve25519Key] agent*: string protoVersion*: string @@ -84,6 +86,7 @@ proc init*( direction: PeerDirection = UnknownDirection, lastFailedConn: Moment = Moment.init(0, Second), 
numberFailedConn: int = 0, + mixPubKey: Option[Curve25519Key] = none(Curve25519Key), ): T = RemotePeerInfo( peerId: peerId, @@ -100,6 +103,7 @@ proc init*( direction: direction, lastFailedConn: lastFailedConn, numberFailedConn: numberFailedConn, + mixPubKey: mixPubKey, ) proc init*( diff --git a/waku/waku_lightpush/client.nim b/waku/waku_lightpush/client.nim index 4d0c49a84..d68552304 100644 --- a/waku/waku_lightpush/client.nim +++ b/waku/waku_lightpush/client.nim @@ -45,8 +45,13 @@ proc sendPushRequest( defer: await connection.closeWithEOF() - - await connection.writeLP(req.encode().buffer) + try: + await connection.writeLP(req.encode().buffer) + except CatchableError: + error "failed to send push request", error = getCurrentExceptionMsg() + return lightpushResultInternalError( + "failed to send push request: " & getCurrentExceptionMsg() + ) var buffer: seq[byte] try: @@ -56,9 +61,8 @@ proc sendPushRequest( return lightpushResultInternalError( "Failed to read response from peer: " & getCurrentExceptionMsg() ) - let response = LightpushResponse.decode(buffer).valueOr: - error "failed to decode response" + error "failed to decode response", error = $error waku_lightpush_v3_errors.inc(labelValues = [decodeRpcFailure]) return lightpushResultInternalError(decodeRpcFailure) diff --git a/waku/waku_mix/protocol.nim b/waku/waku_mix/protocol.nim index 34b50f8a9..d3d765df8 100644 --- a/waku/waku_mix/protocol.nim +++ b/waku/waku_mix/protocol.nim @@ -6,6 +6,8 @@ import libp2p/crypto/curve25519, libp2p/protocols/mix, libp2p/protocols/mix/mix_node, + libp2p/protocols/mix/mix_protocol, + libp2p/protocols/mix/mix_metrics, libp2p/[multiaddress, multicodec, peerid], eth/common/keys @@ -34,22 +36,18 @@ type multiAddr*: string pubKey*: Curve25519Key -proc mixPoolFilter*(cluster: Option[uint16], peer: RemotePeerInfo): bool = +proc filterMixNodes(cluster: Option[uint16], peer: RemotePeerInfo): bool = # Note that origin based(discv5) filtering is not done intentionally # so that more mix 
nodes can be discovered. - if peer.enr.isNone(): - trace "peer has no ENR", peer = $peer + if peer.mixPubKey.isNone(): + trace "remote peer has no mix Pub Key", peer = $peer return false - if cluster.isSome() and peer.enr.get().isClusterMismatched(cluster.get()): + if cluster.isSome() and peer.enr.isSome() and + peer.enr.get().isClusterMismatched(cluster.get()): trace "peer has mismatching cluster", peer = $peer return false - # Filter if mix is enabled - if not peer.enr.get().supportsCapability(Capabilities.Mix): - trace "peer doesn't support mix", peer = $peer - return false - return true proc appendPeerIdToMultiaddr*(multiaddr: MultiAddress, peerId: PeerId): MultiAddress = @@ -74,34 +72,52 @@ func getIPv4Multiaddr*(maddrs: seq[MultiAddress]): Option[MultiAddress] = trace "no ipv4 multiaddr found" return none(MultiAddress) -#[ Not deleting as these can be reused once discovery is sorted - proc populateMixNodePool*(mix: WakuMix) = +proc populateMixNodePool*(mix: WakuMix) = # populate only peers that i) are reachable ii) share cluster iii) support mix let remotePeers = mix.peerManager.switch.peerStore.peers().filterIt( - mixPoolFilter(some(mix.clusterId), it) + filterMixNodes(some(mix.clusterId), it) ) var mixNodes = initTable[PeerId, MixPubInfo]() for i in 0 ..< min(remotePeers.len, 100): - let remotePeerENR = remotePeers[i].enr.get() let ipv4addr = getIPv4Multiaddr(remotePeers[i].addrs).valueOr: trace "peer has no ipv4 address", peer = $remotePeers[i] continue - let maddrWithPeerId = - toString(appendPeerIdToMultiaddr(ipv4addr, remotePeers[i].peerId)) - trace "remote peer ENR", - peerId = remotePeers[i].peerId, enr = remotePeerENR, maddr = maddrWithPeerId + let maddrWithPeerId = appendPeerIdToMultiaddr(ipv4addr, remotePeers[i].peerId) + trace "remote peer info", info = remotePeers[i] - let peerMixPubKey = mixKey(remotePeerENR).get() - let mixNodePubInfo = - createMixPubInfo(maddrWithPeerId.value, intoCurve25519Key(peerMixPubKey)) + if 
remotePeers[i].mixPubKey.isNone(): + trace "peer has no mix Pub Key", remotePeerId = $remotePeers[i] + continue + + let peerMixPubKey = remotePeers[i].mixPubKey.get() + var peerPubKey: crypto.PublicKey + if not remotePeers[i].peerId.extractPublicKey(peerPubKey): + warn "Failed to extract public key from peerId, skipping node", + remotePeerId = remotePeers[i].peerId + continue + + if peerPubKey.scheme != PKScheme.Secp256k1: + warn "Peer public key is not Secp256k1, skipping node", + remotePeerId = remotePeers[i].peerId, scheme = peerPubKey.scheme + continue + + let mixNodePubInfo = MixPubInfo.init( + remotePeers[i].peerId, + ipv4addr, + intoCurve25519Key(peerMixPubKey), + peerPubKey.skkey, + ) + trace "adding mix node to pool", + remotePeerId = remotePeers[i].peerId, multiAddr = $ipv4addr mixNodes[remotePeers[i].peerId] = mixNodePubInfo - mix_pool_size.set(len(mixNodes)) # set the mix node pool mix.setNodePool(mixNodes) + mix_pool_size.set(len(mixNodes)) trace "mix node pool updated", poolSize = mix.getNodePoolSize() +# Once mix protocol starts to use info from PeerStore, then this can be removed. 
proc startMixNodePoolMgr*(mix: WakuMix) {.async.} = info "starting mix node pool manager" # try more aggressively to populate the pool at startup @@ -115,9 +131,10 @@ proc startMixNodePoolMgr*(mix: WakuMix) {.async.} = # TODO: make interval configurable heartbeat "Updating mix node pool", 5.seconds: mix.populateMixNodePool() - ]# -proc toMixNodeTable(bootnodes: seq[MixNodePubInfo]): Table[PeerId, MixPubInfo] = +proc processBootNodes( + bootnodes: seq[MixNodePubInfo], peermgr: PeerManager +): Table[PeerId, MixPubInfo] = var mixNodes = initTable[PeerId, MixPubInfo]() for node in bootnodes: let pInfo = parsePeerInfo(node.multiAddr).valueOr: @@ -140,6 +157,11 @@ proc toMixNodeTable(bootnodes: seq[MixNodePubInfo]): Table[PeerId, MixPubInfo] = continue mixNodes[peerId] = MixPubInfo.init(peerId, multiAddr, node.pubKey, peerPubKey.skkey) + + peermgr.addPeer( + RemotePeerInfo.init(peerId, @[multiAddr], mixPubKey = some(node.pubKey)) + ) + mix_pool_size.set(len(mixNodes)) info "using mix bootstrap nodes ", bootNodes = mixNodes return mixNodes @@ -152,7 +174,7 @@ proc new*( bootnodes: seq[MixNodePubInfo], ): WakuMixResult[T] = let mixPubKey = public(mixPrivKey) - info "mixPrivKey", mixPrivKey = mixPrivKey, mixPubKey = mixPubKey + info "mixPubKey", mixPubKey = mixPubKey let nodeMultiAddr = MultiAddress.init(nodeAddr).valueOr: return err("failed to parse mix node address: " & $nodeAddr & ", error: " & error) let localMixNodeInfo = initMixNodeInfo( @@ -160,17 +182,18 @@ proc new*( peermgr.switch.peerInfo.publicKey.skkey, peermgr.switch.peerInfo.privateKey.skkey, ) if bootnodes.len < mixMixPoolSize: - warn "publishing with mix won't work as there are less than 3 mix nodes in node pool" - let initTable = toMixNodeTable(bootnodes) + warn "publishing with mix won't work until there are 3 mix nodes in node pool" + let initTable = processBootNodes(bootnodes, peermgr) + if len(initTable) < mixMixPoolSize: - warn "publishing with mix won't work as there are less than 3 mix nodes in node 
pool" + warn "publishing with mix won't work until there are 3 mix nodes in node pool" var m = WakuMix(peerManager: peermgr, clusterId: clusterId, pubKey: mixPubKey) procCall MixProtocol(m).init(localMixNodeInfo, initTable, peermgr.switch) return ok(m) method start*(mix: WakuMix) = info "starting waku mix protocol" - #mix.nodePoolLoopHandle = mix.startMixNodePoolMgr() This can be re-enabled once discovery is addressed + mix.nodePoolLoopHandle = mix.startMixNodePoolMgr() method stop*(mix: WakuMix) {.async.} = if mix.nodePoolLoopHandle.isNil(): diff --git a/waku/waku_rendezvous/client.nim b/waku/waku_rendezvous/client.nim new file mode 100644 index 000000000..09e789774 --- /dev/null +++ b/waku/waku_rendezvous/client.nim @@ -0,0 +1,142 @@ +{.push raises: [].} + +import + std/[options, sequtils, tables], + results, + chronos, + chronicles, + libp2p/protocols/rendezvous, + libp2p/crypto/curve25519, + libp2p/switch, + libp2p/utils/semaphore + +import metrics except collect + +import + waku/node/peer_manager, + waku/waku_core/peers, + waku/waku_core/codecs, + ./common, + ./waku_peer_record + +logScope: + topics = "waku rendezvous client" + +declarePublicCounter rendezvousPeerFoundTotal, + "total number of peers found via rendezvous" + +type WakuRendezVousClient* = ref object + switch: Switch + peerManager: PeerManager + clusterId: uint16 + requestInterval: timer.Duration + periodicRequestFut: Future[void] + # Internal rendezvous instance for making requests + rdv: GenericRendezVous[WakuPeerRecord] + +const MaxSimultanesousAdvertisements = 5 +const RendezVousLookupInterval = 10.seconds + +proc requestAll*( + self: WakuRendezVousClient +): Future[Result[void, string]] {.async: (raises: []).} = + trace "waku rendezvous client requests started" + + let namespace = computeMixNamespace(self.clusterId) + + # Get a random WakuRDV peer + let rpi = self.peerManager.selectPeer(WakuRendezVousCodec).valueOr: + return err("could not get a peer supporting WakuRendezVousCodec") + + var 
records: seq[WakuPeerRecord] + try: + # Use the libp2p rendezvous request method + records = await self.rdv.request( + Opt.some(namespace), Opt.some(PeersRequestedCount), Opt.some(@[rpi.peerId]) + ) + except CatchableError as e: + return err("rendezvous request failed: " & e.msg) + + trace "waku rendezvous client request got peers", count = records.len + for record in records: + if not self.switch.peerStore.peerExists(record.peerId): + rendezvousPeerFoundTotal.inc() + if record.mixKey.len == 0 or record.peerId == self.switch.peerInfo.peerId: + continue + trace "adding peer from rendezvous", + peerId = record.peerId, addresses = $record.addresses, mixKey = record.mixKey + let rInfo = RemotePeerInfo.init( + record.peerId, + record.addresses, + mixPubKey = some(intoCurve25519Key(fromHex(record.mixKey))), + ) + self.peerManager.addPeer(rInfo) + + trace "waku rendezvous client request finished" + + return ok() + +proc periodicRequests(self: WakuRendezVousClient) {.async.} = + info "waku rendezvous periodic requests started", interval = self.requestInterval + + # infinite loop + while true: + await sleepAsync(self.requestInterval) + + (await self.requestAll()).isOkOr: + error "waku rendezvous requests failed", error = error + + # Exponential backoff + +#[ TODO: Reevaluate for mix, maybe be aggresive in the start until a sizeable pool is built and then backoff + self.requestInterval += self.requestInterval + + if self.requestInterval >= 1.days: + break ]# + +proc new*( + T: type WakuRendezVousClient, + switch: Switch, + peerManager: PeerManager, + clusterId: uint16, +): Result[T, string] {.raises: [].} = + # Create a minimal GenericRendezVous instance for client-side requests + # We don't need the full server functionality, just the request method + let rng = newRng() + let rdv = GenericRendezVous[WakuPeerRecord]( + switch: switch, + rng: rng, + sema: newAsyncSemaphore(MaxSimultanesousAdvertisements), + minDuration: rendezvous.MinimumAcceptedDuration, + maxDuration: 
rendezvous.MaximumDuration, + minTTL: rendezvous.MinimumAcceptedDuration.seconds.uint64, + maxTTL: rendezvous.MaximumDuration.seconds.uint64, + peers: @[], # Will be populated from selectPeer calls + cookiesSaved: initTable[PeerId, Table[string, seq[byte]]](), + peerRecordValidator: checkWakuPeerRecord, + ) + + # Set codec separately as it's inherited from LPProtocol + rdv.codec = WakuRendezVousCodec + + let client = T( + switch: switch, + peerManager: peerManager, + clusterId: clusterId, + requestInterval: RendezVousLookupInterval, + rdv: rdv, + ) + + info "waku rendezvous client initialized", clusterId = clusterId + + return ok(client) + +proc start*(self: WakuRendezVousClient) {.async: (raises: []).} = + self.periodicRequestFut = self.periodicRequests() + info "waku rendezvous client started" + +proc stopWait*(self: WakuRendezVousClient) {.async: (raises: []).} = + if not self.periodicRequestFut.isNil(): + await self.periodicRequestFut.cancelAndWait() + + info "waku rendezvous client stopped" diff --git a/waku/waku_rendezvous/common.nim b/waku/waku_rendezvous/common.nim index 6125ac860..18c633efb 100644 --- a/waku/waku_rendezvous/common.nim +++ b/waku/waku_rendezvous/common.nim @@ -11,6 +11,14 @@ const DefaultRequestsInterval* = 1.minutes const MaxRegistrationInterval* = 5.minutes const PeersRequestedCount* = 12 +proc computeMixNamespace*(clusterId: uint16): string = + var namespace = "rs/" + + namespace &= $clusterId + namespace &= "/mix" + + return namespace + proc computeNamespace*(clusterId: uint16, shard: uint16): string = var namespace = "rs/" diff --git a/waku/waku_rendezvous/protocol.nim b/waku/waku_rendezvous/protocol.nim index 0eb55d350..ed414fa42 100644 --- a/waku/waku_rendezvous/protocol.nim +++ b/waku/waku_rendezvous/protocol.nim @@ -1,70 +1,91 @@ {.push raises: [].} import - std/[sugar, options], + std/[sugar, options, sequtils, tables], results, chronos, chronicles, - metrics, + stew/byteutils, libp2p/protocols/rendezvous, + 
libp2p/protocols/rendezvous/protobuf, + libp2p/discovery/discoverymngr, + libp2p/utils/semaphore, + libp2p/utils/offsettedseq, + libp2p/crypto/curve25519, libp2p/switch, libp2p/utility +import metrics except collect + import ../node/peer_manager, ../common/callbacks, ../waku_enr/capabilities, ../waku_core/peers, - ../waku_core/topics, - ../waku_core/topics/pubsub_topic, - ./common + ../waku_core/codecs, + ./common, + ./waku_peer_record logScope: topics = "waku rendezvous" -declarePublicCounter rendezvousPeerFoundTotal, - "total number of peers found via rendezvous" - -type WakuRendezVous* = ref object - rendezvous: Rendezvous +type WakuRendezVous* = ref object of GenericRendezVous[WakuPeerRecord] peerManager: PeerManager clusterId: uint16 getShards: GetShards getCapabilities: GetCapabilities + getPeerRecord: GetWakuPeerRecord registrationInterval: timer.Duration periodicRegistrationFut: Future[void] - requestInterval: timer.Duration - periodicRequestFut: Future[void] +const MaximumNamespaceLen = 255 -proc batchAdvertise*( +method discover*( + self: WakuRendezVous, conn: Connection, d: Discover +) {.async: (raises: [CancelledError, LPStreamError]).} = + # Override discover method to avoid collect macro generic instantiation issues + trace "Received Discover", peerId = conn.peerId, ns = d.ns + await procCall GenericRendezVous[WakuPeerRecord](self).discover(conn, d) + +proc advertise*( self: WakuRendezVous, namespace: string, - ttl: Duration = DefaultRegistrationTTL, peers: seq[PeerId], + ttl: timer.Duration = self.minDuration, ): Future[Result[void, string]] {.async: (raises: []).} = - ## Register with all rendezvous peers under a namespace + trace "advertising via waku rendezvous", + namespace = namespace, ttl = ttl, peers = $peers, peerRecord = $self.getPeerRecord() + let se = SignedPayload[WakuPeerRecord].init( + self.switch.peerInfo.privateKey, self.getPeerRecord() + ).valueOr: + return + err("rendezvous advertisement failed: Failed to sign Waku Peer Record: " & 
$error) + let sprBuff = se.encode().valueOr: + return err("rendezvous advertisement failed: Failed to encode Signed Peer Record: " & $error) # rendezvous.advertise expects already opened connections # must dial first + var futs = collect(newSeq): for peerId in peers: - self.peerManager.dialPeer(peerId, RendezVousCodec) + self.peerManager.dialPeer(peerId, self.codec) let dialCatch = catch: await allFinished(futs) - futs = dialCatch.valueOr: - return err("batchAdvertise: " & error.msg) + if dialCatch.isErr(): + return err("advertise: " & dialCatch.error.msg) + + futs = dialCatch.get() let conns = collect(newSeq): for fut in futs: let catchable = catch: fut.read() - catchable.isOkOr: - warn "a rendezvous dial failed", cause = error.msg + if catchable.isErr(): + warn "a rendezvous dial failed", cause = catchable.error.msg continue let connOpt = catchable.get() @@ -74,149 +95,34 @@ proc batchAdvertise*( conn
collect(newSeq): - for fut in futs: - let catchable = catch: - fut.read() - - catchable.isOkOr: - warn "a rendezvous dial failed", cause = error.msg - continue - - let connOpt = catchable.get() - - let conn = connOpt.valueOr: - continue - - conn - - let reqCatch = catch: - await self.rendezvous.request(Opt.some(namespace), Opt.some(count), Opt.some(peers)) - - for conn in conns: - await conn.close() - - reqCatch.isOkOr: - return err("batchRequest: " & error.msg) - - return ok(reqCatch.get()) - -proc advertiseAll( +proc advertiseAll*( self: WakuRendezVous ): Future[Result[void, string]] {.async: (raises: []).} = - info "waku rendezvous advertisements started" + trace "waku rendezvous advertisements started" - let shards = self.getShards() - - let futs = collect(newSeq): - for shardId in shards: - # Get a random RDV peer for that shard - - let pubsub = - toPubsubTopic(RelayShard(clusterId: self.clusterId, shardId: shardId)) - - let rpi = self.peerManager.selectPeer(RendezVousCodec, some(pubsub)).valueOr: - continue - - let namespace = computeNamespace(self.clusterId, shardId) - - # Advertise yourself on that peer - self.batchAdvertise(namespace, DefaultRegistrationTTL, @[rpi.peerId]) - - if futs.len < 1: + let rpi = self.peerManager.selectPeer(self.codec).valueOr: return err("could not get a peer supporting RendezVousCodec") - let catchable = catch: - await allFinished(futs) + let namespace = computeMixNamespace(self.clusterId) - catchable.isOkOr: - return err(error.msg) + # Advertise yourself on that peer + let res = await self.advertise(namespace, @[rpi.peerId]) - for fut in catchable.get(): - if fut.failed(): - warn "a rendezvous advertisement failed", cause = fut.error.msg + trace "waku rendezvous advertisements finished" - info "waku rendezvous advertisements finished" - - return ok() - -proc initialRequestAll*( - self: WakuRendezVous -): Future[Result[void, string]] {.async: (raises: []).} = - info "waku rendezvous initial requests started" - - let shards = 
self.getShards() - - let futs = collect(newSeq): - for shardId in shards: - let namespace = computeNamespace(self.clusterId, shardId) - # Get a random RDV peer for that shard - let rpi = self.peerManager.selectPeer( - RendezVousCodec, - some(toPubsubTopic(RelayShard(clusterId: self.clusterId, shardId: shardId))), - ).valueOr: - continue - - # Ask for peer records for that shard - self.batchRequest(namespace, PeersRequestedCount, @[rpi.peerId]) - - if futs.len < 1: - return err("could not get a peer supporting RendezVousCodec") - - let catchable = catch: - await allFinished(futs) - - catchable.isOkOr: - return err(error.msg) - - for fut in catchable.get(): - if fut.failed(): - warn "a rendezvous request failed", cause = fut.error.msg - elif fut.finished(): - let res = fut.value() - - let records = res.valueOr: - warn "a rendezvous request failed", cause = $error - continue - - for record in records: - rendezvousPeerFoundTotal.inc() - self.peerManager.addPeer(record) - - info "waku rendezvous initial request finished" - - return ok() + return res proc periodicRegistration(self: WakuRendezVous) {.async.} = info "waku rendezvous periodic registration started", @@ -237,22 +143,6 @@ proc periodicRegistration(self: WakuRendezVous) {.async.} = # Back to normal interval if no errors self.registrationInterval = DefaultRegistrationInterval -proc periodicRequests(self: WakuRendezVous) {.async.} = - info "waku rendezvous periodic requests started", interval = self.requestInterval - - # infinite loop - while true: - (await self.initialRequestAll()).isOkOr: - error "waku rendezvous requests failed", error = error - - await sleepAsync(self.requestInterval) - - # Exponential backoff - self.requestInterval += self.requestInterval - - if self.requestInterval >= 1.days: - break - proc new*( T: type WakuRendezVous, switch: Switch, @@ -260,46 +150,88 @@ proc new*( clusterId: uint16, getShards: GetShards, getCapabilities: GetCapabilities, + getPeerRecord: GetWakuPeerRecord, ): Result[T, 
string] {.raises: [].} = - let rvCatchable = catch: - RendezVous.new(switch = switch, minDuration = DefaultRegistrationTTL) + let rng = newRng() + let wrv = T( + rng: rng, + salt: string.fromBytes(generateBytes(rng[], 8)), + registered: initOffsettedSeq[RegisteredData](), + expiredDT: Moment.now() - 1.days, + sema: newAsyncSemaphore(SemaphoreDefaultSize), + minDuration: rendezvous.MinimumAcceptedDuration, + maxDuration: rendezvous.MaximumDuration, + minTTL: rendezvous.MinimumAcceptedDuration.seconds.uint64, + maxTTL: rendezvous.MaximumDuration.seconds.uint64, + peerRecordValidator: checkWakuPeerRecord, + ) - let rv = rvCatchable.valueOr: - return err(error.msg) - - let mountCatchable = catch: - switch.mount(rv) - - mountCatchable.isOkOr: - return err(error.msg) - - var wrv = WakuRendezVous() - wrv.rendezvous = rv wrv.peerManager = peerManager wrv.clusterId = clusterId wrv.getShards = getShards wrv.getCapabilities = getCapabilities wrv.registrationInterval = DefaultRegistrationInterval - wrv.requestInterval = DefaultRequestsInterval + wrv.getPeerRecord = getPeerRecord + wrv.switch = switch + wrv.codec = WakuRendezVousCodec + + proc handleStream( + conn: Connection, proto: string + ) {.async: (raises: [CancelledError]).} = + try: + let + buf = await conn.readLp(4096) + msg = Message.decode(buf).tryGet() + case msg.msgType + of MessageType.Register: + #TODO: override this to store peers registered with us in peerstore with their info as well. 
+ await wrv.register(conn, msg.register.tryGet(), wrv.getPeerRecord()) + of MessageType.RegisterResponse: + trace "Got an unexpected Register Response", response = msg.registerResponse + of MessageType.Unregister: + wrv.unregister(conn, msg.unregister.tryGet()) + of MessageType.Discover: + await wrv.discover(conn, msg.discover.tryGet()) + of MessageType.DiscoverResponse: + trace "Got an unexpected Discover Response", response = msg.discoverResponse + except CancelledError as exc: + trace "cancelled rendezvous handler" + raise exc + except CatchableError as exc: + trace "exception in rendezvous handler", description = exc.msg + finally: + await conn.close() + + wrv.handler = handleStream info "waku rendezvous initialized", - clusterId = clusterId, shards = getShards(), capabilities = getCapabilities() + clusterId = clusterId, + shards = getShards(), + capabilities = getCapabilities(), + wakuPeerRecord = getPeerRecord() return ok(wrv) proc start*(self: WakuRendezVous) {.async: (raises: []).} = + # Start the parent GenericRendezVous (starts the register deletion loop) + if self.started: + warn "waku rendezvous already started" + return + try: + await procCall GenericRendezVous[WakuPeerRecord](self).start() + except CancelledError as exc: + error "failed to start GenericRendezVous", cause = exc.msg + return # start registering forever self.periodicRegistrationFut = self.periodicRegistration() - self.periodicRequestFut = self.periodicRequests() - info "waku rendezvous discovery started" proc stopWait*(self: WakuRendezVous) {.async: (raises: []).} = if not self.periodicRegistrationFut.isNil(): await self.periodicRegistrationFut.cancelAndWait() - if not self.periodicRequestFut.isNil(): - await self.periodicRequestFut.cancelAndWait() + # Stop the parent GenericRendezVous (stops the register deletion loop) + await GenericRendezVous[WakuPeerRecord](self).stop() info "waku rendezvous discovery stopped" diff --git a/waku/waku_rendezvous/waku_peer_record.nim 
b/waku/waku_rendezvous/waku_peer_record.nim new file mode 100644 index 000000000..d6e700eb5 --- /dev/null +++ b/waku/waku_rendezvous/waku_peer_record.nim @@ -0,0 +1,74 @@ +import std/times, sugar + +import + libp2p/[ + protocols/rendezvous, + signed_envelope, + multicodec, + multiaddress, + protobuf/minprotobuf, + peerid, + ] + +type WakuPeerRecord* = object + # Considering only mix as of now, but we can keep extending this to include all capabilities part of Waku ENR + peerId*: PeerId + seqNo*: uint64 + addresses*: seq[MultiAddress] + mixKey*: string + +proc payloadDomain*(T: typedesc[WakuPeerRecord]): string = + $multiCodec("libp2p-custom-peer-record") + +proc payloadType*(T: typedesc[WakuPeerRecord]): seq[byte] = + @[(byte) 0x30, (byte) 0x00, (byte) 0x00] + +proc init*( + T: typedesc[WakuPeerRecord], + peerId: PeerId, + seqNo = getTime().toUnix().uint64, + addresses: seq[MultiAddress], + mixKey: string, +): T = + WakuPeerRecord(peerId: peerId, seqNo: seqNo, addresses: addresses, mixKey: mixKey) + +proc decode*( + T: typedesc[WakuPeerRecord], buffer: seq[byte] +): Result[WakuPeerRecord, ProtoError] = + let pb = initProtoBuffer(buffer) + var record = WakuPeerRecord() + + ?pb.getRequiredField(1, record.peerId) + ?pb.getRequiredField(2, record.seqNo) + discard ?pb.getRepeatedField(3, record.addresses) + + if record.addresses.len == 0: + return err(ProtoError.RequiredFieldMissing) + + ?pb.getRequiredField(4, record.mixKey) + + return ok(record) + +proc encode*(record: WakuPeerRecord): seq[byte] = + var pb = initProtoBuffer() + + pb.write(1, record.peerId) + pb.write(2, record.seqNo) + + for address in record.addresses: + pb.write(3, address) + + pb.write(4, record.mixKey) + + pb.finish() + return pb.buffer + +proc checkWakuPeerRecord*( + _: WakuPeerRecord, spr: seq[byte], peerId: PeerId +): Result[void, string] {.gcsafe.} = + if spr.len == 0: + return err("Empty peer record") + let signedEnv = ?SignedPayload[WakuPeerRecord].decode(spr).mapErr(x => $x) + if 
signedEnv.data.peerId != peerId: + return err("Bad Peer ID") + return ok() From 088e3108c86085c2d3c072c29964ae96f285c6d2 Mon Sep 17 00:00:00 2001 From: Prem Chaitanya Prathi Date: Sat, 22 Nov 2025 08:11:05 +0530 Subject: [PATCH 06/70] use exit==dest approach for mix (#3642) --- Makefile | 3 +++ apps/chat2mix/chat2mix.nim | 7 +++++-- examples/lightpush_mix/lightpush_publisher_mix.nim | 5 ++++- simulations/mixnet/config.toml | 4 ++-- .../mixnet/{run_lp_service_node.sh => run_mix_node.sh} | 0 waku/node/kernel_api/lightpush.nim | 2 +- waku/waku_mix/protocol.nim | 10 +++++----- waku/waku_rendezvous/protocol.nim | 3 +++ 8 files changed, 23 insertions(+), 11 deletions(-) rename simulations/mixnet/{run_lp_service_node.sh => run_mix_node.sh} (100%) diff --git a/Makefile b/Makefile index 029313c99..f65daff71 100644 --- a/Makefile +++ b/Makefile @@ -143,6 +143,9 @@ ifeq ($(USE_LIBBACKTRACE), 0) NIM_PARAMS := $(NIM_PARAMS) -d:disable_libbacktrace endif +# enable experimental exit is dest feature in libp2p mix +NIM_PARAMS := $(NIM_PARAMS) -d:libp2p_mix_experimental_exit_is_dest + libbacktrace: + $(MAKE) -C vendor/nim-libbacktrace --no-print-directory BUILD_CXX_LIB=0 diff --git a/apps/chat2mix/chat2mix.nim b/apps/chat2mix/chat2mix.nim index 3fdd7bc9c..45fd1fa2d 100644 --- a/apps/chat2mix/chat2mix.nim +++ b/apps/chat2mix/chat2mix.nim @@ -82,6 +82,8 @@ type PrivateKey* = crypto.PrivateKey Topic* = waku_core.PubsubTopic +const MinMixNodePoolSize = 4 + ##################### ## chat2 protobufs ## ##################### @@ -592,6 +594,7 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} = node.peerManager.addServicePeer(servicePeerInfo, WakuLightpushCodec) node.peerManager.addServicePeer(servicePeerInfo, WakuPeerExchangeCodec) + #node.peerManager.addServicePeer(servicePeerInfo, WakuRendezVousCodec) # Start maintaining subscription asyncSpawn maintainSubscription( @@ -599,12 +602,12 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} = ) echo 
"waiting for mix nodes to be discovered..." while true: - if node.getMixNodePoolSize() >= 3: + if node.getMixNodePoolSize() >= MinMixNodePoolSize: break discard await node.fetchPeerExchangePeers() await sleepAsync(1000) - while node.getMixNodePoolSize() < 3: + while node.getMixNodePoolSize() < MinMixNodePoolSize: info "waiting for mix nodes to be discovered", currentpoolSize = node.getMixNodePoolSize() await sleepAsync(1000) diff --git a/examples/lightpush_mix/lightpush_publisher_mix.nim b/examples/lightpush_mix/lightpush_publisher_mix.nim index bb4bb4c4e..4219cd665 100644 --- a/examples/lightpush_mix/lightpush_publisher_mix.nim +++ b/examples/lightpush_mix/lightpush_publisher_mix.nim @@ -144,7 +144,7 @@ proc setupAndPublish(rng: ref HmacDrbgContext, conf: LightPushMixConf) {.async.} conn = connOpt.get() else: conn = node.wakuMix.toConnection( - MixDestination.init(dPeerId, pxPeerInfo.addrs[0]), # destination lightpush peer + MixDestination.exitNode(dPeerId), # destination lightpush peer WakuLightPushCodec, # protocol codec which will be used over the mix connection MixParameters(expectReply: Opt.some(true), numSurbs: Opt.some(byte(1))), # mix parameters indicating we expect a single reply @@ -163,6 +163,9 @@ proc setupAndPublish(rng: ref HmacDrbgContext, conf: LightPushMixConf) {.async.} ephemeral: true, # tell store nodes to not store it timestamp: getNowInNanosecondTime(), ) # current timestamp + let res = await node.wakuLightpushClient.publishWithConn( + LightpushPubsubTopic, message, conn, dPeerId + ) let startTime = getNowInNanosecondTime() diff --git a/simulations/mixnet/config.toml b/simulations/mixnet/config.toml index 17e9242d3..3719d8177 100644 --- a/simulations/mixnet/config.toml +++ b/simulations/mixnet/config.toml @@ -1,6 +1,6 @@ log-level = "INFO" relay = true -#mix = true +mix = true filter = true store = false lightpush = true @@ -18,7 +18,7 @@ num-shards-in-network = 1 shard = [0] agent-string = "nwaku-mix" nodekey = 
"f98e3fba96c32e8d1967d460f1b79457380e1a895f7971cecc8528abe733781a" -#mixkey = "a87db88246ec0eedda347b9b643864bee3d6933eb15ba41e6d58cb678d813258" +mixkey = "a87db88246ec0eedda347b9b643864bee3d6933eb15ba41e6d58cb678d813258" rendezvous = true listen-address = "127.0.0.1" nat = "extip:127.0.0.1" diff --git a/simulations/mixnet/run_lp_service_node.sh b/simulations/mixnet/run_mix_node.sh similarity index 100% rename from simulations/mixnet/run_lp_service_node.sh rename to simulations/mixnet/run_mix_node.sh diff --git a/waku/node/kernel_api/lightpush.nim b/waku/node/kernel_api/lightpush.nim index f42cb146e..8df6291b1 100644 --- a/waku/node/kernel_api/lightpush.nim +++ b/waku/node/kernel_api/lightpush.nim @@ -199,7 +199,7 @@ proc lightpushPublishHandler( if mixify: #indicates we want to use mix to send the message #TODO: How to handle multiple addresses? let conn = node.wakuMix.toConnection( - MixDestination.init(peer.peerId, peer.addrs[0]), + MixDestination.exitNode(peer.peerId), WakuLightPushCodec, MixParameters(expectReply: Opt.some(true), numSurbs: Opt.some(byte(1))), # indicating we only want a single path to be used for reply hence numSurbs = 1 diff --git a/waku/waku_mix/protocol.nim b/waku/waku_mix/protocol.nim index d3d765df8..366d5da91 100644 --- a/waku/waku_mix/protocol.nim +++ b/waku/waku_mix/protocol.nim @@ -21,7 +21,7 @@ import logScope: topics = "waku mix" -const mixMixPoolSize = 3 +const minMixPoolSize = 4 type WakuMix* = ref object of MixProtocol @@ -181,12 +181,12 @@ proc new*( peermgr.switch.peerInfo.peerId, nodeMultiAddr, mixPubKey, mixPrivKey, peermgr.switch.peerInfo.publicKey.skkey, peermgr.switch.peerInfo.privateKey.skkey, ) - if bootnodes.len < mixMixPoolSize: - warn "publishing with mix won't work until there are 3 mix nodes in node pool" + if bootnodes.len < minMixPoolSize: + warn "publishing with mix won't work until atleast 3 mix nodes in node pool" let initTable = processBootNodes(bootnodes, peermgr) - if len(initTable) < mixMixPoolSize: - warn 
"publishing with mix won't work until there are 3 mix nodes in node pool" + if len(initTable) < minMixPoolSize: + warn "publishing with mix won't work until atleast 3 mix nodes in node pool" var m = WakuMix(peerManager: peermgr, clusterId: clusterId, pubKey: mixPubKey) procCall MixProtocol(m).init(localMixNodeInfo, initTable, peermgr.switch) return ok(m) diff --git a/waku/waku_rendezvous/protocol.nim b/waku/waku_rendezvous/protocol.nim index ed414fa42..7b97375ff 100644 --- a/waku/waku_rendezvous/protocol.nim +++ b/waku/waku_rendezvous/protocol.nim @@ -234,4 +234,7 @@ proc stopWait*(self: WakuRendezVous) {.async: (raises: []).} = # Stop the parent GenericRendezVous (stops the register deletion loop) await GenericRendezVous[WakuPeerRecord](self).stop() + # Stop the parent GenericRendezVous (stops the register deletion loop) + await GenericRendezVous[WakuPeerRecord](self).stop() + info "waku rendezvous discovery stopped" From 454b098ac52df75e5d5de5010f9edb42cf8b0d52 Mon Sep 17 00:00:00 2001 From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> Date: Mon, 24 Nov 2025 10:16:37 +0100 Subject: [PATCH 07/70] new metric in postgres_driver to estimate payload stats (#3596) --- .../driver/postgres_driver/postgres_driver.nim | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/waku/waku_archive/driver/postgres_driver/postgres_driver.nim b/waku/waku_archive/driver/postgres_driver/postgres_driver.nim index 842d7cbc2..9b0e14c84 100644 --- a/waku/waku_archive/driver/postgres_driver/postgres_driver.nim +++ b/waku/waku_archive/driver/postgres_driver/postgres_driver.nim @@ -5,6 +5,7 @@ import stew/[byteutils, arrayops], results, chronos, + metrics, db_connector/[postgres, db_common], chronicles import @@ -16,6 +17,9 @@ import ./postgres_healthcheck, ./partitions_manager +declarePublicGauge postgres_payload_size_bytes, + "Payload size in bytes of correctly stored messages" + type PostgresDriver* = ref object of ArchiveDriver ## Establish a separate 
pools for read/write operations writeConnPool: PgAsyncPool @@ -333,7 +337,7 @@ method put*( return err("could not put msg in messages table: " & $error) ## Now add the row to messages_lookup - return await s.writeConnPool.runStmt( + let ret = await s.writeConnPool.runStmt( InsertRowInMessagesLookupStmtName, InsertRowInMessagesLookupStmtDefinition, @[messageHash, timestamp], @@ -341,6 +345,10 @@ method put*( @[int32(0), int32(0)], ) + if ret.isOk(): + postgres_payload_size_bytes.set(message.payload.len) + return ret + method getAllMessages*( s: PostgresDriver ): Future[ArchiveDriverResult[seq[ArchiveRow]]] {.async.} = From c0a7debfd157a158c7973fc1135150df804f97e5 Mon Sep 17 00:00:00 2001 From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> Date: Tue, 25 Nov 2025 10:05:40 +0100 Subject: [PATCH 08/70] Adapt makefile for libwaku windows (#3648) --- Makefile | 6 +++- scripts/libwaku_windows_setup.mk | 53 ++++++++++++++++++++++++++++++++ 2 files changed, 58 insertions(+), 1 deletion(-) create mode 100644 scripts/libwaku_windows_setup.mk diff --git a/Makefile b/Makefile index f65daff71..2f15ccd71 100644 --- a/Makefile +++ b/Makefile @@ -43,6 +43,9 @@ ifeq ($(detected_OS),Windows) LIBS = -lws2_32 -lbcrypt -liphlpapi -luserenv -lntdll -lminiupnpc -lnatpmp -lpq NIM_PARAMS += $(foreach lib,$(LIBS),--passL:"$(lib)") + + export PATH := /c/msys64/usr/bin:/c/msys64/mingw64/bin:/c/msys64/usr/lib:/c/msys64/mingw64/lib:$(PATH) + endif ########## @@ -424,13 +427,13 @@ docker-liteprotocoltester-push: STATIC ?= 0 - libwaku: | build deps librln rm -f build/libwaku* ifeq ($(STATIC), 1) echo -e $(BUILD_MSG) "build/$@.a" && $(ENV_SCRIPT) nim libwakuStatic $(NIM_PARAMS) waku.nims else ifeq ($(detected_OS),Windows) + make -f scripts/libwaku_windows_setup.mk windows-setup echo -e $(BUILD_MSG) "build/$@.dll" && $(ENV_SCRIPT) nim libwakuDynamic $(NIM_PARAMS) waku.nims else echo -e $(BUILD_MSG) "build/$@.so" && $(ENV_SCRIPT) nim libwakuDynamic $(NIM_PARAMS) waku.nims @@ -546,3 
+549,4 @@ release-notes: sed -E 's@#([0-9]+)@[#\1](https://github.com/waku-org/nwaku/issues/\1)@g' # I could not get the tool to replace issue ids with links, so using sed for now, # asked here: https://github.com/bvieira/sv4git/discussions/101 + diff --git a/scripts/libwaku_windows_setup.mk b/scripts/libwaku_windows_setup.mk new file mode 100644 index 000000000..503d0c405 --- /dev/null +++ b/scripts/libwaku_windows_setup.mk @@ -0,0 +1,53 @@ +# --------------------------------------------------------- +# Windows Setup Makefile +# --------------------------------------------------------- + +# Extend PATH (Make preserves environment variables) +export PATH := /c/msys64/usr/bin:/c/msys64/mingw64/bin:/c/msys64/usr/lib:/c/msys64/mingw64/lib:$(PATH) + +# Tools required +DEPS = gcc g++ make cmake cargo upx rustc python + +# Default target +.PHONY: windows-setup +windows-setup: check-deps update-submodules create-tmp libunwind miniupnpc libnatpmp + @echo "Windows setup completed successfully!" + +.PHONY: check-deps +check-deps: + @echo "Checking libwaku build dependencies..." + @for dep in $(DEPS); do \ + if ! which $$dep >/dev/null 2>&1; then \ + echo "✗ Missing dependency: $$dep"; \ + exit 1; \ + else \ + echo "✓ Found: $$dep"; \ + fi; \ + done + +.PHONY: update-submodules +update-submodules: + @echo "Updating libwaku git submodules..." + git submodule update --init --recursive + +.PHONY: create-tmp +create-tmp: + @echo "Creating tmp directory..." + mkdir -p tmp + +.PHONY: libunwind +libunwind: + @echo "Building libunwind..." + cd vendor/nim-libbacktrace && make all V=1 + +.PHONY: miniupnpc +miniupnpc: + @echo "Building miniupnpc..." + cd vendor/nim-nat-traversal/vendor/miniupnp/miniupnpc && \ + make -f Makefile.mingw CC=gcc CXX=g++ libminiupnpc.a V=1 + +.PHONY: libnatpmp +libnatpmp: + @echo "Building libnatpmp..." 
+ cd vendor/nim-nat-traversal/vendor/libnatpmp-upstream && \ + make CC="gcc -fPIC -D_WIN32_WINNT=0x0600 -DNATPMP_STATICLIB" libnatpmp.a V=1 From 1e73213a3604b0113a13b1ca2157db3276c78a4d Mon Sep 17 00:00:00 2001 From: Sergei Tikhomirov Date: Fri, 28 Nov 2025 10:41:20 +0100 Subject: [PATCH 09/70] chore: Lightpush minor refactor (#3538) * chore: refactor Lightpush (more DRY) * chore: apply review suggestions Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> --------- Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> --- .../lightpush_mix/lightpush_publisher_mix.nim | 6 +- tests/waku_lightpush/test_ratelimit.nim | 8 +- waku/node/kernel_api/lightpush.nim | 4 +- waku/waku_lightpush/client.nim | 145 +++++++----------- waku/waku_lightpush/common.nim | 15 +- waku/waku_lightpush/protocol.nim | 12 +- 6 files changed, 76 insertions(+), 114 deletions(-) diff --git a/examples/lightpush_mix/lightpush_publisher_mix.nim b/examples/lightpush_mix/lightpush_publisher_mix.nim index 4219cd665..104de8552 100644 --- a/examples/lightpush_mix/lightpush_publisher_mix.nim +++ b/examples/lightpush_mix/lightpush_publisher_mix.nim @@ -163,9 +163,9 @@ proc setupAndPublish(rng: ref HmacDrbgContext, conf: LightPushMixConf) {.async.} ephemeral: true, # tell store nodes to not store it timestamp: getNowInNanosecondTime(), ) # current timestamp - let res = await node.wakuLightpushClient.publishWithConn( - LightpushPubsubTopic, message, conn, dPeerId - ) + + let res = + await node.wakuLightpushClient.publish(some(LightpushPubsubTopic), message, conn) let startTime = getNowInNanosecondTime() diff --git a/tests/waku_lightpush/test_ratelimit.nim b/tests/waku_lightpush/test_ratelimit.nim index b2dcdc7b5..7420a4e56 100644 --- a/tests/waku_lightpush/test_ratelimit.nim +++ b/tests/waku_lightpush/test_ratelimit.nim @@ -37,7 +37,7 @@ suite "Rate limited push service": handlerFuture = newFuture[(string, WakuMessage)]() let requestRes = - await 
client.publish(some(DefaultPubsubTopic), message, peer = serverPeerId) + await client.publish(some(DefaultPubsubTopic), message, serverPeerId) check await handlerFuture.withTimeout(50.millis) @@ -66,7 +66,7 @@ suite "Rate limited push service": var endTime = Moment.now() var elapsed: Duration = (endTime - startTime) await sleepAsync(tokenPeriod - elapsed + firstWaitExtend) - firstWaitEXtend = 100.millis + firstWaitExtend = 100.millis ## Cleanup await allFutures(clientSwitch.stop(), serverSwitch.stop()) @@ -99,7 +99,7 @@ suite "Rate limited push service": let message = fakeWakuMessage() handlerFuture = newFuture[(string, WakuMessage)]() let requestRes = - await client.publish(some(DefaultPubsubTopic), message, peer = serverPeerId) + await client.publish(some(DefaultPubsubTopic), message, serverPeerId) discard await handlerFuture.withTimeout(10.millis) check: @@ -114,7 +114,7 @@ suite "Rate limited push service": let message = fakeWakuMessage() handlerFuture = newFuture[(string, WakuMessage)]() let requestRes = - await client.publish(some(DefaultPubsubTopic), message, peer = serverPeerId) + await client.publish(some(DefaultPubsubTopic), message, serverPeerId) discard await handlerFuture.withTimeout(10.millis) check: diff --git a/waku/node/kernel_api/lightpush.nim b/waku/node/kernel_api/lightpush.nim index 8df6291b1..9451767ac 100644 --- a/waku/node/kernel_api/lightpush.nim +++ b/waku/node/kernel_api/lightpush.nim @@ -210,9 +210,7 @@ proc lightpushPublishHandler( "Waku lightpush with mix not available", ) - return await node.wakuLightpushClient.publishWithConn( - pubsubTopic, message, conn, peer.peerId - ) + return await node.wakuLightpushClient.publish(some(pubsubTopic), message, conn) else: return await node.wakuLightpushClient.publish(some(pubsubTopic), message, peer) diff --git a/waku/waku_lightpush/client.nim b/waku/waku_lightpush/client.nim index d68552304..b528b4c76 100644 --- a/waku/waku_lightpush/client.nim +++ b/waku/waku_lightpush/client.nim @@ -17,8 +17,8 
@@ logScope: topics = "waku lightpush client" type WakuLightPushClient* = ref object - peerManager*: PeerManager rng*: ref rand.HmacDrbgContext + peerManager*: PeerManager publishObservers: seq[PublishObserver] proc new*( @@ -29,33 +29,31 @@ proc new*( proc addPublishObserver*(wl: WakuLightPushClient, obs: PublishObserver) = wl.publishObservers.add(obs) -proc sendPushRequest( - wl: WakuLightPushClient, - req: LightPushRequest, - peer: PeerId | RemotePeerInfo, - conn: Option[Connection] = none(Connection), -): Future[WakuLightPushResult] {.async.} = - let connection = conn.valueOr: - (await wl.peerManager.dialPeer(peer, WakuLightPushCodec)).valueOr: - waku_lightpush_v3_errors.inc(labelValues = [dialFailure]) - return lighpushErrorResult( - LightPushErrorCode.NO_PEERS_TO_RELAY, - dialFailure & ": " & $peer & " is not accessible", - ) +proc ensureTimestampSet(message: var WakuMessage) = + if message.timestamp == 0: + message.timestamp = getNowInNanosecondTime() - defer: - await connection.closeWithEOF() +## Short log string for peer identifiers (overloads for convenience) +func shortPeerId(peer: PeerId): string = + shortLog(peer) + +func shortPeerId(peer: RemotePeerInfo): string = + shortLog(peer.peerId) + +proc sendPushRequestToConn( + wl: WakuLightPushClient, request: LightPushRequest, conn: Connection +): Future[WakuLightPushResult] {.async.} = try: - await connection.writeLP(req.encode().buffer) - except CatchableError: - error "failed to send push request", error = getCurrentExceptionMsg() + await conn.writeLp(request.encode().buffer) + except LPStreamRemoteClosedError: + error "Failed to write request to peer", error = getCurrentExceptionMsg() return lightpushResultInternalError( - "failed to send push request: " & getCurrentExceptionMsg() + "Failed to write request to peer: " & getCurrentExceptionMsg() ) var buffer: seq[byte] try: - buffer = await connection.readLp(DefaultMaxRpcSize.int) + buffer = await conn.readLp(DefaultMaxRpcSize.int) except 
LPStreamRemoteClosedError: error "Failed to read response from peer", error = getCurrentExceptionMsg() return lightpushResultInternalError( @@ -66,10 +64,12 @@ proc sendPushRequest( waku_lightpush_v3_errors.inc(labelValues = [decodeRpcFailure]) return lightpushResultInternalError(decodeRpcFailure) - if response.requestId != req.requestId and - response.statusCode != LightPushErrorCode.TOO_MANY_REQUESTS: + let requestIdMismatch = response.requestId != request.requestId + let tooManyRequests = response.statusCode == LightPushErrorCode.TOO_MANY_REQUESTS + if requestIdMismatch and (not tooManyRequests): + # response with TOO_MANY_REQUESTS error code has no requestId by design error "response failure, requestId mismatch", - requestId = req.requestId, responseRequestId = response.requestId + requestId = request.requestId, responseRequestId = response.requestId return lightpushResultInternalError("response failure, requestId mismatch") return toPushResult(response) @@ -78,88 +78,49 @@ proc publish*( wl: WakuLightPushClient, pubSubTopic: Option[PubsubTopic] = none(PubsubTopic), wakuMessage: WakuMessage, - peer: PeerId | RemotePeerInfo, + dest: Connection | PeerId | RemotePeerInfo, ): Future[WakuLightPushResult] {.async, gcsafe.} = + let conn = + when dest is Connection: + dest + else: + (await wl.peerManager.dialPeer(dest, WakuLightPushCodec)).valueOr: + waku_lightpush_v3_errors.inc(labelValues = [dialFailure]) + return lighpushErrorResult( + LightPushErrorCode.NO_PEERS_TO_RELAY, + "Peer is not accessible: " & dialFailure & " - " & $dest, + ) + + defer: + await conn.closeWithEOF() + var message = wakuMessage - if message.timestamp == 0: - message.timestamp = getNowInNanosecondTime() + ensureTimestampSet(message) - when peer is PeerId: - info "publish", - peerId = shortLog(peer), - msg_hash = computeMessageHash(pubsubTopic.get(""), message).to0xHex - else: - info "publish", - peerId = shortLog(peer.peerId), - msg_hash = computeMessageHash(pubsubTopic.get(""), 
message).to0xHex + let msgHash = computeMessageHash(pubSubTopic.get(""), message).to0xHex() + info "publish", + myPeerId = wl.peerManager.switch.peerInfo.peerId, + peerId = shortPeerId(conn.peerId), + msgHash = msgHash, + sentTime = getNowInNanosecondTime() - let pushRequest = LightpushRequest( - requestId: generateRequestId(wl.rng), pubSubTopic: pubSubTopic, message: message + let request = LightpushRequest( + requestId: generateRequestId(wl.rng), pubsubTopic: pubSubTopic, message: message ) - let publishedCount = ?await wl.sendPushRequest(pushRequest, peer) + let relayPeerCount = ?await wl.sendPushRequestToConn(request, conn) for obs in wl.publishObservers: obs.onMessagePublished(pubSubTopic.get(""), message) - return lightpushSuccessResult(publishedCount) + return lightpushSuccessResult(relayPeerCount) proc publishToAny*( - wl: WakuLightPushClient, pubSubTopic: PubsubTopic, wakuMessage: WakuMessage + wl: WakuLightPushClient, pubsubTopic: PubsubTopic, wakuMessage: WakuMessage ): Future[WakuLightPushResult] {.async, gcsafe.} = - ## This proc is similar to the publish one but in this case - ## we don't specify a particular peer and instead we get it from peer manager - - var message = wakuMessage - if message.timestamp == 0: - message.timestamp = getNowInNanosecondTime() - + # Like publish, but selects a peer automatically from the peer manager let peer = wl.peerManager.selectPeer(WakuLightPushCodec).valueOr: # TODO: check if it matches the situation - shall we distinguish client-side missing peers from server-side?
return lighpushErrorResult( LightPushErrorCode.NO_PEERS_TO_RELAY, "no suitable remote peers" ) - - info "publishToAny", - my_peer_id = wl.peerManager.switch.peerInfo.peerId, - peer_id = peer.peerId, - msg_hash = computeMessageHash(pubsubTopic, message).to0xHex, - sentTime = getNowInNanosecondTime() - - let pushRequest = LightpushRequest( - requestId: generateRequestId(wl.rng), - pubSubTopic: some(pubSubTopic), - message: message, - ) - let publishedCount = ?await wl.sendPushRequest(pushRequest, peer) - - for obs in wl.publishObservers: - obs.onMessagePublished(pubSubTopic, message) - - return lightpushSuccessResult(publishedCount) - -proc publishWithConn*( - wl: WakuLightPushClient, - pubSubTopic: PubsubTopic, - message: WakuMessage, - conn: Connection, - destPeer: PeerId, -): Future[WakuLightPushResult] {.async, gcsafe.} = - info "publishWithConn", - my_peer_id = wl.peerManager.switch.peerInfo.peerId, - peer_id = destPeer, - msg_hash = computeMessageHash(pubsubTopic, message).to0xHex, - sentTime = getNowInNanosecondTime() - - let pushRequest = LightpushRequest( - requestId: generateRequestId(wl.rng), - pubSubTopic: some(pubSubTopic), - message: message, - ) - #TODO: figure out how to not pass destPeer as this is just a hack - let publishedCount = - ?await wl.sendPushRequest(pushRequest, destPeer, conn = some(conn)) - - for obs in wl.publishObservers: - obs.onMessagePublished(pubSubTopic, message) - - return lightpushSuccessResult(publishedCount) + return await wl.publish(some(pubsubTopic), wakuMessage, peer) diff --git a/waku/waku_lightpush/common.nim b/waku/waku_lightpush/common.nim index f2687834e..9c2ea7ced 100644 --- a/waku/waku_lightpush/common.nim +++ b/waku/waku_lightpush/common.nim @@ -35,7 +35,15 @@ func isSuccess*(response: LightPushResponse): bool = func toPushResult*(response: LightPushResponse): WakuLightPushResult = if isSuccess(response): - return ok(response.relayPeerCount.get(0)) + let relayPeerCount = response.relayPeerCount.get(0) + return ( + 
if (relayPeerCount == 0): + # Consider publishing to zero peers an error even if the service node + # sent us a "successful" response with zero peers + err((LightPushErrorCode.NO_PEERS_TO_RELAY, response.statusDesc)) + else: + ok(relayPeerCount) + ) else: return err((response.statusCode, response.statusDesc)) @@ -51,11 +59,6 @@ func lightpushResultBadRequest*(msg: string): WakuLightPushResult = func lightpushResultServiceUnavailable*(msg: string): WakuLightPushResult = return err((LightPushErrorCode.SERVICE_NOT_AVAILABLE, some(msg))) -func lighpushErrorResult*( - statusCode: LightpushStatusCode, desc: Option[string] -): WakuLightPushResult = - return err((statusCode, desc)) - func lighpushErrorResult*( statusCode: LightpushStatusCode, desc: string ): WakuLightPushResult = diff --git a/waku/waku_lightpush/protocol.nim b/waku/waku_lightpush/protocol.nim index 2e8c9c2f1..95bfc003e 100644 --- a/waku/waku_lightpush/protocol.nim +++ b/waku/waku_lightpush/protocol.nim @@ -78,9 +78,9 @@ proc handleRequest( proc handleRequest*( wl: WakuLightPush, peerId: PeerId, buffer: seq[byte] ): Future[LightPushResponse] {.async.} = - let pushRequest = LightPushRequest.decode(buffer).valueOr: + let request = LightPushRequest.decode(buffer).valueOr: let desc = decodeRpcFailure & ": " & $error - error "failed to push message", error = desc + error "failed to decode Lightpush request", error = desc let errorCode = LightPushErrorCode.BAD_REQUEST waku_lightpush_v3_errors.inc(labelValues = [$errorCode]) return LightPushResponse( @@ -89,16 +89,16 @@ proc handleRequest*( statusDesc: some(desc), ) - let relayPeerCount = (await handleRequest(wl, peerId, pushRequest)).valueOr: + let relayPeerCount = (await wl.handleRequest(peerId, request)).valueOr: let desc = error.desc waku_lightpush_v3_errors.inc(labelValues = [$error.code]) error "failed to push message", error = desc return LightPushResponse( - requestId: pushRequest.requestId, statusCode: error.code, statusDesc: desc + requestId: 
request.requestId, statusCode: error.code, statusDesc: desc ) return LightPushResponse( - requestId: pushRequest.requestId, + requestId: request.requestId, statusCode: LightPushSuccessCode.SUCCESS, statusDesc: none[string](), relayPeerCount: some(relayPeerCount), @@ -123,7 +123,7 @@ proc initProtocolHandler(wl: WakuLightPush) = ) try: - rpc = await handleRequest(wl, conn.peerId, buffer) + rpc = await wl.handleRequest(conn.peerId, buffer) except CatchableError: error "lightpush failed handleRequest", error = getCurrentExceptionMsg() do: From c6cf34df067a1362207a815e2363d033367abac8 Mon Sep 17 00:00:00 2001 From: Fabiana Cecin Date: Fri, 28 Nov 2025 14:20:36 -0300 Subject: [PATCH 10/70] feat(tests): robustify waku_rln_relay test utils (#3650) --- tests/waku_rln_relay/utils_onchain.nim | 25 +++++++++++++++++++------ 1 file changed, 19 insertions(+), 6 deletions(-) diff --git a/tests/waku_rln_relay/utils_onchain.nim b/tests/waku_rln_relay/utils_onchain.nim index 85f627aa0..06e4fcdcf 100644 --- a/tests/waku_rln_relay/utils_onchain.nim +++ b/tests/waku_rln_relay/utils_onchain.nim @@ -82,6 +82,10 @@ proc getForgePath(): string = forgePath = joinPath(forgePath, ".foundry/bin/forge") return $forgePath +template execForge(cmd: string): tuple[output: string, exitCode: int] = + # unset env vars that affect e.g. 
"forge script" before running forge + execCmdEx("unset ETH_FROM ETH_PASSWORD && " & cmd) + contract(ERC20Token): proc allowance(owner: Address, spender: Address): UInt256 {.view.} proc balanceOf(account: Address): UInt256 {.view.} @@ -225,11 +229,14 @@ proc deployTestToken*( # Deploy TestToken contract let forgeCmdTestToken = fmt"""cd {submodulePath} && {forgePath} script test/TestToken.sol --broadcast -vvv --rpc-url http://localhost:8540 --tc TestTokenFactory --private-key {pk} && rm -rf broadcast/*/*/run-1*.json && rm -rf cache/*/*/run-1*.json""" - let (outputDeployTestToken, exitCodeDeployTestToken) = execCmdEx(forgeCmdTestToken) + let (outputDeployTestToken, exitCodeDeployTestToken) = execForge(forgeCmdTestToken) trace "Executed forge command to deploy TestToken contract", output = outputDeployTestToken if exitCodeDeployTestToken != 0: - return error("Forge command to deploy TestToken contract failed") + error "Forge command to deploy TestToken contract failed", + error = outputDeployTestToken + return + err("Forge command to deploy TestToken contract failed: " & outputDeployTestToken) # Parse the command output to find contract address let testTokenAddress = getContractAddressFromDeployScriptOutput(outputDeployTestToken).valueOr: @@ -351,7 +358,7 @@ proc executeForgeContractDeployScripts*( let forgeCmdPriceCalculator = fmt"""cd {submodulePath} && {forgePath} script script/Deploy.s.sol --broadcast -vvvv --rpc-url http://localhost:8540 --tc DeployPriceCalculator --private-key {privateKey} && rm -rf broadcast/*/*/run-1*.json && rm -rf cache/*/*/run-1*.json""" let (outputDeployPriceCalculator, exitCodeDeployPriceCalculator) = - execCmdEx(forgeCmdPriceCalculator) + execForge(forgeCmdPriceCalculator) trace "Executed forge command to deploy LinearPriceCalculator contract", output = outputDeployPriceCalculator if exitCodeDeployPriceCalculator != 0: @@ -368,7 +375,7 @@ proc executeForgeContractDeployScripts*( let forgeCmdWakuRln = fmt"""cd {submodulePath} && 
{forgePath} script script/Deploy.s.sol --broadcast -vvvv --rpc-url http://localhost:8540 --tc DeployWakuRlnV2 --private-key {privateKey} && rm -rf broadcast/*/*/run-1*.json && rm -rf cache/*/*/run-1*.json""" - let (outputDeployWakuRln, exitCodeDeployWakuRln) = execCmdEx(forgeCmdWakuRln) + let (outputDeployWakuRln, exitCodeDeployWakuRln) = execForge(forgeCmdWakuRln) trace "Executed forge command to deploy WakuRlnV2 contract", output = outputDeployWakuRln if exitCodeDeployWakuRln != 0: @@ -388,7 +395,7 @@ proc executeForgeContractDeployScripts*( # Deploy Proxy contract let forgeCmdProxy = fmt"""cd {submodulePath} && {forgePath} script script/Deploy.s.sol --broadcast -vvvv --rpc-url http://localhost:8540 --tc DeployProxy --private-key {privateKey} && rm -rf broadcast/*/*/run-1*.json && rm -rf cache/*/*/run-1*.json""" - let (outputDeployProxy, exitCodeDeployProxy) = execCmdEx(forgeCmdProxy) + let (outputDeployProxy, exitCodeDeployProxy) = execForge(forgeCmdProxy) trace "Executed forge command to deploy proxy contract", output = outputDeployProxy if exitCodeDeployProxy != 0: error "Forge command to deploy Proxy failed", error = outputDeployProxy @@ -503,7 +510,7 @@ proc runAnvil*(port: int = 8540, chainId: string = "1234"): Process = "--chain-id", $chainId, ], - options = {poUsePath}, + options = {poUsePath, poStdErrToStdOut}, ) let anvilPID = runAnvil.processID @@ -516,7 +523,13 @@ proc runAnvil*(port: int = 8540, chainId: string = "1234"): Process = anvilStartLog.add(cmdline) if cmdline.contains("Listening on 127.0.0.1:" & $port): break + else: + error "Anvil daemon exited (closed output)", + pid = anvilPID, startLog = anvilStartLog + return except Exception, CatchableError: + warn "Anvil daemon stdout reading error; assuming it started OK", + pid = anvilPID, startLog = anvilStartLog, err = getCurrentExceptionMsg() break info "Anvil daemon is running and ready", pid = anvilPID, startLog = anvilStartLog return runAnvil From 7eb1fdb0ac3eed0b39c5d1456ba6d1b9881e7980 Mon 
Sep 17 00:00:00 2001 From: Darshan K <35736874+darshankabariya@users.noreply.github.com> Date: Mon, 1 Dec 2025 19:03:59 +0530 Subject: [PATCH 11/70] chore: new release process ( beta and full ) (#3647) --- .../ISSUE_TEMPLATE/prepare_beta_release.md | 56 ++++++++ .../ISSUE_TEMPLATE/prepare_full_release.md | 76 +++++++++++ .github/ISSUE_TEMPLATE/prepare_release.md | 72 ---------- docs/contributors/release-process.md | 124 ++++++++++++------ 4 files changed, 213 insertions(+), 115 deletions(-) create mode 100644 .github/ISSUE_TEMPLATE/prepare_beta_release.md create mode 100644 .github/ISSUE_TEMPLATE/prepare_full_release.md delete mode 100644 .github/ISSUE_TEMPLATE/prepare_release.md diff --git a/.github/ISSUE_TEMPLATE/prepare_beta_release.md b/.github/ISSUE_TEMPLATE/prepare_beta_release.md new file mode 100644 index 000000000..270f6a8e6 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/prepare_beta_release.md @@ -0,0 +1,56 @@ +--- +name: Prepare Beta Release +about: Execute tasks for the creation and publishing of a new beta release +title: 'Prepare beta release 0.0.0' +labels: beta-release +assignees: '' + +--- + + + +### Items to complete + +All items below are to be completed by the owner of the given release. + +- [ ] Create release branch with major and minor only ( e.g. release/v0.X ) if it doesn't exist. +- [ ] Assign release candidate tag to the release branch HEAD (e.g. `v0.X.0-beta-rc.0`, `v0.X.0-beta-rc.1`, ... `v0.X.0-beta-rc.N`). +- [ ] Generate and edit release notes in CHANGELOG.md. + +- [ ] **Waku test and fleets validation** + - [ ] Ensure all the unit tests (specifically js-waku tests) are green against the release candidate. + - [ ] Deploy the release candidate to `waku.test` only through [deploy-waku-test job](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-test/) and wait for it to finish (Jenkins access required; ask the infra team if you don't have it). 
+ - After completion, disable [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to master. + - Verify the deployed version at https://fleets.waku.org/. + - Confirm the container image exists on [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab). + - [ ] Analyze Kibana logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test`. + - Most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")`. + - [ ] Re-enable the `waku.test` fleet to resume auto-deployment of the latest `master` commit. + +- [ ] **Proceed with release** + + - [ ] Assign a final release tag (`v0.X.0-beta`) to the same commit that contains the validated release-candidate tag (e.g. `v0.X.0-beta-rc.N`). + - [ ] Update [nwaku-compose](https://github.com/waku-org/nwaku-compose) and [waku-simulator](https://github.com/waku-org/waku-simulator) according to the new release. + - [ ] Bump the nwaku dependency in [waku-rust-bindings](https://github.com/waku-org/waku-rust-bindings) and make sure all examples and tests work. + - [ ] Bump the nwaku dependency in [waku-go-bindings](https://github.com/waku-org/waku-go-bindings) and make sure all tests work. + - [ ] Create a GitHub release (https://github.com/waku-org/nwaku/releases). + - [ ] Submit a PR to merge the release branch back to `master`. Make sure you use the option "Merge pull request (Create a merge commit)" to perform the merge. Ping a repo admin if this option is not available. + +- [ ] **Promote release to fleets** + - [ ] Ask the PM lead to announce the release. + - [ ] Update the infra config with any deprecated arguments or changed options. + - [ ] Update `waku.sandbox` with [this deployment job](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-sandbox/).
+ +### Links + +- [Release process](https://github.com/waku-org/nwaku/blob/master/docs/contributors/release-process.md) +- [Release notes](https://github.com/waku-org/nwaku/blob/master/CHANGELOG.md) +- [Fleet ownership](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64) +- [Infra-nim-waku](https://github.com/status-im/infra-nim-waku) +- [Jenkins](https://ci.infra.status.im/job/nim-waku/) +- [Fleets](https://fleets.waku.org/) +- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab) diff --git a/.github/ISSUE_TEMPLATE/prepare_full_release.md b/.github/ISSUE_TEMPLATE/prepare_full_release.md new file mode 100644 index 000000000..18c668d16 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/prepare_full_release.md @@ -0,0 +1,76 @@ +--- +name: Prepare Full Release +about: Execute tasks for the creation and publishing of a new full release +title: 'Prepare full release 0.0.0' +labels: full-release +assignees: '' + +--- + + + +### Items to complete + +All items below are to be completed by the owner of the given release. + +- [ ] Create release branch with major and minor only ( e.g. release/v0.X ) if it doesn't exist. +- [ ] Assign release candidate tag to the release branch HEAD (e.g. `v0.X.0-rc.0`, `v0.X.0-rc.1`, ... `v0.X.0-rc.N`). +- [ ] Generate and edit release notes in CHANGELOG.md. + +- [ ] **Validation of release candidate** + + - [ ] **Automated testing** + - [ ] Ensure all the unit tests (specifically js-waku tests) are green against the release candidate. + - [ ] Ask Vac-QA and Vac-DST to perform the available tests against the release candidate. + - [ ] Vac-DST (an additional report is needed; see [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f)) + + - [ ] **Waku fleet testing** + - [ ] Deploy the release candidate to `waku.test` and `waku.sandbox` fleets. 
+ - Start the [deployment job](https://ci.infra.status.im/job/nim-waku/) for both fleets and wait for it to finish (Jenkins access required; ask the infra team if you don't have it). + - After completion, disable [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to `master`. + - Verify the deployed version at https://fleets.waku.org/. + - Confirm the container image exists on [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab). + - [ ] Search _Kibana_ logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test` and `waku.sandbox`. + - Most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")` OR `(fleet: "waku.sandbox" AND message: "SIGSEGV")`. + - [ ] Re-enable the `waku.test` fleet to resume auto-deployment of the latest `master` commit. + +- [ ] **Status fleet testing** + - [ ] Deploy the release candidate to `status.staging` + - [ ] Perform a [sanity check](https://www.notion.so/How-to-test-Nwaku-on-Status-12c6e4b9bf06420ca868bd199129b425) and log the results as comments in this issue. + - [ ] Connect 2 instances to the `status.staging` fleet, one in relay mode, the other as a light client. + - 1:1 chats with each other + - Send and receive messages in a community + - Close one instance, send messages with the second instance, then reopen the first instance and confirm that messages sent while offline are retrieved from store + - [ ] Perform checks based on _end user impact_ + - [ ] Inform other (Waku and Status) CCs to point their instances to `status.staging` for a few days. Ping Status colleagues on their Discord server or in the [Status community](https://status.app/c/G3kAAMSQtb05kog3aGbr3kiaxN4tF5xy4BAGEkkLwILk2z3GcoYlm5hSJXGn7J3laft-tnTwDWmYJ18dP_3bgX96dqr_8E3qKAvxDf3NrrCMUBp4R9EYkQez9XSM4486mXoC3mIln2zc-TNdvjdfL9eHVZ-mGgs=#zQ3shZeEJqTC1xhGUjxuS4rtHSrhJ8vUYp64v6qWkLpvdy9L9) (this is not a blocking point).
+ - [ ] Ask Status-QA to perform sanity checks (as described above) and checks based on _end user impact_; specify the version being tested. + - [ ] Ask Status-QA or infra to run the automated Status e2e tests against `status.staging`. + - [ ] Get other CCs' sign-off: they should comment on this PR, e.g., "Used the app for a week, no problem." If problems are reported, resolve them and create a new RC. + - [ ] **Get Status-QA sign-off**, ensuring that the `status.test` update will not disturb ongoing activities. + +- [ ] **Proceed with release** + + - [ ] Assign a final release tag (`v0.X.0`) to the same commit that contains the validated release-candidate tag (e.g. `v0.X.0-rc.N`). + - [ ] Update [nwaku-compose](https://github.com/waku-org/nwaku-compose) and [waku-simulator](https://github.com/waku-org/waku-simulator) according to the new release. + - [ ] Bump the nwaku dependency in [waku-rust-bindings](https://github.com/waku-org/waku-rust-bindings) and make sure all examples and tests work. + - [ ] Bump the nwaku dependency in [waku-go-bindings](https://github.com/waku-org/waku-go-bindings) and make sure all tests work. + - [ ] Create a GitHub release (https://github.com/waku-org/nwaku/releases). + - [ ] Submit a PR to merge the release branch back to `master`. Make sure you use the option "Merge pull request (Create a merge commit)" to perform the merge. Ping a repo admin if this option is not available. + +- [ ] **Promote release to fleets** + - [ ] Ask the PM lead to announce the release. + - [ ] Update the infra config with any deprecated arguments or changed options.
+ +### Links + +- [Release process](https://github.com/waku-org/nwaku/blob/master/docs/contributors/release-process.md) +- [Release notes](https://github.com/waku-org/nwaku/blob/master/CHANGELOG.md) +- [Fleet ownership](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64) +- [Infra-nim-waku](https://github.com/status-im/infra-nim-waku) +- [Jenkins](https://ci.infra.status.im/job/nim-waku/) +- [Fleets](https://fleets.waku.org/) +- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab) diff --git a/.github/ISSUE_TEMPLATE/prepare_release.md b/.github/ISSUE_TEMPLATE/prepare_release.md deleted file mode 100644 index 9553d5685..000000000 --- a/.github/ISSUE_TEMPLATE/prepare_release.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -name: Prepare release -about: Execute tasks for the creation and publishing of a new release -title: 'Prepare release 0.0.0' -labels: release -assignees: '' - ---- - - - -### Items to complete - -All items below are to be completed by the owner of the given release. - -- [ ] Create release branch -- [ ] Assign release candidate tag to the release branch HEAD. e.g. v0.30.0-rc.0 -- [ ] Generate and edit releases notes in CHANGELOG.md -- [ ] Review possible update of [config-options](https://github.com/waku-org/docs.waku.org/blob/develop/docs/guides/nwaku/config-options.md) -- [ ] _End user impact_: Summarize impact of changes on Status end users (can be a comment in this issue). -- [ ] **Validate release candidate** - - [ ] Bump nwaku dependency in [waku-rust-bindings](https://github.com/waku-org/waku-rust-bindings) and make sure all examples and tests work - -- [ ] Automated testing - - [ ] Ensures js-waku tests are green against release candidate - - [ ] Ask Vac-QA and Vac-DST to perform available tests against release candidate - - [ ] Vac-QA - - [ ] Vac-DST (we need additional report. 
see [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f)) - - - [ ] **On Waku fleets** - - [ ] Lock `waku.test` fleet to release candidate version - - [ ] Continuously stress `waku.test` fleet for a week (e.g. from `wakudev`) - - [ ] Search _Kibana_ logs from the previous month (since last release was deployed), for possible crashes or errors in `waku.test` and `waku.sandbox`. - - Most relevant logs are `(fleet: "waku.test" OR fleet: "waku.sandbox") AND message: "SIGSEGV"` - - [ ] Run release candidate with `waku-simulator`, ensure that nodes connected to each other - - [ ] Unlock `waku.test` to resume auto-deployment of latest `master` commit - - - [ ] **On Status fleet** - - [ ] Deploy release candidate to `status.staging` - - [ ] Perform [sanity check](https://www.notion.so/How-to-test-Nwaku-on-Status-12c6e4b9bf06420ca868bd199129b425) and log results as comments in this issue. - - [ ] Connect 2 instances to `status.staging` fleet, one in relay mode, the other one in light client. - - [ ] 1:1 Chats with each other - - [ ] Send and receive messages in a community - - [ ] Close one instance, send messages with second instance, reopen first instance and confirm messages sent while offline are retrieved from store - - [ ] Perform checks based _end user impact_ - - [ ] Inform other (Waku and Status) CCs to point their instance to `status.staging` for a few days. Ping Status colleagues from their Discord server or [Status community](https://status.app/c/G3kAAMSQtb05kog3aGbr3kiaxN4tF5xy4BAGEkkLwILk2z3GcoYlm5hSJXGn7J3laft-tnTwDWmYJ18dP_3bgX96dqr_8E3qKAvxDf3NrrCMUBp4R9EYkQez9XSM4486mXoC3mIln2zc-TNdvjdfL9eHVZ-mGgs=#zQ3shZeEJqTC1xhGUjxuS4rtHSrhJ8vUYp64v6qWkLpvdy9L9) (not blocking point.) 
- - [ ] Ask Status-QA to perform sanity checks (as described above) + checks based on _end user impact_; do specify the version being tested - - [ ] Ask Status-QA or infra to run the automated Status e2e tests against `status.staging` - - [ ] Get other CCs sign-off: they comment on this PR "used app for a week, no problem", or problem reported, resolved and new RC - - [ ] **Get Status-QA sign-off**. Ensuring that `status.test` update will not disturb ongoing activities. - -- [ ] **Proceed with release** - - - [ ] Assign a release tag to the same commit that contains the validated release-candidate tag - - [ ] Create GitHub release - - [ ] Deploy the release to DockerHub - - [ ] Announce the release - -- [ ] **Promote release to fleets**. - - [ ] Update infra config with any deprecated arguments or changed options - - [ ] [Deploy final release to `waku.sandbox` fleet](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-sandbox) - - [ ] [Deploy final release to `status.staging` fleet](https://ci.infra.status.im/job/nim-waku/job/deploy-shards-staging/) - - [ ] [Deploy final release to `status.prod` fleet](https://ci.infra.status.im/job/nim-waku/job/deploy-shards-test/) - -- [ ] **Post release** - - [ ] Submit a PR from the release branch to master. Important to commit the PR with "create a merge commit" option. - - [ ] Update waku-org/nwaku-compose with the new release version. - - [ ] Update version in js-waku repo. [update only this](https://github.com/waku-org/js-waku/blob/7c0ce7b2eca31cab837da0251e1e4255151be2f7/.github/workflows/ci.yml#L135) by submitting a PR. 
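All three release templates above key on a single tag-naming scheme: release candidates `v0.X.0-rc.N` (or `v0.X.0-beta-rc.N` for beta releases) and final tags `v0.X.0` / `v0.X.0-beta`. As a minimal sketch of that convention — a hypothetical helper for illustration, not part of the nwaku repository:

```python
import re

# Hypothetical validator for the tag scheme used by the release templates:
# final tags v0.X.0 / v0.X.0-beta, release candidates v0.X.0-rc.N /
# v0.X.0-beta-rc.N. Illustrative only; not part of the nwaku repository.
TAG_RE = re.compile(r"^v\d+\.\d+\.\d+(-beta)?(-rc\.\d+)?$")

def is_release_candidate(tag: str) -> bool:
    """True for RC tags such as v0.37.0-rc.1 or v0.37.0-beta-rc.0."""
    m = TAG_RE.match(tag)
    return bool(m and m.group(2))

def is_final_release(tag: str) -> bool:
    """True for final tags such as v0.37.0 or v0.37.0-beta."""
    m = TAG_RE.match(tag)
    return bool(m and not m.group(2))
```

A pre-push check along these lines would catch malformed tags (e.g. a missing `-rc.` separator) before they trigger the pre-release workflow.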
diff --git a/docs/contributors/release-process.md b/docs/contributors/release-process.md index c0fb12d1c..bde63aa6f 100644 --- a/docs/contributors/release-process.md +++ b/docs/contributors/release-process.md @@ -6,44 +6,52 @@ For more context, see https://trunkbaseddevelopment.com/branch-for-release/ ## How to do releases -### Before release +### Prerequisites + +- All issues under the corresponding release [milestone](https://github.com/waku-org/nwaku/milestones) have been closed or, after consultation, deferred to the next release. +- All submodules are up to date. + > Updating submodules requires a PR (and very often several "fixes" to maintain compatibility with the changes in submodules). That PR process must be done and merged a couple of days before the release. -Ensure all items in this list are ticked: -- [ ] All issues under the corresponding release [milestone](https://github.com/waku-org/nwaku/milestones) has been closed or, after consultation, deferred to a next release. -- [ ] All submodules are up to date. - > **IMPORTANT:** Updating submodules requires a PR (and very often several "fixes" to maintain compatibility with the changes in submodules). That PR process must be done and merged a couple of days before the release. > In case the submodules update has a low effort and/or risk for the release, follow the ["Update submodules"](./git-submodules.md) instructions. - > If the effort or risk is too high, consider postponing the submodules upgrade for the subsequent release or delaying the current release until the submodules updates are included in the release candidate. -- [ ] The [js-waku CI tests](https://github.com/waku-org/js-waku/actions/workflows/ci.yml) pass against the release candidate (i.e. nwaku latest `master`). - > **NOTE:** This serves as a basic regression test against typical clients of nwaku. - > The specific job that needs to pass is named `node_with_nwaku_master`. 
-### Performing the release + > If the effort or risk is too high, consider postponing the submodules upgrade for the subsequent release or delaying the current release until the submodules updates are included in the release candidate. + +### Release types + +- **Full release**: follow the entire [Release process](#release-process--step-by-step). + +- **Beta release**: skip just `6a` and `6c` steps from [Release process](#release-process--step-by-step). + +- Choose the appropriate release process based on the release type: + - [Full Release](../../.github/ISSUE_TEMPLATE/prepare_full_release.md) + - [Beta Release](../../.github/ISSUE_TEMPLATE/prepare_beta_release.md) + +### Release process ( step by step ) 1. Checkout a release branch from master ``` - git checkout -b release/v0.1.0 + git checkout -b release/v0.X.0 ``` -1. Update `CHANGELOG.md` and ensure it is up to date. Use the helper Make target to get PR based release-notes/changelog update. +2. Update `CHANGELOG.md` and ensure it is up to date. Use the helper Make target to get PR based release-notes/changelog update. ``` make release-notes ``` -1. Create a release-candidate tag with the same name as release and `-rc.N` suffix a few days before the official release and push it +3. Create a release-candidate tag with the same name as release and `-rc.N` suffix a few days before the official release and push it ``` - git tag -as v0.1.0-rc.0 -m "Initial release." - git push origin v0.1.0-rc.0 + git tag -as v0.X.0-rc.0 -m "Initial release." + git push origin v0.X.0-rc.0 ``` - This will trigger a [workflow](../../.github/workflows/pre-release.yml) which will build RC artifacts and create and publish a Github release + This will trigger a [workflow](../../.github/workflows/pre-release.yml) which will build RC artifacts and create and publish a GitHub release -1. Open a PR from the release branch for others to review the included changes and the release-notes +4. 
Open a PR from the release branch for others to review the included changes and the release-notes -1. In case additional changes are needed, create a new RC tag +5. In case additional changes are needed, create a new RC tag Make sure the new tag is associated with the CHANGELOG update. @@ -52,25 +60,57 @@ Ensure all items in this list are ticked: # Make changes, rebase and create new tag # Squash to one commit and make a nice commit message git rebase -i origin/master - git tag -as v0.1.0-rc.1 -m "Initial release." - git push origin v0.1.0-rc.1 + git tag -as v0.X.0-rc.1 -m "Initial release." + git push origin v0.X.0-rc.1 ``` -1. Validate the release. For the release validation process, please refer to the following [guide](https://www.notion.so/Release-Process-61234f335b904cd0943a5033ed8f42b4#47af557e7f9744c68fdbe5240bf93ca9) + Similarly, use `v0.X.0-rc.2`, `v0.X.0-rc.3`, etc. for additional RC tags. -1. Once the release-candidate has been validated, create a final release tag and push it. -We also need to merge release branch back to master as a final step. +6. **Validation of release candidate** + + 6a. **Automated testing** + - Ensure all the unit tests (specifically js-waku tests) are green against the release candidate. + - Ask Vac-QA and Vac-DST to run their available tests against the release candidate; share all release candidates with both teams. + + > We need an additional report like [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f) specifically from the DST team. + + 6b. **Waku fleet testing** + - Start the [deployment job](https://ci.infra.status.im/job/nim-waku/) for `waku.sandbox` and `waku.test`, and wait for it to finish. If it fails, debug it before proceeding. + - After completion, disable the [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to `master`. + - Verify at https://fleets.waku.org/ that the fleet is locked to the release candidate version.
+ - Check if the image is created at [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab). + - Search _Kibana_ logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test` and `waku.sandbox`. + - Most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")` OR `(fleet: "waku.sandbox" AND message: "SIGSEGV")`. + - Re-enable the `waku.test` fleet to resume auto-deployment of the latest `master` commit. + + 6c. **Status fleet testing** + - Deploy the release candidate to `status.staging` + - Perform a [sanity check](https://www.notion.so/How-to-test-Nwaku-on-Status-12c6e4b9bf06420ca868bd199129b425) and log the results as comments in the release tracking issue. + - Connect 2 instances to the `status.staging` fleet, one in relay mode, the other as a light client. + - 1:1 chats with each other + - Send and receive messages in a community + - Close one instance, send messages with the second instance, then reopen the first instance and confirm that messages sent while offline are retrieved from store + - Perform checks based on _end user impact_. + - Inform other (Waku and Status) CCs to point their instances to `status.staging` for a few days. Ping Status colleagues on their Discord server or in the [Status community](https://status.app) (not a blocking point). + - Ask Status-QA to perform sanity checks (as described above) and checks based on _end user impact_; specify the version being tested. + - Ask Status-QA or infra to run the automated Status e2e tests against `status.staging`. + - Get other CCs' sign-off: they should comment on this PR, e.g., "Used the app for a week, no problem." If problems are reported, resolve them and create a new RC. + - **Get Status-QA sign-off**, ensuring that the `status.test` update will not disturb ongoing activities. + +7. Once the release-candidate has been validated, create a final release tag and push it. +We also need to merge the release branch back into master as a final step.
``` - git checkout release/v0.1.0 - git tag -as v0.1.0 -m "Initial release." - git push origin v0.1.0 + git checkout release/v0.X.0 + git tag -as v0.X.0 -m "final release." (use v0.X.0-beta as the tag if you are creating a beta release) + git push origin v0.X.0 git switch master git pull - git merge release/v0.1.0 + git merge release/v0.X.0 ``` +8. Update `waku-rust-bindings`, `waku-simulator` and `nwaku-compose` to use the new release. -1. Create a [Github release](https://github.com/waku-org/nwaku/releases) from the release tag. +9. Create a [GitHub release](https://github.com/waku-org/nwaku/releases) from the release tag. * Add binaries produced by the ["Upload Release Asset"](https://github.com/waku-org/nwaku/actions/workflows/release-assets.yml) workflow. Where possible, test the binaries before uploading to the release. @@ -80,22 +120,10 @@ We also need to merge release branch back to master as a final step. 2. Deploy the release image to [Dockerhub](https://hub.docker.com/r/wakuorg/nwaku) by triggering [the manual Jenkins deployment job](https://ci.infra.status.im/job/nim-waku/job/docker-manual/). > Ensure the following build parameters are set: > - `MAKE_TARGET`: `wakunode2` - > - `IMAGE_TAG`: the release tag (e.g. `v0.16.0`) + > - `IMAGE_TAG`: the release tag (e.g. `v0.36.0`) > - `IMAGE_NAME`: `wakuorg/nwaku` > - `NIMFLAGS`: `--colors:off -d:disableMarchNative -d:chronicles_colors:none -d:postgres` - > - `GIT_REF` the release tag (e.g. `v0.16.0`) -3. Update the default nwaku image in [nwaku-compose](https://github.com/waku-org/nwaku-compose/blob/master/docker-compose.yml) -4. Deploy the release to appropriate fleets: - - Inform clients - > **NOTE:** known clients are currently using some version of js-waku, go-waku, nwaku or waku-rs. - > Clients are reachable via the corresponding channels on the Vac Discord server. - > It should be enough to inform clients on the `#nwaku` and `#announce` channels on Discord. 
- > Informal conversations with specific repo maintainers are often part of this process.
-  - Check if nwaku configuration parameters changed. If so [update fleet configuration](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64) in [infra-nim-waku](https://github.com/status-im/infra-nim-waku)
-  - Deploy release to the `waku.sandbox` fleet from [Jenkins](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-sandbox/).
-  - Ensure that nodes successfully start up and monitor health using [Grafana](https://grafana.infra.status.im/d/qrp_ZCTGz/nim-waku-v2?orgId=1) and [Kibana](https://kibana.infra.status.im/goto/a7728e70-eb26-11ec-81d1-210eb3022c76).
-  - If necessary, revert by deploying the previous release. Download logs and open a bug report issue.
-5. Submit a PR to merge the release branch back to `master`. Make sure you use the option `Merge pull request (Create a merge commit)` to perform such merge.
+   > - `GIT_REF`: the release tag (e.g. `v0.36.0`)

### Performing a patch release

@@ -116,4 +144,14 @@ We also need to merge release branch back to master as a final step.

4. Once the release-candidate has been validated and changelog PR got merged,
cherry-pick the changelog update from master to the release branch.
Create a final release tag and push it.

-5. Create a [Github release](https://github.com/waku-org/nwaku/releases) from the release tag and follow the same post-release process as usual.
+5. Create a [GitHub release](https://github.com/waku-org/nwaku/releases) from the release tag and follow the same post-release process as usual.
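The RC tagging flow described above can be rehearsed safely before touching the real remote. The following is a minimal dry-run sketch in a throwaway repository (hypothetical identity and tag name; the real flow uses a *signed* annotated tag via `git tag -as`, which is omitted here for portability):

```shell
# Dry-run of the RC tagging flow in a throwaway repo (no remote involved).
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.name=ci -c user.email=ci@example.org commit -q --allow-empty -m "init"
# The real release flow uses: git tag -as v0.X.0-rc.1 -m "Initial release."
git -c user.name=ci -c user.email=ci@example.org tag -a v0.37.0-rc.1 -m "Release candidate."
tag=$(git tag --list 'v0.37.0-rc*')
echo "created tag: $tag"
```

Subsequent RC iterations (`v0.X.0-rc.2`, `v0.X.0-rc.3`, …) follow the same pattern after rebasing the release branch.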
+ +### Links + +- [Release process](https://github.com/waku-org/nwaku/blob/master/docs/contributors/release-process.md) +- [Release notes](https://github.com/waku-org/nwaku/blob/master/CHANGELOG.md) +- [Fleet ownership](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64) +- [Infra-nim-waku](https://github.com/status-im/infra-nim-waku) +- [Jenkins](https://ci.infra.status.im/job/nim-waku/) +- [Fleets](https://fleets.waku.org/) +- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab) \ No newline at end of file From ae74b9018a248ae4f641205d60e7122c024d47f6 Mon Sep 17 00:00:00 2001 From: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com> Date: Tue, 2 Dec 2025 00:24:46 +0100 Subject: [PATCH 12/70] chore: Introduce EventBroker, RequestBroker and MultiRequestBroker (#3644) * Introduce EventBroker and RequestBroker as decoupling helpers that represent reactive (event-driven) and proactive (request/response) patterns without tight coupling between modules * Address copilot observation. error log if failed listener call exception, handling listener overuse - run out of IDs * Address review observations: no exception to leak, listeners must raise no exception, adding listener now reports error with Result. * Added MultiRequestBroker utility to collect results from many providers * Support an arbitrary number of arguments for RequestBroker's request/provider signature * MultiRequestBroker allows provider procs to throw exceptions, which will be handled during request processing. 
* MultiRequestBroker supports one zero arg signature and/or multi arg signature

* test no exception leaks from RequestBroker and MultiRequestBroker

* Embed MultiRequestBroker tests into common

* EventBroker: removed all ...Broker typed public procs to simplify EventBroker interface, forget is renamed to dropListener

* Make Request's broker type private

* MultiRequestBroker: Use explicit returns in generated procs

* Updated descriptions of EventBroker and RequestBroker, updated RequestBroker.setProvider, returns error if already set.

* Better description for MultiRequestBroker and its usage

* Add EventBroker support for ref objects, fix emit variant with event object ctor

* Add RequestBroker support for ref objects

* Add MultiRequestBroker support for ref objects

* Move brokers under waku/common

---
 tests/common/test_all.nim                   |   5 +-
 tests/common/test_event_broker.nim          | 125 +++++
 tests/common/test_multi_request_broker.nim  | 234 ++++++++
 tests/common/test_request_broker.nim        | 198 +++++++
 waku/common/broker/event_broker.nim         | 308 +++++++++++
 waku/common/broker/helper/broker_utils.nim  |  43 ++
 waku/common/broker/multi_request_broker.nim | 583 ++++++++++++++++++++
 waku/common/broker/request_broker.nim       | 438 +++++++++++++++
 8 files changed, 1933 insertions(+), 1 deletion(-)
 create mode 100644 tests/common/test_event_broker.nim
 create mode 100644 tests/common/test_multi_request_broker.nim
 create mode 100644 tests/common/test_request_broker.nim
 create mode 100644 waku/common/broker/event_broker.nim
 create mode 100644 waku/common/broker/helper/broker_utils.nim
 create mode 100644 waku/common/broker/multi_request_broker.nim
 create mode 100644 waku/common/broker/request_broker.nim

diff --git a/tests/common/test_all.nim b/tests/common/test_all.nim
index 5b4515093..7495c7c9e 100644
--- a/tests/common/test_all.nim
+++ b/tests/common/test_all.nim
@@ -9,4 +9,7 @@ import
   ./test_tokenbucket,
   ./test_requestratelimiter,
   ./test_ratelimit_setting,
-  ./test_timed_map
+  ./test_timed_map,
+  
./test_event_broker, + ./test_request_broker, + ./test_multi_request_broker diff --git a/tests/common/test_event_broker.nim b/tests/common/test_event_broker.nim new file mode 100644 index 000000000..cead1277f --- /dev/null +++ b/tests/common/test_event_broker.nim @@ -0,0 +1,125 @@ +import chronos +import std/sequtils +import testutils/unittests + +import waku/common/broker/event_broker + +EventBroker: + type SampleEvent = object + value*: int + label*: string + +EventBroker: + type BinaryEvent = object + flag*: bool + +EventBroker: + type RefEvent = ref object + payload*: seq[int] + +template waitForListeners() = + waitFor sleepAsync(1.milliseconds) + +suite "EventBroker": + test "delivers events to all listeners": + var seen: seq[(int, string)] = @[] + + discard SampleEvent.listen( + proc(evt: SampleEvent): Future[void] {.async: (raises: []).} = + seen.add((evt.value, evt.label)) + ) + + discard SampleEvent.listen( + proc(evt: SampleEvent): Future[void] {.async: (raises: []).} = + seen.add((evt.value * 2, evt.label & "!")) + ) + + let evt = SampleEvent(value: 5, label: "hi") + SampleEvent.emit(evt) + waitForListeners() + + check seen.len == 2 + check seen.anyIt(it == (5, "hi")) + check seen.anyIt(it == (10, "hi!")) + + SampleEvent.dropAllListeners() + + test "forget removes a single listener": + var counter = 0 + + let handleA = SampleEvent.listen( + proc(evt: SampleEvent): Future[void] {.async: (raises: []).} = + inc counter + ) + + let handleB = SampleEvent.listen( + proc(evt: SampleEvent): Future[void] {.async: (raises: []).} = + inc(counter, 2) + ) + + SampleEvent.dropListener(handleA.get()) + let eventVal = SampleEvent(value: 1, label: "one") + SampleEvent.emit(eventVal) + waitForListeners() + check counter == 2 + + SampleEvent.dropAllListeners() + + test "forgetAll clears every listener": + var triggered = false + + let handle1 = SampleEvent.listen( + proc(evt: SampleEvent): Future[void] {.async: (raises: []).} = + triggered = true + ) + let handle2 = 
SampleEvent.listen( + proc(evt: SampleEvent): Future[void] {.async: (raises: []).} = + discard + ) + + SampleEvent.dropAllListeners() + SampleEvent.emit(42, "noop") + SampleEvent.emit(label = "noop", value = 42) + waitForListeners() + check not triggered + + let freshHandle = SampleEvent.listen( + proc(evt: SampleEvent): Future[void] {.async: (raises: []).} = + discard + ) + check freshHandle.get().id > 0'u64 + SampleEvent.dropListener(freshHandle.get()) + + test "broker helpers operate via typedesc": + var toggles: seq[bool] = @[] + + let handle = BinaryEvent.listen( + proc(evt: BinaryEvent): Future[void] {.async: (raises: []).} = + toggles.add(evt.flag) + ) + + BinaryEvent(flag: true).emit() + waitForListeners() + let binaryEvent = BinaryEvent(flag: false) + BinaryEvent.emit(binaryEvent) + waitForListeners() + + check toggles == @[true, false] + BinaryEvent.dropAllListeners() + + test "ref typed event": + var counter: int = 0 + + let handle = RefEvent.listen( + proc(evt: RefEvent): Future[void] {.async: (raises: []).} = + for n in evt.payload: + counter += n + ) + + RefEvent(payload: @[1, 2, 3]).emit() + waitForListeners() + RefEvent.emit(payload = @[4, 5, 6]) + waitForListeners() + + check counter == 21 # 1+2+3 + 4+5+6 + RefEvent.dropAllListeners() diff --git a/tests/common/test_multi_request_broker.nim b/tests/common/test_multi_request_broker.nim new file mode 100644 index 000000000..3bf10a54d --- /dev/null +++ b/tests/common/test_multi_request_broker.nim @@ -0,0 +1,234 @@ +{.used.} + +import testutils/unittests +import chronos +import std/sequtils +import std/strutils + +import waku/common/broker/multi_request_broker + +MultiRequestBroker: + type NoArgResponse = object + label*: string + + proc signatureFetch*(): Future[Result[NoArgResponse, string]] {.async.} + +MultiRequestBroker: + type ArgResponse = object + id*: string + + proc signatureFetch*( + suffix: string, numsuffix: int + ): Future[Result[ArgResponse, string]] {.async.} + +MultiRequestBroker: + 
type DualResponse = ref object + note*: string + suffix*: string + + proc signatureBase*(): Future[Result[DualResponse, string]] {.async.} + proc signatureWithInput*( + suffix: string + ): Future[Result[DualResponse, string]] {.async.} + +suite "MultiRequestBroker": + test "aggregates zero-argument providers": + discard NoArgResponse.setProvider( + proc(): Future[Result[NoArgResponse, string]] {.async.} = + ok(NoArgResponse(label: "one")) + ) + + discard NoArgResponse.setProvider( + proc(): Future[Result[NoArgResponse, string]] {.async.} = + discard catch: + await sleepAsync(1.milliseconds) + ok(NoArgResponse(label: "two")) + ) + + let responses = waitFor NoArgResponse.request() + check responses.get().len == 2 + check responses.get().anyIt(it.label == "one") + check responses.get().anyIt(it.label == "two") + + NoArgResponse.clearProviders() + + test "aggregates argument providers": + discard ArgResponse.setProvider( + proc(suffix: string, num: int): Future[Result[ArgResponse, string]] {.async.} = + ok(ArgResponse(id: suffix & "-a-" & $num)) + ) + + discard ArgResponse.setProvider( + proc(suffix: string, num: int): Future[Result[ArgResponse, string]] {.async.} = + ok(ArgResponse(id: suffix & "-b-" & $num)) + ) + + let keyed = waitFor ArgResponse.request("topic", 1) + check keyed.get().len == 2 + check keyed.get().anyIt(it.id == "topic-a-1") + check keyed.get().anyIt(it.id == "topic-b-1") + + ArgResponse.clearProviders() + + test "clearProviders resets both provider lists": + discard DualResponse.setProvider( + proc(): Future[Result[DualResponse, string]] {.async.} = + ok(DualResponse(note: "base", suffix: "")) + ) + + discard DualResponse.setProvider( + proc(suffix: string): Future[Result[DualResponse, string]] {.async.} = + ok(DualResponse(note: "base" & suffix, suffix: suffix)) + ) + + let noArgs = waitFor DualResponse.request() + check noArgs.get().len == 1 + + let param = waitFor DualResponse.request("-extra") + check param.get().len == 1 + check 
param.get()[0].suffix == "-extra" + + DualResponse.clearProviders() + + let emptyNoArgs = waitFor DualResponse.request() + check emptyNoArgs.get().len == 0 + + let emptyWithArgs = waitFor DualResponse.request("-extra") + check emptyWithArgs.get().len == 0 + + test "request returns empty seq when no providers registered": + let empty = waitFor NoArgResponse.request() + check empty.get().len == 0 + + test "failed providers will fail the request": + NoArgResponse.clearProviders() + discard NoArgResponse.setProvider( + proc(): Future[Result[NoArgResponse, string]] {.async.} = + err("boom") + ) + + discard NoArgResponse.setProvider( + proc(): Future[Result[NoArgResponse, string]] {.async.} = + ok(NoArgResponse(label: "survivor")) + ) + + let filtered = waitFor NoArgResponse.request() + check filtered.isErr() + + NoArgResponse.clearProviders() + + test "deduplicates identical zero-argument providers": + NoArgResponse.clearProviders() + var invocations = 0 + let sharedHandler = proc(): Future[Result[NoArgResponse, string]] {.async.} = + inc invocations + ok(NoArgResponse(label: "dup")) + + let first = NoArgResponse.setProvider(sharedHandler) + let second = NoArgResponse.setProvider(sharedHandler) + + check first.get().id == second.get().id + check first.get().kind == second.get().kind + + let dupResponses = waitFor NoArgResponse.request() + check dupResponses.get().len == 1 + check invocations == 1 + + NoArgResponse.clearProviders() + + test "removeProvider deletes registered handlers": + var removedCalled = false + var keptCalled = false + + let removable = NoArgResponse.setProvider( + proc(): Future[Result[NoArgResponse, string]] {.async.} = + removedCalled = true + ok(NoArgResponse(label: "removed")) + ) + + discard NoArgResponse.setProvider( + proc(): Future[Result[NoArgResponse, string]] {.async.} = + keptCalled = true + ok(NoArgResponse(label: "kept")) + ) + + NoArgResponse.removeProvider(removable.get()) + + let afterRemoval = (waitFor 
NoArgResponse.request()).valueOr: + assert false, "request failed" + @[] + check afterRemoval.len == 1 + check afterRemoval[0].label == "kept" + check not removedCalled + check keptCalled + + NoArgResponse.clearProviders() + + test "removeProvider works for argument signatures": + var invoked: seq[string] = @[] + + discard ArgResponse.setProvider( + proc(suffix: string, num: int): Future[Result[ArgResponse, string]] {.async.} = + invoked.add("first" & suffix) + ok(ArgResponse(id: suffix & "-one-" & $num)) + ) + + let handle = ArgResponse.setProvider( + proc(suffix: string, num: int): Future[Result[ArgResponse, string]] {.async.} = + invoked.add("second" & suffix) + ok(ArgResponse(id: suffix & "-two-" & $num)) + ) + + ArgResponse.removeProvider(handle.get()) + + let single = (waitFor ArgResponse.request("topic", 1)).valueOr: + assert false, "request failed" + @[] + check single.len == 1 + check single[0].id == "topic-one-1" + check invoked == @["firsttopic"] + + ArgResponse.clearProviders() + + test "catches exception from providers and report error": + let firstHandler = NoArgResponse.setProvider( + proc(): Future[Result[NoArgResponse, string]] {.async.} = + raise newException(ValueError, "first handler raised") + ok(NoArgResponse(label: "any")) + ) + + discard NoArgResponse.setProvider( + proc(): Future[Result[NoArgResponse, string]] {.async.} = + ok(NoArgResponse(label: "just ok")) + ) + + let afterException = waitFor NoArgResponse.request() + check afterException.isErr() + check afterException.error().contains("first handler raised") + + NoArgResponse.clearProviders() + + test "ref providers returning nil fail request": + DualResponse.clearProviders() + + discard DualResponse.setProvider( + proc(): Future[Result[DualResponse, string]] {.async.} = + let nilResponse: DualResponse = nil + ok(nilResponse) + ) + + let zeroArg = waitFor DualResponse.request() + check zeroArg.isErr() + + DualResponse.clearProviders() + + discard DualResponse.setProvider( + proc(suffix: 
string): Future[Result[DualResponse, string]] {.async.} = + let nilResponse: DualResponse = nil + ok(nilResponse) + ) + + let withInput = waitFor DualResponse.request("-extra") + check withInput.isErr() + + DualResponse.clearProviders() diff --git a/tests/common/test_request_broker.nim b/tests/common/test_request_broker.nim new file mode 100644 index 000000000..2ffd9cbf8 --- /dev/null +++ b/tests/common/test_request_broker.nim @@ -0,0 +1,198 @@ +{.used.} + +import testutils/unittests +import chronos +import std/strutils + +import waku/common/broker/request_broker + +RequestBroker: + type SimpleResponse = object + value*: string + + proc signatureFetch*(): Future[Result[SimpleResponse, string]] {.async.} + +RequestBroker: + type KeyedResponse = object + key*: string + payload*: string + + proc signatureFetchWithKey*( + key: string, subKey: int + ): Future[Result[KeyedResponse, string]] {.async.} + +RequestBroker: + type DualResponse = object + note*: string + count*: int + + proc signatureNoInput*(): Future[Result[DualResponse, string]] {.async.} + proc signatureWithInput*( + suffix: string + ): Future[Result[DualResponse, string]] {.async.} + +RequestBroker: + type ImplicitResponse = ref object + note*: string + +suite "RequestBroker macro": + test "serves zero-argument providers": + check SimpleResponse + .setProvider( + proc(): Future[Result[SimpleResponse, string]] {.async.} = + ok(SimpleResponse(value: "hi")) + ) + .isOk() + + let res = waitFor SimpleResponse.request() + check res.isOk() + check res.value.value == "hi" + + SimpleResponse.clearProvider() + + test "zero-argument request errors when unset": + let res = waitFor SimpleResponse.request() + check res.isErr + check res.error.contains("no zero-arg provider") + + test "serves input-based providers": + var seen: seq[string] = @[] + check KeyedResponse + .setProvider( + proc(key: string, subKey: int): Future[Result[KeyedResponse, string]] {.async.} = + seen.add(key) + ok(KeyedResponse(key: key, payload: 
key & "-payload+" & $subKey)) + ) + .isOk() + + let res = waitFor KeyedResponse.request("topic", 1) + check res.isOk() + check res.value.key == "topic" + check res.value.payload == "topic-payload+1" + check seen == @["topic"] + + KeyedResponse.clearProvider() + + test "catches provider exception": + check KeyedResponse + .setProvider( + proc(key: string, subKey: int): Future[Result[KeyedResponse, string]] {.async.} = + raise newException(ValueError, "simulated failure") + ok(KeyedResponse(key: key, payload: "")) + ) + .isOk() + + let res = waitFor KeyedResponse.request("neglected", 11) + check res.isErr() + check res.error.contains("simulated failure") + + KeyedResponse.clearProvider() + + test "input request errors when unset": + let res = waitFor KeyedResponse.request("foo", 2) + check res.isErr + check res.error.contains("input signature") + + test "supports both provider types simultaneously": + check DualResponse + .setProvider( + proc(): Future[Result[DualResponse, string]] {.async.} = + ok(DualResponse(note: "base", count: 1)) + ) + .isOk() + + check DualResponse + .setProvider( + proc(suffix: string): Future[Result[DualResponse, string]] {.async.} = + ok(DualResponse(note: "base" & suffix, count: suffix.len)) + ) + .isOk() + + let noInput = waitFor DualResponse.request() + check noInput.isOk + check noInput.value.note == "base" + + let withInput = waitFor DualResponse.request("-extra") + check withInput.isOk + check withInput.value.note == "base-extra" + check withInput.value.count == 6 + + DualResponse.clearProvider() + + test "clearProvider resets both entries": + check DualResponse + .setProvider( + proc(): Future[Result[DualResponse, string]] {.async.} = + ok(DualResponse(note: "temp", count: 0)) + ) + .isOk() + DualResponse.clearProvider() + + let res = waitFor DualResponse.request() + check res.isErr + + test "implicit zero-argument provider works by default": + check ImplicitResponse + .setProvider( + proc(): Future[Result[ImplicitResponse, string]] 
{.async.} = + ok(ImplicitResponse(note: "auto")) + ) + .isOk() + + let res = waitFor ImplicitResponse.request() + check res.isOk + + ImplicitResponse.clearProvider() + check res.value.note == "auto" + + test "implicit zero-argument request errors when unset": + let res = waitFor ImplicitResponse.request() + check res.isErr + check res.error.contains("no zero-arg provider") + + test "no provider override": + check DualResponse + .setProvider( + proc(): Future[Result[DualResponse, string]] {.async.} = + ok(DualResponse(note: "base", count: 1)) + ) + .isOk() + + check DualResponse + .setProvider( + proc(suffix: string): Future[Result[DualResponse, string]] {.async.} = + ok(DualResponse(note: "base" & suffix, count: suffix.len)) + ) + .isOk() + + let overrideProc = proc(): Future[Result[DualResponse, string]] {.async.} = + ok(DualResponse(note: "something else", count: 1)) + + check DualResponse.setProvider(overrideProc).isErr() + + let noInput = waitFor DualResponse.request() + check noInput.isOk + check noInput.value.note == "base" + + let stillResponse = waitFor DualResponse.request(" still works") + check stillResponse.isOk() + check stillResponse.value.note.contains("base still works") + + DualResponse.clearProvider() + + let noResponse = waitFor DualResponse.request() + check noResponse.isErr() + check noResponse.error.contains("no zero-arg provider") + + let noResponseArg = waitFor DualResponse.request("Should not work") + check noResponseArg.isErr() + check noResponseArg.error.contains("no provider") + + check DualResponse.setProvider(overrideProc).isOk() + + let nowSuccWithOverride = waitFor DualResponse.request() + check nowSuccWithOverride.isOk + check nowSuccWithOverride.value.note == "something else" + check nowSuccWithOverride.value.count == 1 + + DualResponse.clearProvider() diff --git a/waku/common/broker/event_broker.nim b/waku/common/broker/event_broker.nim new file mode 100644 index 000000000..05d7b50ab --- /dev/null +++ 
b/waku/common/broker/event_broker.nim
@@ -0,0 +1,308 @@
+## EventBroker
+## -------------------
+## EventBroker represents a reactive decoupling pattern that
+## allows event-driven development without the
+## need for direct dependencies between emitters and listeners.
+## Worth considering in single- or many-emitter to many-listener scenarios.
+##
+## Generates a standalone, type-safe event broker for the declared object type.
+## The macro exports the value type itself plus a broker companion that manages
+## listeners via thread-local storage.
+##
+## Usage:
+## Declare your desired event type inside an `EventBroker` macro, adding any number of fields:
+## ```nim
+## EventBroker:
+##   type TypeName = object
+##     field1*: FieldType
+##     field2*: AnotherFieldType
+## ```
+##
+## After this, you can register async listeners anywhere in your code with
+## `TypeName.listen(...)`, which returns a handle to the registered listener.
+## Listeners are async procs or lambdas that take a single argument of the event type.
+## Any number of listeners can be registered in different modules.
+##
+## Events can be emitted from anywhere, with no direct dependency on the listeners, by
+## calling `TypeName.emit(...)` with an instance of the event type.
+## This will asynchronously notify all registered listeners with the emitted event.
+##
+## Whenever you no longer need a listener (or your object instance that listens to the event goes out of scope),
+## you can remove it from the broker with the handle returned by `listen`.
+## This is done by calling `TypeName.dropListener(handle)`.
+## Alternatively, you can remove all registered listeners through `TypeName.dropAllListeners()`.
+## +## +## Example: +## ```nim +## EventBroker: +## type GreetingEvent = object +## text*: string +## +## let handle = GreetingEvent.listen( +## proc(evt: GreetingEvent): Future[void] {.async.} = +## echo evt.text +## ) +## GreetingEvent.emit(text= "hi") +## GreetingEvent.dropListener(handle) +## ``` + +import std/[macros, tables] +import chronos, chronicles, results +import ./helper/broker_utils + +export chronicles, results, chronos + +macro EventBroker*(body: untyped): untyped = + when defined(eventBrokerDebug): + echo body.treeRepr + var typeIdent: NimNode = nil + var objectDef: NimNode = nil + var fieldNames: seq[NimNode] = @[] + var fieldTypes: seq[NimNode] = @[] + var isRefObject = false + for stmt in body: + if stmt.kind == nnkTypeSection: + for def in stmt: + if def.kind != nnkTypeDef: + continue + let rhs = def[2] + var objectType: NimNode + case rhs.kind + of nnkObjectTy: + objectType = rhs + of nnkRefTy: + isRefObject = true + if rhs.len != 1 or rhs[0].kind != nnkObjectTy: + error("EventBroker ref object must wrap a concrete object definition", rhs) + objectType = rhs[0] + else: + continue + if not typeIdent.isNil(): + error("Only one object type may be declared inside EventBroker", def) + typeIdent = baseTypeIdent(def[0]) + let recList = objectType[2] + if recList.kind != nnkRecList: + error("EventBroker object must declare a standard field list", objectType) + var exportedRecList = newTree(nnkRecList) + for field in recList: + case field.kind + of nnkIdentDefs: + ensureFieldDef(field) + let fieldTypeNode = field[field.len - 2] + for i in 0 ..< field.len - 2: + let baseFieldIdent = baseTypeIdent(field[i]) + fieldNames.add(copyNimTree(baseFieldIdent)) + fieldTypes.add(copyNimTree(fieldTypeNode)) + var cloned = copyNimTree(field) + for i in 0 ..< cloned.len - 2: + cloned[i] = exportIdentNode(cloned[i]) + exportedRecList.add(cloned) + of nnkEmpty: + discard + else: + error( + "EventBroker object definition only supports simple field declarations", + 
field, + ) + let exportedObjectType = newTree( + nnkObjectTy, + copyNimTree(objectType[0]), + copyNimTree(objectType[1]), + exportedRecList, + ) + if isRefObject: + objectDef = newTree(nnkRefTy, exportedObjectType) + else: + objectDef = exportedObjectType + if typeIdent.isNil(): + error("EventBroker body must declare exactly one object type", body) + + let exportedTypeIdent = postfix(copyNimTree(typeIdent), "*") + let sanitized = sanitizeIdentName(typeIdent) + let typeNameLit = newLit($typeIdent) + let isRefObjectLit = newLit(isRefObject) + let handlerProcIdent = ident(sanitized & "ListenerProc") + let listenerHandleIdent = ident(sanitized & "Listener") + let brokerTypeIdent = ident(sanitized & "Broker") + let exportedHandlerProcIdent = postfix(copyNimTree(handlerProcIdent), "*") + let exportedListenerHandleIdent = postfix(copyNimTree(listenerHandleIdent), "*") + let exportedBrokerTypeIdent = postfix(copyNimTree(brokerTypeIdent), "*") + let accessProcIdent = ident("access" & sanitized & "Broker") + let globalVarIdent = ident("g" & sanitized & "Broker") + let listenImplIdent = ident("register" & sanitized & "Listener") + let dropListenerImplIdent = ident("drop" & sanitized & "Listener") + let dropAllListenersImplIdent = ident("dropAll" & sanitized & "Listeners") + let emitImplIdent = ident("emit" & sanitized & "Value") + let listenerTaskIdent = ident("notify" & sanitized & "Listener") + + result = newStmtList() + + result.add( + quote do: + type + `exportedTypeIdent` = `objectDef` + `exportedListenerHandleIdent` = object + id*: uint64 + + `exportedHandlerProcIdent` = + proc(event: `typeIdent`): Future[void] {.async: (raises: []), gcsafe.} + `exportedBrokerTypeIdent` = ref object + listeners: Table[uint64, `handlerProcIdent`] + nextId: uint64 + + ) + + result.add( + quote do: + var `globalVarIdent` {.threadvar.}: `brokerTypeIdent` + ) + + result.add( + quote do: + proc `accessProcIdent`(): `brokerTypeIdent` = + if `globalVarIdent`.isNil(): + new(`globalVarIdent`) + 
`globalVarIdent`.listeners = initTable[uint64, `handlerProcIdent`]() + `globalVarIdent` + + ) + + result.add( + quote do: + proc `listenImplIdent`( + handler: `handlerProcIdent` + ): Result[`listenerHandleIdent`, string] = + if handler.isNil(): + return err("Must provide a non-nil event handler") + var broker = `accessProcIdent`() + if broker.nextId == 0'u64: + broker.nextId = 1'u64 + if broker.nextId == high(uint64): + error "Cannot add more listeners: ID space exhausted", nextId = $broker.nextId + return err("Cannot add more listeners, listener ID space exhausted") + let newId = broker.nextId + inc broker.nextId + broker.listeners[newId] = handler + return ok(`listenerHandleIdent`(id: newId)) + + ) + + result.add( + quote do: + proc `dropListenerImplIdent`(handle: `listenerHandleIdent`) = + if handle.id == 0'u64: + return + var broker = `accessProcIdent`() + if broker.listeners.len == 0: + return + broker.listeners.del(handle.id) + + ) + + result.add( + quote do: + proc `dropAllListenersImplIdent`() = + var broker = `accessProcIdent`() + if broker.listeners.len > 0: + broker.listeners.clear() + + ) + + result.add( + quote do: + proc listen*( + _: typedesc[`typeIdent`], handler: `handlerProcIdent` + ): Result[`listenerHandleIdent`, string] = + return `listenImplIdent`(handler) + + ) + + result.add( + quote do: + proc dropListener*(_: typedesc[`typeIdent`], handle: `listenerHandleIdent`) = + `dropListenerImplIdent`(handle) + + proc dropAllListeners*(_: typedesc[`typeIdent`]) = + `dropAllListenersImplIdent`() + + ) + + result.add( + quote do: + proc `listenerTaskIdent`( + callback: `handlerProcIdent`, event: `typeIdent` + ) {.async: (raises: []), gcsafe.} = + if callback.isNil(): + return + try: + await callback(event) + except Exception: + error "Failed to execute event listener", error = getCurrentExceptionMsg() + + proc `emitImplIdent`( + event: `typeIdent` + ): Future[void] {.async: (raises: []), gcsafe.} = + when `isRefObjectLit`: + if event.isNil(): + error 
"Cannot emit uninitialized event object", eventType = `typeNameLit` + return + let broker = `accessProcIdent`() + if broker.listeners.len == 0: + # nothing to do as nobody is listening + return + var callbacks: seq[`handlerProcIdent`] = @[] + for cb in broker.listeners.values: + callbacks.add(cb) + for cb in callbacks: + asyncSpawn `listenerTaskIdent`(cb, event) + + proc emit*(event: `typeIdent`) = + asyncSpawn `emitImplIdent`(event) + + proc emit*(_: typedesc[`typeIdent`], event: `typeIdent`) = + asyncSpawn `emitImplIdent`(event) + + ) + + var emitCtorParams = newTree(nnkFormalParams, newEmptyNode()) + let typedescParamType = + newTree(nnkBracketExpr, ident("typedesc"), copyNimTree(typeIdent)) + emitCtorParams.add( + newTree(nnkIdentDefs, ident("_"), typedescParamType, newEmptyNode()) + ) + for i in 0 ..< fieldNames.len: + emitCtorParams.add( + newTree( + nnkIdentDefs, + copyNimTree(fieldNames[i]), + copyNimTree(fieldTypes[i]), + newEmptyNode(), + ) + ) + + var emitCtorExpr = newTree(nnkObjConstr, copyNimTree(typeIdent)) + for i in 0 ..< fieldNames.len: + emitCtorExpr.add( + newTree(nnkExprColonExpr, copyNimTree(fieldNames[i]), copyNimTree(fieldNames[i])) + ) + + let emitCtorCall = newCall(copyNimTree(emitImplIdent), emitCtorExpr) + let emitCtorBody = quote: + asyncSpawn `emitCtorCall` + + let typedescEmitProc = newTree( + nnkProcDef, + postfix(ident("emit"), "*"), + newEmptyNode(), + newEmptyNode(), + emitCtorParams, + newEmptyNode(), + newEmptyNode(), + emitCtorBody, + ) + + result.add(typedescEmitProc) + + when defined(eventBrokerDebug): + echo result.repr diff --git a/waku/common/broker/helper/broker_utils.nim b/waku/common/broker/helper/broker_utils.nim new file mode 100644 index 000000000..ea9f85750 --- /dev/null +++ b/waku/common/broker/helper/broker_utils.nim @@ -0,0 +1,43 @@ +import std/macros + +proc sanitizeIdentName*(node: NimNode): string = + var raw = $node + var sanitizedName = newStringOfCap(raw.len) + for ch in raw: + case ch + of 'A' .. 
'Z', 'a' .. 'z', '0' .. '9', '_':
+      sanitizedName.add(ch)
+    else:
+      sanitizedName.add('_')
+  sanitizedName
+
+proc ensureFieldDef*(node: NimNode) =
+  if node.kind != nnkIdentDefs or node.len < 3:
+    error("Expected field definition of the form `name: Type`", node)
+  let typeSlot = node.len - 2
+  if node[typeSlot].kind == nnkEmpty:
+    error("Field `" & $node[0] & "` must declare a type", node)
+
+proc exportIdentNode*(node: NimNode): NimNode =
+  case node.kind
+  of nnkIdent:
+    postfix(copyNimTree(node), "*")
+  of nnkPostfix:
+    node
+  else:
+    error("Unsupported identifier form in field definition", node)
+
+proc baseTypeIdent*(defName: NimNode): NimNode =
+  case defName.kind
+  of nnkIdent:
+    defName
+  of nnkAccQuoted:
+    if defName.len != 1:
+      error("Unsupported quoted identifier", defName)
+    defName[0]
+  of nnkPostfix:
+    baseTypeIdent(defName[1])
+  of nnkPragmaExpr:
+    baseTypeIdent(defName[0])
+  else:
+    error("Unsupported type name in broker definition", defName)
diff --git a/waku/common/broker/multi_request_broker.nim b/waku/common/broker/multi_request_broker.nim
new file mode 100644
index 000000000..7f4161f5a
--- /dev/null
+++ b/waku/common/broker/multi_request_broker.nim
@@ -0,0 +1,583 @@
+## MultiRequestBroker
+## --------------------
+## MultiRequestBroker implements a proactive decoupling pattern that
+## allows defining request-response style interactions between modules without
+## the need for direct dependencies between them.
+## It is worth considering for use cases where you need to collect data from multiple providers.
+##
+## Provides a declarative way to define an immutable value type together with a
+## thread-local broker that can register multiple asynchronous providers, dispatch
+## typed requests, and clear handlers. Unlike `RequestBroker`,
+## every call to `request` fans out to every registered provider and returns the
+## collected responses.
+## A request succeeds only if all providers succeed; otherwise it fails with an error.
+##
+## Usage:
+##
+## Declare the collectable request data type inside a `MultiRequestBroker` macro, adding any number of fields:
+## ```nim
+## MultiRequestBroker:
+##   type TypeName = object
+##     field1*: Type1
+##     field2*: Type2
+##
+##   ## Define the request and provider signature, which is enforced at compile time.
+##   proc signature*(): Future[Result[TypeName, string]] {.async: (raises: []).}
+##
+##   ## It is also possible to define a signature with arbitrary input arguments.
+##   proc signature*(arg1: ArgType, arg2: AnotherArgType): Future[Result[TypeName, string]] {.async: (raises: []).}
+##
+## ```
+##
+## You register a request processor (provider) anywhere in the code, without needing to know who may ever issue a request.
+## Register provider functions matching the defined signatures with `TypeName.setProvider(...)`.
+## Providers are async procs or lambdas that return a Future[Result[TypeName, string]].
+## Note that MultiRequestBroker's `setProvider` returns a handle (or an error); the handle can be used to remove the provider later.
+##
+## Requests can be made from anywhere, with no direct dependency on the provider(s), by
+## calling `TypeName.request()` with arguments respecting the signature(s).
+## This will asynchronously call the registered providers and return the collected data as a `Future[Result[seq[TypeName], string]]`.
+##
+## Whenever you no longer want to process requests (or the object instance that serves the requests goes out of scope),
+## you can remove the provider from the broker with `TypeName.removeProvider(handle)`.
+## Alternatively, you can remove all registered providers with `TypeName.clearProviders()`.
+##
+## Example:
+## ```nim
+## MultiRequestBroker:
+##   type Greeting = object
+##     text*: string
+##
+##   ## Define the request and provider signature, which is enforced at compile time.
+## proc signature*(): Future[Result[Greeting, string]] {.async: (raises: []).} +## +## ## Also possible to define signature with arbitrary input arguments. +## proc signature*(lang: string): Future[Result[Greeting, string]] {.async: (raises: []).} +## +## ... +## let handle = Greeting.setProvider( +## proc(): Future[Result[Greeting, string]] {.async: (raises: []).} = +## ok(Greeting(text: "hello")) +## ) +## +## let anotherHandle = Greeting.setProvider( +## proc(): Future[Result[Greeting, string]] {.async: (raises: []).} = +## ok(Greeting(text: "szia")) +## ) +## +## let responses = (await Greeting.request()).valueOr(@[Greeting(text: "default")]) +## +## echo responses.len +## Greeting.clearProviders() +## ``` +## If no `signature` proc is declared, a zero-argument form is generated +## automatically, so the caller only needs to provide the type definition. + +import std/[macros, strutils, tables, sugar] +import chronos +import results +import ./helper/broker_utils + +export results, chronos + +proc isReturnTypeValid(returnType, typeIdent: NimNode): bool = + ## Accept Future[Result[TypeIdent, string]] as the contract. + if returnType.kind != nnkBracketExpr or returnType.len != 2: + return false + if returnType[0].kind != nnkIdent or not returnType[0].eqIdent("Future"): + return false + let inner = returnType[1] + if inner.kind != nnkBracketExpr or inner.len != 3: + return false + if inner[0].kind != nnkIdent or not inner[0].eqIdent("Result"): + return false + if inner[1].kind != nnkIdent or not inner[1].eqIdent($typeIdent): + return false + inner[2].kind == nnkIdent and inner[2].eqIdent("string") + +proc cloneParams(params: seq[NimNode]): seq[NimNode] = + ## Deep copy parameter definitions so they can be reused in generated nodes. + result = @[] + for param in params: + result.add(copyNimTree(param)) + +proc collectParamNames(params: seq[NimNode]): seq[NimNode] = + ## Extract identifiers declared in parameter definitions. 
+ result = @[] + for param in params: + assert param.kind == nnkIdentDefs + for i in 0 ..< param.len - 2: + let nameNode = param[i] + if nameNode.kind == nnkEmpty: + continue + result.add(ident($nameNode)) + +proc makeProcType(returnType: NimNode, params: seq[NimNode]): NimNode = + var formal = newTree(nnkFormalParams) + formal.add(returnType) + for param in params: + formal.add(param) + + let pragmas = quote: + {.async.} + + newTree(nnkProcTy, formal, pragmas) + +macro MultiRequestBroker*(body: untyped): untyped = + when defined(requestBrokerDebug): + echo body.treeRepr + var typeIdent: NimNode = nil + var objectDef: NimNode = nil + var isRefObject = false + for stmt in body: + if stmt.kind == nnkTypeSection: + for def in stmt: + if def.kind != nnkTypeDef: + continue + let rhs = def[2] + var objectType: NimNode + case rhs.kind + of nnkObjectTy: + objectType = rhs + of nnkRefTy: + isRefObject = true + if rhs.len != 1 or rhs[0].kind != nnkObjectTy: + error( + "MultiRequestBroker ref object must wrap a concrete object definition", + rhs, + ) + objectType = rhs[0] + else: + continue + if not typeIdent.isNil(): + error("Only one object type may be declared inside MultiRequestBroker", def) + typeIdent = baseTypeIdent(def[0]) + let recList = objectType[2] + if recList.kind != nnkRecList: + error( + "MultiRequestBroker object must declare a standard field list", objectType + ) + var exportedRecList = newTree(nnkRecList) + for field in recList: + case field.kind + of nnkIdentDefs: + ensureFieldDef(field) + var cloned = copyNimTree(field) + for i in 0 ..< cloned.len - 2: + cloned[i] = exportIdentNode(cloned[i]) + exportedRecList.add(cloned) + of nnkEmpty: + discard + else: + error( + "MultiRequestBroker object definition only supports simple field declarations", + field, + ) + let exportedObjectType = newTree( + nnkObjectTy, + copyNimTree(objectType[0]), + copyNimTree(objectType[1]), + exportedRecList, + ) + if isRefObject: + objectDef = newTree(nnkRefTy, 
exportedObjectType) + else: + objectDef = exportedObjectType + if typeIdent.isNil(): + error("MultiRequestBroker body must declare exactly one object type", body) + + when defined(requestBrokerDebug): + echo "MultiRequestBroker generating type: ", $typeIdent + + let exportedTypeIdent = postfix(copyNimTree(typeIdent), "*") + let sanitized = sanitizeIdentName(typeIdent) + let typeNameLit = newLit($typeIdent) + let isRefObjectLit = newLit(isRefObject) + let tableSym = bindSym"Table" + let initTableSym = bindSym"initTable" + let uint64Ident = ident("uint64") + let providerKindIdent = ident(sanitized & "ProviderKind") + let providerHandleIdent = ident(sanitized & "ProviderHandle") + let exportedProviderHandleIdent = postfix(copyNimTree(providerHandleIdent), "*") + let zeroKindIdent = ident("pk" & sanitized & "NoArgs") + let argKindIdent = ident("pk" & sanitized & "WithArgs") + var zeroArgSig: NimNode = nil + var zeroArgProviderName: NimNode = nil + var zeroArgFieldName: NimNode = nil + var argSig: NimNode = nil + var argParams: seq[NimNode] = @[] + var argProviderName: NimNode = nil + var argFieldName: NimNode = nil + + for stmt in body: + case stmt.kind + of nnkProcDef: + let procName = stmt[0] + let procNameIdent = + case procName.kind + of nnkIdent: + procName + of nnkPostfix: + procName[1] + else: + procName + let procNameStr = $procNameIdent + if not procNameStr.startsWith("signature"): + error("Signature proc names must start with `signature`", procName) + let params = stmt.params + if params.len == 0: + error("Signature must declare a return type", stmt) + let returnType = params[0] + if not isReturnTypeValid(returnType, typeIdent): + error( + "Signature must return Future[Result[`" & $typeIdent & "`, string]]", stmt + ) + let paramCount = params.len - 1 + if paramCount == 0: + if zeroArgSig != nil: + error("Only one zero-argument signature is allowed", stmt) + zeroArgSig = stmt + zeroArgProviderName = ident(sanitizeIdentName(typeIdent) & "ProviderNoArgs") + 
zeroArgFieldName = ident("providerNoArgs") + elif paramCount >= 1: + if argSig != nil: + error("Only one argument-based signature is allowed", stmt) + argSig = stmt + argParams = @[] + for idx in 1 ..< params.len: + let paramDef = params[idx] + if paramDef.kind != nnkIdentDefs: + error( + "Signature parameter must be a standard identifier declaration", paramDef + ) + let paramTypeNode = paramDef[paramDef.len - 2] + if paramTypeNode.kind == nnkEmpty: + error("Signature parameter must declare a type", paramDef) + var hasName = false + for i in 0 ..< paramDef.len - 2: + if paramDef[i].kind != nnkEmpty: + hasName = true + if not hasName: + error("Signature parameter must declare a name", paramDef) + argParams.add(copyNimTree(paramDef)) + argProviderName = ident(sanitizeIdentName(typeIdent) & "ProviderWithArgs") + argFieldName = ident("providerWithArgs") + of nnkTypeSection, nnkEmpty: + discard + else: + error("Unsupported statement inside MultiRequestBroker definition", stmt) + + if zeroArgSig.isNil() and argSig.isNil(): + zeroArgSig = newEmptyNode() + zeroArgProviderName = ident(sanitizeIdentName(typeIdent) & "ProviderNoArgs") + zeroArgFieldName = ident("providerNoArgs") + + var typeSection = newTree(nnkTypeSection) + typeSection.add(newTree(nnkTypeDef, exportedTypeIdent, newEmptyNode(), objectDef)) + + var kindEnum = newTree(nnkEnumTy, newEmptyNode()) + if not zeroArgSig.isNil(): + kindEnum.add(zeroKindIdent) + if not argSig.isNil(): + kindEnum.add(argKindIdent) + typeSection.add(newTree(nnkTypeDef, providerKindIdent, newEmptyNode(), kindEnum)) + + var handleRecList = newTree(nnkRecList) + handleRecList.add(newTree(nnkIdentDefs, ident("id"), uint64Ident, newEmptyNode())) + handleRecList.add( + newTree(nnkIdentDefs, ident("kind"), providerKindIdent, newEmptyNode()) + ) + typeSection.add( + newTree( + nnkTypeDef, + exportedProviderHandleIdent, + newEmptyNode(), + newTree(nnkObjectTy, newEmptyNode(), newEmptyNode(), handleRecList), + ) + ) + + let returnType = quote: + 
Future[Result[`typeIdent`, string]] + + if not zeroArgSig.isNil(): + let procType = makeProcType(returnType, @[]) + typeSection.add(newTree(nnkTypeDef, zeroArgProviderName, newEmptyNode(), procType)) + if not argSig.isNil(): + let procType = makeProcType(returnType, cloneParams(argParams)) + typeSection.add(newTree(nnkTypeDef, argProviderName, newEmptyNode(), procType)) + + var brokerRecList = newTree(nnkRecList) + if not zeroArgSig.isNil(): + brokerRecList.add( + newTree( + nnkIdentDefs, + zeroArgFieldName, + newTree(nnkBracketExpr, tableSym, uint64Ident, zeroArgProviderName), + newEmptyNode(), + ) + ) + if not argSig.isNil(): + brokerRecList.add( + newTree( + nnkIdentDefs, + argFieldName, + newTree(nnkBracketExpr, tableSym, uint64Ident, argProviderName), + newEmptyNode(), + ) + ) + brokerRecList.add(newTree(nnkIdentDefs, ident("nextId"), uint64Ident, newEmptyNode())) + let brokerTypeIdent = ident(sanitizeIdentName(typeIdent) & "Broker") + let brokerTypeDef = newTree( + nnkTypeDef, + brokerTypeIdent, + newEmptyNode(), + newTree( + nnkRefTy, newTree(nnkObjectTy, newEmptyNode(), newEmptyNode(), brokerRecList) + ), + ) + typeSection.add(brokerTypeDef) + result = newStmtList() + result.add(typeSection) + + let globalVarIdent = ident("g" & sanitizeIdentName(typeIdent) & "Broker") + let accessProcIdent = ident("access" & sanitizeIdentName(typeIdent) & "Broker") + var initStatements = newStmtList() + if not zeroArgSig.isNil(): + initStatements.add( + quote do: + `globalVarIdent`.`zeroArgFieldName` = + `initTableSym`[`uint64Ident`, `zeroArgProviderName`]() + ) + if not argSig.isNil(): + initStatements.add( + quote do: + `globalVarIdent`.`argFieldName` = + `initTableSym`[`uint64Ident`, `argProviderName`]() + ) + result.add( + quote do: + var `globalVarIdent` {.threadvar.}: `brokerTypeIdent` + + proc `accessProcIdent`(): `brokerTypeIdent` = + if `globalVarIdent`.isNil(): + new(`globalVarIdent`) + `globalVarIdent`.nextId = 1'u64 + `initStatements` + return `globalVarIdent` + 
+ ) + + var clearBody = newStmtList() + if not zeroArgSig.isNil(): + result.add( + quote do: + proc setProvider*( + _: typedesc[`typeIdent`], handler: `zeroArgProviderName` + ): Result[`providerHandleIdent`, string] = + if handler.isNil(): + return err("Provider handler must be provided") + let broker = `accessProcIdent`() + if broker.nextId == 0'u64: + broker.nextId = 1'u64 + for existingId, existing in broker.`zeroArgFieldName`.pairs: + if existing == handler: + return ok(`providerHandleIdent`(id: existingId, kind: `zeroKindIdent`)) + let newId = broker.nextId + inc broker.nextId + broker.`zeroArgFieldName`[newId] = handler + return ok(`providerHandleIdent`(id: newId, kind: `zeroKindIdent`)) + + ) + clearBody.add( + quote do: + let broker = `accessProcIdent`() + if not broker.isNil() and broker.`zeroArgFieldName`.len > 0: + broker.`zeroArgFieldName`.clear() + ) + result.add( + quote do: + proc request*( + _: typedesc[`typeIdent`] + ): Future[Result[seq[`typeIdent`], string]] {.async: (raises: []), gcsafe.} = + var aggregated: seq[`typeIdent`] = @[] + let providers = `accessProcIdent`().`zeroArgFieldName` + if providers.len == 0: + return ok(aggregated) + # var providersFut: seq[Future[Result[`typeIdent`, string]]] = collect: + var providersFut = collect(newSeq): + for provider in providers.values: + if provider.isNil(): + continue + provider() + + let catchable = catch: + await allFinished(providersFut) + + catchable.isOkOr: + return err("Some provider(s) failed:" & error.msg) + + for fut in catchable.get(): + if fut.failed(): + return err("Some provider(s) failed:" & fut.error.msg) + elif fut.finished(): + let providerResult = fut.value() + if providerResult.isOk: + let providerValue = providerResult.get() + when `isRefObjectLit`: + if providerValue.isNil(): + return err( + "MultiRequestBroker(" & `typeNameLit` & + "): provider returned nil result" + ) + aggregated.add(providerValue) + else: + return err("Some provider(s) failed:" & providerResult.error) + + 
return ok(aggregated) + + ) + if not argSig.isNil(): + result.add( + quote do: + proc setProvider*( + _: typedesc[`typeIdent`], handler: `argProviderName` + ): Result[`providerHandleIdent`, string] = + if handler.isNil(): + return err("Provider handler must be provided") + let broker = `accessProcIdent`() + if broker.nextId == 0'u64: + broker.nextId = 1'u64 + for existingId, existing in broker.`argFieldName`.pairs: + if existing == handler: + return ok(`providerHandleIdent`(id: existingId, kind: `argKindIdent`)) + let newId = broker.nextId + inc broker.nextId + broker.`argFieldName`[newId] = handler + return ok(`providerHandleIdent`(id: newId, kind: `argKindIdent`)) + + ) + clearBody.add( + quote do: + let broker = `accessProcIdent`() + if not broker.isNil() and broker.`argFieldName`.len > 0: + broker.`argFieldName`.clear() + ) + let requestParamDefs = cloneParams(argParams) + let argNameIdents = collectParamNames(requestParamDefs) + let providerSym = genSym(nskLet, "providerVal") + var providerCall = newCall(providerSym) + for argName in argNameIdents: + providerCall.add(argName) + var formalParams = newTree(nnkFormalParams) + formalParams.add( + quote do: + Future[Result[seq[`typeIdent`], string]] + ) + formalParams.add( + newTree( + nnkIdentDefs, + ident("_"), + newTree(nnkBracketExpr, ident("typedesc"), copyNimTree(typeIdent)), + newEmptyNode(), + ) + ) + for paramDef in requestParamDefs: + formalParams.add(paramDef) + let requestPragmas = quote: + {.async: (raises: []), gcsafe.} + let requestBody = quote: + var aggregated: seq[`typeIdent`] = @[] + let providers = `accessProcIdent`().`argFieldName` + if providers.len == 0: + return ok(aggregated) + var providersFut = collect(newSeq): + for provider in providers.values: + if provider.isNil(): + continue + let `providerSym` = provider + `providerCall` + let catchable = catch: + await allFinished(providersFut) + catchable.isOkOr: + return err("Some provider(s) failed:" & error.msg) + for fut in catchable.get(): + 
if fut.failed(): + return err("Some provider(s) failed:" & fut.error.msg) + elif fut.finished(): + let providerResult = fut.value() + if providerResult.isOk: + let providerValue = providerResult.get() + when `isRefObjectLit`: + if providerValue.isNil(): + return err( + "MultiRequestBroker(" & `typeNameLit` & + "): provider returned nil result" + ) + aggregated.add(providerValue) + else: + return err("Some provider(s) failed:" & providerResult.error) + return ok(aggregated) + + result.add( + newTree( + nnkProcDef, + postfix(ident("request"), "*"), + newEmptyNode(), + newEmptyNode(), + formalParams, + requestPragmas, + newEmptyNode(), + requestBody, + ) + ) + + result.add( + quote do: + proc clearProviders*(_: typedesc[`typeIdent`]) = + `clearBody` + let broker = `accessProcIdent`() + if not broker.isNil(): + broker.nextId = 1'u64 + + ) + + let removeHandleSym = genSym(nskParam, "handle") + let removeBrokerSym = genSym(nskLet, "broker") + var removeBody = newStmtList() + removeBody.add( + quote do: + if `removeHandleSym`.id == 0'u64: + return + let `removeBrokerSym` = `accessProcIdent`() + if `removeBrokerSym`.isNil(): + return + ) + if not zeroArgSig.isNil(): + removeBody.add( + quote do: + if `removeHandleSym`.kind == `zeroKindIdent`: + `removeBrokerSym`.`zeroArgFieldName`.del(`removeHandleSym`.id) + return + ) + if not argSig.isNil(): + removeBody.add( + quote do: + if `removeHandleSym`.kind == `argKindIdent`: + `removeBrokerSym`.`argFieldName`.del(`removeHandleSym`.id) + return + ) + removeBody.add( + quote do: + discard + ) + result.add( + quote do: + proc removeProvider*( + _: typedesc[`typeIdent`], `removeHandleSym`: `providerHandleIdent` + ) = + `removeBody` + + ) + + when defined(requestBrokerDebug): + echo result.repr diff --git a/waku/common/broker/request_broker.nim b/waku/common/broker/request_broker.nim new file mode 100644 index 000000000..a8a6651d7 --- /dev/null +++ b/waku/common/broker/request_broker.nim @@ -0,0 +1,438 @@ +## RequestBroker +## 
--------------------
+## RequestBroker implements a proactive decoupling pattern that
+## allows defining request-response style interactions between modules without
+## the need for direct dependencies between them.
+## It is worth considering in a single-provider, many-requester scenario.
+##
+## Provides a declarative way to define an immutable value type together with a
+## thread-local broker that can register an asynchronous provider, dispatch typed
+## requests, and clear the provider.
+##
+## Usage:
+## Declare your desired request type inside a `RequestBroker` macro, adding any number of fields.
+## Define the provider signature, which is enforced at compile time.
+##
+## ```nim
+## RequestBroker:
+##   type TypeName = object
+##     field1*: FieldType
+##     field2*: AnotherFieldType
+##
+##   proc signature*(): Future[Result[TypeName, string]]
+##   ## It is also possible to define a signature with arbitrary input arguments.
+##   proc signature*(arg1: ArgType, arg2: AnotherArgType): Future[Result[TypeName, string]]
+##
+## ```
+## The 'TypeName' object defines the requestable data (it can also be seen as a request for an action with a return value).
+## The 'signature' proc defines the provider signature, which is enforced at compile time.
+## One signature may take no arguments and another any number of arguments, where the input arguments are
+## not related to the request type but are alternative inputs for processing the request.
+##
+## After this, you can register a provider anywhere in your code with
+## `TypeName.setProvider(...)`, which returns an error if a provider is already registered.
+## Providers are async procs or lambdas that take no arguments and return a Future[Result[TypeName, string]].
+## Only one provider can be registered at a time per signature type (zero-arg and/or multi-arg).
+##
+## Requests can be made from anywhere, with no direct dependency on the provider, by
+## calling `TypeName.request()` with arguments respecting the signature(s).
+## This will asynchronously call the registered provider and return a Future[Result[TypeName, string]].
+##
+## Whenever you no longer want to process requests (or the object instance that serves the requests goes out of scope),
+## you can remove the provider from the broker with `TypeName.clearProvider()`.
+##
+## Example:
+## ```nim
+## RequestBroker:
+##   type Greeting = object
+##     text*: string
+##
+##   ## Define the request and provider signature, which is enforced at compile time.
+##   proc signature*(): Future[Result[Greeting, string]]
+##
+##   ## It is also possible to define a signature with arbitrary input arguments.
+##   proc signature*(lang: string): Future[Result[Greeting, string]]
+##
+## ...
+## Greeting.setProvider(
+##   proc(): Future[Result[Greeting, string]] {.async.} =
+##     ok(Greeting(text: "hello"))
+## )
+## let res = await Greeting.request()
+## ```
+## If no `signature` proc is declared, a zero-argument form is generated
+## automatically, so the caller only needs to provide the type definition.
+
+import std/[macros, strutils]
+import chronos
+import results
+import ./helper/broker_utils
+
+export results, chronos
+
+proc errorFuture[T](message: string): Future[Result[T, string]] {.inline.} =
+  ## Build a future that is already completed with an error result.
+  let fut = newFuture[Result[T, string]]("request_broker.errorFuture")
+  fut.complete(err(Result[T, string], message))
+  fut
+
+proc isReturnTypeValid(returnType, typeIdent: NimNode): bool =
+  ## Accept Future[Result[TypeIdent, string]] as the contract.
+ if returnType.kind != nnkBracketExpr or returnType.len != 2: + return false + if returnType[0].kind != nnkIdent or not returnType[0].eqIdent("Future"): + return false + let inner = returnType[1] + if inner.kind != nnkBracketExpr or inner.len != 3: + return false + if inner[0].kind != nnkIdent or not inner[0].eqIdent("Result"): + return false + if inner[1].kind != nnkIdent or not inner[1].eqIdent($typeIdent): + return false + inner[2].kind == nnkIdent and inner[2].eqIdent("string") + +proc cloneParams(params: seq[NimNode]): seq[NimNode] = + ## Deep copy parameter definitions so they can be inserted in multiple places. + result = @[] + for param in params: + result.add(copyNimTree(param)) + +proc collectParamNames(params: seq[NimNode]): seq[NimNode] = + ## Extract all identifier symbols declared across IdentDefs nodes. + result = @[] + for param in params: + assert param.kind == nnkIdentDefs + for i in 0 ..< param.len - 2: + let nameNode = param[i] + if nameNode.kind == nnkEmpty: + continue + result.add(ident($nameNode)) + +proc makeProcType(returnType: NimNode, params: seq[NimNode]): NimNode = + var formal = newTree(nnkFormalParams) + formal.add(returnType) + for param in params: + formal.add(param) + let pragmas = newTree(nnkPragma, ident("async")) + newTree(nnkProcTy, formal, pragmas) + +macro RequestBroker*(body: untyped): untyped = + when defined(requestBrokerDebug): + echo body.treeRepr + var typeIdent: NimNode = nil + var objectDef: NimNode = nil + var isRefObject = false + for stmt in body: + if stmt.kind == nnkTypeSection: + for def in stmt: + if def.kind != nnkTypeDef: + continue + let rhs = def[2] + var objectType: NimNode + case rhs.kind + of nnkObjectTy: + objectType = rhs + of nnkRefTy: + isRefObject = true + if rhs.len != 1 or rhs[0].kind != nnkObjectTy: + error( + "RequestBroker ref object must wrap a concrete object definition", rhs + ) + objectType = rhs[0] + else: + continue + if not typeIdent.isNil(): + error("Only one object type may be 
declared inside RequestBroker", def) + typeIdent = baseTypeIdent(def[0]) + let recList = objectType[2] + if recList.kind != nnkRecList: + error("RequestBroker object must declare a standard field list", objectType) + var exportedRecList = newTree(nnkRecList) + for field in recList: + case field.kind + of nnkIdentDefs: + ensureFieldDef(field) + var cloned = copyNimTree(field) + for i in 0 ..< cloned.len - 2: + cloned[i] = exportIdentNode(cloned[i]) + exportedRecList.add(cloned) + of nnkEmpty: + discard + else: + error( + "RequestBroker object definition only supports simple field declarations", + field, + ) + let exportedObjectType = newTree( + nnkObjectTy, + copyNimTree(objectType[0]), + copyNimTree(objectType[1]), + exportedRecList, + ) + if isRefObject: + objectDef = newTree(nnkRefTy, exportedObjectType) + else: + objectDef = exportedObjectType + if typeIdent.isNil(): + error("RequestBroker body must declare exactly one object type", body) + + when defined(requestBrokerDebug): + echo "RequestBroker generating type: ", $typeIdent + + let exportedTypeIdent = postfix(copyNimTree(typeIdent), "*") + let typeDisplayName = sanitizeIdentName(typeIdent) + let typeNameLit = newLit(typeDisplayName) + let isRefObjectLit = newLit(isRefObject) + var zeroArgSig: NimNode = nil + var zeroArgProviderName: NimNode = nil + var zeroArgFieldName: NimNode = nil + var argSig: NimNode = nil + var argParams: seq[NimNode] = @[] + var argProviderName: NimNode = nil + var argFieldName: NimNode = nil + + for stmt in body: + case stmt.kind + of nnkProcDef: + let procName = stmt[0] + let procNameIdent = + case procName.kind + of nnkIdent: + procName + of nnkPostfix: + procName[1] + else: + procName + let procNameStr = $procNameIdent + if not procNameStr.startsWith("signature"): + error("Signature proc names must start with `signature`", procName) + let params = stmt.params + if params.len == 0: + error("Signature must declare a return type", stmt) + let returnType = params[0] + if not 
isReturnTypeValid(returnType, typeIdent): + error( + "Signature must return Future[Result[`" & $typeIdent & "`, string]]", stmt + ) + let paramCount = params.len - 1 + if paramCount == 0: + if zeroArgSig != nil: + error("Only one zero-argument signature is allowed", stmt) + zeroArgSig = stmt + zeroArgProviderName = ident(sanitizeIdentName(typeIdent) & "ProviderNoArgs") + zeroArgFieldName = ident("providerNoArgs") + elif paramCount >= 1: + if argSig != nil: + error("Only one argument-based signature is allowed", stmt) + argSig = stmt + argParams = @[] + for idx in 1 ..< params.len: + let paramDef = params[idx] + if paramDef.kind != nnkIdentDefs: + error( + "Signature parameter must be a standard identifier declaration", paramDef + ) + let paramTypeNode = paramDef[paramDef.len - 2] + if paramTypeNode.kind == nnkEmpty: + error("Signature parameter must declare a type", paramDef) + var hasName = false + for i in 0 ..< paramDef.len - 2: + if paramDef[i].kind != nnkEmpty: + hasName = true + if not hasName: + error("Signature parameter must declare a name", paramDef) + argParams.add(copyNimTree(paramDef)) + argProviderName = ident(sanitizeIdentName(typeIdent) & "ProviderWithArgs") + argFieldName = ident("providerWithArgs") + of nnkTypeSection, nnkEmpty: + discard + else: + error("Unsupported statement inside RequestBroker definition", stmt) + + if zeroArgSig.isNil() and argSig.isNil(): + zeroArgSig = newEmptyNode() + zeroArgProviderName = ident(sanitizeIdentName(typeIdent) & "ProviderNoArgs") + zeroArgFieldName = ident("providerNoArgs") + + var typeSection = newTree(nnkTypeSection) + typeSection.add(newTree(nnkTypeDef, exportedTypeIdent, newEmptyNode(), objectDef)) + + let returnType = quote: + Future[Result[`typeIdent`, string]] + + if not zeroArgSig.isNil(): + let procType = makeProcType(returnType, @[]) + typeSection.add(newTree(nnkTypeDef, zeroArgProviderName, newEmptyNode(), procType)) + if not argSig.isNil(): + let procType = makeProcType(returnType, 
cloneParams(argParams))
+  typeSection.add(newTree(nnkTypeDef, argProviderName, newEmptyNode(), procType))
+
+  var brokerRecList = newTree(nnkRecList)
+  if not zeroArgSig.isNil():
+    brokerRecList.add(
+      newTree(nnkIdentDefs, zeroArgFieldName, zeroArgProviderName, newEmptyNode())
+    )
+  if not argSig.isNil():
+    brokerRecList.add(
+      newTree(nnkIdentDefs, argFieldName, argProviderName, newEmptyNode())
+    )
+  let brokerTypeIdent = ident(sanitizeIdentName(typeIdent) & "Broker")
+  let brokerTypeDef = newTree(
+    nnkTypeDef,
+    brokerTypeIdent,
+    newEmptyNode(),
+    newTree(nnkObjectTy, newEmptyNode(), newEmptyNode(), brokerRecList),
+  )
+  typeSection.add(brokerTypeDef)
+  result = newStmtList()
+  result.add(typeSection)
+
+  let globalVarIdent = ident("g" & sanitizeIdentName(typeIdent) & "Broker")
+  let accessProcIdent = ident("access" & sanitizeIdentName(typeIdent) & "Broker")
+  result.add(
+    quote do:
+      var `globalVarIdent` {.threadvar.}: `brokerTypeIdent`
+
+      proc `accessProcIdent`(): var `brokerTypeIdent` =
+        `globalVarIdent`
+
+  )
+
+  var clearBody = newStmtList()
+  if not zeroArgSig.isNil():
+    result.add(
+      quote do:
+        proc setProvider*(
+            _: typedesc[`typeIdent`], handler: `zeroArgProviderName`
+        ): Result[void, string] =
+          if not `accessProcIdent`().`zeroArgFieldName`.isNil():
+            return err("Zero-arg provider already set")
+          `accessProcIdent`().`zeroArgFieldName` = handler
+          return ok()
+
+    )
+    clearBody.add(
+      quote do:
+        `accessProcIdent`().`zeroArgFieldName` = nil
+    )
+    result.add(
+      quote do:
+        proc request*(
+            _: typedesc[`typeIdent`]
+        ): Future[Result[`typeIdent`, string]] {.async: (raises: []).} =
+          let provider = `accessProcIdent`().`zeroArgFieldName`
+          if provider.isNil():
+            return err(
+              "RequestBroker(" & `typeNameLit` & "): no zero-arg provider registered"
+            )
+          let catchedRes = catch:
+            await provider()
+
+          if catchedRes.isErr():
+            return err("Request failed:" & catchedRes.error.msg)
+
+          let providerRes = catchedRes.get()
+          when `isRefObjectLit`:
+            if providerRes.isOk():
+              let resultValue = providerRes.get()
+              if resultValue.isNil():
+                return err(
+                  "RequestBroker(" & `typeNameLit` & "): provider returned nil result"
+                )
+          return providerRes
+
+    )
+  if not argSig.isNil():
+    result.add(
+      quote do:
+        proc setProvider*(
+            _: typedesc[`typeIdent`], handler: `argProviderName`
+        ): Result[void, string] =
+          if not `accessProcIdent`().`argFieldName`.isNil():
+            return err("Provider already set")
+          `accessProcIdent`().`argFieldName` = handler
+          return ok()
+
+    )
+    clearBody.add(
+      quote do:
+        `accessProcIdent`().`argFieldName` = nil
+    )
+    let requestParamDefs = cloneParams(argParams)
+    let argNameIdents = collectParamNames(requestParamDefs)
+    let providerSym = genSym(nskLet, "provider")
+    var formalParams = newTree(nnkFormalParams)
+    formalParams.add(
+      quote do:
+        Future[Result[`typeIdent`, string]]
+    )
+    formalParams.add(
+      newTree(
+        nnkIdentDefs,
+        ident("_"),
+        newTree(nnkBracketExpr, ident("typedesc"), copyNimTree(typeIdent)),
+        newEmptyNode(),
+      )
+    )
+    for paramDef in requestParamDefs:
+      formalParams.add(paramDef)
+
+    let requestPragmas = quote:
+      {.async: (raises: []), gcsafe.}
+    var providerCall = newCall(providerSym)
+    for argName in argNameIdents:
+      providerCall.add(argName)
+    var requestBody = newStmtList()
+    requestBody.add(
+      quote do:
+        let `providerSym` = `accessProcIdent`().`argFieldName`
+    )
+    requestBody.add(
+      quote do:
+        if `providerSym`.isNil():
+          return err(
+            "RequestBroker(" & `typeNameLit` &
+              "): no provider registered for input signature"
+          )
+    )
+    requestBody.add(
+      quote do:
+        let catchedRes = catch:
+          await `providerCall`
+        if catchedRes.isErr():
+          return err("Request failed:" & catchedRes.error.msg)
+
+        let providerRes = catchedRes.get()
+        when `isRefObjectLit`:
+          if providerRes.isOk():
+            let resultValue = providerRes.get()
+            if resultValue.isNil():
+              return err(
+                "RequestBroker(" & `typeNameLit` & "): provider returned nil result"
+              )
+        return providerRes
+    )
+    # requestBody.add(providerCall)
+    result.add(
+      newTree(
+        nnkProcDef,
+        postfix(ident("request"), "*"),
+        newEmptyNode(),
+        newEmptyNode(),
+        formalParams,
+        requestPragmas,
+        newEmptyNode(),
+        requestBody,
+      )
+    )
+
+  result.add(
+    quote do:
+      proc clearProvider*(_: typedesc[`typeIdent`]) =
+        `clearBody`
+
+  )
+
+  when defined(requestBrokerDebug):
+    echo result.repr

From 54f4ad8fa2ed452df670e25213a7fa34e5cc5432 Mon Sep 17 00:00:00 2001
From: Fabiana Cecin
Date: Tue, 2 Dec 2025 11:00:26 -0300
Subject: [PATCH 13/70] fix: fix .github waku-org/ --> logos-messaging/ (#3653)

* fix: fix .github waku-org/ --> logos-messaging/
* bump CI tests timeout 45 --> 90 minutes
* fix .gitmodules waku-org --> logos-messaging
---
 .github/ISSUE_TEMPLATE/prepare_beta_release.md | 14 +++++++-------
 .github/ISSUE_TEMPLATE/prepare_full_release.md | 16 ++++++++--------
 .github/workflows/ci.yml | 14 +++++++-------
 .github/workflows/pre-release.yml | 10 +++++-----
 .gitmodules | 2 +-
 5 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/.github/ISSUE_TEMPLATE/prepare_beta_release.md b/.github/ISSUE_TEMPLATE/prepare_beta_release.md
index 270f6a8e6..9afaefbd1 100644
--- a/.github/ISSUE_TEMPLATE/prepare_beta_release.md
+++ b/.github/ISSUE_TEMPLATE/prepare_beta_release.md
@@ -10,7 +10,7 @@ assignees: ''

 ### Items to complete
@@ -34,10 +34,10 @@ All items below are to be completed by the owner of the given release.

 - [ ] **Proceed with release**
   - [ ] Assign a final release tag (`v0.X.0-beta`) to the same commit that contains the validated release-candidate tag (e.g. `v0.X.0-beta-rc.N`) and submit a PR from the release branch to `master`.
-  - [ ] Update [nwaku-compose](https://github.com/waku-org/nwaku-compose) and [waku-simulator](https://github.com/waku-org/waku-simulator) according to the new release.
-  - [ ] Bump nwaku dependency in [waku-rust-bindings](https://github.com/waku-org/waku-rust-bindings) and make sure all examples and tests work.
-  - [ ] Bump nwaku dependency in [waku-go-bindings](https://github.com/waku-org/waku-go-bindings) and make sure all tests work.
-  - [ ] Create GitHub release (https://github.com/waku-org/nwaku/releases).
+  - [ ] Update [nwaku-compose](https://github.com/logos-messaging/nwaku-compose) and [waku-simulator](https://github.com/logos-messaging/waku-simulator) according to the new release.
+  - [ ] Bump nwaku dependency in [waku-rust-bindings](https://github.com/logos-messaging/waku-rust-bindings) and make sure all examples and tests work.
+  - [ ] Bump nwaku dependency in [waku-go-bindings](https://github.com/logos-messaging/waku-go-bindings) and make sure all tests work.
+  - [ ] Create GitHub release (https://github.com/logos-messaging/nwaku/releases).
   - [ ] Submit a PR to merge the release branch back to `master`. Make sure you use the option "Merge pull request (Create a merge commit)" to perform the merge. Ping repo admin if this option is not available.

 - [ ] **Promote release to fleets**
@@ -47,8 +47,8 @@ All items below are to be completed by the owner of the given release.
 ### Links

-- [Release process](https://github.com/waku-org/nwaku/blob/master/docs/contributors/release-process.md)
-- [Release notes](https://github.com/waku-org/nwaku/blob/master/CHANGELOG.md)
+- [Release process](https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md)
+- [Release notes](https://github.com/logos-messaging/nwaku/blob/master/CHANGELOG.md)
 - [Fleet ownership](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64)
 - [Infra-nim-waku](https://github.com/status-im/infra-nim-waku)
 - [Jenkins](https://ci.infra.status.im/job/nim-waku/)
diff --git a/.github/ISSUE_TEMPLATE/prepare_full_release.md b/.github/ISSUE_TEMPLATE/prepare_full_release.md
index 18c668d16..314146f60 100644
--- a/.github/ISSUE_TEMPLATE/prepare_full_release.md
+++ b/.github/ISSUE_TEMPLATE/prepare_full_release.md
@@ -10,7 +10,7 @@ assignees: ''

 ### Items to complete
@@ -54,11 +54,11 @@ All items below are to be completed by the owner of the given release.

 - [ ] **Proceed with release**
-  - [ ] Assign a final release tag (`v0.X.0`) to the same commit that contains the validated release-candidate tag (e.g. `v0.X.0`).
-  - [ ] Update [nwaku-compose](https://github.com/waku-org/nwaku-compose) and [waku-simulator](https://github.com/waku-org/waku-simulator) according to the new release.
-  - [ ] Bump nwaku dependency in [waku-rust-bindings](https://github.com/waku-org/waku-rust-bindings) and make sure all examples and tests work.
-  - [ ] Bump nwaku dependency in [waku-go-bindings](https://github.com/waku-org/waku-go-bindings) and make sure all tests work.
-  - [ ] Create GitHub release (https://github.com/waku-org/nwaku/releases).
+  - [ ] Assign a final release tag (`v0.X.0`) to the same commit that contains the validated release-candidate tag (e.g. `v0.X.0`).
+  - [ ] Update [nwaku-compose](https://github.com/logos-messaging/nwaku-compose) and [waku-simulator](https://github.com/logos-messaging/waku-simulator) according to the new release.
+  - [ ] Bump nwaku dependency in [waku-rust-bindings](https://github.com/logos-messaging/waku-rust-bindings) and make sure all examples and tests work.
+  - [ ] Bump nwaku dependency in [waku-go-bindings](https://github.com/logos-messaging/waku-go-bindings) and make sure all tests work.
+  - [ ] Create GitHub release (https://github.com/logos-messaging/nwaku/releases).
   - [ ] Submit a PR to merge the release branch back to `master`. Make sure you use the option "Merge pull request (Create a merge commit)" to perform the merge. Ping repo admin if this option is not available.

 - [ ] **Promote release to fleets**
@@ -67,8 +67,8 @@ All items below are to be completed by the owner of the given release.

 ### Links

-- [Release process](https://github.com/waku-org/nwaku/blob/master/docs/contributors/release-process.md)
-- [Release notes](https://github.com/waku-org/nwaku/blob/master/CHANGELOG.md)
+- [Release process](https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md)
+- [Release notes](https://github.com/logos-messaging/nwaku/blob/master/CHANGELOG.md)
 - [Fleet ownership](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64)
 - [Infra-nim-waku](https://github.com/status-im/infra-nim-waku)
 - [Jenkins](https://ci.infra.status.im/job/nim-waku/)
diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 5cf64b66a..12c1abd6d 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -78,7 +78,7 @@ jobs:
       - name: Build binaries
         run: make V=1 QUICK_AND_DIRTY_COMPILER=1 all tools
- 
+
   build-windows:
     needs: changes
     if: ${{ needs.changes.outputs.v2 == 'true' || needs.changes.outputs.common == 'true' }}
@@ -94,7 +94,7 @@ jobs:
       matrix:
         os: [ubuntu-22.04, macos-15]
     runs-on: ${{ matrix.os }}
-    timeout-minutes: 45
+    timeout-minutes: 90
     name: test-${{ matrix.os }}

     steps:
@@ -121,7 +121,7 @@
           sudo docker run --rm -d -e POSTGRES_PASSWORD=test123 -p 5432:5432 postgres:15.4-alpine3.18
           postgres_enabled=1
         fi
- 
+
         export MAKEFLAGS="-j1"
         export NIMFLAGS="--colors:off -d:chronicles_colors:none"
         export USE_LIBBACKTRACE=0
@@ -132,12 +132,12 @@
   build-docker-image:
     needs: changes
     if: ${{ needs.changes.outputs.v2 == 'true' || needs.changes.outputs.common == 'true' || needs.changes.outputs.docker == 'true' }}
-    uses: waku-org/nwaku/.github/workflows/container-image.yml@master
+    uses: logos-messaging/nwaku/.github/workflows/container-image.yml@master
     secrets: inherit

   nwaku-nwaku-interop-tests:
     needs: build-docker-image
-    uses: waku-org/waku-interop-tests/.github/workflows/nim_waku_PR.yml@SMOKE_TEST_0.0.1
+    uses: logos-messaging/waku-interop-tests/.github/workflows/nim_waku_PR.yml@SMOKE_TEST_0.0.1
     with:
       node_nwaku: ${{ needs.build-docker-image.outputs.image }}
@@ -145,14 +145,14 @@ jobs:
   js-waku-node:
     needs: build-docker-image
-    uses: waku-org/js-waku/.github/workflows/test-node.yml@master
+    uses: logos-messaging/js-waku/.github/workflows/test-node.yml@master
     with:
       nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
       test_type: node

   js-waku-node-optional:
     needs: build-docker-image
-    uses: waku-org/js-waku/.github/workflows/test-node.yml@master
+    uses: logos-messaging/js-waku/.github/workflows/test-node.yml@master
     with:
       nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
       test_type: node-optional
diff --git a/.github/workflows/pre-release.yml b/.github/workflows/pre-release.yml
index fe108e616..380ec755f 100644
--- a/.github/workflows/pre-release.yml
+++ b/.github/workflows/pre-release.yml
@@ -47,7 +47,7 @@ jobs:
       - name: prep variables
        id: vars
        run: |
-          ARCH=${{matrix.arch}} 
+          ARCH=${{matrix.arch}}
           echo "arch=${ARCH}" >> $GITHUB_OUTPUT
@@ -91,14 +91,14 @@ jobs:
   build-docker-image:
     needs: tag-name
-    uses: waku-org/nwaku/.github/workflows/container-image.yml@master
+    uses: logos-messaging/nwaku/.github/workflows/container-image.yml@master
     with:
       image_tag: ${{ needs.tag-name.outputs.tag }}
     secrets: inherit

   js-waku-node:
     needs: build-docker-image
-    uses: waku-org/js-waku/.github/workflows/test-node.yml@master
+    uses: logos-messaging/js-waku/.github/workflows/test-node.yml@master
     with:
       nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
       test_type: node
@@ -106,7 +106,7 @@ jobs:
   js-waku-node-optional:
     needs: build-docker-image
-    uses: waku-org/js-waku/.github/workflows/test-node.yml@master
+    uses: logos-messaging/js-waku/.github/workflows/test-node.yml@master
     with:
       nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
       test_type: node-optional
@@ -150,7 +150,7 @@ jobs:
           -u $(id -u) \
           docker.io/wakuorg/sv4git:latest \
           release-notes ${RELEASE_NOTES_TAG} --previous $(git tag -l --sort -creatordate | grep -e "^v[0-9]*\.[0-9]*\.[0-9]*$") |\
-          sed -E 's@#([0-9]+)@[#\1](https://github.com/waku-org/nwaku/issues/\1)@g' > release_notes.md
+          sed -E 's@#([0-9]+)@[#\1](https://github.com/logos-messaging/nwaku/issues/\1)@g' > release_notes.md
           sed -i "s/^## .*/Generated at $(date)/" release_notes.md
diff --git a/.gitmodules b/.gitmodules
index b7e52550a..93a3a006f 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -181,6 +181,6 @@
 	branch = master
 [submodule "vendor/waku-rlnv2-contract"]
 	path = vendor/waku-rlnv2-contract
-	url = https://github.com/waku-org/waku-rlnv2-contract.git
+	url = https://github.com/logos-messaging/waku-rlnv2-contract.git
 	ignore = untracked
 	branch = master

From 8c30a8e1bb7469e6184d1ac6289676aec27b719d Mon Sep 17 00:00:00 2001
From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
Date: Wed, 3 Dec 2025 11:55:34 +0100
Subject: Rest store api constraints default page size to 20 and max to 100 (#3602)

Co-authored-by: Vishwanath Martur <64204611+vishwamartur@users.noreply.github.com>
---
 docs/api/rest-api.md | 3 +++
 docs/operators/how-to/configure-rest-api.md | 3 ++-
 waku/rest_api/endpoint/store/client.nim | 2 +-
 waku/rest_api/endpoint/store/handlers.nim | 8 ++++++++
 4 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/docs/api/rest-api.md b/docs/api/rest-api.md
index eeb90abfb..cc8e51020 100644
--- a/docs/api/rest-api.md
+++ b/docs/api/rest-api.md
@@ -38,6 +38,9 @@ A particular OpenAPI spec can be easily imported into [Postman](https://www.post
 curl http://localhost:8645/debug/v1/info -s | jq
 ```

+### Store API
+
+The `page_size` query parameter in the Store API has a default value of 20 and a max value of 100.
 ### Node configuration

 Find details [here](https://github.com/waku-org/nwaku/tree/master/docs/operators/how-to/configure-rest-api.md)
diff --git a/docs/operators/how-to/configure-rest-api.md b/docs/operators/how-to/configure-rest-api.md
index 3fe070aab..7a58a798c 100644
--- a/docs/operators/how-to/configure-rest-api.md
+++ b/docs/operators/how-to/configure-rest-api.md
@@ -1,4 +1,3 @@
-
 # Configure a REST API node

 A subset of the node configuration can be used to modify the behaviour of the HTTP REST API.
@@ -21,3 +20,5 @@ Example:
 ```shell
 wakunode2 --rest=true
 ```
+
+The `page_size` query parameter in the Store API has a default value of 20 and a max value of 100.
diff --git a/waku/rest_api/endpoint/store/client.nim b/waku/rest_api/endpoint/store/client.nim
index 80939ee25..71ba7610d 100644
--- a/waku/rest_api/endpoint/store/client.nim
+++ b/waku/rest_api/endpoint/store/client.nim
@@ -57,7 +57,7 @@ proc getStoreMessagesV3*(
     # Optional cursor fields
     cursor: string = "", # base64-encoded hash
     ascending: string = "",
-    pageSize: string = "",
+    pageSize: string = "20", # default value is 20
 ): RestResponse[StoreQueryResponseHex] {.
   rest, endpoint: "/store/v3/messages", meth: HttpMethod.MethodGet
 .}
diff --git a/waku/rest_api/endpoint/store/handlers.nim b/waku/rest_api/endpoint/store/handlers.nim
index 79724b9d7..7d37191fb 100644
--- a/waku/rest_api/endpoint/store/handlers.nim
+++ b/waku/rest_api/endpoint/store/handlers.nim
@@ -129,6 +129,14 @@ proc createStoreQuery(
   except CatchableError:
     return err("page size parsing error: " & getCurrentExceptionMsg())

+  # Enforce default value of page_size to 20
+  if parsedPagedSize.isNone():
+    parsedPagedSize = some(20.uint64)
+
+  # Enforce max value of page_size to 100
+  if parsedPagedSize.get() > 100:
+    parsedPagedSize = some(100.uint64)
+
   return ok(
     StoreQueryRequest(
       includeData: parsedIncludeData,

From a8590a0a7dd53776bc2fef87149fdb084b58d317 Mon Sep 17 00:00:00 2001
From: Tanya S <120410716+stubbsta@users.noreply.github.com>
Date: Thu, 4 Dec 2025 10:26:18 +0200
Subject: chore: Add gasprice overflow check (#3636)

* Check for gasPrice overflow
* use trace for logging and update comments
* Update log level for gas price logs
---
 .../group_manager/on_chain/group_manager.nim | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/waku/waku_rln_relay/group_manager/on_chain/group_manager.nim b/waku/waku_rln_relay/group_manager/on_chain/group_manager.nim
index db68b2289..e8af61682 100644
--- a/waku/waku_rln_relay/group_manager/on_chain/group_manager.nim
+++ b/waku/waku_rln_relay/group_manager/on_chain/group_manager.nim
@@ -229,7 +229,18 @@ method register*(

   var gasPrice: int
   g.retryWrapper(gasPrice, "Failed to get gas price"):
-    int(await ethRpc.provider.eth_gasPrice()) * 2
+    let fetchedGasPrice = uint64(await ethRpc.provider.eth_gasPrice())
+    ## Multiply by 2 to speed up the transaction
+    ## Check for overflow when casting to int
+    if fetchedGasPrice > uint64(high(int) div 2):
+      warn "Gas price overflow detected, capping at maximum int value",
+        fetchedGasPrice = fetchedGasPrice, maxInt = high(int)
+      high(int)
+    else:
+      let calculatedGasPrice = int(fetchedGasPrice) * 2
+      debug "Gas price calculated",
+        fetchedGasPrice = fetchedGasPrice, gasPrice = calculatedGasPrice
+      calculatedGasPrice

   let idCommitmentHex = identityCredential.idCommitment.inHex()
   info "identityCredential idCommitmentHex", idCommitment = idCommitmentHex
   let idCommitment = identityCredential.idCommitment.toUInt256()

From 2cf4fe559a0a6a4511cc9da2b69c7935ebc7862f Mon Sep 17 00:00:00 2001
From: Tanya S <120410716+stubbsta@users.noreply.github.com>
Date: Mon, 8 Dec 2025 08:29:48 +0200
Subject: Chore: bump waku-rlnv2-contract-repo commit (#3651)

* Bump commit for vendor wakurlnv2contract
* Update RLN registration proc for contract updates
* add option to runAnvil for state dump or load with optional contract deployment on setup
* Code clean up
* Update rln relay tests to use cached anvil state
* Minor updates to utils and new test for anvil state dump
* stopAnvil needs to wait for graceful shutdown
* configure runAnvil to use load state in other tests
* reduce ci timeout
* Allow for RunAnvil load state file to be compressed
* Fix linting
* Change return type of sendMintCall to Future[void]
* Update naming of ci path for interop tests
---
 .github/workflows/ci.yml | 4 +-
 tests/node/test_wakunode_legacy_lightpush.nim | 4 +-
 tests/node/test_wakunode_lightpush.nim | 4 +-
 ...ployed-contracts-mint-and-approved.json.gz | Bin 0 -> 118346 bytes
 .../test_rln_contract_deployment.nim | 29 ++
 .../test_rln_group_manager_onchain.nim | 4 +-
 tests/waku_rln_relay/test_waku_rln_relay.nim | 4 +-
 .../test_wakunode_rln_relay.nim | 4 +-
 tests/waku_rln_relay/utils_onchain.nim | 249 ++++++++++++++----
 tests/wakunode_rest/test_rest_health.nim | 4 +-
 vendor/waku-rlnv2-contract | 2 +-
 .../group_manager/on_chain/group_manager.nim | 38 +--
 12 files changed, 259 insertions(+), 87 deletions(-)
 create mode 100644 tests/waku_rln_relay/anvil_state/state-deployed-contracts-mint-and-approved.json.gz
 create mode 100644
tests/waku_rln_relay/test_rln_contract_deployment.nim diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 12c1abd6d..e3186a007 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -94,7 +94,7 @@ jobs: matrix: os: [ubuntu-22.04, macos-15] runs-on: ${{ matrix.os }} - timeout-minutes: 90 + timeout-minutes: 45 name: test-${{ matrix.os }} steps: @@ -137,7 +137,7 @@ jobs: nwaku-nwaku-interop-tests: needs: build-docker-image - uses: logos-messaging/waku-interop-tests/.github/workflows/nim_waku_PR.yml@SMOKE_TEST_0.0.1 + uses: logos-messaging/logos-messaging-interop-tests/.github/workflows/nim_waku_PR.yml@SMOKE_TEST_0.0.1 with: node_nwaku: ${{ needs.build-docker-image.outputs.image }} diff --git a/tests/node/test_wakunode_legacy_lightpush.nim b/tests/node/test_wakunode_legacy_lightpush.nim index a51ba60b9..80e623ce4 100644 --- a/tests/node/test_wakunode_legacy_lightpush.nim +++ b/tests/node/test_wakunode_legacy_lightpush.nim @@ -135,8 +135,8 @@ suite "RLN Proofs as a Lightpush Service": server = newTestWakuNode(serverKey, parseIpAddress("0.0.0.0"), Port(0)) client = newTestWakuNode(clientKey, parseIpAddress("0.0.0.0"), Port(0)) - anvilProc = runAnvil() - manager = waitFor setupOnchainGroupManager() + anvilProc = runAnvil(stateFile = some(DEFAULT_ANVIL_STATE_PATH)) + manager = waitFor setupOnchainGroupManager(deployContracts = false) # mount rln-relay let wakuRlnConfig = getWakuRlnConfig(manager = manager, index = MembershipIndex(1)) diff --git a/tests/node/test_wakunode_lightpush.nim b/tests/node/test_wakunode_lightpush.nim index 12bfdddd8..29f72b2cc 100644 --- a/tests/node/test_wakunode_lightpush.nim +++ b/tests/node/test_wakunode_lightpush.nim @@ -135,8 +135,8 @@ suite "RLN Proofs as a Lightpush Service": server = newTestWakuNode(serverKey, parseIpAddress("0.0.0.0"), Port(0)) client = newTestWakuNode(clientKey, parseIpAddress("0.0.0.0"), Port(0)) - anvilProc = runAnvil() - manager = waitFor setupOnchainGroupManager() + anvilProc = 
runAnvil(stateFile = some(DEFAULT_ANVIL_STATE_PATH)) + manager = waitFor setupOnchainGroupManager(deployContracts = false) # mount rln-relay let wakuRlnConfig = getWakuRlnConfig(manager = manager, index = MembershipIndex(1)) diff --git a/tests/waku_rln_relay/anvil_state/state-deployed-contracts-mint-and-approved.json.gz b/tests/waku_rln_relay/anvil_state/state-deployed-contracts-mint-and-approved.json.gz new file mode 100644 index 0000000000000000000000000000000000000000..ceb081c77788d7b5a3a933b2d0510303247694a1 GIT binary patch literal 118346 zcmV)1K+V4&iwFoy%`s^J19Nm?bY(4MWpHe7d1YiRV{dMBa$#e1b1iLYZgeeSZe%TC zaBy;Oc4cHPYIARH0OXwAt{pdyh2O>3xsXJO67wai5qb;$e_{hJ|DMF+{wiZ zGH73{CMvB z_1=H_uh(bkPyXT3pWpv*KK|@K{QY&IS~)j$=fL0m<9FY`|Ka`5@4l5kUd4R=@!WGb zVnwZ4&fmP-?=L3&^8L?mCNRX`e>(s9lmEE=!|S6t&#pCAleisHZ_;-K*>HRw4l()}c=Wy`dx9RNy z-n^ZZ|M|~v-uYS|e);j|k8#ca!B^q-4?o5zFAse0uh)!FcFKN+Y@7P>>H4rg9pUxI zpFjNMe+&C!>hx=_dv!Cd+sjI07Ev2hwyf2cOUcGK*4qC(oyaT)huob;l6A)CSpKi` zwk_5Dwpz6B*>_)}IM(>AXHN-p)b)$c%lh&-BOkeK?b|V6$=1fJxsuP@X31yVFFvn3 zCwskfCM~7pCALnj$J$86uANyhL4G%H{^PuVn4!y`9Z*uPilyKIwKnmEH-Zvi5%znfQlR`=`wAsS zGLq#amacpnvtLPy!M&U;guK`sjcl7bGS`l_%MdxEj!w39uC8+OZ6^1V%X7bFTACBM z+~uhayPEsyd(C@BIf(!#^{F*2+)?}Nmfd9bdO(}#?l-5LdS*l^!aQnSn7gZDXlk-^ z9lM^K?6!-i+Zs;7w&v*$o7Wk^TrSvY+Ebag=05rK93c*2th314lcbLCZ|}se+uA)t zFFRP}++1rb(r#h4)odshWVO5UTziZ@6Gm*ppkw1>(|d1C*7OXm;}zdVP{Sjfd9UEJ zGklrhuwvnwovx2GP15RRbV;-O6a~NCt_(|O-A+;r(BN;QjW!5D!k!^x@Q*o@S!rR& z>xiYd*Z(@2U!Umfx{i1B?3#)2oVL#JQt#e#&1Di4L^jQRj<>dTX>C>Zs+Qy= zAYrktmK)AkcyA}TLy5R8iwT)kb%!4IV<7F_Z0bTS?#f$s|z3xN){o9y>9CaIo2){ zd!ZB-pe5bAy*g>r(^`q_x%jzb72YG#H+SW8Ot9t~0FB~3oy&4uFI3Fx`^n|B0nYo> znH3)MxyCt#%7+(TRuz^&PFmRu;4}A@u|0Qyxbj*P*W4Ecw0lILz!HY-;*9`7gIE-h(4#CQ<_|1#JZGjiDYzvc`R9KtwDc$eZr zO>v@Sb)LR*_Fby<0Xb|)rdgRj@YozZeD?C%7@6VyOh4|}?$UMN(fw9e!OOGHTe|NG ztcs0u`;-x%4cKnKq0GW}pVs&#P!?+pL@@C=8?LO8$t-SlWkH6s0Nfh5E!WJd_dW#F zYJxsan!ES>yEXRGA-~BLxH!p$Z6_3P^2)m+dXfkX1aU8EWiEndIAZU8ZOuMdV|%T! 
ze~#n7i`n1eSS3+?7xV$8Qo44-aoVZ^M5y<0S+`T*$SJ*v6h*5Y&hR|5_us8i>RGkn zmdOi)kY^MMq7&G8$yU+>hF$z@=uFgd^_5HceK7`fqtPZ7+)w0>TlkIomy%52B<-4XrrNJ0hrtQOMRF*aZO7u z!oX`0WB4yhN#C^uLgXzA^b2Ld_(ECmd5vjvrf=PL>ZJ6xv#H}7{zN1!iuXxW(6RF* zbXm&oiR?I3NosJuRL3{2>!0KJ?`FaHLRs*yII?hS_pXvN^43vZ2%^Ta3y zT%~f|vy9|!2sN+$)D0N(8)MhV&^OHgzo`Woy__as?}%&P_ZY#1D$%odN%T#+9rTsf zhcQ&{3z>@0Qro9BdqOP$$h!_WaRSQQX&G6~9tX%Ft8XnPw8bNZO4!L!i^J(G1~Ob} zFZ2nuAaB#wE)`AUBQ%bkgh|r8q|oq073Z%(Tww*H?X7c9W=e}u+Mlcie_=kSX}gX7 z8Y27$GB8ekJ2NnYTbK1f|Ik+fm*tmQE09> zAx_Jwq?P*WbXOTTB^}~u61+(;NM4e)OuYCNaI%n!v(t*~wsWWM7TcopB#4-dEUWsu zi$p&c7vfx0Ias-~g7p~Xug}SxsRU@XlX0dVSr=?(@FD zNqH!zndu&^mxDb7TcKsZNGYN$8NnXc;#or!->$Pgf{`0%R?=|p>H%XEF@tKaq$80> zSPJtYII5pUQ#H~myRK59Fr>v37&%4QH=S;(Ws51mXd6m(Ar_ER+zCQ5BbyzM8g!k} zBhVHkzO>EfGcqBN+4E(HOd207kyKtR!L%MyM|%ZpY7= zy7qX|QhPiizH&&-HFK&0oN_xSRRwPGAm{m~=zxPeFOu+^r;C4TO1ukPumbEJw~}l^T~ko|DbHHT+G* zMPZ;JEuP}G-EhWDwUXCobOtS8b#hEQHL6`1bE}>waB_i!%oBK>QjkrxIr&0U6kH4n zS>Wn|j2<3O#bM$;LA?>myGyxG;N-%pXe%JzPBin&Ga-u28BgG3 zoVezpt`gTe0kJ7%XYO@J79cj1`I{k_BMLVt8Ua;g9>b~m5u6;)gi{-}Gnic3o4cF@ zTu|=4%}=kf{Q@Vg0vRQyF1pUeWwO1=SHQ`KrXATx6edCNc1h>tv#yFWZ&6na)4Jj7 z(&vaSsJAxTjz_F%J%E$X$SZD@SAk!SVg#}gcUdEpf;pwlddW}`l;w!KQv+F*V#`O0 z%~Vfd-*rC?&_#8cz`*U|dx!;^c8 zEp@c;dl8296)-Z5{GeF}CigNVnAkBPZP23a;T^m@QeCMNlinhu1Mx7qkn#5cj0}={ zM895|^^7C%#-7J_)3}39ZShKMC${P`zt$G;yu`cFCP8yg;N%oX&mu))2dRzYv0qb` zBO3Fpl+Q|Jh;gP+DdC}b7^LgQXsY{G=iXbA||8kquZ@G=fpa4;nL zB0EpuWa88Nk)dV7IoQEvgR)8o5B-8HSjcNh3-umw(1g^e2;D37SH|~tzqpX{u4NvzI8E&oP7-b9Ys+MJ8o2v zzB?XHJHAmLG3jLu`~jKCikM*96F8aGTo6>Kp{<9TsWcY`(gGI&a^16_F=*GJrn;b} zh8kJYPKw^ZpU=s8D;;g`rJXvmIs|rW6PgE#F9X@hQ}eOd4R#DY(GH6GH9)oPkJtxS zQ2z-_dOYMiJ%mKr@jWOK(lS2)k7F$^VlF9lg@neLwP_2<`~Xf)z;oJOvcS^F9VrzX z|75A~S&49jvS7HqN^4RtIG>$17-5N6{0cakO0zXP-N9JH%*oZP)Sp{JX8~UVYLy-c zk0wPWYgW318a|k`P!Hhb*H>YBpD@uL6qVbX?HjjSuW))f?haTu1$bW1;sjA&4Po4M ze8k=4q`FC1!Yw&&j&95usnxj@kl)xPmXL==n&#&S|YUeYi~cP)-9X$I>PI2mYCb~9Hm1pcnEtpqc3^}Y`7 zZl{;mvpl`JG48bHmu?4#Cdeimq2D8*6We19zbE2M_Udg7fUI)l 
z3^gabZvJ>qKIdvUs7$J?u7L`s@#z7MxK-hJGQJ?(=;x~si2R z4rOLYH0l~(s~WRtMqGH0Z^a&LNGo(C$5`fuiJC8$w&5!4Tb{tk$oocn8v0^)9i`Mp zGoTG^$U4RmEX7R~M;XE@cB)o1qA~iJ5!bJPk(-_iWERqxHSN1yr1gdgQ0~BGrPADj zqv?tHqQ+T%9?#Ma*QjD2uOq#jBLl^nk&A*U$1lCR#W zyT!w6loSrg4Qj5{&PaL269Zc%Zs|OLlT-Ev8ENTscL=20BJ6Vu(zr+7h9XuJg;SLH z(4^#bW?nVE_;~8qz{xXyqs5TAyvvY?OoO+&$JXRDD+;P$s8WcEZU>Mc9WT$mWWGAy zzdk2ZR0sOd$itF0q?PVLOblI#URRJ3SN+EC1)(pvOxw?HS-lx3tdIBxU#gkqZ=6yG zKgtLw*SL8eQon$=(xm06G_-|1!{G#G;a0}NL(k{r+^PdLR6;!Y(M*zFM-cRf z7A`TdP1B3mDybIhd2dT+WctJ>aB`Z$0tTIp)9qaf?b$mHi@)}pkg%fhTx8raX?u~F zNGPG{KVG-jM{qL9cBm$*IW3EO#m-7AItr?aXV!yuq344Hq0MFalpv$9rV$mmoab|L zJkL_s*h~N4+?`pDCASVlx1@Pk4kc0hU&1?))gS&eVBn!uFl=}At(&QV3Pwm2xvZ9w z0AEiDYK4(A^gwdsMULLhb+?;QMNF+*(0HdOqAtQ!{_4?0PS#A zE4pyO9ph{)?bvs}H>c2HBX3~xZSD$B)Lmj2vhAu1@<N^ z$iMK#_#5KjmhMOA429BWT;0*KZ;&YOgQ^1|V}wpcD5b>zf_1fVQbST`Y@f)I5mW9B zO<(4zD^x}pc~3J2el{}fil|{|6t65wafY;StEdFE;*H(p6iZ(+q^(l{#42y9Y4NT$ zrO$0DuUILqh*>CqOR9Y(Ez{ky|AM`Xa4{~W{!*(e;>OwrfT|nAX3SOTt}{S8v=T8v ztN8Pn9l78b#S54$OU^bDXoW9PMg&Cs+Vs6+o2RUe5z1Q5TCStO0>F)mty-W>=><&Y zDY~U-6|Z?mU3NM@q%a8ah5tc$BpV6xxkr5$9f6g_6z?lhh;_<`ZU!V>ff zCyN@YdFzair_OCVpd!)>m>gudK5FjTXFTDXj|5}NG&c|Q0j{({4Hr}zQ_rh)Mo_UP zgKqqS6TVPwz$)fcUNN)06tSw)OXj++mRk+VJlQMHqpk4f>r6{n%w@oCUcls}6r22w zZi7e6;+7YNFHn$AA1`poc5UYVm72IFlnklDy(I5e6sCMT z@9fo-L{2x^cJJ?Fas+#aSRw)k!%N#)=(Qi7(OKUaLe+gO>;{sRbN{ZQVmb5bqCtCt zC)1t?UEm=M{G0=uvyRWTcvc0C&qmeX+yYuimtvB3*+-QK?AY}SXM9*G>{pQ^VPnZL zd(L$YSYXo)Vns~Z95m^IN2%7K44l<(+e_y2J|auoE+yw-;4pzX`$62V7TXFJ`k+wl z;4?Z7TK+3zvAHmvuKZi)g%dtsj6VdJIbsHr+dXY7D+OJONlx)Lgn4Rs1S zqW|(%=Gk=;V$DznJJBCsOsypbb^qKV_}kH#76mJ})|%Tz>C7I&fG_L_8NPOU(+%>3 zZ_)M4qbA0H)Gf?Vu@Qa&Lr&9{!3I0_>P;09ZjQ=|W&x=KW!%aYhZa<(i}>9c`p%GC z`L$^M19&Wm)mhFo4Rf#Gx7PZBpn(^5*iwweLI%o$<8YU?8gj#^V3hd)440GYS+zz<^ObwR{orh3;r}BtI}S^% zlc&JAQo9PqRycA!8nX@UNX2e`;&cxiL&Hm+^4A-CP-tIgcsJ&Hx8I{ArMD(OQ&IN^ zONL(1-(&jp2~3V3+iRxia)wco{AN}oe%1#%FsmNaqJbgT$$wT3<=$-yU`5=%kI9UH zsvIb8KDNUQIs?YLQ9WNE;B_VzSFA!OUUapb(YSR1PiE8Z3z)2w$cb4fJcvQ4Gna@3 
zwXEVmSxM3QU7rXt#8K6eK{a&FN2YO7}P> z2j4W-8y4$TrB)O-(>xx9OBuZzvuXn;1l3*PD@Ca=8eTV5wDKEyE+F(7A2V9hX*K;~ z?B~jUPy-PLy-#-B?aXtlf*O6Dxf`QI)9WL-hn6l)$9QKbOBq*HHH7JmyWv}U&!sk= zqO4h}p|}+G=oK&ZlhFSc>|KPFPEU!BL#?9p?OEvF2<=KXlUvx8u@FO6&d!YmrO}cK zC7KH7y@1K_;!sXe4&;qYFUqUCRRe6Nf+(0(X%0+T@syubk#M2K6M3T7mQHyBlcVx? zAjo0-xE?pRiFRc)ugp}tG|JYQRR77Ejl<+J2Fm_`8qU8UGFjCJg6CE5slAqoy{+5m z_A%XcTG+q>rtr=Q@UB4_4nxgQ$*6 zez2LmbHxC(W3H6Z7RM8N7l~&njbS|snu)6wqZiF)^pRr_i`*6KwV^cVGs&p^j#MU; zYW;>x_PPghGacjeq4Ve2WM|3Ws`KlqQ0L5w>KD`yPszOjb zsZ#e7n7np0<>17SR0ne_Uh0%mYxM&~uDbHokPGubrR&w+E3_#nfnz%LSKJn%Lv0Ne z$H60I_`;V80lKU@9CAVuNvcVKp{Z69s$!pJV%MR*v7elqs?DvS@|S+CDe8F6aeRgz zi^uLNu+odwgm1RJVrnY(t_qZV^9f9@46@-zkJ|x_l^__3w3j((RedQY!KzOcKwk)f zv)9_H9e&4{Uj>!NF&Rj;+Inlutow8(uGMr-x_QN8Aww|%t8zOgt%&3cj&01v?^4|B z6PRqY!b;XHm(k`(cP>{c*n)Az9`Z`wXb{XYmywGX8EmMoT%8`CxWUKbVBM8{9phsG zX7IVe`1i`C(*A+KJ25652jpc+bl#f7os=U%5zOynGLRGpGoz&TuDhjlwWu+3V&)+& z34&C0w_u$woC+m@r52tE?R)`~1LK)$W=NTr2I~(y=jztVaoGr*gZWQq1;scHno%9E zoYqy=kMu;`MW|yhA2zLq)YmanflNHA38Y)<#$pgPt+F|bE>-1E zUe(NWLq6kkupFzuVLw^ssPmxgzEgxUs(P90$Yd&Y8d@mdd#cBZK_LJ!N1Y$7Wf$V{ zLTvI4&3FB1;{bhFo;07SUpXeamba%}e0{k)Quy*_Blr9SR3WI`pU8Bf)zMMvx)q|J z)S|Mrt(D=Ub>C29xK+3+LWc}kWioj|K?w(AKYGUdn0zo8L)0A@;#X2q7rhS6eZNMtLS3tVMUBb3G4Hr6UTLOS<+oSo?S8D zVMW}Eao)W@v3C)^!d-PnrA$Kd`7K{OoZ)Ya&Ma*qd9%t?C*3NiuMQKjj{Ty3&ik00 zYD2p-FlPtl%TkJ-zL}SNaYLv+EV4!9&(W>6tr}USGL+|p+hnZ|U@}BEq>~2T#MV&+ z(NEwMh}=Rz%+}4L1GJxRo}>#iX|TjIVC}x1$aH~4AlZSNyAReS>f518yJu+Px+Ng= zc!f>G?k$$1LRlAlGTpuVeE-IX_AZnjO}ptgpWafv*7??|Q^(O`GQKT26MvX5zp!I| z>kh^@GyZR6=^Q-Wn7VS)0eViL>Dzf^Upbe4;_g-zZqpQoft}-)Pd8ENu<9FUd^5v_ zL-d6qvtU&pu{UK$a|Vx7Pngc9NKMn*Vg-O(Mz6#78PG0^xJ!H&}IY)#I^ZgI0!Acbp!U)&JKK^z*! 
z{XQlKlg9;ob7oaDn7bE6CItM^;vWaG z_i%K9A?x^(E?BIHQ@*OYSikfDi1~)ITDeH^etze#$7F4QUlq1mr&JwK&W;xiQh!&0 z*{NAU#@eysLlNi@5A4D7jZ0PBegP&ogltMEUhPDcz2u7#@JIvPCPtoxi}PsK9R zie(#fK=|+e$bK^ON1p)R{#A^b6%l_*H@IZ(uU& z6%_6ar3$b!p&^%QX)2B}4+imuZo|?lH0&$Gj>%0}W5AH=FW9?C8!@k{)WezOa!35B z$EU28)}84exogv6S=~iGNmhYQ9y(O0Z`>K#9simn?FQy>O&90!!V?QBDRxy6g)?TR zWiKRB^S-pRj)}K%`xBU)_L&-~TXl@daKjWGEA7_HH49iQ3wo?Fq0-uWE*+``vAC#& z`{itT9FtSlp{=6bQ0s*$M$Vs^ppr^)Kc_)3Q?jiz{w8Fq&}e*DR26I5_yi_T@WvEF z(@pmX_E%c5XrtmBtd4P0PMR2U)gwCVH8T1G_Y_KMZ`7+EP~KhBpfv~Lv=o7Wg)K&6 zO?;59*ETm$8h8e{tet}|cAc5D{0q+bzC&tkW41c!Hkf?dz_3b&JNI19vW{gpNM_DT zd4ecDL;0sJ>%2b^cM(;2Ahwx-*bK{vP~Xi!gtRk`CSlr$+|nYfan?ioMw%5h#4q3T z-KY5LF*#zVu%ZFdz!X)Etetr?6Wa5c3ODzui!(JU|5=(W(V_gtfK%_P$$l*5tqbPabynTy*ee#~ zP^$JuZPcYpBLi%T-L-!%BOW_VMcMmO@exzT)u^DjSad-ZlI3xzj#<4=+~eaZHOaIR zqV*iL0(}61u%OLVM)CUH0_43bqGf z@o&d-q=L@&6WR}^xo1kvMsP<%=`_tHpKJm0%xc3Lv@VLSwR&0Sei@wpdOSxA$Jo1L zaVE;SOvTF&)`w~3)zU^w+^h=c!BkIms-$6S%@PvCiMU2$Q#mlhgUY3TZJz;OB- z6Z2RS^XbPKC6KQv`#(+8>G;+r$-l(&@25YXwtsy3Gd@))@#oX}k59{A?+dOfnN;N* z9I4Koe3g9@?T!mNGnZq^sBe@QR;NTrs#^bGcWjpsM^nWVA`eYwku9A3Yrzkzcb7HHsx-Py0qSE z*L9rY#&>L48~5MF;Qdlx5UvO1s$jl~uJL;^B>=vBl_d-diVclkR8sMzY+i!Y(AMj+ z+j`<$GR1a*%D$HFxhQnHGE6GRHA=q4)3GX}{${h_=S!${_wqwtLUJEnT1;;9E%;ZmM|s|AQ}3HlzNh-~V+T@Ko&QzpsP( zGk)QJ?7hj3B}bMe_%FVWh0H@P6Wv{1KrRvu)T&WP$QVI1Nsl5KRgp-m_a$O4&j3$-{W^}bMSC4m>7IrfkjQ& z;$iM>>H5}MJ19bK5%f655xGnv!^-y=&`p&-KAD9-tW{pX*lAnX|4n9xZ>jWGz-z%M zNYAbFA$3aIj;R2uMo8C2;eQ!KO8WjQPvCW8MOe*TJRl+Dt`mw91j1;1Kq4lV&on^R z1h|fX3NI4D3JmAd(G!K8PINA)>gQGGF0L9}WxJMn71S@5oaa>)=hX`MC+N=esxHe_ z@FB3QHm{l!K{(BZpUafyQ@*jwm=N=76Z2{?w4J#*7JFEYtHY3*%^X3ofb{tdTGD{w zmX+1zsqN|*%jM;&13R&7CCjURnGIecK*pCj0@=n@BZ_1@o_Q4{|Khy@tgo`<`1oAd z`ILy{rFprkmf0}q4XR3=SIq)vuqXzhFIPbi{3OE*IpwI8YsS0XMqY7MrpQoi>@XGE z^%F>%66a8?*Ojy(@+BUEjJ31b}ugaTp)M*nVdkqj>!&>5_wK z?obF61<7`BO+nv`;sr4=$@1xid!s!_~ zW;f=QH#iOZ1aXcl=h>xi!je&t+8MwwGk*9fxj15E?o#zDWipm^lHV+7k1q;u?l9`u zEjvZJA*fBUj+h(q7gg?ey$>o6=s+4`HuM+cuqQYKtRO?7n+A!vVcehy<~Bw6dtqGA 
zs{$cdblWeq9Xbh{dq7UJGOu)I%oea3e_1Ln^Do9}SrB!I2!f?U9WVh>lfriiB5omC zVr9@0i|ZbA$2B$LYAuL-k_g>@xhl5%g{2|#t5B^9VEiKA|mfwj9F7045vxu!u2 z{hFzt zlLt-yXg5AIxRh9E`)DS{_002k{KT=)y5~QB9>e;1h|fq|K|XW$kfR-o&pwPt?Lj;O zws<#ATqGXh3`XaHLDCz~JMAlYgn6+yG5_r+<6trnN6z^K71ENl7`o>ZbjAd6hssEg zsgX`#t)wQ7zq^X{8pkRs8yy+c$KcsqN57twuI7^c|_%8_URFB+oeD4u)fAW6Rs`>*7zJ9-3&uN%@h zJ%dAsc&{OA$arROvM@|lE6i4A(zDzplcjC?K*;qN*}Z6Jr%a@@wT$+sE$U6zUtb<-5BWX6v_KQcQ5WN zwwshC>q9fu0!)u34RfdG&7009zslz*wKg?@D-RuHv9tLO>}VYrVadsrC_Yu355=Ehz;tIVmzQ4=9`pg!`v1`?m;Sd%`N{)D7V zE$=ROu7f|DM`XfDi|=!1+7ZQo2JKG6c2PxHY`=?zYV3+7r+#wHFVhiHirIL??(*HY zpb_?DqZ!;`taE5C{HcfCtCd}o+$>~4E*6~{-8ZrEvY6Gijkk$@vhmht9Egby>gqi7{<_?RiC)w%e%(hB)OlU=6@Fa)J;0;Z9bhF#odiuZh!X z(Z`Oooz#G%kDs6#oz>LldF!MF%(Uigbeq3TpSKk&ZoTh|wl2GPJ)S%>m|tCd&uhYa z96<>?JO`F7kY=DIE!0t+JtAQQsjLAgk{(t+e-6{(z)S8Ct|ac<>Z!SWPPX}+Y?sfu zXY|i-=5R(EW7%f!VVl?S>qR@Nx?F+pa|M*T&nqZT=mM8K`neV0oq9ST6Rag zpXO{qbB?oEy{>1Ut6JAN{QaVhE1V}i>~i`c84<8k6NXJaN5iJYN-73d?Qte&-oAv% zF)Vcn0+i4oPhoz%k_KnQ=VM-GSl#HY$Bea&`hw<(&BsY;a~s#!%W>&DI*dzBiNEIf zgx%pYX}5qr`ga#k;QW8XO?3)q1RA=_H;UoGSV!evzvsjVI?CO)ljEefIgja&!JB&F zwGoJz_PwPoQw#};>l}A#%c{g_SMl`cUU5ZJUrxE-nG&Vj8o!m&&L(U9KR2k1vd0u z_EN`kv6nhodf7G%aRYmxk&KnZ6ZY5zd&0mTV<&co6?@&Xa>mu2&W^7Bbaq^fkRcgY ztMR*{JUQTN_Hm2Y&c#KP#f`IOguhsok>YgLh9rxtZN;j?XWJ<_oah;dTNnXBI}x9q z)Y%WZJ!UlNkCH<^qvUJSM+WAqLBB5pa~C;N9eKvby;I%@2L8YmU5$Q<4P{v8`@YnS z4Snis2Ct@CY-P6{F=B(?Jrg}<^1e_MsXk&RW|;~9y?GEfW};;AnFg>PXu{AY6dPl2 zSrw>|OuNAAvTwTB0zI&3+CrEN4RMPlf&BK zI7+Z$#gQuXXSY*TS+&SVW{E}?1#jjog^t0_9xSar%JS5S%& zdjNHMDSrB1Ff+x&8LAAjCG{4ViNE=sLcAmqT7pGq80axL%0rO_4d-6vB$HMQG4O@l zJ*t{xr~po*5RA=YUQ~n{tP9;SRR@(byIejIN)R_Q)Do`d7PWLC&dc8h3ar^xR)%je zdYzePi~6eZZ2cxALcC?`FTdmm989fWoJrTQv(@D2zRAjJrZksw2vAylEC4r46Fc_- zGb!;f6FG?ljt(bblo5@YOv;YNr_6>AkjkBJl9BGZv5SW8P%u7%Se5j~-BCVN{%qB> z4?O(f##>-{s4Z(?fO7c)>lY$h*b>;qqeOpKYlmoB|HlUf{xHgt5g7=}?O=2ntv zP$l~)N?*DI+D`(`WHleXD3Hdbszs+!PPKx#veIFdz7`z0a6j2+BpG1|NjP?n1=Z{# zYx7DL3=15w8!7px>2ysxcCm+GtQw8yvPBJI>OMNTDS8LyNvnUOEyS8i1+OQT^9S3i 
z(aZcThj#b!3AaL-VExvI)^4NOjqnt@%g7<7ZL%Y#xiSX zNs(Pv$k{<*nslkV*wH(>P*9D8^&hzBknpsBy$|1DnVqk?;a>~q)(LuvA#zceMz?|q zfA4}Gc?TCwwnz%je#>1Zb2=tvy7D2Gc91k~DlIQ@ESwJMA-N|qjBkQUZCaurfM>#! zeKrw!39O>RwxJ&74ar`h(-JF#P<>;iWbct#E392%yGH+ ziG~T=TMptk=y!tI))=Xu{BXxl^q-$uPS7lDMC*d4P?t4kiq`aTsZbG(*WV zgf7Egf~P*pba|O+HB)!kP34_7F8PBweJYVeuV$ZluU%^{F$2Z$Nh#h)qag-hel}pOu4pa(X-Kf6ep{ISB1c zI&*V)x$0)GvADbAU7@QLF6OZNy7p20qCrZozTkH!puPM<>H~#1?|yquQpm`UTCDl% zEYTCp$e+|5G9fPCaPY7h6AaKx@_q>uV<;d6t8^7V6L0SoRcSUlJ_VYY4fqe*Y6|8Regn?e~F!XCUQqYK#(ZqXEv zXMZ@)o%p=`vf0GqmT=|a1LU1dr>X&!M2>GRB%iK^2dAQzo}_(tbTkH( z?wy&ESUJFFqr(Q%e z|5+lh^jyXc?MrrX2nrO?vWi#=RHFgAT2)RJVlxsdHLmXtuZ8|J`80y9-$&CM5<-eE z7-Q}BVc{qwo2Jog;D$oT&oC0`2<^gg+2o=EI&E?-@cn!EchR3yc~0b#^OnSUhctP6 z5FYjv5}Al*oR|66VFdk7LldSbN3`4L@P91CU`!5&OsqkVCnsY1+xg!LIY5|-W@e9r zNaT6@2p8&odk{l^N*@v$AT9C>5)|bXZ&rZT~!%axz0*Avgxxj9B?*B2>2N|MWgaFnc_rFf~ zZhqoFr=`cG{h!w(K_K$@GRGgb+D`E*W9#|^30=nnMSH=`ue&!WRt|YTD7MPu-^0I4 zXBoLX^8UG*$WN*u=l^}_0&S&XOg|DCV2SXM7VHZCd-(5i`2Tb)RnS#D%QGs*^h(=` zN|moMFH%NzEQHMWKrRp@N%J<>%;H8-tv6pvf5KgSs=YEixfM(nE%v>{W+ijfZ zB)->+lS2M(6r8yEAKkn-XM2l_JPWPusPPhGCM*Uuxdiv+?zQl+iO#+kHuYf;{ zw50^)HHvv|ojpCE+I0bK9m!lNiPSks8l3WZb+W;gUAr1Sok!}Y+l`y@!xc{nr(4oh zfsiX_AGbCf6hu-(P2y0yOX*9)^@MG+7T~_5lkm=^gG)31wPuwPld%8QSZ>omChC`# z@tb>&6nz8KM|)M0dp^+gg0)Z*8Y!>3tZHd0M%EkD#rwAzfO>Jfk^QPyH3qT}h)Co9 zzVy&t9$#aBa)RQkEgR%ocL{iM3!a z`*)UVz2juFn>DMyB@xJ~%dWCkODD^sr4vneqss2L>K;5E@1wQo(a|aV?kHCIQXCd0 zm_igbx}F}3yB=UvB{E%MpFjF;pMBvu;s3PCd))EyNq2o~hxds6$S;w00J@WvciIo^ zP~}wNIO>quAVym;o*36R%vj2@2}vbIPySWnq+r5FI-+ z-Sf5ac$M#^%nw`>tv1zw?BEPYL1SiGtqI1pvhW}Ml-1$y#5Ak zuUK_^=t6Hv{HsMv*IpPf6hB?G6K7Hd)kYfg+&A1+ED2P}bP;v|Bh6O6OVgERC!1Ru zb2KA-+cb0Iy8XE|*VXn13VZ(%;Q-AUSdkiYzGXXl8>4`0?BkCJCigEt8=|w2Y6_H4 z8sRFSB)Zm=L*A}@YDQ_7(!v2}jh<;YcfiD}y;3}zkn~9Sm(xjqqxl{QvOyX{DC7k*gq)TtJ*V@H-sP4*!g zpw(1D@k&8_Vr@6YbjpAg0afUQ3c@(hk_UU2zi&?~cgA$>RDWz;b?2gVK8eD?H%>EO z#sAVB>JgmltYUNhVaco~UP_*sLrM5%vVCs!`%Te5hLerN?Dwf|Z1-p&tCiitPdHm9 
zHdfMz>Q56EadWC{U)!jHlO&(m#Hk5_dSW_IN1=4@AaN2%@oRZ2pQZ_H=bWT#d^MV*56-9 zrT^s4@s>~8_m~lHFOuGM8UeibpuwCC%_k@qdX8wF<*Kb3&jl{ZNiMz9ZRTYZ>yVw| z4n2aB-#wfWvb@=A!_~7D*UZJSQ|^XhuUJONQext>kANwr|7(T4x~8(ACFKEdtoYxkv@jK;br#{xJm`68mk&&x6P z=6=b2F!!Ki=dErtF=%1;aABx-WPf?IB?0J`9H5`V+}(D0`Ri>eVmAtt2o!zuU@@uO ztU7+`8p12c3)Lr)T0d<%%&P+(&eW)P%i2{PG@JE8?(N79Hv^^vxs+&aiVjuDle~5H zJG2)vU}Hi|y7so?hsdH?TsI5>wBi+GOZ=*zvm9LR_0UeLBGTFZ*q4kc*AbsO7Uqw} ztPK}O&VAVVnZ}$6!Y%ADc3Z8Ixq(kG!_Cqxx8 zQfaQj6>CH6$bhv>=y+W3^!_!ttrM!RL-)s4Q`a;nT#R@8aLec@k5^wcv|cu^kyf-Q zK2+THwj!X{o5~Z!zD8|IoPuGfdF1ajdcHI}p6r#{4I|khUb{fwrAnkUb6IdP!IOqJ z&$t(pq}LkSWa1CW1k=}J=kM0}2#&5620=ZImB-6cGl%^8(wv;4xlRMXZ^v7P3eu$^ z4Vs!i=-#BWW|c(x<7G+zREnWXz*CQJ@M?F|v>3~%&A2ugtjK4c*WH%0B@5(zNr9|j zS`pWTaH&I%HObIj_~d86HglCa!RK2Ko|#49S#12}^$0rIiPg#*RDq7;0)`PSBgh6-2L`GpuWAQ? ze@;A74rniQW(scDXKwQ2aOX2J zv^94qYRIFU3R2#J-iYDF{dUrbmw^5>wm!t^)7dcR<)F*{%^;)Q9Cksz4A24%zz<)I zZ%%D-7#)@;agw==7lfk{%^Y!2#yC#U_wEsWU0UEQljRV^ICNaLeU3Q@*HZ`o9hzLD zecPf|tX;&!?iEE;wR2P;2(D`jkB2Oci) z@-y>7y_OVWP{8BcWj$vdp3yCfaVdJVhnuoB96`}7Y+TdXtq<_bu(;?6{>@b0VhKeo z$!su4BS$I2J^pP)TYH!1o;U>}?EEIb{DH=Xm(13R&ywQPLg9otRlgmBWym=kFuUFek#nW4yz_wYVlF_}pnfMz%0GyQbi-F#UeVZ(;e^_)Sm({(aUv;?z} zas`#JnD?8Qg!hu7`-uT`U$57NGL@}9ZLiEv<&V!z5n8yKR^R5_GbmWfbDcHs_3!1d zdD8H}U`Hre;nKd9X5h%i$uI<348n$L7?V3UW}2^$^X$GuKCu-odo27- zcMAJTCc@$yBUAXzb;(L5l_%$OP&yi51$46-@gbZpT(o5Ei=u4tJ0I|upD4QNgK+sc z5A?y97Bv3hQCilB!(m8vVV9ZPn53+S#PC|I#kLJy`XnLNR@%VG6O~AYF;@NKrVf;+M4p0 zF4cS($*_(1Y>vJ_%nE6NEnKhxuJlk&Z_eElxmauVz3j`uJ1Vhw5vUpA7z4H;32t*g zwfL6kz06XPqse7mlXztnw&Co%-6^}bTA~6cQ+#mtRj-s8{n+ea1f_`)=Yk` zM1WNx8X7ueFTYBAST)}Cbkd140^w5pWW9oPBm!pkDlwvlVRM`IX*WfOBA? 
zux%y1CGI6JvYd_a-gZnUXVKB=7y>csl4Cp_*-OiBRg7-UPk*DUB2wq6D~@h*Y9B+~ z&+pmH*gqL}oZLmmnPwG~B5g{~O?x=??3DPN@0VJ!O`kLBCm|wo${nwtFBAOc}qtR zzumgP6lHx6Zo8?wwk!d*yOirkgN7-uYi-8YP zY}!KFZF;%NaQ-b1xP~XcYv{bm{UCY)l)AQg$7yl)dKY6A^5w#MH#nCTsZL*ebBi-v zJoz2_-y_M^wXy+OW}9(obAiq;>R4lVdd)J&>p|t@uYcNcbSUpPHfFGkx!JIKH8_xP zs|I8X{6Q?(%L4<@4(7mip=yZ24sxm3W=Ds>wyq12N z;j$=7lsRpJipyIp9xppMTros=#lB=n3VN!7f4-Lx*%Ypdp?>gZQ1iZJe}ENchQ^57 zN!+j%j%l;t!?aB!5QqdkjBHo`&7Tb|CRjxILRR`yDd9VF7q?cd?2c`2dVeipdN1$m z2?h5II$IHU``Z!0nJ1Q`imNL)-aMYz+`|d>t*&hfr4TYc+KOa%Ezr#&^p&}fM|4GK z4f`Tt;TRrAnm&0kbRvX4KHufCO5%q1n^#<$zw4i;wGU;*7-5?cMH`JDpuq;RnQezi z3Y}5y+(+}__jvvxu_zWf78lqKa>GcD0~u@6?HBZr5TydQMX$Qg{%mVCr{3DNVCqf; zj=Zus>WO^UvB_7lZ^yFlUQj{L5%C;L*c8a=RzkSG?jYhM47Feh-B=y_B|h>oSND?%D|ti&pMRk2 zwd>Jk&M@!3-YxN!*>)Ui?dl($-8&l1WYcsiCI#983SB-KiJ!$TFqMynz&%Vv%=Jy#vaK2OypGrMRFTaMi@Cd z*4!nNp?MIND;P5>40QWGY=x2#8|h`OKxW+oR!C+@d_*dx$p3j1n# zqU6k6YI$I70m~cYQiP@Nl?4ZaQ}7`Ge+pOz-I0qL0VmTnU~D z^*!SQ5moxjp9yphwUYPG%ir&DQx>~m;QE?28VF|VkNI{@k*e7N47Z0C4043g@q;#u zYHkanr+?bWl-8=hA{JT7V2bsQ(KD#ElUJT;;_8T=L%ILs!-5->$ zHDc0Q+g!EDLfeJ3AL7U>n8{gl^pFod_NF@7L)<@J8eV_cE)W4*V_&4Oo_h*j&Vc*K z2woBY;nYdV+ieCI^2h2D3sZI^uE7S>853fciiMM2{Uw% zkprhB@bp0|eYEpF!?{HWkIB=xwhtRhFi*1&I6bkgN%1-*aVl6Is^Q2Npd3cjPehKGjl6 zSH&p|ET4r}#m(wI3I9eLU0!P<3@^f`d$V!SbLu)k*#UgT)S<|!#l3frbD1j-1hzZu zG?{}}j8LC;sJ1oEh7_ugP>_x2_0Jl6j#43gpTv;l*5+G4aonw&VTyg(c7Q^q@D! 
zhm)N+m?sKvC8oH!kuRrs!`%SfgNrY;?-3bOE1Jqz?P+V%v+V%`9_n*U11ju3PyDN) zWn7Kn5^CczE$%UsaNI@}f^?kcI*nCk1u;rD-!*%^^nqa*!>5}g*d^cDdK$~K!nws9 z96hD-R8;MDwMJI$NeJp}IwIk^>Wps79QD!Lk%kpEdSQ;`7sb-49{;9h*U%V~#0ru* znW7L5rc!vxm{hQ0VW=y*(HAcce`adcQZ$g@9-)10ML#aAHe;hD ze+20Qe_cB`S4T5kk6>N%$jlYqH771}>y__12Wvles;%Co_?;N#Tc36`Mm+c9^$D2} zd#6#$S)Dw91ucF^L7h)cP{(EZ&Zb-l=H%IynGeWL^>lcd*e(d3F9?}0IJ~l`w5W$O zB0UGsLlBoK^M5N`ltQKYeVl#m0I|`+dt=4`>a0NZYY29N;&*LX#iz@UjePS@^mkM8 z7NZHp;)h1QSJyhToLfYKFdd@&l?^->Q@>7*$1pSJqj3ktq>3h)DAhNTZTC8PA1LYp zf-dSb!&-A)6=n(fL@6=Xz=BmfEw*P<92|bZ$Aq#U`iKVm*C*f3`#zEy8LgL5^3u}p z%6HX4HFE#}_02OMZ>5-K2=aJhy$b&R=_B`YT3-V?YsU2}s-1QdsM9$Tt)Q#16?qYg z1YL0j4R^b!w}@C-OY`cmpmK!5h$$d1kV9K#&f~Hi1UBm#uh4^|$y)F`;FxxBHU~l7 z2smpoIT+T&luhH&k4n{J9 zBGH?|(VNc4{=3D1x|*A8Mb_$7kavEjO)*XANrP(2;UNXIg)lp66d$d8=Tlw6MS51V zUCPqMV(*9Bckke6_mb z1Cj6!3`ax-H`+%S_p%pbbB3mGc|M9*soVq-D52DLlbe$91Pgd8`%l& zWJ$Q#6H7sqmHTiA?w;I``UwNBmsy{<5~hKkbgWDtcD&G^&*m|+9A%?OT67X85ht|u zh$i<;QqLje?Z>s7D=_;fyQ7gCEl)G8u8WnWbxz9|DW9z6QuBgQleuha-*0%nzc;(z z>*e0=oj>qK6u+)*Tb8|$h?$FC9XD$4UH^^6f4mV9X#=14F3`orY@Al*viIYz?C~ka z0Fx>Y%A}BTPj4(6IK1rp_A$Ff^~$1E0xv_FvHJ1f`L@bQJmmasY50!qF-|4$y$e&D zkIc{7RbL;!gC~tW(o-u)XO2yQNF#YnM)~NKM`Rxa);0q%DZWnL@{)uHP5gy&;)Se?6x>c!pN%cp^sp`RjzFwiof+jFxH@#xACs75)7&;O}4(w+)UiD|&v54>) zi32=386vWKew}(I_U-|RzRSS#G<;jipHo4f;|}Ry zbmU0=FOmA!Hut92TnFP%%iU)0u%j+F>;=O1>A5QIb~n}t!T%6u`|xm>_Kc-M*WK%U zBjAG+NO9<)xr&3@^3U?NyH8>V&G)#hYQRH5vx+cJej~O&(q%>F=Hr(0gR8s&0(z)LG*q|FhB}8JJZq zX7E=)+xP(HGu0lh`rCmmj}xUpU?uXz`QQHWR4T6-z1#ag?!~IOCm!g}6RhbcL-9hpOAgeK%iw*e-t=22%1IH+>1Ot4JTl|APdf08Lz zr#DaW0QY9dhPNEpDa(v}b%u0wc;!aoq8xB3s5uZ8nElqniKZ0DTU0M+<|w?o1V7@c*i9n! 
zFSzo0j~Fc==xs~aXg{yt_6Jh6iTJ-U zm`OK{?rx-H?AX8SEX}z&3&y6XBCo|Pew6n{lCa`B_^UEH+(ATbTk(yI>~vvqRT}l$ zwbY}Rf6(+jibROP@P?GOVweaYB<{!UGtT1|Ki(M@Kt|x$`gU@&}qsi3M7(+)n*H@$Xo13c_4daqDF`Saw4?>Y~`8}Hn_gmn1S zoA2!?)b2qPX{dxK2LGcvIpHf5HmYOdKAn+tw6qPLVU~^OCtDqMso4eCnOc|L9 zI5$VB*|Sf~Sx4~w%bN$@9bn1%_3ei%HO!o0m&)2{5V;frEO&%slc~dKxntCfB8xQG zNg5u#K^RGD*|QQ_fP;Y{`CKB2Eb(VeN%-RiwIFAZhsujKk60er0?<(T(_ooP>$rFK z;*P!P6|G_*kRN#9QctN}3!no4Q^7I@EeoIX@$;_}1RC?5ljNOb4afsgz;0(I#Qnqt zT<7iqT_nfiz215gl>Rs&08Pwu?L{usk(w9H}U%l+0XPV0rsaFE`=KxM!%qU;|*A>UJ0f znV-J<2fhTXfC+^%x}R7j|1wQL<@9SzjjNW-)7yI${xRJ$RM-7!-MunYI@QsSZ?#o# z5vZ>?;d3P6ds{f$5ZE~7{;AV;>)H!+AsZrAgq9B_g|GP^icM?D+zX?Rg=aa;MqSP49h;JJ}$yt{z*4< z@xG$H(YMW-KDofqOSzn2a}|O|&YF4Wa34t4m{l@5nBosGU$8DB8hES`nk3bhIIFi_ z#y)&y(i)njCMij+mZ+wktJswzWWN)2iWIsWCOb5$d^Vn~Y z4+1YL=Wg&O|D=-vwA?zM!ZzCTLqH1k{H|fpn4k#K6ZkJHiB+rWeT2s_ABbCL?67Za zsVqS5PW5PND)L=2KPI~7p!(&~3FhBj?JIBpI62qFCr>JdDzgLo3s?S*-o=Gbg16nt zJ8IFtW{29g6ldH{TAedp+-jaA^q%jroh588fFCJwrm)d58i)JUgyfL zAg*{Z-@S(F(0H|X>gi1UJdgt1b+y5sAHWs7tslKT|1az`FKFvO+3D7(fUgpCCWQnl z+lg1Y2Adz;zwz_WalbixdOCQ*3f_11aj*CD9>cXVl}o)6;BOazRxwV24ks>^iFIO9 z_&Y3k4;dvapD{+jk%;d@UXRx8Mhe$@P5}%$F$;&6*H&BpZ5O&O6-&cO<-R`{r+=KC zJhUHn78t!9w1V;b5nO$V?_9(`c{uX+^P%?ddiikOs>1}ITuybD)w&Ja$EG+J2jP9J zYwNabj+txPc&W}Rkgto_J6=IO&htB@qMZVj`zF|WEx030n-9rifZqYWFt4V89%z$Oz5lAhc_)hLSE?l*YWDv?7 zQwP0FA^%QR5cYrMp-bB^xTKeXel!)gOLz2PIAIP;kn6I{ho-{}?Q$h)ZatET1(JqQ zI#g1Mh{-Iy+ubGEWcZO1Qjp2wxl?gfb*G-v2ZtEyx=bf`|W>5)nCh*W94}M$< zk<(Ld`Pr2y44BLs(|EMS!J=Xob8U_AO;!gkV`iA;+9+CEsQ_1*>VKfs4YHVt64~2_ z5N#~Jf-Ohivs=t3Nk-8naOt;Hu5j zEV9-<@)O_=#ok!7M0FDU#-#JBKH{5P+_}8?taABhu}qdA;y(^PB_ps${x44=+xJB! 
zI&~RSkWMbXXJ%h#dN6rJj|dP$hcbzMM=johVUZZnti2Chu0fr3QC!i!w-iJ@xX(gw z=nU$5TGnivjmS9h<9p({Ywn74L4@2M{=o(-K+YDLh?xydmn&^U%U1-p+Z2l?mZb>= z5EoM0Z_*80H_LNWAFm(JM+HT5@D5seTXAE2ng2A6qxZFDHm*EZe7qJY)9xEu&gsBx zzM;x#@G6*s&1Fvg)f4x+L{0{6I{1DFU^Gh)%YQ!2qdWtZarpdLg8o#KGaAfQV%D8a&7rW!)b#}pvyVoE+ z8(906m<1Pci`&ME+uDDKxTa~~CS&ufCLBv&@F~6yeSFui-w3(A&$Ve%*mVbR3-lmk z(k)A#XPB1;tSfW`OaL&yh2oq^xMms?eZ*qfml{=ilx}0u>D})Z4YjE@>;3QC5+VZCf6UkMU_Q4vR+8vxub81-YLRsV^I11 zgZqzHM40ix=IowrZju02CKmw;WOe{gqy8yI9xXZe3woZX@(*@EBq)m}_H!$nJU#2T zOX56rvpPT#)M0{$!p+Oo*@*a55OK4*|1q=$UxRXUU4mqOd3O=_h}{=&V18io9NCT(ed8Ljk`%^}Fg3c~3COZcg!s70sGDp4N)yzx3kFafv=VmXUMs zPc+&X#XBU4OYr^6D}e;lS^o{?_Gh@%O5iRxClWSR1E|#smaF;YMp1^Y?U}0Y_;v%D zB6cG_9Jup%U^!9dxGiJ)q!@NyuPRrh`7(*nsNeLFajb7*#V?n6&yk&XcC_F82(q6S z1o30$%p&$r`T{{+S95k{An@8C(KFf?#qSO?;#KEWMq~4KUwC=-z{DwazBhQmiQ4U% zVEQ)Afj_3F@q`!7-^<_3pqlWgx%pMtAkTq>vVmEc@V#Q*ukx$}>o4@p(L2FcubHfkcmmSV>-DYl;tUR%^973Oc_6UMLqNCNNuP+#rU@#tvRC{US%pF8cJG@9CO@hg(hQwu4yl-Q6bqAgMl zBOL^R0##Fwk)5b>@v+-+q~qv`N>o5h#WZ&nAWonIB@E}7Z^5N-;MD)wDS`8l@Ug+JeH zYPz3j)7p$X;kaPs>$^1%mv47g0uJql&IUzbtEix_lpT%D)tG7v8yVU2@y#>;Mcp_& z1J5S%vt4t9U1uNd=2R-`oPOLD70&=r3eTcu8t*I%kCso{GaJ%%^yIO2K`+ovP>6hL zg*t}U$NYlv5YhVrMh+Z8Ck?u5aw|J{4z8;lM%$Jk>J*6amuoYGdqCpEzz&mSK)rXMIvmF>NyEji&GJu%yl3u%$Irwa9TutW z*OWUYDU~oNt1;vBl4y6WIlal}cWOGTN7) z(_=iai0;4&x&Qf$zM|1+H-OC=+p|FV)brMohwN8vYIS?t(cmq`O}pzXlD=*Ao_ z3>PgPwW|C||HNA3SQ3+xlQnMY)>f{wmQ+_zLSTHz+P)JK0B1d6mTk9%5NVUsC;|1i5nIMA9HHj-2X+^vJk>uhxFrg&}qeqs$Zrq%wqo( z7=FnO_T++RL0chM1oSR(+UIO1MdJ;!w>FG#P;aE6#$PQO$j463Eqn9y6PE31_JE5* z>5oFLs>0MpOMa4qBgnhj6OF0qI=m>}7@m=4~NE6}&7 z$f<5#C|?`K7YBQo2d?#d-YNEpurUqHht_h=r5mz-Gl{3cvd985+fdPypI<|oeii9d zjLhnQTh1h`6Ie~NYJ~wVr~IZboq;lLOp|J~^7l=Qsf^17S55M07M5=^Lw09it1Xx$ zMQeP}y+-FqbeVbud?#>|Z@cXiAu~OoBYQ%xo_o!{SlhP#!>jAk&yyY2ycr!-1y%(}ziU9#A=|F){aDyo(=q^JVDIR2`zI@^~j?Z4mcAihzpdYPr1;31+3D_9_H zIPPX3?XY^^d(FsMEHg!!9G)r)at3%F3Eu>rgq8DKKN5Xo-FU-8DxWK&gc+W&kAkc# z)g%*)#Qf>_MzWwiGAlvH`%gSZ;s}lJuTI1*3OI^XmdgJ@&pVQ}+=b?=r&gpZ+3{di 
z+q$LkXtURlrtNpw+Hl!P7n=TkKB_EAc1)}KURB;4HF83KopgtFLHR@x7Ef+w3pr;b zQvtzQ1&?L;{`haM{a(IKv;(?b;S<>*c9EqUGTr+{1#t~BN@@EDP4~lWT+iA%M_x`+ z6q>g_W!p6%?DhQJCm{45y=oy>)ghwIjf;#k=S`5mu?j=x@tZ??6@>}`!mZv^WP3(< zQtO1+n$bUu`uGm+4~9nmJhxy_Vari@mt;yl+LlOU_%RbqyO)Qe0|tT6!yQi_jeshI z9-??A0P+6?l9kURWU)Z(a|OMFZ0Fbu9O#?N^+v6;Z)Os0dNiL6w&|rgz}a_Kn%M7w zw0`qP0=#t}=FJ%6-VHZ#2>2=SV{eVg~+1F?dMSJ6qXhY z|BfFyDV7WPsHe@M*;iC5;;XW_Y8}|y1D?|^Y)R4k>P)&VJ^!{~hRI#| zLlC<;Jc>oov$8duU2wN|^=!wiBPeSg*~oDfBgw9_#)lYfg4989bO?UIdP#-5u(BOm zEqG)l@9~n5Ssdi=PcGrjXkDJbF_MvO{hAi3Hvq1{yZQ7GS(AJXc!jK;C)ruO*ea z3MCSJ|GrZID0Sj*B`-O9d!TxSYAHJq%)wqwB$0~3wr2gWzvbh9)?xeR9zF0hB*aeb>6CBK) zc(zQ`%#4&7ovfO5Kt}@o6i_?o%}p^x?~X}F5=GTZf3<^^|EP}AxYoXns!~%~R0P&W z{C&}{#m+fr%qOs;vxYT!J|qR=HE%|_5mT8@a;JwLK+b0D+Ar0*KyyFBRH86i`o)*l zylk+aBbmeee{tQi7=``s*;tg>K)%4mNq<9lAe0M!LFkV_L!+V6`tpBfV;Pa&7IH2)n3Ri;EhQar0pP7@3{( zB&qrb^r*%4$}P@)GdQ+}*g0K`uhn@j@si^Rv}xCR&H5vJiXF%ZH=$4o1dwpCToT2n4 zl)PreAcbx%&!AJ+@EwlLYjnd3cbzW_m}-snW=+D}1#KxNi_LWhSy%J)PAOmpI$OgX z`mSXwZW|tF{Z{YO75&T@dM-k*Q~W~cSc6?+-7(wtWJlwHqff^cuUl_|19z{%T&97m zl3pE_+g@DgP)6b6^5jqFEWD045C~_}jziz1I3F&LuI~A&uwb*?)Q!VUIM?be+KT%f ziidHV%&G=d+CEcVdYLe-_{S^Uy%8fSg=FY~xd(IJiVuFiMHQ)cU&F z==kgps17t%z~?!2I)&bp_0Pzj=u-gCL4(n-^*O|1S!LBK#DM#o`7$c_QRfgZ$Wp2{}Kl{{^~uCzoLc zIxsoj_A}=`KjedLhi#*0Cn6fWH)=i6noHH&0Hfe=qaT78l!s_$o|V;d$4r8upPmp6 z;LRZWiNmo=okv(X_*Q?pQ-|j4-N7hjmyi!3&xM9RSuG2>OuP0gqU)XCR*aPzDoo(e z-n*|jjaR6_&0SL16k;9)?|zybn?>6V zy(rh>+uNLQS@Y*Vl=$a3l#hQQ<6Rzm*t-Ke8^?s^l~Wii@f%BQU4L!)E2z%FEMQ`~ zpt%$0#`FD75ryUma~HpS(1@uUVhRg;#=456du?>V=T<5_sP<;VYw*887o2NHVy*Z2 zGOEa{!VQ5Yscpxtz_|>7M*ainx4}v0GQ4*DR&JXZR}K4^y#Kn9vrCz4G8liCZAHA; z$%v7y_k4SXn;neF`Dxe5G5e5@Y$F>@Q*NzOFkb&S^?8^@oal;tC_`G(WtE#AfayJq z<{8e}NF?7rCAVgd-tOT1YSG8G@7uhtrP*3o4(A%zl}O9D;x26!zu70&$RMjx4=NfXeiD3>>Uuc6=l#jl=EQf`T$G+^ujqGE4C}Rb+14Udk*RkIw9hX+G;`<^kxsz=n2P15Je2=by z9X-pECTUc%cPtyZwS*+C*^%^Soj7=Mmzxd=2mkLs!QzfxyQSz5Xk0<> zID^Iw3Gw!dvS-^FgRV!cJYpL?Rq|2N$DaK zY2$#~>H9v?Kf)297YMf9)dcl@-Y#ZzE_lRx{}l6l-o)6DltTyEEQb0$oOeCJw+C~c 
zqrnfmDf-jTkLVBQ+o#WW+f6yoccOXu`h9-h^#%*Lo~HNo23(thYtq56@*Boy7f=5e z)V4d(^>ns^F8F-4eY*MbrMIPx&wuTngF~Mlq;0w$&StvKM;|#onTd+9js?vdZ4)jp zPrJ_7Gc~rke;lN1^P+V8c9*2F5J?sqg*Djpd^w5#sf$Z4!IqL4~4#a;&I(C=x>r^EL}QrHL@J*-@DhpeeJ?EB7f#X96cXoy6a?=CeMVarSM-^ zF1POgj^)k-{|n2pIcfnc?T6)a>lL%arS&Bl5_%PVZILfa;5DOs*%;G2mfCSW^$zvR zp%dGeR<>{I$KKVM5X~K$%QhPGs__sR)TZ?~f+)mU#b5dXYl(f@oSk$}$tBZEfOnci z_dNBLK=%5nMB5Zo&cY=vt`4|%EZ-IoEv1^Xixt2Hl^zu8_8ZMtZHz9LNkLuSPjxsj z>yfZ2MKk7JFgT7srsnUSlQfWnmpKC1PVRXuvGITehwCN+F{WM&`q;bo+M2nM3cJKS zQdwKKd)EE`6U%+d_+PNxpM?Lwa=+qz$4cssH&&-S4ykkxwz_2zi`pFE{I3t6Rpemi zZQAdVc-h<*h544nP}X(wX4zapTC0}4BRBC*0H&6jD~=s-mD?o42FzwMzF}j-2|vuG zGFxpnwiB(&&@_N3^>Yz$GT_cUW!KAJPKoUha8I=0`hrWTsJ;rdzAd*SicaVuPi4dL zQiukd-79ni@(uhCy52cB(lA)`j&0k?#&U#=rwEztZhrawSE?ykYkezKaM!G=GudJm%$*wz;cI^kLZrm5IgiGC} z$Bfe;h=!zY+*$!qzVLig)-)f(X2?Z<$SxB@$bR%HDRE-zPhz>h&BMv>CzvK>R(utCSvvLB zv{RZ&K&s|5@CzQ=R2tBrUL(r7ypjYwe5a1LdzC&Z${#4ameY@hN^?-&fT)(57OtiHFAM?{@v@Md@EDO@o>J`_}m*4^}Adf;#< zYk4l*=iqA*T5RZ5eI+dS{)XI8%Iew+bg<5IK_uopny368=+&ZK)sMj)`(I3>f3(Ip zQEe^uG8=cdvgGI0<+D&P+ga>o?ql)kh}zR`!u!oF;8QX7>Bh^|Pv8y$4w*ox{OV*P z?uyDFS~W}$4v<}tjQt$(pG>1rK(+mHnB?2Yh)=z_1hGbcS=jQ?zX!@!*ALI%<3VRP z4P{(*79R&v7sOzUUQT(Cp0#-~?g$l|Ha}-W&ThPXdw$38I~EO8QVH|5J1_2@h-boV z=H$6=oVrGDd)RdyDFo#L=CSz(a`JMnYLO|gpZ-0ROt|_sAm1NW#CmhCSATIfFfvm6 zf{ocN|26N>7gruzepsxP_-lAHa<=&K&=$Xp8p`W*sd620Pv2Q4Q^@ALPrScowK<6)8NFXERM3_u@NK z7i#aR=Rys~Z#YfNZk_wva(&L3bvgBVLt_5QRwvJHYvxwwjjx{&5rhp6JJvWGpFU&M zsHOT4A_=@{HlCay)(H4V35|NHgEhj*4DE<6EoHcirW3cL-65k_fmLgYUZt{Ki$1Q+oq~+%Hez$HJD@CR#r{k~cnL>ksw#$HJ4(5z)1zxVKKtRtFd> zCheW#EyVfXHl5_=x4YIP?H~x$A@PZ2OZsahS7{ow#2~bPF%3Sd0z0na3_fwSuN2Am z=1{zvwo$w7ev>R6G;MrZ=mDRUm?vHtHiBA-Y7Z$;{}ZG0!Q8!95l-Ow z-Y5Ivhs+V+rv`dOKK?1uAOk3RWAxbkF+tmEbM-ja-jS#XvHmt9OKqEUt5>zem8_V& zE)l|=6}%P%oCt#G7$dQEm03Q96`}&_+_$c9m|$*#byN%#oVy2}5{OtL$z$19HmuQ} zYgqKc#i^gBc%M*GXU6?~EZpmdZLPG5S{Nye;%c!e2=2wkYoX5zr%*W3s8T4O!o7bD zLUdc|y3AS@vfZKnY`7R+hPLa3+#fj`y+MWA(46}jxg{nmjvE$8;&Tbp!PDd~)DZDQ 
zG$=h@T0^_XnOE7(5qBlzds(Kz+kI#AYU`p(n%x6}P~#1?xozYF1@OCA=5$M9MV*tV z+yw-BQ?5x^Af%LOyce}FH(s1=d#ppHCc4u!A3?Mp41xag0+H7t20kG48TasmGpzy0 zFTJfJ*&7I;=((mueZXtC-o#jtkDhkj`yAYsdBcit!3B<-b_8SS4i*djMQqNEu;bUN zMG+G4SrMP7s=2c}rXgPo&nTzuGgJ_mP!#gPZeXgo$td6{Y?~&o5;NlsPkm%ita*Rg z_HL2_)TFX~hyf3!;nuuZ27D?cl)tQUJL1I-1pu|?PJHQud%3PMmyhR`jsly0En*%2 zfyJ?H8x8iw305`F)H7j8El{JZ6h`MA%`6N=78uw226ppFw^lw%%s+VT(bYdGSO0c5 zsX-43MkpvZDXu1#C302l7nc2XHvA@SZj>r8;(T9zGQ(%nO7dPW z@@2_Iq2Bzp52i#HktWVDDoR1>RP|0RZt(5#b`u@-JNMT0DCVQgSKWz@-19lF(Ocfb z`%l{ColjA$eo!92JrpI5MW$y2@ROW@Dz2RnkRjE4H z_NC@G{?-!kyBBPd&kc&9IPVhVa0(>qNa4rWjr^-8=2n-;ozlxp!}}gXizk-_26^uP zrrxN{bb3-#D95c5-+dBPcukf3LU{3U!rP%RfjO-{>e`-Zi^2D<+XWZUY9Xl&e^SADj8LyvdiyMBc9Wt-NtHy&Bfd2cHZTo)n>}`$$B(0&4&b;qx5iV z7UYF}Q{qmcQn<%mKS$~_Q-PosC#JeQgD^-oT)LKlu`Ob8^t)!(OmX?EA$9(v-gYy; zEXIxQ{eQr0m}ln%N(yV50R^`=-^DwXG*Q27J5J@rdxmi zS%?2&_)J62;pRPy-6OyShUFjIRTm1ljAd)f;Ftqi3_T+3$!gNCZnaaKU@IGaz}-zr zTc+Xgb-&rE7NssFs<2n%2}F{0(?%6dhfUUM9rA<0Fo(z*udI^h~8 z@t@1|{{q*t0BcV2t;&ZC8PPWkY~rgh3!lsI3Z24E#cS&gu(3+{&9%o9Vt1vFOH7g- z3jYPIi5r0q{5P~#|1Y1z-1MIu$YQI=)^tLT`j&b{lR}6Fm2T7jVAd`J78t;ujN1TRdwR$Q&LMUSE>~AZbMRH%X zmYq2Ab9YmX@FJ+tT)FOjZguE&_Xy50n0>XKUP-=D8a{6OepMY(yt{6*m$4AW9`FcL zLltsU2ifp`!BIQ|Ahh@kN>7^uAh9d#t~VY!i`+wGNT}UBes*c%^VH=ni`MI-~bMdS_ME8yZiAog3JX+A4Wt$Ke-b3AO z0`ScL0@J83`HIYRZ`|{X6B2}eg#q_0y&Rw&HL_P<@1vvJnXP9~9h_T=QA@mN{4Oyz z^iYo_M=pZ)b~b5Oau$~@Jh7vjVVF+fOcY>a1^!+I)*&7uO#NRn?JQ85PhvPE6E)b# zZ2huy7=8US{qq>T|6^nF?JD*13TxA^)%p8${Ih~|f9lft_mT7ObNbL3Y0LG9+pTgC zOQz4pp4Q7KqXA#g*>@k40%ZiVU3mvaoP{SmvvWb;iKe*^r_lmN{d?HkhDV!@J36c# z2j$BB*nKu6c+dY}(r`10-Fi&#a`I$ZEHIrTY;yZhwKbgLHkj@OljhG zXuJmt+|aAYzR7v^f#FajkATo~Z_;5dyLANbeM(`xemkLn`49Yn<^*%~e*6O#LDU-x)GimoFL#C!Ux#`b!yVWwiPawf;m3vI(SHuv5 zxJraIgUmDBH~WGbTf>^}S%@WoS(345l;_{`)eiTelzs1FF76yXkRskzslzZoJztWd z&Mt06SBM{*%eqpbMpXL!QxeI0uv-?T@ekw`IE{J(jYPbGIrqV*+E3|JYcF-;qw&(L zVg_~cutjnsRNcpjoHmS^E+*_Ly)@^9-A^!8o5Ph?3~$g#-41(h z)W*8;{{_%MLI`{^j8-|^e;?(sla)AhJT-)xQyD)W^TwS12o2v;BKCaRT^sCkAn<;P 
z`Qr;6mW=KnwLc}^z2`CT@gi$uxWrp7TCyw1+uMpXI!r>s+klij-B8!eq7dcFs4%Uz#{bO#dwPE9GmCF7&~|27|n~kE%-OiLt$ZvR%YVm@71~cl7-|*3bzk&-U7Kj7l4hDK~3V-+b`BFwBS~q^9=mwq~}D>(Js8| z@AJ_mDLM35^`BZQ&K_aU!I7#iPVGs4r5FXtQ@j9ZyB1)@-Jym-&P;HYR`)p}8vN}h z!>YjPpO2ouZ3!{?|85vFS(AJ$xoq%k$&hWD(D>%igK8o@VBwsa=iW6P_0RZ`f?b zRjFEZ?T zx<+;&oTFJDe1_G;3-Jc;?yT(^+C!KVm7uM3)tG4e3c=vkiU`}>-*H4W9PO{(|&MG6!kyde(s^3;7<`TAbBJ?x+2Xrlf-XhL~J(`XlG4mo(NBj{3} zu1bY(a)vbM>vwgd;)v2U?;XF!*K-BiXIcxB23%LbYr`0m;uaLQfciF!IwCCkl{$!I z_o@Nk^JIRYlOVw~bp3Y6lg+26;e06DpHPdu*q97vwnHxN&8ob@4<~661Rtbkr>=+l zuYI>mgW}jqKNgdNwojMgNZ>M?1bIjCDZZ+YQrQE!*Wt$D^T+-ANZIFC(=?Z`gWmZy zF;*(uS>`I#9oQ}$L?~% z%M;Xl@g%5mjZ3ZEKu-Ju{g<^xLvY+5?UHF%^tZhpJ9Fpx_9PqcC>%fZj)!N)H8;Ug zA&mZq!xr&at=r_6DK0sZ7ux<*Nipwbc4{n4k12HbAihrQk4?)T4o%RT+#Mk2f?UR^ zXN=8k4udoJ4g_8W>xV!W%;j9R&aq*+o%E1FG;B%V~MJj-qNq0?r9p%UK(3N3jpU8r}dAPaeM(nHi z6uMwd0a>t-YXXmhA2l;Wr(ri@%Ju8J#gg8;lkielxvY5y@-~?qQ2#Q!?n{r}N`G$- zV5=v$C5}pDKMLJo5>)R1!BrT3JUQpL)kj80cNv(tFmWvWD+xsf&u~;+jX~C%8TxT~ zDJ$?|TzY`|^~C;Hr6Cr#O)VP_7Y6?K{cXI6!)E5m;)Bk2tAh66ty^S*(I}^&RaNN7 z-iVsLW6!G2os{>)7GDi=7Lo8AfV_R{ZXxXQN@{+`9J}fbIqvp?rd!V=t%iTJ%>F*k zXWZbsI98VDM|&}`HMQ0AuJigp-21h5n$~N>1c*NTkJ4G1`Wo`rcRgu$o~Yewj% zooQDHqvrLY_5R;g?`Fb?yg$F5?|OF{zefCi zOE-Vc|0Z1SZut_b`MNmr`&2weYCA8Q>gjop>-8-^X&p52V1kUR-Ryl|{=#v1`g*-N z%KQ32w}0tf((w2->Hj$wg(~iooT%;|T)B;k~S+Mw`=?ldXKu%vKmjtTu` zx+vRf!T*{e{yj+O*H3dkN_>h#@0WcDZ1MYi6ndGh?M>@P{=7)r!mp7P`jldOZNc_B zI{p&)5NiL}%iAs&`g#-cdqyQcI~g%xi8S~Z*WF^kQv9(H&Bd4fR`RgrSMl}C@Nw&K z(K?}UUGeZZ!BqXZ_a!6rxw%yKsZd1<_5SsF_4W4k`1P^HlJ1$vy_7~M`OA0_-ToN} zh%EYSDg+O_Wd}Q$irzAk9$_w7x`*DjUTnv&wWMo}p1*`;0HiO5q|xR8-T4pP2jyw^ zep+wI?~3>?>p_s$crj=)>ohM-f~o|UKf*p}->O!+Vt1Fi)3kb_>3OqCQ~GU1o%D40 z9SU!BFN>cW80;Urd;*)}niZsKJj#3pA5UaFZ>xO7=~DXdS#Pm04;;Rm)i?SqJqcZ5 zi`t>dQZ22=OuJ1sk85}rOl3=HxnCiM;E*J(2FYfMD!KZ&*S^U+%*QCCy+(`zJjo~$ zTOfOErb#fw5ld&Cn-1<%fA)=H8k6$&1IJrOs_)Pz&sWn~ipYVh-$B~?<@>7*BlDWLeo<)4^wz+I z??>tv-iG;9*F%JtY0H+eV3Ewh{XI|E@K&y1C*_XihOOl^3wjFc>n5`b{FhXGsXR+# 
z&&LLp#^2lM+#!m_FWYP&82}lgW+Q!2SWYX4``iAJpEVYPgN2zOncaexn11#iQf^5+ zzgooBC@pL`YM2p4KHh9pD-O~bUv4=aEq8usTG%$nKXg<-)kHoqEO2>IqkZ^vzI=+l zGScR4N?D1HTXzmmINI$pEPFiRTeJN>U4GnA!hn0RBPxgg< zHHVku=~(k{ zvOY9Hlniz&;r5K>Ou}nvZoTZ)n#sqLt9P-e(r$)Bg)-s}^vr-Y1sAs3IvCdWd(9is zsL1`JlL)9vJ5asN90hB7ruXY{X3NhzDCRB=>-GDL|M)Bk`DVsG=K){y03psTKdXZ8 zgYEM0pP$Px+)t1Wmp48ACV06C^ZNp=APqR6ZG^Y~e0uygW$%MK(~mO-lC}jcw=O04 z@j4&9ZuL|D*)h;ZnxWTqTeKfm=Eo0w-j&xUd|r|KNi*r19{FEQ0>X1-v5i&XsE|^A zpCv8djRGfI9+{u`@~B9EBOZy_FIufWUzi|2{`=?|_|Y84m0$h$8@mi&%Co&pkiMnl z<3SZfChWIGtKmPXOo%DTdnln#{2&iL(dY5DQrDk8If)#)hFH!Vbng}mezV_QH}=-( z@19QXnK1cnD2$40lLM(vl$HJa7Obu1CjcLRyTVbzXS03JYGe$w!7P3yyck9DG(mLA zj&wFj+$}aqX{*}-&U1lbUg$9 z>Z6Tce=}E~dshz2R3%lj+)9ONh=XHd)M8+@V#Y7{^vd)Im-x1!r5G>T^jTyzsuy^^ zk~E6CsLr6VbDJ^3P#5%+ju8qNRV8h85)ERrNLBPrlK$pOlncWI{oUC8`>2dbBxyZH z!+Su*8Ld39RQmSMJ$*Isi(IEoKupFyV z%DKQjECw-~1&u)Z*0s*z?B;lxXjN!bE9;758W`%}LJ#qD`qeTy=G*j*8t@g@pqPCE1p`%);UjG`fxG67CRuexh+A%9 zk+KVXVjvJy3qmfqHu0DX%ifjbNb>?O!x~OTGp^cD2a8a?&&U?Mx^Q6645{9$NS0bq zQqmWVh7dHfWu}3vq%{Foq4uvHC@FM0qvmZfT$A-WYjOqa@IOG=I0Yq1sG*_pKM!je zGimo^1h=r&0U6Q7Ukg={)|o1p5Xw+GO?{cZ|AJG<||e0!Amek zIE{ablszEN$AH!aOC0Lz&W{5GVDT5i@n&B@G<#od(x&kOR-c#YWt6xj-k z{#B|7)uK>PyjXuOPKz6)T(~xG*nrWS3DjW7_SJzJcxz)G?Yo5f)W+X{Ii=wmkVRvz znUmg<9qha)Lbp4u(>>T0`fj2POojdzbnBEMs63Wd3^0;JWptG+vV|7g`NW9Kx>_Jr zPW}TphiSMSRpZJY%(Mu%0pw06ucbTxRIP%$H$bdY3tJmnZ9HwO zg7*E2`ywH3Fzq5rXc~@@?ZXTDN}#?;iMQJKIjh^e8R#VOnP*(tR8sX}RnstG!!5)% zhuimn{0dG~B81wc zTl^8B7Q?Eo_73d;+pFBcQhX6R!ol`5R3f$YfHQU5Bjrd2G~VEwU~UXxse zR9$CfxS7+suqY-fLlivXs$p>x02`i{MkKB_QFCYCh&D(@CxKCf5V!&s&wem0WnlI8 zM1~*;I#N>TgMt7x8k4?<8H-8ox>T-M9OH0?_Jqo{DK1;Un2HM&=pGz#ew=iH7;2A) zrUu$Rh3@P^B3@&}(4WI0b2Z$?=O5c+cseZJ>12!QCv2`N6>A4nf5Pj@)mX~!17kLM zOHqbF37B_)g8QQA-| z*DFzf!Fo71#;4C_WDPhE>|&Jcy|CbdR)V>dn)+YDpo&f5tHxH)uH*hiO47L(0+m!> z0RH2u&X=Z~&u$2QK_$)&joAKGqMSX7?Vnc%XTWUgc)z2!p^3jJ02g> zIXCZzXf5;XQ-M5X#7jVczYt)mbnGzX1ma$@BxxMr^fTSrjG%{O)?JRascd^}4I8Y? 
zPLm<96!I)E*NCj0lm+h?CE&eN&>GqXxMQYLO0@yn2)cJd)3wr1HGX#5*YAd{NFv-x zBukkUc!=s!5FQ5=RCK>e-Iv5fPO312=v8*07b{uSf zHFgo5MLTOf%yUFqWV=kNakR#EZ3;<+kp$9hUY*a0n^k9Eh=Ixkg}DnEwX9eqS_tIk zuAj+&4KF!YEB{2`*V?HzY7tXmx!EX&2`Cr#^pIsMvg2T9y{Y&LNmUM%lYh(?@0jN;m0LD_$n*cfg>#{ZUk`^H6>mG@hDDQQ%$%Xu zzR%2TLHpc%%M*{eUp>{L9Ilr9tvw8*fEwqbJ_DF%%)tU?z2t=@_Ru0*PC`Ca8_U#y zE)`Jgl>B7mKoDQJSbl=xhJC#5h?Qaa2UUFYW?1S$xDmtIz+)+1vv^R@T)2`GWWwA$ zavN(DLh^-5#d3*1`|D$*YIZW|a0LaFRM)1+P~s8%39S7TR5`+Cz@`IO?go0iSMk~< zW2Ha7$%*kirJ3PeRB3HeXcHg-6xlOS7=RWLc1>&)t%`4{E%7%BI*{9DP9Bl$CZ{Q2 zG+(C3h3n+7=xVbMKtqJ}&s07&>|YKvqxJ)fMGzJIqY0gHAFC5cMJs?}t{AWy)cYqb zh*{jDDx$TiC}Ov7jKj_6KfWpR*z{UDpjp-rn3R}eP&*q9LRw8kSmHFPsu^4;Vj!9M zdA!SS#S~vy`8k{X@2jvM*g?1%%`$;f=B;S9;ql2Fppg z3cRbE0Z!dHADn9q$>5tW%5kVsM}rY>>#?zG}Z}KMwo%y@jRJP>ZO9 z$9T7Dc|c?ySZqHYD=} z_vzuDKL#Pr^_Qz5127hU!820PJ4BwGcsko`%t4gP8R}6A!-wwdMR1R*C47tXi?z{A z76utlJ@eGiKp=GvInJofYc>VTxKvRd%p*0>9c2XS4{$Jh>B6tyW)zEv@uF zIDJgt#cyRS*wjfcD>hKQ3RG4dvCR}XG@Hz`i13W|tStfE#~gs2#at2WbK{`QY8 zHLme;0oKv(72Q zLB1L(C-+2N&|D`&yawun@rNUo?1Xg z4tn!M6W&w!T&CP6vL!^!CEo8_9F6MXjITXtc1oltiCNVx_hoioCD2LmO4D1;;x=lV z-PPbK=4xEjL!C1}@JQGlR|ta^R!ups=b8l3O? z&DdGKl^OkWszP!meM10L39}-!NU^RwiIieTK9RGyMT_koUT2;x0IHZlL3RY|I-W!(w2 zp_88fLw}FPPXMpNT-}Cke*EIu&_ormkWZ<&KO${ApPNsq@|@XGFP51IGzpo5Hd zLkewMpsWCPKV&Hkocgoyj+G46FdC7VvAb*)MGOyjA4mhqnjaA;0@S>J=gQsCrx6^? 
zADPA|8TQC44Je@SloHXe?uZ~h*0c<{B38o4{!7e%QCCQCCFM1NO zsS0=nk-*1uqdV_M*ie!Pt7h=M=~i+0wlXt)Nu4$^^XTQNIC##qsc`#7CJHt1wv6tEGSEJSgpxOyR{W)#Su;4c;V>+ByI>c zO(|n$#H2zB!I;%=C@NXGdZ^y`#ZT$Xfn|-w`rLypzUoc#QAJSs-k5f5+7fIft4X(qKLnyOL~SHzsQN4*hU`)hK%E=#1D*0r%&~>AA()r!PxuVv)ul+* z;yxG|g>^X(7aSxWbDN<^rL>{kDXs_WB|t553}F3e=u$!C2mHy<1W#$mUDV~8aSjTe zh5Ra>_)O2OQ7q{H{rh7Z#YMp%=thhbdfuEwYay(d{A)DUjBr_8754OQjGIFK@c5_6 zMdgkUG#<`p?YJ67!-?X^tgZr7>*U+Wa>&%tynLdg{df%>iqXjKF_f5~bO3?xm|m=n zLGxm4Mia6&EO5A(MVHW!SU6c57>ZlLE~RI2#1fQjgLaMQ7YLJAMdr(itPzvx6W=%r0XTBT+g%)q>-A zmHIZunCeOWBmu0dMhzwt+WbVU!+}q36=KnKFZBlH187j-yhF4cLE`qMg5p%!wkaJV z0X-!A<#(|lYJEK+P$kAEnozd-qBBd%l%S~RYDn$cIlQi>zQ1$n88FJROiG7)GwZC3 z8clWs&*EDPDQ(a4X|Mp*pXC6Y_7!l7|Q8_T1yQAPgIt^;O|%RtN%$molk;o-tz}mcZ?bbL1hE(74B0 zcx3@22-F%duk!MK$X;&>Ym*@mpyblV2y6Nb3GhF*h5`{SWfXe%Nu)t!kEKHu?Sl3i9g4>A=J1xisZ^JQZa zV-(UufO4~VRD+0c())bArd?fj%&WtqMfi1d^h+36G)F(2gtb~B|7f3zy9P4}5C;wq z>KuF$HTc&83O8YvhN|AJifUANF|V9J!!Pa2x*gr(*o!GI%_V>&q7N~lv1hAX07yr>w0yPnA3}BcK??~*I3NCR9D?6T{ z5h#$RNhsIil! 
znELY`0C5Z-l27Mdv8e>v9%x%|J1HYd0jx8x15iQU{Fseblj9M=zt3Mjgh4H^k=C0_ z^e+cbsaLG!UI|<{0XR>gjKt$TEcYMh`4gi@m1zEyW1rf*RpUX%!XOF3Z=$4;N5iX< zV~1JK&c<8|0Ol*5T5OLts6)OXmLi?w#n+VVGocq&2t?VcI0WPD=j&l48O@BKWzkD~ zue5)f*=|-WVO<9wL=KK&f5xRyFB6e%LUj)qF3~MBwUv= z`Y`(|(cY2t+zi1s8J0=o)V~6rKbSl76|C2dWfaF{yza_I0|zj;u;mD87F)Qo z2sq3G%v1wQ8nkhh)dhoB0__AxtBS|AM?xdH@(Q)ZjJE>1+n}(b<~aGuf&z`L@%Yre zmEYz4Q9+;(V8v4e+cIbIU28?&abI;&e!0$#Ei%dvv0u z&`B7YGgMJ7YX6RvqDK26iVx~vF+zZ9mE#OZKs zUGi7VVXZ@M&2^Jf=1b5v8O3Cg2uNYvA=7ZILv~19Q$niW349szXKhB)w+X~6Z}ijO z(p@X-peWdnm0g3{VtBCtiUXYz7wG1&qOA(^-qb2{P00dqN#%RA1)+vKW=`=8!M=8c zmjJy+7*Q6LHIv48<)VVGyg%-w4Hf$3GWjsMG$lqjFsI^> z#-pM5laP6|;70VmX+b&oB69UytaQJmz&bWU{j)5%?sTZ57#isF;%?X1M8>JZo>g>#lGweH^R(7ArT>Nvy{LMiIY_b z&jX-3s%;0m2qU7cF?EVEg}k0;5qUmbU@f94n2uYY0H}gGo{VbHGzqLXnL3;%yhFM5 z8engBjTsLORMAu;UjbW2i;{%87N-2|J8)oRRYLxWg0^FbHJ(;6tZAbY5@}n$KhdWf zj#<%kvF{zSMga^cs(4gTVhF@`=0V{`qDTzw`BROO&xno$4NJeXi zGrp*~1kPR#F|dFvB^$3T#V*4cEzvFkWR?vfK--xym~c}xJUah&pC6Z=L+dDKV7vtj zA8T4mt@4z2A~QHN64ub;mA3+#DTfipl?3hJ@yZ1N>UeN#P0Rh9TWBaN z;+Nxj^0Q695I#-GBqO0umLd}sabdfwltw9(&)8Q09Z%L-O(fJf{v?u-#0S~D7;zS( z@|XS1##(j=+kz{(ID)ByHs#mW9qejy6YHI#u$NhgF12c)l_Gu0hta2Mg*9Dpf*E~jgZdny(%JVv zpzZ*uVUoxz*CdOuqm}(h$U$K?x9h8hT=?B(uBOUd1lH09)o9P~AJ;$;Le8x&OP>}P zrD}kzR_=kw!QxJCc)Wv3)4y^-WhGr+h5K53TJrq-j!C}Xng3|HFMrkHRc5O|q+y0n zLqJG&xLh6@Qma|JR6w|=B4GHN{B^fpPn*SVhz5Bag98uxTbfMB1wkwR3 zht117zZRT@ZU~7p*l>3!&Ty>Vxql+_Wxa6S31KorIJq2(6qoy%YuQ9pnt~D=SP|nt zD&cx9&wi^GMSn`yp%$!lS-LKHQblPc7Ee5LCLsDbA{-?`*wHbJq@FuaLJ5}GaLlNX z)Fk{u@m~%g>aR#EN@Hp|hS>M`2u4#NGZK3E+xOG46opftPxEliuql=UHJgYqk=c8Y zc=*9|#$;dQ(D9-YL%K@8W9Mes_So3VirwP2;=&^z`wAqiIU!>qSQI^JGG76swmmY4 z$)<>=nce8FS^M3_ovE|GJ6Oy3q9&Z81Tqcn=<8W>dbKVqvx$m)Ddf0t*eMUx+|yBFIX8J^(*U zwU}RmkU2u{86$BV3fu(8EnYWxiCS3dQDM7N5ptNnKshW_jZFQ~T4??JXk_RZo++1SmlxheDB7#enns-_M^FSQt@AbOkOmsXva! 
[GIT binary patch (base85-encoded blob) omitted]

diff --git a/tests/waku_rln_relay/test_rln_contract_deployment.nim new file mode 100644 index 000000000..5a9624ce8 --- /dev/null +++
b/tests/waku_rln_relay/test_rln_contract_deployment.nim @@ -0,0 +1,29 @@ +{.used.} + +{.push raises: [].} + +import std/[options, os], results, testutils/unittests, chronos, web3 + +import + waku/[ + waku_rln_relay, + waku_rln_relay/conversion_utils, + waku_rln_relay/group_manager/on_chain/group_manager, + ], + ./utils_onchain + +suite "Token and RLN Contract Deployment": + test "anvil should dump state to file on exit": + # git will ignore this file; if the contract has been updated and the state file needs to be regenerated, this file can be renamed to replace the one in the repo (tests/waku_rln_relay/anvil_state/state-deployed-contracts-mint-and-approved.json) + let testStateFile = some("tests/waku_rln_relay/anvil_state/anvil_state.ignore.json") + let anvilProc = runAnvil(stateFile = testStateFile, dumpStateOnExit = true) + let manager = waitFor setupOnchainGroupManager(deployContracts = true) + + stopAnvil(anvilProc) + + check: + fileExists(testStateFile.get()) + + # The test should still pass even if the compression fails + compressGzipFile(testStateFile.get(), testStateFile.get() & ".gz").isOkOr: + error "Failed to compress state file", error = error diff --git a/tests/waku_rln_relay/test_rln_group_manager_onchain.nim b/tests/waku_rln_relay/test_rln_group_manager_onchain.nim index cf697961a..aac900911 100644 --- a/tests/waku_rln_relay/test_rln_group_manager_onchain.nim +++ b/tests/waku_rln_relay/test_rln_group_manager_onchain.nim @@ -33,8 +33,8 @@ suite "Onchain group manager": var manager {.threadVar.}: OnchainGroupManager setup: - anvilProc = runAnvil() - manager = waitFor setupOnchainGroupManager() + anvilProc = runAnvil(stateFile = some(DEFAULT_ANVIL_STATE_PATH)) + manager = waitFor setupOnchainGroupManager(deployContracts = false) teardown: stopAnvil(anvilProc) diff --git a/tests/waku_rln_relay/test_waku_rln_relay.nim b/tests/waku_rln_relay/test_waku_rln_relay.nim index 0bbb448e1..ea3a5ca62 100644 ---
a/tests/waku_rln_relay/test_waku_rln_relay.nim +++ b/tests/waku_rln_relay/test_waku_rln_relay.nim @@ -27,8 +27,8 @@ suite "Waku rln relay": var manager {.threadVar.}: OnchainGroupManager setup: - anvilProc = runAnvil() - manager = waitFor setupOnchainGroupManager() + anvilProc = runAnvil(stateFile = some(DEFAULT_ANVIL_STATE_PATH)) + manager = waitFor setupOnchainGroupManager(deployContracts = false) teardown: stopAnvil(anvilProc) diff --git a/tests/waku_rln_relay/test_wakunode_rln_relay.nim b/tests/waku_rln_relay/test_wakunode_rln_relay.nim index 7308ae257..1850b5277 100644 --- a/tests/waku_rln_relay/test_wakunode_rln_relay.nim +++ b/tests/waku_rln_relay/test_wakunode_rln_relay.nim @@ -30,8 +30,8 @@ procSuite "WakuNode - RLN relay": var manager {.threadVar.}: OnchainGroupManager setup: - anvilProc = runAnvil() - manager = waitFor setupOnchainGroupManager() + anvilProc = runAnvil(stateFile = some(DEFAULT_ANVIL_STATE_PATH)) + manager = waitFor setupOnchainGroupManager(deployContracts = false) teardown: stopAnvil(anvilProc) diff --git a/tests/waku_rln_relay/utils_onchain.nim b/tests/waku_rln_relay/utils_onchain.nim index 06e4fcdcf..d8bb13a62 100644 --- a/tests/waku_rln_relay/utils_onchain.nim +++ b/tests/waku_rln_relay/utils_onchain.nim @@ -3,7 +3,7 @@ {.push raises: [].} import - std/[options, os, osproc, deques, streams, strutils, tempfiles, strformat], + std/[options, os, osproc, streams, strutils, strformat], results, stew/byteutils, testutils/unittests, @@ -14,7 +14,6 @@ import web3/conversions, web3/eth_api_types, json_rpc/rpcclient, - json, libp2p/crypto/crypto, eth/keys, results @@ -24,25 +23,19 @@ import waku_rln_relay, waku_rln_relay/protocol_types, waku_rln_relay/constants, - waku_rln_relay/contract, waku_rln_relay/rln, ], - ../testlib/common, - ./utils + ../testlib/common const CHAIN_ID* = 1234'u256 -template skip0xPrefix(hexStr: string): int = - ## Returns the index of the first meaningful char in `hexStr` by skipping - ## "0x" prefix - if hexStr.len > 1 
and hexStr[0] == '0' and hexStr[1] in {'x', 'X'}: 2 else: 0 - -func strip0xPrefix(s: string): string = - let prefixLen = skip0xPrefix(s) - if prefixLen != 0: - s[prefixLen .. ^1] - else: - s +# Path to the file which Anvil loads at startup to initialize the chain with pre-deployed contracts and an account funded with tokens and approved for spending +const DEFAULT_ANVIL_STATE_PATH* = + "tests/waku_rln_relay/anvil_state/state-deployed-contracts-mint-and-approved.json.gz" +# The contract address of the TestStableToken used for the RLN Membership registration fee +const TOKEN_ADDRESS* = "0x5FbDB2315678afecb367f032d93F642f64180aa3" +# The contract address used to interact with the WakuRLNV2 contract via the proxy +const WAKU_RLNV2_PROXY_ADDRESS* = "0x5fc8d32690cc91d4c39d9d3abcbd16989f875707" proc generateCredentials*(): IdentityCredential = let credRes = membershipKeyGen() @@ -106,7 +99,7 @@ proc sendMintCall( recipientAddress: Address, amountTokens: UInt256, recipientBalanceBeforeExpectedTokens: Option[UInt256] = none(UInt256), -): Future[TxHash] {.async.} = +): Future[void] {.async.} = let doBalanceAssert = recipientBalanceBeforeExpectedTokens.isSome() if doBalanceAssert: @@ -142,7 +135,7 @@ proc sendMintCall( tx.data = Opt.some(byteutils.hexToSeqByte(mintCallData)) trace "Sending mint call" - let txHash = await web3.send(tx) + discard await web3.send(tx) let balanceOfSelector = "0x70a08231" let balanceCallData = balanceOfSelector & paddedAddress @@ -157,8 +150,6 @@ proc sendMintCall( assert balanceAfterMint == balanceAfterExpectedTokens, fmt"Balance is {balanceAfterMint} after transfer but expected {balanceAfterExpectedTokens}" - return txHash - # Check how many tokens a spender (the RLN contract) is allowed to spend on behalf of the owner (account which wishes to register a membership) proc checkTokenAllowance( web3: Web3, tokenAddress: Address, owner: Address, spender: Address @@ -487,20 +478,64 @@ proc getAnvilPath*(): string = anvilPath = joinPath(anvilPath,
".foundry/bin/anvil") return $anvilPath +proc decompressGzipFile*( + compressedPath: string, targetPath: string +): Result[void, string] = + ## Decompress a gzipped file using the gunzip command-line utility + let cmd = fmt"gunzip -c {compressedPath} > {targetPath}" + + try: + let (output, exitCode) = execCmdEx(cmd) + if exitCode != 0: + return err( + "Failed to decompress '" & compressedPath & "' to '" & targetPath & "': " & + output + ) + except OSError as e: + return err("Failed to execute gunzip command: " & e.msg) + except IOError as e: + return err("Failed to execute gunzip command: " & e.msg) + + ok() + +proc compressGzipFile*(sourcePath: string, targetPath: string): Result[void, string] = + ## Compress a file with gzip using the gzip command-line utility + let cmd = fmt"gzip -c {sourcePath} > {targetPath}" + + try: + let (output, exitCode) = execCmdEx(cmd) + if exitCode != 0: + return err( + "Failed to compress '" & sourcePath & "' to '" & targetPath & "': " & output + ) + except OSError as e: + return err("Failed to execute gzip command: " & e.msg) + except IOError as e: + return err("Failed to execute gzip command: " & e.msg) + + ok() + # Runs Anvil daemon -proc runAnvil*(port: int = 8540, chainId: string = "1234"): Process = +proc runAnvil*( + port: int = 8540, + chainId: string = "1234", + stateFile: Option[string] = none(string), + dumpStateOnExit: bool = false, +): Process = # Passed options are # --port Port to listen on. # --gas-limit Sets the block gas limit in WEI. # --balance The default account balance, specified in ether. # --chain-id Chain ID of the network. 
+ # --load-state Initialize the chain from a previously saved state snapshot (read-only) + # --dump-state Dump the state on exit to the given file (write-only) # See anvil documentation https://book.getfoundry.sh/reference/anvil/ for more details try: let anvilPath = getAnvilPath() info "Anvil path", anvilPath - let runAnvil = startProcess( - anvilPath, - args = [ + + var args = + @[ "--port", $port, "--gas-limit", @@ -509,9 +544,54 @@ proc runAnvil*(port: int = 8540, chainId: string = "1234"): Process = "1000000000", "--chain-id", $chainId, - ], - options = {poUsePath, poStdErrToStdOut}, - ) + ] + + # Add state file argument if provided + if stateFile.isSome(): + var statePath = stateFile.get() + info "State file parameter provided", + statePath = statePath, + dumpStateOnExit = dumpStateOnExit, + absolutePath = absolutePath(statePath) + + # Check if the file is gzip compressed and handle decompression + if statePath.endsWith(".gz"): + let decompressedPath = statePath[0 .. ^4] # Remove .gz extension + debug "Gzip compressed state file detected", + compressedPath = statePath, decompressedPath = decompressedPath + + if not fileExists(decompressedPath): + decompressGzipFile(statePath, decompressedPath).isOkOr: + error "Failed to decompress state file", error = error + return nil + + statePath = decompressedPath + + if dumpStateOnExit: + # Ensure the directory exists + let stateDir = parentDir(statePath) + if not dirExists(stateDir): + createDir(stateDir) + # Fresh deployment: start clean and dump state on exit + args.add("--dump-state") + args.add(statePath) + debug "Anvil configured to dump state on exit", path = statePath + else: + # Using cache: only load state, don't overwrite it (preserves clean cached state) + if fileExists(statePath): + args.add("--load-state") + args.add(statePath) + debug "Anvil configured to load state file (read-only)", path = statePath + else: + warn "State file does not exist, anvil will start fresh", + path = statePath, absolutePath = 
absolutePath(statePath) + else: + info "No state file provided, anvil will start fresh without state persistence" + + info "Starting anvil with arguments", args = args.join(" ") + + let runAnvil = + startProcess(anvilPath, args = args, options = {poUsePath, poStdErrToStdOut}) let anvilPID = runAnvil.processID # We read stdout from Anvil to see when daemon is ready @@ -549,7 +629,14 @@ proc stopAnvil*(runAnvil: Process) {.used.} = # Send termination signals when not defined(windows): discard execCmdEx(fmt"kill -TERM {anvilPID}") - discard execCmdEx(fmt"kill -9 {anvilPID}") + # Wait for graceful shutdown to allow state dumping + sleep(200) + # Only force kill if process is still running + let checkResult = execCmdEx(fmt"kill -0 {anvilPID} 2>/dev/null") + if checkResult.exitCode == 0: + info "Anvil process still running after TERM signal, sending KILL", + anvilPID = anvilPID + discard execCmdEx(fmt"kill -9 {anvilPID}") else: discard execCmdEx(fmt"taskkill /F /PID {anvilPID}") @@ -560,52 +647,100 @@ info "Error stopping Anvil daemon", anvilPID = anvilPID, error = e.msg proc setupOnchainGroupManager*( - ethClientUrl: string = EthClient, amountEth: UInt256 = 10.u256 + ethClientUrl: string = EthClient, + amountEth: UInt256 = 10.u256, + deployContracts: bool = true, ): Future[OnchainGroupManager] {.async.} = + ## Setup an onchain group manager for testing. + ## If deployContracts is false, it will assume that the Anvil testnet already has the required contracts deployed, which significantly speeds up test runs. + ## To run Anvil with a cached state file containing pre-deployed contracts, see the runAnvil documentation. + ## + ## To generate/update the cached state file: + ## 1. Call runAnvil with stateFile and dumpStateOnExit=true + ## 2. Run setupOnchainGroupManager with deployContracts=true to deploy contracts + ## 3. The state will be saved to the specified file when anvil exits + ## 4. 
Commit this file to git + ## + ## To use cached state: + ## 1. Call runAnvil with stateFile and dumpStateOnExit=false + ## 2. Anvil loads state in read-only mode (won't overwrite the cached file) + ## 3. Call setupOnchainGroupManager with deployContracts=false + ## 4. Tests run fast using pre-deployed contracts let rlnInstanceRes = createRlnInstance() check: rlnInstanceRes.isOk() let rlnInstance = rlnInstanceRes.get() - # connect to the eth client let web3 = await newWeb3(ethClientUrl) let accounts = await web3.provider.eth_accounts() web3.defaultAccount = accounts[1] - let (privateKey, acc) = createEthAccount(web3) + var privateKey: keys.PrivateKey + var acc: Address + var testTokenAddress: Address + var contractAddress: Address - # we just need to fund the default account - # the send procedure returns a tx hash that we don't use, hence discard - discard await sendEthTransfer( - web3, web3.defaultAccount, acc, ethToWei(1000.u256), some(0.u256) - ) + if not deployContracts: + info "Using contract addresses from constants" - let testTokenAddress = (await deployTestToken(privateKey, acc, web3)).valueOr: - assert false, "Failed to deploy test token contract: " & $error - return + testTokenAddress = Address(hexToByteArray[20](TOKEN_ADDRESS)) + contractAddress = Address(hexToByteArray[20](WAKU_RLNV2_PROXY_ADDRESS)) - # mint the token from the generated account - discard await sendMintCall( - web3, web3.defaultAccount, testTokenAddress, acc, ethToWei(1000.u256), some(0.u256) - ) + (privateKey, acc) = createEthAccount(web3) - let contractAddress = (await executeForgeContractDeployScripts(privateKey, acc, web3)).valueOr: - assert false, "Failed to deploy RLN contract: " & $error - return + # Fund the test account + discard await sendEthTransfer(web3, web3.defaultAccount, acc, ethToWei(1000.u256)) - # If the generated account wishes to register a membership, it needs to approve the contract to spend its tokens - let tokenApprovalResult = await 
approveTokenAllowanceAndVerify( - web3, - acc, - privateKey, - testTokenAddress, - contractAddress, - ethToWei(200.u256), - some(0.u256), - ) + # Mint tokens to the test account + await sendMintCall( + web3, web3.defaultAccount, testTokenAddress, acc, ethToWei(1000.u256) + ) - assert tokenApprovalResult.isOk, tokenApprovalResult.error() + # Approve the contract to spend tokens + let tokenApprovalResult = await approveTokenAllowanceAndVerify( + web3, acc, privateKey, testTokenAddress, contractAddress, ethToWei(200.u256) + ) + assert tokenApprovalResult.isOk(), tokenApprovalResult.error + else: + info "Performing Token and RLN contracts deployment" + (privateKey, acc) = createEthAccount(web3) + + # fund the default account + discard await sendEthTransfer( + web3, web3.defaultAccount, acc, ethToWei(1000.u256), some(0.u256) + ) + + testTokenAddress = (await deployTestToken(privateKey, acc, web3)).valueOr: + assert false, "Failed to deploy test token contract: " & $error + return + + # mint the token from the generated account + await sendMintCall( + web3, + web3.defaultAccount, + testTokenAddress, + acc, + ethToWei(1000.u256), + some(0.u256), + ) + + contractAddress = (await executeForgeContractDeployScripts(privateKey, acc, web3)).valueOr: + assert false, "Failed to deploy RLN contract: " & $error + return + + # If the generated account wishes to register a membership, it needs to approve the contract to spend its tokens + let tokenApprovalResult = await approveTokenAllowanceAndVerify( + web3, + acc, + privateKey, + testTokenAddress, + contractAddress, + ethToWei(200.u256), + some(0.u256), + ) + + assert tokenApprovalResult.isOk(), tokenApprovalResult.error let manager = OnchainGroupManager( ethClientUrls: @[ethClientUrl], diff --git a/tests/wakunode_rest/test_rest_health.nim b/tests/wakunode_rest/test_rest_health.nim index dacfd801e..ed8269f55 100644 --- a/tests/wakunode_rest/test_rest_health.nim +++ b/tests/wakunode_rest/test_rest_health.nim @@ -41,8 +41,8 @@ suite 
"Waku v2 REST API - health": var manager {.threadVar.}: OnchainGroupManager setup: - anvilProc = runAnvil() - manager = waitFor setupOnchainGroupManager() + anvilProc = runAnvil(stateFile = some(DEFAULT_ANVIL_STATE_PATH)) + manager = waitFor setupOnchainGroupManager(deployContracts = false) teardown: stopAnvil(anvilProc) diff --git a/vendor/waku-rlnv2-contract b/vendor/waku-rlnv2-contract index 900d4f95e..8a338f354 160000 --- a/vendor/waku-rlnv2-contract +++ b/vendor/waku-rlnv2-contract @@ -1 +1 @@ -Subproject commit 900d4f95e0e618bdeb4c241f7a4b6347df6bb950 +Subproject commit 8a338f354481e8a3f3d64a72e38fad4c62e32dcd diff --git a/waku/waku_rln_relay/group_manager/on_chain/group_manager.nim b/waku/waku_rln_relay/group_manager/on_chain/group_manager.nim index e8af61682..bdb272c1f 100644 --- a/waku/waku_rln_relay/group_manager/on_chain/group_manager.nim +++ b/waku/waku_rln_relay/group_manager/on_chain/group_manager.nim @@ -242,7 +242,7 @@ method register*( fetchedGasPrice = fetchedGasPrice, gasPrice = calculatedGasPrice calculatedGasPrice let idCommitmentHex = identityCredential.idCommitment.inHex() - info "identityCredential idCommitmentHex", idCommitment = idCommitmentHex + debug "identityCredential idCommitmentHex", idCommitment = idCommitmentHex let idCommitment = identityCredential.idCommitment.toUInt256() let idCommitmentsToErase: seq[UInt256] = @[] info "registering the member", @@ -259,11 +259,10 @@ method register*( var tsReceipt: ReceiptObject g.retryWrapper(tsReceipt, "Failed to get the transaction receipt"): await ethRpc.getMinedTransactionReceipt(txHash) - info "registration transaction mined", txHash = txHash + debug "registration transaction mined", txHash = txHash g.registrationTxHash = some(txHash) # the receipt topic holds the hash of signature of the raised events - # TODO: make this robust. 
search within the event list for the event - info "ts receipt", receipt = tsReceipt[] + debug "ts receipt", receipt = tsReceipt[] if tsReceipt.status.isNone(): raise newException(ValueError, "Transaction failed: status is None") @@ -272,18 +271,27 @@ method register*( ValueError, "Transaction failed with status: " & $tsReceipt.status.get() ) - ## Extract MembershipRegistered event from transaction logs (third event) - let thirdTopic = tsReceipt.logs[2].topics[0] - info "third topic", thirdTopic = thirdTopic - if thirdTopic != - cast[FixedBytes[32]](keccak.keccak256.digest( - "MembershipRegistered(uint256,uint256,uint32)" - ).data): - raise newException(ValueError, "register: unexpected event signature") + ## Search through all transaction logs to find the MembershipRegistered event + let expectedEventSignature = cast[FixedBytes[32]](keccak.keccak256.digest( + "MembershipRegistered(uint256,uint256,uint32)" + ).data) - ## Parse MembershipRegistered event data: rateCommitment(256) || membershipRateLimit(256) || index(32) - let arguments = tsReceipt.logs[2].data - info "tx log data", arguments = arguments + var membershipRegisteredLog: Option[LogObject] + for log in tsReceipt.logs: + if log.topics.len > 0 and log.topics[0] == expectedEventSignature: + membershipRegisteredLog = some(log) + break + + if membershipRegisteredLog.isNone(): + raise newException( + ValueError, "register: MembershipRegistered event not found in transaction logs" + ) + + let registrationLog = membershipRegisteredLog.get() + + ## Parse MembershipRegistered event data: idCommitment(256) || membershipRateLimit(256) || index(32) + let arguments = registrationLog.data + trace "registration transaction log data", arguments = arguments let ## Extract membership index from transaction log data (big endian) membershipIndex = UInt256.fromBytesBE(arguments[64 .. 
95]) From 7920368a36687cd5f12afa52d59866792d8457ca Mon Sep 17 00:00:00 2001 From: Fabiana Cecin Date: Mon, 8 Dec 2025 06:34:57 -0300 Subject: [PATCH 17/70] fix: remove ENR cache from peer exchange (#3652) * remove WakuPeerExchange.enrCache * add forEnrPeers to support fast PeerStore search * add getEnrsFromStore * fix peer exchange tests --- tests/node/test_wakunode_peer_exchange.nim | 18 ++-- tests/waku_peer_exchange/test_protocol.nim | 100 +++++++++------------ waku/node/peer_manager/waku_peer_store.nim | 14 +++ waku/node/waku_node.nim | 3 - waku/waku_peer_exchange/protocol.nim | 98 +++++++++----------- 5 files changed, 108 insertions(+), 125 deletions(-) diff --git a/tests/node/test_wakunode_peer_exchange.nim b/tests/node/test_wakunode_peer_exchange.nim index 9b0ea4c40..e6649c455 100644 --- a/tests/node/test_wakunode_peer_exchange.nim +++ b/tests/node/test_wakunode_peer_exchange.nim @@ -66,15 +66,17 @@ suite "Waku Peer Exchange": suite "fetchPeerExchangePeers": var node2 {.threadvar.}: WakuNode + var node3 {.threadvar.}: WakuNode asyncSetup: node = newTestWakuNode(generateSecp256k1Key(), bindIp, bindPort) node2 = newTestWakuNode(generateSecp256k1Key(), bindIp, bindPort) + node3 = newTestWakuNode(generateSecp256k1Key(), bindIp, bindPort) - await allFutures(node.start(), node2.start()) + await allFutures(node.start(), node2.start(), node3.start()) asyncTeardown: - await allFutures(node.stop(), node2.stop()) + await allFutures(node.stop(), node2.stop(), node3.stop()) asyncTest "Node fetches without mounting peer exchange": # When a node, without peer exchange mounted, fetches peers @@ -104,12 +106,10 @@ suite "Waku Peer Exchange": await allFutures([node.mountPeerExchangeClient(), node2.mountPeerExchange()]) check node.peerManager.switch.peerStore.peers.len == 0 - # Mock that we discovered a node (to avoid running discv5) - var enr = enr.Record() - assert enr.fromUri( - 
"enr:-Iu4QGNuTvNRulF3A4Kb9YHiIXLr0z_CpvWkWjWKU-o95zUPR_In02AWek4nsSk7G_-YDcaT4bDRPzt5JIWvFqkXSNcBgmlkgnY0gmlwhE0WsGeJc2VjcDI1NmsxoQKp9VzU2FAh7fwOwSpg1M_Ekz4zzl0Fpbg6po2ZwgVwQYN0Y3CC6mCFd2FrdTIB" - ), "Failed to parse ENR" - node2.wakuPeerExchange.enrCache.add(enr) + # Simulate node2 discovering node3 via Discv5 + var rpInfo = node3.peerInfo.toRemotePeerInfo() + rpInfo.enr = some(node3.enr) + node2.peerManager.addPeer(rpInfo, PeerOrigin.Discv5) # Set node2 as service peer (default one) for px protocol node.peerManager.addServicePeer( @@ -121,10 +121,8 @@ suite "Waku Peer Exchange": check res.tryGet() == 1 # Check that the peer ended up in the peerstore - let rpInfo = enr.toRemotePeerInfo.get() check: node.peerManager.switch.peerStore.peers.anyIt(it.peerId == rpInfo.peerId) - node.peerManager.switch.peerStore.peers.anyIt(it.addrs == rpInfo.addrs) suite "setPeerExchangePeer": var node2 {.threadvar.}: WakuNode diff --git a/tests/waku_peer_exchange/test_protocol.nim b/tests/waku_peer_exchange/test_protocol.nim index 204338a85..74cdba110 100644 --- a/tests/waku_peer_exchange/test_protocol.nim +++ b/tests/waku_peer_exchange/test_protocol.nim @@ -142,9 +142,13 @@ suite "Waku Peer Exchange": newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) node2 = newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) + node3 = + newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) + node4 = + newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) # Start and mount peer exchange - await allFutures([node1.start(), node2.start()]) + await allFutures([node1.start(), node2.start(), node3.start(), node4.start()]) await allFutures([node1.mountPeerExchange(), node2.mountPeerExchangeClient()]) # Create connection @@ -154,18 +158,15 @@ suite "Waku Peer Exchange": require: connOpt.isSome - # Create some enr and add to peer exchange (simulating disv5) - var enr1, enr2 = enr.Record() - check enr1.fromUri( - 
"enr:-Iu4QGNuTvNRulF3A4Kb9YHiIXLr0z_CpvWkWjWKU-o95zUPR_In02AWek4nsSk7G_-YDcaT4bDRPzt5JIWvFqkXSNcBgmlkgnY0gmlwhE0WsGeJc2VjcDI1NmsxoQKp9VzU2FAh7fwOwSpg1M_Ekz4zzl0Fpbg6po2ZwgVwQYN0Y3CC6mCFd2FrdTIB" - ) - check enr2.fromUri( - "enr:-Iu4QGJllOWlviPIh_SGR-VVm55nhnBIU5L-s3ran7ARz_4oDdtJPtUs3Bc5aqZHCiPQX6qzNYF2ARHER0JPX97TFbEBgmlkgnY0gmlwhE0WsGeJc2VjcDI1NmsxoQP3ULycvday4EkvtVu0VqbBdmOkbfVLJx8fPe0lE_dRkIN0Y3CC6mCFd2FrdTIB" - ) + # Simulate node1 discovering node3 via Discv5 + var info3 = node3.peerInfo.toRemotePeerInfo() + info3.enr = some(node3.enr) + node1.peerManager.addPeer(info3, PeerOrigin.Discv5) - # Mock that we have discovered these enrs - node1.wakuPeerExchange.enrCache.add(enr1) - node1.wakuPeerExchange.enrCache.add(enr2) + # Simulate node1 discovering node4 via Discv5 + var info4 = node4.peerInfo.toRemotePeerInfo() + info4.enr = some(node4.enr) + node1.peerManager.addPeer(info4, PeerOrigin.Discv5) # Request 2 peer from px. Test all request variants let response1 = await node2.wakuPeerExchangeClient.request(2) @@ -185,12 +186,12 @@ suite "Waku Peer Exchange": response3.get().peerInfos.len == 2 # Since it can return duplicates test that at least one of the enrs is in the response - response1.get().peerInfos.anyIt(it.enr == enr1.raw) or - response1.get().peerInfos.anyIt(it.enr == enr2.raw) - response2.get().peerInfos.anyIt(it.enr == enr1.raw) or - response2.get().peerInfos.anyIt(it.enr == enr2.raw) - response3.get().peerInfos.anyIt(it.enr == enr1.raw) or - response3.get().peerInfos.anyIt(it.enr == enr2.raw) + response1.get().peerInfos.anyIt(it.enr == node3.enr.raw) or + response1.get().peerInfos.anyIt(it.enr == node4.enr.raw) + response2.get().peerInfos.anyIt(it.enr == node3.enr.raw) or + response2.get().peerInfos.anyIt(it.enr == node4.enr.raw) + response3.get().peerInfos.anyIt(it.enr == node3.enr.raw) or + response3.get().peerInfos.anyIt(it.enr == node4.enr.raw) asyncTest "Request fails gracefully": let @@ -265,8 +266,8 @@ suite "Waku Peer Exchange": 
peerInfo2.origin = PeerOrigin.Discv5 check: - not poolFilter(cluster, peerInfo1) - poolFilter(cluster, peerInfo2) + poolFilter(cluster, peerInfo1).isErr() + poolFilter(cluster, peerInfo2).isOk() asyncTest "Request 0 peers, with 1 peer in PeerExchange": # Given two valid nodes with PeerExchange @@ -275,9 +276,11 @@ suite "Waku Peer Exchange": newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) node2 = newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) + node3 = + newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) # Start and mount peer exchange - await allFutures([node1.start(), node2.start()]) + await allFutures([node1.start(), node2.start(), node3.start()]) await allFutures([node1.mountPeerExchange(), node2.mountPeerExchangeClient()]) # Connect the nodes @@ -286,12 +289,10 @@ suite "Waku Peer Exchange": ) assert dialResponse.isSome - # Mock that we have discovered one enr - var record = enr.Record() - check record.fromUri( - "enr:-Iu4QGNuTvNRulF3A4Kb9YHiIXLr0z_CpvWkWjWKU-o95zUPR_In02AWek4nsSk7G_-YDcaT4bDRPzt5JIWvFqkXSNcBgmlkgnY0gmlwhE0WsGeJc2VjcDI1NmsxoQKp9VzU2FAh7fwOwSpg1M_Ekz4zzl0Fpbg6po2ZwgVwQYN0Y3CC6mCFd2FrdTIB" - ) - node1.wakuPeerExchange.enrCache.add(record) + # Simulate node1 discovering node3 via Discv5 + var info3 = node3.peerInfo.toRemotePeerInfo() + info3.enr = some(node3.enr) + node1.peerManager.addPeer(info3, PeerOrigin.Discv5) # When requesting 0 peers let response = await node2.wakuPeerExchangeClient.request(0) @@ -312,13 +313,6 @@ suite "Waku Peer Exchange": await allFutures([node1.start(), node2.start()]) await allFutures([node1.mountPeerExchangeClient(), node2.mountPeerExchange()]) - # Mock that we have discovered one enr - var record = enr.Record() - check record.fromUri( - "enr:-Iu4QGNuTvNRulF3A4Kb9YHiIXLr0z_CpvWkWjWKU-o95zUPR_In02AWek4nsSk7G_-YDcaT4bDRPzt5JIWvFqkXSNcBgmlkgnY0gmlwhE0WsGeJc2VjcDI1NmsxoQKp9VzU2FAh7fwOwSpg1M_Ekz4zzl0Fpbg6po2ZwgVwQYN0Y3CC6mCFd2FrdTIB" - ) - 
node2.wakuPeerExchange.enrCache.add(record) - # When making any request with an invalid peer info var remotePeerInfo2 = node2.peerInfo.toRemotePeerInfo() remotePeerInfo2.peerId.data.add(255.byte) @@ -362,17 +356,17 @@ suite "Waku Peer Exchange": newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) node2 = newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) + node3 = + newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) # Start and mount peer exchange - await allFutures([node1.start(), node2.start()]) + await allFutures([node1.start(), node2.start(), node3.start()]) await allFutures([node1.mountPeerExchange(), node2.mountPeerExchange()]) - # Mock that we have discovered these enrs - var enr1 = enr.Record() - check enr1.fromUri( - "enr:-Iu4QGNuTvNRulF3A4Kb9YHiIXLr0z_CpvWkWjWKU-o95zUPR_In02AWek4nsSk7G_-YDcaT4bDRPzt5JIWvFqkXSNcBgmlkgnY0gmlwhE0WsGeJc2VjcDI1NmsxoQKp9VzU2FAh7fwOwSpg1M_Ekz4zzl0Fpbg6po2ZwgVwQYN0Y3CC6mCFd2FrdTIB" - ) - node1.wakuPeerExchange.enrCache.add(enr1) + # Simulate node1 discovering node3 via Discv5 + var info3 = node3.peerInfo.toRemotePeerInfo() + info3.enr = some(node3.enr) + node1.peerManager.addPeer(info3, PeerOrigin.Discv5) # Create connection let connOpt = await node2.peerManager.dialPeer( @@ -396,7 +390,7 @@ suite "Waku Peer Exchange": check: decodedBuff.get().response.status_code == PeerExchangeResponseStatusCode.SUCCESS decodedBuff.get().response.peerInfos.len == 1 - decodedBuff.get().response.peerInfos[0].enr == enr1.raw + decodedBuff.get().response.peerInfos[0].enr == node3.enr.raw asyncTest "RateLimit as expected": let @@ -404,9 +398,11 @@ suite "Waku Peer Exchange": newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) node2 = newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) + node3 = + newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) # Start and mount peer exchange - await 
allFutures([node1.start(), node2.start()]) + await allFutures([node1.start(), node2.start(), node3.start()]) await allFutures( [ node1.mountPeerExchange(rateLimit = (1, 150.milliseconds)), @@ -414,6 +410,11 @@ ] ) + + # Simulate node1 discovering node3 via Discv5 + var info3 = node3.peerInfo.toRemotePeerInfo() + info3.enr = some(node3.enr) + node1.peerManager.addPeer(info3, PeerOrigin.Discv5) + # Create connection let connOpt = await node2.peerManager.dialPeer( node1.switch.peerInfo.toRemotePeerInfo(), WakuPeerExchangeCodec ) require: connOpt.isSome - # Create some enr and add to peer exchange (simulating disv5) - var enr1, enr2 = enr.Record() - check enr1.fromUri( - "enr:-Iu4QGNuTvNRulF3A4Kb9YHiIXLr0z_CpvWkWjWKU-o95zUPR_In02AWek4nsSk7G_-YDcaT4bDRPzt5JIWvFqkXSNcBgmlkgnY0gmlwhE0WsGeJc2VjcDI1NmsxoQKp9VzU2FAh7fwOwSpg1M_Ekz4zzl0Fpbg6po2ZwgVwQYN0Y3CC6mCFd2FrdTIB" - ) - check enr2.fromUri( - "enr:-Iu4QGJllOWlviPIh_SGR-VVm55nhnBIU5L-s3ran7ARz_4oDdtJPtUs3Bc5aqZHCiPQX6qzNYF2ARHER0JPX97TFbEBgmlkgnY0gmlwhE0WsGeJc2VjcDI1NmsxoQP3ULycvday4EkvtVu0VqbBdmOkbfVLJx8fPe0lE_dRkIN0Y3CC6mCFd2FrdTIB" - ) - - # Mock that we have discovered these enrs - node1.wakuPeerExchange.enrCache.add(enr1) - node1.wakuPeerExchange.enrCache.add(enr2) - await sleepAsync(150.milliseconds) # Request 2 peer from px.
Test all request variants diff --git a/waku/node/peer_manager/waku_peer_store.nim b/waku/node/peer_manager/waku_peer_store.nim index 9cde53fe1..b7f2669e5 100644 --- a/waku/node/peer_manager/waku_peer_store.nim +++ b/waku/node/peer_manager/waku_peer_store.nim @@ -227,3 +227,17 @@ proc getPeersByCapability*( ): seq[RemotePeerInfo] = return peerStore.peers.filterIt(it.enr.isSome() and it.enr.get().supportsCapability(cap)) + +template forEnrPeers*( + peerStore: PeerStore, + peerId, peerConnectedness, peerOrigin, peerEnrRecord, body: untyped, +) = + let enrBook = peerStore[ENRBook] + let connBook = peerStore[ConnectionBook] + let sourceBook = peerStore[SourceBook] + for pid, enrRecord in tables.pairs(enrBook.book): + let peerId {.inject.} = pid + let peerConnectedness {.inject.} = connBook.book.getOrDefault(pid, NotConnected) + let peerOrigin {.inject.} = sourceBook.book.getOrDefault(pid, UnknownOrigin) + let peerEnrRecord {.inject.} = enrRecord + body diff --git a/waku/node/waku_node.nim b/waku/node/waku_node.nim index 65b2093bb..07e36dd13 100644 --- a/waku/node/waku_node.nim +++ b/waku/node/waku_node.nim @@ -525,9 +525,6 @@ proc stop*(node: WakuNode) {.async.} = if not node.wakuStoreTransfer.isNil(): node.wakuStoreTransfer.stop() - if not node.wakuPeerExchange.isNil() and not node.wakuPeerExchange.pxLoopHandle.isNil(): - await node.wakuPeerExchange.pxLoopHandle.cancelAndWait() - if not node.wakuPeerExchangeClient.isNil() and not node.wakuPeerExchangeClient.pxLoopHandle.isNil(): await node.wakuPeerExchangeClient.pxLoopHandle.cancelAndWait() diff --git a/waku/waku_peer_exchange/protocol.nim b/waku/waku_peer_exchange/protocol.nim index cf7ebc2a7..b99f5eabf 100644 --- a/waku/waku_peer_exchange/protocol.nim +++ b/waku/waku_peer_exchange/protocol.nim @@ -22,7 +22,6 @@ export WakuPeerExchangeCodec declarePublicGauge waku_px_peers_received_unknown, "number of previously unknown ENRs received via peer exchange" -declarePublicGauge waku_px_peers_cached, "number of peer exchange 
peer ENRs cached" declarePublicCounter waku_px_errors, "number of peer exchange errors", ["type"] declarePublicCounter waku_px_peers_sent, "number of ENRs sent to peer exchange requesters" @@ -32,11 +31,9 @@ logScope: type WakuPeerExchange* = ref object of LPProtocol peerManager*: PeerManager - enrCache*: seq[enr.Record] cluster*: Option[uint16] # todo: next step: ring buffer; future: implement cache satisfying https://rfc.vac.dev/spec/34/ requestRateLimiter*: RequestRateLimiter - pxLoopHandle*: Future[void] proc respond( wpx: WakuPeerExchange, enrs: seq[enr.Record], conn: Connection @@ -79,61 +76,50 @@ proc respondError( return ok() -proc getEnrsFromCache( - wpx: WakuPeerExchange, numPeers: uint64 -): seq[enr.Record] {.gcsafe.} = - if wpx.enrCache.len() == 0: - info "peer exchange ENR cache is empty" - return @[] - - # copy and shuffle - randomize() - var shuffledCache = wpx.enrCache - shuffledCache.shuffle() - - # return numPeers or less if cache is smaller - return shuffledCache[0 ..< min(shuffledCache.len.int, numPeers.int)] - -proc poolFilter*(cluster: Option[uint16], peer: RemotePeerInfo): bool = - if peer.origin != Discv5: - trace "peer not from discv5", peer = $peer, origin = $peer.origin - return false +proc poolFilter*( + cluster: Option[uint16], origin: PeerOrigin, enr: enr.Record +): Result[void, string] = + if origin != Discv5: + trace "peer not from discv5", origin = $origin + return err("peer not from discv5: " & $origin) + if cluster.isSome() and enr.isClusterMismatched(cluster.get()): + trace "peer has mismatching cluster" + return err("peer has mismatching cluster") + return ok() +proc poolFilter*(cluster: Option[uint16], peer: RemotePeerInfo): Result[void, string] = if peer.enr.isNone(): info "peer has no ENR", peer = $peer - return false + return err("peer has no ENR: " & $peer) + return poolFilter(cluster, peer.origin, peer.enr.get()) - if cluster.isSome() and peer.enr.get().isClusterMismatched(cluster.get()): - info "peer has mismatching 
cluster", peer = $peer - return false - - return true - -proc populateEnrCache(wpx: WakuPeerExchange) = - # share only peers that i) are reachable ii) come from discv5 iii) share cluster - let withEnr = wpx.peerManager.switch.peerStore.getReachablePeers().filterIt( - poolFilter(wpx.cluster, it) - ) - - # either what we have or max cache size - var newEnrCache = newSeq[enr.Record](0) - for i in 0 ..< min(withEnr.len, MaxPeersCacheSize): - newEnrCache.add(withEnr[i].enr.get()) - - # swap cache for new - wpx.enrCache = newEnrCache - trace "ENR cache populated" - -proc updatePxEnrCache(wpx: WakuPeerExchange) {.async.} = - # try more aggressively to fill the cache at startup - var attempts = 50 - while wpx.enrCache.len < MaxPeersCacheSize and attempts > 0: - attempts -= 1 - wpx.populateEnrCache() - await sleepAsync(1.seconds) - - heartbeat "Updating px enr cache", CacheRefreshInterval: - wpx.populateEnrCache() +proc getEnrsFromStore( + wpx: WakuPeerExchange, numPeers: uint64 +): seq[enr.Record] {.gcsafe.} = + # Reservoir sampling (Algorithm R) + var i = 0 + let k = min(MaxPeersCacheSize, numPeers.int) + let enrStoreLen = wpx.peerManager.switch.peerStore[ENRBook].len + var enrs = newSeqOfCap[enr.Record](min(k, enrStoreLen)) + wpx.peerManager.switch.peerStore.forEnrPeers( + peerId, peerConnectedness, peerOrigin, peerEnrRecord + ): + if peerConnectedness == CannotConnect: + debug "Could not retrieve ENR because cannot connect to peer", + remotePeerId = peerId + continue + poolFilter(wpx.cluster, peerOrigin, peerEnrRecord).isOkOr: + debug "Could not get ENR because no peer matched pool", error = error + continue + if i < k: + enrs.add(peerEnrRecord) + else: + # Add some randomness + let j = rand(i) + if j < k: + enrs[j] = peerEnrRecord + inc(i) + return enrs proc initProtocolHandler(wpx: WakuPeerExchange) = proc handler(conn: Connection, proto: string) {.async: (raises: [CancelledError]).} = @@ -174,7 +160,8 @@ proc initProtocolHandler(wpx: WakuPeerExchange) = error "Failed 
to respond with BAD_REQUEST:", error = $error return - let enrs = wpx.getEnrsFromCache(decBuf.request.numPeers) + let enrs = wpx.getEnrsFromStore(decBuf.request.numPeers) + info "peer exchange request received" trace "px enrs to respond", enrs = $enrs try: @@ -214,5 +201,4 @@ ) wpx.initProtocolHandler() setServiceLimitMetric(WakuPeerExchangeCodec, rateLimitSetting) - asyncSpawn wpx.updatePxEnrCache() return wpx From 12952d070f10fba51afbbcfbfa1b782d0d2fed3a Mon Sep 17 00:00:00 2001 From: Sergei Tikhomirov Date: Tue, 9 Dec 2025 10:45:06 +0100 Subject: [PATCH 18/70] Add text file for coding LLMs with high-level nwaku info and style guide advice (#3624) * add CLAUDE.md first version * extract style guide advice * use AGENTS.md instead of CLAUDE.md for neutrality * chore: update AGENTS.md w.r.t. master developments * Apply suggestions from code review Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> * remove project tree from AGENTS.md; minor edits * Apply suggestions from code review Co-authored-by: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com> --------- Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> Co-authored-by: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com> --- AGENTS.md | 509 ++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 509 insertions(+) create mode 100644 AGENTS.md diff --git a/AGENTS.md b/AGENTS.md new file mode 100644 index 000000000..4f735f240 --- /dev/null +++ b/AGENTS.md @@ -0,0 +1,509 @@ +# AGENTS.md - AI Coding Context + +This file provides essential context for LLMs assisting with Logos Messaging development. + +## Project Identity + +Logos Messaging is designed as a shared public network for generalized messaging, not application-specific infrastructure. + +This project is a Nim implementation of a libp2p protocol suite for private, censorship-resistant P2P messaging.
It targets resource-restricted devices and privacy-preserving communication. + +Logos Messaging was formerly known as Waku. Waku-related terminology remains within the codebase for historical reasons. + +### Design Philosophy + +Key architectural decisions: + +Resource-restricted first: Protocols differentiate between full nodes (relay) and light clients (filter, lightpush, store). Light clients can participate without maintaining full message history or relay capabilities. This explains the client/server split in protocol implementations. + +Privacy through unlinkability: RLN (Rate Limiting Nullifier) provides DoS protection while preserving sender anonymity. Messages are routed through pubsub topics with automatic sharding across 8 shards. Code prioritizes metadata privacy alongside content encryption. + +Scalability via sharding: The network uses automatic content-topic-based sharding to distribute traffic. This is why you'll see sharding logic throughout the codebase and why pubsub topic selection is protocol-level, not application-level. + +See [documentation](https://docs.waku.org/learn/) for architectural details. 
+ +### Core Protocols +- Relay: Pub/sub message routing using GossipSub +- Store: Historical message retrieval and persistence +- Filter: Lightweight message filtering for resource-restricted clients +- Lightpush: Lightweight message publishing for clients +- Peer Exchange: Peer discovery mechanism +- RLN Relay: Rate limiting nullifier for spam protection +- Metadata: Cluster and shard metadata exchange between peers +- Mix: Mixnet protocol for enhanced privacy through onion routing +- Rendezvous: Alternative peer discovery mechanism + +### Key Terminology +- ENR (Ethereum Node Record): Node identity and capability advertisement +- Multiaddr: libp2p addressing format (e.g., `/ip4/127.0.0.1/tcp/60000/p2p/16Uiu2...`) +- PubsubTopic: Gossipsub topic for message routing (e.g., `/waku/2/default-waku/proto`) +- ContentTopic: Application-level message categorization (e.g., `/my-app/1/chat/proto`) +- Sharding: Partitioning network traffic across topics (static or auto-sharding) +- RLN (Rate Limiting Nullifier): Zero-knowledge proof system for spam prevention + +### Specifications +All specs are at [rfc.vac.dev/waku](https://rfc.vac.dev/waku). RFCs use `WAKU2-XXX` format (not legacy `WAKU-XXX`). 
+ +## Architecture + +### Protocol Module Pattern +Each protocol typically follows this structure: +``` +waku_/ +├── protocol.nim # Main protocol type and handler logic +├── client.nim # Client-side API +├── rpc.nim # RPC message types +├── rpc_codec.nim # Protobuf encoding/decoding +├── common.nim # Shared types and constants +└── protocol_metrics.nim # Prometheus metrics +``` + +### WakuNode Architecture +- WakuNode (`waku/node/waku_node.nim`) is the central orchestrator +- Protocols are "mounted" onto the node's switch (libp2p component) +- PeerManager handles peer selection and connection management +- Switch provides libp2p transport, security, and multiplexing + +Example protocol type definition: +```nim +type WakuFilter* = ref object of LPProtocol + subscriptions*: FilterSubscriptions + peerManager: PeerManager + messageCache: TimedCache[string] +``` + +## Development Essentials + +### Build Requirements +- Nim 2.x (check `waku.nimble` for minimum version) +- Rust toolchain (required for RLN dependencies) +- Build system: Make with nimbus-build-system + +### Build System +The project uses Makefile with nimbus-build-system (Status's Nim build framework): +```bash +# Initial build (updates submodules) +make wakunode2 + +# After git pull, update submodules +make update + +# Build with custom flags +make wakunode2 NIMFLAGS="-d:chronicles_log_level=DEBUG" +``` + +Note: The build system uses `--mm:refc` memory management (automatically enforced). Only relevant if compiling outside the standard build system. 
+ +### Common Make Targets +```bash +make wakunode2 # Build main node binary +make test # Run all tests +make testcommon # Run common tests only +make libwakuStatic # Build static C library +make chat2 # Build chat example +make install-nph # Install git hook for auto-formatting +``` + +### Testing +```bash +# Run all tests +make test + +# Run specific test file +make test tests/test_waku_enr.nim + +# Run specific test case from file +make test tests/test_waku_enr.nim "check capabilities support" + +# Build and run test separately (for development iteration) +make test tests/test_waku_enr.nim +``` + +Test structure uses `testutils/unittests`: +```nim +import testutils/unittests + +suite "Waku ENR - Capabilities": + test "check capabilities support": + ## Given + let bitfield: CapabilitiesBitfield = 0b0000_1101u8 + + ## Then + check: + bitfield.supportsCapability(Capabilities.Relay) + not bitfield.supportsCapability(Capabilities.Store) +``` + +### Code Formatting +Mandatory: All code must be formatted with `nph` (vendored in `vendor/nph`) +```bash +# Format specific file +make nph/waku/waku_core.nim + +# Install git pre-commit hook (auto-formats on commit) +make install-nph +``` +The nph formatter handles all formatting details automatically, especially with the pre-commit hook installed. Focus on semantic correctness. 
+ +### Logging +Uses `chronicles` library with compile-time configuration: +```nim +import chronicles + +logScope: + topics = "waku lightpush" + +info "handling request", peerId = peerId, topic = pubsubTopic +error "request failed", error = msg +``` + +Compile with log level: +```bash +nim c -d:chronicles_log_level=TRACE myfile.nim +``` + + +## Code Conventions + +Common pitfalls: +- Always handle Result types explicitly +- Avoid global mutable state: Pass state through parameters +- Keep functions focused: Under 50 lines when possible +- Prefer compile-time checks (`static assert`) over runtime checks + +### Naming +- Files/Directories: `snake_case` (e.g., `waku_lightpush`, `peer_manager`) +- Procedures: `camelCase` (e.g., `handleRequest`, `pushMessage`) +- Types: `PascalCase` (e.g., `WakuFilter`, `PubsubTopic`) +- Constants: `PascalCase` (e.g., `MaxContentTopicsPerRequest`) +- Constructors: `func init(T: type Xxx, params): T` +- For ref types: `func new(T: type Xxx, params): ref T` +- Exceptions: `XxxError` for CatchableError, `XxxDefect` for Defect +- ref object types: `XxxRef` suffix + +### Imports Organization +Group imports: stdlib, external libs, internal modules: +```nim +import + std/[options, sequtils], # stdlib + results, chronicles, chronos, # external + libp2p/peerid +import + ../node/peer_manager, # internal (separate import block) + ../waku_core, + ./common +``` + +### Async Programming +Uses chronos, not stdlib `asyncdispatch`: +```nim +proc handleRequest( + wl: WakuLightPush, peerId: PeerId +): Future[WakuLightPushResult] {.async.} = + let res = await wl.pushHandler(peerId, pubsubTopic, message) + return res +``` + +### Error Handling +The project uses both Result types and exceptions: + +Result types from nim-results are used for protocol and API-level errors: +```nim +proc subscribe( + wf: WakuFilter, peerId: PeerID +): Future[FilterSubscribeResult] {.async.} = + if contentTopics.len > MaxContentTopicsPerRequest: + return 
err(FilterSubscribeError.badRequest("exceeds maximum")) + + # Handle Result with isOkOr + (await wf.subscriptions.addSubscription(peerId, criteria)).isOkOr: + return err(FilterSubscribeError.serviceUnavailable(error)) + + ok() +``` + +Exceptions still used for: +- chronos async failures (CancelledError, etc.) +- Database/system errors +- Library interop + +Most files start with `{.push raises: [].}` to disable exception tracking, then use try/catch blocks where needed. + +### Pragma Usage +```nim +{.push raises: [].} # Disable default exception tracking (at file top) + +proc myProc(): Result[T, E] {.async.} = # Async proc +``` + +### Protocol Inheritance +Protocols inherit from libp2p's `LPProtocol`: +```nim +type WakuLightPush* = ref object of LPProtocol + rng*: ref rand.HmacDrbgContext + peerManager*: PeerManager + pushHandler*: PushMessageHandler +``` + +### Type Visibility +- Public exports use `*` suffix: `type WakuFilter* = ...` +- Fields without `*` are module-private + +## Style Guide Essentials + +This section summarizes key Nim style guidelines relevant to this project. 
Full guide: https://status-im.github.io/nim-style-guide/ + +### Language Features + +Import and Export +- Use explicit import paths with std/ prefix for stdlib +- Group imports: stdlib, external, internal (separate blocks) +- Export modules whose types appear in public API +- Avoid include + +Macros and Templates +- Avoid macros and templates - prefer simple constructs +- Avoid generating public API with macros +- Put logic in templates, use macros only for glue code + +Object Construction +- Prefer Type(field: value) syntax +- Use Type.init(params) convention for constructors +- Default zero-initialization should be valid state +- Avoid using result variable for construction + +ref object Types +- Avoid ref object unless needed for: + - Resource handles requiring reference semantics + - Shared ownership + - Reference-based data structures (trees, lists) + - Stable pointer for FFI +- Use explicit ref MyType where possible +- Name ref object types with Ref suffix: XxxRef + +Memory Management +- Prefer stack-based and statically sized types in core code +- Use heap allocation in glue layers +- Avoid alloca +- For FFI: use create/dealloc or createShared/deallocShared + +Variable Usage +- Use most restrictive of const, let, var (prefer const over let over var) +- Prefer expressions for initialization over var then assignment +- Avoid result variable - use explicit return or expression-based returns + +Functions +- Prefer func over proc +- Avoid public (*) symbols not part of intended API +- Prefer openArray over seq for function parameters + +Methods (runtime polymorphism) +- Avoid method keyword for dynamic dispatch +- Prefer manual vtable with proc closures for polymorphism +- Methods lack support for generics + +Miscellaneous +- Annotate callback proc types with {.raises: [], gcsafe.} +- Avoid explicit {.inline.} pragma +- Avoid converters +- Avoid finalizers + +Type Guidelines + +Binary Data +- Use byte for binary data +- Use seq[byte] for dynamic arrays +- Convert 
string to seq[byte] early if stdlib returns binary as string + +Integers +- Prefer signed (int, int64) for counting, lengths, indexing +- Use unsigned with explicit size (uint8, uint64) for binary data, bit ops +- Avoid Natural +- Check ranges before converting to int +- Avoid casting pointers to int +- Avoid range types + +Strings +- Use string for text +- Use seq[byte] for binary data instead of string + +### Error Handling + +Philosophy +- Prefer Result, Opt for explicit error handling +- Use Exceptions only for legacy code compatibility + +Result Types +- Use Result[T, E] for operations that can fail +- Use cstring for simple error messages: Result[T, cstring] +- Use enum for errors needing differentiation: Result[T, SomeErrorEnum] +- Use Opt[T] for simple optional values +- Annotate all modules: {.push raises: [].} at top + +Exceptions (when unavoidable) +- Inherit from CatchableError, name XxxError +- Use Defect for panics/logic errors, name XxxDefect +- Annotate functions explicitly: {.raises: [SpecificError].} +- Catch specific error types, avoid catching CatchableError +- Use expression-based try blocks +- Isolate legacy exception code with try/except, convert to Result + +Common Defect Sources +- Overflow in signed arithmetic +- Array/seq indexing with [] +- Implicit range type conversions + +Status Codes +- Avoid status code pattern +- Use Result instead + +### Library Usage + +Standard Library +- Use judiciously, prefer focused packages +- Prefer these replacements: + - async: chronos + - bitops: stew/bitops2 + - endians: stew/endians2 + - exceptions: results + - io: stew/io2 + +Results Library +- Use cstring errors for diagnostics without differentiation +- Use enum errors when caller needs to act on specific errors +- Use complex types when additional error context needed +- Use isOkOr pattern for chaining + +Wrappers (C/FFI) +- Prefer native Nim when available +- For C libraries: use {.compile.} to build from source +- Create xxx_abi.nim for raw ABI 
wrapper +- Avoid C++ libraries + +Miscellaneous +- Print hex output in lowercase, accept both cases + +### Common Pitfalls + +- Defects lack tracking by {.raises.} +- nil ref causes runtime crashes +- result variable disables branch checking +- Exception hierarchy unclear between Nim versions +- Range types have compiler bugs +- Finalizers infect all instances of type + +## Common Workflows + +### Adding a New Protocol +1. Create directory: `waku/waku_myprotocol/` +2. Define core files: + - `rpc.nim` - Message types + - `rpc_codec.nim` - Protobuf encoding + - `protocol.nim` - Protocol handler + - `client.nim` - Client API + - `common.nim` - Shared types +3. Define protocol type in `protocol.nim`: + ```nim + type WakuMyProtocol* = ref object of LPProtocol + peerManager: PeerManager + # ... fields + ``` +4. Implement request handler +5. Mount in WakuNode (`waku/node/waku_node.nim`) +6. Add tests in `tests/waku_myprotocol/` +7. Export module via `waku/waku_myprotocol.nim` + +### Adding a REST API Endpoint +1. Define handler in `waku/rest_api/endpoint/myprotocol/` +2. Implement endpoint following pattern: + ```nim + proc installMyProtocolApiHandlers*( + router: var RestRouter, node: WakuNode + ) = + router.api(MethodGet, "/waku/v2/myprotocol/endpoint") do () -> RestApiResponse: + # Implementation + return RestApiResponse.jsonResponse(data, status = Http200) + ``` +3. Register in `waku/rest_api/handlers.nim` + +### Adding Database Migration +For message_store (SQLite): +1. Create `migrations/message_store/NNNNN_description.up.sql` +2. Create corresponding `.down.sql` for rollback +3. Increment version number sequentially +4. 
Test migration locally before committing + +For PostgreSQL: add in `migrations/message_store_postgres/` + +### Running Single Test During Development +```bash +# Build test binary +make test tests/waku_filter_v2/test_waku_client.nim + +# Binary location +./build/tests/waku_filter_v2/test_waku_client.nim.bin + +# Or combine +make test tests/waku_filter_v2/test_waku_client.nim "specific test name" +``` + +### Debugging with Chronicles +Set log level and filter topics: +```bash +nim c -r \ + -d:chronicles_log_level=TRACE \ + -d:chronicles_disabled_topics="eth,dnsdisc" \ + tests/mytest.nim +``` + +## Key Constraints + +### Vendor Directory +- Never edit files directly in vendor - it is auto-generated from git submodules +- Always run `make update` after pulling changes +- Managed by `nimbus-build-system` + +### Chronicles Performance +- Log levels are configured at compile time for performance +- Runtime filtering is available but should be used sparingly: `-d:chronicles_runtime_filtering=on` +- Default sinks are optimized for production + +### Memory Management +- Uses `refc` (reference counting with cycle collection) +- Automatically enforced by the build system (hardcoded in `waku.nimble`) +- Do not override unless absolutely necessary, as it breaks compatibility + +### RLN Dependencies +- RLN code requires a Rust toolchain, which explains Rust imports in some modules +- Pre-built `librln` libraries are checked into the repository + +## Quick Reference + +Language: Nim 2.x | License: MIT or Apache 2.0 + +### Important Files +- `Makefile` - Primary build interface +- `waku.nimble` - Package definition and build tasks (called via nimbus-build-system) +- `vendor/nimbus-build-system/` - Status's build framework +- `waku/node/waku_node.nim` - Core node implementation +- `apps/wakunode2/wakunode2.nim` - Main CLI application +- `waku/factory/waku_conf.nim` - Configuration types +- `library/libwaku.nim` - C bindings entry point + +### Testing Entry Points +- 
`tests/all_tests_waku.nim` - All Waku protocol tests +- `tests/all_tests_wakunode2.nim` - Node application tests +- `tests/all_tests_common.nim` - Common utilities tests + +### Key Dependencies +- `chronos` - Async framework +- `nim-results` - Result type for error handling +- `chronicles` - Logging +- `libp2p` - P2P networking +- `confutils` - CLI argument parsing +- `presto` - REST server +- `nimcrypto` - Cryptographic primitives + +Note: For specific version requirements, check `waku.nimble`. + + From 868d43164e9b5ad0c3a856e872448e9e80531e0c Mon Sep 17 00:00:00 2001 From: Darshan K <35736874+darshankabariya@users.noreply.github.com> Date: Wed, 10 Dec 2025 17:40:42 +0530 Subject: [PATCH 19/70] Release : patch release v0.37.1-beta (#3661) --- CHANGELOG.md | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 61e818afd..3c80a3b79 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,4 +1,10 @@ -## v0.37.0 (2025-10-01) +## v0.37.1-beta (2025-12-10) + +### Bug Fixes + +- Remove ENR cache from peer exchange ([#3652](https://github.com/logos-messaging/logos-messaging-nim/pull/3652)) ([7920368a](https://github.com/logos-messaging/logos-messaging-nim/commit/7920368a36687cd5f12afa52d59866792d8457ca)) + +## v0.37.0-beta (2025-10-01) ### Notes From 7d1c6abaacba3e05edb57fa177381602b71b9b98 Mon Sep 17 00:00:00 2001 From: Sergei Tikhomirov Date: Thu, 11 Dec 2025 10:51:47 +0100 Subject: [PATCH 20/70] chore: do not mount lightpush without relay (fixes #2808) (#3540) * chore: do not mount lightpush without relay (fixes #2808) - Change mountLightPush signature to return Result[void, string] - Return error when relay is not mounted - Update all call sites to handle Result return type - Add test verifying mounting fails without relay - Only advertise lightpush capability when relay is enabled * chore: don't mount legacy lightpush without relay --- apps/chat2/chat2.nim | 4 +- tests/node/test_wakunode_legacy_lightpush.nim | 23 ++++++- 
tests/node/test_wakunode_lightpush.nim | 23 ++++++- tests/node/test_wakunode_sharding.nim | 8 +-- tests/wakunode_rest/test_rest_lightpush.nim | 2 +- .../test_rest_lightpush_legacy.nim | 2 +- .../conf_builder/waku_conf_builder.nim | 2 +- waku/factory/node_factory.nim | 7 ++- waku/node/kernel_api/lightpush.nim | 61 ++++++++++--------- 9 files changed, 88 insertions(+), 44 deletions(-) diff --git a/apps/chat2/chat2.nim b/apps/chat2/chat2.nim index e2a46ca1b..71d8a4e6a 100644 --- a/apps/chat2/chat2.nim +++ b/apps/chat2/chat2.nim @@ -480,7 +480,9 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} = if conf.lightpushnode != "": let peerInfo = parsePeerInfo(conf.lightpushnode) if peerInfo.isOk(): - await mountLegacyLightPush(node) + (await node.mountLegacyLightPush()).isOkOr: + error "failed to mount legacy lightpush", error = error + quit(QuitFailure) node.mountLegacyLightPushClient() node.peerManager.addServicePeer(peerInfo.value, WakuLightpushCodec) else: diff --git a/tests/node/test_wakunode_legacy_lightpush.nim b/tests/node/test_wakunode_legacy_lightpush.nim index 80e623ce4..4aedd7d4b 100644 --- a/tests/node/test_wakunode_legacy_lightpush.nim +++ b/tests/node/test_wakunode_legacy_lightpush.nim @@ -13,6 +13,7 @@ import node/peer_manager, node/waku_node, node/kernel_api, + node/kernel_api/lightpush, waku_lightpush_legacy, waku_lightpush_legacy/common, waku_lightpush_legacy/protocol_metrics, @@ -56,7 +57,7 @@ suite "Waku Legacy Lightpush - End To End": (await server.mountRelay()).isOkOr: assert false, "Failed to mount relay" - await server.mountLegacyLightpush() # without rln-relay + check (await server.mountLegacyLightpush()).isOk() # without rln-relay client.mountLegacyLightpushClient() serverRemotePeerInfo = server.peerInfo.toRemotePeerInfo() @@ -147,7 +148,7 @@ suite "RLN Proofs as a Lightpush Service": (await server.mountRelay()).isOkOr: assert false, "Failed to mount relay" await server.mountRlnRelay(wakuRlnConfig) - await 
server.mountLegacyLightPush() + check (await server.mountLegacyLightPush()).isOk() client.mountLegacyLightPushClient() let manager1 = cast[OnchainGroupManager](server.wakuRlnRelay.groupManager) @@ -213,7 +214,7 @@ suite "Waku Legacy Lightpush message delivery": assert false, "Failed to mount relay" (await bridgeNode.mountRelay()).isOkOr: assert false, "Failed to mount relay" - await bridgeNode.mountLegacyLightPush() + check (await bridgeNode.mountLegacyLightPush()).isOk() lightNode.mountLegacyLightPushClient() discard await lightNode.peerManager.dialPeer( @@ -249,3 +250,19 @@ suite "Waku Legacy Lightpush message delivery": ## Cleanup await allFutures(lightNode.stop(), bridgeNode.stop(), destNode.stop()) + +suite "Waku Legacy Lightpush mounting behavior": + asyncTest "fails to mount when relay is not mounted": + ## Given a node without Relay mounted + let + key = generateSecp256k1Key() + node = newTestWakuNode(key, parseIpAddress("0.0.0.0"), Port(0)) + + # Do not mount Relay on purpose + check node.wakuRelay.isNil() + + ## Then mounting Legacy Lightpush must fail + let res = await node.mountLegacyLightPush() + check: + res.isErr() + res.error == MountWithoutRelayError diff --git a/tests/node/test_wakunode_lightpush.nim b/tests/node/test_wakunode_lightpush.nim index 29f72b2cc..7b4da6d4c 100644 --- a/tests/node/test_wakunode_lightpush.nim +++ b/tests/node/test_wakunode_lightpush.nim @@ -13,6 +13,7 @@ import node/peer_manager, node/waku_node, node/kernel_api, + node/kernel_api/lightpush, waku_lightpush, waku_rln_relay, ], @@ -55,7 +56,7 @@ suite "Waku Lightpush - End To End": (await server.mountRelay()).isOkOr: assert false, "Failed to mount relay" - await server.mountLightpush() # without rln-relay + check (await server.mountLightpush()).isOk() # without rln-relay client.mountLightpushClient() serverRemotePeerInfo = server.peerInfo.toRemotePeerInfo() @@ -147,7 +148,7 @@ suite "RLN Proofs as a Lightpush Service": (await server.mountRelay()).isOkOr: assert false, 
"Failed to mount relay" await server.mountRlnRelay(wakuRlnConfig) - await server.mountLightPush() + check (await server.mountLightPush()).isOk() client.mountLightPushClient() let manager1 = cast[OnchainGroupManager](server.wakuRlnRelay.groupManager) @@ -213,7 +214,7 @@ suite "Waku Lightpush message delivery": assert false, "Failed to mount relay" (await bridgeNode.mountRelay()).isOkOr: assert false, "Failed to mount relay" - await bridgeNode.mountLightPush() + check (await bridgeNode.mountLightPush()).isOk() lightNode.mountLightPushClient() discard await lightNode.peerManager.dialPeer( @@ -251,3 +252,19 @@ suite "Waku Lightpush message delivery": ## Cleanup await allFutures(lightNode.stop(), bridgeNode.stop(), destNode.stop()) + +suite "Waku Lightpush mounting behavior": + asyncTest "fails to mount when relay is not mounted": + ## Given a node without Relay mounted + let + key = generateSecp256k1Key() + node = newTestWakuNode(key, parseIpAddress("0.0.0.0"), Port(0)) + + # Do not mount Relay on purpose + check node.wakuRelay.isNil() + + ## Then mounting Lightpush must fail + let res = await node.mountLightPush() + check: + res.isErr() + res.error == MountWithoutRelayError diff --git a/tests/node/test_wakunode_sharding.nim b/tests/node/test_wakunode_sharding.nim index eefd8f06e..261077e36 100644 --- a/tests/node/test_wakunode_sharding.nim +++ b/tests/node/test_wakunode_sharding.nim @@ -282,7 +282,7 @@ suite "Sharding": asyncTest "lightpush": # Given a connected server and client subscribed to the same pubsub topic client.mountLegacyLightPushClient() - await server.mountLightpush() + check (await server.mountLightpush()).isOk() let topic = "/waku/2/rs/0/1" @@ -405,7 +405,7 @@ suite "Sharding": asyncTest "lightpush (automatic sharding filtering)": # Given a connected server and client using the same content topic (with two different formats) client.mountLegacyLightPushClient() - await server.mountLightpush() + check (await server.mountLightpush()).isOk() let 
contentTopicShort = "/toychat/2/huilong/proto" @@ -563,7 +563,7 @@ suite "Sharding": asyncTest "lightpush - exclusion (automatic sharding filtering)": # Given a connected server and client using different content topics client.mountLegacyLightPushClient() - await server.mountLightpush() + check (await server.mountLightpush()).isOk() let contentTopic1 = "/toychat/2/huilong/proto" @@ -874,7 +874,7 @@ suite "Sharding": asyncTest "Waku LightPush Sharding (Static Sharding)": # Given a connected server and client using two different pubsub topics client.mountLegacyLightPushClient() - await server.mountLightpush() + check (await server.mountLightpush()).isOk() # Given a connected server and client subscribed to multiple pubsub topics let diff --git a/tests/wakunode_rest/test_rest_lightpush.nim b/tests/wakunode_rest/test_rest_lightpush.nim index cc5c715b8..deba7de22 100644 --- a/tests/wakunode_rest/test_rest_lightpush.nim +++ b/tests/wakunode_rest/test_rest_lightpush.nim @@ -61,7 +61,7 @@ proc init( assert false, "Failed to mount relay: " & $error (await testSetup.serviceNode.mountRelay()).isOkOr: assert false, "Failed to mount relay: " & $error - await testSetup.serviceNode.mountLightPush(rateLimit) + check (await testSetup.serviceNode.mountLightPush(rateLimit)).isOk() testSetup.pushNode.mountLightPushClient() testSetup.serviceNode.peerManager.addServicePeer( diff --git a/tests/wakunode_rest/test_rest_lightpush_legacy.nim b/tests/wakunode_rest/test_rest_lightpush_legacy.nim index 526a6c24e..4043eeed9 100644 --- a/tests/wakunode_rest/test_rest_lightpush_legacy.nim +++ b/tests/wakunode_rest/test_rest_lightpush_legacy.nim @@ -61,7 +61,7 @@ proc init( assert false, "Failed to mount relay" (await testSetup.serviceNode.mountRelay()).isOkOr: assert false, "Failed to mount relay" - await testSetup.serviceNode.mountLegacyLightPush(rateLimit) + check (await testSetup.serviceNode.mountLegacyLightPush(rateLimit)).isOk() testSetup.pushNode.mountLegacyLightPushClient() 
testSetup.serviceNode.peerManager.addServicePeer( diff --git a/waku/factory/conf_builder/waku_conf_builder.nim b/waku/factory/conf_builder/waku_conf_builder.nim index 645869247..f3f942ecc 100644 --- a/waku/factory/conf_builder/waku_conf_builder.nim +++ b/waku/factory/conf_builder/waku_conf_builder.nim @@ -606,7 +606,7 @@ proc build*( let relayShardedPeerManagement = builder.relayShardedPeerManagement.get(false) let wakuFlags = CapabilitiesBitfield.init( - lightpush = lightPush, + lightpush = lightPush and relay, filter = filterServiceConf.isSome, store = storeServiceConf.isSome, relay = relay, diff --git a/waku/factory/node_factory.nim b/waku/factory/node_factory.nim index 34fc958fe..2cdfdb0d2 100644 --- a/waku/factory/node_factory.nim +++ b/waku/factory/node_factory.nim @@ -368,8 +368,11 @@ proc setupProtocols( # NOTE Must be mounted after relay if conf.lightPush: try: - await mountLightPush(node, node.rateLimitSettings.getSetting(LIGHTPUSH)) - await mountLegacyLightPush(node, node.rateLimitSettings.getSetting(LIGHTPUSH)) + (await mountLightPush(node, node.rateLimitSettings.getSetting(LIGHTPUSH))).isOkOr: + return err("failed to mount waku lightpush protocol: " & $error) + + (await mountLegacyLightPush(node, node.rateLimitSettings.getSetting(LIGHTPUSH))).isOkOr: + return err("failed to mount waku legacy lightpush protocol: " & $error) except CatchableError: return err("failed to mount waku lightpush protocol: " & getCurrentExceptionMsg()) diff --git a/waku/node/kernel_api/lightpush.nim b/waku/node/kernel_api/lightpush.nim index 9451767ac..2a5f6acbb 100644 --- a/waku/node/kernel_api/lightpush.nim +++ b/waku/node/kernel_api/lightpush.nim @@ -34,26 +34,27 @@ import logScope: topics = "waku node lightpush api" +const MountWithoutRelayError* = "cannot mount lightpush because relay is not mounted" + ## Waku lightpush proc mountLegacyLightPush*( node: WakuNode, rateLimit: RateLimitSetting = DefaultGlobalNonRelayRateLimit -) {.async.} = +): Future[Result[void, string]] 
{.async.} = info "mounting legacy light push" - let pushHandler = - if node.wakuRelay.isNil: - info "mounting legacy lightpush without relay (nil)" - legacy_lightpush_protocol.getNilPushHandler() + if node.wakuRelay.isNil(): + return err(MountWithoutRelayError) + + info "mounting legacy lightpush with relay" + let rlnPeer = + if node.wakuRlnRelay.isNil(): + info "mounting legacy lightpush without rln-relay" + none(WakuRLNRelay) else: - info "mounting legacy lightpush with relay" - let rlnPeer = - if isNil(node.wakuRlnRelay): - info "mounting legacy lightpush without rln-relay" - none(WakuRLNRelay) - else: - info "mounting legacy lightpush with rln-relay" - some(node.wakuRlnRelay) - legacy_lightpush_protocol.getRelayPushHandler(node.wakuRelay, rlnPeer) + info "mounting legacy lightpush with rln-relay" + some(node.wakuRlnRelay) + let pushHandler = + legacy_lightpush_protocol.getRelayPushHandler(node.wakuRelay, rlnPeer) node.wakuLegacyLightPush = WakuLegacyLightPush.new(node.peerManager, node.rng, pushHandler, some(rateLimit)) @@ -64,6 +65,9 @@ proc mountLegacyLightPush*( node.switch.mount(node.wakuLegacyLightPush, protocolMatcher(WakuLegacyLightPushCodec)) + info "legacy lightpush mounted successfully" + return ok() + proc mountLegacyLightPushClient*(node: WakuNode) = info "mounting legacy light push client" @@ -146,23 +150,21 @@ proc legacyLightpushPublish*( proc mountLightPush*( node: WakuNode, rateLimit: RateLimitSetting = DefaultGlobalNonRelayRateLimit -) {.async.} = +): Future[Result[void, string]] {.async.} = info "mounting light push" - let pushHandler = - if node.wakuRelay.isNil(): - info "mounting lightpush v2 without relay (nil)" - lightpush_protocol.getNilPushHandler() + if node.wakuRelay.isNil(): + return err(MountWithoutRelayError) + + info "mounting lightpush with relay" + let rlnPeer = + if node.wakuRlnRelay.isNil(): + info "mounting lightpush without rln-relay" + none(WakuRLNRelay) else: - info "mounting lightpush with relay" - let rlnPeer = - if 
isNil(node.wakuRlnRelay): - info "mounting lightpush without rln-relay" - none(WakuRLNRelay) - else: - info "mounting lightpush with rln-relay" - some(node.wakuRlnRelay) - lightpush_protocol.getRelayPushHandler(node.wakuRelay, rlnPeer) + info "mounting lightpush with rln-relay" + some(node.wakuRlnRelay) + let pushHandler = lightpush_protocol.getRelayPushHandler(node.wakuRelay, rlnPeer) node.wakuLightPush = WakuLightPush.new( node.peerManager, node.rng, pushHandler, node.wakuAutoSharding, some(rateLimit) @@ -174,6 +176,9 @@ proc mountLightPush*( node.switch.mount(node.wakuLightPush, protocolMatcher(WakuLightPushCodec)) + info "lightpush mounted successfully" + return ok() + proc mountLightPushClient*(node: WakuNode) = info "mounting light push client" From 9e2b3830e92419ab6fec3263f858bd872300b295 Mon Sep 17 00:00:00 2001 From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> Date: Mon, 15 Dec 2025 12:11:11 +0100 Subject: [PATCH 21/70] Distribute libwaku (#3612) * allow create libwaku pkg * fix Makefile create library extension libwaku * make sure libwaku is built as part of assets * Makefile: avoid rm libwaku before building it * properly format debian pkg in gh release workflow * waku.nimble set dylib extension correctly * properly pass lib name and ext to waku.nimble --- .github/workflows/release-assets.yml | 75 +++++++++++++++++++++++++--- Makefile | 30 +++++++---- waku.nimble | 30 +++++------ 3 files changed, 98 insertions(+), 37 deletions(-) diff --git a/.github/workflows/release-assets.yml b/.github/workflows/release-assets.yml index c6cfbd680..50e3c4c3d 100644 --- a/.github/workflows/release-assets.yml +++ b/.github/workflows/release-assets.yml @@ -41,25 +41,84 @@ jobs: .git/modules key: ${{ runner.os }}-${{matrix.arch}}-submodules-${{ steps.submodules.outputs.hash }} - - name: prep variables + - name: Get tag + id: version + run: | + # Use full tag, e.g., v0.37.0 + echo "version=${GITHUB_REF_NAME}" >> $GITHUB_OUTPUT + + - name: Prep variables id: 
vars run: | - NWAKU_ARTIFACT_NAME=$(echo "nwaku-${{matrix.arch}}-${{runner.os}}.tar.gz" | tr "[:upper:]" "[:lower:]") + VERSION=${{ steps.version.outputs.version }} - echo "nwaku=${NWAKU_ARTIFACT_NAME}" >> $GITHUB_OUTPUT + NWAKU_ARTIFACT_NAME=$(echo "waku-${{matrix.arch}}-${{runner.os}}.tar.gz" | tr "[:upper:]" "[:lower:]") + echo "waku=${NWAKU_ARTIFACT_NAME}" >> $GITHUB_OUTPUT - - name: Install dependencies + if [[ "${{ runner.os }}" == "Linux" ]]; then + LIBWAKU_ARTIFACT_NAME=$(echo "libwaku-${VERSION}-${{matrix.arch}}-${{runner.os}}-linux.deb" | tr "[:upper:]" "[:lower:]") + fi + + if [[ "${{ runner.os }}" == "macOS" ]]; then + LIBWAKU_ARTIFACT_NAME=$(echo "libwaku-${VERSION}-${{matrix.arch}}-macos.tar.gz" | tr "[:upper:]" "[:lower:]") + fi + + echo "libwaku=${LIBWAKU_ARTIFACT_NAME}" >> $GITHUB_OUTPUT + + - name: Install build dependencies + run: | + if [[ "${{ runner.os }}" == "Linux" ]]; then + sudo apt-get update && sudo apt-get install -y build-essential dpkg-dev + fi + + - name: Build Waku artifacts run: | OS=$([[ "${{runner.os}}" == "macOS" ]] && echo "macosx" || echo "linux") make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}}" V=1 update make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}} -d:postgres" CI=false wakunode2 make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}}" CI=false chat2 - tar -cvzf ${{steps.vars.outputs.nwaku}} ./build/ + tar -cvzf ${{steps.vars.outputs.waku}} ./build/ - - name: Upload asset + make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}} -d:postgres" CI=false libwaku + make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}} -d:postgres" CI=false STATIC=1 libwaku + + - name: Create distributable libwaku package + run: | + VERSION=${{ steps.version.outputs.version }} 
+ + if [[ "${{ runner.os }}" == "Linux" ]]; then + rm -rf pkg + mkdir -p pkg/DEBIAN pkg/usr/local/lib pkg/usr/local/include + cp build/libwaku.so pkg/usr/local/lib/ + cp build/libwaku.a pkg/usr/local/lib/ + cp library/libwaku.h pkg/usr/local/include/ + + echo "Package: waku" >> pkg/DEBIAN/control + echo "Version: ${VERSION}" >> pkg/DEBIAN/control + echo "Priority: optional" >> pkg/DEBIAN/control + echo "Section: libs" >> pkg/DEBIAN/control + echo "Architecture: ${{matrix.arch}}" >> pkg/DEBIAN/control + echo "Maintainer: Waku Team " >> pkg/DEBIAN/control + echo "Description: Waku library" >> pkg/DEBIAN/control + + dpkg-deb --build pkg ${{steps.vars.outputs.libwaku}} + fi + + if [[ "${{ runner.os }}" == "macOS" ]]; then + tar -cvzf ${{steps.vars.outputs.libwaku}} ./build/libwaku.dylib ./build/libwaku.a ./library/libwaku.h + fi + + - name: Upload waku artifact uses: actions/upload-artifact@v4.4.0 with: - name: ${{steps.vars.outputs.nwaku}} - path: ${{steps.vars.outputs.nwaku}} + name: waku-${{ steps.version.outputs.version }}-${{ matrix.arch }}-${{ runner.os }} + path: ${{ steps.vars.outputs.waku }} + if-no-files-found: error + + - name: Upload libwaku artifact + uses: actions/upload-artifact@v4.4.0 + with: + name: libwaku-${{ steps.version.outputs.version }}-${{ matrix.arch }}-${{ runner.os }} + path: ${{ steps.vars.outputs.libwaku }} if-no-files-found: error diff --git a/Makefile b/Makefile index 2f15ccd71..44f1c6495 100644 --- a/Makefile +++ b/Makefile @@ -426,18 +426,27 @@ docker-liteprotocoltester-push: .PHONY: cbindings cwaku_example libwaku STATIC ?= 0 +BUILD_COMMAND ?= libwakuDynamic + +ifeq ($(detected_OS),Windows) + LIB_EXT_DYNAMIC = dll + LIB_EXT_STATIC = lib +else ifeq ($(detected_OS),Darwin) + LIB_EXT_DYNAMIC = dylib + LIB_EXT_STATIC = a +else ifeq ($(detected_OS),Linux) + LIB_EXT_DYNAMIC = so + LIB_EXT_STATIC = a +endif + +LIB_EXT := $(LIB_EXT_DYNAMIC) +ifeq ($(STATIC), 1) + LIB_EXT = $(LIB_EXT_STATIC) + BUILD_COMMAND = libwakuStatic +endif libwaku: | 
build deps librln - rm -f build/libwaku* - -ifeq ($(STATIC), 1) - echo -e $(BUILD_MSG) "build/$@.a" && $(ENV_SCRIPT) nim libwakuStatic $(NIM_PARAMS) waku.nims -else ifeq ($(detected_OS),Windows) - make -f scripts/libwaku_windows_setup.mk windows-setup - echo -e $(BUILD_MSG) "build/$@.dll" && $(ENV_SCRIPT) nim libwakuDynamic $(NIM_PARAMS) waku.nims -else - echo -e $(BUILD_MSG) "build/$@.so" && $(ENV_SCRIPT) nim libwakuDynamic $(NIM_PARAMS) waku.nims -endif + echo -e $(BUILD_MSG) "build/$@.$(LIB_EXT)" && $(ENV_SCRIPT) nim $(BUILD_COMMAND) $(NIM_PARAMS) waku.nims $@.$(LIB_EXT) ##################### ## Mobile Bindings ## @@ -549,4 +558,3 @@ release-notes: sed -E 's@#([0-9]+)@[#\1](https://github.com/waku-org/nwaku/issues/\1)@g' # I could not get the tool to replace issue ids with links, so using sed for now, # asked here: https://github.com/bvieira/sv4git/discussions/101 - diff --git a/waku.nimble b/waku.nimble index 79fdd9fd6..09ff48969 100644 --- a/waku.nimble +++ b/waku.nimble @@ -61,27 +61,21 @@ proc buildBinary(name: string, srcDir = "./", params = "", lang = "c") = exec "nim " & lang & " --out:build/" & name & " --mm:refc " & extra_params & " " & srcDir & name & ".nim" -proc buildLibrary(name: string, srcDir = "./", params = "", `type` = "static") = +proc buildLibrary(lib_name: string, srcDir = "./", params = "", `type` = "static") = if not dirExists "build": mkDir "build" # allow something like "nim nimbus --verbosity:0 --hints:off nimbus.nims" var extra_params = params - for i in 2 ..< paramCount(): + for i in 2 ..< (paramCount() - 1): extra_params &= " " & paramStr(i) if `type` == "static": - exec "nim c" & " --out:build/" & name & - ".a --threads:on --app:staticlib --opt:size --noMain --mm:refc --header -d:metrics --nimMainPrefix:libwaku --skipParentCfg:on -d:discv5_protocol_id=d5waku " & - extra_params & " " & srcDir & name & ".nim" + exec "nim c" & " --out:build/" & lib_name & + " --threads:on --app:staticlib --opt:size --noMain --mm:refc --header 
-d:metrics --nimMainPrefix:libwaku --skipParentCfg:on -d:discv5_protocol_id=d5waku " & + extra_params & " " & srcDir & "libwaku.nim" else: - let lib_name = (when defined(windows): toDll(name) else: name & ".so") - when defined(windows): - exec "nim c" & " --out:build/" & lib_name & - " --threads:on --app:lib --opt:size --noMain --mm:refc --header -d:metrics --nimMainPrefix:libwaku --skipParentCfg:off -d:discv5_protocol_id=d5waku " & - extra_params & " " & srcDir & name & ".nim" - else: - exec "nim c" & " --out:build/" & lib_name & - " --threads:on --app:lib --opt:size --noMain --mm:refc --header -d:metrics --nimMainPrefix:libwaku --skipParentCfg:on -d:discv5_protocol_id=d5waku " & - extra_params & " " & srcDir & name & ".nim" + exec "nim c" & " --out:build/" & lib_name & + " --threads:on --app:lib --opt:size --noMain --mm:refc --header -d:metrics --nimMainPrefix:libwaku --skipParentCfg:off -d:discv5_protocol_id=d5waku " & + extra_params & " " & srcDir & "libwaku.nim" proc buildMobileAndroid(srcDir = ".", params = "") = let cpu = getEnv("CPU") @@ -206,12 +200,12 @@ let chroniclesParams = "--warning:UnusedImport:on " & "-d:chronicles_log_level=TRACE" task libwakuStatic, "Build the cbindings waku node library": - let name = "libwaku" - buildLibrary name, "library/", chroniclesParams, "static" + let lib_name = paramStr(paramCount()) + buildLibrary lib_name, "library/", chroniclesParams, "static" task libwakuDynamic, "Build the cbindings waku node library": - let name = "libwaku" - buildLibrary name, "library/", chroniclesParams, "dynamic" + let lib_name = paramStr(paramCount()) + buildLibrary lib_name, "library/", chroniclesParams, "dynamic" ### Mobile Android task libWakuAndroid, "Build the mobile bindings for Android": From 10dc3d3eb4b6a3d4313f7b2cc4a85a925e9ce039 Mon Sep 17 00:00:00 2001 From: Fabiana Cecin Date: Mon, 15 Dec 2025 09:15:33 -0300 Subject: [PATCH 22/70] chore: misc CI fixes (#3664) * add make update to CI workflow * add a nwaku -> logos-messaging-nim 
workflow rename * pin local container-image.yml workflow to a commit --- .github/workflows/ci.yml | 8 +++++++- .github/workflows/container-image.yml | 3 ++- 2 files changed, 9 insertions(+), 2 deletions(-) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index e3186a007..b51f4621c 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -76,6 +76,9 @@ jobs: .git/modules key: ${{ runner.os }}-vendor-modules-${{ steps.submodules.outputs.hash }} + - name: Make update + run: make update + - name: Build binaries run: make V=1 QUICK_AND_DIRTY_COMPILER=1 all tools @@ -114,6 +117,9 @@ jobs: .git/modules key: ${{ runner.os }}-vendor-modules-${{ steps.submodules.outputs.hash }} + - name: Make update + run: make update + - name: Run tests run: | postgres_enabled=0 @@ -132,7 +138,7 @@ jobs: build-docker-image: needs: changes if: ${{ needs.changes.outputs.v2 == 'true' || needs.changes.outputs.common == 'true' || needs.changes.outputs.docker == 'true' }} - uses: logos-messaging/nwaku/.github/workflows/container-image.yml@master + uses: logos-messaging/logos-messaging-nim/.github/workflows/container-image.yml@4139681df984de008069e86e8ce695f1518f1c0b secrets: inherit nwaku-nwaku-interop-tests: diff --git a/.github/workflows/container-image.yml b/.github/workflows/container-image.yml index cfa66d20a..2bc08be2f 100644 --- a/.github/workflows/container-image.yml +++ b/.github/workflows/container-image.yml @@ -41,7 +41,7 @@ jobs: env: QUAY_PASSWORD: ${{ secrets.QUAY_PASSWORD }} QUAY_USER: ${{ secrets.QUAY_USER }} - + - name: Checkout code if: ${{ steps.secrets.outcome == 'success' }} uses: actions/checkout@v4 @@ -65,6 +65,7 @@ jobs: id: build if: ${{ steps.secrets.outcome == 'success' }} run: | + make update make -j${NPROC} V=1 QUICK_AND_DIRTY_COMPILER=1 NIMFLAGS="-d:disableMarchNative -d:postgres -d:chronicles_colors:none" wakunode2 From 2477c4980f15df0efc2eedf27d7593e0dd2b1e1b Mon Sep 17 00:00:00 2001 From: Fabiana Cecin Date: Mon, 15 Dec 2025 10:33:39 
-0300 Subject: [PATCH 23/70] chore: update ci container-image.yml ref to a commit in master (#3666) --- .github/workflows/ci.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index b51f4621c..9c94577f0 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -138,7 +138,7 @@ jobs: build-docker-image: needs: changes if: ${{ needs.changes.outputs.v2 == 'true' || needs.changes.outputs.common == 'true' || needs.changes.outputs.docker == 'true' }} - uses: logos-messaging/logos-messaging-nim/.github/workflows/container-image.yml@4139681df984de008069e86e8ce695f1518f1c0b + uses: logos-messaging/logos-messaging-nim/.github/workflows/container-image.yml@10dc3d3eb4b6a3d4313f7b2cc4a85a925e9ce039 secrets: inherit nwaku-nwaku-interop-tests: From 3323325526bfa4898ca0c5c289638585c341af22 Mon Sep 17 00:00:00 2001 From: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com> Date: Tue, 16 Dec 2025 02:52:20 +0100 Subject: [PATCH 24/70] chore: extend RequestBroker with supporting native and external types and added possibility to define non-async (aka sync) requests for simplicity and performance (#3665) * chore: extend RequestBroker with supporting native and external types and added possibility to define non-async (aka sync) requests for simplicity and performance * Adapt gcsafe pragma for RequestBroker sync requests and provider signatures as requirement --------- Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> --- tests/common/test_request_broker.nim | 328 ++++++++++++++++++- waku/common/broker/request_broker.nim | 438 ++++++++++++++++++++------ 2 files changed, 651 insertions(+), 115 deletions(-) diff --git a/tests/common/test_request_broker.nim b/tests/common/test_request_broker.nim index 2ffd9cbf8..a534216dc 100644 --- a/tests/common/test_request_broker.nim +++ b/tests/common/test_request_broker.nim @@ -6,6 +6,10 @@ import std/strutils import 
waku/common/broker/request_broker +## --------------------------------------------------------------------------- +## Async-mode brokers + tests +## --------------------------------------------------------------------------- + RequestBroker: type SimpleResponse = object value*: string @@ -31,11 +35,14 @@ RequestBroker: suffix: string ): Future[Result[DualResponse, string]] {.async.} -RequestBroker: +RequestBroker(async): type ImplicitResponse = ref object note*: string -suite "RequestBroker macro": +static: + doAssert typeof(SimpleResponse.request()) is Future[Result[SimpleResponse, string]] + +suite "RequestBroker macro (async mode)": test "serves zero-argument providers": check SimpleResponse .setProvider( @@ -52,7 +59,7 @@ suite "RequestBroker macro": test "zero-argument request errors when unset": let res = waitFor SimpleResponse.request() - check res.isErr + check res.isErr() check res.error.contains("no zero-arg provider") test "serves input-based providers": @@ -78,7 +85,6 @@ suite "RequestBroker macro": .setProvider( proc(key: string, subKey: int): Future[Result[KeyedResponse, string]] {.async.} = raise newException(ValueError, "simulated failure") - ok(KeyedResponse(key: key, payload: "")) ) .isOk() @@ -90,7 +96,7 @@ suite "RequestBroker macro": test "input request errors when unset": let res = waitFor KeyedResponse.request("foo", 2) - check res.isErr + check res.isErr() check res.error.contains("input signature") test "supports both provider types simultaneously": @@ -109,11 +115,11 @@ suite "RequestBroker macro": .isOk() let noInput = waitFor DualResponse.request() - check noInput.isOk + check noInput.isOk() check noInput.value.note == "base" let withInput = waitFor DualResponse.request("-extra") - check withInput.isOk + check withInput.isOk() check withInput.value.note == "base-extra" check withInput.value.count == 6 @@ -129,7 +135,7 @@ suite "RequestBroker macro": DualResponse.clearProvider() let res = waitFor DualResponse.request() - check res.isErr + 
check res.isErr() test "implicit zero-argument provider works by default": check ImplicitResponse @@ -140,14 +146,14 @@ suite "RequestBroker macro": .isOk() let res = waitFor ImplicitResponse.request() - check res.isOk + check res.isOk() ImplicitResponse.clearProvider() check res.value.note == "auto" test "implicit zero-argument request errors when unset": let res = waitFor ImplicitResponse.request() - check res.isErr + check res.isErr() check res.error.contains("no zero-arg provider") test "no provider override": @@ -171,7 +177,7 @@ suite "RequestBroker macro": check DualResponse.setProvider(overrideProc).isErr() let noInput = waitFor DualResponse.request() - check noInput.isOk + check noInput.isOk() check noInput.value.note == "base" let stillResponse = waitFor DualResponse.request(" still works") @@ -191,8 +197,306 @@ suite "RequestBroker macro": check DualResponse.setProvider(overrideProc).isOk() let nowSuccWithOverride = waitFor DualResponse.request() - check nowSuccWithOverride.isOk + check nowSuccWithOverride.isOk() check nowSuccWithOverride.value.note == "something else" check nowSuccWithOverride.value.count == 1 DualResponse.clearProvider() + +## --------------------------------------------------------------------------- +## Sync-mode brokers + tests +## --------------------------------------------------------------------------- + +RequestBroker(sync): + type SimpleResponseSync = object + value*: string + + proc signatureFetch*(): Result[SimpleResponseSync, string] + +RequestBroker(sync): + type KeyedResponseSync = object + key*: string + payload*: string + + proc signatureFetchWithKey*( + key: string, subKey: int + ): Result[KeyedResponseSync, string] + +RequestBroker(sync): + type DualResponseSync = object + note*: string + count*: int + + proc signatureNoInput*(): Result[DualResponseSync, string] + proc signatureWithInput*(suffix: string): Result[DualResponseSync, string] + +RequestBroker(sync): + type ImplicitResponseSync = ref object + note*: string + 
+static: + doAssert typeof(SimpleResponseSync.request()) is Result[SimpleResponseSync, string] + doAssert not ( + typeof(SimpleResponseSync.request()) is Future[Result[SimpleResponseSync, string]] + ) + doAssert typeof(KeyedResponseSync.request("topic", 1)) is + Result[KeyedResponseSync, string] + +suite "RequestBroker macro (sync mode)": + test "serves zero-argument providers (sync)": + check SimpleResponseSync + .setProvider( + proc(): Result[SimpleResponseSync, string] = + ok(SimpleResponseSync(value: "hi")) + ) + .isOk() + + let res = SimpleResponseSync.request() + check res.isOk() + check res.value.value == "hi" + + SimpleResponseSync.clearProvider() + + test "zero-argument request errors when unset (sync)": + let res = SimpleResponseSync.request() + check res.isErr() + check res.error.contains("no zero-arg provider") + + test "serves input-based providers (sync)": + var seen: seq[string] = @[] + check KeyedResponseSync + .setProvider( + proc(key: string, subKey: int): Result[KeyedResponseSync, string] = + seen.add(key) + ok(KeyedResponseSync(key: key, payload: key & "-payload+" & $subKey)) + ) + .isOk() + + let res = KeyedResponseSync.request("topic", 1) + check res.isOk() + check res.value.key == "topic" + check res.value.payload == "topic-payload+1" + check seen == @["topic"] + + KeyedResponseSync.clearProvider() + + test "catches provider exception (sync)": + check KeyedResponseSync + .setProvider( + proc(key: string, subKey: int): Result[KeyedResponseSync, string] = + raise newException(ValueError, "simulated failure") + ) + .isOk() + + let res = KeyedResponseSync.request("neglected", 11) + check res.isErr() + check res.error.contains("simulated failure") + + KeyedResponseSync.clearProvider() + + test "input request errors when unset (sync)": + let res = KeyedResponseSync.request("foo", 2) + check res.isErr() + check res.error.contains("input signature") + + test "supports both provider types simultaneously (sync)": + check DualResponseSync + 
.setProvider( + proc(): Result[DualResponseSync, string] = + ok(DualResponseSync(note: "base", count: 1)) + ) + .isOk() + + check DualResponseSync + .setProvider( + proc(suffix: string): Result[DualResponseSync, string] = + ok(DualResponseSync(note: "base" & suffix, count: suffix.len)) + ) + .isOk() + + let noInput = DualResponseSync.request() + check noInput.isOk() + check noInput.value.note == "base" + + let withInput = DualResponseSync.request("-extra") + check withInput.isOk() + check withInput.value.note == "base-extra" + check withInput.value.count == 6 + + DualResponseSync.clearProvider() + + test "clearProvider resets both entries (sync)": + check DualResponseSync + .setProvider( + proc(): Result[DualResponseSync, string] = + ok(DualResponseSync(note: "temp", count: 0)) + ) + .isOk() + DualResponseSync.clearProvider() + + let res = DualResponseSync.request() + check res.isErr() + + test "implicit zero-argument provider works by default (sync)": + check ImplicitResponseSync + .setProvider( + proc(): Result[ImplicitResponseSync, string] = + ok(ImplicitResponseSync(note: "auto")) + ) + .isOk() + + let res = ImplicitResponseSync.request() + check res.isOk() + + ImplicitResponseSync.clearProvider() + check res.value.note == "auto" + + test "implicit zero-argument request errors when unset (sync)": + let res = ImplicitResponseSync.request() + check res.isErr() + check res.error.contains("no zero-arg provider") + + test "implicit zero-argument provider raises error (sync)": + check ImplicitResponseSync + .setProvider( + proc(): Result[ImplicitResponseSync, string] = + raise newException(ValueError, "simulated failure") + ) + .isOk() + + let res = ImplicitResponseSync.request() + check res.isErr() + check res.error.contains("simulated failure") + + ImplicitResponseSync.clearProvider() + +## --------------------------------------------------------------------------- +## POD / external type brokers + tests (distinct/alias behavior) +## 
--------------------------------------------------------------------------- + +type ExternalDefinedTypeAsync = object + label*: string + +type ExternalDefinedTypeSync = object + label*: string + +type ExternalDefinedTypeShared = object + label*: string + +RequestBroker: + type PodResponse = int + + proc signatureFetch*(): Future[Result[PodResponse, string]] {.async.} + +RequestBroker: + type ExternalAliasedResponse = ExternalDefinedTypeAsync + + proc signatureFetch*(): Future[Result[ExternalAliasedResponse, string]] {.async.} + +RequestBroker(sync): + type ExternalAliasedResponseSync = ExternalDefinedTypeSync + + proc signatureFetch*(): Result[ExternalAliasedResponseSync, string] + +RequestBroker(sync): + type DistinctStringResponseA = distinct string + +RequestBroker(sync): + type DistinctStringResponseB = distinct string + +RequestBroker(sync): + type ExternalDistinctResponseA = distinct ExternalDefinedTypeShared + +RequestBroker(sync): + type ExternalDistinctResponseB = distinct ExternalDefinedTypeShared + +suite "RequestBroker macro (POD/external types)": + test "supports non-object response types (async)": + check PodResponse + .setProvider( + proc(): Future[Result[PodResponse, string]] {.async.} = + ok(PodResponse(123)) + ) + .isOk() + + let res = waitFor PodResponse.request() + check res.isOk() + check int(res.value) == 123 + + PodResponse.clearProvider() + + test "supports aliased external types (async)": + check ExternalAliasedResponse + .setProvider( + proc(): Future[Result[ExternalAliasedResponse, string]] {.async.} = + ok(ExternalAliasedResponse(ExternalDefinedTypeAsync(label: "ext"))) + ) + .isOk() + + let res = waitFor ExternalAliasedResponse.request() + check res.isOk() + check ExternalDefinedTypeAsync(res.value).label == "ext" + + ExternalAliasedResponse.clearProvider() + + test "supports aliased external types (sync)": + check ExternalAliasedResponseSync + .setProvider( + proc(): Result[ExternalAliasedResponseSync, string] = + 
ok(ExternalAliasedResponseSync(ExternalDefinedTypeSync(label: "ext"))) + ) + .isOk() + + let res = ExternalAliasedResponseSync.request() + check res.isOk() + check ExternalDefinedTypeSync(res.value).label == "ext" + + ExternalAliasedResponseSync.clearProvider() + + test "distinct response types avoid overload ambiguity (sync)": + check DistinctStringResponseA + .setProvider( + proc(): Result[DistinctStringResponseA, string] = + ok(DistinctStringResponseA("a")) + ) + .isOk() + + check DistinctStringResponseB + .setProvider( + proc(): Result[DistinctStringResponseB, string] = + ok(DistinctStringResponseB("b")) + ) + .isOk() + + check ExternalDistinctResponseA + .setProvider( + proc(): Result[ExternalDistinctResponseA, string] = + ok(ExternalDistinctResponseA(ExternalDefinedTypeShared(label: "ea"))) + ) + .isOk() + + check ExternalDistinctResponseB + .setProvider( + proc(): Result[ExternalDistinctResponseB, string] = + ok(ExternalDistinctResponseB(ExternalDefinedTypeShared(label: "eb"))) + ) + .isOk() + + let resA = DistinctStringResponseA.request() + let resB = DistinctStringResponseB.request() + check resA.isOk() + check resB.isOk() + check string(resA.value) == "a" + check string(resB.value) == "b" + + let resEA = ExternalDistinctResponseA.request() + let resEB = ExternalDistinctResponseB.request() + check resEA.isOk() + check resEB.isOk() + check ExternalDefinedTypeShared(resEA.value).label == "ea" + check ExternalDefinedTypeShared(resEB.value).label == "eb" + + DistinctStringResponseA.clearProvider() + DistinctStringResponseB.clearProvider() + ExternalDistinctResponseA.clearProvider() + ExternalDistinctResponseB.clearProvider() diff --git a/waku/common/broker/request_broker.nim b/waku/common/broker/request_broker.nim index a8a6651d7..dece77381 100644 --- a/waku/common/broker/request_broker.nim +++ b/waku/common/broker/request_broker.nim @@ -6,8 +6,15 @@ ## Worth considering using it in a single provider, many requester scenario. 
## ## Provides a declarative way to define an immutable value type together with a -## thread-local broker that can register an asynchronous provider, dispatch typed -## requests and clear provider. +## thread-local broker that can register an asynchronous or synchronous provider, +## dispatch typed requests and clear provider. +## +## Consider using the `sync` mode RequestBroker when you need to provide simple value(s) +## where there is no long-running async operation involved. +## Typically it acts as an accessor for local state or a generic setting. +## +## `async` mode is better suited when you request data that may involve a long-running IO operation +## or action. ## ## Usage: ## Declare your desired request type inside a `RequestBroker` macro, add any number of fields. @@ -24,6 +31,56 @@ ## proc signature*(arg1: ArgType, arg2: AnotherArgType): Future[Result[TypeName, string]] ## ## ``` +## +## Sync mode (no `async` / `Future`) can be generated with: +## +## ```nim +## RequestBroker(sync): +## type TypeName = object +## field1*: FieldType +## +## proc signature*(): Result[TypeName, string] +## proc signature*(arg1: ArgType): Result[TypeName, string] +## ``` +## +## Note: When the request type is declared as a native type / alias / externally-defined +## type (i.e. not an inline `object` / `ref object` definition), RequestBroker +## will wrap it in `distinct` automatically unless you already used `distinct`. +## This avoids overload ambiguity when multiple brokers share the same +## underlying base type (Nim overload resolution does not consider return type). +## +## This means that for non-object request types you typically: +## - construct values with an explicit cast/constructor, e.g. `MyType("x")` +## - unwrap with a cast when needed, e.g. 
`string(myVal)` or `BaseType(myVal)` +## +## Example (native response type): +## ```nim +## RequestBroker(sync): +## type MyCount = int # exported as: `distinct int` +## +## MyCount.setProvider(proc(): Result[MyCount, string] = ok(MyCount(42))) +## let res = MyCount.request() +## if res.isOk(): +## let raw = int(res.get()) +## ``` +## +## Example (externally-defined type): +## ```nim +## type External = object +## label*: string +## +## RequestBroker: +## type MyExternal = External # exported as: `distinct External` +## +## MyExternal.setProvider( +## proc(): Future[Result[MyExternal, string]] {.async.} = +## ok(MyExternal(External(label: "hi"))) +## ) +## let res = await MyExternal.request() +## if res.isOk(): +## let base = External(res.get()) +## echo base.label +## ``` ## The 'TypeName' object defines the requestable data (but it can also be seen as a request for an action with a return value). ## The 'signature' proc defines the provider(s) signature, which is enforced at compile time. ## One signature can be with no arguments, another with any number of arguments - where the input arguments are @@ -31,12 +88,12 @@ ## ## After this, you can register a provider anywhere in your code with ## `TypeName.setProvider(...)`, which returns an error if a provider is already registered. -## Providers are async procs or lambdas that take no arguments and return a Future[Result[TypeName, string]]. +## Providers are async procs/lambdas in default mode and sync procs in sync mode. ## Only one provider can be registered at a time per signature type (zero arg and/or multi arg). ## ## Requests can be made from anywhere with no direct dependency on the provider by ## calling `TypeName.request()` - with arguments respecting the signature(s). -## This will asynchronously call the registered provider and return a Future[Result[TypeName, string]]. +## In async mode, this returns a Future[Result[TypeName, string]]. In sync mode, it returns Result[TypeName, string]. 
## ## Whenever you no longer want to process requests (or your object instance that provides the request goes out of scope), ## you can remove it from the broker with `TypeName.clearProvider()`. @@ -49,10 +106,10 @@ ## text*: string ## ## ## Define the request and provider signature, which is enforced at compile time. -## proc signature*(): Future[Result[Greeting, string]] +## proc signature*(): Future[Result[Greeting, string]] {.async.} ## ## ## Also possible to define signature with arbitrary input arguments. -## proc signature*(lang: string): Future[Result[Greeting, string]] +## proc signature*(lang: string): Future[Result[Greeting, string]] {.async.} ## ## ... ## Greeting.setProvider( @@ -60,6 +117,23 @@ ## ok(Greeting(text: "hello")) ## ) ## let res = await Greeting.request() +## +## +## ... +## # using native type as response for a synchronous request. +## RequestBroker(sync): +## type NeedThatInfo = string +## +##... +## NeedThatInfo.setProvider( +## proc(): Result[NeedThatInfo, string] = +## ok(NeedThatInfo("this is the info you wanted")) +## ) +## let res = NeedThatInfo.request().valueOr: +## echo "not ok due to: " & error +## NeedThatInfo(":-(") +## +## echo string(res) ## ``` ## If no `signature` proc is declared, a zero-argument form is generated ## automatically, so the caller only needs to provide the type definition. @@ -77,7 +151,11 @@ proc errorFuture[T](message: string): Future[Result[T, string]] {.inline.} = fut.complete(err(Result[T, string], message)) fut -proc isReturnTypeValid(returnType, typeIdent: NimNode): bool = +type RequestBrokerMode = enum + rbAsync + rbSync + +proc isAsyncReturnTypeValid(returnType, typeIdent: NimNode): bool = ## Accept Future[Result[TypeIdent, string]] as the contract. 
if returnType.kind != nnkBracketExpr or returnType.len != 2: return false @@ -92,6 +170,23 @@ proc isReturnTypeValid(returnType, typeIdent: NimNode): bool = return false inner[2].kind == nnkIdent and inner[2].eqIdent("string") +proc isSyncReturnTypeValid(returnType, typeIdent: NimNode): bool = + ## Accept Result[TypeIdent, string] as the contract. + if returnType.kind != nnkBracketExpr or returnType.len != 3: + return false + if returnType[0].kind != nnkIdent or not returnType[0].eqIdent("Result"): + return false + if returnType[1].kind != nnkIdent or not returnType[1].eqIdent($typeIdent): + return false + returnType[2].kind == nnkIdent and returnType[2].eqIdent("string") + +proc isReturnTypeValid(returnType, typeIdent: NimNode, mode: RequestBrokerMode): bool = + case mode + of rbAsync: + isAsyncReturnTypeValid(returnType, typeIdent) + of rbSync: + isSyncReturnTypeValid(returnType, typeIdent) + proc cloneParams(params: seq[NimNode]): seq[NimNode] = ## Deep copy parameter definitions so they can be inserted in multiple places. 
result = @[] @@ -109,73 +204,122 @@ proc collectParamNames(params: seq[NimNode]): seq[NimNode] = continue result.add(ident($nameNode)) -proc makeProcType(returnType: NimNode, params: seq[NimNode]): NimNode = +proc makeProcType( + returnType: NimNode, params: seq[NimNode], mode: RequestBrokerMode +): NimNode = var formal = newTree(nnkFormalParams) formal.add(returnType) for param in params: formal.add(param) - let pragmas = newTree(nnkPragma, ident("async")) - newTree(nnkProcTy, formal, pragmas) + case mode + of rbAsync: + let pragmas = newTree(nnkPragma, ident("async")) + newTree(nnkProcTy, formal, pragmas) + of rbSync: + let raisesPragma = newTree( + nnkExprColonExpr, ident("raises"), newTree(nnkBracket, ident("CatchableError")) + ) + let pragmas = newTree(nnkPragma, raisesPragma, ident("gcsafe")) + newTree(nnkProcTy, formal, pragmas) -macro RequestBroker*(body: untyped): untyped = +proc parseMode(modeNode: NimNode): RequestBrokerMode = + ## Parses the mode selector for the 2-argument macro overload. + ## Supported spellings: `sync` / `async` (case-insensitive). + let raw = ($modeNode).strip().toLowerAscii() + case raw + of "sync": + rbSync + of "async": + rbAsync + else: + error("RequestBroker mode must be `sync` or `async` (default is async)", modeNode) + +proc ensureDistinctType(rhs: NimNode): NimNode = + ## For PODs / aliases / externally-defined types, wrap in `distinct` unless + ## it's already distinct. 
+ if rhs.kind == nnkDistinctTy: + return copyNimTree(rhs) + newTree(nnkDistinctTy, copyNimTree(rhs)) + +proc generateRequestBroker(body: NimNode, mode: RequestBrokerMode): NimNode = when defined(requestBrokerDebug): echo body.treeRepr + echo "RequestBroker mode: ", $mode var typeIdent: NimNode = nil var objectDef: NimNode = nil - var isRefObject = false for stmt in body: if stmt.kind == nnkTypeSection: for def in stmt: if def.kind != nnkTypeDef: continue + if not typeIdent.isNil(): + error("Only one type may be declared inside RequestBroker", def) + + typeIdent = baseTypeIdent(def[0]) let rhs = def[2] - var objectType: NimNode + + ## Support inline object types (fields are auto-exported) + ## AND non-object types / aliases (e.g. `string`, `int`, `OtherType`). case rhs.kind of nnkObjectTy: - objectType = rhs + let recList = rhs[2] + if recList.kind != nnkRecList: + error("RequestBroker object must declare a standard field list", rhs) + var exportedRecList = newTree(nnkRecList) + for field in recList: + case field.kind + of nnkIdentDefs: + ensureFieldDef(field) + var cloned = copyNimTree(field) + for i in 0 ..< cloned.len - 2: + cloned[i] = exportIdentNode(cloned[i]) + exportedRecList.add(cloned) + of nnkEmpty: + discard + else: + error( + "RequestBroker object definition only supports simple field declarations", + field, + ) + objectDef = newTree( + nnkObjectTy, copyNimTree(rhs[0]), copyNimTree(rhs[1]), exportedRecList + ) of nnkRefTy: - isRefObject = true - if rhs.len != 1 or rhs[0].kind != nnkObjectTy: - error( - "RequestBroker ref object must wrap a concrete object definition", rhs + if rhs.len != 1: + error("RequestBroker ref type must have a single base", rhs) + if rhs[0].kind == nnkObjectTy: + let obj = rhs[0] + let recList = obj[2] + if recList.kind != nnkRecList: + error("RequestBroker object must declare a standard field list", obj) + var exportedRecList = newTree(nnkRecList) + for field in recList: + case field.kind + of nnkIdentDefs: + 
ensureFieldDef(field) + var cloned = copyNimTree(field) + for i in 0 ..< cloned.len - 2: + cloned[i] = exportIdentNode(cloned[i]) + exportedRecList.add(cloned) + of nnkEmpty: + discard + else: + error( + "RequestBroker object definition only supports simple field declarations", + field, + ) + let exportedObjectType = newTree( + nnkObjectTy, copyNimTree(obj[0]), copyNimTree(obj[1]), exportedRecList ) - objectType = rhs[0] - else: - continue - if not typeIdent.isNil(): - error("Only one object type may be declared inside RequestBroker", def) - typeIdent = baseTypeIdent(def[0]) - let recList = objectType[2] - if recList.kind != nnkRecList: - error("RequestBroker object must declare a standard field list", objectType) - var exportedRecList = newTree(nnkRecList) - for field in recList: - case field.kind - of nnkIdentDefs: - ensureFieldDef(field) - var cloned = copyNimTree(field) - for i in 0 ..< cloned.len - 2: - cloned[i] = exportIdentNode(cloned[i]) - exportedRecList.add(cloned) - of nnkEmpty: - discard + objectDef = newTree(nnkRefTy, exportedObjectType) else: - error( - "RequestBroker object definition only supports simple field declarations", - field, - ) - let exportedObjectType = newTree( - nnkObjectTy, - copyNimTree(objectType[0]), - copyNimTree(objectType[1]), - exportedRecList, - ) - if isRefObject: - objectDef = newTree(nnkRefTy, exportedObjectType) + ## `ref SomeType` (SomeType can be defined elsewhere) + objectDef = ensureDistinctType(rhs) else: - objectDef = exportedObjectType + ## Non-object type / alias (e.g. `string`, `int`, `SomeExternalType`). 
+ objectDef = ensureDistinctType(rhs) if typeIdent.isNil(): - error("RequestBroker body must declare exactly one object type", body) + error("RequestBroker body must declare exactly one type", body) when defined(requestBrokerDebug): echo "RequestBroker generating type: ", $typeIdent @@ -183,7 +327,6 @@ macro RequestBroker*(body: untyped): untyped = let exportedTypeIdent = postfix(copyNimTree(typeIdent), "*") let typeDisplayName = sanitizeIdentName(typeIdent) let typeNameLit = newLit(typeDisplayName) - let isRefObjectLit = newLit(isRefObject) var zeroArgSig: NimNode = nil var zeroArgProviderName: NimNode = nil var zeroArgFieldName: NimNode = nil @@ -211,10 +354,14 @@ macro RequestBroker*(body: untyped): untyped = if params.len == 0: error("Signature must declare a return type", stmt) let returnType = params[0] - if not isReturnTypeValid(returnType, typeIdent): - error( - "Signature must return Future[Result[`" & $typeIdent & "`, string]]", stmt - ) + if not isReturnTypeValid(returnType, typeIdent, mode): + case mode + of rbAsync: + error( + "Signature must return Future[Result[`" & $typeIdent & "`, string]]", stmt + ) + of rbSync: + error("Signature must return Result[`" & $typeIdent & "`, string]", stmt) let paramCount = params.len - 1 if paramCount == 0: if zeroArgSig != nil: @@ -258,14 +405,20 @@ macro RequestBroker*(body: untyped): untyped = var typeSection = newTree(nnkTypeSection) typeSection.add(newTree(nnkTypeDef, exportedTypeIdent, newEmptyNode(), objectDef)) - let returnType = quote: - Future[Result[`typeIdent`, string]] + let returnType = + case mode + of rbAsync: + quote: + Future[Result[`typeIdent`, string]] + of rbSync: + quote: + Result[`typeIdent`, string] if not zeroArgSig.isNil(): - let procType = makeProcType(returnType, @[]) + let procType = makeProcType(returnType, @[], mode) typeSection.add(newTree(nnkTypeDef, zeroArgProviderName, newEmptyNode(), procType)) if not argSig.isNil(): - let procType = makeProcType(returnType, cloneParams(argParams)) 
+ let procType = makeProcType(returnType, cloneParams(argParams), mode) typeSection.add(newTree(nnkTypeDef, argProviderName, newEmptyNode(), procType)) var brokerRecList = newTree(nnkRecList) @@ -316,33 +469,69 @@ macro RequestBroker*(body: untyped): untyped = quote do: `accessProcIdent`().`zeroArgFieldName` = nil ) - result.add( - quote do: - proc request*( - _: typedesc[`typeIdent`] - ): Future[Result[`typeIdent`, string]] {.async: (raises: []).} = - let provider = `accessProcIdent`().`zeroArgFieldName` - if provider.isNil(): - return err( - "RequestBroker(" & `typeNameLit` & "): no zero-arg provider registered" - ) - let catchedRes = catch: - await provider() + case mode + of rbAsync: + result.add( + quote do: + proc request*( + _: typedesc[`typeIdent`] + ): Future[Result[`typeIdent`, string]] {.async: (raises: []).} = + let provider = `accessProcIdent`().`zeroArgFieldName` + if provider.isNil(): + return err( + "RequestBroker(" & `typeNameLit` & "): no zero-arg provider registered" + ) + let catchedRes = catch: + await provider() - if catchedRes.isErr(): - return err("Request failed:" & catchedRes.error.msg) + if catchedRes.isErr(): + return err( + "RequestBroker(" & `typeNameLit` & "): provider threw exception: " & + catchedRes.error.msg + ) - let providerRes = catchedRes.get() - when `isRefObjectLit`: + let providerRes = catchedRes.get() if providerRes.isOk(): let resultValue = providerRes.get() - if resultValue.isNil(): - return err( - "RequestBroker(" & `typeNameLit` & "): provider returned nil result" - ) - return providerRes + when compiles(resultValue.isNil()): + if resultValue.isNil(): + return err( + "RequestBroker(" & `typeNameLit` & "): provider returned nil result" + ) + return providerRes - ) + ) + of rbSync: + result.add( + quote do: + proc request*( + _: typedesc[`typeIdent`] + ): Result[`typeIdent`, string] {.gcsafe, raises: [].} = + let provider = `accessProcIdent`().`zeroArgFieldName` + if provider.isNil(): + return err( + "RequestBroker(" & 
`typeNameLit` & "): no zero-arg provider registered" + ) + + var providerRes: Result[`typeIdent`, string] + try: + providerRes = provider() + except CatchableError as e: + return err( + "RequestBroker(" & `typeNameLit` & "): provider threw exception: " & + e.msg + ) + + if providerRes.isOk(): + let resultValue = providerRes.get() + when compiles(resultValue.isNil()): + if resultValue.isNil(): + return err( + "RequestBroker(" & `typeNameLit` & "): provider returned nil result" + ) + return providerRes + + ) if not argSig.isNil(): result.add( quote do: @@ -363,10 +552,7 @@ macro RequestBroker*(body: untyped): untyped = let argNameIdents = collectParamNames(requestParamDefs) let providerSym = genSym(nskLet, "provider") var formalParams = newTree(nnkFormalParams) - formalParams.add( - quote do: - Future[Result[`typeIdent`, string]] - ) + formalParams.add(copyNimTree(returnType)) formalParams.add( newTree( nnkIdentDefs, @@ -378,8 +564,14 @@ macro RequestBroker*(body: untyped): untyped = for paramDef in requestParamDefs: formalParams.add(paramDef) - let requestPragmas = quote: - {.async: (raises: []), gcsafe.} + let requestPragmas = + case mode + of rbAsync: + quote: + {.async: (raises: []).} + of rbSync: + quote: + {.gcsafe, raises: [].} var providerCall = newCall(providerSym) for argName in argNameIdents: providerCall.add(argName) @@ -396,23 +588,49 @@ macro RequestBroker*(body: untyped): untyped = "): no provider registered for input signature" ) ) - requestBody.add( - quote do: - let catchedRes = catch: - await `providerCall` - if catchedRes.isErr(): - return err("Request failed:" & catchedRes.error.msg) - let providerRes = catchedRes.get() - when `isRefObjectLit`: + case mode + of rbAsync: + requestBody.add( + quote do: + let catchedRes = catch: + await `providerCall` + if catchedRes.isErr(): + return err( + "RequestBroker(" & `typeNameLit` & "): provider threw exception: " & + catchedRes.error.msg + ) + + let providerRes = catchedRes.get() if providerRes.isOk(): 
let resultValue = providerRes.get() - if resultValue.isNil(): - return err( - "RequestBroker(" & `typeNameLit` & "): provider returned nil result" - ) - return providerRes - ) + when compiles(resultValue.isNil()): + if resultValue.isNil(): + return err( + "RequestBroker(" & `typeNameLit` & "): provider returned nil result" + ) + return providerRes + ) + of rbSync: + requestBody.add( + quote do: + var providerRes: Result[`typeIdent`, string] + try: + providerRes = `providerCall` + except CatchableError as e: + return err( + "RequestBroker(" & `typeNameLit` & "): provider threw exception: " & e.msg + ) + + if providerRes.isOk(): + let resultValue = providerRes.get() + when compiles(resultValue.isNil()): + if resultValue.isNil(): + return err( + "RequestBroker(" & `typeNameLit` & "): provider returned nil result" + ) + return providerRes + ) # requestBody.add(providerCall) result.add( newTree( @@ -436,3 +654,17 @@ macro RequestBroker*(body: untyped): untyped = when defined(requestBrokerDebug): echo result.repr + + return result + +macro RequestBroker*(body: untyped): untyped = + ## Default (async) mode. + generateRequestBroker(body, rbAsync) + +macro RequestBroker*(mode: untyped, body: untyped): untyped = + ## Explicit mode selector. 
+ ## Example: + ## RequestBroker(sync): + ## type Foo = object + ## proc signature*(): Result[Foo, string] + generateRequestBroker(body, parseMode(mode)) From bc5059083ec0af6bfb91aa98cb546758ad52e6db Mon Sep 17 00:00:00 2001 From: Fabiana Cecin Date: Tue, 16 Dec 2025 13:49:03 -0300 Subject: [PATCH 25/70] chore: pin logos-messaging-interop-tests to `SMOKE_TEST_STABLE` (#3667) * pin to interop-tests SMOKE_TEST_STABLE --- .github/workflows/ci.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 9c94577f0..2b12a5109 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -143,7 +143,7 @@ jobs: nwaku-nwaku-interop-tests: needs: build-docker-image - uses: logos-messaging/logos-messaging-interop-tests/.github/workflows/nim_waku_PR.yml@SMOKE_TEST_0.0.1 + uses: logos-messaging/logos-messaging-interop-tests/.github/workflows/nim_waku_PR.yml@SMOKE_TEST_STABLE with: node_nwaku: ${{ needs.build-docker-image.outputs.image }} From 7c24a15459a1892ffa17421b981f3f3dcf652523 Mon Sep 17 00:00:00 2001 From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> Date: Thu, 18 Dec 2025 00:07:29 +0100 Subject: [PATCH 26/70] simple cleanup rm unused DiscoveryManager from waku.nim (#3671) --- waku/factory/waku.nim | 2 -- 1 file changed, 2 deletions(-) diff --git a/waku/factory/waku.nim b/waku/factory/waku.nim index bed8a9137..c0380ccc9 100644 --- a/waku/factory/waku.nim +++ b/waku/factory/waku.nim @@ -13,7 +13,6 @@ import libp2p/services/autorelayservice, libp2p/services/hpservice, libp2p/peerid, - libp2p/discovery/discoverymngr, libp2p/discovery/rendezvousinterface, eth/keys, eth/p2p/discoveryv5/enr, @@ -63,7 +62,6 @@ type Waku* = ref object dynamicBootstrapNodes*: seq[RemotePeerInfo] dnsRetryLoopHandle: Future[void] networkConnLoopHandle: Future[void] - discoveryMngr: DiscoveryManager node*: WakuNode From 2d40cb9d62ba24eac9f58da1be3a3c6eb4357253 Mon Sep 17 00:00:00 2001 From: Arseniy Klempner 
Date: Wed, 17 Dec 2025 18:51:10 -0800 Subject: [PATCH 27/70] fix: hash inputs for external nullifier, remove length prefix for sha256 (#3660) * fix: hash inputs for external nullifier, remove length prefix for sha256 * feat: use nimcrypto keccak instead of sha256 ffi * feat: wrapper function to generate external nullifier --- tests/waku_rln_relay/test_waku_rln_relay.nim | 47 ------------------- .../group_manager/on_chain/group_manager.nim | 7 ++- waku/waku_rln_relay/rln/wrappers.nim | 34 +++++--------- 3 files changed, 16 insertions(+), 72 deletions(-) diff --git a/tests/waku_rln_relay/test_waku_rln_relay.nim b/tests/waku_rln_relay/test_waku_rln_relay.nim index ea3a5ca62..3430657ad 100644 --- a/tests/waku_rln_relay/test_waku_rln_relay.nim +++ b/tests/waku_rln_relay/test_waku_rln_relay.nim @@ -70,53 +70,6 @@ suite "Waku rln relay": info "the generated identity credential: ", idCredential - test "hash Nim Wrappers": - # create an RLN instance - let rlnInstance = createRLNInstanceWrapper() - require: - rlnInstance.isOk() - - # prepare the input - let - msg = "Hello".toBytes() - hashInput = encodeLengthPrefix(msg) - hashInputBuffer = toBuffer(hashInput) - - # prepare other inputs to the hash function - let outputBuffer = default(Buffer) - - let hashSuccess = sha256(unsafeAddr hashInputBuffer, unsafeAddr outputBuffer, true) - require: - hashSuccess - let outputArr = cast[ptr array[32, byte]](outputBuffer.`ptr`)[] - - check: - "1e32b3ab545c07c8b4a7ab1ca4f46bc31e4fdc29ac3b240ef1d54b4017a26e4c" == - outputArr.inHex() - - let - hashOutput = cast[ptr array[32, byte]](outputBuffer.`ptr`)[] - hashOutputHex = hashOutput.toHex() - - info "hash output", hashOutputHex - - test "sha256 hash utils": - # create an RLN instance - let rlnInstance = createRLNInstanceWrapper() - require: - rlnInstance.isOk() - let rln = rlnInstance.get() - - # prepare the input - let msg = "Hello".toBytes() - - let hashRes = sha256(msg) - - check: - hashRes.isOk() - 
"1e32b3ab545c07c8b4a7ab1ca4f46bc31e4fdc29ac3b240ef1d54b4017a26e4c" == - hashRes.get().inHex() - test "poseidon hash utils": # create an RLN instance let rlnInstance = createRLNInstanceWrapper() diff --git a/waku/waku_rln_relay/group_manager/on_chain/group_manager.nim b/waku/waku_rln_relay/group_manager/on_chain/group_manager.nim index bdb272c1f..2ce7d4423 100644 --- a/waku/waku_rln_relay/group_manager/on_chain/group_manager.nim +++ b/waku/waku_rln_relay/group_manager/on_chain/group_manager.nim @@ -379,7 +379,7 @@ method generateProof*( let x = keccak.keccak256.digest(data) - let extNullifier = poseidon(@[@(epoch), @(rlnIdentifier)]).valueOr: + let extNullifier = generateExternalNullifier(epoch, rlnIdentifier).valueOr: return err("Failed to compute external nullifier: " & error) let witness = RLNWitnessInput( @@ -457,10 +457,9 @@ method verifyProof*( var normalizedProof = proof - normalizedProof.externalNullifier = poseidon( - @[@(proof.epoch), @(proof.rlnIdentifier)] - ).valueOr: + let externalNullifier = generateExternalNullifier(proof.epoch, proof.rlnIdentifier).valueOr: return err("Failed to compute external nullifier: " & error) + normalizedProof.externalNullifier = externalNullifier let proofBytes = serialize(normalizedProof, input) let proofBuffer = proofBytes.toBuffer() diff --git a/waku/waku_rln_relay/rln/wrappers.nim b/waku/waku_rln_relay/rln/wrappers.nim index d1dec2b38..1b2b0270f 100644 --- a/waku/waku_rln_relay/rln/wrappers.nim +++ b/waku/waku_rln_relay/rln/wrappers.nim @@ -6,7 +6,8 @@ import stew/[arrayops, byteutils, endians2], stint, results, - std/[sequtils, strutils, tables] + std/[sequtils, strutils, tables], + nimcrypto/keccak as keccak import ./rln_interface, ../conversion_utils, ../protocol_types, ../protocol_metrics import ../../waku_core, ../../waku_keystore @@ -119,24 +120,6 @@ proc createRLNInstance*(): RLNResult = res = createRLNInstanceLocal() return res -proc sha256*(data: openArray[byte]): RlnRelayResult[MerkleNode] = - ## a thin layer 
on top of the Nim wrapper of the sha256 hasher - var lenPrefData = encodeLengthPrefix(data) - var - hashInputBuffer = lenPrefData.toBuffer() - outputBuffer: Buffer # will holds the hash output - - trace "sha256 hash input buffer length", bufflen = hashInputBuffer.len - let hashSuccess = sha256(addr hashInputBuffer, addr outputBuffer, true) - - # check whether the hash call is done successfully - if not hashSuccess: - return err("error in sha256 hash") - - let output = cast[ptr MerkleNode](outputBuffer.`ptr`)[] - - return ok(output) - proc poseidon*(data: seq[seq[byte]]): RlnRelayResult[array[32, byte]] = ## a thin layer on top of the Nim wrapper of the poseidon hasher var inputBytes = serialize(data) @@ -180,9 +163,18 @@ proc toLeaves*(rateCommitments: seq[RateCommitment]): RlnRelayResult[seq[seq[byt leaves.add(leaf) return ok(leaves) +proc generateExternalNullifier*( + epoch: Epoch, rlnIdentifier: RlnIdentifier +): RlnRelayResult[ExternalNullifier] = + let epochHash = keccak.keccak256.digest(@(epoch)) + let rlnIdentifierHash = keccak.keccak256.digest(@(rlnIdentifier)) + let externalNullifier = poseidon(@[@(epochHash), @(rlnIdentifierHash)]).valueOr: + return err("Failed to compute external nullifier: " & error) + return ok(externalNullifier) + proc extractMetadata*(proof: RateLimitProof): RlnRelayResult[ProofMetadata] = - let externalNullifier = poseidon(@[@(proof.epoch), @(proof.rlnIdentifier)]).valueOr: - return err("could not construct the external nullifier") + let externalNullifier = generateExternalNullifier(proof.epoch, proof.rlnIdentifier).valueOr: + return err("Failed to compute external nullifier: " & error) return ok( ProofMetadata( nullifier: proof.nullifier, From 834eea945d05b4092466f3953467b28467e6b24c Mon Sep 17 00:00:00 2001 From: Tanya S <120410716+stubbsta@users.noreply.github.com> Date: Fri, 19 Dec 2025 10:55:53 +0200 Subject: [PATCH 28/70] chore: pin rln dependencies to specific version (#3649) * Add foundry version in makefile and install 
scripts * revert to older version of Anvil for rln tests and anvil_install fix * pin pnpm version to be installed as rln dep * source pnpm after new install * Add to github path * use npm to install pnpm for rln ci * Update foundry and pnpm versions in Makefile --- Makefile | 6 ++- scripts/install_anvil.sh | 47 ++++++++++++++++++++--- scripts/install_pnpm.sh | 35 +++++++++++++++-- scripts/install_rln_tests_dependencies.sh | 8 ++-- 4 files changed, 84 insertions(+), 12 deletions(-) diff --git a/Makefile b/Makefile index 44f1c6495..35c107d2d 100644 --- a/Makefile +++ b/Makefile @@ -119,6 +119,10 @@ endif ################## .PHONY: deps libbacktrace +FOUNDRY_VERSION := 1.5.0 +PNPM_VERSION := 10.23.0 + + rustup: ifeq (, $(shell which cargo)) # Install Rustup if it's not installed @@ -128,7 +132,7 @@ ifeq (, $(shell which cargo)) endif rln-deps: rustup - ./scripts/install_rln_tests_dependencies.sh + ./scripts/install_rln_tests_dependencies.sh $(FOUNDRY_VERSION) $(PNPM_VERSION) deps: | deps-common nat-libs waku.nims diff --git a/scripts/install_anvil.sh b/scripts/install_anvil.sh index 1bf4bd7b1..c573ac31c 100755 --- a/scripts/install_anvil.sh +++ b/scripts/install_anvil.sh @@ -2,14 +2,51 @@ # Install Anvil -if ! command -v anvil &> /dev/null; then +REQUIRED_FOUNDRY_VERSION="$1" + +if command -v anvil &> /dev/null; then + # Foundry is already installed; check the current version. + CURRENT_FOUNDRY_VERSION=$(anvil --version 2>/dev/null | awk '{print $2}') + + if [ -n "$CURRENT_FOUNDRY_VERSION" ]; then + # Compare CURRENT_FOUNDRY_VERSION < REQUIRED_FOUNDRY_VERSION using sort -V + lower_version=$(printf '%s\n%s\n' "$CURRENT_FOUNDRY_VERSION" "$REQUIRED_FOUNDRY_VERSION" | sort -V | head -n1) + + if [ "$lower_version" != "$REQUIRED_FOUNDRY_VERSION" ]; then + echo "Anvil is already installed with version $CURRENT_FOUNDRY_VERSION, which is older than the required $REQUIRED_FOUNDRY_VERSION. Please update Foundry manually if needed." 
+ fi + fi +else BASE_DIR="${XDG_CONFIG_HOME:-$HOME}" FOUNDRY_DIR="${FOUNDRY_DIR:-"$BASE_DIR/.foundry"}" FOUNDRY_BIN_DIR="$FOUNDRY_DIR/bin" + echo "Installing Foundry..." curl -L https://foundry.paradigm.xyz | bash - # Extract the source path from the download result - echo "foundryup_path: $FOUNDRY_BIN_DIR" - # run foundryup - $FOUNDRY_BIN_DIR/foundryup + + # Add Foundry to PATH for this script session + export PATH="$FOUNDRY_BIN_DIR:$PATH" + + # Verify foundryup is available + if ! command -v foundryup >/dev/null 2>&1; then + echo "Error: foundryup installation failed or not found in $FOUNDRY_BIN_DIR" + exit 1 + fi + + # Run foundryup to install the required version + if [ -n "$REQUIRED_FOUNDRY_VERSION" ]; then + echo "Installing Foundry tools version $REQUIRED_FOUNDRY_VERSION..." + foundryup --install "$REQUIRED_FOUNDRY_VERSION" + else + echo "Installing latest Foundry tools..." + foundryup + fi + + # Verify anvil was installed + if ! command -v anvil >/dev/null 2>&1; then + echo "Error: anvil installation failed" + exit 1 + fi + + echo "Anvil successfully installed: $(anvil --version)" fi \ No newline at end of file diff --git a/scripts/install_pnpm.sh b/scripts/install_pnpm.sh index 34ba47b07..fcfc82ccd 100755 --- a/scripts/install_pnpm.sh +++ b/scripts/install_pnpm.sh @@ -1,8 +1,37 @@ #!/usr/bin/env bash # Install pnpm -if ! command -v pnpm &> /dev/null; then - echo "pnpm is not installed, installing it now..." - npm i pnpm --global + +REQUIRED_PNPM_VERSION="$1" + +if command -v pnpm &> /dev/null; then + # pnpm is already installed; check the current version. 
+ CURRENT_PNPM_VERSION=$(pnpm --version 2>/dev/null) + + if [ -n "$CURRENT_PNPM_VERSION" ]; then + # Compare CURRENT_PNPM_VERSION < REQUIRED_PNPM_VERSION using sort -V + lower_version=$(printf '%s\n%s\n' "$CURRENT_PNPM_VERSION" "$REQUIRED_PNPM_VERSION" | sort -V | head -n1) + + if [ "$lower_version" != "$REQUIRED_PNPM_VERSION" ]; then + echo "pnpm is already installed with version $CURRENT_PNPM_VERSION, which is older than the required $REQUIRED_PNPM_VERSION. Please update pnpm manually if needed." + fi + fi +else + # Install pnpm using npm + if [ -n "$REQUIRED_PNPM_VERSION" ]; then + echo "Installing pnpm version $REQUIRED_PNPM_VERSION..." + npm install -g pnpm@$REQUIRED_PNPM_VERSION + else + echo "Installing latest pnpm..." + npm install -g pnpm + fi + + # Verify pnpm was installed + if ! command -v pnpm >/dev/null 2>&1; then + echo "Error: pnpm installation failed" + exit 1 + fi + + echo "pnpm successfully installed: $(pnpm --version)" fi diff --git a/scripts/install_rln_tests_dependencies.sh b/scripts/install_rln_tests_dependencies.sh index e19e0ef3c..c8c083b54 100755 --- a/scripts/install_rln_tests_dependencies.sh +++ b/scripts/install_rln_tests_dependencies.sh @@ -1,7 +1,9 @@ #!/usr/bin/env bash # Install Anvil -./scripts/install_anvil.sh +FOUNDRY_VERSION="$1" +./scripts/install_anvil.sh "$FOUNDRY_VERSION" -#Install pnpm -./scripts/install_pnpm.sh \ No newline at end of file +# Install pnpm +PNPM_VERSION="$2" +./scripts/install_pnpm.sh "$PNPM_VERSION" \ No newline at end of file From e3dd6203ae8fc291fb9f83351252887e5fc6b328 Mon Sep 17 00:00:00 2001 From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> Date: Fri, 19 Dec 2025 17:00:43 +0100 Subject: [PATCH 29/70] Start using nim-ffi to implement libwaku (#3656) * deep changes in libwaku to adapt to nim-ffi * start using ffi pragma in library * update some binding examples * add missing declare_lib.nim file * properly rename api files in library folder --- .gitmodules | 5 + 
examples/cbindings/waku_example.c | 520 ++++++----- examples/cpp/waku.cpp | 283 +++--- examples/golang/waku.go | 107 ++- examples/python/waku.py | 24 +- examples/qt/waku_handler.h | 2 +- examples/rust/src/main.rs | 12 +- library/alloc.nim | 42 - library/declare_lib.nim | 10 + .../events/json_waku_not_responding_event.nim | 9 - library/ffi_types.nim | 30 - library/kernel_api/debug_node_api.nim | 49 + library/kernel_api/discovery_api.nim | 96 ++ .../node_lifecycle_api.nim} | 79 +- library/kernel_api/peer_manager_api.nim | 123 +++ library/kernel_api/ping_api.nim | 43 + library/kernel_api/protocols/filter_api.nim | 109 +++ .../kernel_api/protocols/lightpush_api.nim | 51 ++ library/kernel_api/protocols/relay_api.nim | 171 ++++ .../protocols/store_api.nim} | 79 +- library/libwaku.h | 379 ++++---- library/libwaku.nim | 853 +----------------- library/waku_context.nim | 223 ----- .../requests/debug_node_request.nim | 63 -- .../requests/discovery_request.nim | 151 ---- .../requests/peer_manager_request.nim | 135 --- .../requests/ping_request.nim | 54 -- .../requests/protocols/filter_request.nim | 106 --- .../requests/protocols/lightpush_request.nim | 109 --- .../requests/protocols/relay_request.nim | 168 ---- .../waku_thread_request.nim | 104 --- vendor/nim-ffi | 1 + waku.nimble | 3 +- 33 files changed, 1441 insertions(+), 2752 deletions(-) delete mode 100644 library/alloc.nim create mode 100644 library/declare_lib.nim delete mode 100644 library/events/json_waku_not_responding_event.nim delete mode 100644 library/ffi_types.nim create mode 100644 library/kernel_api/debug_node_api.nim create mode 100644 library/kernel_api/discovery_api.nim rename library/{waku_thread_requests/requests/node_lifecycle_request.nim => kernel_api/node_lifecycle_api.nim} (60%) create mode 100644 library/kernel_api/peer_manager_api.nim create mode 100644 library/kernel_api/ping_api.nim create mode 100644 library/kernel_api/protocols/filter_api.nim create mode 100644 
library/kernel_api/protocols/lightpush_api.nim create mode 100644 library/kernel_api/protocols/relay_api.nim rename library/{waku_thread_requests/requests/protocols/store_request.nim => kernel_api/protocols/store_api.nim} (57%) delete mode 100644 library/waku_context.nim delete mode 100644 library/waku_thread_requests/requests/debug_node_request.nim delete mode 100644 library/waku_thread_requests/requests/discovery_request.nim delete mode 100644 library/waku_thread_requests/requests/peer_manager_request.nim delete mode 100644 library/waku_thread_requests/requests/ping_request.nim delete mode 100644 library/waku_thread_requests/requests/protocols/filter_request.nim delete mode 100644 library/waku_thread_requests/requests/protocols/lightpush_request.nim delete mode 100644 library/waku_thread_requests/requests/protocols/relay_request.nim delete mode 100644 library/waku_thread_requests/waku_thread_request.nim create mode 160000 vendor/nim-ffi diff --git a/.gitmodules b/.gitmodules index 93a3a006f..4d56c4333 100644 --- a/.gitmodules +++ b/.gitmodules @@ -184,3 +184,8 @@ url = https://github.com/logos-messaging/waku-rlnv2-contract.git ignore = untracked branch = master +[submodule "vendor/nim-ffi"] + path = vendor/nim-ffi + url = https://github.com/logos-messaging/nim-ffi/ + ignore = untracked + branch = master diff --git a/examples/cbindings/waku_example.c b/examples/cbindings/waku_example.c index 35ac8a2e2..f337203ae 100644 --- a/examples/cbindings/waku_example.c +++ b/examples/cbindings/waku_example.c @@ -19,283 +19,309 @@ pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER; pthread_cond_t cond = PTHREAD_COND_INITIALIZER; int callback_executed = 0; -void waitForCallback() { - pthread_mutex_lock(&mutex); - while (!callback_executed) { - pthread_cond_wait(&cond, &mutex); - } - callback_executed = 0; - pthread_mutex_unlock(&mutex); +void waitForCallback() +{ + pthread_mutex_lock(&mutex); + while (!callback_executed) + { + pthread_cond_wait(&cond, &mutex); + } + 
callback_executed = 0; + pthread_mutex_unlock(&mutex); } -#define WAKU_CALL(call) \ -do { \ - int ret = call; \ - if (ret != 0) { \ - printf("Failed the call to: %s. Returned code: %d\n", #call, ret); \ - exit(1); \ - } \ - waitForCallback(); \ -} while (0) +#define WAKU_CALL(call) \ + do \ + { \ + int ret = call; \ + if (ret != 0) \ + { \ + printf("Failed the call to: %s. Returned code: %d\n", #call, ret); \ + exit(1); \ + } \ + waitForCallback(); \ + } while (0) -struct ConfigNode { - char host[128]; - int port; - char key[128]; - int relay; - char peers[2048]; - int store; - char storeNode[2048]; - char storeRetentionPolicy[64]; - char storeDbUrl[256]; - int storeVacuum; - int storeDbMigration; - int storeMaxNumDbConnections; +struct ConfigNode +{ + char host[128]; + int port; + char key[128]; + int relay; + char peers[2048]; + int store; + char storeNode[2048]; + char storeRetentionPolicy[64]; + char storeDbUrl[256]; + int storeVacuum; + int storeDbMigration; + int storeMaxNumDbConnections; }; // libwaku Context -void* ctx; +void *ctx; // For the case of C language we don't need to store a particular userData -void* userData = NULL; +void *userData = NULL; // Arguments parsing static char doc[] = "\nC example that shows how to use the waku library."; static char args_doc[] = ""; static struct argp_option options[] = { - { "host", 'h', "HOST", 0, "IP to listen for for LibP2P traffic. (default: \"0.0.0.0\")"}, - { "port", 'p', "PORT", 0, "TCP listening port. (default: \"60000\")"}, - { "key", 'k', "KEY", 0, "P2P node private key as 64 char hex string."}, - { "relay", 'r', "RELAY", 0, "Enable relay protocol: 1 or 0. (default: 1)"}, - { "peers", 'a', "PEERS", 0, "Comma-separated list of peer-multiaddress to connect\ + {"host", 'h', "HOST", 0, "IP to listen for for LibP2P traffic. (default: \"0.0.0.0\")"}, + {"port", 'p', "PORT", 0, "TCP listening port. 
(default: \"60000\")"}, + {"key", 'k', "KEY", 0, "P2P node private key as 64 char hex string."}, + {"relay", 'r', "RELAY", 0, "Enable relay protocol: 1 or 0. (default: 1)"}, + {"peers", 'a', "PEERS", 0, "Comma-separated list of peer-multiaddress to connect\ to. (default: \"\") e.g. \"/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\""}, - { 0 } -}; + {0}}; -static error_t parse_opt(int key, char *arg, struct argp_state *state) { +static error_t parse_opt(int key, char *arg, struct argp_state *state) +{ - struct ConfigNode *cfgNode = state->input; - switch (key) { - case 'h': - snprintf(cfgNode->host, 128, "%s", arg); - break; - case 'p': - cfgNode->port = atoi(arg); - break; - case 'k': - snprintf(cfgNode->key, 128, "%s", arg); - break; - case 'r': - cfgNode->relay = atoi(arg); - break; - case 'a': - snprintf(cfgNode->peers, 2048, "%s", arg); - break; - case ARGP_KEY_ARG: - if (state->arg_num >= 1) /* Too many arguments. */ - argp_usage(state); - break; - case ARGP_KEY_END: - break; - default: - return ARGP_ERR_UNKNOWN; - } + struct ConfigNode *cfgNode = state->input; + switch (key) + { + case 'h': + snprintf(cfgNode->host, 128, "%s", arg); + break; + case 'p': + cfgNode->port = atoi(arg); + break; + case 'k': + snprintf(cfgNode->key, 128, "%s", arg); + break; + case 'r': + cfgNode->relay = atoi(arg); + break; + case 'a': + snprintf(cfgNode->peers, 2048, "%s", arg); + break; + case ARGP_KEY_ARG: + if (state->arg_num >= 1) /* Too many arguments. 
*/ + argp_usage(state); + break; + case ARGP_KEY_END: + break; + default: + return ARGP_ERR_UNKNOWN; + } - return 0; + return 0; } -void signal_cond() { - pthread_mutex_lock(&mutex); - callback_executed = 1; - pthread_cond_signal(&cond); - pthread_mutex_unlock(&mutex); +void signal_cond() +{ + pthread_mutex_lock(&mutex); + callback_executed = 1; + pthread_cond_signal(&cond); + pthread_mutex_unlock(&mutex); } -static struct argp argp = { options, parse_opt, args_doc, doc, 0, 0, 0 }; +static struct argp argp = {options, parse_opt, args_doc, doc, 0, 0, 0}; -void event_handler(int callerRet, const char* msg, size_t len, void* userData) { - if (callerRet == RET_ERR) { - printf("Error: %s\n", msg); - exit(1); - } - else if (callerRet == RET_OK) { - printf("Receiving event: %s\n", msg); - } +void event_handler(int callerRet, const char *msg, size_t len, void *userData) +{ + if (callerRet == RET_ERR) + { + printf("Error: %s\n", msg); + exit(1); + } + else if (callerRet == RET_OK) + { + printf("Receiving event: %s\n", msg); + } - signal_cond(); + signal_cond(); } -void on_event_received(int callerRet, const char* msg, size_t len, void* userData) { - if (callerRet == RET_ERR) { - printf("Error: %s\n", msg); - exit(1); - } - else if (callerRet == RET_OK) { - printf("Receiving event: %s\n", msg); - } +void on_event_received(int callerRet, const char *msg, size_t len, void *userData) +{ + if (callerRet == RET_ERR) + { + printf("Error: %s\n", msg); + exit(1); + } + else if (callerRet == RET_OK) + { + printf("Receiving event: %s\n", msg); + } } -char* contentTopic = NULL; -void handle_content_topic(int callerRet, const char* msg, size_t len, void* userData) { - if (contentTopic != NULL) { - free(contentTopic); - } +char *contentTopic = NULL; +void handle_content_topic(int callerRet, const char *msg, size_t len, void *userData) +{ + if (contentTopic != NULL) + { + free(contentTopic); + } - contentTopic = malloc(len * sizeof(char) + 1); - strcpy(contentTopic, msg); - signal_cond(); 
+ contentTopic = malloc(len * sizeof(char) + 1); + strcpy(contentTopic, msg); + signal_cond(); } -char* publishResponse = NULL; -void handle_publish_ok(int callerRet, const char* msg, size_t len, void* userData) { - printf("Publish Ok: %s %lu\n", msg, len); +char *publishResponse = NULL; +void handle_publish_ok(int callerRet, const char *msg, size_t len, void *userData) +{ + printf("Publish Ok: %s %lu\n", msg, len); - if (publishResponse != NULL) { - free(publishResponse); - } + if (publishResponse != NULL) + { + free(publishResponse); + } - publishResponse = malloc(len * sizeof(char) + 1); - strcpy(publishResponse, msg); + publishResponse = malloc(len * sizeof(char) + 1); + strcpy(publishResponse, msg); } #define MAX_MSG_SIZE 65535 -void publish_message(const char* msg) { - char jsonWakuMsg[MAX_MSG_SIZE]; - char *msgPayload = b64_encode(msg, strlen(msg)); +void publish_message(const char *msg) +{ + char jsonWakuMsg[MAX_MSG_SIZE]; + char *msgPayload = b64_encode(msg, strlen(msg)); - WAKU_CALL( waku_content_topic(ctx, - "appName", - 1, - "contentTopicName", - "encoding", - handle_content_topic, - userData) ); - snprintf(jsonWakuMsg, - MAX_MSG_SIZE, - "{\"payload\":\"%s\",\"contentTopic\":\"%s\"}", - msgPayload, contentTopic); + WAKU_CALL(waku_content_topic(ctx, + handle_content_topic, + userData, + "appName", + 1, + "contentTopicName", + "encoding")); + snprintf(jsonWakuMsg, + MAX_MSG_SIZE, + "{\"payload\":\"%s\",\"contentTopic\":\"%s\"}", + msgPayload, contentTopic); - free(msgPayload); + free(msgPayload); - WAKU_CALL( waku_relay_publish(ctx, - "/waku/2/rs/16/32", - jsonWakuMsg, - 10000 /*timeout ms*/, - event_handler, - userData) ); + WAKU_CALL(waku_relay_publish(ctx, + event_handler, + userData, + "/waku/2/rs/16/32", + jsonWakuMsg, + 10000 /*timeout ms*/)); } -void show_help_and_exit() { - printf("Wrong parameters\n"); - exit(1); +void show_help_and_exit() +{ + printf("Wrong parameters\n"); + exit(1); } -void print_default_pubsub_topic(int callerRet, const char* 
msg, size_t len, void* userData) { - printf("Default pubsub topic: %s\n", msg); - signal_cond(); +void print_default_pubsub_topic(int callerRet, const char *msg, size_t len, void *userData) +{ + printf("Default pubsub topic: %s\n", msg); + signal_cond(); } -void print_waku_version(int callerRet, const char* msg, size_t len, void* userData) { - printf("Git Version: %s\n", msg); - signal_cond(); +void print_waku_version(int callerRet, const char *msg, size_t len, void *userData) +{ + printf("Git Version: %s\n", msg); + signal_cond(); } // Beginning of UI program logic -enum PROGRAM_STATE { - MAIN_MENU, - SUBSCRIBE_TOPIC_MENU, - CONNECT_TO_OTHER_NODE_MENU, - PUBLISH_MESSAGE_MENU +enum PROGRAM_STATE +{ + MAIN_MENU, + SUBSCRIBE_TOPIC_MENU, + CONNECT_TO_OTHER_NODE_MENU, + PUBLISH_MESSAGE_MENU }; enum PROGRAM_STATE current_state = MAIN_MENU; -void show_main_menu() { - printf("\nPlease, select an option:\n"); - printf("\t1.) Subscribe to topic\n"); - printf("\t2.) Connect to other node\n"); - printf("\t3.) Publish a message\n"); +void show_main_menu() +{ + printf("\nPlease, select an option:\n"); + printf("\t1.) Subscribe to topic\n"); + printf("\t2.) Connect to other node\n"); + printf("\t3.) 
Publish a message\n"); } -void handle_user_input() { - char cmd[1024]; - memset(cmd, 0, 1024); - int numRead = read(0, cmd, 1024); - if (numRead <= 0) { - return; - } +void handle_user_input() +{ + char cmd[1024]; + memset(cmd, 0, 1024); + int numRead = read(0, cmd, 1024); + if (numRead <= 0) + { + return; + } - switch (atoi(cmd)) - { - case SUBSCRIBE_TOPIC_MENU: - { - printf("Indicate the Pubsubtopic to subscribe:\n"); - char pubsubTopic[128]; - scanf("%127s", pubsubTopic); + switch (atoi(cmd)) + { + case SUBSCRIBE_TOPIC_MENU: + { + printf("Indicate the Pubsubtopic to subscribe:\n"); + char pubsubTopic[128]; + scanf("%127s", pubsubTopic); - WAKU_CALL( waku_relay_subscribe(ctx, - pubsubTopic, - event_handler, - userData) ); - printf("The subscription went well\n"); + WAKU_CALL(waku_relay_subscribe(ctx, + event_handler, + userData, + pubsubTopic)); + printf("The subscription went well\n"); - show_main_menu(); - } + show_main_menu(); + } + break; + + case CONNECT_TO_OTHER_NODE_MENU: + // printf("Connecting to a node. Please indicate the peer Multiaddress:\n"); + // printf("e.g.: /ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\n"); + // char peerAddr[512]; + // scanf("%511s", peerAddr); + // WAKU_CALL(waku_connect(ctx, peerAddr, 10000 /* timeoutMs */, event_handler, userData)); + show_main_menu(); break; - case CONNECT_TO_OTHER_NODE_MENU: - printf("Connecting to a node. 
Please indicate the peer Multiaddress:\n"); - printf("e.g.: /ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\n"); - char peerAddr[512]; - scanf("%511s", peerAddr); - WAKU_CALL(waku_connect(ctx, peerAddr, 10000 /* timeoutMs */, event_handler, userData)); - show_main_menu(); + case PUBLISH_MESSAGE_MENU: + { + printf("Type the message to publish:\n"); + char msg[1024]; + scanf("%1023s", msg); + + publish_message(msg); + + show_main_menu(); + } + break; + + case MAIN_MENU: break; - - case PUBLISH_MESSAGE_MENU: - { - printf("Type the message to publish:\n"); - char msg[1024]; - scanf("%1023s", msg); - - publish_message(msg); - - show_main_menu(); - } - break; - - case MAIN_MENU: - break; - } + } } // End of UI program logic -int main(int argc, char** argv) { - struct ConfigNode cfgNode; - // default values - snprintf(cfgNode.host, 128, "0.0.0.0"); - cfgNode.port = 60000; - cfgNode.relay = 1; +int main(int argc, char **argv) +{ + struct ConfigNode cfgNode; + // default values + snprintf(cfgNode.host, 128, "0.0.0.0"); + cfgNode.port = 60000; + cfgNode.relay = 1; - cfgNode.store = 0; - snprintf(cfgNode.storeNode, 2048, ""); - snprintf(cfgNode.storeRetentionPolicy, 64, "time:6000000"); - snprintf(cfgNode.storeDbUrl, 256, "postgres://postgres:test123@localhost:5432/postgres"); - cfgNode.storeVacuum = 0; - cfgNode.storeDbMigration = 0; - cfgNode.storeMaxNumDbConnections = 30; + cfgNode.store = 0; + snprintf(cfgNode.storeNode, 2048, ""); + snprintf(cfgNode.storeRetentionPolicy, 64, "time:6000000"); + snprintf(cfgNode.storeDbUrl, 256, "postgres://postgres:test123@localhost:5432/postgres"); + cfgNode.storeVacuum = 0; + cfgNode.storeDbMigration = 0; + cfgNode.storeMaxNumDbConnections = 30; - if (argp_parse(&argp, argc, argv, 0, 0, &cfgNode) - == ARGP_ERR_UNKNOWN) { - show_help_and_exit(); - } + if (argp_parse(&argp, argc, argv, 0, 0, &cfgNode) == ARGP_ERR_UNKNOWN) + { + show_help_and_exit(); + } - char jsonConfig[5000]; - snprintf(jsonConfig, 
5000, "{ \ + char jsonConfig[5000]; + snprintf(jsonConfig, 5000, "{ \ \"clusterId\": 16, \ \"shards\": [ 1, 32, 64, 128, 256 ], \ \"numShardsInNetwork\": 257, \ @@ -313,54 +339,56 @@ int main(int argc, char** argv) { \"discv5UdpPort\": 9999, \ \"dnsDiscoveryUrl\": \"enrtree://AMOJVZX4V6EXP7NTJPMAYJYST2QP6AJXYW76IU6VGJS7UVSNDYZG4@boot.prod.status.nodes.status.im\", \ \"dnsDiscoveryNameServers\": [\"8.8.8.8\", \"1.0.0.1\"] \ - }", cfgNode.host, - cfgNode.port, - cfgNode.relay ? "true":"false", - cfgNode.store ? "true":"false", - cfgNode.storeDbUrl, - cfgNode.storeRetentionPolicy, - cfgNode.storeMaxNumDbConnections); + }", + cfgNode.host, + cfgNode.port, + cfgNode.relay ? "true" : "false", + cfgNode.store ? "true" : "false", + cfgNode.storeDbUrl, + cfgNode.storeRetentionPolicy, + cfgNode.storeMaxNumDbConnections); - ctx = waku_new(jsonConfig, event_handler, userData); - waitForCallback(); + ctx = waku_new(jsonConfig, event_handler, userData); + waitForCallback(); - WAKU_CALL( waku_default_pubsub_topic(ctx, print_default_pubsub_topic, userData) ); - WAKU_CALL( waku_version(ctx, print_waku_version, userData) ); + WAKU_CALL(waku_default_pubsub_topic(ctx, print_default_pubsub_topic, userData)); + WAKU_CALL(waku_version(ctx, print_waku_version, userData)); - printf("Bind addr: %s:%u\n", cfgNode.host, cfgNode.port); - printf("Waku Relay enabled: %s\n", cfgNode.relay == 1 ? "YES": "NO"); + printf("Bind addr: %s:%u\n", cfgNode.host, cfgNode.port); + printf("Waku Relay enabled: %s\n", cfgNode.relay == 1 ? 
"YES" : "NO"); - waku_set_event_callback(ctx, on_event_received, userData); + set_event_callback(ctx, on_event_received, userData); - waku_start(ctx, event_handler, userData); - waitForCallback(); + waku_start(ctx, event_handler, userData); + waitForCallback(); - WAKU_CALL( waku_listen_addresses(ctx, event_handler, userData) ); + WAKU_CALL(waku_listen_addresses(ctx, event_handler, userData)); - WAKU_CALL( waku_relay_subscribe(ctx, - "/waku/2/rs/0/0", - event_handler, - userData) ); + WAKU_CALL(waku_relay_subscribe(ctx, + event_handler, + userData, + "/waku/2/rs/16/32")); - WAKU_CALL( waku_discv5_update_bootnodes(ctx, - "[\"enr:-QEkuEBIkb8q8_mrorHndoXH9t5N6ZfD-jehQCrYeoJDPHqT0l0wyaONa2-piRQsi3oVKAzDShDVeoQhy0uwN1xbZfPZAYJpZIJ2NIJpcIQiQlleim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwA2Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQKnGt-GSgqPSf3IAPM7bFgTlpczpMZZLF3geeoNNsxzSoN0Y3CCdl-DdWRwgiMohXdha3UyDw\",\"enr:-QEkuEB3WHNS-xA3RDpfu9A2Qycr3bN3u7VoArMEiDIFZJ66F1EB3d4wxZN1hcdcOX-RfuXB-MQauhJGQbpz3qUofOtLAYJpZIJ2NIJpcIQI2SVcim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmFjLWNuLWhvbmdrb25nLWMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwA2Ni9ub2RlLTAxLmFjLWNuLWhvbmdrb25nLWMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQPK35Nnz0cWUtSAhBp7zvHEhyU_AqeQUlqzLiLxfP2L4oN0Y3CCdl-DdWRwgiMohXdha3UyDw\"]", - event_handler, - userData) ); + WAKU_CALL(waku_discv5_update_bootnodes(ctx, + event_handler, + userData, + 
"[\"enr:-QEkuEBIkb8q8_mrorHndoXH9t5N6ZfD-jehQCrYeoJDPHqT0l0wyaONa2-piRQsi3oVKAzDShDVeoQhy0uwN1xbZfPZAYJpZIJ2NIJpcIQiQlleim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwA2Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQKnGt-GSgqPSf3IAPM7bFgTlpczpMZZLF3geeoNNsxzSoN0Y3CCdl-DdWRwgiMohXdha3UyDw\",\"enr:-QEkuEB3WHNS-xA3RDpfu9A2Qycr3bN3u7VoArMEiDIFZJ66F1EB3d4wxZN1hcdcOX-RfuXB-MQauhJGQbpz3qUofOtLAYJpZIJ2NIJpcIQI2SVcim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmFjLWNuLWhvbmdrb25nLWMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwA2Ni9ub2RlLTAxLmFjLWNuLWhvbmdrb25nLWMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQPK35Nnz0cWUtSAhBp7zvHEhyU_AqeQUlqzLiLxfP2L4oN0Y3CCdl-DdWRwgiMohXdha3UyDw\"]")); - WAKU_CALL( waku_get_peerids_from_peerstore(ctx, - event_handler, - userData) ); + WAKU_CALL(waku_get_peerids_from_peerstore(ctx, + event_handler, + userData)); - show_main_menu(); - while(1) { - handle_user_input(); + show_main_menu(); + while (1) + { + handle_user_input(); - // Uncomment the following if need to test the metrics retrieval - // WAKU_CALL( waku_get_metrics(ctx, - // event_handler, - // userData) ); - } + // Uncomment the following if need to test the metrics retrieval + // WAKU_CALL( waku_get_metrics(ctx, + // event_handler, + // userData) ); + } - pthread_mutex_destroy(&mutex); - pthread_cond_destroy(&cond); + pthread_mutex_destroy(&mutex); + pthread_cond_destroy(&cond); } diff --git a/examples/cpp/waku.cpp b/examples/cpp/waku.cpp index c47877d02..2824f8e53 100644 --- a/examples/cpp/waku.cpp +++ b/examples/cpp/waku.cpp @@ -21,37 +21,43 @@ pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER; pthread_cond_t cond = PTHREAD_COND_INITIALIZER; int callback_executed = 0; -void waitForCallback() { +void waitForCallback() +{ pthread_mutex_lock(&mutex); - while (!callback_executed) { + while (!callback_executed) + { pthread_cond_wait(&cond, &mutex); } 
callback_executed = 0; pthread_mutex_unlock(&mutex); } -void signal_cond() { +void signal_cond() +{ pthread_mutex_lock(&mutex); callback_executed = 1; pthread_cond_signal(&cond); pthread_mutex_unlock(&mutex); } -#define WAKU_CALL(call) \ -do { \ - int ret = call; \ - if (ret != 0) { \ - std::cout << "Failed the call to: " << #call << ". Code: " << ret << "\n"; \ - } \ - waitForCallback(); \ -} while (0) +#define WAKU_CALL(call) \ + do \ + { \ + int ret = call; \ + if (ret != 0) \ + { \ + std::cout << "Failed the call to: " << #call << ". Code: " << ret << "\n"; \ + } \ + waitForCallback(); \ + } while (0) -struct ConfigNode { - char host[128]; - int port; - char key[128]; - int relay; - char peers[2048]; +struct ConfigNode +{ + char host[128]; + int port; + char key[128]; + int relay; + char peers[2048]; }; // Arguments parsing @@ -59,70 +65,76 @@ static char doc[] = "\nC example that shows how to use the waku library."; static char args_doc[] = ""; static struct argp_option options[] = { - { "host", 'h', "HOST", 0, "IP to listen for for LibP2P traffic. (default: \"0.0.0.0\")"}, - { "port", 'p', "PORT", 0, "TCP listening port. (default: \"60000\")"}, - { "key", 'k', "KEY", 0, "P2P node private key as 64 char hex string."}, - { "relay", 'r', "RELAY", 0, "Enable relay protocol: 1 or 0. (default: 1)"}, - { "peers", 'a', "PEERS", 0, "Comma-separated list of peer-multiaddress to connect\ + {"host", 'h', "HOST", 0, "IP to listen for for LibP2P traffic. (default: \"0.0.0.0\")"}, + {"port", 'p', "PORT", 0, "TCP listening port. (default: \"60000\")"}, + {"key", 'k', "KEY", 0, "P2P node private key as 64 char hex string."}, + {"relay", 'r', "RELAY", 0, "Enable relay protocol: 1 or 0. (default: 1)"}, + {"peers", 'a', "PEERS", 0, "Comma-separated list of peer-multiaddress to connect\ to. (default: \"\") e.g. 
\"/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\""}, - { 0 } -}; + {0}}; -static error_t parse_opt(int key, char *arg, struct argp_state *state) { +static error_t parse_opt(int key, char *arg, struct argp_state *state) +{ - struct ConfigNode *cfgNode = (ConfigNode *) state->input; - switch (key) { - case 'h': - snprintf(cfgNode->host, 128, "%s", arg); - break; - case 'p': - cfgNode->port = atoi(arg); - break; - case 'k': - snprintf(cfgNode->key, 128, "%s", arg); - break; - case 'r': - cfgNode->relay = atoi(arg); - break; - case 'a': - snprintf(cfgNode->peers, 2048, "%s", arg); - break; - case ARGP_KEY_ARG: - if (state->arg_num >= 1) /* Too many arguments. */ + struct ConfigNode *cfgNode = (ConfigNode *)state->input; + switch (key) + { + case 'h': + snprintf(cfgNode->host, 128, "%s", arg); + break; + case 'p': + cfgNode->port = atoi(arg); + break; + case 'k': + snprintf(cfgNode->key, 128, "%s", arg); + break; + case 'r': + cfgNode->relay = atoi(arg); + break; + case 'a': + snprintf(cfgNode->peers, 2048, "%s", arg); + break; + case ARGP_KEY_ARG: + if (state->arg_num >= 1) /* Too many arguments. 
*/ argp_usage(state); - break; - case ARGP_KEY_END: - break; - default: - return ARGP_ERR_UNKNOWN; - } + break; + case ARGP_KEY_END: + break; + default: + return ARGP_ERR_UNKNOWN; + } return 0; } -void event_handler(const char* msg, size_t len) { +void event_handler(const char *msg, size_t len) +{ printf("Receiving event: %s\n", msg); } -void handle_error(const char* msg, size_t len) { +void handle_error(const char *msg, size_t len) +{ printf("handle_error: %s\n", msg); exit(1); } template -auto cify(F&& f) { - static F fn = std::forward(f); - return [](int callerRet, const char* msg, size_t len, void* userData) { - signal_cond(); - return fn(msg, len); - }; +auto cify(F &&f) +{ + static F fn = std::forward(f); + return [](int callerRet, const char *msg, size_t len, void *userData) + { + signal_cond(); + return fn(msg, len); + }; } -static struct argp argp = { options, parse_opt, args_doc, doc, 0, 0, 0 }; +static struct argp argp = {options, parse_opt, args_doc, doc, 0, 0, 0}; // Beginning of UI program logic -enum PROGRAM_STATE { +enum PROGRAM_STATE +{ MAIN_MENU, SUBSCRIBE_TOPIC_MENU, CONNECT_TO_OTHER_NODE_MENU, @@ -131,18 +143,21 @@ enum PROGRAM_STATE { enum PROGRAM_STATE current_state = MAIN_MENU; -void show_main_menu() { +void show_main_menu() +{ printf("\nPlease, select an option:\n"); printf("\t1.) Subscribe to topic\n"); printf("\t2.) Connect to other node\n"); printf("\t3.) 
Publish a message\n"); } -void handle_user_input(void* ctx) { +void handle_user_input(void *ctx) +{ char cmd[1024]; memset(cmd, 0, 1024); int numRead = read(0, cmd, 1024); - if (numRead <= 0) { + if (numRead <= 0) + { return; } @@ -154,12 +169,11 @@ void handle_user_input(void* ctx) { char pubsubTopic[128]; scanf("%127s", pubsubTopic); - WAKU_CALL( waku_relay_subscribe(ctx, - pubsubTopic, - cify([&](const char* msg, size_t len) { - event_handler(msg, len); - }), - nullptr) ); + WAKU_CALL(waku_relay_subscribe(ctx, + cify([&](const char *msg, size_t len) + { event_handler(msg, len); }), + nullptr, + pubsubTopic)); printf("The subscription went well\n"); show_main_menu(); @@ -171,15 +185,14 @@ void handle_user_input(void* ctx) { printf("e.g.: /ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\n"); char peerAddr[512]; scanf("%511s", peerAddr); - WAKU_CALL( waku_connect(ctx, - peerAddr, - 10000 /* timeoutMs */, - cify([&](const char* msg, size_t len) { - event_handler(msg, len); - }), - nullptr)); + WAKU_CALL(waku_connect(ctx, + cify([&](const char *msg, size_t len) + { event_handler(msg, len); }), + nullptr, + peerAddr, + 10000 /* timeoutMs */)); show_main_menu(); - break; + break; case PUBLISH_MESSAGE_MENU: { @@ -193,28 +206,26 @@ void handle_user_input(void* ctx) { std::string contentTopic; waku_content_topic(ctx, + cify([&contentTopic](const char *msg, size_t len) + { contentTopic = msg; }), + nullptr, "appName", - 1, - "contentTopicName", - "encoding", - cify([&contentTopic](const char* msg, size_t len) { - contentTopic = msg; - }), - nullptr); + 1, + "contentTopicName", + "encoding"); snprintf(jsonWakuMsg, 2048, "{\"payload\":\"%s\",\"contentTopic\":\"%s\"}", msgPayload.data(), contentTopic.c_str()); - WAKU_CALL( waku_relay_publish(ctx, - "/waku/2/rs/16/32", - jsonWakuMsg, - 10000 /*timeout ms*/, - cify([&](const char* msg, size_t len) { - event_handler(msg, len); - }), - nullptr) ); + WAKU_CALL(waku_relay_publish(ctx, + 
cify([&](const char *msg, size_t len) + { event_handler(msg, len); }), + nullptr, + "/waku/2/rs/16/32", + jsonWakuMsg, + 10000 /*timeout ms*/)); show_main_menu(); } @@ -227,12 +238,14 @@ void handle_user_input(void* ctx) { // End of UI program logic -void show_help_and_exit() { +void show_help_and_exit() +{ printf("Wrong parameters\n"); exit(1); } -int main(int argc, char** argv) { +int main(int argc, char **argv) +{ struct ConfigNode cfgNode; // default values snprintf(cfgNode.host, 128, "0.0.0.0"); @@ -241,8 +254,8 @@ int main(int argc, char** argv) { cfgNode.port = 60000; cfgNode.relay = 1; - if (argp_parse(&argp, argc, argv, 0, 0, &cfgNode) - == ARGP_ERR_UNKNOWN) { + if (argp_parse(&argp, argc, argv, 0, 0, &cfgNode) == ARGP_ERR_UNKNOWN) + { show_help_and_exit(); } @@ -260,72 +273,64 @@ int main(int argc, char** argv) { \"discv5UdpPort\": 9999, \ \"dnsDiscoveryUrl\": \"enrtree://AMOJVZX4V6EXP7NTJPMAYJYST2QP6AJXYW76IU6VGJS7UVSNDYZG4@boot.prod.status.nodes.status.im\", \ \"dnsDiscoveryNameServers\": [\"8.8.8.8\", \"1.0.0.1\"] \ - }", cfgNode.host, - cfgNode.port); + }", + cfgNode.host, + cfgNode.port); - void* ctx = + void *ctx = waku_new(jsonConfig, - cify([](const char* msg, size_t len) { - std::cout << "waku_new feedback: " << msg << std::endl; - } - ), - nullptr - ); + cify([](const char *msg, size_t len) + { std::cout << "waku_new feedback: " << msg << std::endl; }), + nullptr); waitForCallback(); // example on how to retrieve a value from the `libwaku` callback. 
std::string defaultPubsubTopic; WAKU_CALL( waku_default_pubsub_topic( - ctx, - cify([&defaultPubsubTopic](const char* msg, size_t len) { - defaultPubsubTopic = msg; - } - ), - nullptr)); + ctx, + cify([&defaultPubsubTopic](const char *msg, size_t len) + { defaultPubsubTopic = msg; }), + nullptr)); std::cout << "Default pubsub topic: " << defaultPubsubTopic << std::endl; - WAKU_CALL(waku_version(ctx, - cify([&](const char* msg, size_t len) { - std::cout << "Git Version: " << msg << std::endl; - }), + WAKU_CALL(waku_version(ctx, + cify([&](const char *msg, size_t len) + { std::cout << "Git Version: " << msg << std::endl; }), nullptr)); printf("Bind addr: %s:%u\n", cfgNode.host, cfgNode.port); - printf("Waku Relay enabled: %s\n", cfgNode.relay == 1 ? "YES": "NO"); + printf("Waku Relay enabled: %s\n", cfgNode.relay == 1 ? "YES" : "NO"); std::string pubsubTopic; - WAKU_CALL(waku_pubsub_topic(ctx, - "example", - cify([&](const char* msg, size_t len) { - pubsubTopic = msg; - }), - nullptr)); + WAKU_CALL(waku_pubsub_topic(ctx, + cify([&](const char *msg, size_t len) + { pubsubTopic = msg; }), + nullptr, + "example")); std::cout << "Custom pubsub topic: " << pubsubTopic << std::endl; - waku_set_event_callback(ctx, - cify([&](const char* msg, size_t len) { - event_handler(msg, len); - }), - nullptr); + set_event_callback(ctx, + cify([&](const char *msg, size_t len) + { event_handler(msg, len); }), + nullptr); - WAKU_CALL( waku_start(ctx, - cify([&](const char* msg, size_t len) { - event_handler(msg, len); - }), - nullptr)); + WAKU_CALL(waku_start(ctx, + cify([&](const char *msg, size_t len) + { event_handler(msg, len); }), + nullptr)); - WAKU_CALL( waku_relay_subscribe(ctx, - defaultPubsubTopic.c_str(), - cify([&](const char* msg, size_t len) { - event_handler(msg, len); - }), - nullptr) ); + WAKU_CALL(waku_relay_subscribe(ctx, + cify([&](const char *msg, size_t len) + { event_handler(msg, len); }), + nullptr, + defaultPubsubTopic.c_str())); show_main_menu(); - while(1) { + 
while (1) + { handle_user_input(ctx); } } diff --git a/examples/golang/waku.go b/examples/golang/waku.go index 846362dfe..e205ecd09 100644 --- a/examples/golang/waku.go +++ b/examples/golang/waku.go @@ -71,32 +71,32 @@ package main static void* cGoWakuNew(const char* configJson, void* resp) { // We pass NULL because we are not interested in retrieving data from this callback - void* ret = waku_new(configJson, (WakuCallBack) callback, resp); + void* ret = waku_new(configJson, (FFICallBack) callback, resp); return ret; } static void cGoWakuStart(void* wakuCtx, void* resp) { - WAKU_CALL(waku_start(wakuCtx, (WakuCallBack) callback, resp)); + WAKU_CALL(waku_start(wakuCtx, (FFICallBack) callback, resp)); } static void cGoWakuStop(void* wakuCtx, void* resp) { - WAKU_CALL(waku_stop(wakuCtx, (WakuCallBack) callback, resp)); + WAKU_CALL(waku_stop(wakuCtx, (FFICallBack) callback, resp)); } static void cGoWakuDestroy(void* wakuCtx, void* resp) { - WAKU_CALL(waku_destroy(wakuCtx, (WakuCallBack) callback, resp)); + WAKU_CALL(waku_destroy(wakuCtx, (FFICallBack) callback, resp)); } static void cGoWakuStartDiscV5(void* wakuCtx, void* resp) { - WAKU_CALL(waku_start_discv5(wakuCtx, (WakuCallBack) callback, resp)); + WAKU_CALL(waku_start_discv5(wakuCtx, (FFICallBack) callback, resp)); } static void cGoWakuStopDiscV5(void* wakuCtx, void* resp) { - WAKU_CALL(waku_stop_discv5(wakuCtx, (WakuCallBack) callback, resp)); + WAKU_CALL(waku_stop_discv5(wakuCtx, (FFICallBack) callback, resp)); } static void cGoWakuVersion(void* wakuCtx, void* resp) { - WAKU_CALL(waku_version(wakuCtx, (WakuCallBack) callback, resp)); + WAKU_CALL(waku_version(wakuCtx, (FFICallBack) callback, resp)); } static void cGoWakuSetEventCallback(void* wakuCtx) { @@ -112,7 +112,7 @@ package main // This technique is needed because cgo only allows to export Go functions and not methods. 
- waku_set_event_callback(wakuCtx, (WakuCallBack) globalEventCallback, wakuCtx); + set_event_callback(wakuCtx, (FFICallBack) globalEventCallback, wakuCtx); } static void cGoWakuContentTopic(void* wakuCtx, @@ -123,20 +123,21 @@ package main void* resp) { WAKU_CALL( waku_content_topic(wakuCtx, + (FFICallBack) callback, + resp, appName, appVersion, contentTopicName, - encoding, - (WakuCallBack) callback, - resp) ); + encoding + ) ); } static void cGoWakuPubsubTopic(void* wakuCtx, char* topicName, void* resp) { - WAKU_CALL( waku_pubsub_topic(wakuCtx, topicName, (WakuCallBack) callback, resp) ); + WAKU_CALL( waku_pubsub_topic(wakuCtx, (FFICallBack) callback, resp, topicName) ); } static void cGoWakuDefaultPubsubTopic(void* wakuCtx, void* resp) { - WAKU_CALL (waku_default_pubsub_topic(wakuCtx, (WakuCallBack) callback, resp)); + WAKU_CALL (waku_default_pubsub_topic(wakuCtx, (FFICallBack) callback, resp)); } static void cGoWakuRelayPublish(void* wakuCtx, @@ -146,34 +147,36 @@ package main void* resp) { WAKU_CALL (waku_relay_publish(wakuCtx, + (FFICallBack) callback, + resp, pubSubTopic, jsonWakuMessage, - timeoutMs, - (WakuCallBack) callback, - resp)); + timeoutMs + )); } static void cGoWakuRelaySubscribe(void* wakuCtx, char* pubSubTopic, void* resp) { WAKU_CALL ( waku_relay_subscribe(wakuCtx, - pubSubTopic, - (WakuCallBack) callback, - resp) ); + (FFICallBack) callback, + resp, + pubSubTopic) ); } static void cGoWakuRelayUnsubscribe(void* wakuCtx, char* pubSubTopic, void* resp) { WAKU_CALL ( waku_relay_unsubscribe(wakuCtx, - pubSubTopic, - (WakuCallBack) callback, - resp) ); + (FFICallBack) callback, + resp, + pubSubTopic) ); } static void cGoWakuConnect(void* wakuCtx, char* peerMultiAddr, int timeoutMs, void* resp) { WAKU_CALL( waku_connect(wakuCtx, + (FFICallBack) callback, + resp, peerMultiAddr, - timeoutMs, - (WakuCallBack) callback, - resp) ); + timeoutMs + ) ); } static void cGoWakuDialPeerById(void* wakuCtx, @@ -183,42 +186,44 @@ package main void* resp) { 
WAKU_CALL( waku_dial_peer_by_id(wakuCtx, + (FFICallBack) callback, + resp, peerId, protocol, - timeoutMs, - (WakuCallBack) callback, - resp) ); + timeoutMs + ) ); } static void cGoWakuDisconnectPeerById(void* wakuCtx, char* peerId, void* resp) { WAKU_CALL( waku_disconnect_peer_by_id(wakuCtx, - peerId, - (WakuCallBack) callback, - resp) ); + (FFICallBack) callback, + resp, + peerId + ) ); } static void cGoWakuListenAddresses(void* wakuCtx, void* resp) { - WAKU_CALL (waku_listen_addresses(wakuCtx, (WakuCallBack) callback, resp) ); + WAKU_CALL (waku_listen_addresses(wakuCtx, (FFICallBack) callback, resp) ); } static void cGoWakuGetMyENR(void* ctx, void* resp) { - WAKU_CALL (waku_get_my_enr(ctx, (WakuCallBack) callback, resp) ); + WAKU_CALL (waku_get_my_enr(ctx, (FFICallBack) callback, resp) ); } static void cGoWakuGetMyPeerId(void* ctx, void* resp) { - WAKU_CALL (waku_get_my_peerid(ctx, (WakuCallBack) callback, resp) ); + WAKU_CALL (waku_get_my_peerid(ctx, (FFICallBack) callback, resp) ); } static void cGoWakuListPeersInMesh(void* ctx, char* pubSubTopic, void* resp) { - WAKU_CALL (waku_relay_get_num_peers_in_mesh(ctx, pubSubTopic, (WakuCallBack) callback, resp) ); + WAKU_CALL (waku_relay_get_num_peers_in_mesh(ctx, (FFICallBack) callback, resp, pubSubTopic) ); } static void cGoWakuGetNumConnectedPeers(void* ctx, char* pubSubTopic, void* resp) { - WAKU_CALL (waku_relay_get_num_connected_peers(ctx, pubSubTopic, (WakuCallBack) callback, resp) ); + WAKU_CALL (waku_relay_get_num_connected_peers(ctx, (FFICallBack) callback, resp, pubSubTopic) ); } static void cGoWakuGetPeerIdsFromPeerStore(void* wakuCtx, void* resp) { - WAKU_CALL (waku_get_peerids_from_peerstore(wakuCtx, (WakuCallBack) callback, resp) ); + WAKU_CALL (waku_get_peerids_from_peerstore(wakuCtx, (FFICallBack) callback, resp) ); } static void cGoWakuLightpushPublish(void* wakuCtx, @@ -227,10 +232,11 @@ package main void* resp) { WAKU_CALL (waku_lightpush_publish(wakuCtx, + (FFICallBack) callback, + resp, 
pubSubTopic, - jsonWakuMessage, - (WakuCallBack) callback, - resp)); + jsonWakuMessage + )); } static void cGoWakuStoreQuery(void* wakuCtx, @@ -240,11 +246,12 @@ package main void* resp) { WAKU_CALL (waku_store_query(wakuCtx, + (FFICallBack) callback, + resp, jsonQuery, peerAddr, - timeoutMs, - (WakuCallBack) callback, - resp)); + timeoutMs + )); } static void cGoWakuPeerExchangeQuery(void* wakuCtx, @@ -252,9 +259,10 @@ package main void* resp) { WAKU_CALL (waku_peer_exchange_request(wakuCtx, - numPeers, - (WakuCallBack) callback, - resp)); + (FFICallBack) callback, + resp, + numPeers + )); } static void cGoWakuGetPeerIdsByProtocol(void* wakuCtx, @@ -262,9 +270,10 @@ package main void* resp) { WAKU_CALL (waku_get_peerids_by_protocol(wakuCtx, - protocol, - (WakuCallBack) callback, - resp)); + (FFICallBack) callback, + resp, + protocol + )); } */ diff --git a/examples/python/waku.py b/examples/python/waku.py index 4d5f5643e..65eb5d750 100644 --- a/examples/python/waku.py +++ b/examples/python/waku.py @@ -102,8 +102,8 @@ print("Waku Relay enabled: {}".format(args.relay)) # Set the event callback callback = callback_type(handle_event) # This line is important so that the callback is not gc'ed -libwaku.waku_set_event_callback.argtypes = [callback_type, ctypes.c_void_p] -libwaku.waku_set_event_callback(callback, ctypes.c_void_p(0)) +libwaku.set_event_callback.argtypes = [callback_type, ctypes.c_void_p] +libwaku.set_event_callback(callback, ctypes.c_void_p(0)) # Start the node libwaku.waku_start.argtypes = [ctypes.c_void_p, @@ -117,32 +117,32 @@ libwaku.waku_start(ctx, # Subscribe to the default pubsub topic libwaku.waku_relay_subscribe.argtypes = [ctypes.c_void_p, - ctypes.c_char_p, callback_type, - ctypes.c_void_p] + ctypes.c_void_p, + ctypes.c_char_p] libwaku.waku_relay_subscribe(ctx, - default_pubsub_topic.encode('utf-8'), callback_type( #onErrCb lambda ret, msg, len: print("Error calling waku_relay_subscribe: %s" % msg.decode('utf-8')) ), - ctypes.c_void_p(0)) + 
ctypes.c_void_p(0), + default_pubsub_topic.encode('utf-8')) libwaku.waku_connect.argtypes = [ctypes.c_void_p, - ctypes.c_char_p, - ctypes.c_int, callback_type, - ctypes.c_void_p] + ctypes.c_void_p, + ctypes.c_char_p, + ctypes.c_int] libwaku.waku_connect(ctx, - args.peer.encode('utf-8'), - 10000, # onErrCb callback_type( lambda ret, msg, len: print("Error calling waku_connect: %s" % msg.decode('utf-8'))), - ctypes.c_void_p(0)) + ctypes.c_void_p(0), + args.peer.encode('utf-8'), + 10000) # app = Flask(__name__) # @app.route("/") diff --git a/examples/qt/waku_handler.h b/examples/qt/waku_handler.h index 161a17c82..2fb3ce3b7 100644 --- a/examples/qt/waku_handler.h +++ b/examples/qt/waku_handler.h @@ -27,7 +27,7 @@ public: void initialize(const QString& jsonConfig, WakuCallBack event_handler, void* userData) { ctx = waku_new(jsonConfig.toUtf8().constData(), WakuCallBack(event_handler), userData); - waku_set_event_callback(ctx, on_event_received, userData); + set_event_callback(ctx, on_event_received, userData); qDebug() << "Waku context initialized, ready to start."; } diff --git a/examples/rust/src/main.rs b/examples/rust/src/main.rs index 926d0e3b0..d26e9627e 100644 --- a/examples/rust/src/main.rs +++ b/examples/rust/src/main.rs @@ -3,22 +3,22 @@ use std::ffi::CString; use std::os::raw::{c_char, c_int, c_void}; use std::{slice, thread, time}; -pub type WakuCallback = unsafe extern "C" fn(c_int, *const c_char, usize, *const c_void); +pub type FFICallBack = unsafe extern "C" fn(c_int, *const c_char, usize, *const c_void); extern "C" { pub fn waku_new( config_json: *const u8, - cb: WakuCallback, + cb: FFICallBack, user_data: *const c_void, ) -> *mut c_void; - pub fn waku_version(ctx: *const c_void, cb: WakuCallback, user_data: *const c_void) -> c_int; + pub fn waku_version(ctx: *const c_void, cb: FFICallBack, user_data: *const c_void) -> c_int; - pub fn waku_start(ctx: *const c_void, cb: WakuCallback, user_data: *const c_void) -> c_int; + pub fn waku_start(ctx: *const 
c_void, cb: FFICallBack, user_data: *const c_void) -> c_int; pub fn waku_default_pubsub_topic( ctx: *mut c_void, - cb: WakuCallback, + cb: FFICallBack, user_data: *const c_void, ) -> *mut c_void; } @@ -40,7 +40,7 @@ pub unsafe extern "C" fn trampoline( closure(return_val, &buffer_utf8); } -pub fn get_trampoline(_closure: &C) -> WakuCallback +pub fn get_trampoline(_closure: &C) -> FFICallBack where C: FnMut(i32, &str), { diff --git a/library/alloc.nim b/library/alloc.nim deleted file mode 100644 index 1a6f118b5..000000000 --- a/library/alloc.nim +++ /dev/null @@ -1,42 +0,0 @@ -## Can be shared safely between threads -type SharedSeq*[T] = tuple[data: ptr UncheckedArray[T], len: int] - -proc alloc*(str: cstring): cstring = - # Byte allocation from the given address. - # There should be the corresponding manual deallocation with deallocShared ! - if str.isNil(): - var ret = cast[cstring](allocShared(1)) # Allocate memory for the null terminator - ret[0] = '\0' # Set the null terminator - return ret - - let ret = cast[cstring](allocShared(len(str) + 1)) - copyMem(ret, str, len(str) + 1) - return ret - -proc alloc*(str: string): cstring = - ## Byte allocation from the given address. - ## There should be the corresponding manual deallocation with deallocShared ! - var ret = cast[cstring](allocShared(str.len + 1)) - let s = cast[seq[char]](str) - for i in 0 ..< str.len: - ret[i] = s[i] - ret[str.len] = '\0' - return ret - -proc allocSharedSeq*[T](s: seq[T]): SharedSeq[T] = - let data = allocShared(sizeof(T) * s.len) - if s.len != 0: - copyMem(data, unsafeAddr s[0], s.len) - return (cast[ptr UncheckedArray[T]](data), s.len) - -proc deallocSharedSeq*[T](s: var SharedSeq[T]) = - deallocShared(s.data) - s.len = 0 - -proc toSeq*[T](s: SharedSeq[T]): seq[T] = - ## Creates a seq[T] from a SharedSeq[T]. No explicit dealloc is required - ## as req[T] is a GC managed type. 
- var ret = newSeq[T]() - for i in 0 ..< s.len: - ret.add(s.data[i]) - return ret diff --git a/library/declare_lib.nim b/library/declare_lib.nim new file mode 100644 index 000000000..188de8549 --- /dev/null +++ b/library/declare_lib.nim @@ -0,0 +1,10 @@ +import ffi +import waku/factory/waku + +declareLibrary("waku") + +proc set_event_callback( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +) {.dynlib, exportc, cdecl.} = + ctx[].eventCallback = cast[pointer](callback) + ctx[].eventUserData = userData diff --git a/library/events/json_waku_not_responding_event.nim b/library/events/json_waku_not_responding_event.nim deleted file mode 100644 index 1e1d5fcc5..000000000 --- a/library/events/json_waku_not_responding_event.nim +++ /dev/null @@ -1,9 +0,0 @@ -import system, std/json, ./json_base_event - -type JsonWakuNotRespondingEvent* = ref object of JsonEvent - -proc new*(T: type JsonWakuNotRespondingEvent): T = - return JsonWakuNotRespondingEvent(eventType: "waku_not_responding") - -method `$`*(event: JsonWakuNotRespondingEvent): string = - $(%*event) diff --git a/library/ffi_types.nim b/library/ffi_types.nim deleted file mode 100644 index a5eeb9711..000000000 --- a/library/ffi_types.nim +++ /dev/null @@ -1,30 +0,0 @@ -################################################################################ -### Exported types - -type WakuCallBack* = proc( - callerRet: cint, msg: ptr cchar, len: csize_t, userData: pointer -) {.cdecl, gcsafe, raises: [].} - -const RET_OK*: cint = 0 -const RET_ERR*: cint = 1 -const RET_MISSING_CALLBACK*: cint = 2 - -### End of exported types -################################################################################ - -################################################################################ -### FFI utils - -template foreignThreadGc*(body: untyped) = - when declared(setupForeignThreadGc): - setupForeignThreadGc() - - body - - when declared(tearDownForeignThreadGc): - tearDownForeignThreadGc() - -type onDone* = 
proc() - -### End of FFI utils -################################################################################ diff --git a/library/kernel_api/debug_node_api.nim b/library/kernel_api/debug_node_api.nim new file mode 100644 index 000000000..98f5332b4 --- /dev/null +++ b/library/kernel_api/debug_node_api.nim @@ -0,0 +1,49 @@ +import std/json +import + chronicles, + chronos, + results, + eth/p2p/discoveryv5/enr, + strutils, + libp2p/peerid, + metrics, + ffi +import waku/factory/waku, waku/node/waku_node, waku/node/health_monitor, library/declare_lib + +proc getMultiaddresses(node: WakuNode): seq[string] = + return node.info().listenAddresses + +proc getMetrics(): string = + {.gcsafe.}: + return defaultRegistry.toText() ## defaultRegistry is {.global.} in metrics module + +proc waku_version( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +) {.ffi.} = + return ok(WakuNodeVersionString) + +proc waku_listen_addresses( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +) {.ffi.} = + ## returns a comma-separated string of the listen addresses + return ok(ctx.myLib[].node.getMultiaddresses().join(",")) + +proc waku_get_my_enr( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +) {.ffi.} = + return ok(ctx.myLib[].node.enr.toURI()) + +proc waku_get_my_peerid( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +) {.ffi.} = + return ok($ctx.myLib[].node.peerId()) + +proc waku_get_metrics( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +) {.ffi.} = + return ok(getMetrics()) + +proc waku_is_online( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +) {.ffi.} = + return ok($ctx.myLib[].healthMonitor.onlineMonitor.amIOnline()) diff --git a/library/kernel_api/discovery_api.nim b/library/kernel_api/discovery_api.nim new file mode 100644 index 000000000..f61b7bad1 --- /dev/null +++ b/library/kernel_api/discovery_api.nim @@ -0,0 +1,96 @@ +import std/json +import 
chronos, chronicles, results, strutils, libp2p/multiaddress, ffi +import + waku/factory/waku, + waku/discovery/waku_dnsdisc, + waku/discovery/waku_discv5, + waku/waku_core/peers, + waku/node/waku_node, + waku/node/kernel_api, + library/declare_lib + +proc retrieveBootstrapNodes( + enrTreeUrl: string, ipDnsServer: string +): Future[Result[seq[string], string]] {.async.} = + let dnsNameServers = @[parseIpAddress(ipDnsServer)] + let discoveredPeers: seq[RemotePeerInfo] = ( + await retrieveDynamicBootstrapNodes(enrTreeUrl, dnsNameServers) + ).valueOr: + return err("failed discovering peers from DNS: " & $error) + + var multiAddresses = newSeq[string]() + + for discPeer in discoveredPeers: + for address in discPeer.addrs: + multiAddresses.add($address & "/p2p/" & $discPeer) + + return ok(multiAddresses) + +proc updateDiscv5BootstrapNodes(nodes: string, waku: Waku): Result[void, string] = + waku.wakuDiscv5.updateBootstrapRecords(nodes).isOkOr: + return err("error in updateDiscv5BootstrapNodes: " & $error) + return ok() + +proc performPeerExchangeRequestTo*( + numPeers: uint64, waku: Waku +): Future[Result[int, string]] {.async.} = + let numPeersRecv = (await waku.node.fetchPeerExchangePeers(numPeers)).valueOr: + return err($error) + return ok(numPeersRecv) + +proc waku_discv5_update_bootnodes( + ctx: ptr FFIContext[Waku], + callback: FFICallBack, + userData: pointer, + bootnodes: cstring, +) {.ffi.} = + ## Updates the bootnode list used for discovering new peers via DiscoveryV5 + ## bootnodes - JSON array containing the bootnode ENRs i.e. 
`["enr:...", "enr:..."]` + + updateDiscv5BootstrapNodes($bootnodes, ctx.myLib[]).isOkOr: + error "UPDATE_DISCV5_BOOTSTRAP_NODES failed", error = error + return err($error) + + return ok("discovery request processed correctly") + +proc waku_dns_discovery( + ctx: ptr FFIContext[Waku], + callback: FFICallBack, + userData: pointer, + enrTreeUrl: cstring, + nameDnsServer: cstring, + timeoutMs: cint, +) {.ffi.} = + let nodes = (await retrieveBootstrapNodes($enrTreeUrl, $nameDnsServer)).valueOr: + error "GET_BOOTSTRAP_NODES failed", error = error + return err($error) + + ## returns a comma-separated string of bootstrap nodes' multiaddresses + return ok(nodes.join(",")) + +proc waku_start_discv5( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +) {.ffi.} = + (await ctx.myLib[].wakuDiscv5.start()).isOkOr: + error "START_DISCV5 failed", error = error + return err("error starting discv5: " & $error) + + return ok("discv5 started correctly") + +proc waku_stop_discv5( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +) {.ffi.} = + await ctx.myLib[].wakuDiscv5.stop() + return ok("discv5 stopped correctly") + +proc waku_peer_exchange_request( + ctx: ptr FFIContext[Waku], + callback: FFICallBack, + userData: pointer, + numPeers: uint64, +) {.ffi.} = + let numValidPeers = (await performPeerExchangeRequestTo(numPeers, ctx.myLib[])).valueOr: + error "waku_peer_exchange_request failed", error = error + return err("failed peer exchange: " & $error) + + return ok($numValidPeers) diff --git a/library/waku_thread_requests/requests/node_lifecycle_request.nim b/library/kernel_api/node_lifecycle_api.nim similarity index 60% rename from library/waku_thread_requests/requests/node_lifecycle_request.nim rename to library/kernel_api/node_lifecycle_api.nim index aa71ac6bb..a2bb25609 100644 --- a/library/waku_thread_requests/requests/node_lifecycle_request.nim +++ b/library/kernel_api/node_lifecycle_api.nim @@ -1,43 +1,14 @@ import std/[options, json, 
strutils, net] -import chronos, chronicles, results, confutils, confutils/std/net +import chronos, chronicles, results, confutils, confutils/std/net, ffi import waku/node/peer_manager/peer_manager, tools/confutils/cli_args, waku/factory/waku, waku/factory/node_factory, - waku/factory/networks_config, waku/factory/app_callbacks, - waku/rest_api/endpoint/builder - -import - ../../alloc - -type NodeLifecycleMsgType* = enum - CREATE_NODE - START_NODE - STOP_NODE - -type NodeLifecycleRequest* = object - operation: NodeLifecycleMsgType - configJson: cstring ## Only used in 'CREATE_NODE' operation - appCallbacks: AppCallbacks - -proc createShared*( - T: type NodeLifecycleRequest, - op: NodeLifecycleMsgType, - configJson: cstring = "", - appCallbacks: AppCallbacks = nil, -): ptr type T = - var ret = createShared(T) - ret[].operation = op - ret[].appCallbacks = appCallbacks - ret[].configJson = configJson.alloc() - return ret - -proc destroyShared(self: ptr NodeLifecycleRequest) = - deallocShared(self[].configJson) - deallocShared(self) + waku/rest_api/endpoint/builder, + library/declare_lib proc createWaku( configJson: cstring, appCallbacks: AppCallbacks = nil @@ -87,26 +58,30 @@ proc createWaku( return ok(wakuRes) -proc process*( - self: ptr NodeLifecycleRequest, waku: ptr Waku -): Future[Result[string, string]] {.async.} = - defer: - destroyShared(self) - - case self.operation - of CREATE_NODE: - waku[] = (await createWaku(self.configJson, self.appCallbacks)).valueOr: - error "CREATE_NODE failed", error = error +registerReqFFI(CreateNodeRequest, ctx: ptr FFIContext[Waku]): + proc( + configJson: cstring, appCallbacks: AppCallbacks + ): Future[Result[string, string]] {.async.} = + ctx.myLib[] = (await createWaku(configJson, cast[AppCallbacks](appCallbacks))).valueOr: + error "CreateNodeRequest failed", error = error return err($error) - of START_NODE: - (await waku.startWaku()).isOkOr: - error "START_NODE failed", error = error - return err($error) - of STOP_NODE: - try: - 
await waku[].stop() - except Exception: - error "STOP_NODE failed", error = getCurrentExceptionMsg() - return err(getCurrentExceptionMsg()) + return ok("") + +proc waku_start( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +) {.ffi.} = + (await startWaku(ctx[].myLib)).isOkOr: + error "START_NODE failed", error = error + return err("failed to start: " & $error) + return ok("") + +proc waku_stop( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +) {.ffi.} = + try: + await ctx.myLib[].stop() + except Exception as exc: + error "STOP_NODE failed", error = exc.msg + return err("failed to stop: " & exc.msg) return ok("") diff --git a/library/kernel_api/peer_manager_api.nim b/library/kernel_api/peer_manager_api.nim new file mode 100644 index 000000000..f0ae37f00 --- /dev/null +++ b/library/kernel_api/peer_manager_api.nim @@ -0,0 +1,123 @@ +import std/[sequtils, strutils, tables] +import chronicles, chronos, results, options, json, ffi +import waku/factory/waku, waku/node/waku_node, waku/node/peer_manager, ../declare_lib + +type PeerInfo = object + protocols: seq[string] + addresses: seq[string] + +proc waku_get_peerids_from_peerstore( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +) {.ffi.} = + ## returns a comma-separated string of peerIDs + let peerIDs = + ctx.myLib[].node.peerManager.switch.peerStore.peers().mapIt($it.peerId).join(",") + return ok(peerIDs) + +proc waku_connect( + ctx: ptr FFIContext[Waku], + callback: FFICallBack, + userData: pointer, + peerMultiAddr: cstring, + timeoutMs: cuint, +) {.ffi.} = + let peers = ($peerMultiAddr).split(",").mapIt(strip(it)) + await ctx.myLib[].node.connectToNodes(peers, source = "static") + return ok("") + +proc waku_disconnect_peer_by_id( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer, peerId: cstring +) {.ffi.} = + let pId = PeerId.init($peerId).valueOr: + error "DISCONNECT_PEER_BY_ID failed", error = $error + return err($error) + 
await ctx.myLib[].node.peerManager.disconnectNode(pId) + return ok("") + +proc waku_disconnect_all_peers( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +) {.ffi.} = + await ctx.myLib[].node.peerManager.disconnectAllPeers() + return ok("") + +proc waku_dial_peer( + ctx: ptr FFIContext[Waku], + callback: FFICallBack, + userData: pointer, + peerMultiAddr: cstring, + protocol: cstring, + timeoutMs: cuint, +) {.ffi.} = + let remotePeerInfo = parsePeerInfo($peerMultiAddr).valueOr: + error "DIAL_PEER failed", error = $error + return err($error) + let conn = await ctx.myLib[].node.peerManager.dialPeer(remotePeerInfo, $protocol) + if conn.isNone(): + let msg = "failed dialing peer" + error "DIAL_PEER failed", error = msg, peerId = $remotePeerInfo.peerId + return err(msg) + return ok("") + +proc waku_dial_peer_by_id( + ctx: ptr FFIContext[Waku], + callback: FFICallBack, + userData: pointer, + peerId: cstring, + protocol: cstring, + timeoutMs: cuint, +) {.ffi.} = + let pId = PeerId.init($peerId).valueOr: + error "DIAL_PEER_BY_ID failed", error = $error + return err($error) + let conn = await ctx.myLib[].node.peerManager.dialPeer(pId, $protocol) + if conn.isNone(): + let msg = "failed dialing peer" + error "DIAL_PEER_BY_ID failed", error = msg, peerId = $peerId + return err(msg) + + return ok("") + +proc waku_get_connected_peers_info( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +) {.ffi.} = + ## returns a JSON string mapping peerIDs to objects with protocols and addresses + + var peersMap = initTable[string, PeerInfo]() + let peers = ctx.myLib[].node.peerManager.switch.peerStore.peers().filterIt( + it.connectedness == Connected + ) + + # Build a map of peer IDs to peer info objects + for peer in peers: + let peerIdStr = $peer.peerId + peersMap[peerIdStr] = + PeerInfo(protocols: peer.protocols, addresses: peer.addrs.mapIt($it)) + + # Convert the map to JSON string + let jsonObj = %*peersMap + let jsonStr = $jsonObj + return 
ok(jsonStr) + +proc waku_get_connected_peers( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +) {.ffi.} = + ## returns a comma-separated string of peerIDs + let + (inPeerIds, outPeerIds) = ctx.myLib[].node.peerManager.connectedPeers() + connectedPeerids = concat(inPeerIds, outPeerIds) + + return ok(connectedPeerids.mapIt($it).join(",")) + +proc waku_get_peerids_by_protocol( + ctx: ptr FFIContext[Waku], + callback: FFICallBack, + userData: pointer, + protocol: cstring, +) {.ffi.} = + ## returns a comma-separated string of peerIDs that mount the given protocol + let connectedPeers = ctx.myLib[].node.peerManager.switch.peerStore + .peers($protocol) + .filterIt(it.connectedness == Connected) + .mapIt($it.peerId) + .join(",") + return ok(connectedPeers) diff --git a/library/kernel_api/ping_api.nim b/library/kernel_api/ping_api.nim new file mode 100644 index 000000000..4f10dcf59 --- /dev/null +++ b/library/kernel_api/ping_api.nim @@ -0,0 +1,43 @@ +import std/[json, strutils] +import chronos, results, ffi +import libp2p/[protocols/ping, switch, multiaddress, multicodec] +import waku/[factory/waku, waku_core/peers, node/waku_node], library/declare_lib + +proc waku_ping_peer( + ctx: ptr FFIContext[Waku], + callback: FFICallBack, + userData: pointer, + peerAddr: cstring, + timeoutMs: cuint, +) {.ffi.} = + let peerInfo = peers.parsePeerInfo(($peerAddr).split(",")).valueOr: + return err("PingRequest failed to parse peer addr: " & $error) + + let timeout = chronos.milliseconds(timeoutMs) + proc ping(): Future[Result[Duration, string]] {.async, gcsafe.} = + try: + let conn = + await ctx.myLib[].node.switch.dial(peerInfo.peerId, peerInfo.addrs, PingCodec) + defer: + await conn.close() + + let pingRTT = await ctx.myLib[].node.libp2pPing.ping(conn) + if pingRTT == 0.nanos: + return err("could not ping peer: rtt-0") + return ok(pingRTT) + except CatchableError as exc: + return err("could not ping peer: " & exc.msg) + + let pingFuture = ping() + let pingRTT: 
Duration = + if timeout == chronos.milliseconds(0): # No timeout expected + (await pingFuture).valueOr: + return err("ping failed, no timeout expected: " & error) + else: + let timedOut = not (await pingFuture.withTimeout(timeout)) + if timedOut: + return err("ping timed out") + pingFuture.read().valueOr: + return err("failed to read ping future: " & error) + + return ok($(pingRTT.nanos)) diff --git a/library/kernel_api/protocols/filter_api.nim b/library/kernel_api/protocols/filter_api.nim new file mode 100644 index 000000000..c4f99510a --- /dev/null +++ b/library/kernel_api/protocols/filter_api.nim @@ -0,0 +1,109 @@ +import options, std/[strutils, sequtils] +import chronicles, chronos, results, ffi +import + waku/waku_filter_v2/client, + waku/waku_core/message/message, + waku/factory/waku, + waku/waku_relay, + waku/waku_filter_v2/common, + waku/waku_core/subscription/push_handler, + waku/node/peer_manager/peer_manager, + waku/node/waku_node, + waku/node/kernel_api, + waku/waku_core/topics/pubsub_topic, + waku/waku_core/topics/content_topic, + library/events/json_message_event, + library/declare_lib + +const FilterOpTimeout = 5.seconds + +proc checkFilterClientMounted(waku: Waku): Result[string, string] = + if waku.node.wakuFilterClient.isNil(): + let errorMsg = "wakuFilterClient is not mounted" + error "fail filter process", error = errorMsg + return err(errorMsg) + return ok("") + +proc waku_filter_subscribe( + ctx: ptr FFIContext[Waku], + callback: FFICallBack, + userData: pointer, + pubSubTopic: cstring, + contentTopics: cstring, +) {.ffi.} = + proc onReceivedMessage(ctx: ptr FFIContext): WakuRelayHandler = + return proc(pubsubTopic: PubsubTopic, msg: WakuMessage) {.async.} = + callEventCallback(ctx, "onReceivedMessage"): + $JsonMessageEvent.new(pubsubTopic, msg) + + checkFilterClientMounted(ctx.myLib[]).isOkOr: + return err($error) + + var filterPushEventCallback = FilterPushHandler(onReceivedMessage(ctx)) + 
ctx.myLib[].node.wakuFilterClient.registerPushHandler(filterPushEventCallback)
+
+  let peer = ctx.myLib[].node.peerManager.selectPeer(WakuFilterSubscribeCodec).valueOr:
+    let errorMsg = "could not find peer with WakuFilterSubscribeCodec when subscribing"
+    error "fail filter subscribe", error = errorMsg
+    return err(errorMsg)
+
+  let subFut = ctx.myLib[].node.filterSubscribe(
+    some(PubsubTopic($pubsubTopic)),
+    ($contentTopics).split(",").mapIt(ContentTopic(it)),
+    peer,
+  )
+  if not await subFut.withTimeout(FilterOpTimeout):
+    let errorMsg = "filter subscription timed out"
+    error "fail filter subscribe", error = errorMsg
+
+    return err(errorMsg)
+
+  return ok("")
+
+proc waku_filter_unsubscribe(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    pubSubTopic: cstring,
+    contentTopics: cstring,
+) {.ffi.} =
+  checkFilterClientMounted(ctx.myLib[]).isOkOr:
+    return err($error)
+
+  let peer = ctx.myLib[].node.peerManager.selectPeer(WakuFilterSubscribeCodec).valueOr:
+    let errorMsg =
+      "could not find peer with WakuFilterSubscribeCodec when unsubscribing"
+    error "fail filter process", error = errorMsg
+    return err(errorMsg)
+
+  let subFut = ctx.myLib[].node.filterUnsubscribe(
+    some(PubsubTopic($pubsubTopic)),
+    ($contentTopics).split(",").mapIt(ContentTopic(it)),
+    peer,
+  )
+  if not await subFut.withTimeout(FilterOpTimeout):
+    let errorMsg = "filter unsubscription timed out"
+    error "fail filter unsubscribe", error = errorMsg
+    return err(errorMsg)
+  return ok("")
+
+proc waku_filter_unsubscribe_all(
+    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
+) {.ffi.} =
+  checkFilterClientMounted(ctx.myLib[]).isOkOr:
+    return err($error)
+
+  let peer = ctx.myLib[].node.peerManager.selectPeer(WakuFilterSubscribeCodec).valueOr:
+    let errorMsg =
+      "could not find peer with WakuFilterSubscribeCodec when unsubscribing all"
+    error "fail filter unsubscribe all", error = errorMsg
+    return err(errorMsg)
+
+  let unsubFut = 
ctx.myLib[].node.filterUnsubscribeAll(peer) + + if not await unsubFut.withTimeout(FilterOpTimeout): + let errorMsg = "filter un-subscription all timed out" + error "fail filter unsubscribe all", error = errorMsg + + return err(errorMsg) + return ok("") diff --git a/library/kernel_api/protocols/lightpush_api.nim b/library/kernel_api/protocols/lightpush_api.nim new file mode 100644 index 000000000..e9251a3f3 --- /dev/null +++ b/library/kernel_api/protocols/lightpush_api.nim @@ -0,0 +1,51 @@ +import options, std/[json, strformat] +import chronicles, chronos, results, ffi +import + waku/waku_core/message/message, + waku/waku_core/codecs, + waku/factory/waku, + waku/waku_core/message, + waku/waku_core/topics/pubsub_topic, + waku/waku_lightpush_legacy/client, + waku/node/peer_manager/peer_manager, + library/events/json_message_event, + library/declare_lib + +proc waku_lightpush_publish( + ctx: ptr FFIContext[Waku], + callback: FFICallBack, + userData: pointer, + pubSubTopic: cstring, + jsonWakuMessage: cstring, +) {.ffi.} = + if ctx.myLib[].node.wakuLightpushClient.isNil(): + let errorMsg = "LightpushRequest waku.node.wakuLightpushClient is nil" + error "PUBLISH failed", error = errorMsg + return err(errorMsg) + + var jsonMessage: JsonMessage + try: + let jsonContent = parseJson($jsonWakuMessage) + jsonMessage = JsonMessage.fromJsonNode(jsonContent).valueOr: + raise newException(JsonParsingError, $error) + except JsonParsingError as exc: + return err(fmt"Error parsing json message: {exc.msg}") + + let msg = json_message_event.toWakuMessage(jsonMessage).valueOr: + return err("Problem building the WakuMessage: " & $error) + + let peerOpt = ctx.myLib[].node.peerManager.selectPeer(WakuLightPushCodec) + if peerOpt.isNone(): + let errorMsg = "failed to lightpublish message, no suitable remote peers" + error "PUBLISH failed", error = errorMsg + return err(errorMsg) + + let msgHashHex = ( + await ctx.myLib[].node.wakuLegacyLightpushClient.publish( + $pubsubTopic, msg, peer = 
peerOpt.get()
+    )
+  ).valueOr:
+    error "PUBLISH failed", error = error
+    return err($error)
+
+  return ok(msgHashHex)
diff --git a/library/kernel_api/protocols/relay_api.nim b/library/kernel_api/protocols/relay_api.nim
new file mode 100644
index 000000000..b184d6011
--- /dev/null
+++ b/library/kernel_api/protocols/relay_api.nim
@@ -0,0 +1,171 @@
+import std/[net, sequtils, strutils, json], strformat
+import chronicles, chronos, stew/byteutils, results, ffi
+import
+  waku/waku_core/message/message,
+  waku/factory/[validator_signed, waku],
+  tools/confutils/cli_args,
+  waku/waku_core/message,
+  waku/waku_core/topics/pubsub_topic,
+  waku/waku_core/topics,
+  waku/node/kernel_api/relay,
+  waku/waku_relay/protocol,
+  waku/node/peer_manager,
+  library/events/json_message_event,
+  library/declare_lib
+
+proc waku_relay_get_peers_in_mesh(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    pubSubTopic: cstring,
+) {.ffi.} =
+  let meshPeers = ctx.myLib[].node.wakuRelay.getPeersInMesh($pubsubTopic).valueOr:
+    error "LIST_MESH_PEERS failed", error = error
+    return err($error)
+  ## returns a comma-separated string of peerIDs
+  return ok(meshPeers.mapIt($it).join(","))
+
+proc waku_relay_get_num_peers_in_mesh(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    pubSubTopic: cstring,
+) {.ffi.} =
+  let numPeersInMesh = ctx.myLib[].node.wakuRelay.getNumPeersInMesh($pubsubTopic).valueOr:
+    error "NUM_MESH_PEERS failed", error = error
+    return err($error)
+  return ok($numPeersInMesh)
+
+proc waku_relay_get_connected_peers(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    pubSubTopic: cstring,
+) {.ffi.} =
+  ## Returns the list of all peers connected to a specific pubsub topic
+  let connPeers = ctx.myLib[].node.wakuRelay.getConnectedPeers($pubsubTopic).valueOr:
+    error "LIST_CONNECTED_PEERS failed", error = error
+    return err($error)
+  ## returns a comma-separated string of peerIDs
+  return 
ok(connPeers.mapIt($it).join(",")) + +proc waku_relay_get_num_connected_peers( + ctx: ptr FFIContext[Waku], + callback: FFICallBack, + userData: pointer, + pubSubTopic: cstring, +) {.ffi.} = + let numConnPeers = ctx.myLib[].node.wakuRelay.getNumConnectedPeers($pubsubTopic).valueOr: + error "NUM_CONNECTED_PEERS failed", error = error + return err($error) + return ok($numConnPeers) + +proc waku_relay_add_protected_shard( + ctx: ptr FFIContext[Waku], + callback: FFICallBack, + userData: pointer, + clusterId: cint, + shardId: cint, + publicKey: cstring, +) {.ffi.} = + ## Protects a shard with a public key + try: + let relayShard = RelayShard(clusterId: uint16(clusterId), shardId: uint16(shardId)) + let protectedShard = ProtectedShard.parseCmdArg($relayShard & ":" & $publicKey) + ctx.myLib[].node.wakuRelay.addSignedShardsValidator( + @[protectedShard], uint16(clusterId) + ) + except ValueError as exc: + return err("ERROR in waku_relay_add_protected_shard: " & exc.msg) + + return ok("") + +proc waku_relay_subscribe( + ctx: ptr FFIContext[Waku], + callback: FFICallBack, + userData: pointer, + pubSubTopic: cstring, +) {.ffi.} = + echo "Subscribing to topic: " & $pubSubTopic & " ..." 
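A note on the return convention: the peer-listing calls in this module (`waku_relay_get_peers_in_mesh`, `waku_relay_get_connected_peers`, and the earlier `waku_get_connected_peers`) deliver their result as a single comma-separated string of peer IDs rather than a JSON array. A consumer-side sketch in Python, using made-up peer IDs:

```python
def parse_peer_list(raw: str) -> list[str]:
    """Split the comma-separated peer-ID string produced by
    mapIt($it).join(",") on the Nim side; an empty string means no peers."""
    return raw.split(",") if raw else []

# Hypothetical callback payloads in the shape this API returns:
assert parse_peer_list("16Uiu2HAmFakePeer1,16Uiu2HAmFakePeer2") == [
    "16Uiu2HAmFakePeer1",
    "16Uiu2HAmFakePeer2",
]
assert parse_peer_list("") == []  # no peers in mesh / connected
```

The empty-string guard matters: `"".split(",")` in most languages yields one empty element, not an empty list.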
+ proc onReceivedMessage(ctx: ptr FFIContext[Waku]): WakuRelayHandler = + return proc(pubsubTopic: PubsubTopic, msg: WakuMessage) {.async.} = + callEventCallback(ctx, "onReceivedMessage"): + $JsonMessageEvent.new(pubsubTopic, msg) + + var cb = onReceivedMessage(ctx) + + ctx.myLib[].node.subscribe( + (kind: SubscriptionKind.PubsubSub, topic: $pubsubTopic), + handler = WakuRelayHandler(cb), + ).isOkOr: + error "SUBSCRIBE failed", error = error + return err($error) + return ok("") + +proc waku_relay_unsubscribe( + ctx: ptr FFIContext[Waku], + callback: FFICallBack, + userData: pointer, + pubSubTopic: cstring, +) {.ffi.} = + ctx.myLib[].node.unsubscribe((kind: SubscriptionKind.PubsubSub, topic: $pubsubTopic)).isOkOr: + error "UNSUBSCRIBE failed", error = error + return err($error) + + return ok("") + +proc waku_relay_publish( + ctx: ptr FFIContext[Waku], + callback: FFICallBack, + userData: pointer, + pubSubTopic: cstring, + jsonWakuMessage: cstring, + timeoutMs: cuint, +) {.ffi.} = + var + # https://rfc.vac.dev/spec/36/#extern-char-waku_relay_publishchar-messagejson-char-pubsubtopic-int-timeoutms + jsonMessage: JsonMessage + try: + let jsonContent = parseJson($jsonWakuMessage) + jsonMessage = JsonMessage.fromJsonNode(jsonContent).valueOr: + raise newException(JsonParsingError, $error) + except JsonParsingError as exc: + return err(fmt"Error parsing json message: {exc.msg}") + + let msg = json_message_event.toWakuMessage(jsonMessage).valueOr: + return err("Problem building the WakuMessage: " & $error) + + (await ctx.myLib[].node.wakuRelay.publish($pubsubTopic, msg)).isOkOr: + error "PUBLISH failed", error = error + return err($error) + + let msgHash = computeMessageHash($pubSubTopic, msg).to0xHex + return ok(msgHash) + +proc waku_default_pubsub_topic( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +) {.ffi.} = + # https://rfc.vac.dev/spec/36/#extern-char-waku_default_pubsub_topic + return ok(DefaultPubsubTopic) + +proc waku_content_topic( + ctx: 
ptr FFIContext[Waku], + callback: FFICallBack, + userData: pointer, + appName: cstring, + appVersion: cuint, + contentTopicName: cstring, + encoding: cstring, +) {.ffi.} = + # https://rfc.vac.dev/spec/36/#extern-char-waku_content_topicchar-applicationname-unsigned-int-applicationversion-char-contenttopicname-char-encoding + + return ok(fmt"/{$appName}/{$appVersion}/{$contentTopicName}/{$encoding}") + +proc waku_pubsub_topic( + ctx: ptr FFIContext[Waku], + callback: FFICallBack, + userData: pointer, + topicName: cstring, +) {.ffi.} = + # https://rfc.vac.dev/spec/36/#extern-char-waku_pubsub_topicchar-name-char-encoding + return ok(fmt"/waku/2/{$topicName}") diff --git a/library/waku_thread_requests/requests/protocols/store_request.nim b/library/kernel_api/protocols/store_api.nim similarity index 57% rename from library/waku_thread_requests/requests/protocols/store_request.nim rename to library/kernel_api/protocols/store_api.nim index 3fe1e2f13..0df4d9b1f 100644 --- a/library/waku_thread_requests/requests/protocols/store_request.nim +++ b/library/kernel_api/protocols/store_api.nim @@ -1,28 +1,16 @@ import std/[json, sugar, strutils, options] -import chronos, chronicles, results, stew/byteutils +import chronos, chronicles, results, stew/byteutils, ffi import - ../../../../waku/factory/waku, - ../../../alloc, - ../../../utils, - ../../../../waku/waku_core/peers, - ../../../../waku/waku_core/time, - ../../../../waku/waku_core/message/digest, - ../../../../waku/waku_store/common, - ../../../../waku/waku_store/client, - ../../../../waku/common/paging + waku/factory/waku, + library/utils, + waku/waku_core/peers, + waku/waku_core/message/digest, + waku/waku_store/common, + waku/waku_store/client, + waku/common/paging, + library/declare_lib -type StoreReqType* = enum - REMOTE_QUERY ## to perform a query to another Store node - -type StoreRequest* = object - operation: StoreReqType - jsonQuery: cstring - peerAddr: cstring - timeoutMs: cint - -func fromJsonNode( - T: type 
StoreRequest, jsonContent: JsonNode -): Result[StoreQueryRequest, string] = +func fromJsonNode(jsonContent: JsonNode): Result[StoreQueryRequest, string] = var contentTopics: seq[string] if jsonContent.contains("contentTopics"): contentTopics = collect(newSeq): @@ -78,54 +66,29 @@ func fromJsonNode( ) ) -proc createShared*( - T: type StoreRequest, - op: StoreReqType, +proc waku_store_query( + ctx: ptr FFIContext[Waku], + callback: FFICallBack, + userData: pointer, jsonQuery: cstring, peerAddr: cstring, timeoutMs: cint, -): ptr type T = - var ret = createShared(T) - ret[].operation = op - ret[].timeoutMs = timeoutMs - ret[].jsonQuery = jsonQuery.alloc() - ret[].peerAddr = peerAddr.alloc() - return ret - -proc destroyShared(self: ptr StoreRequest) = - deallocShared(self[].jsonQuery) - deallocShared(self[].peerAddr) - deallocShared(self) - -proc process_remote_query( - self: ptr StoreRequest, waku: ptr Waku -): Future[Result[string, string]] {.async.} = +) {.ffi.} = let jsonContentRes = catch: - parseJson($self[].jsonQuery) + parseJson($jsonQuery) if jsonContentRes.isErr(): return err("StoreRequest failed parsing store request: " & jsonContentRes.error.msg) - let storeQueryRequest = ?StoreRequest.fromJsonNode(jsonContentRes.get()) + let storeQueryRequest = ?fromJsonNode(jsonContentRes.get()) - let peer = peers.parsePeerInfo(($self[].peerAddr).split(",")).valueOr: + let peer = peers.parsePeerInfo(($peerAddr).split(",")).valueOr: return err("StoreRequest failed to parse peer addr: " & $error) - let queryResponse = (await waku.node.wakuStoreClient.query(storeQueryRequest, peer)).valueOr: + let queryResponse = ( + await ctx.myLib[].node.wakuStoreClient.query(storeQueryRequest, peer) + ).valueOr: return err("StoreRequest failed store query: " & $error) let res = $(%*(queryResponse.toHex())) return ok(res) ## returning the response in json format - -proc process*( - self: ptr StoreRequest, waku: ptr Waku -): Future[Result[string, string]] {.async.} = - defer: - 
deallocShared(self) - - case self.operation - of REMOTE_QUERY: - return await self.process_remote_query(waku) - - error "store request not handled at all" - return err("store request not handled at all") diff --git a/library/libwaku.h b/library/libwaku.h index b5d6c9bab..67c89c7c2 100644 --- a/library/libwaku.h +++ b/library/libwaku.h @@ -10,241 +10,242 @@ #include // The possible returned values for the functions that return int -#define RET_OK 0 -#define RET_ERR 1 -#define RET_MISSING_CALLBACK 2 +#define RET_OK 0 +#define RET_ERR 1 +#define RET_MISSING_CALLBACK 2 #ifdef __cplusplus -extern "C" { +extern "C" +{ #endif -typedef void (*WakuCallBack) (int callerRet, const char* msg, size_t len, void* userData); + typedef void (*FFICallBack)(int callerRet, const char *msg, size_t len, void *userData); -// Creates a new instance of the waku node. -// Sets up the waku node from the given configuration. -// Returns a pointer to the Context needed by the rest of the API functions. -void* waku_new( - const char* configJson, - WakuCallBack callback, - void* userData); + // Creates a new instance of the waku node. + // Sets up the waku node from the given configuration. + // Returns a pointer to the Context needed by the rest of the API functions. 
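The `FFICallBack` typedef above maps directly to `ctypes` for Python consumers of this header. A hedged sketch of binding the new prototypes; `build/libwaku.so` is an assumed path and the functions are only declared here, not called:

```python
import ctypes

# Mirrors: typedef void (*FFICallBack)(int callerRet, const char *msg,
#                                      size_t len, void *userData);
FFICallBack = ctypes.CFUNCTYPE(
    None, ctypes.c_int, ctypes.c_char_p, ctypes.c_size_t, ctypes.c_void_p
)

def bind_waku(path: str = "build/libwaku.so") -> ctypes.CDLL:
    """Declare the waku_new/waku_start prototypes on a loaded libwaku.
    The library path is a hypothetical build location."""
    lib = ctypes.CDLL(path)
    # void *waku_new(const char *configJson, FFICallBack callback, void *userData);
    lib.waku_new.restype = ctypes.c_void_p
    lib.waku_new.argtypes = [ctypes.c_char_p, FFICallBack, ctypes.c_void_p]
    # int waku_start(void *ctx, FFICallBack callback, void *userData);
    lib.waku_start.restype = ctypes.c_int
    lib.waku_start.argtypes = [ctypes.c_void_p, FFICallBack, ctypes.c_void_p]
    return lib
```

Keeping a Python reference to every `FFICallBack` instance passed across the boundary is essential, since `ctypes` does not protect callbacks from garbage collection.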
+ void *waku_new( + const char *configJson, + FFICallBack callback, + void *userData); -int waku_start(void* ctx, - WakuCallBack callback, - void* userData); + int waku_start(void *ctx, + FFICallBack callback, + void *userData); -int waku_stop(void* ctx, - WakuCallBack callback, - void* userData); + int waku_stop(void *ctx, + FFICallBack callback, + void *userData); -// Destroys an instance of a waku node created with waku_new -int waku_destroy(void* ctx, - WakuCallBack callback, - void* userData); + // Destroys an instance of a waku node created with waku_new + int waku_destroy(void *ctx, + FFICallBack callback, + void *userData); -int waku_version(void* ctx, - WakuCallBack callback, - void* userData); + int waku_version(void *ctx, + FFICallBack callback, + void *userData); -// Sets a callback that will be invoked whenever an event occurs. -// It is crucial that the passed callback is fast, non-blocking and potentially thread-safe. -void waku_set_event_callback(void* ctx, - WakuCallBack callback, - void* userData); + // Sets a callback that will be invoked whenever an event occurs. + // It is crucial that the passed callback is fast, non-blocking and potentially thread-safe. 
+ void set_event_callback(void *ctx, + FFICallBack callback, + void *userData); -int waku_content_topic(void* ctx, - const char* appName, - unsigned int appVersion, - const char* contentTopicName, - const char* encoding, - WakuCallBack callback, - void* userData); + int waku_content_topic(void *ctx, + FFICallBack callback, + void *userData, + const char *appName, + unsigned int appVersion, + const char *contentTopicName, + const char *encoding); -int waku_pubsub_topic(void* ctx, - const char* topicName, - WakuCallBack callback, - void* userData); + int waku_pubsub_topic(void *ctx, + FFICallBack callback, + void *userData, + const char *topicName); -int waku_default_pubsub_topic(void* ctx, - WakuCallBack callback, - void* userData); + int waku_default_pubsub_topic(void *ctx, + FFICallBack callback, + void *userData); -int waku_relay_publish(void* ctx, - const char* pubSubTopic, - const char* jsonWakuMessage, - unsigned int timeoutMs, - WakuCallBack callback, - void* userData); + int waku_relay_publish(void *ctx, + FFICallBack callback, + void *userData, + const char *pubSubTopic, + const char *jsonWakuMessage, + unsigned int timeoutMs); -int waku_lightpush_publish(void* ctx, - const char* pubSubTopic, - const char* jsonWakuMessage, - WakuCallBack callback, - void* userData); + int waku_lightpush_publish(void *ctx, + FFICallBack callback, + void *userData, + const char *pubSubTopic, + const char *jsonWakuMessage); -int waku_relay_subscribe(void* ctx, - const char* pubSubTopic, - WakuCallBack callback, - void* userData); + int waku_relay_subscribe(void *ctx, + FFICallBack callback, + void *userData, + const char *pubSubTopic); -int waku_relay_add_protected_shard(void* ctx, - int clusterId, - int shardId, - char* publicKey, - WakuCallBack callback, - void* userData); + int waku_relay_add_protected_shard(void *ctx, + FFICallBack callback, + void *userData, + int clusterId, + int shardId, + char *publicKey); -int waku_relay_unsubscribe(void* ctx, - const char* 
pubSubTopic, - WakuCallBack callback, - void* userData); + int waku_relay_unsubscribe(void *ctx, + FFICallBack callback, + void *userData, + const char *pubSubTopic); -int waku_filter_subscribe(void* ctx, - const char* pubSubTopic, - const char* contentTopics, - WakuCallBack callback, - void* userData); + int waku_filter_subscribe(void *ctx, + FFICallBack callback, + void *userData, + const char *pubSubTopic, + const char *contentTopics); -int waku_filter_unsubscribe(void* ctx, - const char* pubSubTopic, - const char* contentTopics, - WakuCallBack callback, - void* userData); + int waku_filter_unsubscribe(void *ctx, + FFICallBack callback, + void *userData, + const char *pubSubTopic, + const char *contentTopics); -int waku_filter_unsubscribe_all(void* ctx, - WakuCallBack callback, - void* userData); + int waku_filter_unsubscribe_all(void *ctx, + FFICallBack callback, + void *userData); -int waku_relay_get_num_connected_peers(void* ctx, - const char* pubSubTopic, - WakuCallBack callback, - void* userData); + int waku_relay_get_num_connected_peers(void *ctx, + FFICallBack callback, + void *userData, + const char *pubSubTopic); -int waku_relay_get_connected_peers(void* ctx, - const char* pubSubTopic, - WakuCallBack callback, - void* userData); + int waku_relay_get_connected_peers(void *ctx, + FFICallBack callback, + void *userData, + const char *pubSubTopic); -int waku_relay_get_num_peers_in_mesh(void* ctx, - const char* pubSubTopic, - WakuCallBack callback, - void* userData); + int waku_relay_get_num_peers_in_mesh(void *ctx, + FFICallBack callback, + void *userData, + const char *pubSubTopic); -int waku_relay_get_peers_in_mesh(void* ctx, - const char* pubSubTopic, - WakuCallBack callback, - void* userData); + int waku_relay_get_peers_in_mesh(void *ctx, + FFICallBack callback, + void *userData, + const char *pubSubTopic); -int waku_store_query(void* ctx, - const char* jsonQuery, - const char* peerAddr, - int timeoutMs, - WakuCallBack callback, - void* userData); + int 
waku_store_query(void *ctx, + FFICallBack callback, + void *userData, + const char *jsonQuery, + const char *peerAddr, + int timeoutMs); -int waku_connect(void* ctx, - const char* peerMultiAddr, - unsigned int timeoutMs, - WakuCallBack callback, - void* userData); + int waku_connect(void *ctx, + FFICallBack callback, + void *userData, + const char *peerMultiAddr, + unsigned int timeoutMs); -int waku_disconnect_peer_by_id(void* ctx, - const char* peerId, - WakuCallBack callback, - void* userData); + int waku_disconnect_peer_by_id(void *ctx, + FFICallBack callback, + void *userData, + const char *peerId); -int waku_disconnect_all_peers(void* ctx, - WakuCallBack callback, - void* userData); + int waku_disconnect_all_peers(void *ctx, + FFICallBack callback, + void *userData); -int waku_dial_peer(void* ctx, - const char* peerMultiAddr, - const char* protocol, - int timeoutMs, - WakuCallBack callback, - void* userData); + int waku_dial_peer(void *ctx, + FFICallBack callback, + void *userData, + const char *peerMultiAddr, + const char *protocol, + int timeoutMs); -int waku_dial_peer_by_id(void* ctx, - const char* peerId, - const char* protocol, - int timeoutMs, - WakuCallBack callback, - void* userData); + int waku_dial_peer_by_id(void *ctx, + FFICallBack callback, + void *userData, + const char *peerId, + const char *protocol, + int timeoutMs); -int waku_get_peerids_from_peerstore(void* ctx, - WakuCallBack callback, - void* userData); + int waku_get_peerids_from_peerstore(void *ctx, + FFICallBack callback, + void *userData); -int waku_get_connected_peers_info(void* ctx, - WakuCallBack callback, - void* userData); + int waku_get_connected_peers_info(void *ctx, + FFICallBack callback, + void *userData); -int waku_get_peerids_by_protocol(void* ctx, - const char* protocol, - WakuCallBack callback, - void* userData); + int waku_get_peerids_by_protocol(void *ctx, + FFICallBack callback, + void *userData, + const char *protocol); -int waku_listen_addresses(void* ctx, - 
WakuCallBack callback, - void* userData); + int waku_listen_addresses(void *ctx, + FFICallBack callback, + void *userData); -int waku_get_connected_peers(void* ctx, - WakuCallBack callback, - void* userData); + int waku_get_connected_peers(void *ctx, + FFICallBack callback, + void *userData); -// Returns a list of multiaddress given a url to a DNS discoverable ENR tree -// Parameters -// char* entTreeUrl: URL containing a discoverable ENR tree -// char* nameDnsServer: The nameserver to resolve the ENR tree url. -// int timeoutMs: Timeout value in milliseconds to execute the call. -int waku_dns_discovery(void* ctx, - const char* entTreeUrl, - const char* nameDnsServer, - int timeoutMs, - WakuCallBack callback, - void* userData); + // Returns a list of multiaddress given a url to a DNS discoverable ENR tree + // Parameters + // char* entTreeUrl: URL containing a discoverable ENR tree + // char* nameDnsServer: The nameserver to resolve the ENR tree url. + // int timeoutMs: Timeout value in milliseconds to execute the call. + int waku_dns_discovery(void *ctx, + FFICallBack callback, + void *userData, + const char *entTreeUrl, + const char *nameDnsServer, + int timeoutMs); -// Updates the bootnode list used for discovering new peers via DiscoveryV5 -// bootnodes - JSON array containing the bootnode ENRs i.e. `["enr:...", "enr:..."]` -int waku_discv5_update_bootnodes(void* ctx, - char* bootnodes, - WakuCallBack callback, - void* userData); + // Updates the bootnode list used for discovering new peers via DiscoveryV5 + // bootnodes - JSON array containing the bootnode ENRs i.e. 
`["enr:...", "enr:..."]` + int waku_discv5_update_bootnodes(void *ctx, + FFICallBack callback, + void *userData, + char *bootnodes); -int waku_start_discv5(void* ctx, - WakuCallBack callback, - void* userData); + int waku_start_discv5(void *ctx, + FFICallBack callback, + void *userData); -int waku_stop_discv5(void* ctx, - WakuCallBack callback, - void* userData); + int waku_stop_discv5(void *ctx, + FFICallBack callback, + void *userData); -// Retrieves the ENR information -int waku_get_my_enr(void* ctx, - WakuCallBack callback, - void* userData); + // Retrieves the ENR information + int waku_get_my_enr(void *ctx, + FFICallBack callback, + void *userData); -int waku_get_my_peerid(void* ctx, - WakuCallBack callback, - void* userData); + int waku_get_my_peerid(void *ctx, + FFICallBack callback, + void *userData); -int waku_get_metrics(void* ctx, - WakuCallBack callback, - void* userData); + int waku_get_metrics(void *ctx, + FFICallBack callback, + void *userData); -int waku_peer_exchange_request(void* ctx, - int numPeers, - WakuCallBack callback, - void* userData); + int waku_peer_exchange_request(void *ctx, + FFICallBack callback, + void *userData, + int numPeers); -int waku_ping_peer(void* ctx, - const char* peerAddr, - int timeoutMs, - WakuCallBack callback, - void* userData); + int waku_ping_peer(void *ctx, + FFICallBack callback, + void *userData, + const char *peerAddr, + int timeoutMs); -int waku_is_online(void* ctx, - WakuCallBack callback, - void* userData); + int waku_is_online(void *ctx, + FFICallBack callback, + void *userData); #ifdef __cplusplus } diff --git a/library/libwaku.nim b/library/libwaku.nim index ad3afa134..c71e823d6 100644 --- a/library/libwaku.nim +++ b/library/libwaku.nim @@ -1,107 +1,35 @@ -{.pragma: exported, exportc, cdecl, raises: [].} -{.pragma: callback, cdecl, raises: [], gcsafe.} -{.passc: "-fPIC".} - -when defined(linux): - {.passl: "-Wl,-soname,libwaku.so".} - -import std/[json, atomics, strformat, options, atomics] -import 
chronicles, chronos, chronos/threadsync +import std/[atomics, options, atomics, macros] +import chronicles, chronos, chronos/threadsync, ffi import - waku/common/base64, waku/waku_core/message/message, - waku/node/waku_node, - waku/node/peer_manager, waku/waku_core/topics/pubsub_topic, - waku/waku_core/subscription/push_handler, waku/waku_relay, ./events/json_message_event, - ./waku_context, - ./waku_thread_requests/requests/node_lifecycle_request, - ./waku_thread_requests/requests/peer_manager_request, - ./waku_thread_requests/requests/protocols/relay_request, - ./waku_thread_requests/requests/protocols/store_request, - ./waku_thread_requests/requests/protocols/lightpush_request, - ./waku_thread_requests/requests/protocols/filter_request, - ./waku_thread_requests/requests/debug_node_request, - ./waku_thread_requests/requests/discovery_request, - ./waku_thread_requests/requests/ping_request, - ./waku_thread_requests/waku_thread_request, - ./alloc, - ./ffi_types, - ../waku/factory/app_callbacks + ./events/json_topic_health_change_event, + ./events/json_connection_change_event, + ../waku/factory/app_callbacks, + waku/factory/waku, + waku/node/waku_node, + ./declare_lib ################################################################################ -### Wrapper around the waku node -################################################################################ - -################################################################################ -### Not-exported components - -template checkLibwakuParams*( - ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer -) = - if not isNil(ctx): - ctx[].userData = userData - - if isNil(callback): - return RET_MISSING_CALLBACK - -proc handleRequest( - ctx: ptr WakuContext, - requestType: RequestType, - content: pointer, - callback: WakuCallBack, - userData: pointer, -): cint = - waku_context.sendRequestToWakuThread(ctx, requestType, content, callback, userData).isOkOr: - let msg = "libwaku error: " & $error - 
callback(RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), userData) - return RET_ERR - - return RET_OK - -### End of not-exported components -################################################################################ - -################################################################################ -### Library setup - -# Every Nim library must have this function called - the name is derived from -# the `--nimMainPrefix` command line option -proc libwakuNimMain() {.importc.} - -# To control when the library has been initialized -var initialized: Atomic[bool] - -if defined(android): - # Redirect chronicles to Android System logs - when compiles(defaultChroniclesStream.outputs[0].writer): - defaultChroniclesStream.outputs[0].writer = proc( - logLevel: LogLevel, msg: LogOutputStr - ) {.raises: [].} = - echo logLevel, msg - -proc initializeLibrary() {.exported.} = - if not initialized.exchange(true): - ## Every Nim library needs to call `NimMain` once exactly, to initialize the Nim runtime. - ## Being `` the value given in the optional compilation flag --nimMainPrefix:yourprefix - libwakuNimMain() - when declared(setupForeignThreadGc): - setupForeignThreadGc() - when declared(nimGC_setStackBottom): - var locals {.volatile, noinit.}: pointer - locals = addr(locals) - nimGC_setStackBottom(locals) - -### End of library setup -################################################################################ +## Include different APIs, i.e. 
all procs with {.ffi.} pragma +include + ./kernel_api/peer_manager_api, + ./kernel_api/discovery_api, + ./kernel_api/node_lifecycle_api, + ./kernel_api/debug_node_api, + ./kernel_api/ping_api, + ./kernel_api/protocols/relay_api, + ./kernel_api/protocols/store_api, + ./kernel_api/protocols/lightpush_api, + ./kernel_api/protocols/filter_api ################################################################################ ### Exported procs proc waku_new( - configJson: cstring, callback: WakuCallback, userData: pointer + configJson: cstring, callback: FFICallback, userData: pointer ): pointer {.dynlib, exportc, cdecl.} = initializeLibrary() @@ -111,41 +39,50 @@ proc waku_new( return nil ## Create the Waku thread that will keep waiting for req from the main thread. - var ctx = waku_context.createWakuContext().valueOr: - let msg = "Error in createWakuContext: " & $error + var ctx = ffi.createFFIContext[Waku]().valueOr: + let msg = "Error in createFFIContext: " & $error callback(RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), userData) return nil ctx.userData = userData + proc onReceivedMessage(ctx: ptr FFIContext): WakuRelayHandler = + return proc(pubsubTopic: PubsubTopic, msg: WakuMessage) {.async.} = + callEventCallback(ctx, "onReceivedMessage"): + $JsonMessageEvent.new(pubsubTopic, msg) + + proc onTopicHealthChange(ctx: ptr FFIContext): TopicHealthChangeHandler = + return proc(pubsubTopic: PubsubTopic, topicHealth: TopicHealth) {.async.} = + callEventCallback(ctx, "onTopicHealthChange"): + $JsonTopicHealthChangeEvent.new(pubsubTopic, topicHealth) + + proc onConnectionChange(ctx: ptr FFIContext): ConnectionChangeHandler = + return proc(peerId: PeerId, peerEvent: PeerEventKind) {.async.} = + callEventCallback(ctx, "onConnectionChange"): + $JsonConnectionChangeEvent.new($peerId, peerEvent) + let appCallbacks = AppCallbacks( relayHandler: onReceivedMessage(ctx), topicHealthChangeHandler: onTopicHealthChange(ctx), connectionChangeHandler: onConnectionChange(ctx), ) - 
let retCode = handleRequest( - ctx, - RequestType.LIFECYCLE, - NodeLifecycleRequest.createShared( - NodeLifecycleMsgType.CREATE_NODE, configJson, appCallbacks - ), - callback, - userData, - ) - - if retCode == RET_ERR: + ffi.sendRequestToFFIThread( + ctx, CreateNodeRequest.ffiNewReq(callback, userData, configJson, appCallbacks) + ).isOkOr: + let msg = "error in sendRequestToFFIThread: " & $error + callback(RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), userData) return nil return ctx proc waku_destroy( - ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer -): cint {.dynlib, exportc.} = + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +): cint {.dynlib, exportc, cdecl.} = initializeLibrary() - checkLibwakuParams(ctx, callback, userData) + checkParams(ctx, callback, userData) - waku_context.destroyWakuContext(ctx).isOkOr: + ffi.destroyFFIContext(ctx).isOkOr: let msg = "libwaku error: " & $error callback(RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), userData) return RET_ERR @@ -155,699 +92,5 @@ proc waku_destroy( return RET_OK -proc waku_version( - ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - callback( - RET_OK, - cast[ptr cchar](WakuNodeVersionString), - cast[csize_t](len(WakuNodeVersionString)), - userData, - ) - - return RET_OK - -proc waku_set_event_callback( - ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer -) {.dynlib, exportc.} = - initializeLibrary() - ctx[].eventCallback = cast[pointer](callback) - ctx[].eventUserData = userData - -proc waku_content_topic( - ctx: ptr WakuContext, - appName: cstring, - appVersion: cuint, - contentTopicName: cstring, - encoding: cstring, - callback: WakuCallBack, - userData: pointer, -): cint {.dynlib, exportc.} = - # 
https://rfc.vac.dev/spec/36/#extern-char-waku_content_topicchar-applicationname-unsigned-int-applicationversion-char-contenttopicname-char-encoding - - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - let contentTopic = fmt"/{$appName}/{$appVersion}/{$contentTopicName}/{$encoding}" - callback( - RET_OK, unsafeAddr contentTopic[0], cast[csize_t](len(contentTopic)), userData - ) - - return RET_OK - -proc waku_pubsub_topic( - ctx: ptr WakuContext, topicName: cstring, callback: WakuCallBack, userData: pointer -): cint {.dynlib, exportc, cdecl.} = - # https://rfc.vac.dev/spec/36/#extern-char-waku_pubsub_topicchar-name-char-encoding - - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - let outPubsubTopic = fmt"/waku/2/{$topicName}" - callback( - RET_OK, unsafeAddr outPubsubTopic[0], cast[csize_t](len(outPubsubTopic)), userData - ) - - return RET_OK - -proc waku_default_pubsub_topic( - ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer -): cint {.dynlib, exportc.} = - # https://rfc.vac.dev/spec/36/#extern-char-waku_default_pubsub_topic - - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - callback( - RET_OK, - cast[ptr cchar](DefaultPubsubTopic), - cast[csize_t](len(DefaultPubsubTopic)), - userData, - ) - - return RET_OK - -proc waku_relay_publish( - ctx: ptr WakuContext, - pubSubTopic: cstring, - jsonWakuMessage: cstring, - timeoutMs: cuint, - callback: WakuCallBack, - userData: pointer, -): cint {.dynlib, exportc, cdecl.} = - # https://rfc.vac.dev/spec/36/#extern-char-waku_relay_publishchar-messagejson-char-pubsubtopic-int-timeoutms - - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - var jsonMessage: JsonMessage - try: - let jsonContent = parseJson($jsonWakuMessage) - jsonMessage = JsonMessage.fromJsonNode(jsonContent).valueOr: - raise newException(JsonParsingError, $error) - except JsonParsingError: - let msg = fmt"Error parsing json message: {getCurrentExceptionMsg()}" - 
callback(RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), userData) - return RET_ERR - - let wakuMessage = jsonMessage.toWakuMessage().valueOr: - let msg = "Problem building the WakuMessage: " & $error - callback(RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), userData) - return RET_ERR - - handleRequest( - ctx, - RequestType.RELAY, - RelayRequest.createShared(RelayMsgType.PUBLISH, pubSubTopic, nil, wakuMessage), - callback, - userData, - ) - -proc waku_start( - ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - handleRequest( - ctx, - RequestType.LIFECYCLE, - NodeLifecycleRequest.createShared(NodeLifecycleMsgType.START_NODE), - callback, - userData, - ) - -proc waku_stop( - ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - handleRequest( - ctx, - RequestType.LIFECYCLE, - NodeLifecycleRequest.createShared(NodeLifecycleMsgType.STOP_NODE), - callback, - userData, - ) - -proc waku_relay_subscribe( - ctx: ptr WakuContext, - pubSubTopic: cstring, - callback: WakuCallBack, - userData: pointer, -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - var cb = onReceivedMessage(ctx) - - handleRequest( - ctx, - RequestType.RELAY, - RelayRequest.createShared(RelayMsgType.SUBSCRIBE, pubSubTopic, WakuRelayHandler(cb)), - callback, - userData, - ) - -proc waku_relay_add_protected_shard( - ctx: ptr WakuContext, - clusterId: cint, - shardId: cint, - publicKey: cstring, - callback: WakuCallBack, - userData: pointer, -): cint {.dynlib, exportc, cdecl.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.RELAY, - RelayRequest.createShared( - RelayMsgType.ADD_PROTECTED_SHARD, - clusterId = clusterId, - shardId = shardId, - publicKey = publicKey, - 
), - callback, - userData, - ) - -proc waku_relay_unsubscribe( - ctx: ptr WakuContext, - pubSubTopic: cstring, - callback: WakuCallBack, - userData: pointer, -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.RELAY, - RelayRequest.createShared( - RelayMsgType.UNSUBSCRIBE, pubSubTopic, WakuRelayHandler(onReceivedMessage(ctx)) - ), - callback, - userData, - ) - -proc waku_relay_get_num_connected_peers( - ctx: ptr WakuContext, - pubSubTopic: cstring, - callback: WakuCallBack, - userData: pointer, -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.RELAY, - RelayRequest.createShared(RelayMsgType.NUM_CONNECTED_PEERS, pubSubTopic), - callback, - userData, - ) - -proc waku_relay_get_connected_peers( - ctx: ptr WakuContext, - pubSubTopic: cstring, - callback: WakuCallBack, - userData: pointer, -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.RELAY, - RelayRequest.createShared(RelayMsgType.LIST_CONNECTED_PEERS, pubSubTopic), - callback, - userData, - ) - -proc waku_relay_get_num_peers_in_mesh( - ctx: ptr WakuContext, - pubSubTopic: cstring, - callback: WakuCallBack, - userData: pointer, -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.RELAY, - RelayRequest.createShared(RelayMsgType.NUM_MESH_PEERS, pubSubTopic), - callback, - userData, - ) - -proc waku_relay_get_peers_in_mesh( - ctx: ptr WakuContext, - pubSubTopic: cstring, - callback: WakuCallBack, - userData: pointer, -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.RELAY, - RelayRequest.createShared(RelayMsgType.LIST_MESH_PEERS, pubSubTopic), - callback, - userData, - ) - -proc 
waku_filter_subscribe( - ctx: ptr WakuContext, - pubSubTopic: cstring, - contentTopics: cstring, - callback: WakuCallBack, - userData: pointer, -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.FILTER, - FilterRequest.createShared( - FilterMsgType.SUBSCRIBE, - pubSubTopic, - contentTopics, - FilterPushHandler(onReceivedMessage(ctx)), - ), - callback, - userData, - ) - -proc waku_filter_unsubscribe( - ctx: ptr WakuContext, - pubSubTopic: cstring, - contentTopics: cstring, - callback: WakuCallBack, - userData: pointer, -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.FILTER, - FilterRequest.createShared(FilterMsgType.UNSUBSCRIBE, pubSubTopic, contentTopics), - callback, - userData, - ) - -proc waku_filter_unsubscribe_all( - ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.FILTER, - FilterRequest.createShared(FilterMsgType.UNSUBSCRIBE_ALL), - callback, - userData, - ) - -proc waku_lightpush_publish( - ctx: ptr WakuContext, - pubSubTopic: cstring, - jsonWakuMessage: cstring, - callback: WakuCallBack, - userData: pointer, -): cint {.dynlib, exportc, cdecl.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - var jsonMessage: JsonMessage - try: - let jsonContent = parseJson($jsonWakuMessage) - jsonMessage = JsonMessage.fromJsonNode(jsonContent).valueOr: - raise newException(JsonParsingError, $error) - except JsonParsingError: - let msg = fmt"Error parsing json message: {getCurrentExceptionMsg()}" - callback(RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), userData) - return RET_ERR - - let wakuMessage = jsonMessage.toWakuMessage().valueOr: - let msg = "Problem building the WakuMessage: " & $error - callback(RET_ERR, 
unsafeAddr msg[0], cast[csize_t](len(msg)), userData) - return RET_ERR - - handleRequest( - ctx, - RequestType.LIGHTPUSH, - LightpushRequest.createShared(LightpushMsgType.PUBLISH, pubSubTopic, wakuMessage), - callback, - userData, - ) - -proc waku_connect( - ctx: ptr WakuContext, - peerMultiAddr: cstring, - timeoutMs: cuint, - callback: WakuCallBack, - userData: pointer, -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.PEER_MANAGER, - PeerManagementRequest.createShared( - PeerManagementMsgType.CONNECT_TO, $peerMultiAddr, chronos.milliseconds(timeoutMs) - ), - callback, - userData, - ) - -proc waku_disconnect_peer_by_id( - ctx: ptr WakuContext, peerId: cstring, callback: WakuCallBack, userData: pointer -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.PEER_MANAGER, - PeerManagementRequest.createShared( - op = PeerManagementMsgType.DISCONNECT_PEER_BY_ID, peerId = $peerId - ), - callback, - userData, - ) - -proc waku_disconnect_all_peers( - ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.PEER_MANAGER, - PeerManagementRequest.createShared(op = PeerManagementMsgType.DISCONNECT_ALL_PEERS), - callback, - userData, - ) - -proc waku_dial_peer( - ctx: ptr WakuContext, - peerMultiAddr: cstring, - protocol: cstring, - timeoutMs: cuint, - callback: WakuCallBack, - userData: pointer, -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.PEER_MANAGER, - PeerManagementRequest.createShared( - op = PeerManagementMsgType.DIAL_PEER, - peerMultiAddr = $peerMultiAddr, - protocol = $protocol, - ), - callback, - userData, - ) - -proc waku_dial_peer_by_id( - ctx: ptr 
WakuContext, - peerId: cstring, - protocol: cstring, - timeoutMs: cuint, - callback: WakuCallBack, - userData: pointer, -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.PEER_MANAGER, - PeerManagementRequest.createShared( - op = PeerManagementMsgType.DIAL_PEER_BY_ID, peerId = $peerId, protocol = $protocol - ), - callback, - userData, - ) - -proc waku_get_peerids_from_peerstore( - ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.PEER_MANAGER, - PeerManagementRequest.createShared(PeerManagementMsgType.GET_ALL_PEER_IDS), - callback, - userData, - ) - -proc waku_get_connected_peers_info( - ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.PEER_MANAGER, - PeerManagementRequest.createShared(PeerManagementMsgType.GET_CONNECTED_PEERS_INFO), - callback, - userData, - ) - -proc waku_get_connected_peers( - ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.PEER_MANAGER, - PeerManagementRequest.createShared(PeerManagementMsgType.GET_CONNECTED_PEERS), - callback, - userData, - ) - -proc waku_get_peerids_by_protocol( - ctx: ptr WakuContext, protocol: cstring, callback: WakuCallBack, userData: pointer -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.PEER_MANAGER, - PeerManagementRequest.createShared( - op = PeerManagementMsgType.GET_PEER_IDS_BY_PROTOCOL, protocol = $protocol - ), - callback, - userData, - ) - -proc waku_store_query( - ctx: ptr 
WakuContext, - jsonQuery: cstring, - peerAddr: cstring, - timeoutMs: cint, - callback: WakuCallBack, - userData: pointer, -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.STORE, - StoreRequest.createShared(StoreReqType.REMOTE_QUERY, jsonQuery, peerAddr, timeoutMs), - callback, - userData, - ) - -proc waku_listen_addresses( - ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.DEBUG, - DebugNodeRequest.createShared(DebugNodeMsgType.RETRIEVE_LISTENING_ADDRESSES), - callback, - userData, - ) - -proc waku_dns_discovery( - ctx: ptr WakuContext, - entTreeUrl: cstring, - nameDnsServer: cstring, - timeoutMs: cint, - callback: WakuCallBack, - userData: pointer, -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.DISCOVERY, - DiscoveryRequest.createRetrieveBootstrapNodesRequest( - DiscoveryMsgType.GET_BOOTSTRAP_NODES, entTreeUrl, nameDnsServer, timeoutMs - ), - callback, - userData, - ) - -proc waku_discv5_update_bootnodes( - ctx: ptr WakuContext, bootnodes: cstring, callback: WakuCallBack, userData: pointer -): cint {.dynlib, exportc.} = - ## Updates the bootnode list used for discovering new peers via DiscoveryV5 - ## bootnodes - JSON array containing the bootnode ENRs i.e. 
`["enr:...", "enr:..."]` - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.DISCOVERY, - DiscoveryRequest.createUpdateBootstrapNodesRequest( - DiscoveryMsgType.UPDATE_DISCV5_BOOTSTRAP_NODES, bootnodes - ), - callback, - userData, - ) - -proc waku_get_my_enr( - ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.DEBUG, - DebugNodeRequest.createShared(DebugNodeMsgType.RETRIEVE_MY_ENR), - callback, - userData, - ) - -proc waku_get_my_peerid( - ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.DEBUG, - DebugNodeRequest.createShared(DebugNodeMsgType.RETRIEVE_MY_PEER_ID), - callback, - userData, - ) - -proc waku_get_metrics( - ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.DEBUG, - DebugNodeRequest.createShared(DebugNodeMsgType.RETRIEVE_METRICS), - callback, - userData, - ) - -proc waku_start_discv5( - ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.DISCOVERY, - DiscoveryRequest.createDiscV5StartRequest(), - callback, - userData, - ) - -proc waku_stop_discv5( - ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.DISCOVERY, - DiscoveryRequest.createDiscV5StopRequest(), - callback, - userData, - ) - -proc waku_peer_exchange_request( - ctx: ptr WakuContext, numPeers: 
uint64, callback: WakuCallBack, userData: pointer -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.DISCOVERY, - DiscoveryRequest.createPeerExchangeRequest(numPeers), - callback, - userData, - ) - -proc waku_ping_peer( - ctx: ptr WakuContext, - peerAddr: cstring, - timeoutMs: cuint, - callback: WakuCallBack, - userData: pointer, -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.PING, - PingRequest.createShared(peerAddr, chronos.milliseconds(timeoutMs)), - callback, - userData, - ) - -proc waku_is_online( - ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer -): cint {.dynlib, exportc.} = - initializeLibrary() - checkLibwakuParams(ctx, callback, userData) - - handleRequest( - ctx, - RequestType.DEBUG, - DebugNodeRequest.createShared(DebugNodeMsgType.RETRIEVE_ONLINE_STATE), - callback, - userData, - ) - -### End of exported procs -################################################################################ +# ### End of exported procs +# ################################################################################ diff --git a/library/waku_context.nim b/library/waku_context.nim deleted file mode 100644 index ab4b996af..000000000 --- a/library/waku_context.nim +++ /dev/null @@ -1,223 +0,0 @@ -{.pragma: exported, exportc, cdecl, raises: [].} -{.pragma: callback, cdecl, raises: [], gcsafe.} -{.passc: "-fPIC".} - -import std/[options, atomics, os, net, locks] -import chronicles, chronos, chronos/threadsync, taskpools/channels_spsc_single, results -import - waku/common/logging, - waku/factory/waku, - waku/node/peer_manager, - waku/waku_relay/[protocol, topic_health], - waku/waku_core/[topics/pubsub_topic, message], - ./waku_thread_requests/[waku_thread_request, requests/debug_node_request], - ./ffi_types, - ./events/[ - json_message_event, json_topic_health_change_event, 
json_connection_change_event, - json_waku_not_responding_event, - ] - -type WakuContext* = object - wakuThread: Thread[(ptr WakuContext)] - watchdogThread: Thread[(ptr WakuContext)] - # monitors the Waku thread and notifies the Waku SDK consumer if it hangs - lock: Lock - reqChannel: ChannelSPSCSingle[ptr WakuThreadRequest] - reqSignal: ThreadSignalPtr - # to inform The Waku Thread (a.k.a TWT) that a new request is sent - reqReceivedSignal: ThreadSignalPtr - # to inform the main thread that the request is rx by TWT - userData*: pointer - eventCallback*: pointer - eventUserdata*: pointer - running: Atomic[bool] # To control when the threads are running - -const git_version* {.strdefine.} = "n/a" -const versionString = "version / git commit hash: " & waku.git_version - -template callEventCallback(ctx: ptr WakuContext, eventName: string, body: untyped) = - if isNil(ctx[].eventCallback): - error eventName & " - eventCallback is nil" - return - - foreignThreadGc: - try: - let event = body - cast[WakuCallBack](ctx[].eventCallback)( - RET_OK, unsafeAddr event[0], cast[csize_t](len(event)), ctx[].eventUserData - ) - except Exception, CatchableError: - let msg = - "Exception " & eventName & " when calling 'eventCallBack': " & - getCurrentExceptionMsg() - cast[WakuCallBack](ctx[].eventCallback)( - RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), ctx[].eventUserData - ) - -proc onConnectionChange*(ctx: ptr WakuContext): ConnectionChangeHandler = - return proc(peerId: PeerId, peerEvent: PeerEventKind) {.async.} = - callEventCallback(ctx, "onConnectionChange"): - $JsonConnectionChangeEvent.new($peerId, peerEvent) - -proc onReceivedMessage*(ctx: ptr WakuContext): WakuRelayHandler = - return proc(pubsubTopic: PubsubTopic, msg: WakuMessage) {.async.} = - callEventCallback(ctx, "onReceivedMessage"): - $JsonMessageEvent.new(pubsubTopic, msg) - -proc onTopicHealthChange*(ctx: ptr WakuContext): TopicHealthChangeHandler = - return proc(pubsubTopic: PubsubTopic, topicHealth: 
TopicHealth) {.async.} = - callEventCallback(ctx, "onTopicHealthChange"): - $JsonTopicHealthChangeEvent.new(pubsubTopic, topicHealth) - -proc onWakuNotResponding*(ctx: ptr WakuContext) = - callEventCallback(ctx, "onWakuNotResponsive"): - $JsonWakuNotRespondingEvent.new() - -proc sendRequestToWakuThread*( - ctx: ptr WakuContext, - reqType: RequestType, - reqContent: pointer, - callback: WakuCallBack, - userData: pointer, - timeout = InfiniteDuration, -): Result[void, string] = - ctx.lock.acquire() - # This lock is only necessary while we use a SP Channel and while the signalling - # between threads assumes that there aren't concurrent requests. - # Rearchitecting the signaling + migrating to a MP Channel will allow us to receive - # requests concurrently and spare us the need of locks - defer: - ctx.lock.release() - - let req = WakuThreadRequest.createShared(reqType, reqContent, callback, userData) - ## Sending the request - let sentOk = ctx.reqChannel.trySend(req) - if not sentOk: - deallocShared(req) - return err("Couldn't send a request to the waku thread: " & $req[]) - - let fireSync = ctx.reqSignal.fireSync().valueOr: - deallocShared(req) - return err("failed fireSync: " & $error) - - if not fireSync: - deallocShared(req) - return err("Couldn't fireSync in time") - - ## wait until the Waku Thread properly received the request - ctx.reqReceivedSignal.waitSync(timeout).isOkOr: - deallocShared(req) - return err("Couldn't receive reqReceivedSignal signal") - - ## Notice that in case of "ok", the deallocShared(req) is performed by the Waku Thread in the - ## process proc. See the 'waku_thread_request.nim' module for more details. - ok() - -proc watchdogThreadBody(ctx: ptr WakuContext) {.thread.} = - ## Watchdog thread that monitors the Waku thread and notifies the library user if it hangs. 
- - let watchdogRun = proc(ctx: ptr WakuContext) {.async.} = - const WatchdogStartDelay = 10.seconds - const WatchdogTimeinterval = 1.seconds - const WakuNotRespondingTimeout = 3.seconds - - # Give time for the node to be created and up before sending watchdog requests - await sleepAsync(WatchdogStartDelay) - while true: - await sleepAsync(WatchdogTimeinterval) - - if ctx.running.load == false: - info "Watchdog thread exiting because WakuContext is not running" - break - - let wakuCallback = proc( - callerRet: cint, msg: ptr cchar, len: csize_t, userData: pointer - ) {.cdecl, gcsafe, raises: [].} = - discard ## Don't do anything. Just respecting the callback signature. - const nilUserData = nil - - trace "Sending watchdog request to Waku thread" - - sendRequestToWakuThread( - ctx, - RequestType.DEBUG, - DebugNodeRequest.createShared(DebugNodeMsgType.CHECK_WAKU_NOT_BLOCKED), - wakuCallback, - nilUserData, - WakuNotRespondingTimeout, - ).isOkOr: - error "Failed to send watchdog request to Waku thread", error = $error - onWakuNotResponding(ctx) - - waitFor watchdogRun(ctx) - -proc wakuThreadBody(ctx: ptr WakuContext) {.thread.} = - ## Waku thread that attends library user requests (stop, connect_to, etc.) 
- - logging.setupLog(logging.LogLevel.DEBUG, logging.LogFormat.TEXT) - - let wakuRun = proc(ctx: ptr WakuContext) {.async.} = - var waku: Waku - while true: - await ctx.reqSignal.wait() - - if ctx.running.load == false: - break - - ## Trying to get a request from the libwaku requestor thread - var request: ptr WakuThreadRequest - let recvOk = ctx.reqChannel.tryRecv(request) - if not recvOk: - error "waku thread could not receive a request" - continue - - ## Handle the request - asyncSpawn WakuThreadRequest.process(request, addr waku) - - ctx.reqReceivedSignal.fireSync().isOkOr: - error "could not fireSync back to requester thread", error = error - - waitFor wakuRun(ctx) - -proc createWakuContext*(): Result[ptr WakuContext, string] = - ## This proc is called from the main thread and it creates - ## the Waku working thread. - var ctx = createShared(WakuContext, 1) - ctx.reqSignal = ThreadSignalPtr.new().valueOr: - return err("couldn't create reqSignal ThreadSignalPtr") - ctx.reqReceivedSignal = ThreadSignalPtr.new().valueOr: - return err("couldn't create reqReceivedSignal ThreadSignalPtr") - ctx.lock.initLock() - - ctx.running.store(true) - - try: - createThread(ctx.wakuThread, wakuThreadBody, ctx) - except ValueError, ResourceExhaustedError: - freeShared(ctx) - return err("failed to create the Waku thread: " & getCurrentExceptionMsg()) - - try: - createThread(ctx.watchdogThread, watchdogThreadBody, ctx) - except ValueError, ResourceExhaustedError: - freeShared(ctx) - return err("failed to create the watchdog thread: " & getCurrentExceptionMsg()) - - return ok(ctx) - -proc destroyWakuContext*(ctx: ptr WakuContext): Result[void, string] = - ctx.running.store(false) - - let signaledOnTime = ctx.reqSignal.fireSync().valueOr: - return err("error in destroyWakuContext: " & $error) - if not signaledOnTime: - return err("failed to signal reqSignal on time in destroyWakuContext") - - joinThread(ctx.wakuThread) - joinThread(ctx.watchdogThread) - ctx.lock.deinitLock() - 
?ctx.reqSignal.close() - ?ctx.reqReceivedSignal.close() - freeShared(ctx) - - return ok() diff --git a/library/waku_thread_requests/requests/debug_node_request.nim b/library/waku_thread_requests/requests/debug_node_request.nim deleted file mode 100644 index c9aa5a743..000000000 --- a/library/waku_thread_requests/requests/debug_node_request.nim +++ /dev/null @@ -1,63 +0,0 @@ -import std/json -import - chronicles, - chronos, - results, - eth/p2p/discoveryv5/enr, - strutils, - libp2p/peerid, - metrics -import - ../../../waku/factory/waku, - ../../../waku/node/waku_node, - ../../../waku/node/health_monitor - -type DebugNodeMsgType* = enum - RETRIEVE_LISTENING_ADDRESSES - RETRIEVE_MY_ENR - RETRIEVE_MY_PEER_ID - RETRIEVE_METRICS - RETRIEVE_ONLINE_STATE - CHECK_WAKU_NOT_BLOCKED - -type DebugNodeRequest* = object - operation: DebugNodeMsgType - -proc createShared*(T: type DebugNodeRequest, op: DebugNodeMsgType): ptr type T = - var ret = createShared(T) - ret[].operation = op - return ret - -proc destroyShared(self: ptr DebugNodeRequest) = - deallocShared(self) - -proc getMultiaddresses(node: WakuNode): seq[string] = - return node.info().listenAddresses - -proc getMetrics(): string = - {.gcsafe.}: - return defaultRegistry.toText() ## defaultRegistry is {.global.} in metrics module - -proc process*( - self: ptr DebugNodeRequest, waku: Waku -): Future[Result[string, string]] {.async.} = - defer: - destroyShared(self) - - case self.operation - of RETRIEVE_LISTENING_ADDRESSES: - ## returns a comma-separated string of the listen addresses - return ok(waku.node.getMultiaddresses().join(",")) - of RETRIEVE_MY_ENR: - return ok(waku.node.enr.toURI()) - of RETRIEVE_MY_PEER_ID: - return ok($waku.node.peerId()) - of RETRIEVE_METRICS: - return ok(getMetrics()) - of RETRIEVE_ONLINE_STATE: - return ok($waku.healthMonitor.onlineMonitor.amIOnline()) - of CHECK_WAKU_NOT_BLOCKED: - return ok("waku thread is not blocked") - - error "unsupported operation in DebugNodeRequest" - return 
err("unsupported operation in DebugNodeRequest") diff --git a/library/waku_thread_requests/requests/discovery_request.nim b/library/waku_thread_requests/requests/discovery_request.nim deleted file mode 100644 index 405483a46..000000000 --- a/library/waku_thread_requests/requests/discovery_request.nim +++ /dev/null @@ -1,151 +0,0 @@ -import std/json -import chronos, chronicles, results, strutils, libp2p/multiaddress -import - ../../../waku/factory/waku, - ../../../waku/discovery/waku_dnsdisc, - ../../../waku/discovery/waku_discv5, - ../../../waku/waku_core/peers, - ../../../waku/node/waku_node, - ../../../waku/node/kernel_api, - ../../alloc - -type DiscoveryMsgType* = enum - GET_BOOTSTRAP_NODES - UPDATE_DISCV5_BOOTSTRAP_NODES - START_DISCV5 - STOP_DISCV5 - PEER_EXCHANGE - -type DiscoveryRequest* = object - operation: DiscoveryMsgType - - ## used in GET_BOOTSTRAP_NODES - enrTreeUrl: cstring - nameDnsServer: cstring - timeoutMs: cint - - ## used in UPDATE_DISCV5_BOOTSTRAP_NODES - nodes: cstring - - ## used in PEER_EXCHANGE - numPeers: uint64 - -proc createShared( - T: type DiscoveryRequest, - op: DiscoveryMsgType, - enrTreeUrl: cstring, - nameDnsServer: cstring, - timeoutMs: cint, - nodes: cstring, - numPeers: uint64, -): ptr type T = - var ret = createShared(T) - ret[].operation = op - ret[].enrTreeUrl = enrTreeUrl.alloc() - ret[].nameDnsServer = nameDnsServer.alloc() - ret[].timeoutMs = timeoutMs - ret[].nodes = nodes.alloc() - ret[].numPeers = numPeers - return ret - -proc createRetrieveBootstrapNodesRequest*( - T: type DiscoveryRequest, - op: DiscoveryMsgType, - enrTreeUrl: cstring, - nameDnsServer: cstring, - timeoutMs: cint, -): ptr type T = - return T.createShared(op, enrTreeUrl, nameDnsServer, timeoutMs, "", 0) - -proc createUpdateBootstrapNodesRequest*( - T: type DiscoveryRequest, op: DiscoveryMsgType, nodes: cstring -): ptr type T = - return T.createShared(op, "", "", 0, nodes, 0) - -proc createDiscV5StartRequest*(T: type DiscoveryRequest): ptr type T = - 
return T.createShared(START_DISCV5, "", "", 0, "", 0) - -proc createDiscV5StopRequest*(T: type DiscoveryRequest): ptr type T = - return T.createShared(STOP_DISCV5, "", "", 0, "", 0) - -proc createPeerExchangeRequest*( - T: type DiscoveryRequest, numPeers: uint64 -): ptr type T = - return T.createShared(PEER_EXCHANGE, "", "", 0, "", numPeers) - -proc destroyShared(self: ptr DiscoveryRequest) = - deallocShared(self[].enrTreeUrl) - deallocShared(self[].nameDnsServer) - deallocShared(self[].nodes) - deallocShared(self) - -proc retrieveBootstrapNodes( - enrTreeUrl: string, ipDnsServer: string -): Future[Result[seq[string], string]] {.async.} = - let dnsNameServers = @[parseIpAddress(ipDnsServer)] - let discoveredPeers: seq[RemotePeerInfo] = ( - await retrieveDynamicBootstrapNodes(enrTreeUrl, dnsNameServers) - ).valueOr: - return err("failed discovering peers from DNS: " & $error) - - var multiAddresses = newSeq[string]() - - for discPeer in discoveredPeers: - for address in discPeer.addrs: - multiAddresses.add($address & "/p2p/" & $discPeer) - - return ok(multiAddresses) - -proc updateDiscv5BootstrapNodes(nodes: string, waku: ptr Waku): Result[void, string] = - waku.wakuDiscv5.updateBootstrapRecords(nodes).isOkOr: - return err("error in updateDiscv5BootstrapNodes: " & $error) - return ok() - -proc performPeerExchangeRequestTo( - numPeers: uint64, waku: ptr Waku -): Future[Result[int, string]] {.async.} = - let numPeersRecv = (await waku.node.fetchPeerExchangePeers(numPeers)).valueOr: - return err($error) - return ok(numPeersRecv) - -proc process*( - self: ptr DiscoveryRequest, waku: ptr Waku -): Future[Result[string, string]] {.async.} = - defer: - destroyShared(self) - - case self.operation - of START_DISCV5: - let res = await waku.wakuDiscv5.start() - res.isOkOr: - error "START_DISCV5 failed", error = error - return err($error) - - return ok("discv5 started correctly") - of STOP_DISCV5: - await waku.wakuDiscv5.stop() - - return ok("discv5 stopped correctly") - of 
GET_BOOTSTRAP_NODES: - let nodes = ( - await retrieveBootstrapNodes($self[].enrTreeUrl, $self[].nameDnsServer) - ).valueOr: - error "GET_BOOTSTRAP_NODES failed", error = error - return err($error) - - ## returns a comma-separated string of bootstrap nodes' multiaddresses - return ok(nodes.join(",")) - of UPDATE_DISCV5_BOOTSTRAP_NODES: - updateDiscv5BootstrapNodes($self[].nodes, waku).isOkOr: - error "UPDATE_DISCV5_BOOTSTRAP_NODES failed", error = error - return err($error) - - return ok("discovery request processed correctly") - of PEER_EXCHANGE: - let numValidPeers = (await performPeerExchangeRequestTo(self[].numPeers, waku)).valueOr: - error "PEER_EXCHANGE failed", error = error - return err($error) - return ok($numValidPeers) - - error "discovery request not handled" - return err("discovery request not handled") diff --git a/library/waku_thread_requests/requests/peer_manager_request.nim b/library/waku_thread_requests/requests/peer_manager_request.nim deleted file mode 100644 index cac5ca30e..000000000 --- a/library/waku_thread_requests/requests/peer_manager_request.nim +++ /dev/null @@ -1,135 +0,0 @@ -import std/[sequtils, strutils, tables] -import chronicles, chronos, results, options, json -import - ../../../waku/factory/waku, - ../../../waku/node/waku_node, - ../../alloc, - ../../../waku/node/peer_manager - -type PeerManagementMsgType* {.pure.} = enum - CONNECT_TO - GET_ALL_PEER_IDS - GET_CONNECTED_PEERS_INFO - GET_PEER_IDS_BY_PROTOCOL - DISCONNECT_PEER_BY_ID - DISCONNECT_ALL_PEERS - DIAL_PEER - DIAL_PEER_BY_ID - GET_CONNECTED_PEERS - -type PeerManagementRequest* = object - operation: PeerManagementMsgType - peerMultiAddr: cstring - dialTimeout: Duration - protocol: cstring - peerId: cstring - -type PeerInfo = object - protocols: seq[string] - addresses: seq[string] - -proc createShared*( - T: type PeerManagementRequest, - op: PeerManagementMsgType, - peerMultiAddr = "", - dialTimeout = chronos.milliseconds(0), ## arbitrary Duration as not all ops needs 
dialTimeout - peerId = "", - protocol = "", -): ptr type T = - var ret = createShared(T) - ret[].operation = op - ret[].peerMultiAddr = peerMultiAddr.alloc() - ret[].peerId = peerId.alloc() - ret[].protocol = protocol.alloc() - ret[].dialTimeout = dialTimeout - return ret - -proc destroyShared(self: ptr PeerManagementRequest) = - if not isNil(self[].peerMultiAddr): - deallocShared(self[].peerMultiAddr) - - if not isNil(self[].peerId): - deallocShared(self[].peerId) - - if not isNil(self[].protocol): - deallocShared(self[].protocol) - - deallocShared(self) - -proc process*( - self: ptr PeerManagementRequest, waku: Waku -): Future[Result[string, string]] {.async.} = - defer: - destroyShared(self) - - case self.operation - of CONNECT_TO: - let peers = ($self[].peerMultiAddr).split(",").mapIt(strip(it)) - await waku.node.connectToNodes(peers, source = "static") - return ok("") - of GET_ALL_PEER_IDS: - ## returns a comma-separated string of peerIDs - let peerIDs = - waku.node.peerManager.switch.peerStore.peers().mapIt($it.peerId).join(",") - return ok(peerIDs) - of GET_CONNECTED_PEERS_INFO: - ## returns a JSON string mapping peerIDs to objects with protocols and addresses - - var peersMap = initTable[string, PeerInfo]() - let peers = waku.node.peerManager.switch.peerStore.peers().filterIt( - it.connectedness == Connected - ) - - # Build a map of peer IDs to peer info objects - for peer in peers: - let peerIdStr = $peer.peerId - peersMap[peerIdStr] = - PeerInfo(protocols: peer.protocols, addresses: peer.addrs.mapIt($it)) - - # Convert the map to JSON string - let jsonObj = %*peersMap - let jsonStr = $jsonObj - return ok(jsonStr) - of GET_PEER_IDS_BY_PROTOCOL: - ## returns a comma-separated string of peerIDs that mount the given protocol - let connectedPeers = waku.node.peerManager.switch.peerStore - .peers($self[].protocol) - .filterIt(it.connectedness == Connected) - .mapIt($it.peerId) - .join(",") - return ok(connectedPeers) - of DISCONNECT_PEER_BY_ID: - let peerId = 
PeerId.init($self[].peerId).valueOr: - error "DISCONNECT_PEER_BY_ID failed", error = $error - return err($error) - await waku.node.peerManager.disconnectNode(peerId) - return ok("") - of DISCONNECT_ALL_PEERS: - await waku.node.peerManager.disconnectAllPeers() - return ok("") - of DIAL_PEER: - let remotePeerInfo = parsePeerInfo($self[].peerMultiAddr).valueOr: - error "DIAL_PEER failed", error = $error - return err($error) - let conn = await waku.node.peerManager.dialPeer(remotePeerInfo, $self[].protocol) - if conn.isNone(): - let msg = "failed dialing peer" - error "DIAL_PEER failed", error = msg, peerId = $remotePeerInfo.peerId - return err(msg) - of DIAL_PEER_BY_ID: - let peerId = PeerId.init($self[].peerId).valueOr: - error "DIAL_PEER_BY_ID failed", error = $error - return err($error) - let conn = await waku.node.peerManager.dialPeer(peerId, $self[].protocol) - if conn.isNone(): - let msg = "failed dialing peer" - error "DIAL_PEER_BY_ID failed", error = msg, peerId = $peerId - return err(msg) - of GET_CONNECTED_PEERS: - ## returns a comma-separated string of peerIDs - let - (inPeerIds, outPeerIds) = waku.node.peerManager.connectedPeers() - connectedPeerids = concat(inPeerIds, outPeerIds) - return ok(connectedPeerids.mapIt($it).join(",")) - - return ok("") diff --git a/library/waku_thread_requests/requests/ping_request.nim b/library/waku_thread_requests/requests/ping_request.nim deleted file mode 100644 index 716b9ed68..000000000 --- a/library/waku_thread_requests/requests/ping_request.nim +++ /dev/null @@ -1,54 +0,0 @@ -import std/[json, strutils] -import chronos, results -import libp2p/[protocols/ping, switch, multiaddress, multicodec] -import ../../../waku/[factory/waku, waku_core/peers, node/waku_node], ../../alloc - -type PingRequest* = object - peerAddr: cstring - timeout: Duration - -proc createShared*( - T: type PingRequest, peerAddr: cstring, timeout: Duration -): ptr type T = - var ret = createShared(T) - ret[].peerAddr = peerAddr.alloc() - ret[].timeout 
= timeout - return ret - -proc destroyShared(self: ptr PingRequest) = - deallocShared(self[].peerAddr) - deallocShared(self) - -proc process*( - self: ptr PingRequest, waku: ptr Waku -): Future[Result[string, string]] {.async.} = - defer: - destroyShared(self) - - let peerInfo = peers.parsePeerInfo(($self[].peerAddr).split(",")).valueOr: - return err("PingRequest failed to parse peer addr: " & $error) - - proc ping(): Future[Result[Duration, string]] {.async, gcsafe.} = - try: - let conn = await waku.node.switch.dial(peerInfo.peerId, peerInfo.addrs, PingCodec) - defer: - await conn.close() - - let pingRTT = await waku.node.libp2pPing.ping(conn) - if pingRTT == 0.nanos: - return err("could not ping peer: rtt-0") - return ok(pingRTT) - except CatchableError: - return err("could not ping peer: " & getCurrentExceptionMsg()) - - let pingFuture = ping() - let pingRTT: Duration = - if self[].timeout == chronos.milliseconds(0): # No timeout expected - ?(await pingFuture) - else: - let timedOut = not (await pingFuture.withTimeout(self[].timeout)) - if timedOut: - return err("ping timed out") - ?(pingFuture.read()) - - ok($(pingRTT.nanos)) diff --git a/library/waku_thread_requests/requests/protocols/filter_request.nim b/library/waku_thread_requests/requests/protocols/filter_request.nim deleted file mode 100644 index cd401d443..000000000 --- a/library/waku_thread_requests/requests/protocols/filter_request.nim +++ /dev/null @@ -1,106 +0,0 @@ -import options, std/[strutils, sequtils] -import chronicles, chronos, results -import - ../../../../waku/waku_filter_v2/client, - ../../../../waku/waku_core/message/message, - ../../../../waku/factory/waku, - ../../../../waku/waku_filter_v2/common, - ../../../../waku/waku_core/subscription/push_handler, - ../../../../waku/node/peer_manager/peer_manager, - ../../../../waku/node/waku_node, - ../../../../waku/node/kernel_api, - ../../../../waku/waku_core/topics/pubsub_topic, - ../../../../waku/waku_core/topics/content_topic, - ../../../alloc 
- -type FilterMsgType* = enum - SUBSCRIBE - UNSUBSCRIBE - UNSUBSCRIBE_ALL - -type FilterRequest* = object - operation: FilterMsgType - pubsubTopic: cstring - contentTopics: cstring ## comma-separated list of content-topics - filterPushEventCallback: FilterPushHandler ## handles incoming filter pushed msgs - -proc createShared*( - T: type FilterRequest, - op: FilterMsgType, - pubsubTopic: cstring = "", - contentTopics: cstring = "", - filterPushEventCallback: FilterPushHandler = nil, -): ptr type T = - var ret = createShared(T) - ret[].operation = op - ret[].pubsubTopic = pubsubTopic.alloc() - ret[].contentTopics = contentTopics.alloc() - ret[].filterPushEventCallback = filterPushEventCallback - - return ret - -proc destroyShared(self: ptr FilterRequest) = - deallocShared(self[].pubsubTopic) - deallocShared(self[].contentTopics) - deallocShared(self) - -proc process*( - self: ptr FilterRequest, waku: ptr Waku -): Future[Result[string, string]] {.async.} = - defer: - destroyShared(self) - - const FilterOpTimeout = 5.seconds - if waku.node.wakuFilterClient.isNil(): - let errorMsg = "FilterRequest waku.node.wakuFilterClient is nil" - error "fail filter process", error = errorMsg, op = $(self.operation) - return err(errorMsg) - - case self.operation - of SUBSCRIBE: - waku.node.wakuFilterClient.registerPushHandler(self.filterPushEventCallback) - - let peer = waku.node.peerManager.selectPeer(WakuFilterSubscribeCodec).valueOr: - let errorMsg = - "could not find peer with WakuFilterSubscribeCodec when subscribing" - error "fail filter process", error = errorMsg, op = $(self.operation) - return err(errorMsg) - - let pubsubTopic = some(PubsubTopic($self[].pubsubTopic)) - let contentTopics = ($(self[].contentTopics)).split(",").mapIt(ContentTopic(it)) - - let subFut = waku.node.filterSubscribe(pubsubTopic, contentTopics, peer) - if not await subFut.withTimeout(FilterOpTimeout): - let errorMsg = "filter subscription timed out" - error "fail filter process", error = errorMsg, op 
= $(self.operation) - return err(errorMsg) - of UNSUBSCRIBE: - let peer = waku.node.peerManager.selectPeer(WakuFilterSubscribeCodec).valueOr: - let errorMsg = - "could not find peer with WakuFilterSubscribeCodec when unsubscribing" - error "fail filter process", error = errorMsg, op = $(self.operation) - return err(errorMsg) - - let pubsubTopic = some(PubsubTopic($self[].pubsubTopic)) - let contentTopics = ($(self[].contentTopics)).split(",").mapIt(ContentTopic(it)) - - let subFut = waku.node.filterUnsubscribe(pubsubTopic, contentTopics, peer) - if not await subFut.withTimeout(FilterOpTimeout): - let errorMsg = "filter un-subscription timed out" - error "fail filter process", error = errorMsg, op = $(self.operation) - return err(errorMsg) - of UNSUBSCRIBE_ALL: - let peer = waku.node.peerManager.selectPeer(WakuFilterSubscribeCodec).valueOr: - let errorMsg = - "could not find peer with WakuFilterSubscribeCodec when unsubscribing all" - error "fail filter process", error = errorMsg, op = $(self.operation) - return err(errorMsg) - - let unsubFut = waku.node.filterUnsubscribeAll(peer) - - if not await unsubFut.withTimeout(FilterOpTimeout): - let errorMsg = "filter un-subscription all timed out" - error "fail filter process", error = errorMsg, op = $(self.operation) - return err(errorMsg) - - return ok("") diff --git a/library/waku_thread_requests/requests/protocols/lightpush_request.nim b/library/waku_thread_requests/requests/protocols/lightpush_request.nim deleted file mode 100644 index bc3d9de2c..000000000 --- a/library/waku_thread_requests/requests/protocols/lightpush_request.nim +++ /dev/null @@ -1,109 +0,0 @@ -import options -import chronicles, chronos, results -import - ../../../../waku/waku_core/message/message, - ../../../../waku/waku_core/codecs, - ../../../../waku/factory/waku, - ../../../../waku/waku_core/message, - ../../../../waku/waku_core/time, # Timestamp - ../../../../waku/waku_core/topics/pubsub_topic, - ../../../../waku/waku_lightpush_legacy/client, - 
../../../../waku/waku_lightpush_legacy/common, - ../../../../waku/node/peer_manager/peer_manager, - ../../../alloc - -type LightpushMsgType* = enum - PUBLISH - -type ThreadSafeWakuMessage* = object - payload: SharedSeq[byte] - contentTopic: cstring - meta: SharedSeq[byte] - version: uint32 - timestamp: Timestamp - ephemeral: bool - when defined(rln): - proof: SharedSeq[byte] - -type LightpushRequest* = object - operation: LightpushMsgType - pubsubTopic: cstring - message: ThreadSafeWakuMessage # only used in 'PUBLISH' requests - -proc createShared*( - T: type LightpushRequest, - op: LightpushMsgType, - pubsubTopic: cstring, - m = WakuMessage(), -): ptr type T = - var ret = createShared(T) - ret[].operation = op - ret[].pubsubTopic = pubsubTopic.alloc() - ret[].message = ThreadSafeWakuMessage( - payload: allocSharedSeq(m.payload), - contentTopic: m.contentTopic.alloc(), - meta: allocSharedSeq(m.meta), - version: m.version, - timestamp: m.timestamp, - ephemeral: m.ephemeral, - ) - when defined(rln): - ret[].message.proof = allocSharedSeq(m.proof) - - return ret - -proc destroyShared(self: ptr LightpushRequest) = - deallocSharedSeq(self[].message.payload) - deallocShared(self[].message.contentTopic) - deallocSharedSeq(self[].message.meta) - when defined(rln): - deallocSharedSeq(self[].message.proof) - - deallocShared(self) - -proc toWakuMessage(m: ThreadSafeWakuMessage): WakuMessage = - var wakuMessage = WakuMessage() - - wakuMessage.payload = m.payload.toSeq() - wakuMessage.contentTopic = $m.contentTopic - wakuMessage.meta = m.meta.toSeq() - wakuMessage.version = m.version - wakuMessage.timestamp = m.timestamp - wakuMessage.ephemeral = m.ephemeral - - when defined(rln): - wakuMessage.proof = m.proof - - return wakuMessage - -proc process*( - self: ptr LightpushRequest, waku: ptr Waku -): Future[Result[string, string]] {.async.} = - defer: - destroyShared(self) - - case self.operation - of PUBLISH: - let msg = self.message.toWakuMessage() - let pubsubTopic = 
$self.pubsubTopic - - if waku.node.wakuLightpushClient.isNil(): - let errorMsg = "LightpushRequest waku.node.wakuLightpushClient is nil" - error "PUBLISH failed", error = errorMsg - return err(errorMsg) - - let peerOpt = waku.node.peerManager.selectPeer(WakuLightPushCodec) - if peerOpt.isNone(): - let errorMsg = "failed to lightpublish message, no suitable remote peers" - error "PUBLISH failed", error = errorMsg - return err(errorMsg) - - let msgHashHex = ( - await waku.node.wakuLegacyLightpushClient.publish( - pubsubTopic, msg, peer = peerOpt.get() - ) - ).valueOr: - error "PUBLISH failed", error = error - return err($error) - - return ok(msgHashHex) diff --git a/library/waku_thread_requests/requests/protocols/relay_request.nim b/library/waku_thread_requests/requests/protocols/relay_request.nim deleted file mode 100644 index e110f689e..000000000 --- a/library/waku_thread_requests/requests/protocols/relay_request.nim +++ /dev/null @@ -1,168 +0,0 @@ -import std/[net, sequtils, strutils] -import chronicles, chronos, stew/byteutils, results -import - waku/waku_core/message/message, - waku/factory/[validator_signed, waku], - tools/confutils/cli_args, - waku/waku_node, - waku/waku_core/message, - waku/waku_core/time, # Timestamp - waku/waku_core/topics/pubsub_topic, - waku/waku_core/topics, - waku/waku_relay/protocol, - waku/node/peer_manager - -import - ../../../alloc - -type RelayMsgType* = enum - SUBSCRIBE - UNSUBSCRIBE - PUBLISH - NUM_CONNECTED_PEERS - LIST_CONNECTED_PEERS - ## to return the list of all connected peers to an specific pubsub topic - NUM_MESH_PEERS - LIST_MESH_PEERS - ## to return the list of only the peers that conform the mesh for a particular pubsub topic - ADD_PROTECTED_SHARD ## Protects a shard with a public key - -type ThreadSafeWakuMessage* = object - payload: SharedSeq[byte] - contentTopic: cstring - meta: SharedSeq[byte] - version: uint32 - timestamp: Timestamp - ephemeral: bool - when defined(rln): - proof: SharedSeq[byte] - -type 
RelayRequest* = object - operation: RelayMsgType - pubsubTopic: cstring - relayEventCallback: WakuRelayHandler # not used in 'PUBLISH' requests - message: ThreadSafeWakuMessage # only used in 'PUBLISH' requests - clusterId: cint # only used in 'ADD_PROTECTED_SHARD' requests - shardId: cint # only used in 'ADD_PROTECTED_SHARD' requests - publicKey: cstring # only used in 'ADD_PROTECTED_SHARD' requests - -proc createShared*( - T: type RelayRequest, - op: RelayMsgType, - pubsubTopic: cstring = nil, - relayEventCallback: WakuRelayHandler = nil, - m = WakuMessage(), - clusterId: cint = 0, - shardId: cint = 0, - publicKey: cstring = nil, -): ptr type T = - var ret = createShared(T) - ret[].operation = op - ret[].pubsubTopic = pubsubTopic.alloc() - ret[].clusterId = clusterId - ret[].shardId = shardId - ret[].publicKey = publicKey.alloc() - ret[].relayEventCallback = relayEventCallback - ret[].message = ThreadSafeWakuMessage( - payload: allocSharedSeq(m.payload), - contentTopic: m.contentTopic.alloc(), - meta: allocSharedSeq(m.meta), - version: m.version, - timestamp: m.timestamp, - ephemeral: m.ephemeral, - ) - when defined(rln): - ret[].message.proof = allocSharedSeq(m.proof) - - return ret - -proc destroyShared(self: ptr RelayRequest) = - deallocSharedSeq(self[].message.payload) - deallocShared(self[].message.contentTopic) - deallocSharedSeq(self[].message.meta) - when defined(rln): - deallocSharedSeq(self[].message.proof) - deallocShared(self[].pubsubTopic) - deallocShared(self[].publicKey) - deallocShared(self) - -proc toWakuMessage(m: ThreadSafeWakuMessage): WakuMessage = - var wakuMessage = WakuMessage() - - wakuMessage.payload = m.payload.toSeq() - wakuMessage.contentTopic = $m.contentTopic - wakuMessage.meta = m.meta.toSeq() - wakuMessage.version = m.version - wakuMessage.timestamp = m.timestamp - wakuMessage.ephemeral = m.ephemeral - - when defined(rln): - wakuMessage.proof = m.proof - - return wakuMessage - -proc process*( - self: ptr RelayRequest, waku: ptr 
Waku -): Future[Result[string, string]] {.async.} = - defer: - destroyShared(self) - - if waku.node.wakuRelay.isNil(): - return err("Operation not supported without Waku Relay enabled.") - - case self.operation - of SUBSCRIBE: - waku.node.subscribe( - (kind: SubscriptionKind.PubsubSub, topic: $self.pubsubTopic), - handler = self.relayEventCallback, - ).isOkOr: - error "SUBSCRIBE failed", error - return err($error) - of UNSUBSCRIBE: - waku.node.unsubscribe((kind: SubscriptionKind.PubsubSub, topic: $self.pubsubTopic)).isOkOr: - error "UNSUBSCRIBE failed", error - return err($error) - of PUBLISH: - let msg = self.message.toWakuMessage() - let pubsubTopic = $self.pubsubTopic - - (await waku.node.wakuRelay.publish(pubsubTopic, msg)).isOkOr: - error "PUBLISH failed", error - return err($error) - - let msgHash = computeMessageHash(pubSubTopic, msg).to0xHex - return ok(msgHash) - of NUM_CONNECTED_PEERS: - let numConnPeers = waku.node.wakuRelay.getNumConnectedPeers($self.pubsubTopic).valueOr: - error "NUM_CONNECTED_PEERS failed", error - return err($error) - return ok($numConnPeers) - of LIST_CONNECTED_PEERS: - let connPeers = waku.node.wakuRelay.getConnectedPeers($self.pubsubTopic).valueOr: - error "LIST_CONNECTED_PEERS failed", error = error - return err($error) - ## returns a comma-separated string of peerIDs - return ok(connPeers.mapIt($it).join(",")) - of NUM_MESH_PEERS: - let numPeersInMesh = waku.node.wakuRelay.getNumPeersInMesh($self.pubsubTopic).valueOr: - error "NUM_MESH_PEERS failed", error = error - return err($error) - return ok($numPeersInMesh) - of LIST_MESH_PEERS: - let meshPeers = waku.node.wakuRelay.getPeersInMesh($self.pubsubTopic).valueOr: - error "LIST_MESH_PEERS failed", error = error - return err($error) - ## returns a comma-separated string of peerIDs - return ok(meshPeers.mapIt($it).join(",")) - of ADD_PROTECTED_SHARD: - try: - let relayShard = - RelayShard(clusterId: uint16(self.clusterId), shardId: uint16(self.shardId)) - let protectedShard = - 
ProtectedShard.parseCmdArg($relayShard & ":" & $self.publicKey) - waku.node.wakuRelay.addSignedShardsValidator( - @[protectedShard], uint16(self.clusterId) - ) - except ValueError: - return err(getCurrentExceptionMsg()) - return ok("") diff --git a/library/waku_thread_requests/waku_thread_request.nim b/library/waku_thread_requests/waku_thread_request.nim deleted file mode 100644 index 50462fba7..000000000 --- a/library/waku_thread_requests/waku_thread_request.nim +++ /dev/null @@ -1,104 +0,0 @@ -## This file contains the base message request type that will be handled. -## The requests are created by the main thread and processed by -## the Waku Thread. - -import std/json, results -import chronos, chronos/threadsync -import - ../../waku/factory/waku, - ../ffi_types, - ./requests/node_lifecycle_request, - ./requests/peer_manager_request, - ./requests/protocols/relay_request, - ./requests/protocols/store_request, - ./requests/protocols/lightpush_request, - ./requests/protocols/filter_request, - ./requests/debug_node_request, - ./requests/discovery_request, - ./requests/ping_request - -type RequestType* {.pure.} = enum - LIFECYCLE - PEER_MANAGER - PING - RELAY - STORE - DEBUG - DISCOVERY - LIGHTPUSH - FILTER - -type WakuThreadRequest* = object - reqType: RequestType - reqContent: pointer - callback: WakuCallBack - userData: pointer - -proc createShared*( - T: type WakuThreadRequest, - reqType: RequestType, - reqContent: pointer, - callback: WakuCallBack, - userData: pointer, -): ptr type T = - var ret = createShared(T) - ret[].reqType = reqType - ret[].reqContent = reqContent - ret[].callback = callback - ret[].userData = userData - return ret - -proc handleRes[T: string | void]( - res: Result[T, string], request: ptr WakuThreadRequest -) = - ## Handles the Result responses, which can either be Result[string, string] or - ## Result[void, string]. 
- - defer: - deallocShared(request) - - if res.isErr(): - foreignThreadGc: - let msg = "libwaku error: handleRes fireSyncRes error: " & $res.error - request[].callback( - RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), request[].userData - ) - return - - foreignThreadGc: - var msg: cstring = "" - when T is string: - msg = res.get().cstring() - request[].callback( - RET_OK, unsafeAddr msg[0], cast[csize_t](len(msg)), request[].userData - ) - return - -proc process*( - T: type WakuThreadRequest, request: ptr WakuThreadRequest, waku: ptr Waku -) {.async.} = - let retFut = - case request[].reqType - of LIFECYCLE: - cast[ptr NodeLifecycleRequest](request[].reqContent).process(waku) - of PEER_MANAGER: - cast[ptr PeerManagementRequest](request[].reqContent).process(waku[]) - of PING: - cast[ptr PingRequest](request[].reqContent).process(waku) - of RELAY: - cast[ptr RelayRequest](request[].reqContent).process(waku) - of STORE: - cast[ptr StoreRequest](request[].reqContent).process(waku) - of DEBUG: - cast[ptr DebugNodeRequest](request[].reqContent).process(waku[]) - of DISCOVERY: - cast[ptr DiscoveryRequest](request[].reqContent).process(waku) - of LIGHTPUSH: - cast[ptr LightpushRequest](request[].reqContent).process(waku) - of FILTER: - cast[ptr FilterRequest](request[].reqContent).process(waku) - - handleRes(await retFut, request) - -proc `$`*(self: WakuThreadRequest): string = - return $self.reqType diff --git a/vendor/nim-ffi b/vendor/nim-ffi new file mode 160000 index 000000000..d7a549212 --- /dev/null +++ b/vendor/nim-ffi @@ -0,0 +1 @@ +Subproject commit d7a5492121aad190cf549436836e2fa42e34ff9b diff --git a/waku.nimble b/waku.nimble index 09ff48969..7bfdfab12 100644 --- a/waku.nimble +++ b/waku.nimble @@ -30,7 +30,8 @@ requires "nim >= 2.2.4", "regex", "results", "db_connector", - "minilru" + "minilru", + "ffi" ### Helper functions proc buildModule(filePath, params = "", lang = "c"): bool = From 96196ab8bc05f31b09dac2403f9d5de3bc05f31b Mon Sep 17 00:00:00 2001 
From: Pablo Lopez Date: Mon, 22 Dec 2025 15:40:09 +0200 Subject: [PATCH 30/70] feat: compilation for iOS WIP (#3668) * feat: compilation for iOS WIP * fix: nim ios version 18 --- .gitignore | 10 + Makefile | 45 ++ .../ios/WakuExample.xcodeproj/project.pbxproj | 331 ++++++++ .../contents.xcworkspacedata | 7 + examples/ios/WakuExample/ContentView.swift | 229 ++++++ examples/ios/WakuExample/Info.plist | 36 + .../WakuExample/WakuExample-Bridging-Header.h | 15 + examples/ios/WakuExample/WakuExampleApp.swift | 19 + examples/ios/WakuExample/WakuNode.swift | 739 ++++++++++++++++++ examples/ios/WakuExample/libwaku.h | 253 ++++++ examples/ios/project.yml | 47 ++ library/ios_bearssl_stubs.c | 32 + library/ios_natpmp_stubs.c | 14 + waku.nimble | 179 +++++ 14 files changed, 1956 insertions(+) create mode 100644 examples/ios/WakuExample.xcodeproj/project.pbxproj create mode 100644 examples/ios/WakuExample.xcodeproj/project.xcworkspace/contents.xcworkspacedata create mode 100644 examples/ios/WakuExample/ContentView.swift create mode 100644 examples/ios/WakuExample/Info.plist create mode 100644 examples/ios/WakuExample/WakuExample-Bridging-Header.h create mode 100644 examples/ios/WakuExample/WakuExampleApp.swift create mode 100644 examples/ios/WakuExample/WakuNode.swift create mode 100644 examples/ios/WakuExample/libwaku.h create mode 100644 examples/ios/project.yml create mode 100644 library/ios_bearssl_stubs.c create mode 100644 library/ios_natpmp_stubs.c diff --git a/.gitignore b/.gitignore index 7430c3e99..f03c4ebaf 100644 --- a/.gitignore +++ b/.gitignore @@ -59,6 +59,10 @@ nimbus-build-system.paths /examples/nodejs/build/ /examples/rust/target/ +# Xcode user data +xcuserdata/ +*.xcuserstate + # Coverage coverage_html_report/ @@ -79,3 +83,9 @@ waku_handler.moc.cpp # Nix build result result + +# llms +AGENTS.md +nimble.develop +nimble.paths +nimbledeps diff --git a/Makefile b/Makefile index 35c107d2d..87bd7bc74 100644 --- a/Makefile +++ b/Makefile @@ -517,6 +517,51 @@ 
libwaku-android: # It's likely this architecture is not used so we might just not support it. # $(MAKE) libwaku-android-arm +################# +## iOS Bindings # +################# +.PHONY: libwaku-ios-precheck \ + libwaku-ios-device \ + libwaku-ios-simulator \ + libwaku-ios + +IOS_DEPLOYMENT_TARGET ?= 18.0 + +# Get SDK paths dynamically using xcrun +define get_ios_sdk_path +$(shell xcrun --sdk $(1) --show-sdk-path 2>/dev/null) +endef + +libwaku-ios-precheck: +ifeq ($(detected_OS),Darwin) + @command -v xcrun >/dev/null 2>&1 || { echo "Error: Xcode command line tools not installed"; exit 1; } +else + $(error iOS builds are only supported on macOS) +endif + +# Build for iOS architecture +build-libwaku-for-ios-arch: + IOS_SDK=$(IOS_SDK) IOS_ARCH=$(IOS_ARCH) IOS_SDK_PATH=$(IOS_SDK_PATH) $(ENV_SCRIPT) nim libWakuIOS $(NIM_PARAMS) waku.nims + +# iOS device (arm64) +libwaku-ios-device: IOS_ARCH=arm64 +libwaku-ios-device: IOS_SDK=iphoneos +libwaku-ios-device: IOS_SDK_PATH=$(call get_ios_sdk_path,iphoneos) +libwaku-ios-device: | libwaku-ios-precheck build deps + $(MAKE) build-libwaku-for-ios-arch IOS_ARCH=$(IOS_ARCH) IOS_SDK=$(IOS_SDK) IOS_SDK_PATH=$(IOS_SDK_PATH) + +# iOS simulator (arm64 - Apple Silicon Macs) +libwaku-ios-simulator: IOS_ARCH=arm64 +libwaku-ios-simulator: IOS_SDK=iphonesimulator +libwaku-ios-simulator: IOS_SDK_PATH=$(call get_ios_sdk_path,iphonesimulator) +libwaku-ios-simulator: | libwaku-ios-precheck build deps + $(MAKE) build-libwaku-for-ios-arch IOS_ARCH=$(IOS_ARCH) IOS_SDK=$(IOS_SDK) IOS_SDK_PATH=$(IOS_SDK_PATH) + +# Build all iOS targets +libwaku-ios: + $(MAKE) libwaku-ios-device + $(MAKE) libwaku-ios-simulator + cwaku_example: | build libwaku echo -e $(BUILD_MSG) "build/$@" && \ cc -o "build/$@" \ diff --git a/examples/ios/WakuExample.xcodeproj/project.pbxproj b/examples/ios/WakuExample.xcodeproj/project.pbxproj new file mode 100644 index 000000000..b7ce1dce7 --- /dev/null +++ b/examples/ios/WakuExample.xcodeproj/project.pbxproj @@ -0,0 +1,331 @@ +// 
!$*UTF8*$! +{ + archiveVersion = 1; + classes = { + }; + objectVersion = 63; + objects = { + +/* Begin PBXBuildFile section */ + 45714AF6D1D12AF5C36694FB /* WakuExampleApp.swift in Sources */ = {isa = PBXBuildFile; fileRef = 0671AF6DCB0D788B0C1E9C8B /* WakuExampleApp.swift */; }; + 6468FA3F5F760D3FCAD6CDBF /* ContentView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 7D8744E36DADC11F38A1CC99 /* ContentView.swift */; }; + C4EA202B782038F96336401F /* WakuNode.swift in Sources */ = {isa = PBXBuildFile; fileRef = 638A565C495A63CFF7396FBC /* WakuNode.swift */; }; +/* End PBXBuildFile section */ + +/* Begin PBXFileReference section */ + 0671AF6DCB0D788B0C1E9C8B /* WakuExampleApp.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = WakuExampleApp.swift; sourceTree = ""; }; + 31BE20DB2755A11000723420 /* libwaku.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = libwaku.h; sourceTree = ""; }; + 5C5AAC91E0166D28BFA986DB /* Info.plist */ = {isa = PBXFileReference; lastKnownFileType = text.plist; path = Info.plist; sourceTree = ""; }; + 638A565C495A63CFF7396FBC /* WakuNode.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = WakuNode.swift; sourceTree = ""; }; + 7D8744E36DADC11F38A1CC99 /* ContentView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = ContentView.swift; sourceTree = ""; }; + A8655016B3DF9B0877631CE5 /* WakuExample-Bridging-Header.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = "WakuExample-Bridging-Header.h"; sourceTree = ""; }; + CFBE844B6E18ACB81C65F83B /* WakuExample.app */ = {isa = PBXFileReference; explicitFileType = wrapper.application; includeInIndex = 0; path = WakuExample.app; sourceTree = BUILT_PRODUCTS_DIR; }; +/* End PBXFileReference section */ + +/* Begin PBXGroup section */ + 34547A6259485BD047D6375C /* Products */ = { + isa = PBXGroup; + children = ( + CFBE844B6E18ACB81C65F83B /* WakuExample.app */, + ); 
+ name = Products; + sourceTree = ""; + }; + 4F76CB85EC44E951B8E75522 /* WakuExample */ = { + isa = PBXGroup; + children = ( + 7D8744E36DADC11F38A1CC99 /* ContentView.swift */, + 5C5AAC91E0166D28BFA986DB /* Info.plist */, + 31BE20DB2755A11000723420 /* libwaku.h */, + A8655016B3DF9B0877631CE5 /* WakuExample-Bridging-Header.h */, + 0671AF6DCB0D788B0C1E9C8B /* WakuExampleApp.swift */, + 638A565C495A63CFF7396FBC /* WakuNode.swift */, + ); + path = WakuExample; + sourceTree = ""; + }; + D40CD2446F177CAABB0A747A = { + isa = PBXGroup; + children = ( + 4F76CB85EC44E951B8E75522 /* WakuExample */, + 34547A6259485BD047D6375C /* Products */, + ); + sourceTree = ""; + }; +/* End PBXGroup section */ + +/* Begin PBXNativeTarget section */ + F751EF8294AD21F713D47FDA /* WakuExample */ = { + isa = PBXNativeTarget; + buildConfigurationList = 757FA0123629BD63CB254113 /* Build configuration list for PBXNativeTarget "WakuExample" */; + buildPhases = ( + D3AFD8C4DA68BF5C4F7D8E10 /* Sources */, + ); + buildRules = ( + ); + dependencies = ( + ); + name = WakuExample; + packageProductDependencies = ( + ); + productName = WakuExample; + productReference = CFBE844B6E18ACB81C65F83B /* WakuExample.app */; + productType = "com.apple.product-type.application"; + }; +/* End PBXNativeTarget section */ + +/* Begin PBXProject section */ + 4FF82F0F4AF8E1E34728F150 /* Project object */ = { + isa = PBXProject; + attributes = { + BuildIndependentTargetsInParallel = YES; + LastUpgradeCheck = 1500; + }; + buildConfigurationList = B3A4F48294254543E79767C4 /* Build configuration list for PBXProject "WakuExample" */; + compatibilityVersion = "Xcode 14.0"; + developmentRegion = en; + hasScannedForEncodings = 0; + knownRegions = ( + Base, + en, + ); + mainGroup = D40CD2446F177CAABB0A747A; + minimizedProjectReferenceProxies = 1; + projectDirPath = ""; + projectRoot = ""; + targets = ( + F751EF8294AD21F713D47FDA /* WakuExample */, + ); + }; +/* End PBXProject section */ + +/* Begin PBXSourcesBuildPhase section */ 
+ D3AFD8C4DA68BF5C4F7D8E10 /* Sources */ = { + isa = PBXSourcesBuildPhase; + buildActionMask = 2147483647; + files = ( + 6468FA3F5F760D3FCAD6CDBF /* ContentView.swift in Sources */, + 45714AF6D1D12AF5C36694FB /* WakuExampleApp.swift in Sources */, + C4EA202B782038F96336401F /* WakuNode.swift in Sources */, + ); + runOnlyForDeploymentPostprocessing = 0; + }; +/* End PBXSourcesBuildPhase section */ + +/* Begin XCBuildConfiguration section */ + 36939122077C66DD94082311 /* Release */ = { + isa = XCBuildConfiguration; + buildSettings = { + ASSETCATALOG_COMPILER_APPICON_NAME = AppIcon; + CODE_SIGN_IDENTITY = "iPhone Developer"; + DEVELOPMENT_TEAM = 2Q52K2W84K; + HEADER_SEARCH_PATHS = "$(PROJECT_DIR)/WakuExample"; + INFOPLIST_FILE = WakuExample/Info.plist; + IPHONEOS_DEPLOYMENT_TARGET = 18.6; + LD_RUNPATH_SEARCH_PATHS = ( + "$(inherited)", + "@executable_path/Frameworks", + ); + "LIBRARY_SEARCH_PATHS[sdk=iphoneos*]" = "$(PROJECT_DIR)/../../build/ios/iphoneos-arm64"; + "LIBRARY_SEARCH_PATHS[sdk=iphonesimulator*]" = "$(PROJECT_DIR)/../../build/ios/iphonesimulator-arm64"; + MACOSX_DEPLOYMENT_TARGET = 15.6; + OTHER_LDFLAGS = ( + "-lc++", + "-force_load", + "$(PROJECT_DIR)/../../build/ios/iphoneos-arm64/libwaku.a", + "-lsqlite3", + "-lz", + ); + PRODUCT_BUNDLE_IDENTIFIER = org.waku.example; + SDKROOT = iphoneos; + SUPPORTED_PLATFORMS = "iphoneos iphonesimulator"; + SUPPORTS_MACCATALYST = NO; + SUPPORTS_MAC_DESIGNED_FOR_IPHONE_IPAD = YES; + SUPPORTS_XR_DESIGNED_FOR_IPHONE_IPAD = YES; + SWIFT_OBJC_BRIDGING_HEADER = "WakuExample/WakuExample-Bridging-Header.h"; + TARGETED_DEVICE_FAMILY = "1,2"; + }; + name = Release; + }; + 9BA833A09EEDB4B3FCCD8F8E /* Release */ = { + isa = XCBuildConfiguration; + buildSettings = { + ALWAYS_SEARCH_USER_PATHS = NO; + CLANG_ANALYZER_NONNULL = YES; + CLANG_ANALYZER_NUMBER_OBJECT_CONVERSION = YES_AGGRESSIVE; + CLANG_CXX_LANGUAGE_STANDARD = "gnu++14"; + CLANG_CXX_LIBRARY = "libc++"; + CLANG_ENABLE_MODULES = YES; + CLANG_ENABLE_OBJC_ARC = YES; + 
CLANG_ENABLE_OBJC_WEAK = YES; + CLANG_WARN_BLOCK_CAPTURE_AUTORELEASING = YES; + CLANG_WARN_BOOL_CONVERSION = YES; + CLANG_WARN_COMMA = YES; + CLANG_WARN_CONSTANT_CONVERSION = YES; + CLANG_WARN_DEPRECATED_OBJC_IMPLEMENTATIONS = YES; + CLANG_WARN_DIRECT_OBJC_ISA_USAGE = YES_ERROR; + CLANG_WARN_DOCUMENTATION_COMMENTS = YES; + CLANG_WARN_EMPTY_BODY = YES; + CLANG_WARN_ENUM_CONVERSION = YES; + CLANG_WARN_INFINITE_RECURSION = YES; + CLANG_WARN_INT_CONVERSION = YES; + CLANG_WARN_NON_LITERAL_NULL_CONVERSION = YES; + CLANG_WARN_OBJC_IMPLICIT_RETAIN_SELF = YES; + CLANG_WARN_OBJC_LITERAL_CONVERSION = YES; + CLANG_WARN_OBJC_ROOT_CLASS = YES_ERROR; + CLANG_WARN_QUOTED_INCLUDE_IN_FRAMEWORK_HEADER = YES; + CLANG_WARN_RANGE_LOOP_ANALYSIS = YES; + CLANG_WARN_STRICT_PROTOTYPES = YES; + CLANG_WARN_SUSPICIOUS_MOVE = YES; + CLANG_WARN_UNGUARDED_AVAILABILITY = YES_AGGRESSIVE; + CLANG_WARN_UNREACHABLE_CODE = YES; + CLANG_WARN__DUPLICATE_METHOD_MATCH = YES; + COPY_PHASE_STRIP = NO; + DEBUG_INFORMATION_FORMAT = "dwarf-with-dsym"; + ENABLE_NS_ASSERTIONS = NO; + ENABLE_STRICT_OBJC_MSGSEND = YES; + GCC_C_LANGUAGE_STANDARD = gnu11; + GCC_NO_COMMON_BLOCKS = YES; + GCC_WARN_64_TO_32_BIT_CONVERSION = YES; + GCC_WARN_ABOUT_RETURN_TYPE = YES_ERROR; + GCC_WARN_UNDECLARED_SELECTOR = YES; + GCC_WARN_UNINITIALIZED_AUTOS = YES_AGGRESSIVE; + GCC_WARN_UNUSED_FUNCTION = YES; + GCC_WARN_UNUSED_VARIABLE = YES; + IPHONEOS_DEPLOYMENT_TARGET = 18.6; + MTL_ENABLE_DEBUG_INFO = NO; + MTL_FAST_MATH = YES; + PRODUCT_NAME = "$(TARGET_NAME)"; + SDKROOT = iphoneos; + SUPPORTED_PLATFORMS = "iphoneos iphonesimulator"; + SUPPORTS_MACCATALYST = NO; + SWIFT_COMPILATION_MODE = wholemodule; + SWIFT_OPTIMIZATION_LEVEL = "-O"; + SWIFT_VERSION = 5.0; + }; + name = Release; + }; + A59ABFB792FED8974231E5AC /* Debug */ = { + isa = XCBuildConfiguration; + buildSettings = { + ALWAYS_SEARCH_USER_PATHS = NO; + CLANG_ANALYZER_NONNULL = YES; + CLANG_ANALYZER_NUMBER_OBJECT_CONVERSION = YES_AGGRESSIVE; + CLANG_CXX_LANGUAGE_STANDARD = 
"gnu++14"; + CLANG_CXX_LIBRARY = "libc++"; + CLANG_ENABLE_MODULES = YES; + CLANG_ENABLE_OBJC_ARC = YES; + CLANG_ENABLE_OBJC_WEAK = YES; + CLANG_WARN_BLOCK_CAPTURE_AUTORELEASING = YES; + CLANG_WARN_BOOL_CONVERSION = YES; + CLANG_WARN_COMMA = YES; + CLANG_WARN_CONSTANT_CONVERSION = YES; + CLANG_WARN_DEPRECATED_OBJC_IMPLEMENTATIONS = YES; + CLANG_WARN_DIRECT_OBJC_ISA_USAGE = YES_ERROR; + CLANG_WARN_DOCUMENTATION_COMMENTS = YES; + CLANG_WARN_EMPTY_BODY = YES; + CLANG_WARN_ENUM_CONVERSION = YES; + CLANG_WARN_INFINITE_RECURSION = YES; + CLANG_WARN_INT_CONVERSION = YES; + CLANG_WARN_NON_LITERAL_NULL_CONVERSION = YES; + CLANG_WARN_OBJC_IMPLICIT_RETAIN_SELF = YES; + CLANG_WARN_OBJC_LITERAL_CONVERSION = YES; + CLANG_WARN_OBJC_ROOT_CLASS = YES_ERROR; + CLANG_WARN_QUOTED_INCLUDE_IN_FRAMEWORK_HEADER = YES; + CLANG_WARN_RANGE_LOOP_ANALYSIS = YES; + CLANG_WARN_STRICT_PROTOTYPES = YES; + CLANG_WARN_SUSPICIOUS_MOVE = YES; + CLANG_WARN_UNGUARDED_AVAILABILITY = YES_AGGRESSIVE; + CLANG_WARN_UNREACHABLE_CODE = YES; + CLANG_WARN__DUPLICATE_METHOD_MATCH = YES; + COPY_PHASE_STRIP = NO; + DEBUG_INFORMATION_FORMAT = dwarf; + ENABLE_STRICT_OBJC_MSGSEND = YES; + ENABLE_TESTABILITY = YES; + GCC_C_LANGUAGE_STANDARD = gnu11; + GCC_DYNAMIC_NO_PIC = NO; + GCC_NO_COMMON_BLOCKS = YES; + GCC_OPTIMIZATION_LEVEL = 0; + GCC_PREPROCESSOR_DEFINITIONS = ( + "$(inherited)", + "DEBUG=1", + ); + GCC_WARN_64_TO_32_BIT_CONVERSION = YES; + GCC_WARN_ABOUT_RETURN_TYPE = YES_ERROR; + GCC_WARN_UNDECLARED_SELECTOR = YES; + GCC_WARN_UNINITIALIZED_AUTOS = YES_AGGRESSIVE; + GCC_WARN_UNUSED_FUNCTION = YES; + GCC_WARN_UNUSED_VARIABLE = YES; + IPHONEOS_DEPLOYMENT_TARGET = 18.6; + MTL_ENABLE_DEBUG_INFO = INCLUDE_SOURCE; + MTL_FAST_MATH = YES; + ONLY_ACTIVE_ARCH = YES; + PRODUCT_NAME = "$(TARGET_NAME)"; + SDKROOT = iphoneos; + SUPPORTED_PLATFORMS = "iphoneos iphonesimulator"; + SUPPORTS_MACCATALYST = NO; + SWIFT_ACTIVE_COMPILATION_CONDITIONS = DEBUG; + SWIFT_OPTIMIZATION_LEVEL = "-Onone"; + SWIFT_VERSION = 5.0; + }; + name = 
Debug; + }; + AF5ADDAA865B1F6BD4E70A79 /* Debug */ = { + isa = XCBuildConfiguration; + buildSettings = { + ASSETCATALOG_COMPILER_APPICON_NAME = AppIcon; + CODE_SIGN_IDENTITY = "iPhone Developer"; + DEVELOPMENT_TEAM = 2Q52K2W84K; + HEADER_SEARCH_PATHS = "$(PROJECT_DIR)/WakuExample"; + INFOPLIST_FILE = WakuExample/Info.plist; + IPHONEOS_DEPLOYMENT_TARGET = 18.6; + LD_RUNPATH_SEARCH_PATHS = ( + "$(inherited)", + "@executable_path/Frameworks", + ); + "LIBRARY_SEARCH_PATHS[sdk=iphoneos*]" = "$(PROJECT_DIR)/../../build/ios/iphoneos-arm64"; + "LIBRARY_SEARCH_PATHS[sdk=iphonesimulator*]" = "$(PROJECT_DIR)/../../build/ios/iphonesimulator-arm64"; + MACOSX_DEPLOYMENT_TARGET = 15.6; + OTHER_LDFLAGS = ( + "-lc++", + "-force_load", + "$(PROJECT_DIR)/../../build/ios/iphoneos-arm64/libwaku.a", + "-lsqlite3", + "-lz", + ); + PRODUCT_BUNDLE_IDENTIFIER = org.waku.example; + SDKROOT = iphoneos; + SUPPORTED_PLATFORMS = "iphoneos iphonesimulator"; + SUPPORTS_MACCATALYST = NO; + SUPPORTS_MAC_DESIGNED_FOR_IPHONE_IPAD = YES; + SUPPORTS_XR_DESIGNED_FOR_IPHONE_IPAD = YES; + SWIFT_OBJC_BRIDGING_HEADER = "WakuExample/WakuExample-Bridging-Header.h"; + TARGETED_DEVICE_FAMILY = "1,2"; + }; + name = Debug; + }; +/* End XCBuildConfiguration section */ + +/* Begin XCConfigurationList section */ + 757FA0123629BD63CB254113 /* Build configuration list for PBXNativeTarget "WakuExample" */ = { + isa = XCConfigurationList; + buildConfigurations = ( + AF5ADDAA865B1F6BD4E70A79 /* Debug */, + 36939122077C66DD94082311 /* Release */, + ); + defaultConfigurationIsVisible = 0; + defaultConfigurationName = Debug; + }; + B3A4F48294254543E79767C4 /* Build configuration list for PBXProject "WakuExample" */ = { + isa = XCConfigurationList; + buildConfigurations = ( + A59ABFB792FED8974231E5AC /* Debug */, + 9BA833A09EEDB4B3FCCD8F8E /* Release */, + ); + defaultConfigurationIsVisible = 0; + defaultConfigurationName = Debug; + }; +/* End XCConfigurationList section */ + }; + rootObject = 4FF82F0F4AF8E1E34728F150 /* 
Project object */;
+}
diff --git a/examples/ios/WakuExample.xcodeproj/project.xcworkspace/contents.xcworkspacedata b/examples/ios/WakuExample.xcodeproj/project.xcworkspace/contents.xcworkspacedata
new file mode 100644
index 000000000..919434a62
--- /dev/null
+++ b/examples/ios/WakuExample.xcodeproj/project.xcworkspace/contents.xcworkspacedata
@@ -0,0 +1,7 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<Workspace
+   version = "1.0">
+   <FileRef
+      location = "self:">
+   </FileRef>
+</Workspace>
diff --git a/examples/ios/WakuExample/ContentView.swift b/examples/ios/WakuExample/ContentView.swift
new file mode 100644
index 000000000..14bb4ee1d
--- /dev/null
+++ b/examples/ios/WakuExample/ContentView.swift
@@ -0,0 +1,229 @@
+//
+// ContentView.swift
+// WakuExample
+//
+// Minimal chat PoC using libwaku on iOS
+//
+
+import SwiftUI
+
+struct ContentView: View {
+    @StateObject private var wakuNode = WakuNode()
+    @State private var messageText = ""
+
+    var body: some View {
+        ZStack {
+            // Main content
+            VStack(spacing: 0) {
+                // Header with status
+                HStack {
+                    Circle()
+                        .fill(statusColor)
+                        .frame(width: 10, height: 10)
+                    VStack(alignment: .leading, spacing: 2) {
+                        Text(wakuNode.status.rawValue)
+                            .font(.caption)
+                        if wakuNode.status == .running {
+                            HStack(spacing: 4) {
+                                Text(wakuNode.isConnected ?
"Connected" : "Discovering...") + Text("•") + filterStatusView + } + .font(.caption2) + .foregroundColor(.secondary) + + // Subscription maintenance status + if wakuNode.subscriptionMaintenanceActive { + HStack(spacing: 4) { + Image(systemName: "arrow.triangle.2.circlepath") + .foregroundColor(.blue) + Text("Maintenance active") + if wakuNode.failedSubscribeAttempts > 0 { + Text("(\(wakuNode.failedSubscribeAttempts) retries)") + .foregroundColor(.orange) + } + } + .font(.caption2) + .foregroundColor(.secondary) + } + } + } + Spacer() + if wakuNode.status == .stopped { + Button("Start") { + wakuNode.start() + } + .buttonStyle(.borderedProminent) + .controlSize(.small) + } else if wakuNode.status == .running { + if !wakuNode.filterSubscribed { + Button("Resub") { + wakuNode.resubscribe() + } + .buttonStyle(.bordered) + .controlSize(.small) + } + Button("Stop") { + wakuNode.stop() + } + .buttonStyle(.bordered) + .controlSize(.small) + } + } + .padding() + .background(Color.gray.opacity(0.1)) + + // Messages list + ScrollViewReader { proxy in + ScrollView { + LazyVStack(alignment: .leading, spacing: 8) { + ForEach(wakuNode.receivedMessages.reversed()) { message in + MessageBubble(message: message) + .id(message.id) + } + } + .padding() + } + .onChange(of: wakuNode.receivedMessages.count) { _, newCount in + if let lastMessage = wakuNode.receivedMessages.first { + withAnimation { + proxy.scrollTo(lastMessage.id, anchor: .bottom) + } + } + } + } + + Divider() + + // Message input + HStack(spacing: 12) { + TextField("Message", text: $messageText) + .textFieldStyle(.roundedBorder) + .disabled(wakuNode.status != .running) + + Button(action: sendMessage) { + Image(systemName: "paperplane.fill") + .foregroundColor(.white) + .padding(10) + .background(canSend ? 
Color.blue : Color.gray) + .clipShape(Circle()) + } + .disabled(!canSend) + } + .padding() + .background(Color.gray.opacity(0.1)) + } + + // Toast overlay for errors + VStack { + ForEach(wakuNode.errorQueue) { error in + ToastView(error: error) { + wakuNode.dismissError(error) + } + .transition(.asymmetric( + insertion: .move(edge: .top).combined(with: .opacity), + removal: .opacity + )) + } + Spacer() + } + .padding(.top, 8) + .animation(.easeInOut(duration: 0.3), value: wakuNode.errorQueue) + } + } + + private var statusColor: Color { + switch wakuNode.status { + case .stopped: return .gray + case .starting: return .yellow + case .running: return .green + case .error: return .red + } + } + + @ViewBuilder + private var filterStatusView: some View { + if wakuNode.filterSubscribed { + Text("Filter OK") + .foregroundColor(.green) + } else if wakuNode.failedSubscribeAttempts > 0 { + Text("Filter retrying (\(wakuNode.failedSubscribeAttempts))") + .foregroundColor(.orange) + } else { + Text("Filter pending") + .foregroundColor(.orange) + } + } + + private var canSend: Bool { + wakuNode.status == .running && wakuNode.isConnected && !messageText.trimmingCharacters(in: .whitespaces).isEmpty + } + + private func sendMessage() { + let text = messageText.trimmingCharacters(in: .whitespaces) + guard !text.isEmpty else { return } + + wakuNode.publish(message: text) + messageText = "" + } +} + +// MARK: - Toast View + +struct ToastView: View { + let error: TimestampedError + let onDismiss: () -> Void + + var body: some View { + HStack(spacing: 12) { + Image(systemName: "exclamationmark.triangle.fill") + .foregroundColor(.white) + + Text(error.message) + .font(.subheadline) + .foregroundColor(.white) + .lineLimit(2) + + Spacer() + + Button(action: onDismiss) { + Image(systemName: "xmark.circle.fill") + .foregroundColor(.white.opacity(0.8)) + .font(.title3) + } + .buttonStyle(.plain) + } + .padding(.horizontal, 16) + .padding(.vertical, 12) + .background( + 
RoundedRectangle(cornerRadius: 12)
+                .fill(Color.red.opacity(0.9))
+                .shadow(color: .black.opacity(0.2), radius: 8, x: 0, y: 4)
+        )
+        .padding(.horizontal, 16)
+        .padding(.vertical, 4)
+    }
+}
+
+// MARK: - Message Bubble
+
+struct MessageBubble: View {
+    let message: WakuMessage
+
+    var body: some View {
+        VStack(alignment: .leading, spacing: 4) {
+            Text(message.payload)
+                .padding(10)
+                .background(Color.blue.opacity(0.1))
+                .cornerRadius(12)
+
+            Text(message.timestamp, style: .time)
+                .font(.caption2)
+                .foregroundColor(.secondary)
+        }
+    }
+}
+
+#Preview {
+    ContentView()
+}
diff --git a/examples/ios/WakuExample/Info.plist b/examples/ios/WakuExample/Info.plist
new file mode 100644
index 000000000..a9222555a
--- /dev/null
+++ b/examples/ios/WakuExample/Info.plist
@@ -0,0 +1,36 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+<plist version="1.0">
+<dict>
+    <key>CFBundleDevelopmentRegion</key>
+    <string>$(DEVELOPMENT_LANGUAGE)</string>
+    <key>CFBundleDisplayName</key>
+    <string>Waku Example</string>
+    <key>CFBundleExecutable</key>
+    <string>$(EXECUTABLE_NAME)</string>
+    <key>CFBundleIdentifier</key>
+    <string>org.waku.example</string>
+    <key>CFBundleInfoDictionaryVersion</key>
+    <string>6.0</string>
+    <key>CFBundleName</key>
+    <string>WakuExample</string>
+    <key>CFBundlePackageType</key>
+    <string>APPL</string>
+    <key>CFBundleShortVersionString</key>
+    <string>1.0</string>
+    <key>CFBundleVersion</key>
+    <string>1</string>
+    <key>NSAppTransportSecurity</key>
+    <dict>
+        <key>NSAllowsArbitraryLoads</key>
+        <true/>
+    </dict>
+    <key>UILaunchScreen</key>
+    <dict/>
+    <key>UISupportedInterfaceOrientations</key>
+    <array>
+        <string>UIInterfaceOrientationPortrait</string>
+    </array>
+</dict>
+</plist>
diff --git a/examples/ios/WakuExample/WakuExample-Bridging-Header.h b/examples/ios/WakuExample/WakuExample-Bridging-Header.h
new file mode 100644
index 000000000..50595450e
--- /dev/null
+++ b/examples/ios/WakuExample/WakuExample-Bridging-Header.h
@@ -0,0 +1,15 @@
+//
+// WakuExample-Bridging-Header.h
+// WakuExample
+//
+// Bridging header to expose libwaku C functions to Swift
+//
+
+#ifndef WakuExample_Bridging_Header_h
+#define WakuExample_Bridging_Header_h
+
+#import "libwaku.h"
+
+#endif /* WakuExample_Bridging_Header_h */
+
+
diff --git a/examples/ios/WakuExample/WakuExampleApp.swift b/examples/ios/WakuExample/WakuExampleApp.swift
new file mode 100644
index 000000000..fb99785aa
--- /dev/null
+++ 
b/examples/ios/WakuExample/WakuExampleApp.swift @@ -0,0 +1,19 @@ +// +// WakuExampleApp.swift +// WakuExample +// +// SwiftUI app entry point for Waku iOS example +// + +import SwiftUI + +@main +struct WakuExampleApp: App { + var body: some Scene { + WindowGroup { + ContentView() + } + } +} + + diff --git a/examples/ios/WakuExample/WakuNode.swift b/examples/ios/WakuExample/WakuNode.swift new file mode 100644 index 000000000..245529a2f --- /dev/null +++ b/examples/ios/WakuExample/WakuNode.swift @@ -0,0 +1,739 @@ +// +// WakuNode.swift +// WakuExample +// +// Swift wrapper around libwaku C API for edge mode (lightpush + filter) +// Uses Swift actors for thread safety and UI responsiveness +// + +import Foundation + +// MARK: - Data Types + +/// Message received from Waku network +struct WakuMessage: Identifiable, Equatable, Sendable { + let id: String // messageHash from Waku - unique identifier for deduplication + let payload: String + let contentTopic: String + let timestamp: Date +} + +/// Waku node status +enum WakuNodeStatus: String, Sendable { + case stopped = "Stopped" + case starting = "Starting..." + case running = "Running" + case error = "Error" +} + +/// Status updates from WakuActor to WakuNode +enum WakuStatusUpdate: Sendable { + case statusChanged(WakuNodeStatus) + case connectionChanged(isConnected: Bool) + case filterSubscriptionChanged(subscribed: Bool, failedAttempts: Int) + case maintenanceChanged(active: Bool) + case error(String) +} + +/// Error with timestamp for toast queue +struct TimestampedError: Identifiable, Equatable { + let id = UUID() + let message: String + let timestamp: Date + + static func == (lhs: TimestampedError, rhs: TimestampedError) -> Bool { + lhs.id == rhs.id + } +} + +// MARK: - Callback Context for C API + +private final class CallbackContext: @unchecked Sendable { + private let lock = NSLock() + private var _continuation: CheckedContinuation<(success: Bool, result: String?), Never>? 
+ private var _resumed = false + var success: Bool = false + var result: String? + + var continuation: CheckedContinuation<(success: Bool, result: String?), Never>? { + get { + lock.lock() + defer { lock.unlock() } + return _continuation + } + set { + lock.lock() + defer { lock.unlock() } + _continuation = newValue + } + } + + /// Thread-safe resume - ensures continuation is only resumed once + /// Returns true if this call actually resumed, false if already resumed + @discardableResult + func resumeOnce(returning value: (success: Bool, result: String?)) -> Bool { + lock.lock() + defer { lock.unlock() } + + guard !_resumed, let cont = _continuation else { + return false + } + + _resumed = true + _continuation = nil + cont.resume(returning: value) + return true + } +} + +// MARK: - WakuActor + +/// Actor that isolates all Waku operations from the main thread +/// All C API calls and mutable state are contained here +actor WakuActor { + + // MARK: - State + + private var ctx: UnsafeMutableRawPointer? + private var seenMessageHashes: Set = [] + private var isSubscribed: Bool = false + private var isSubscribing: Bool = false + private var hasPeers: Bool = false + private var maintenanceTask: Task? + private var eventProcessingTask: Task? + + // Stream continuations for communicating with UI + private var messageContinuation: AsyncStream.Continuation? + private var statusContinuation: AsyncStream.Continuation? + + // Event stream from C callbacks + private var eventContinuation: AsyncStream.Continuation? 
+ + // Configuration + let defaultPubsubTopic = "/waku/2/rs/1/0" + let defaultContentTopic = "/waku-ios-example/1/chat/proto" + private let staticPeer = "/dns4/node-01.do-ams3.waku.sandbox.status.im/tcp/30303/p2p/16Uiu2HAmPLe7Mzm8TsYUubgCAW1aJoeFScxrLj8ppHFivPo97bUZ" + + // Subscription maintenance settings + private let maxFailedSubscribes = 3 + private let retryWaitSeconds: UInt64 = 2_000_000_000 // 2 seconds in nanoseconds + private let maintenanceIntervalSeconds: UInt64 = 30_000_000_000 // 30 seconds in nanoseconds + private let maxSeenHashes = 1000 + + // MARK: - Static callback storage (for C callbacks) + + // We need a way for C callbacks to reach the actor + // Using a simple static reference (safe because we only have one instance) + private static var sharedEventContinuation: AsyncStream.Continuation? + + private static let eventCallback: WakuCallBack = { ret, msg, len, userData in + guard ret == RET_OK, let msg = msg else { return } + let str = String(cString: msg) + WakuActor.sharedEventContinuation?.yield(str) + } + + private static let syncCallback: WakuCallBack = { ret, msg, len, userData in + guard let userData = userData else { return } + let context = Unmanaged.fromOpaque(userData).takeUnretainedValue() + let success = (ret == RET_OK) + var resultStr: String? = nil + if let msg = msg { + resultStr = String(cString: msg) + } + context.resumeOnce(returning: (success, resultStr)) + } + + // MARK: - Stream Setup + + func setMessageContinuation(_ continuation: AsyncStream.Continuation?) { + self.messageContinuation = continuation + } + + func setStatusContinuation(_ continuation: AsyncStream.Continuation?) 
{ + self.statusContinuation = continuation + } + + // MARK: - Public API + + var isRunning: Bool { + ctx != nil + } + + var hasConnectedPeers: Bool { + hasPeers + } + + func start() async { + guard ctx == nil else { + print("[WakuActor] Already started") + return + } + + statusContinuation?.yield(.statusChanged(.starting)) + + // Create event stream for C callbacks + let eventStream = AsyncStream { continuation in + self.eventContinuation = continuation + WakuActor.sharedEventContinuation = continuation + } + + // Start event processing task + eventProcessingTask = Task { [weak self] in + for await eventJson in eventStream { + await self?.handleEvent(eventJson) + } + } + + // Initialize the node + let success = await initializeNode() + + if success { + statusContinuation?.yield(.statusChanged(.running)) + + // Connect to peer + let connected = await connectToPeer() + if connected { + hasPeers = true + statusContinuation?.yield(.connectionChanged(isConnected: true)) + + // Start maintenance loop + startMaintenanceLoop() + } else { + statusContinuation?.yield(.error("Failed to connect to service peer")) + } + } + } + + func stop() async { + guard let context = ctx else { return } + + // Stop maintenance loop + maintenanceTask?.cancel() + maintenanceTask = nil + + // Stop event processing + eventProcessingTask?.cancel() + eventProcessingTask = nil + + // Close event stream + eventContinuation?.finish() + eventContinuation = nil + WakuActor.sharedEventContinuation = nil + + statusContinuation?.yield(.statusChanged(.stopped)) + statusContinuation?.yield(.connectionChanged(isConnected: false)) + statusContinuation?.yield(.filterSubscriptionChanged(subscribed: false, failedAttempts: 0)) + statusContinuation?.yield(.maintenanceChanged(active: false)) + + // Reset state + let ctxToStop = context + ctx = nil + isSubscribed = false + isSubscribing = false + hasPeers = false + seenMessageHashes.removeAll() + + // Unsubscribe and stop in background (fire and forget) + 
Task.detached { + // Unsubscribe + _ = await self.callWakuSync { waku_filter_unsubscribe_all(ctxToStop, WakuActor.syncCallback, $0) } + print("[WakuActor] Unsubscribed from filter") + + // Stop + _ = await self.callWakuSync { waku_stop(ctxToStop, WakuActor.syncCallback, $0) } + print("[WakuActor] Node stopped") + + // Destroy + _ = await self.callWakuSync { waku_destroy(ctxToStop, WakuActor.syncCallback, $0) } + print("[WakuActor] Node destroyed") + } + } + + func publish(message: String, contentTopic: String? = nil) async { + guard let context = ctx else { + print("[WakuActor] Node not started") + return + } + + guard hasPeers else { + print("[WakuActor] No peers connected yet") + statusContinuation?.yield(.error("No peers connected yet. Please wait...")) + return + } + + let topic = contentTopic ?? defaultContentTopic + guard let payloadData = message.data(using: .utf8) else { return } + let payloadBase64 = payloadData.base64EncodedString() + let timestamp = Int64(Date().timeIntervalSince1970 * 1_000_000_000) + let jsonMessage = """ + {"payload":"\(payloadBase64)","contentTopic":"\(topic)","timestamp":\(timestamp)} + """ + + let result = await callWakuSync { userData in + waku_lightpush_publish( + context, + self.defaultPubsubTopic, + jsonMessage, + WakuActor.syncCallback, + userData + ) + } + + if result.success { + print("[WakuActor] Published message") + } else { + print("[WakuActor] Publish error: \(result.result ?? 
"unknown")") + statusContinuation?.yield(.error("Failed to send message")) + } + } + + func resubscribe() async { + print("[WakuActor] Force resubscribe requested") + isSubscribed = false + isSubscribing = false + statusContinuation?.yield(.filterSubscriptionChanged(subscribed: false, failedAttempts: 0)) + _ = await subscribe() + } + + // MARK: - Private Methods + + private func initializeNode() async -> Bool { + let config = """ + { + "tcpPort": 60000, + "clusterId": 1, + "shards": [0], + "relay": false, + "lightpush": true, + "filter": true, + "logLevel": "DEBUG", + "discv5Discovery": true, + "discv5BootstrapNodes": [ + "enr:-QESuEB4Dchgjn7gfAvwB00CxTA-nGiyk-aALI-H4dYSZD3rUk7bZHmP8d2U6xDiQ2vZffpo45Jp7zKNdnwDUx6g4o6XAYJpZIJ2NIJpcIRA4VDAim11bHRpYWRkcnO4XAArNiZub2RlLTAxLmRvLWFtczMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwAtNiZub2RlLTAxLmRvLWFtczMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQOvD3S3jUNICsrOILlmhENiWAMmMVlAl6-Q8wRB7hidY4N0Y3CCdl-DdWRwgiMohXdha3UyDw", + "enr:-QEkuEBIkb8q8_mrorHndoXH9t5N6ZfD-jehQCrYeoJDPHqT0l0wyaONa2-piRQsi3oVKAzDShDVeoQhy0uwN1xbZfPZAYJpZIJ2NIJpcIQiQlleim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwA2Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQKnGt-GSgqPSf3IAPM7bFgTlpczpMZZLF3geeoNNsxzSoN0Y3CCdl-DdWRwgiMohXdha3UyDw" + ], + "discv5UdpPort": 9999, + "dnsDiscovery": true, + "dnsDiscoveryUrl": "enrtree://AOGYWMBYOUIMOENHXCHILPKY3ZRFEULMFI4DOM442QSZ73TT2A7VI@test.waku.nodes.status.im", + "dnsDiscoveryNameServers": ["8.8.8.8", "1.0.0.1"] + } + """ + + // Create node - waku_new is special, it returns the context directly + let createResult = await withCheckedContinuation { (continuation: CheckedContinuation<(ctx: UnsafeMutableRawPointer?, success: Bool, result: String?), Never>) in + let callbackCtx = CallbackContext() + let userDataPtr = Unmanaged.passRetained(callbackCtx).toOpaque() + + // Set up a 
simple callback for waku_new + let newCtx = waku_new(config, { ret, msg, len, userData in + guard let userData = userData else { return } + let context = Unmanaged.fromOpaque(userData).takeUnretainedValue() + context.success = (ret == RET_OK) + if let msg = msg { + context.result = String(cString: msg) + } + }, userDataPtr) + + // Small delay to ensure callback completes + DispatchQueue.global().asyncAfter(deadline: .now() + 0.1) { + Unmanaged.fromOpaque(userDataPtr).release() + continuation.resume(returning: (newCtx, callbackCtx.success, callbackCtx.result)) + } + } + + guard createResult.ctx != nil else { + statusContinuation?.yield(.statusChanged(.error)) + statusContinuation?.yield(.error("Failed to create node: \(createResult.result ?? "unknown")")) + return false + } + + ctx = createResult.ctx + + // Set event callback + waku_set_event_callback(ctx, WakuActor.eventCallback, nil) + + // Start node + let startResult = await callWakuSync { userData in + waku_start(self.ctx, WakuActor.syncCallback, userData) + } + + guard startResult.success else { + statusContinuation?.yield(.statusChanged(.error)) + statusContinuation?.yield(.error("Failed to start node: \(startResult.result ?? "unknown")")) + ctx = nil + return false + } + + print("[WakuActor] Node started") + return true + } + + private func connectToPeer() async -> Bool { + guard let context = ctx else { return false } + + print("[WakuActor] Connecting to static peer...") + + let result = await callWakuSync { userData in + waku_connect(context, self.staticPeer, 10000, WakuActor.syncCallback, userData) + } + + if result.success { + print("[WakuActor] Connected to peer successfully") + return true + } else { + print("[WakuActor] Failed to connect: \(result.result ?? "unknown")") + return false + } + } + + private func subscribe(contentTopic: String? 
= nil) async -> Bool { + guard let context = ctx else { return false } + guard !isSubscribed && !isSubscribing else { return isSubscribed } + + isSubscribing = true + let topic = contentTopic ?? defaultContentTopic + + let result = await callWakuSync { userData in + waku_filter_subscribe( + context, + self.defaultPubsubTopic, + topic, + WakuActor.syncCallback, + userData + ) + } + + isSubscribing = false + + if result.success { + print("[WakuActor] Subscribe request successful to \(topic)") + isSubscribed = true + statusContinuation?.yield(.filterSubscriptionChanged(subscribed: true, failedAttempts: 0)) + return true + } else { + print("[WakuActor] Subscribe error: \(result.result ?? "unknown")") + isSubscribed = false + return false + } + } + + private func pingFilterPeer() async -> Bool { + guard let context = ctx else { return false } + + let result = await callWakuSync { userData in + waku_ping_peer( + context, + self.staticPeer, + 10000, + WakuActor.syncCallback, + userData + ) + } + + return result.success + } + + // MARK: - Subscription Maintenance + + private func startMaintenanceLoop() { + guard maintenanceTask == nil else { + print("[WakuActor] Maintenance loop already running") + return + } + + statusContinuation?.yield(.maintenanceChanged(active: true)) + print("[WakuActor] Starting subscription maintenance loop") + + maintenanceTask = Task { [weak self] in + guard let self = self else { return } + + var failedSubscribes = 0 + var isFirstPingOnConnection = true + + while !Task.isCancelled { + guard await self.isRunning else { break } + + print("[WakuActor] Maintaining subscription...") + + let pingSuccess = await self.pingFilterPeer() + let currentlySubscribed = await self.isSubscribed + + if pingSuccess && currentlySubscribed { + print("[WakuActor] Subscription is live, waiting 30s") + try? 
await Task.sleep(nanoseconds: self.maintenanceIntervalSeconds) + continue + } + + if !isFirstPingOnConnection && !pingSuccess { + print("[WakuActor] Ping failed - subscription may be lost") + await self.statusContinuation?.yield(.filterSubscriptionChanged(subscribed: false, failedAttempts: failedSubscribes)) + } + isFirstPingOnConnection = false + + print("[WakuActor] No active subscription found. Sending subscribe request...") + + await self.resetSubscriptionState() + let subscribeSuccess = await self.subscribe() + + if subscribeSuccess { + print("[WakuActor] Subscribe request successful") + failedSubscribes = 0 + try? await Task.sleep(nanoseconds: self.maintenanceIntervalSeconds) + continue + } + + failedSubscribes += 1 + await self.statusContinuation?.yield(.filterSubscriptionChanged(subscribed: false, failedAttempts: failedSubscribes)) + print("[WakuActor] Subscribe request failed. Attempt \(failedSubscribes)/\(self.maxFailedSubscribes)") + + if failedSubscribes < self.maxFailedSubscribes { + print("[WakuActor] Retrying in 2s...") + try? await Task.sleep(nanoseconds: self.retryWaitSeconds) + } else { + print("[WakuActor] Max subscribe failures reached") + await self.statusContinuation?.yield(.error("Filter subscription failed after \(self.maxFailedSubscribes) attempts")) + failedSubscribes = 0 + try? await Task.sleep(nanoseconds: self.maintenanceIntervalSeconds) + } + } + + print("[WakuActor] Subscription maintenance loop stopped") + await self.statusContinuation?.yield(.maintenanceChanged(active: false)) + } + } + + private func resetSubscriptionState() { + isSubscribed = false + isSubscribing = false + } + + // MARK: - Event Handling + + private func handleEvent(_ eventJson: String) { + guard let data = eventJson.data(using: .utf8), + let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any], + let eventType = json["eventType"] as? 
String else { + return + } + + if eventType == "connection_change" { + handleConnectionChange(json) + } else if eventType == "message" { + handleMessage(json) + } + } + + private func handleConnectionChange(_ json: [String: Any]) { + guard let peerEvent = json["peerEvent"] as? String else { return } + + if peerEvent == "Joined" || peerEvent == "Identified" { + hasPeers = true + statusContinuation?.yield(.connectionChanged(isConnected: true)) + } else if peerEvent == "Left" { + statusContinuation?.yield(.filterSubscriptionChanged(subscribed: false, failedAttempts: 0)) + } + } + + private func handleMessage(_ json: [String: Any]) { + guard let messageHash = json["messageHash"] as? String, + let wakuMessage = json["wakuMessage"] as? [String: Any], + let payloadBase64 = wakuMessage["payload"] as? String, + let contentTopic = wakuMessage["contentTopic"] as? String, + let payloadData = Data(base64Encoded: payloadBase64), + let payloadString = String(data: payloadData, encoding: .utf8) else { + return + } + + // Deduplicate + guard !seenMessageHashes.contains(messageHash) else { + return + } + + seenMessageHashes.insert(messageHash) + + // Limit memory usage + if seenMessageHashes.count > maxSeenHashes { + seenMessageHashes.removeAll() + } + + let message = WakuMessage( + id: messageHash, + payload: payloadString, + contentTopic: contentTopic, + timestamp: Date() + ) + + messageContinuation?.yield(message) + } + + // MARK: - Helper for synchronous C calls + + private func callWakuSync(_ work: @escaping (UnsafeMutableRawPointer) -> Void) async -> (success: Bool, result: String?) 
{ + await withCheckedContinuation { continuation in + let context = CallbackContext() + context.continuation = continuation + let userDataPtr = Unmanaged.passRetained(context).toOpaque() + + work(userDataPtr) + + // Set a timeout to avoid hanging forever + DispatchQueue.global().asyncAfter(deadline: .now() + 15) { + // Try to resume with timeout - will be ignored if callback already resumed + let didTimeout = context.resumeOnce(returning: (false, "Timeout")) + if didTimeout { + print("[WakuActor] Call timed out") + } + Unmanaged.fromOpaque(userDataPtr).release() + } + } + } +} + +// MARK: - WakuNode (MainActor UI Wrapper) + +/// Main-thread UI wrapper that consumes updates from WakuActor via AsyncStreams +@MainActor +class WakuNode: ObservableObject { + + // MARK: - Published Properties (UI State) + + @Published var status: WakuNodeStatus = .stopped + @Published var receivedMessages: [WakuMessage] = [] + @Published var errorQueue: [TimestampedError] = [] + @Published var isConnected: Bool = false + @Published var filterSubscribed: Bool = false + @Published var subscriptionMaintenanceActive: Bool = false + @Published var failedSubscribeAttempts: Int = 0 + + // Topics (read-only access to actor's config) + var defaultPubsubTopic: String { "/waku/2/rs/1/0" } + var defaultContentTopic: String { "/waku-ios-example/1/chat/proto" } + + // MARK: - Private Properties + + private let actor = WakuActor() + private var messageTask: Task? + private var statusTask: Task? 
+ + // MARK: - Initialization + + init() {} + + deinit { + messageTask?.cancel() + statusTask?.cancel() + } + + // MARK: - Public API + + func start() { + guard status == .stopped || status == .error else { + print("[WakuNode] Already started or starting") + return + } + + // Create message stream + let messageStream = AsyncStream { continuation in + Task { + await self.actor.setMessageContinuation(continuation) + } + } + + // Create status stream + let statusStream = AsyncStream { continuation in + Task { + await self.actor.setStatusContinuation(continuation) + } + } + + // Start consuming messages + messageTask = Task { @MainActor in + for await message in messageStream { + self.receivedMessages.insert(message, at: 0) + if self.receivedMessages.count > 100 { + self.receivedMessages.removeLast() + } + } + } + + // Start consuming status updates + statusTask = Task { @MainActor in + for await update in statusStream { + self.handleStatusUpdate(update) + } + } + + // Start the actor + Task { + await actor.start() + } + } + + func stop() { + messageTask?.cancel() + messageTask = nil + statusTask?.cancel() + statusTask = nil + + Task { + await actor.stop() + } + + // Immediate UI update + status = .stopped + isConnected = false + filterSubscribed = false + subscriptionMaintenanceActive = false + failedSubscribeAttempts = 0 + } + + func publish(message: String, contentTopic: String? 
= nil) { + Task { + await actor.publish(message: message, contentTopic: contentTopic) + } + } + + func resubscribe() { + Task { + await actor.resubscribe() + } + } + + func dismissError(_ error: TimestampedError) { + errorQueue.removeAll { $0.id == error.id } + } + + func dismissAllErrors() { + errorQueue.removeAll() + } + + // MARK: - Private Methods + + private func handleStatusUpdate(_ update: WakuStatusUpdate) { + switch update { + case .statusChanged(let newStatus): + status = newStatus + + case .connectionChanged(let connected): + isConnected = connected + + case .filterSubscriptionChanged(let subscribed, let attempts): + filterSubscribed = subscribed + failedSubscribeAttempts = attempts + + case .maintenanceChanged(let active): + subscriptionMaintenanceActive = active + + case .error(let message): + let error = TimestampedError(message: message, timestamp: Date()) + errorQueue.append(error) + + // Schedule auto-dismiss after 10 seconds + let errorId = error.id + Task { @MainActor in + try? await Task.sleep(nanoseconds: 10_000_000_000) + self.errorQueue.removeAll { $0.id == errorId } + } + } + } +} diff --git a/examples/ios/WakuExample/libwaku.h b/examples/ios/WakuExample/libwaku.h new file mode 100644 index 000000000..b5d6c9bab --- /dev/null +++ b/examples/ios/WakuExample/libwaku.h @@ -0,0 +1,253 @@ + +// Generated manually and inspired by the one generated by the Nim Compiler. +// In order to see the header file generated by Nim just run `make libwaku` +// from the root repo folder and the header should be created in +// nimcache/release/libwaku/libwaku.h +#ifndef __libwaku__ +#define __libwaku__ + +#include +#include + +// The possible returned values for the functions that return int +#define RET_OK 0 +#define RET_ERR 1 +#define RET_MISSING_CALLBACK 2 + +#ifdef __cplusplus +extern "C" { +#endif + +typedef void (*WakuCallBack) (int callerRet, const char* msg, size_t len, void* userData); + +// Creates a new instance of the waku node. 
+// Sets up the waku node from the given configuration. +// Returns a pointer to the Context needed by the rest of the API functions. +void* waku_new( + const char* configJson, + WakuCallBack callback, + void* userData); + +int waku_start(void* ctx, + WakuCallBack callback, + void* userData); + +int waku_stop(void* ctx, + WakuCallBack callback, + void* userData); + +// Destroys an instance of a waku node created with waku_new +int waku_destroy(void* ctx, + WakuCallBack callback, + void* userData); + +int waku_version(void* ctx, + WakuCallBack callback, + void* userData); + +// Sets a callback that will be invoked whenever an event occurs. +// It is crucial that the passed callback is fast, non-blocking and potentially thread-safe. +void waku_set_event_callback(void* ctx, + WakuCallBack callback, + void* userData); + +int waku_content_topic(void* ctx, + const char* appName, + unsigned int appVersion, + const char* contentTopicName, + const char* encoding, + WakuCallBack callback, + void* userData); + +int waku_pubsub_topic(void* ctx, + const char* topicName, + WakuCallBack callback, + void* userData); + +int waku_default_pubsub_topic(void* ctx, + WakuCallBack callback, + void* userData); + +int waku_relay_publish(void* ctx, + const char* pubSubTopic, + const char* jsonWakuMessage, + unsigned int timeoutMs, + WakuCallBack callback, + void* userData); + +int waku_lightpush_publish(void* ctx, + const char* pubSubTopic, + const char* jsonWakuMessage, + WakuCallBack callback, + void* userData); + +int waku_relay_subscribe(void* ctx, + const char* pubSubTopic, + WakuCallBack callback, + void* userData); + +int waku_relay_add_protected_shard(void* ctx, + int clusterId, + int shardId, + char* publicKey, + WakuCallBack callback, + void* userData); + +int waku_relay_unsubscribe(void* ctx, + const char* pubSubTopic, + WakuCallBack callback, + void* userData); + +int waku_filter_subscribe(void* ctx, + const char* pubSubTopic, + const char* contentTopics, + WakuCallBack 
callback, + void* userData); + +int waku_filter_unsubscribe(void* ctx, + const char* pubSubTopic, + const char* contentTopics, + WakuCallBack callback, + void* userData); + +int waku_filter_unsubscribe_all(void* ctx, + WakuCallBack callback, + void* userData); + +int waku_relay_get_num_connected_peers(void* ctx, + const char* pubSubTopic, + WakuCallBack callback, + void* userData); + +int waku_relay_get_connected_peers(void* ctx, + const char* pubSubTopic, + WakuCallBack callback, + void* userData); + +int waku_relay_get_num_peers_in_mesh(void* ctx, + const char* pubSubTopic, + WakuCallBack callback, + void* userData); + +int waku_relay_get_peers_in_mesh(void* ctx, + const char* pubSubTopic, + WakuCallBack callback, + void* userData); + +int waku_store_query(void* ctx, + const char* jsonQuery, + const char* peerAddr, + int timeoutMs, + WakuCallBack callback, + void* userData); + +int waku_connect(void* ctx, + const char* peerMultiAddr, + unsigned int timeoutMs, + WakuCallBack callback, + void* userData); + +int waku_disconnect_peer_by_id(void* ctx, + const char* peerId, + WakuCallBack callback, + void* userData); + +int waku_disconnect_all_peers(void* ctx, + WakuCallBack callback, + void* userData); + +int waku_dial_peer(void* ctx, + const char* peerMultiAddr, + const char* protocol, + int timeoutMs, + WakuCallBack callback, + void* userData); + +int waku_dial_peer_by_id(void* ctx, + const char* peerId, + const char* protocol, + int timeoutMs, + WakuCallBack callback, + void* userData); + +int waku_get_peerids_from_peerstore(void* ctx, + WakuCallBack callback, + void* userData); + +int waku_get_connected_peers_info(void* ctx, + WakuCallBack callback, + void* userData); + +int waku_get_peerids_by_protocol(void* ctx, + const char* protocol, + WakuCallBack callback, + void* userData); + +int waku_listen_addresses(void* ctx, + WakuCallBack callback, + void* userData); + +int waku_get_connected_peers(void* ctx, + WakuCallBack callback, + void* userData); + +// Returns a 
list of multiaddress given a url to a DNS discoverable ENR tree +// Parameters +// char* entTreeUrl: URL containing a discoverable ENR tree +// char* nameDnsServer: The nameserver to resolve the ENR tree url. +// int timeoutMs: Timeout value in milliseconds to execute the call. +int waku_dns_discovery(void* ctx, + const char* entTreeUrl, + const char* nameDnsServer, + int timeoutMs, + WakuCallBack callback, + void* userData); + +// Updates the bootnode list used for discovering new peers via DiscoveryV5 +// bootnodes - JSON array containing the bootnode ENRs i.e. `["enr:...", "enr:..."]` +int waku_discv5_update_bootnodes(void* ctx, + char* bootnodes, + WakuCallBack callback, + void* userData); + +int waku_start_discv5(void* ctx, + WakuCallBack callback, + void* userData); + +int waku_stop_discv5(void* ctx, + WakuCallBack callback, + void* userData); + +// Retrieves the ENR information +int waku_get_my_enr(void* ctx, + WakuCallBack callback, + void* userData); + +int waku_get_my_peerid(void* ctx, + WakuCallBack callback, + void* userData); + +int waku_get_metrics(void* ctx, + WakuCallBack callback, + void* userData); + +int waku_peer_exchange_request(void* ctx, + int numPeers, + WakuCallBack callback, + void* userData); + +int waku_ping_peer(void* ctx, + const char* peerAddr, + int timeoutMs, + WakuCallBack callback, + void* userData); + +int waku_is_online(void* ctx, + WakuCallBack callback, + void* userData); + +#ifdef __cplusplus +} +#endif + +#endif /* __libwaku__ */ diff --git a/examples/ios/project.yml b/examples/ios/project.yml new file mode 100644 index 000000000..9519e8b9e --- /dev/null +++ b/examples/ios/project.yml @@ -0,0 +1,47 @@ +name: WakuExample +options: + bundleIdPrefix: org.waku + deploymentTarget: + iOS: "14.0" + xcodeVersion: "15.0" + +settings: + SWIFT_VERSION: "5.0" + SUPPORTED_PLATFORMS: "iphoneos iphonesimulator" + SUPPORTS_MACCATALYST: "NO" + +targets: + WakuExample: + type: application + platform: iOS + supportedDestinations: [iOS] + 
sources: + - WakuExample + settings: + INFOPLIST_FILE: WakuExample/Info.plist + PRODUCT_BUNDLE_IDENTIFIER: org.waku.example + SWIFT_OBJC_BRIDGING_HEADER: WakuExample/WakuExample-Bridging-Header.h + HEADER_SEARCH_PATHS: + - "$(PROJECT_DIR)/WakuExample" + "LIBRARY_SEARCH_PATHS[sdk=iphoneos*]": + - "$(PROJECT_DIR)/../../build/ios/iphoneos-arm64" + "LIBRARY_SEARCH_PATHS[sdk=iphonesimulator*]": + - "$(PROJECT_DIR)/../../build/ios/iphonesimulator-arm64" + OTHER_LDFLAGS: + - "-lc++" + - "-lwaku" + IPHONEOS_DEPLOYMENT_TARGET: "14.0" + info: + path: WakuExample/Info.plist + properties: + CFBundleName: WakuExample + CFBundleDisplayName: Waku Example + CFBundleIdentifier: org.waku.example + CFBundleVersion: "1" + CFBundleShortVersionString: "1.0" + UILaunchScreen: {} + UISupportedInterfaceOrientations: + - UIInterfaceOrientationPortrait + NSAppTransportSecurity: + NSAllowsArbitraryLoads: true + diff --git a/library/ios_bearssl_stubs.c b/library/ios_bearssl_stubs.c new file mode 100644 index 000000000..a028cdf25 --- /dev/null +++ b/library/ios_bearssl_stubs.c @@ -0,0 +1,32 @@ +/** + * iOS stubs for BearSSL tools functions not normally included in the library. + * These are typically from the BearSSL tools/ directory which is for CLI tools. 
+ */
+
+#include <stddef.h>
+
+/* x509_noanchor context - simplified stub */
+typedef struct {
+    void *vtable;
+    void *inner;
+} x509_noanchor_context;
+
+/* Stub for x509_noanchor_init - used to skip anchor validation */
+void x509_noanchor_init(x509_noanchor_context *xwc, const void **inner) {
+    if (xwc && inner) {
+        xwc->inner = (void*)*inner;
+        xwc->vtable = NULL;
+    }
+}
+
+/* TAs (Trust Anchors) - empty array stub */
+/* This is typically defined by applications with their CA certificates */
+typedef struct {
+    void *dn;
+    size_t dn_len;
+    unsigned flags;
+    void *pkey;
+} br_x509_trust_anchor;
+
+const br_x509_trust_anchor TAs[1] = {{0}};
+const size_t TAs_NUM = 0;
diff --git a/library/ios_natpmp_stubs.c b/library/ios_natpmp_stubs.c
new file mode 100644
index 000000000..ef635db10
--- /dev/null
+++ b/library/ios_natpmp_stubs.c
@@ -0,0 +1,14 @@
+/**
+ * iOS stub for getgateway.c functions.
+ * iOS doesn't have net/route.h, so we provide a stub that returns failure.
+ * NAT-PMP functionality won't work but the library will link.
+ */
+
+#include <sys/types.h>
+#include <netinet/in.h>
+
+/* getdefaultgateway - returns -1 (failure) on iOS */
+int getdefaultgateway(in_addr_t *addr) {
+    (void)addr; /* unused */
+    return -1; /* failure - not supported on iOS */
+}
diff --git a/waku.nimble b/waku.nimble
index 7bfdfab12..5c5c09763 100644
--- a/waku.nimble
+++ b/waku.nimble
@@ -213,3 +213,182 @@ task libWakuAndroid, "Build the mobile bindings for Android":
   let srcDir = "./library"
   let extraParams = "-d:chronicles_log_level=ERROR"
   buildMobileAndroid srcDir, extraParams
+
+### Mobile iOS
+import std/sequtils
+
+proc buildMobileIOS(srcDir = ".", params = "") =
+  echo "Building iOS libwaku library"
+
+  let iosArch = getEnv("IOS_ARCH")
+  let iosSdk = getEnv("IOS_SDK")
+  let sdkPath = getEnv("IOS_SDK_PATH")
+
+  if sdkPath.len == 0:
+    quit "Error: IOS_SDK_PATH not set.
Set it to the path of the iOS SDK" + + # Use SDK name in path to differentiate device vs simulator + let outDir = "build/ios/" & iosSdk & "-" & iosArch + if not dirExists outDir: + mkDir outDir + + var extra_params = params + for i in 2 ..< paramCount(): + extra_params &= " " & paramStr(i) + + let cpu = if iosArch == "arm64": "arm64" else: "amd64" + + # The output static library + let nimcacheDir = outDir & "/nimcache" + let objDir = outDir & "/obj" + let vendorObjDir = outDir & "/vendor_obj" + let aFile = outDir & "/libwaku.a" + + if not dirExists objDir: + mkDir objDir + if not dirExists vendorObjDir: + mkDir vendorObjDir + + let clangBase = "clang -arch " & iosArch & " -isysroot " & sdkPath & + " -mios-version-min=18.0 -fembed-bitcode -fPIC -O2" + + # Generate C sources from Nim (no linking) + exec "nim c" & + " --nimcache:" & nimcacheDir & + " --os:ios --cpu:" & cpu & + " --compileOnly:on" & + " --noMain --mm:refc" & + " --threads:on --opt:size --header" & + " -d:metrics -d:discv5_protocol_id=d5waku" & + " --nimMainPrefix:libwaku --skipParentCfg:on" & + " --cc:clang" & + " " & extra_params & + " " & srcDir & "/libwaku.nim" + + # Compile vendor C libraries for iOS + + # --- BearSSL --- + echo "Compiling BearSSL for iOS..." + let bearSslSrcDir = "./vendor/nim-bearssl/bearssl/csources/src" + let bearSslIncDir = "./vendor/nim-bearssl/bearssl/csources/inc" + for path in walkDirRec(bearSslSrcDir): + if path.endsWith(".c"): + let relPath = path.replace(bearSslSrcDir & "/", "").replace("/", "_") + let baseName = relPath.changeFileExt("o") + let oFile = vendorObjDir / ("bearssl_" & baseName) + if not fileExists(oFile): + exec clangBase & " -I" & bearSslIncDir & " -I" & bearSslSrcDir & " -c " & path & " -o " & oFile + + # --- secp256k1 --- + echo "Compiling secp256k1 for iOS..." 
+ let secp256k1Dir = "./vendor/nim-secp256k1/vendor/secp256k1" + let secp256k1Flags = " -I" & secp256k1Dir & "/include" & + " -I" & secp256k1Dir & "/src" & + " -I" & secp256k1Dir & + " -DENABLE_MODULE_RECOVERY=1" & + " -DENABLE_MODULE_ECDH=1" & + " -DECMULT_WINDOW_SIZE=15" & + " -DECMULT_GEN_PREC_BITS=4" + + # Main secp256k1 source + let secp256k1Obj = vendorObjDir / "secp256k1.o" + if not fileExists(secp256k1Obj): + exec clangBase & secp256k1Flags & " -c " & secp256k1Dir & "/src/secp256k1.c -o " & secp256k1Obj + + # Precomputed tables (required for ecmult operations) + let secp256k1PreEcmultObj = vendorObjDir / "secp256k1_precomputed_ecmult.o" + if not fileExists(secp256k1PreEcmultObj): + exec clangBase & secp256k1Flags & " -c " & secp256k1Dir & "/src/precomputed_ecmult.c -o " & secp256k1PreEcmultObj + + let secp256k1PreEcmultGenObj = vendorObjDir / "secp256k1_precomputed_ecmult_gen.o" + if not fileExists(secp256k1PreEcmultGenObj): + exec clangBase & secp256k1Flags & " -c " & secp256k1Dir & "/src/precomputed_ecmult_gen.c -o " & secp256k1PreEcmultGenObj + + # --- miniupnpc --- + echo "Compiling miniupnpc for iOS..." 
+ let miniupnpcSrcDir = "./vendor/nim-nat-traversal/vendor/miniupnp/miniupnpc/src" + let miniupnpcIncDir = "./vendor/nim-nat-traversal/vendor/miniupnp/miniupnpc/include" + let miniupnpcBuildDir = "./vendor/nim-nat-traversal/vendor/miniupnp/miniupnpc/build" + let miniupnpcFiles = @[ + "addr_is_reserved.c", "connecthostport.c", "igd_desc_parse.c", + "minisoap.c", "minissdpc.c", "miniupnpc.c", "miniwget.c", + "minixml.c", "portlistingparse.c", "receivedata.c", "upnpcommands.c", + "upnpdev.c", "upnperrors.c", "upnpreplyparse.c" + ] + for fileName in miniupnpcFiles: + let srcPath = miniupnpcSrcDir / fileName + let oFile = vendorObjDir / ("miniupnpc_" & fileName.changeFileExt("o")) + if fileExists(srcPath) and not fileExists(oFile): + exec clangBase & + " -I" & miniupnpcIncDir & + " -I" & miniupnpcSrcDir & + " -I" & miniupnpcBuildDir & + " -DMINIUPNPC_SET_SOCKET_TIMEOUT" & + " -D_BSD_SOURCE -D_DEFAULT_SOURCE" & + " -c " & srcPath & " -o " & oFile + + # --- libnatpmp --- + echo "Compiling libnatpmp for iOS..." + let natpmpSrcDir = "./vendor/nim-nat-traversal/vendor/libnatpmp-upstream" + # Only compile natpmp.c - getgateway.c uses net/route.h which is not available on iOS + let natpmpObj = vendorObjDir / "natpmp_natpmp.o" + if not fileExists(natpmpObj): + exec clangBase & + " -I" & natpmpSrcDir & + " -DENABLE_STRNATPMPERR" & + " -c " & natpmpSrcDir & "/natpmp.c -o " & natpmpObj + + # Use iOS-specific stub for getgateway + let getgatewayStubSrc = "./library/ios_natpmp_stubs.c" + let getgatewayStubObj = vendorObjDir / "natpmp_getgateway_stub.o" + if fileExists(getgatewayStubSrc) and not fileExists(getgatewayStubObj): + exec clangBase & " -c " & getgatewayStubSrc & " -o " & getgatewayStubObj + + # --- BearSSL stubs (for tools functions not in main library) --- + echo "Compiling BearSSL stubs for iOS..." 
+ let bearSslStubsSrc = "./library/ios_bearssl_stubs.c" + let bearSslStubsObj = vendorObjDir / "bearssl_stubs.o" + if fileExists(bearSslStubsSrc) and not fileExists(bearSslStubsObj): + exec clangBase & " -c " & bearSslStubsSrc & " -o " & bearSslStubsObj + + # Compile all Nim-generated C files to object files + echo "Compiling Nim-generated C files for iOS..." + var cFiles: seq[string] = @[] + for kind, path in walkDir(nimcacheDir): + if kind == pcFile and path.endsWith(".c"): + cFiles.add(path) + + for cFile in cFiles: + let baseName = extractFilename(cFile).changeFileExt("o") + let oFile = objDir / baseName + exec clangBase & + " -DENABLE_STRNATPMPERR" & + " -I./vendor/nimbus-build-system/vendor/Nim/lib/" & + " -I./vendor/nim-bearssl/bearssl/csources/inc/" & + " -I./vendor/nim-bearssl/bearssl/csources/tools/" & + " -I./vendor/nim-bearssl/bearssl/abi/" & + " -I./vendor/nim-secp256k1/vendor/secp256k1/include/" & + " -I./vendor/nim-nat-traversal/vendor/miniupnp/miniupnpc/include/" & + " -I./vendor/nim-nat-traversal/vendor/libnatpmp-upstream/" & + " -I" & nimcacheDir & + " -c " & cFile & + " -o " & oFile + + # Create static library from all object files + echo "Creating static library..." 
+  var objFiles: seq[string] = @[]
+  for kind, path in walkDir(objDir):
+    if kind == pcFile and path.endsWith(".o"):
+      objFiles.add(path)
+  for kind, path in walkDir(vendorObjDir):
+    if kind == pcFile and path.endsWith(".o"):
+      objFiles.add(path)
+
+  exec "libtool -static -o " & aFile & " " & objFiles.join(" ")
+
+  echo "✔ iOS library created: " & aFile
+
+task libWakuIOS, "Build the mobile bindings for iOS":
+  let srcDir = "./library"
+  let extraParams = "-d:chronicles_log_level=ERROR"
+  buildMobileIOS srcDir, extraParams

From a865ff72c86a17bdafad6779391d4d8994d7c0dc Mon Sep 17 00:00:00 2001
From: Ivan Folgueira Bande
Date: Mon, 29 Dec 2025 23:04:24 +0100
Subject: [PATCH 31/70] small refactor README to start using Logos Messaging Nim term

---
 README.md | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/README.md b/README.md
index ce352d6f5..c64479738 100644
--- a/README.md
+++ b/README.md
@@ -1,19 +1,21 @@
-# Nwaku
+# Logos Messaging Nim
 
 ## Introduction
 
-The nwaku repository implements Waku, and provides tools related to it.
+The logos-messaging-nim, a.k.a. lmn or nwaku, repository implements a set of libp2p protocols aimed at providing
+private communications.
 
-- A Nim implementation of the [Waku (v2) protocol](https://specs.vac.dev/specs/waku/v2/waku-v2.html).
-- CLI application `wakunode2` that allows you to run a Waku node.
-- Examples of Waku usage.
+- Nim implementation of [these specs](https://github.com/vacp2p/rfc-index/tree/main/waku).
+- C library that exposes the implemented protocols.
+- CLI application that allows you to run an lmn node.
+- Examples.
 - Various tests of above.
 
 For more details see the [source code](waku/README.md)
 
 ## How to Build & Run ( Linux, MacOS & WSL )
 
-These instructions are generic. For more detailed instructions, see the Waku source code above.
+These instructions are generic. For more detailed instructions, see the source code above.
### Prerequisites From a865ff72c86a17bdafad6779391d4d8994d7c0dc Mon Sep 17 00:00:00 2001 From: Sasha <118575614+weboko@users.noreply.github.com> Date: Tue, 6 Jan 2026 10:19:37 +0100 Subject: [PATCH 32/70] update js-waku repo reference (#3684) --- .github/ISSUE_TEMPLATE/prepare_beta_release.md | 2 +- .github/ISSUE_TEMPLATE/prepare_full_release.md | 2 +- .github/workflows/ci.yml | 4 ++-- .github/workflows/pre-release.yml | 4 ++-- 4 files changed, 6 insertions(+), 6 deletions(-) diff --git a/.github/ISSUE_TEMPLATE/prepare_beta_release.md b/.github/ISSUE_TEMPLATE/prepare_beta_release.md index 9afaefbd1..383d9018c 100644 --- a/.github/ISSUE_TEMPLATE/prepare_beta_release.md +++ b/.github/ISSUE_TEMPLATE/prepare_beta_release.md @@ -22,7 +22,7 @@ All items below are to be completed by the owner of the given release. - [ ] Generate and edit release notes in CHANGELOG.md. - [ ] **Waku test and fleets validation** - - [ ] Ensure all the unit tests (specifically js-waku tests) are green against the release candidate. + - [ ] Ensure all the unit tests (specifically logos-messaging-js tests) are green against the release candidate. - [ ] Deploy the release candidate to `waku.test` only through [deploy-waku-test job](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-test/) and wait for it to finish (Jenkins access required; ask the infra team if you don't have it). - After completion, disable [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to master. - Verify the deployed version at https://fleets.waku.org/. diff --git a/.github/ISSUE_TEMPLATE/prepare_full_release.md b/.github/ISSUE_TEMPLATE/prepare_full_release.md index 314146f60..d7458a8e3 100644 --- a/.github/ISSUE_TEMPLATE/prepare_full_release.md +++ b/.github/ISSUE_TEMPLATE/prepare_full_release.md @@ -24,7 +24,7 @@ All items below are to be completed by the owner of the given release. 
- [ ] **Validation of release candidate** - [ ] **Automated testing** - - [ ] Ensure all the unit tests (specifically js-waku tests) are green against the release candidate. + - [ ] Ensure all the unit tests (specifically logos-messaging-js tests) are green against the release candidate. - [ ] Ask Vac-QA and Vac-DST to perform the available tests against the release candidate. - [ ] Vac-DST (an additional report is needed; see [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f)) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 2b12a5109..da8383e43 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -151,14 +151,14 @@ jobs: js-waku-node: needs: build-docker-image - uses: logos-messaging/js-waku/.github/workflows/test-node.yml@master + uses: logos-messaging/logos-messaging-js/.github/workflows/test-node.yml@master with: nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }} test_type: node js-waku-node-optional: needs: build-docker-image - uses: logos-messaging/js-waku/.github/workflows/test-node.yml@master + uses: logos-messaging/logos-messaging-js/.github/workflows/test-node.yml@master with: nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }} test_type: node-optional diff --git a/.github/workflows/pre-release.yml b/.github/workflows/pre-release.yml index 380ec755f..faded198b 100644 --- a/.github/workflows/pre-release.yml +++ b/.github/workflows/pre-release.yml @@ -98,7 +98,7 @@ jobs: js-waku-node: needs: build-docker-image - uses: logos-messaging/js-waku/.github/workflows/test-node.yml@master + uses: logos-messaging/logos-messaging-js/.github/workflows/test-node.yml@master with: nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }} test_type: node @@ -106,7 +106,7 @@ jobs: js-waku-node-optional: needs: build-docker-image - uses: logos-messaging/js-waku/.github/workflows/test-node.yml@master + uses: 
logos-messaging/logos-messaging-js/.github/workflows/test-node.yml@master with: nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }} test_type: node-optional From a4e44dbe05347197ba367f967aa99078814a80fc Mon Sep 17 00:00:00 2001 From: Tanya S <120410716+stubbsta@users.noreply.github.com> Date: Tue, 6 Jan 2026 11:35:16 +0200 Subject: [PATCH 33/70] chore: Update anvil config (#3662) * Use anvil config disable-min-priority-fee to prevent gas price doubling * remove gas limit set in utils->deployContract --- tests/waku_rln_relay/utils.nim | 1 - tests/waku_rln_relay/utils_onchain.nim | 10 ++++++++-- 2 files changed, 8 insertions(+), 3 deletions(-) diff --git a/tests/waku_rln_relay/utils.nim b/tests/waku_rln_relay/utils.nim index a4247ab44..8aed18f9b 100644 --- a/tests/waku_rln_relay/utils.nim +++ b/tests/waku_rln_relay/utils.nim @@ -24,7 +24,6 @@ proc deployContract*( tr.`from` = Opt.some(web3.defaultAccount) let sData = code & contractInput tr.data = Opt.some(hexToSeqByte(sData)) - tr.gas = Opt.some(Quantity(3000000000000)) if gasPrice != 0: tr.gasPrice = Opt.some(gasPrice.Quantity) diff --git a/tests/waku_rln_relay/utils_onchain.nim b/tests/waku_rln_relay/utils_onchain.nim index d8bb13a62..9f1048097 100644 --- a/tests/waku_rln_relay/utils_onchain.nim +++ b/tests/waku_rln_relay/utils_onchain.nim @@ -529,6 +529,7 @@ proc runAnvil*( # --chain-id Chain ID of the network. 
# --load-state Initialize the chain from a previously saved state snapshot (read-only) # --dump-state Dump the state on exit to the given file (write-only) + # Values used are representative of Linea Sepolia testnet # See anvil documentation https://book.getfoundry.sh/reference/anvil/ for more details try: let anvilPath = getAnvilPath() @@ -539,11 +540,16 @@ proc runAnvil*( "--port", $port, "--gas-limit", - "300000000000000", + "30000000", + "--gas-price", + "7", + "--base-fee", + "7", "--balance", - "1000000000", + "10000000000", "--chain-id", $chainId, + "--disable-min-priority-fee", ] # Add state file argument if provided From 284a0816ccd1d7709f6e36acf37e8ad583502f6c Mon Sep 17 00:00:00 2001 From: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com> Date: Wed, 7 Jan 2026 17:48:19 +0100 Subject: [PATCH 34/70] chore: use chronos' TokenBucket (#3670) * Adapt using chronos' TokenBucket. Removed TokenBucket and test. bump nim-chronos -> nim-libp2p/nim-lsquic/nim-jwt -> adapt to latest libp2p changes * Fix libp2p/utility reports unlisted exception can occure from close of socket in waitForService - -d:ssl compile flag caused it * Adapt request_limiter to new chronos' TokenBucket replenish algorithm to keep original intent of use * Fix filter dos protection test * Fix peer manager tests due change caused by new libp2p * Adjust store test rate limit to eliminate CI test flakyness of timing * Adjust store test rate limit to eliminate CI test flakyness of timing - lightpush/legacy_lightpush/filter * Rework filter dos protection test to avoid CI crazy timing causing flakyness in test results compared to local runs * Rework lightpush dos protection test to avoid CI crazy timing causing flakyness in test results compared to local runs * Rework lightpush and legacy lightpush rate limit tests to eliminate timing effect in CI that cause longer awaits thus result in minting new tokens unlike local runs --- .gitmodules | 6 + tests/common/test_all.nim | 1 - 
tests/common/test_tokenbucket.nim | 69 ------- tests/test_peer_manager.nim | 5 + .../test_waku_filter_dos_protection.nim | 98 +++++++--- tests/waku_lightpush/test_ratelimit.nim | 66 +++---- .../waku_lightpush_legacy/test_ratelimit.nim | 64 +++--- tests/waku_store/test_wakunode_store.nim | 2 +- vendor/nim-chronos | 2 +- vendor/nim-jwt | 1 + vendor/nim-libp2p | 2 +- vendor/nim-lsquic | 1 + waku.nimble | 8 +- waku/common/rate_limit/per_peer_limiter.nim | 2 +- waku/common/rate_limit/request_limiter.nim | 78 ++++++-- .../rate_limit/single_token_limiter.nim | 16 +- waku/common/rate_limit/token_bucket.nim | 182 ------------------ waku/factory/builder.nim | 1 + waku/factory/waku.nim | 1 - waku/node/peer_manager/peer_manager.nim | 7 +- waku/waku_rendezvous/protocol.nim | 1 - 21 files changed, 241 insertions(+), 372 deletions(-) delete mode 100644 tests/common/test_tokenbucket.nim create mode 160000 vendor/nim-jwt create mode 160000 vendor/nim-lsquic delete mode 100644 waku/common/rate_limit/token_bucket.nim diff --git a/.gitmodules b/.gitmodules index 4d56c4333..6a63491e3 100644 --- a/.gitmodules +++ b/.gitmodules @@ -184,6 +184,12 @@ url = https://github.com/logos-messaging/waku-rlnv2-contract.git ignore = untracked branch = master +[submodule "vendor/nim-lsquic"] + path = vendor/nim-lsquic + url = https://github.com/vacp2p/nim-lsquic +[submodule "vendor/nim-jwt"] + path = vendor/nim-jwt + url = https://github.com/vacp2p/nim-jwt.git [submodule "vendor/nim-ffi"] path = vendor/nim-ffi url = https://github.com/logos-messaging/nim-ffi/ diff --git a/tests/common/test_all.nim b/tests/common/test_all.nim index 7495c7c9e..d597a7424 100644 --- a/tests/common/test_all.nim +++ b/tests/common/test_all.nim @@ -6,7 +6,6 @@ import ./test_protobuf_validation, ./test_sqlite_migrations, ./test_parse_size, - ./test_tokenbucket, ./test_requestratelimiter, ./test_ratelimit_setting, ./test_timed_map, diff --git a/tests/common/test_tokenbucket.nim b/tests/common/test_tokenbucket.nim deleted file 
mode 100644 index 5bc1a0583..000000000 --- a/tests/common/test_tokenbucket.nim +++ /dev/null @@ -1,69 +0,0 @@ -# Chronos Test Suite -# (c) Copyright 2022-Present -# Status Research & Development GmbH -# -# Licensed under either of -# Apache License, version 2.0, (LICENSE-APACHEv2) -# MIT license (LICENSE-MIT) - -{.used.} - -import testutils/unittests -import chronos -import ../../waku/common/rate_limit/token_bucket - -suite "Token Bucket": - test "TokenBucket Sync test - strict": - var bucket = TokenBucket.newStrict(1000, 1.milliseconds) - let - start = Moment.now() - fullTime = start + 1.milliseconds - check: - bucket.tryConsume(800, start) == true - bucket.tryConsume(200, start) == true - # Out of budget - bucket.tryConsume(100, start) == false - bucket.tryConsume(800, fullTime) == true - bucket.tryConsume(200, fullTime) == true - # Out of budget - bucket.tryConsume(100, fullTime) == false - - test "TokenBucket Sync test - compensating": - var bucket = TokenBucket.new(1000, 1.milliseconds) - let - start = Moment.now() - fullTime = start + 1.milliseconds - check: - bucket.tryConsume(800, start) == true - bucket.tryConsume(200, start) == true - # Out of budget - bucket.tryConsume(100, start) == false - bucket.tryConsume(800, fullTime) == true - bucket.tryConsume(200, fullTime) == true - # Due not using the bucket for a full period the compensation will satisfy this request - bucket.tryConsume(100, fullTime) == true - - test "TokenBucket Max compensation": - var bucket = TokenBucket.new(1000, 1.minutes) - var reqTime = Moment.now() - - check bucket.tryConsume(1000, reqTime) - check bucket.tryConsume(1, reqTime) == false - reqTime += 1.minutes - check bucket.tryConsume(500, reqTime) == true - reqTime += 1.minutes - check bucket.tryConsume(1000, reqTime) == true - reqTime += 10.seconds - # max compensation is 25% so try to consume 250 more - check bucket.tryConsume(250, reqTime) == true - reqTime += 49.seconds - # out of budget within the same period - check 
bucket.tryConsume(1, reqTime) == false - - test "TokenBucket Short replenish": - var bucket = TokenBucket.new(15000, 1.milliseconds) - let start = Moment.now() - check bucket.tryConsume(15000, start) - check bucket.tryConsume(1, start) == false - - check bucket.tryConsume(15000, start + 1.milliseconds) == true diff --git a/tests/test_peer_manager.nim b/tests/test_peer_manager.nim index 1369f3f88..97df39582 100644 --- a/tests/test_peer_manager.nim +++ b/tests/test_peer_manager.nim @@ -997,6 +997,7 @@ procSuite "Peer Manager": .build(), maxFailedAttempts = 1, storage = nil, + maxConnections = 20, ) # Create 30 peers and add them to the peerstore @@ -1063,6 +1064,7 @@ procSuite "Peer Manager": backoffFactor = 2, maxFailedAttempts = 10, storage = nil, + maxConnections = 20, ) var p1: PeerId require p1.init("QmeuZJbXrszW2jdT7GdduSjQskPU3S7vvGWKtKgDfkDvW" & "1") @@ -1116,6 +1118,7 @@ procSuite "Peer Manager": .build(), maxFailedAttempts = 150, storage = nil, + maxConnections = 20, ) # Should result in backoff > 1 week @@ -1131,6 +1134,7 @@ procSuite "Peer Manager": .build(), maxFailedAttempts = 10, storage = nil, + maxConnections = 20, ) let pm = PeerManager.new( @@ -1144,6 +1148,7 @@ procSuite "Peer Manager": .build(), maxFailedAttempts = 5, storage = nil, + maxConnections = 20, ) asyncTest "colocationLimit is enforced by pruneConnsByIp()": diff --git a/tests/waku_filter_v2/test_waku_filter_dos_protection.nim b/tests/waku_filter_v2/test_waku_filter_dos_protection.nim index 7c8c640ba..fd3d8c837 100644 --- a/tests/waku_filter_v2/test_waku_filter_dos_protection.nim +++ b/tests/waku_filter_v2/test_waku_filter_dos_protection.nim @@ -122,24 +122,51 @@ suite "Waku Filter - DOS protection": check client2.subscribe(serverRemotePeerInfo, pubsubTopic, contentTopicSeq) == none(FilterSubscribeErrorKind) - await sleepAsync(20.milliseconds) - check client1.subscribe(serverRemotePeerInfo, pubsubTopic, contentTopicSeq) == - none(FilterSubscribeErrorKind) + # Avoid using tiny sleeps to 
control refill behavior: CI scheduling can + # oversleep and mint additional tokens. Instead, issue a small burst of + # subscribe requests and require at least one TOO_MANY_REQUESTS. + var c1SubscribeFutures = newSeq[Future[FilterSubscribeResult]]() + for i in 0 ..< 6: + c1SubscribeFutures.add( + client1.wakuFilterClient.subscribe( + serverRemotePeerInfo, pubsubTopic, contentTopicSeq + ) + ) + + let c1Finished = await allFinished(c1SubscribeFutures) + var c1GotTooMany = false + for fut in c1Finished: + check not fut.failed() + let res = fut.read() + if res.isErr() and res.error().kind == FilterSubscribeErrorKind.TOO_MANY_REQUESTS: + c1GotTooMany = true + break + check c1GotTooMany + + # Ensure the other client is not affected by client1's rate limit. check client2.subscribe(serverRemotePeerInfo, pubsubTopic, contentTopicSeq) == none(FilterSubscribeErrorKind) - await sleepAsync(20.milliseconds) - check client1.subscribe(serverRemotePeerInfo, pubsubTopic, contentTopicSeq) == - none(FilterSubscribeErrorKind) - await sleepAsync(20.milliseconds) - check client1.subscribe(serverRemotePeerInfo, pubsubTopic, contentTopicSeq) == - some(FilterSubscribeErrorKind.TOO_MANY_REQUESTS) - check client2.subscribe(serverRemotePeerInfo, pubsubTopic, contentTopicSeq) == - none(FilterSubscribeErrorKind) - check client2.subscribe(serverRemotePeerInfo, pubsubTopic, contentTopicSeq) == - some(FilterSubscribeErrorKind.TOO_MANY_REQUESTS) + + var c2SubscribeFutures = newSeq[Future[FilterSubscribeResult]]() + for i in 0 ..< 6: + c2SubscribeFutures.add( + client2.wakuFilterClient.subscribe( + serverRemotePeerInfo, pubsubTopic, contentTopicSeq + ) + ) + + let c2Finished = await allFinished(c2SubscribeFutures) + var c2GotTooMany = false + for fut in c2Finished: + check not fut.failed() + let res = fut.read() + if res.isErr() and res.error().kind == FilterSubscribeErrorKind.TOO_MANY_REQUESTS: + c2GotTooMany = true + break + check c2GotTooMany # ensure period of time has passed and clients can 
again use the service - await sleepAsync(1000.milliseconds) + await sleepAsync(1100.milliseconds) check client1.subscribe(serverRemotePeerInfo, pubsubTopic, contentTopicSeq) == none(FilterSubscribeErrorKind) check client2.subscribe(serverRemotePeerInfo, pubsubTopic, contentTopicSeq) == @@ -147,29 +174,54 @@ suite "Waku Filter - DOS protection": asyncTest "Ensure normal usage allowed": # Given + # Rate limit setting is (3 requests / 1000ms) per peer. + # In a token-bucket model this means: + # - capacity = 3 tokens + # - refill rate = 3 tokens / second => ~1 token every ~333ms + # - each request consumes 1 token (including UNSUBSCRIBE) check client1.subscribe(serverRemotePeerInfo, pubsubTopic, contentTopicSeq) == none(FilterSubscribeErrorKind) check wakuFilter.subscriptions.isSubscribed(client1.clientPeerId) - await sleepAsync(500.milliseconds) - check client1.ping(serverRemotePeerInfo) == none(FilterSubscribeErrorKind) - check wakuFilter.subscriptions.isSubscribed(client1.clientPeerId) + # Expected remaining tokens (approx): 2 await sleepAsync(500.milliseconds) check client1.ping(serverRemotePeerInfo) == none(FilterSubscribeErrorKind) check wakuFilter.subscriptions.isSubscribed(client1.clientPeerId) - await sleepAsync(50.milliseconds) + # After ~500ms, ~1 token refilled; PING consumes 1 => expected remaining: 2 + + await sleepAsync(500.milliseconds) + check client1.ping(serverRemotePeerInfo) == none(FilterSubscribeErrorKind) + check wakuFilter.subscriptions.isSubscribed(client1.clientPeerId) + + # After another ~500ms, ~1 token refilled; PING consumes 1 => expected remaining: 2 + check client1.unsubscribe(serverRemotePeerInfo, pubsubTopic, contentTopicSeq) == none(FilterSubscribeErrorKind) check wakuFilter.subscriptions.isSubscribed(client1.clientPeerId) == false - await sleepAsync(50.milliseconds) check client1.ping(serverRemotePeerInfo) == some(FilterSubscribeErrorKind.NOT_FOUND) - check client1.ping(serverRemotePeerInfo) == 
some(FilterSubscribeErrorKind.NOT_FOUND) - await sleepAsync(50.milliseconds) - check client1.ping(serverRemotePeerInfo) == - some(FilterSubscribeErrorKind.TOO_MANY_REQUESTS) + # After unsubscribing, PING is expected to return NOT_FOUND while still + # counting towards the rate limit. + + # CI can oversleep / schedule slowly, which can mint extra tokens between + # requests. To make the test robust, issue a small burst of pings and + # require at least one TOO_MANY_REQUESTS response. + var pingFutures = newSeq[Future[FilterSubscribeResult]]() + for i in 0 ..< 9: + pingFutures.add(client1.wakuFilterClient.ping(serverRemotePeerInfo)) + + let finished = await allFinished(pingFutures) + var gotTooMany = false + for fut in finished: + check not fut.failed() + let pingRes = fut.read() + if pingRes.isErr() and pingRes.error().kind == FilterSubscribeErrorKind.TOO_MANY_REQUESTS: + gotTooMany = true + break + + check gotTooMany check client2.subscribe(serverRemotePeerInfo, pubsubTopic, contentTopicSeq) == none(FilterSubscribeErrorKind) diff --git a/tests/waku_lightpush/test_ratelimit.nim b/tests/waku_lightpush/test_ratelimit.nim index 7420a4e56..bdab3f074 100644 --- a/tests/waku_lightpush/test_ratelimit.nim +++ b/tests/waku_lightpush/test_ratelimit.nim @@ -80,11 +80,12 @@ suite "Rate limited push service": await allFutures(serverSwitch.start(), clientSwitch.start()) ## Given - var handlerFuture = newFuture[(string, WakuMessage)]() + # Don't rely on per-request timing assumptions or a single shared Future. + # CI can be slow enough that sequential requests accidentally refill tokens. + # Instead we issue a small burst and assert we observe at least one rejection. 
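The burst-and-scan pattern above relies on a simple token-bucket property: if you fire N requests at effectively the same instant against a bucket of capacity C, with N > C, at least N − C of them must be rejected no matter how much the scheduler jitters between test steps. A language-agnostic sketch of that guarantee in Python (illustrative only — this is not the chronos `TokenBucket`, and all names are hypothetical):

```python
import time

class TokenBucket:
    """Minimal continuous-refill token bucket (illustrative sketch,
    not the chronos implementation)."""

    def __init__(self, capacity, fill_duration_s):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.rate = capacity / fill_duration_s  # tokens per second
        self.last = time.monotonic()

    def try_consume(self, n=1, now=None):
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

# Burst pattern: fire more requests than the bucket can hold, all
# stamped with the same timestamp, so no refill happens mid-burst.
bucket = TokenBucket(capacity=3, fill_duration_s=1.0)
t0 = time.monotonic()
results = [bucket.try_consume(1, now=t0) for _ in range(6)]

# A same-instant burst of 6 against capacity 3 must see rejections,
# regardless of how long the scheduler slept before or after the burst.
assert any(r is False for r in results)
assert results.count(True) == 3
```

This is exactly why the rewritten tests assert "at least one TOO_MANY_REQUESTS in the burst" instead of "request #4 specifically fails": the former holds under any CI oversleep, the latter only under precise timing.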
let handler = proc( peer: PeerId, pubsubTopic: PubsubTopic, message: WakuMessage ): Future[WakuLightPushResult] {.async.} = - handlerFuture.complete((pubsubTopic, message)) return lightpushSuccessResult(1) let @@ -93,45 +94,38 @@ suite "Rate limited push service": client = newTestWakuLightpushClient(clientSwitch) let serverPeerId = serverSwitch.peerInfo.toRemotePeerInfo() - let topic = DefaultPubsubTopic + let tokenPeriod = 500.millis - let successProc = proc(): Future[void] {.async.} = + # Fire a burst of requests; require at least one success and one rejection. + var publishFutures = newSeq[Future[WakuLightPushResult]]() + for i in 0 ..< 10: let message = fakeWakuMessage() - handlerFuture = newFuture[(string, WakuMessage)]() - let requestRes = - await client.publish(some(DefaultPubsubTopic), message, serverPeerId) - discard await handlerFuture.withTimeout(10.millis) + publishFutures.add( + client.publish(some(DefaultPubsubTopic), message, serverPeerId) + ) - check: - requestRes.isOk() - handlerFuture.finished() - let (handledMessagePubsubTopic, handledMessage) = handlerFuture.read() - check: - handledMessagePubsubTopic == DefaultPubsubTopic - handledMessage == message + let finished = await allFinished(publishFutures) + var gotOk = false + var gotTooMany = false + for fut in finished: + check not fut.failed() + let res = fut.read() + if res.isOk(): + gotOk = true + else: + check res.error.code == LightPushErrorCode.TOO_MANY_REQUESTS + check res.error.desc == some(TooManyRequestsMessage) + gotTooMany = true - let rejectProc = proc(): Future[void] {.async.} = - let message = fakeWakuMessage() - handlerFuture = newFuture[(string, WakuMessage)]() - let requestRes = - await client.publish(some(DefaultPubsubTopic), message, serverPeerId) - discard await handlerFuture.withTimeout(10.millis) + check gotOk + check gotTooMany - check: - requestRes.isErr() - requestRes.error.code == LightPushErrorCode.TOO_MANY_REQUESTS - requestRes.error.desc == some(TooManyRequestsMessage) 
- - for testCnt in 0 .. 2: - await successProc() - await sleepAsync(20.millis) - - await rejectProc() - - await sleepAsync(500.millis) - - ## next one shall succeed due to the rate limit time window has passed - await successProc() + # ensure period of time has passed and the client can again use the service + await sleepAsync(tokenPeriod + 100.millis) + let recoveryRes = await client.publish( + some(DefaultPubsubTopic), fakeWakuMessage(), serverPeerId + ) + check recoveryRes.isOk() ## Cleanup await allFutures(clientSwitch.stop(), serverSwitch.stop()) diff --git a/tests/waku_lightpush_legacy/test_ratelimit.nim b/tests/waku_lightpush_legacy/test_ratelimit.nim index 3df8d369d..37c43a066 100644 --- a/tests/waku_lightpush_legacy/test_ratelimit.nim +++ b/tests/waku_lightpush_legacy/test_ratelimit.nim @@ -86,58 +86,52 @@ suite "Rate limited push service": await allFutures(serverSwitch.start(), clientSwitch.start()) ## Given - var handlerFuture = newFuture[(string, WakuMessage)]() let handler = proc( peer: PeerId, pubsubTopic: PubsubTopic, message: WakuMessage ): Future[WakuLightPushResult[void]] {.async.} = - handlerFuture.complete((pubsubTopic, message)) return ok() let + tokenPeriod = 500.millis server = await newTestWakuLegacyLightpushNode( - serverSwitch, handler, some((3, 500.millis)) + serverSwitch, handler, some((3, tokenPeriod)) ) client = newTestWakuLegacyLightpushClient(clientSwitch) let serverPeerId = serverSwitch.peerInfo.toRemotePeerInfo() - let topic = DefaultPubsubTopic - let successProc = proc(): Future[void] {.async.} = - let message = fakeWakuMessage() - handlerFuture = newFuture[(string, WakuMessage)]() - let requestRes = - await client.publish(DefaultPubsubTopic, message, peer = serverPeerId) - discard await handlerFuture.withTimeout(10.millis) + # Avoid assuming the exact Nth request will be rejected. With Chronos TokenBucket + # minting semantics and real network latency, CI timing can allow refills. 
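The legacy test's recovery step (`sleepAsync(tokenPeriod + 100.millis)`) leans on a second token-bucket property: once the bucket is drained, waiting one full refill period guarantees it is full again, so the next request must succeed. A logical-time sketch in Python of that arithmetic, using the test's (3 requests / 500 ms) setting (illustrative only, not the chronos code):

```python
# Token-bucket recovery arithmetic: 3 requests per 500 ms, as in the test.
capacity, period = 3, 0.5
rate = capacity / period  # 6 tokens per second

tokens, t = float(capacity), 0.0

def try_consume(now):
    global tokens, t
    # Continuous refill proportional to elapsed logical time.
    tokens = min(capacity, tokens + (now - t) * rate)
    t = now
    if tokens >= 1:
        tokens -= 1
        return True
    return False

# Drain the bucket with a same-instant burst at t=0: only 3 succeed.
burst = [try_consume(0.0) for _ in range(10)]
assert burst.count(True) == 3

# Recovery: after tokenPeriod plus a 100 ms safety margin, the bucket is
# fully refilled, so the post-cooldown publish is guaranteed to succeed.
assert try_consume(0.5 + 0.1)
```

The 100 ms margin mirrors the test's slack against boundary effects: sleeping exactly one period would leave the success assertion sensitive to rounding at the refill edge.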
+ # Instead, send a short burst and require that we observe at least one rejection. + let burstSize = 10 + var publishFutures: seq[Future[WakuLightPushResult[string]]] = @[] + for _ in 0 ..< burstSize: + publishFutures.add( + client.publish(DefaultPubsubTopic, fakeWakuMessage(), peer = serverPeerId) + ) - check: - requestRes.isOk() - handlerFuture.finished() - let (handledMessagePubsubTopic, handledMessage) = handlerFuture.read() - check: - handledMessagePubsubTopic == DefaultPubsubTopic - handledMessage == message + let finished = await allFinished(publishFutures) + var gotOk = false + var gotTooMany = false + for fut in finished: + check not fut.failed() + let res = fut.read() + if res.isOk(): + gotOk = true + elif res.error == "TOO_MANY_REQUESTS": + gotTooMany = true - let rejectProc = proc(): Future[void] {.async.} = - let message = fakeWakuMessage() - handlerFuture = newFuture[(string, WakuMessage)]() - let requestRes = - await client.publish(DefaultPubsubTopic, message, peer = serverPeerId) - discard await handlerFuture.withTimeout(10.millis) + check: + gotOk + gotTooMany - check: - requestRes.isErr() - requestRes.error == "TOO_MANY_REQUESTS" - - for testCnt in 0 .. 
2: - await successProc() - await sleepAsync(20.millis) - - await rejectProc() - - await sleepAsync(500.millis) + await sleepAsync(tokenPeriod + 100.millis) ## next one shall succeed due to the rate limit time window has passed - await successProc() + let afterCooldownRes = + await client.publish(DefaultPubsubTopic, fakeWakuMessage(), peer = serverPeerId) + check: + afterCooldownRes.isOk() ## Cleanup await allFutures(clientSwitch.stop(), serverSwitch.stop()) diff --git a/tests/waku_store/test_wakunode_store.nim b/tests/waku_store/test_wakunode_store.nim index b20309079..7d1a44ecc 100644 --- a/tests/waku_store/test_wakunode_store.nim +++ b/tests/waku_store/test_wakunode_store.nim @@ -413,7 +413,7 @@ procSuite "WakuNode - Store": for count in 0 ..< 3: waitFor successProc() - waitFor sleepAsync(20.millis) + waitFor sleepAsync(5.millis) waitFor failsProc() diff --git a/vendor/nim-chronos b/vendor/nim-chronos index 0646c444f..85af4db76 160000 --- a/vendor/nim-chronos +++ b/vendor/nim-chronos @@ -1 +1 @@ -Subproject commit 0646c444fce7c7ed08ef6f2c9a7abfd172ffe655 +Subproject commit 85af4db764ecd3573c4704139560df3943216cf1 diff --git a/vendor/nim-jwt b/vendor/nim-jwt new file mode 160000 index 000000000..18f8378de --- /dev/null +++ b/vendor/nim-jwt @@ -0,0 +1 @@ +Subproject commit 18f8378de52b241f321c1f9ea905456e89b95c6f diff --git a/vendor/nim-libp2p b/vendor/nim-libp2p index e82080f7b..eb7e6ff89 160000 --- a/vendor/nim-libp2p +++ b/vendor/nim-libp2p @@ -1 +1 @@ -Subproject commit e82080f7b1aa61c6d35fa5311b873f41eff4bb52 +Subproject commit eb7e6ff89889e41b57515f891ba82986c54809fb diff --git a/vendor/nim-lsquic b/vendor/nim-lsquic new file mode 160000 index 000000000..f3fe33462 --- /dev/null +++ b/vendor/nim-lsquic @@ -0,0 +1 @@ +Subproject commit f3fe33462601ea34eb2e8e9c357c92e61f8d121b diff --git a/waku.nimble b/waku.nimble index 5c5c09763..afc0ad634 100644 --- a/waku.nimble +++ b/waku.nimble @@ -31,6 +31,8 @@ requires "nim >= 2.2.4", "results", "db_connector", 
"minilru", + "lsquic", + "jwt", "ffi" ### Helper functions @@ -148,7 +150,8 @@ task chat2, "Build example Waku chat usage": let name = "chat2" buildBinary name, "apps/chat2/", - "-d:chronicles_sinks=textlines[file] -d:ssl -d:chronicles_log_level='TRACE' " + "-d:chronicles_sinks=textlines[file] -d:chronicles_log_level='TRACE' " + # -d:ssl - cause unlisted exception error in libp2p/utility... task chat2mix, "Build example Waku chat mix usage": # NOTE For debugging, set debug level. For chat usage we want minimal log @@ -158,7 +161,8 @@ task chat2mix, "Build example Waku chat mix usage": let name = "chat2mix" buildBinary name, "apps/chat2mix/", - "-d:chronicles_sinks=textlines[file] -d:ssl -d:chronicles_log_level='TRACE' " + "-d:chronicles_sinks=textlines[file] -d:chronicles_log_level='TRACE' " + # -d:ssl - cause unlisted exception error in libp2p/utility... task chat2bridge, "Build chat2bridge": let name = "chat2bridge" diff --git a/waku/common/rate_limit/per_peer_limiter.nim b/waku/common/rate_limit/per_peer_limiter.nim index 5cb96a2d1..16b6bf065 100644 --- a/waku/common/rate_limit/per_peer_limiter.nim +++ b/waku/common/rate_limit/per_peer_limiter.nim @@ -20,7 +20,7 @@ proc mgetOrPut( perPeerRateLimiter: var PerPeerRateLimiter, peerId: PeerId ): var Option[TokenBucket] = return perPeerRateLimiter.peerBucket.mgetOrPut( - peerId, newTokenBucket(perPeerRateLimiter.setting, ReplenishMode.Compensating) + peerId, newTokenBucket(perPeerRateLimiter.setting, ReplenishMode.Continuous) ) template checkUsageLimit*( diff --git a/waku/common/rate_limit/request_limiter.nim b/waku/common/rate_limit/request_limiter.nim index 0ede20be4..bc318e151 100644 --- a/waku/common/rate_limit/request_limiter.nim +++ b/waku/common/rate_limit/request_limiter.nim @@ -39,38 +39,82 @@ const SECONDS_RATIO = 3 const MINUTES_RATIO = 2 type RequestRateLimiter* = ref object of RootObj - tokenBucket: Option[TokenBucket] + tokenBucket: TokenBucket setting*: Option[RateLimitSetting] + mainBucketSetting: 
RateLimitSetting + ratio: int peerBucketSetting*: RateLimitSetting peerUsage: TimedMap[PeerId, TokenBucket] + checkUsageImpl: proc( + t: var RequestRateLimiter, proto: string, conn: Connection, now: Moment + ): bool {.gcsafe, raises: [].} + +proc newMainTokenBucket( + setting: RateLimitSetting, ratio: int, startTime: Moment +): TokenBucket = + ## RequestRateLimiter's global bucket should keep the *rate* of the configured + ## setting while allowing a larger burst window. We achieve this by scaling + ## both capacity and fillDuration by the same ratio. + ## + ## This matches previous behavior where unused tokens could effectively + ## accumulate across multiple periods. + let burstCapacity = setting.volume * ratio + var bucket = TokenBucket.new( + capacity = burstCapacity, + fillDuration = setting.period * ratio, + startTime = startTime, + mode = Continuous, + ) + + # Start with the configured volume (not the burst capacity) so that the + # initial burst behavior matches the raw setting, while still allowing + # accumulation up to `burstCapacity` over time. + let excess = burstCapacity - setting.volume + if excess > 0: + discard bucket.tryConsume(excess, startTime) + + return bucket proc mgetOrPut( - requestRateLimiter: var RequestRateLimiter, peerId: PeerId + requestRateLimiter: var RequestRateLimiter, peerId: PeerId, now: Moment ): var TokenBucket = - let bucketForNew = newTokenBucket(some(requestRateLimiter.peerBucketSetting)).valueOr: + let bucketForNew = newTokenBucket( + some(requestRateLimiter.peerBucketSetting), Discrete, now + ).valueOr: raiseAssert "This branch is not allowed to be reached as it will not be called if the setting is None." 
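The `newMainTokenBucket` construction above does two things that are easy to miss: scaling both `capacity` and `fillDuration` by the same `ratio` leaves the long-run token *rate* unchanged while widening the burst window, and pre-consuming the excess makes the *initial* budget equal the raw configured volume. The arithmetic, sketched in Python with hypothetical example values:

```python
# Hypothetical setting: 10 requests per second, with ratio = 3.
volume, period_s, ratio = 10, 1.0, 3

# Scale capacity and fill duration together: the refill rate is preserved.
burst_capacity = volume * ratio        # 30 tokens may accumulate
fill_duration = period_s * ratio       # refilled over 3 seconds
rate = burst_capacity / fill_duration  # still 10 tokens per second
assert rate == volume / period_s

# The bucket is created full (burst_capacity tokens); consuming the excess
# up front makes the initial budget match the raw setting, while unused
# tokens can still accumulate back up to burst_capacity over time.
excess = burst_capacity - volume
initial_budget = burst_capacity - excess
assert initial_budget == volume
```

So the first burst a fresh limiter allows is exactly `setting.volume`, matching the raw configuration, yet a quiet peer can later bank up to `ratio` periods' worth of unused budget — the behavior the comment describes as tokens accumulating across multiple periods.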
return requestRateLimiter.peerUsage.mgetOrPut(peerId, bucketForNew) -proc checkUsage*( - t: var RequestRateLimiter, proto: string, conn: Connection, now = Moment.now() -): bool {.raises: [].} = - if t.tokenBucket.isNone(): - return true +proc checkUsageUnlimited( + t: var RequestRateLimiter, proto: string, conn: Connection, now: Moment +): bool {.gcsafe, raises: [].} = + true - let peerBucket = t.mgetOrPut(conn.peerId) +proc checkUsageLimited( + t: var RequestRateLimiter, proto: string, conn: Connection, now: Moment +): bool {.gcsafe, raises: [].} = + # Lazy-init the main bucket using the first observed request time. This makes + # refill behavior deterministic under tests where `now` is controlled. + if isNil(t.tokenBucket): + t.tokenBucket = newMainTokenBucket(t.mainBucketSetting, t.ratio, now) + + let peerBucket = t.mgetOrPut(conn.peerId, now) ## check requesting peer's usage is not over the calculated ratio and let that peer go which not requested much/or this time... if not peerBucket.tryConsume(1, now): trace "peer usage limit reached", peer = conn.peerId return false # Ok if the peer can consume, check the overall budget we have left - let tokenBucket = t.tokenBucket.get() - if not tokenBucket.tryConsume(1, now): + if not t.tokenBucket.tryConsume(1, now): return false return true +proc checkUsage*( + t: var RequestRateLimiter, proto: string, conn: Connection, now = Moment.now() +): bool {.raises: [].} = + t.checkUsageImpl(t, proto, conn, now) + template checkUsageLimit*( t: var RequestRateLimiter, proto: string, @@ -135,9 +179,19 @@ func calcPeerTokenSetting( proc newRequestRateLimiter*(setting: Option[RateLimitSetting]): RequestRateLimiter = let ratio = calcPeriodRatio(setting) + let isLimited = setting.isSome() and not setting.get().isUnlimited() + let mainBucketSetting = + if isLimited: + setting.get() + else: + (0, 0.minutes) + return RequestRateLimiter( - tokenBucket: newTokenBucket(setting), + tokenBucket: nil, setting: setting, + mainBucketSetting: 
mainBucketSetting, + ratio: ratio, peerBucketSetting: calcPeerTokenSetting(setting, ratio), peerUsage: init(TimedMap[PeerId, TokenBucket], calcCacheTimeout(setting, ratio)), + checkUsageImpl: (if isLimited: checkUsageLimited else: checkUsageUnlimited), ) diff --git a/waku/common/rate_limit/single_token_limiter.nim b/waku/common/rate_limit/single_token_limiter.nim index 50fb2d64c..fc4b0acd5 100644 --- a/waku/common/rate_limit/single_token_limiter.nim +++ b/waku/common/rate_limit/single_token_limiter.nim @@ -6,12 +6,15 @@ import std/[options], chronos/timer, libp2p/stream/connection, libp2p/utility import std/times except TimeInterval, Duration -import ./[token_bucket, setting, service_metrics] +import chronos/ratelimit as token_bucket + +import ./[setting, service_metrics] export token_bucket, setting, service_metrics proc newTokenBucket*( setting: Option[RateLimitSetting], - replenishMode: ReplenishMode = ReplenishMode.Compensating, + replenishMode: static[ReplenishMode] = ReplenishMode.Continuous, + startTime: Moment = Moment.now(), ): Option[TokenBucket] = if setting.isNone(): return none[TokenBucket]() @@ -19,7 +22,14 @@ proc newTokenBucket*( if setting.get().isUnlimited(): return none[TokenBucket]() - return some(TokenBucket.new(setting.get().volume, setting.get().period)) + return some( + TokenBucket.new( + capacity = setting.get().volume, + fillDuration = setting.get().period, + startTime = startTime, + mode = replenishMode, + ) + ) proc checkUsage( t: var TokenBucket, proto: string, now = Moment.now() diff --git a/waku/common/rate_limit/token_bucket.nim b/waku/common/rate_limit/token_bucket.nim deleted file mode 100644 index 799817ebd..000000000 --- a/waku/common/rate_limit/token_bucket.nim +++ /dev/null @@ -1,182 +0,0 @@ -{.push raises: [].} - -import chronos, std/math, std/options - -const BUDGET_COMPENSATION_LIMIT_PERCENT = 0.25 - -## This is an extract from chronos/rate_limit.nim due to the found bug in the original implementation. 
-## Unfortunately that bug cannot be solved without harm the original features of TokenBucket class. -## So, this current shortcut is used to enable move ahead with nwaku rate limiter implementation. -## ref: https://github.com/status-im/nim-chronos/issues/500 -## -## This version of TokenBucket is different from the original one in chronos/rate_limit.nim in many ways: -## - It has a new mode called `Compensating` which is the default mode. -## Compensation is calculated as the not used bucket capacity in the last measured period(s) in average. -## or up until maximum the allowed compansation treshold (Currently it is const 25%). -## Also compensation takes care of the proper time period calculation to avoid non-usage periods that can lead to -## overcompensation. -## - Strict mode is also available which will only replenish when time period is over but also will fill -## the bucket to the max capacity. - -type - ReplenishMode* = enum - Strict - Compensating - - TokenBucket* = ref object - budget: int ## Current number of tokens in the bucket - budgetCap: int ## Bucket capacity - lastTimeFull: Moment - ## This timer measures the proper periodizaiton of the bucket refilling - fillDuration: Duration ## Refill period - case replenishMode*: ReplenishMode - of Strict: - ## In strict mode, the bucket is refilled only till the budgetCap - discard - of Compensating: - ## This is the default mode. - maxCompensation: float - -func periodDistance(bucket: TokenBucket, currentTime: Moment): float = - ## notice fillDuration cannot be zero by design - ## period distance is a float number representing the calculated period time - ## since the last time bucket was refilled. 
- return - nanoseconds(currentTime - bucket.lastTimeFull).float / - nanoseconds(bucket.fillDuration).float - -func getUsageAverageSince(bucket: TokenBucket, distance: float): float = - if distance == 0.float: - ## in case there is zero time difference than the usage percentage is 100% - return 1.0 - - ## budgetCap can never be zero - ## usage average is calculated as a percentage of total capacity available over - ## the measured period - return bucket.budget.float / bucket.budgetCap.float / distance - -proc calcCompensation(bucket: TokenBucket, averageUsage: float): int = - # if we already fully used or even overused the tokens, there is no place for compensation - if averageUsage >= 1.0: - return 0 - - ## compensation is the not used bucket capacity in the last measured period(s) in average. - ## or maximum the allowed compansation treshold - let compensationPercent = - min((1.0 - averageUsage) * bucket.budgetCap.float, bucket.maxCompensation) - return trunc(compensationPercent).int - -func periodElapsed(bucket: TokenBucket, currentTime: Moment): bool = - return currentTime - bucket.lastTimeFull >= bucket.fillDuration - -## Update will take place if bucket is empty and trying to consume tokens. -## It checks if the bucket can be replenished as refill duration is passed or not. 
-## - strict mode: -proc updateStrict(bucket: TokenBucket, currentTime: Moment) = - if bucket.fillDuration == default(Duration): - bucket.budget = min(bucket.budgetCap, bucket.budget) - return - - if not periodElapsed(bucket, currentTime): - return - - bucket.budget = bucket.budgetCap - bucket.lastTimeFull = currentTime - -## - compensating - ballancing load: -## - between updates we calculate average load (current bucket capacity / number of periods till last update) -## - gives the percentage load used recently -## - with this we can replenish bucket up to 100% + calculated leftover from previous period (caped with max treshold) -proc updateWithCompensation(bucket: TokenBucket, currentTime: Moment) = - if bucket.fillDuration == default(Duration): - bucket.budget = min(bucket.budgetCap, bucket.budget) - return - - # do not replenish within the same period - if not periodElapsed(bucket, currentTime): - return - - let distance = bucket.periodDistance(currentTime) - let recentAvgUsage = bucket.getUsageAverageSince(distance) - let compensation = bucket.calcCompensation(recentAvgUsage) - - bucket.budget = bucket.budgetCap + compensation - bucket.lastTimeFull = currentTime - -proc update(bucket: TokenBucket, currentTime: Moment) = - if bucket.replenishMode == ReplenishMode.Compensating: - updateWithCompensation(bucket, currentTime) - else: - updateStrict(bucket, currentTime) - -proc tryConsume*(bucket: TokenBucket, tokens: int, now = Moment.now()): bool = - ## If `tokens` are available, consume them, - ## Otherwhise, return false. 
- - if bucket.budget >= bucket.budgetCap: - bucket.lastTimeFull = now - - if bucket.budget >= tokens: - bucket.budget -= tokens - return true - - bucket.update(now) - - if bucket.budget >= tokens: - bucket.budget -= tokens - return true - else: - return false - -proc replenish*(bucket: TokenBucket, tokens: int, now = Moment.now()) = - ## Add `tokens` to the budget (capped to the bucket capacity) - bucket.budget += tokens - bucket.update(now) - -proc new*( - T: type[TokenBucket], - budgetCap: int, - fillDuration: Duration = 1.seconds, - mode: ReplenishMode = ReplenishMode.Compensating, -): T = - assert not isZero(fillDuration) - assert budgetCap != 0 - - ## Create different mode TokenBucket - case mode - of ReplenishMode.Strict: - return T( - budget: budgetCap, - budgetCap: budgetCap, - fillDuration: fillDuration, - lastTimeFull: Moment.now(), - replenishMode: mode, - ) - of ReplenishMode.Compensating: - T( - budget: budgetCap, - budgetCap: budgetCap, - fillDuration: fillDuration, - lastTimeFull: Moment.now(), - replenishMode: mode, - maxCompensation: budgetCap.float * BUDGET_COMPENSATION_LIMIT_PERCENT, - ) - -proc newStrict*(T: type[TokenBucket], capacity: int, period: Duration): TokenBucket = - T.new(capacity, period, ReplenishMode.Strict) - -proc newCompensating*( - T: type[TokenBucket], capacity: int, period: Duration -): TokenBucket = - T.new(capacity, period, ReplenishMode.Compensating) - -func `$`*(b: TokenBucket): string {.inline.} = - if isNil(b): - return "nil" - return $b.budgetCap & "/" & $b.fillDuration - -func `$`*(ob: Option[TokenBucket]): string {.inline.} = - if ob.isNone(): - return "no-limit" - - return $ob.get() diff --git a/waku/factory/builder.nim b/waku/factory/builder.nim index 772cfbffd..f379f92bb 100644 --- a/waku/factory/builder.nim +++ b/waku/factory/builder.nim @@ -209,6 +209,7 @@ proc build*(builder: WakuNodeBuilder): Result[WakuNode, string] = maxServicePeers = some(builder.maxServicePeers), colocationLimit = builder.colocationLimit, 
shardedPeerManagement = builder.shardAware, + maxConnections = builder.switchMaxConnections.get(builders.MaxConnections), ) var node: WakuNode diff --git a/waku/factory/waku.nim b/waku/factory/waku.nim index c0380ccc9..d55206f97 100644 --- a/waku/factory/waku.nim +++ b/waku/factory/waku.nim @@ -13,7 +13,6 @@ import libp2p/services/autorelayservice, libp2p/services/hpservice, libp2p/peerid, - libp2p/discovery/rendezvousinterface, eth/keys, eth/p2p/discoveryv5/enr, presto, diff --git a/waku/node/peer_manager/peer_manager.nim b/waku/node/peer_manager/peer_manager.nim index 1abcc1ac0..487d3894d 100644 --- a/waku/node/peer_manager/peer_manager.nim +++ b/waku/node/peer_manager/peer_manager.nim @@ -103,6 +103,7 @@ type PeerManager* = ref object of RootObj onConnectionChange*: ConnectionChangeHandler online: bool ## state managed by online_monitor module getShards: GetShards + maxConnections: int #~~~~~~~~~~~~~~~~~~~# # Helper Functions # @@ -748,7 +749,6 @@ proc logAndMetrics(pm: PeerManager) {.async.} = var peerStore = pm.switch.peerStore # log metrics let (inRelayPeers, outRelayPeers) = pm.connectedPeers(WakuRelayCodec) - let maxConnections = pm.switch.connManager.inSema.size let notConnectedPeers = peerStore.getDisconnectedPeers().mapIt(RemotePeerInfo.init(it.peerId, it.addrs)) let outsideBackoffPeers = notConnectedPeers.filterIt(pm.canBeConnected(it.peerId)) @@ -758,7 +758,7 @@ proc logAndMetrics(pm: PeerManager) {.async.} = info "Relay peer connections", inRelayConns = $inRelayPeers.len & "/" & $pm.inRelayPeersTarget, outRelayConns = $outRelayPeers.len & "/" & $pm.outRelayPeersTarget, - totalConnections = $totalConnections & "/" & $maxConnections, + totalConnections = $totalConnections & "/" & $pm.maxConnections, notConnectedPeers = notConnectedPeers.len, outsideBackoffPeers = outsideBackoffPeers.len @@ -1048,9 +1048,9 @@ proc new*( maxFailedAttempts = MaxFailedAttempts, colocationLimit = DefaultColocationLimit, shardedPeerManagement = false, + maxConnections: int = 
MaxConnections, ): PeerManager {.gcsafe.} = let capacity = switch.peerStore.capacity - let maxConnections = switch.connManager.inSema.size if maxConnections > capacity: error "Max number of connections can't be greater than PeerManager capacity", capacity = capacity, maxConnections = maxConnections @@ -1099,6 +1099,7 @@ proc new*( colocationLimit: colocationLimit, shardedPeerManagement: shardedPeerManagement, online: true, + maxConnections: maxConnections, ) proc peerHook( diff --git a/waku/waku_rendezvous/protocol.nim b/waku/waku_rendezvous/protocol.nim index 7b97375ff..00b5f1a5c 100644 --- a/waku/waku_rendezvous/protocol.nim +++ b/waku/waku_rendezvous/protocol.nim @@ -8,7 +8,6 @@ import stew/byteutils, libp2p/protocols/rendezvous, libp2p/protocols/rendezvous/protobuf, - libp2p/discovery/discoverymngr, libp2p/utils/semaphore, libp2p/utils/offsettedseq, libp2p/crypto/curve25519, From c27405b19c62bd06ccf5a322590fc55ffa172ea3 Mon Sep 17 00:00:00 2001 From: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com> Date: Mon, 12 Jan 2026 15:55:25 +0100 Subject: [PATCH 35/70] chore: context aware brokers (#3674) * Refactor RequestBroker to support context aware use - introduction of BrokerContext * Context aware extension for EventBroker, EventBoker support for native or external types * Enhance MultiRequestBroker - similar to RequestBroker and EventBroker - with support for native and external types and context aware execution. 
* Move duplicated and common code into broker_utils from event- request- and multi_request_brokers * Change BrokerContext from random number to counter * Apply suggestion from @Ivansete-status Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> * Adjust naming in broker tests * Follow up adjustment from send_api use --------- Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> --- tests/common/test_event_broker.nim | 76 +++ tests/common/test_multi_request_broker.nim | 111 +++- tests/common/test_request_broker.nim | 163 ++++++ waku/common/broker/broker_context.nim | 68 +++ waku/common/broker/event_broker.nim | 346 +++++++----- waku/common/broker/helper/broker_utils.nim | 163 ++++++ waku/common/broker/multi_request_broker.nim | 562 +++++++++++++------- waku/common/broker/request_broker.nim | 451 +++++++++++----- 8 files changed, 1477 insertions(+), 463 deletions(-) create mode 100644 waku/common/broker/broker_context.nim diff --git a/tests/common/test_event_broker.nim b/tests/common/test_event_broker.nim index cead1277f..bcd081f4f 100644 --- a/tests/common/test_event_broker.nim +++ b/tests/common/test_event_broker.nim @@ -4,6 +4,15 @@ import testutils/unittests import waku/common/broker/event_broker +type ExternalDefinedEventType = object + label*: string + +EventBroker: + type IntEvent = int + +EventBroker: + type ExternalAliasEvent = distinct ExternalDefinedEventType + EventBroker: type SampleEvent = object value*: int @@ -123,3 +132,70 @@ suite "EventBroker": check counter == 21 # 1+2+3 + 4+5+6 RefEvent.dropAllListeners() + + test "supports BrokerContext-scoped listeners": + SampleEvent.dropAllListeners() + + let ctxA = NewBrokerContext() + let ctxB = NewBrokerContext() + + var seenA: seq[int] = @[] + var seenB: seq[int] = @[] + + discard SampleEvent.listen( + ctxA, + proc(evt: SampleEvent): Future[void] {.async: (raises: []).} = + seenA.add(evt.value), + ) + + discard SampleEvent.listen( + ctxB, + proc(evt: 
SampleEvent): Future[void] {.async: (raises: []).} = + seenB.add(evt.value), + ) + + SampleEvent.emit(ctxA, SampleEvent(value: 1, label: "a")) + SampleEvent.emit(ctxB, SampleEvent(value: 2, label: "b")) + waitForListeners() + + check seenA == @[1] + check seenB == @[2] + + SampleEvent.dropAllListeners(ctxA) + SampleEvent.emit(ctxA, SampleEvent(value: 3, label: "a2")) + SampleEvent.emit(ctxB, SampleEvent(value: 4, label: "b2")) + waitForListeners() + + check seenA == @[1] + check seenB == @[2, 4] + + SampleEvent.dropAllListeners(ctxB) + + test "supports non-object event types (auto-distinct)": + var seen: seq[int] = @[] + + discard IntEvent.listen( + proc(evt: IntEvent): Future[void] {.async: (raises: []).} = + seen.add(int(evt)) + ) + + IntEvent.emit(IntEvent(42)) + waitForListeners() + + check seen == @[42] + IntEvent.dropAllListeners() + + test "supports externally-defined type aliases (auto-distinct)": + var seen: seq[string] = @[] + + discard ExternalAliasEvent.listen( + proc(evt: ExternalAliasEvent): Future[void] {.async: (raises: []).} = + let base = ExternalDefinedEventType(evt) + seen.add(base.label) + ) + + ExternalAliasEvent.emit(ExternalAliasEvent(ExternalDefinedEventType(label: "x"))) + waitForListeners() + + check seen == @["x"] + ExternalAliasEvent.dropAllListeners() diff --git a/tests/common/test_multi_request_broker.nim b/tests/common/test_multi_request_broker.nim index 3bf10a54d..39ed90eea 100644 --- a/tests/common/test_multi_request_broker.nim +++ b/tests/common/test_multi_request_broker.nim @@ -31,6 +31,23 @@ MultiRequestBroker: suffix: string ): Future[Result[DualResponse, string]] {.async.} +type ExternalBaseType = string + +MultiRequestBroker: + type NativeIntResponse = int + + proc signatureFetch*(): Future[Result[NativeIntResponse, string]] {.async.} + +MultiRequestBroker: + type ExternalAliasResponse = ExternalBaseType + + proc signatureFetch*(): Future[Result[ExternalAliasResponse, string]] {.async.} + +MultiRequestBroker: + type 
AlreadyDistinctResponse = distinct int + + proc signatureFetch*(): Future[Result[AlreadyDistinctResponse, string]] {.async.} + suite "MultiRequestBroker": test "aggregates zero-argument providers": discard NoArgResponse.setProvider( @@ -194,7 +211,6 @@ suite "MultiRequestBroker": let firstHandler = NoArgResponse.setProvider( proc(): Future[Result[NoArgResponse, string]] {.async.} = raise newException(ValueError, "first handler raised") - ok(NoArgResponse(label: "any")) ) discard NoArgResponse.setProvider( @@ -211,6 +227,99 @@ suite "MultiRequestBroker": test "ref providers returning nil fail request": DualResponse.clearProviders() + test "supports native request types": + NativeIntResponse.clearProviders() + + discard NativeIntResponse.setProvider( + proc(): Future[Result[NativeIntResponse, string]] {.async.} = + ok(NativeIntResponse(1)) + ) + + discard NativeIntResponse.setProvider( + proc(): Future[Result[NativeIntResponse, string]] {.async.} = + ok(NativeIntResponse(2)) + ) + + let res = waitFor NativeIntResponse.request() + check res.isOk() + check res.get().len == 2 + check res.get().anyIt(int(it) == 1) + check res.get().anyIt(int(it) == 2) + + NativeIntResponse.clearProviders() + + test "supports external request types": + ExternalAliasResponse.clearProviders() + + discard ExternalAliasResponse.setProvider( + proc(): Future[Result[ExternalAliasResponse, string]] {.async.} = + ok(ExternalAliasResponse("hello")) + ) + + let res = waitFor ExternalAliasResponse.request() + check res.isOk() + check res.get().len == 1 + check ExternalBaseType(res.get()[0]) == "hello" + + ExternalAliasResponse.clearProviders() + + test "supports already-distinct request types": + AlreadyDistinctResponse.clearProviders() + + discard AlreadyDistinctResponse.setProvider( + proc(): Future[Result[AlreadyDistinctResponse, string]] {.async.} = + ok(AlreadyDistinctResponse(7)) + ) + + let res = waitFor AlreadyDistinctResponse.request() + check res.isOk() + check res.get().len == 1 + check 
int(res.get()[0]) == 7 + + AlreadyDistinctResponse.clearProviders() + + test "context-aware providers are isolated": + NoArgResponse.clearProviders() + let ctxA = NewBrokerContext() + let ctxB = NewBrokerContext() + + discard NoArgResponse.setProvider( + ctxA, + proc(): Future[Result[NoArgResponse, string]] {.async.} = + ok(NoArgResponse(label: "a")), + ) + discard NoArgResponse.setProvider( + ctxB, + proc(): Future[Result[NoArgResponse, string]] {.async.} = + ok(NoArgResponse(label: "b")), + ) + + let resA = waitFor NoArgResponse.request(ctxA) + check resA.isOk() + check resA.get().len == 1 + check resA.get()[0].label == "a" + + let resB = waitFor NoArgResponse.request(ctxB) + check resB.isOk() + check resB.get().len == 1 + check resB.get()[0].label == "b" + + let resDefault = waitFor NoArgResponse.request() + check resDefault.isOk() + check resDefault.get().len == 0 + + NoArgResponse.clearProviders(ctxA) + let clearedA = waitFor NoArgResponse.request(ctxA) + check clearedA.isOk() + check clearedA.get().len == 0 + + let stillB = waitFor NoArgResponse.request(ctxB) + check stillB.isOk() + check stillB.get().len == 1 + check stillB.get()[0].label == "b" + + NoArgResponse.clearProviders(ctxB) + discard DualResponse.setProvider( proc(): Future[Result[DualResponse, string]] {.async.} = let nilResponse: DualResponse = nil diff --git a/tests/common/test_request_broker.nim b/tests/common/test_request_broker.nim index a534216dc..87065a916 100644 --- a/tests/common/test_request_broker.nim +++ b/tests/common/test_request_broker.nim @@ -203,6 +203,104 @@ suite "RequestBroker macro (async mode)": DualResponse.clearProvider() + test "supports keyed providers (async, zero-arg)": + SimpleResponse.clearProvider() + + check SimpleResponse + .setProvider( + proc(): Future[Result[SimpleResponse, string]] {.async.} = + ok(SimpleResponse(value: "default")) + ) + .isOk() + + check SimpleResponse + .setProvider( + BrokerContext(0x11111111'u32), + proc(): Future[Result[SimpleResponse, 
string]] {.async.} = + ok(SimpleResponse(value: "one")), + ) + .isOk() + + check SimpleResponse + .setProvider( + BrokerContext(0x22222222'u32), + proc(): Future[Result[SimpleResponse, string]] {.async.} = + ok(SimpleResponse(value: "two")), + ) + .isOk() + + let defaultRes = waitFor SimpleResponse.request() + check defaultRes.isOk() + check defaultRes.value.value == "default" + + let res1 = waitFor SimpleResponse.request(BrokerContext(0x11111111'u32)) + check res1.isOk() + check res1.value.value == "one" + + let res2 = waitFor SimpleResponse.request(BrokerContext(0x22222222'u32)) + check res2.isOk() + check res2.value.value == "two" + + let missing = waitFor SimpleResponse.request(BrokerContext(0x33333333'u32)) + check missing.isErr() + check missing.error.contains("no provider registered for broker context") + + check SimpleResponse + .setProvider( + BrokerContext(0x11111111'u32), + proc(): Future[Result[SimpleResponse, string]] {.async.} = + ok(SimpleResponse(value: "dup")), + ) + .isErr() + + SimpleResponse.clearProvider() + + test "supports keyed providers (async, with args)": + KeyedResponse.clearProvider() + + check KeyedResponse + .setProvider( + proc(key: string, subKey: int): Future[Result[KeyedResponse, string]] {.async.} = + ok(KeyedResponse(key: "default-" & key, payload: $subKey)) + ) + .isOk() + + check KeyedResponse + .setProvider( + BrokerContext(0xABCDEF01'u32), + proc(key: string, subKey: int): Future[Result[KeyedResponse, string]] {.async.} = + ok(KeyedResponse(key: "k1-" & key, payload: "p" & $subKey)), + ) + .isOk() + + check KeyedResponse + .setProvider( + BrokerContext(0xABCDEF02'u32), + proc(key: string, subKey: int): Future[Result[KeyedResponse, string]] {.async.} = + ok(KeyedResponse(key: "k2-" & key, payload: "q" & $subKey)), + ) + .isOk() + + let d = waitFor KeyedResponse.request("topic", 7) + check d.isOk() + check d.value.key == "default-topic" + + let k1 = waitFor KeyedResponse.request(BrokerContext(0xABCDEF01'u32), "topic", 7) + 
check k1.isOk() + check k1.value.key == "k1-topic" + check k1.value.payload == "p7" + + let k2 = waitFor KeyedResponse.request(BrokerContext(0xABCDEF02'u32), "topic", 7) + check k2.isOk() + check k2.value.key == "k2-topic" + check k2.value.payload == "q7" + + let miss = waitFor KeyedResponse.request(BrokerContext(0xDEADBEEF'u32), "topic", 7) + check miss.isErr() + check miss.error.contains("no provider registered for broker context") + + KeyedResponse.clearProvider() + ## --------------------------------------------------------------------------- ## Sync-mode brokers + tests ## --------------------------------------------------------------------------- @@ -370,6 +468,71 @@ suite "RequestBroker macro (sync mode)": ImplicitResponseSync.clearProvider() + test "supports keyed providers (sync, zero-arg)": + SimpleResponseSync.clearProvider() + + check SimpleResponseSync + .setProvider( + proc(): Result[SimpleResponseSync, string] = + ok(SimpleResponseSync(value: "default")) + ) + .isOk() + + check SimpleResponseSync + .setProvider( + BrokerContext(0x10101010'u32), + proc(): Result[SimpleResponseSync, string] = + ok(SimpleResponseSync(value: "ten")), + ) + .isOk() + + let defaultRes = SimpleResponseSync.request() + check defaultRes.isOk() + check defaultRes.value.value == "default" + + let keyedRes = SimpleResponseSync.request(BrokerContext(0x10101010'u32)) + check keyedRes.isOk() + check keyedRes.value.value == "ten" + + let miss = SimpleResponseSync.request(BrokerContext(0x20202020'u32)) + check miss.isErr() + check miss.error.contains("no provider registered for broker context") + + SimpleResponseSync.clearProvider() + + test "supports keyed providers (sync, with args)": + KeyedResponseSync.clearProvider() + + check KeyedResponseSync + .setProvider( + proc(key: string, subKey: int): Result[KeyedResponseSync, string] = + ok(KeyedResponseSync(key: "default-" & key, payload: $subKey)) + ) + .isOk() + + check KeyedResponseSync + .setProvider( + 
BrokerContext(0xA0A0A0A0'u32), + proc(key: string, subKey: int): Result[KeyedResponseSync, string] = + ok(KeyedResponseSync(key: "k-" & key, payload: "p" & $subKey)), + ) + .isOk() + + let d = KeyedResponseSync.request("topic", 2) + check d.isOk() + check d.value.key == "default-topic" + + let keyed = KeyedResponseSync.request(BrokerContext(0xA0A0A0A0'u32), "topic", 2) + check keyed.isOk() + check keyed.value.key == "k-topic" + check keyed.value.payload == "p2" + + let miss = KeyedResponseSync.request(BrokerContext(0xB0B0B0B0'u32), "topic", 2) + check miss.isErr() + check miss.error.contains("no provider registered for broker context") + + KeyedResponseSync.clearProvider() + ## --------------------------------------------------------------------------- ## POD / external type brokers + tests (distinct/alias behavior) ## --------------------------------------------------------------------------- diff --git a/waku/common/broker/broker_context.nim b/waku/common/broker/broker_context.nim new file mode 100644 index 000000000..483a2e3a7 --- /dev/null +++ b/waku/common/broker/broker_context.nim @@ -0,0 +1,68 @@ +{.push raises: [].} + +import std/[strutils, concurrency/atomics], chronos + +type BrokerContext* = distinct uint32 + +func `==`*(a, b: BrokerContext): bool = + uint32(a) == uint32(b) + +func `!=`*(a, b: BrokerContext): bool = + uint32(a) != uint32(b) + +func `$`*(bc: BrokerContext): string = + toHex(uint32(bc), 8) + +const DefaultBrokerContext* = BrokerContext(0xCAFFE14E'u32) + +# Global broker context accessor. +# +# NOTE: This intentionally creates a *single* active BrokerContext per process +# (per event loop thread). Use only if you accept serialization of all broker +# context usage through the lock. 
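The serialization caveat in the note above can be illustrated with a small sketch. `withScopedContext` is an illustrative name; it assumes the `lockGlobalBrokerContext` template and `globalBrokerContext` accessor defined in this file, plus chronos, are in scope:

```nim
import chronos

# Sketch only: installing a specific context as the process-global one.
# All broker-context users are serialized behind the global AsyncLock.
proc withScopedContext(brokerCtx: BrokerContext) {.async.} =
  lockGlobalBrokerContext(brokerCtx):
    # Inside the body, `globalBrokerContext()` observes `brokerCtx`;
    # the previous value is restored (and the lock released) on exit.
    doAssert globalBrokerContext() == brokerCtx
```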
+var globalBrokerContextLock {.threadvar.}: AsyncLock +globalBrokerContextLock = newAsyncLock() +var globalBrokerContextValue {.threadvar.}: BrokerContext +globalBrokerContextValue = DefaultBrokerContext +proc globalBrokerContext*(): BrokerContext = + ## Returns the currently active global broker context. + ## + ## This is intentionally lock-free; callers should use it inside + ## `withNewGlobalBrokerContext` / `withGlobalBrokerContext`. + globalBrokerContextValue + +var gContextCounter: Atomic[uint32] + +proc NewBrokerContext*(): BrokerContext = + var nextId = gContextCounter.fetchAdd(1, moRelaxed) + if nextId == uint32(DefaultBrokerContext): + nextId = gContextCounter.fetchAdd(1, moRelaxed) + return BrokerContext(nextId) + +template lockGlobalBrokerContext*(brokerCtx: BrokerContext, body: untyped): untyped = + ## Runs `body` while holding the global broker context lock with the provided + ## `brokerCtx` installed as the globally accessible context. + ## + ## This template is intended for use from within `chronos` async procs. + block: + await noCancel(globalBrokerContextLock.acquire()) + let previousBrokerCtx = globalBrokerContextValue + globalBrokerContextValue = brokerCtx + try: + body + finally: + globalBrokerContextValue = previousBrokerCtx + try: + globalBrokerContextLock.release() + except AsyncLockError: + doAssert false, "globalBrokerContextLock.release(): lock not held" + +template lockNewGlobalBrokerContext*(body: untyped): untyped = + ## Runs `body` while holding the global broker context lock with a freshly + ## generated broker context installed as the global accessor. + ## + ## The previous global broker context (if any) is restored on exit. 
+ lockGlobalBrokerContext(NewBrokerContext()): + body + +{.pop.} diff --git a/waku/common/broker/event_broker.nim b/waku/common/broker/event_broker.nim index 05d7b50ab..779689f88 100644 --- a/waku/common/broker/event_broker.nim +++ b/waku/common/broker/event_broker.nim @@ -5,10 +5,35 @@ ## need for direct dependencies in between emitters and listeners. ## Worth considering using it in a single or many emitters to many listeners scenario. ## -## Generates a standalone, type-safe event broker for the declared object type. +## Generates a standalone, type-safe event broker for the declared type. ## The macro exports the value type itself plus a broker companion that manages ## listeners via thread-local storage. ## +## Type definitions: +## - Inline `object` / `ref object` definitions are supported. +## - Native types, aliases, and externally-defined types are also supported. +## In that case, EventBroker will automatically wrap the declared RHS type in +## `distinct` unless you already used `distinct`. +## This keeps event types unique even when multiple brokers share the same +## underlying base type. +## +## Default vs. context aware use: +## Every generated broker is a thread-local global instance. This means EventBroker +## enables decoupled event exchange threadwise. +## +## Sometimes we use brokers inside a context (e.g. within a component that has many +## modules or subsystems). If you instantiate multiple such components in a single +## thread, and each component must have its own listener set for the same EventBroker +## type, you can use context-aware EventBroker. +## +## Context awareness is supported through the `BrokerContext` argument for +## `listen`, `emit`, `dropListener`, and `dropAllListeners`. +## Listener stores are kept separate per broker context. +## +## Default broker context is defined as `DefaultBrokerContext`. 
If you don't need +## context awareness, you can keep using the interfaces without the context +## argument, which operate on `DefaultBrokerContext`. +## ## Usage: ## Declare your desired event type inside an `EventBroker` macro, add any number of fields.: ## ```nim @@ -47,87 +72,46 @@ ## GreetingEvent.dropListener(handle) ## ``` +## Example (non-object event type): +## ```nim +## EventBroker: +## type CounterEvent = int # exported as: `distinct int` +## +## discard CounterEvent.listen( +## proc(evt: CounterEvent): Future[void] {.async.} = +## echo int(evt) +## ) +## CounterEvent.emit(CounterEvent(42)) +## ``` + import std/[macros, tables] import chronos, chronicles, results -import ./helper/broker_utils +import ./helper/broker_utils, broker_context -export chronicles, results, chronos +export chronicles, results, chronos, broker_context macro EventBroker*(body: untyped): untyped = when defined(eventBrokerDebug): echo body.treeRepr - var typeIdent: NimNode = nil - var objectDef: NimNode = nil - var fieldNames: seq[NimNode] = @[] - var fieldTypes: seq[NimNode] = @[] - var isRefObject = false - for stmt in body: - if stmt.kind == nnkTypeSection: - for def in stmt: - if def.kind != nnkTypeDef: - continue - let rhs = def[2] - var objectType: NimNode - case rhs.kind - of nnkObjectTy: - objectType = rhs - of nnkRefTy: - isRefObject = true - if rhs.len != 1 or rhs[0].kind != nnkObjectTy: - error("EventBroker ref object must wrap a concrete object definition", rhs) - objectType = rhs[0] - else: - continue - if not typeIdent.isNil(): - error("Only one object type may be declared inside EventBroker", def) - typeIdent = baseTypeIdent(def[0]) - let recList = objectType[2] - if recList.kind != nnkRecList: - error("EventBroker object must declare a standard field list", objectType) - var exportedRecList = newTree(nnkRecList) - for field in recList: - case field.kind - of nnkIdentDefs: - ensureFieldDef(field) - let fieldTypeNode = field[field.len - 2] - for i in 0 ..< field.len - 
2: - let baseFieldIdent = baseTypeIdent(field[i]) - fieldNames.add(copyNimTree(baseFieldIdent)) - fieldTypes.add(copyNimTree(fieldTypeNode)) - var cloned = copyNimTree(field) - for i in 0 ..< cloned.len - 2: - cloned[i] = exportIdentNode(cloned[i]) - exportedRecList.add(cloned) - of nnkEmpty: - discard - else: - error( - "EventBroker object definition only supports simple field declarations", - field, - ) - let exportedObjectType = newTree( - nnkObjectTy, - copyNimTree(objectType[0]), - copyNimTree(objectType[1]), - exportedRecList, - ) - if isRefObject: - objectDef = newTree(nnkRefTy, exportedObjectType) - else: - objectDef = exportedObjectType - if typeIdent.isNil(): - error("EventBroker body must declare exactly one object type", body) + let parsed = parseSingleTypeDef(body, "EventBroker", collectFieldInfo = true) + let typeIdent = parsed.typeIdent + let objectDef = parsed.objectDef + let fieldNames = parsed.fieldNames + let fieldTypes = parsed.fieldTypes + let hasInlineFields = parsed.hasInlineFields let exportedTypeIdent = postfix(copyNimTree(typeIdent), "*") let sanitized = sanitizeIdentName(typeIdent) let typeNameLit = newLit($typeIdent) - let isRefObjectLit = newLit(isRefObject) let handlerProcIdent = ident(sanitized & "ListenerProc") let listenerHandleIdent = ident(sanitized & "Listener") let brokerTypeIdent = ident(sanitized & "Broker") let exportedHandlerProcIdent = postfix(copyNimTree(handlerProcIdent), "*") let exportedListenerHandleIdent = postfix(copyNimTree(listenerHandleIdent), "*") let exportedBrokerTypeIdent = postfix(copyNimTree(brokerTypeIdent), "*") + let bucketTypeIdent = ident(sanitized & "CtxBucket") + let findBucketIdxIdent = ident(sanitized & "FindBucketIdx") + let getOrCreateBucketIdxIdent = ident(sanitized & "GetOrCreateBucketIdx") let accessProcIdent = ident("access" & sanitized & "Broker") let globalVarIdent = ident("g" & sanitized & "Broker") let listenImplIdent = ident("register" & sanitized & "Listener") @@ -147,10 +131,14 @@ macro 
EventBroker*(body: untyped): untyped = `exportedHandlerProcIdent` = proc(event: `typeIdent`): Future[void] {.async: (raises: []), gcsafe.} - `exportedBrokerTypeIdent` = ref object + `bucketTypeIdent` = object + brokerCtx: BrokerContext listeners: Table[uint64, `handlerProcIdent`] nextId: uint64 + `exportedBrokerTypeIdent` = ref object + buckets: seq[`bucketTypeIdent`] + ) result.add( @@ -163,49 +151,102 @@ macro EventBroker*(body: untyped): untyped = proc `accessProcIdent`(): `brokerTypeIdent` = if `globalVarIdent`.isNil(): new(`globalVarIdent`) - `globalVarIdent`.listeners = initTable[uint64, `handlerProcIdent`]() + `globalVarIdent`.buckets = + @[ + `bucketTypeIdent`( + brokerCtx: DefaultBrokerContext, + listeners: initTable[uint64, `handlerProcIdent`](), + nextId: 1'u64, + ) + ] `globalVarIdent` ) result.add( quote do: + proc `findBucketIdxIdent`( + broker: `brokerTypeIdent`, brokerCtx: BrokerContext + ): int = + if brokerCtx == DefaultBrokerContext: + return 0 + for i in 1 ..< broker.buckets.len: + if broker.buckets[i].brokerCtx == brokerCtx: + return i + return -1 + + proc `getOrCreateBucketIdxIdent`( + broker: `brokerTypeIdent`, brokerCtx: BrokerContext + ): int = + let idx = `findBucketIdxIdent`(broker, brokerCtx) + if idx >= 0: + return idx + broker.buckets.add( + `bucketTypeIdent`( + brokerCtx: brokerCtx, + listeners: initTable[uint64, `handlerProcIdent`](), + nextId: 1'u64, + ) + ) + return broker.buckets.high + proc `listenImplIdent`( - handler: `handlerProcIdent` + brokerCtx: BrokerContext, handler: `handlerProcIdent` ): Result[`listenerHandleIdent`, string] = if handler.isNil(): return err("Must provide a non-nil event handler") var broker = `accessProcIdent`() - if broker.nextId == 0'u64: - broker.nextId = 1'u64 - if broker.nextId == high(uint64): - error "Cannot add more listeners: ID space exhausted", nextId = $broker.nextId + + let bucketIdx = `getOrCreateBucketIdxIdent`(broker, brokerCtx) + if broker.buckets[bucketIdx].nextId == 0'u64: + 
broker.buckets[bucketIdx].nextId = 1'u64 + + if broker.buckets[bucketIdx].nextId == high(uint64): + error "Cannot add more listeners: ID space exhausted", + nextId = $broker.buckets[bucketIdx].nextId return err("Cannot add more listeners, listener ID space exhausted") - let newId = broker.nextId - inc broker.nextId - broker.listeners[newId] = handler + + let newId = broker.buckets[bucketIdx].nextId + inc broker.buckets[bucketIdx].nextId + broker.buckets[bucketIdx].listeners[newId] = handler return ok(`listenerHandleIdent`(id: newId)) ) result.add( quote do: - proc `dropListenerImplIdent`(handle: `listenerHandleIdent`) = + proc `dropListenerImplIdent`( + brokerCtx: BrokerContext, handle: `listenerHandleIdent` + ) = if handle.id == 0'u64: return var broker = `accessProcIdent`() - if broker.listeners.len == 0: + + let bucketIdx = `findBucketIdxIdent`(broker, brokerCtx) + if bucketIdx < 0: return - broker.listeners.del(handle.id) + + if broker.buckets[bucketIdx].listeners.len == 0: + return + broker.buckets[bucketIdx].listeners.del(handle.id) + if brokerCtx != DefaultBrokerContext and + broker.buckets[bucketIdx].listeners.len == 0: + broker.buckets.delete(bucketIdx) ) result.add( quote do: - proc `dropAllListenersImplIdent`() = + proc `dropAllListenersImplIdent`(brokerCtx: BrokerContext) = var broker = `accessProcIdent`() - if broker.listeners.len > 0: - broker.listeners.clear() + + let bucketIdx = `findBucketIdxIdent`(broker, brokerCtx) + if bucketIdx < 0: + return + if broker.buckets[bucketIdx].listeners.len > 0: + broker.buckets[bucketIdx].listeners.clear() + if brokerCtx != DefaultBrokerContext: + broker.buckets.delete(bucketIdx) ) @@ -214,17 +255,34 @@ macro EventBroker*(body: untyped): untyped = proc listen*( _: typedesc[`typeIdent`], handler: `handlerProcIdent` ): Result[`listenerHandleIdent`, string] = - return `listenImplIdent`(handler) + return `listenImplIdent`(DefaultBrokerContext, handler) + + proc listen*( + _: typedesc[`typeIdent`], + brokerCtx: 
BrokerContext, + handler: `handlerProcIdent`, + ): Result[`listenerHandleIdent`, string] = + return `listenImplIdent`(brokerCtx, handler) ) result.add( quote do: proc dropListener*(_: typedesc[`typeIdent`], handle: `listenerHandleIdent`) = - `dropListenerImplIdent`(handle) + `dropListenerImplIdent`(DefaultBrokerContext, handle) + + proc dropListener*( + _: typedesc[`typeIdent`], + brokerCtx: BrokerContext, + handle: `listenerHandleIdent`, + ) = + `dropListenerImplIdent`(brokerCtx, handle) proc dropAllListeners*(_: typedesc[`typeIdent`]) = - `dropAllListenersImplIdent`() + `dropAllListenersImplIdent`(DefaultBrokerContext) + + proc dropAllListeners*(_: typedesc[`typeIdent`], brokerCtx: BrokerContext) = + `dropAllListenersImplIdent`(brokerCtx) ) @@ -241,68 +299,114 @@ macro EventBroker*(body: untyped): untyped = error "Failed to execute event listener", error = getCurrentExceptionMsg() proc `emitImplIdent`( - event: `typeIdent` + brokerCtx: BrokerContext, event: `typeIdent` ): Future[void] {.async: (raises: []), gcsafe.} = - when `isRefObjectLit`: + when compiles(event.isNil()): if event.isNil(): error "Cannot emit uninitialized event object", eventType = `typeNameLit` return let broker = `accessProcIdent`() - if broker.listeners.len == 0: + let bucketIdx = `findBucketIdxIdent`(broker, brokerCtx) + if bucketIdx < 0: # nothing to do as nobody is listening return + if broker.buckets[bucketIdx].listeners.len == 0: + return var callbacks: seq[`handlerProcIdent`] = @[] - for cb in broker.listeners.values: + for cb in broker.buckets[bucketIdx].listeners.values: callbacks.add(cb) for cb in callbacks: asyncSpawn `listenerTaskIdent`(cb, event) proc emit*(event: `typeIdent`) = - asyncSpawn `emitImplIdent`(event) + asyncSpawn `emitImplIdent`(DefaultBrokerContext, event) proc emit*(_: typedesc[`typeIdent`], event: `typeIdent`) = - asyncSpawn `emitImplIdent`(event) + asyncSpawn `emitImplIdent`(DefaultBrokerContext, event) + + proc emit*( + _: typedesc[`typeIdent`], brokerCtx: 
BrokerContext, event: `typeIdent` + ) = + asyncSpawn `emitImplIdent`(brokerCtx, event) ) - var emitCtorParams = newTree(nnkFormalParams, newEmptyNode()) - let typedescParamType = - newTree(nnkBracketExpr, ident("typedesc"), copyNimTree(typeIdent)) - emitCtorParams.add( - newTree(nnkIdentDefs, ident("_"), typedescParamType, newEmptyNode()) - ) - for i in 0 ..< fieldNames.len: + if hasInlineFields: + # Typedesc emit constructor overloads for inline object/ref object types. + var emitCtorParams = newTree(nnkFormalParams, newEmptyNode()) + let typedescParamType = + newTree(nnkBracketExpr, ident("typedesc"), copyNimTree(typeIdent)) emitCtorParams.add( - newTree( - nnkIdentDefs, - copyNimTree(fieldNames[i]), - copyNimTree(fieldTypes[i]), - newEmptyNode(), + newTree(nnkIdentDefs, ident("_"), typedescParamType, newEmptyNode()) + ) + for i in 0 ..< fieldNames.len: + emitCtorParams.add( + newTree( + nnkIdentDefs, + copyNimTree(fieldNames[i]), + copyNimTree(fieldTypes[i]), + newEmptyNode(), + ) ) + + var emitCtorExpr = newTree(nnkObjConstr, copyNimTree(typeIdent)) + for i in 0 ..< fieldNames.len: + emitCtorExpr.add( + newTree( + nnkExprColonExpr, copyNimTree(fieldNames[i]), copyNimTree(fieldNames[i]) + ) + ) + + let emitCtorCallDefault = + newCall(copyNimTree(emitImplIdent), ident("DefaultBrokerContext"), emitCtorExpr) + let emitCtorBodyDefault = quote: + asyncSpawn `emitCtorCallDefault` + + let typedescEmitProcDefault = newTree( + nnkProcDef, + postfix(ident("emit"), "*"), + newEmptyNode(), + newEmptyNode(), + emitCtorParams, + newEmptyNode(), + newEmptyNode(), + emitCtorBodyDefault, ) + result.add(typedescEmitProcDefault) - var emitCtorExpr = newTree(nnkObjConstr, copyNimTree(typeIdent)) - for i in 0 ..< fieldNames.len: - emitCtorExpr.add( - newTree(nnkExprColonExpr, copyNimTree(fieldNames[i]), copyNimTree(fieldNames[i])) + var emitCtorParamsCtx = newTree(nnkFormalParams, newEmptyNode()) + emitCtorParamsCtx.add( + newTree(nnkIdentDefs, ident("_"), typedescParamType, 
newEmptyNode()) ) + emitCtorParamsCtx.add( + newTree(nnkIdentDefs, ident("brokerCtx"), ident("BrokerContext"), newEmptyNode()) + ) + for i in 0 ..< fieldNames.len: + emitCtorParamsCtx.add( + newTree( + nnkIdentDefs, + copyNimTree(fieldNames[i]), + copyNimTree(fieldTypes[i]), + newEmptyNode(), + ) + ) - let emitCtorCall = newCall(copyNimTree(emitImplIdent), emitCtorExpr) - let emitCtorBody = quote: - asyncSpawn `emitCtorCall` + let emitCtorCallCtx = + newCall(copyNimTree(emitImplIdent), ident("brokerCtx"), copyNimTree(emitCtorExpr)) + let emitCtorBodyCtx = quote: + asyncSpawn `emitCtorCallCtx` - let typedescEmitProc = newTree( - nnkProcDef, - postfix(ident("emit"), "*"), - newEmptyNode(), - newEmptyNode(), - emitCtorParams, - newEmptyNode(), - newEmptyNode(), - emitCtorBody, - ) - - result.add(typedescEmitProc) + let typedescEmitProcCtx = newTree( + nnkProcDef, + postfix(ident("emit"), "*"), + newEmptyNode(), + newEmptyNode(), + emitCtorParamsCtx, + newEmptyNode(), + newEmptyNode(), + emitCtorBodyCtx, + ) + result.add(typedescEmitProcCtx) when defined(eventBrokerDebug): echo result.repr diff --git a/waku/common/broker/helper/broker_utils.nim b/waku/common/broker/helper/broker_utils.nim index ea9f85750..90f2055d3 100644 --- a/waku/common/broker/helper/broker_utils.nim +++ b/waku/common/broker/helper/broker_utils.nim @@ -1,5 +1,21 @@ import std/macros +type ParsedBrokerType* = object + ## Result of parsing the single `type` definition inside a broker macro body. 
+ ## + ## - `typeIdent`: base identifier for the declared type name + ## - `objectDef`: exported type definition RHS (inline object fields exported; + ## non-object types wrapped in `distinct` unless already distinct) + ## - `isRefObject`: true only for inline `ref object` definitions + ## - `hasInlineFields`: true for inline `object` / `ref object` + ## - `fieldNames`/`fieldTypes`: populated only when `collectFieldInfo = true` + typeIdent*: NimNode + objectDef*: NimNode + isRefObject*: bool + hasInlineFields*: bool + fieldNames*: seq[NimNode] + fieldTypes*: seq[NimNode] + proc sanitizeIdentName*(node: NimNode): string = var raw = $node var sanitizedName = newStringOfCap(raw.len) @@ -41,3 +57,150 @@ proc baseTypeIdent*(defName: NimNode): NimNode = baseTypeIdent(defName[0]) else: error("Unsupported type name in broker definition", defName) + +proc ensureDistinctType*(rhs: NimNode): NimNode = + ## For PODs / aliases / externally-defined types, wrap in `distinct` unless + ## it's already distinct. + if rhs.kind == nnkDistinctTy: + return copyNimTree(rhs) + newTree(nnkDistinctTy, copyNimTree(rhs)) + +proc cloneParams*(params: seq[NimNode]): seq[NimNode] = + ## Deep copy parameter definitions so they can be inserted in multiple places. + result = @[] + for param in params: + result.add(copyNimTree(param)) + +proc collectParamNames*(params: seq[NimNode]): seq[NimNode] = + ## Extract all identifier symbols declared across IdentDefs nodes. + result = @[] + for param in params: + assert param.kind == nnkIdentDefs + for i in 0 ..< param.len - 2: + let nameNode = param[i] + if nameNode.kind == nnkEmpty: + continue + result.add(ident($nameNode)) + +proc parseSingleTypeDef*( + body: NimNode, + macroName: string, + allowRefToNonObject = false, + collectFieldInfo = false, +): ParsedBrokerType = + ## Parses exactly one `type` definition from a broker macro body. 
+ ## + ## Supported RHS: + ## - inline `object` / `ref object` (fields are auto-exported) + ## - non-object types / aliases / externally-defined types (wrapped in `distinct`) + ## - optionally: `ref SomeType` when `allowRefToNonObject = true` + var typeIdent: NimNode = nil + var objectDef: NimNode = nil + var isRefObject = false + var hasInlineFields = false + var fieldNames: seq[NimNode] = @[] + var fieldTypes: seq[NimNode] = @[] + + for stmt in body: + if stmt.kind != nnkTypeSection: + continue + for def in stmt: + if def.kind != nnkTypeDef: + continue + if not typeIdent.isNil(): + error("Only one type may be declared inside " & macroName, def) + typeIdent = baseTypeIdent(def[0]) + let rhs = def[2] + + case rhs.kind + of nnkObjectTy: + let recList = rhs[2] + if recList.kind != nnkRecList: + error(macroName & " object must declare a standard field list", rhs) + var exportedRecList = newTree(nnkRecList) + for field in recList: + case field.kind + of nnkIdentDefs: + ensureFieldDef(field) + if collectFieldInfo: + let fieldTypeNode = field[field.len - 2] + for i in 0 ..< field.len - 2: + let baseFieldIdent = baseTypeIdent(field[i]) + fieldNames.add(copyNimTree(baseFieldIdent)) + fieldTypes.add(copyNimTree(fieldTypeNode)) + var cloned = copyNimTree(field) + for i in 0 ..< cloned.len - 2: + cloned[i] = exportIdentNode(cloned[i]) + exportedRecList.add(cloned) + of nnkEmpty: + discard + else: + error( + macroName & " object definition only supports simple field declarations", + field, + ) + objectDef = newTree( + nnkObjectTy, copyNimTree(rhs[0]), copyNimTree(rhs[1]), exportedRecList + ) + isRefObject = false + hasInlineFields = true + of nnkRefTy: + if rhs.len != 1: + error(macroName & " ref type must have a single base", rhs) + if rhs[0].kind == nnkObjectTy: + let obj = rhs[0] + let recList = obj[2] + if recList.kind != nnkRecList: + error(macroName & " object must declare a standard field list", obj) + var exportedRecList = newTree(nnkRecList) + for field in recList: + 
case field.kind + of nnkIdentDefs: + ensureFieldDef(field) + if collectFieldInfo: + let fieldTypeNode = field[field.len - 2] + for i in 0 ..< field.len - 2: + let baseFieldIdent = baseTypeIdent(field[i]) + fieldNames.add(copyNimTree(baseFieldIdent)) + fieldTypes.add(copyNimTree(fieldTypeNode)) + var cloned = copyNimTree(field) + for i in 0 ..< cloned.len - 2: + cloned[i] = exportIdentNode(cloned[i]) + exportedRecList.add(cloned) + of nnkEmpty: + discard + else: + error( + macroName & " object definition only supports simple field declarations", + field, + ) + let exportedObjectType = newTree( + nnkObjectTy, copyNimTree(obj[0]), copyNimTree(obj[1]), exportedRecList + ) + objectDef = newTree(nnkRefTy, exportedObjectType) + isRefObject = true + hasInlineFields = true + elif allowRefToNonObject: + ## `ref SomeType` (SomeType can be defined elsewhere) + objectDef = ensureDistinctType(rhs) + isRefObject = false + hasInlineFields = false + else: + error(macroName & " ref object must wrap a concrete object definition", rhs) + else: + ## Non-object type / alias. + objectDef = ensureDistinctType(rhs) + isRefObject = false + hasInlineFields = false + + if typeIdent.isNil(): + error(macroName & " body must declare exactly one type", body) + + result = ParsedBrokerType( + typeIdent: typeIdent, + objectDef: objectDef, + isRefObject: isRefObject, + hasInlineFields: hasInlineFields, + fieldNames: fieldNames, + fieldTypes: fieldTypes, + ) diff --git a/waku/common/broker/multi_request_broker.nim b/waku/common/broker/multi_request_broker.nim index 7f4161f5a..2baa19940 100644 --- a/waku/common/broker/multi_request_broker.nim +++ b/waku/common/broker/multi_request_broker.nim @@ -5,12 +5,35 @@ ## need for direct dependencies in between. ## Worth considering using it for use cases where you need to collect data from multiple providers. 
##
-## Provides a declarative way to define an immutable value type together with a
-## thread-local broker that can register multiple asynchronous providers, dispatch
-## typed requests, and clear handlers. Unlike `RequestBroker`,
-## every call to `request` fan-outs to every registered provider and returns with
-## collected responses.
-## Request succeeds if all providers succeed, otherwise fails with an error.
+## Generates a standalone, type-safe request broker for the declared type.
+## The macro exports the value type itself plus a broker companion that manages
+## providers via thread-local storage.
+##
+## Unlike `RequestBroker`, every call to `request` fans out to every registered
+## provider and returns all collected responses.
+## The request succeeds only if all providers succeed; otherwise it fails.
+##
+## Type definitions:
+## - Inline `object` / `ref object` definitions are supported.
+## - Native types, aliases, and externally-defined types are also supported.
+## In that case, MultiRequestBroker will automatically wrap the declared RHS
+## type in `distinct` unless you already used `distinct`.
+## This keeps request types unique even when multiple brokers share the same
+## underlying base type.
+##
+## Default vs. context-aware use:
+## Every generated broker is a thread-local global instance.
+## Sometimes you want multiple independent provider sets for the same request
+## type within the same thread (e.g. multiple components). For that, you can use
+## a context-aware MultiRequestBroker.
+##
+## Context awareness is supported through the `BrokerContext` argument of
+## `setProvider`, `request`, `removeProvider`, and `clearProviders`.
+## Provider stores are kept separate per broker context.
+##
+## The default broker context is defined as `DefaultBrokerContext`. If you don't
+## need context awareness, you can keep using the interfaces without the context
+## argument, which operate on `DefaultBrokerContext`.
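As a rough sketch of the context-aware flow described above. The concrete type `PeerInfo`, its field, the provider body, and the construction of a `BrokerContext` from an integer are assumptions for illustration only; they are not taken from this patch. The call shapes, however, mirror the context-aware overloads the macro generates (`setProvider`/`request`/`removeProvider` each take a leading `BrokerContext`):

```nim
# Hypothetical usage sketch -- `PeerInfo` and `BrokerContext(1)` are assumed.
let ctxA = BrokerContext(1) # one unique context per component instance

# Register a provider under ctxA only; other contexts keep separate stores.
let handleRes = PeerInfo.setProvider(
  ctxA,
  proc(): Future[Result[PeerInfo, string]] {.async.} =
    return ok(PeerInfo(peerId: "peer-a")),
)

# Fan-out request restricted to ctxA; collects responses from every
# provider registered under that context.
let responses = await PeerInfo.request(ctxA)
# responses: Result[seq[PeerInfo], string]

# Remove the provider when the owning component is torn down.
if handleRes.isOk():
  PeerInfo.removeProvider(ctxA, handleRes.get())

# The context-free overloads still work and target DefaultBrokerContext:
discard await PeerInfo.request()
```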
##
## Usage:
##
@@ -29,14 +52,17 @@
##
## ```
##
-## You regiser request processor (proveder) at any place of the code without the need to know of who ever may request.
-## Respectively to the defined signatures register provider functions with `TypeName.setProvider(...)`.
-## Providers are async procs or lambdas that return with a Future[Result[seq[TypeName], string]].
-## Notice MultiRequestBroker's `setProvider` return with a handler that can be used to remove the provider later (or error).
+## You can register a request processor (provider) anywhere in the code without
+## needing to know who will issue requests.
+## Register provider functions with `TypeName.setProvider(...)`.
+## Providers are async procs or lambdas that return `Future[Result[TypeName, string]]`.
+## `setProvider` returns a handle (or an error) that can later be used to remove
+## the provider.
-## Requests can be made from anywhere with no direct dependency on the provider(s) by
-## calling `TypeName.request()` - with arguments respecting the signature(s).
-## This will asynchronously call the registered provider and return the collected data, in form of `Future[Result[seq[TypeName], string]]`.
+## Requests can be made from anywhere with no direct dependency on the provider(s)
+## by calling `TypeName.request()` (with arguments respecting the declared signature).
+## This will asynchronously call all registered providers and return the collected
+## responses as `Future[Result[seq[TypeName], string]]`.
##
## Whenever you don't want to process requests anymore (or your object instance that provides the request goes out of scope),
## you can remove it from the broker with `TypeName.removeProvider(handle)`.
@@ -77,8 +103,9 @@ import std/[macros, strutils, tables, sugar] import chronos import results import ./helper/broker_utils +import ./broker_context -export results, chronos +export results, chronos, broker_context proc isReturnTypeValid(returnType, typeIdent: NimNode): bool = ## Accept Future[Result[TypeIdent, string]] as the contract. @@ -95,23 +122,6 @@ proc isReturnTypeValid(returnType, typeIdent: NimNode): bool = return false inner[2].kind == nnkIdent and inner[2].eqIdent("string") -proc cloneParams(params: seq[NimNode]): seq[NimNode] = - ## Deep copy parameter definitions so they can be reused in generated nodes. - result = @[] - for param in params: - result.add(copyNimTree(param)) - -proc collectParamNames(params: seq[NimNode]): seq[NimNode] = - ## Extract identifiers declared in parameter definitions. - result = @[] - for param in params: - assert param.kind == nnkIdentDefs - for i in 0 ..< param.len - 2: - let nameNode = param[i] - if nameNode.kind == nnkEmpty: - continue - result.add(ident($nameNode)) - proc makeProcType(returnType: NimNode, params: seq[NimNode]): NimNode = var formal = newTree(nnkFormalParams) formal.add(returnType) @@ -126,65 +136,10 @@ proc makeProcType(returnType: NimNode, params: seq[NimNode]): NimNode = macro MultiRequestBroker*(body: untyped): untyped = when defined(requestBrokerDebug): echo body.treeRepr - var typeIdent: NimNode = nil - var objectDef: NimNode = nil - var isRefObject = false - for stmt in body: - if stmt.kind == nnkTypeSection: - for def in stmt: - if def.kind != nnkTypeDef: - continue - let rhs = def[2] - var objectType: NimNode - case rhs.kind - of nnkObjectTy: - objectType = rhs - of nnkRefTy: - isRefObject = true - if rhs.len != 1 or rhs[0].kind != nnkObjectTy: - error( - "MultiRequestBroker ref object must wrap a concrete object definition", - rhs, - ) - objectType = rhs[0] - else: - continue - if not typeIdent.isNil(): - error("Only one object type may be declared inside MultiRequestBroker", def) - typeIdent 
= baseTypeIdent(def[0]) - let recList = objectType[2] - if recList.kind != nnkRecList: - error( - "MultiRequestBroker object must declare a standard field list", objectType - ) - var exportedRecList = newTree(nnkRecList) - for field in recList: - case field.kind - of nnkIdentDefs: - ensureFieldDef(field) - var cloned = copyNimTree(field) - for i in 0 ..< cloned.len - 2: - cloned[i] = exportIdentNode(cloned[i]) - exportedRecList.add(cloned) - of nnkEmpty: - discard - else: - error( - "MultiRequestBroker object definition only supports simple field declarations", - field, - ) - let exportedObjectType = newTree( - nnkObjectTy, - copyNimTree(objectType[0]), - copyNimTree(objectType[1]), - exportedRecList, - ) - if isRefObject: - objectDef = newTree(nnkRefTy, exportedObjectType) - else: - objectDef = exportedObjectType - if typeIdent.isNil(): - error("MultiRequestBroker body must declare exactly one object type", body) + let parsed = parseSingleTypeDef(body, "MultiRequestBroker") + let typeIdent = parsed.typeIdent + let objectDef = parsed.objectDef + let isRefObject = parsed.isRefObject when defined(requestBrokerDebug): echo "MultiRequestBroker generating type: ", $typeIdent @@ -193,12 +148,13 @@ macro MultiRequestBroker*(body: untyped): untyped = let sanitized = sanitizeIdentName(typeIdent) let typeNameLit = newLit($typeIdent) let isRefObjectLit = newLit(isRefObject) - let tableSym = bindSym"Table" - let initTableSym = bindSym"initTable" let uint64Ident = ident("uint64") let providerKindIdent = ident(sanitized & "ProviderKind") let providerHandleIdent = ident(sanitized & "ProviderHandle") let exportedProviderHandleIdent = postfix(copyNimTree(providerHandleIdent), "*") + let bucketTypeIdent = ident(sanitized & "CtxBucket") + let findBucketIdxIdent = ident(sanitized & "FindBucketIdx") + let getOrCreateBucketIdxIdent = ident(sanitized & "GetOrCreateBucketIdx") let zeroKindIdent = ident("pk" & sanitized & "NoArgs") let argKindIdent = ident("pk" & sanitized & "WithArgs") 
var zeroArgSig: NimNode = nil @@ -306,63 +262,90 @@ macro MultiRequestBroker*(body: untyped): untyped = let procType = makeProcType(returnType, cloneParams(argParams)) typeSection.add(newTree(nnkTypeDef, argProviderName, newEmptyNode(), procType)) - var brokerRecList = newTree(nnkRecList) + var bucketRecList = newTree(nnkRecList) + bucketRecList.add( + newTree(nnkIdentDefs, ident("brokerCtx"), ident("BrokerContext"), newEmptyNode()) + ) if not zeroArgSig.isNil(): - brokerRecList.add( + bucketRecList.add( newTree( nnkIdentDefs, zeroArgFieldName, - newTree(nnkBracketExpr, tableSym, uint64Ident, zeroArgProviderName), + newTree(nnkBracketExpr, ident("seq"), zeroArgProviderName), newEmptyNode(), ) ) if not argSig.isNil(): - brokerRecList.add( + bucketRecList.add( newTree( nnkIdentDefs, argFieldName, - newTree(nnkBracketExpr, tableSym, uint64Ident, argProviderName), + newTree(nnkBracketExpr, ident("seq"), argProviderName), newEmptyNode(), ) ) - brokerRecList.add(newTree(nnkIdentDefs, ident("nextId"), uint64Ident, newEmptyNode())) - let brokerTypeIdent = ident(sanitizeIdentName(typeIdent) & "Broker") - let brokerTypeDef = newTree( - nnkTypeDef, - brokerTypeIdent, - newEmptyNode(), + typeSection.add( newTree( - nnkRefTy, newTree(nnkObjectTy, newEmptyNode(), newEmptyNode(), brokerRecList) - ), + nnkTypeDef, + bucketTypeIdent, + newEmptyNode(), + newTree(nnkObjectTy, newEmptyNode(), newEmptyNode(), bucketRecList), + ) + ) + + var brokerRecList = newTree(nnkRecList) + brokerRecList.add( + newTree( + nnkIdentDefs, + ident("buckets"), + newTree(nnkBracketExpr, ident("seq"), bucketTypeIdent), + newEmptyNode(), + ) + ) + let brokerTypeIdent = ident(sanitizeIdentName(typeIdent) & "Broker") + typeSection.add( + newTree( + nnkTypeDef, + brokerTypeIdent, + newEmptyNode(), + newTree( + nnkRefTy, newTree(nnkObjectTy, newEmptyNode(), newEmptyNode(), brokerRecList) + ), + ) ) - typeSection.add(brokerTypeDef) result = newStmtList() result.add(typeSection) let globalVarIdent = ident("g" & 
sanitizeIdentName(typeIdent) & "Broker") let accessProcIdent = ident("access" & sanitizeIdentName(typeIdent) & "Broker") - var initStatements = newStmtList() - if not zeroArgSig.isNil(): - initStatements.add( - quote do: - `globalVarIdent`.`zeroArgFieldName` = - `initTableSym`[`uint64Ident`, `zeroArgProviderName`]() - ) - if not argSig.isNil(): - initStatements.add( - quote do: - `globalVarIdent`.`argFieldName` = - `initTableSym`[`uint64Ident`, `argProviderName`]() - ) result.add( quote do: var `globalVarIdent` {.threadvar.}: `brokerTypeIdent` + proc `findBucketIdxIdent`( + broker: `brokerTypeIdent`, brokerCtx: BrokerContext + ): int = + if brokerCtx == DefaultBrokerContext: + return 0 + for i in 1 ..< broker.buckets.len: + if broker.buckets[i].brokerCtx == brokerCtx: + return i + return -1 + + proc `getOrCreateBucketIdxIdent`( + broker: `brokerTypeIdent`, brokerCtx: BrokerContext + ): int = + let idx = `findBucketIdxIdent`(broker, brokerCtx) + if idx >= 0: + return idx + broker.buckets.add(`bucketTypeIdent`(brokerCtx: brokerCtx)) + return broker.buckets.high + proc `accessProcIdent`(): `brokerTypeIdent` = if `globalVarIdent`.isNil(): new(`globalVarIdent`) - `globalVarIdent`.nextId = 1'u64 - `initStatements` + `globalVarIdent`.buckets = + @[`bucketTypeIdent`(brokerCtx: DefaultBrokerContext)] return `globalVarIdent` ) @@ -372,40 +355,47 @@ macro MultiRequestBroker*(body: untyped): untyped = result.add( quote do: proc setProvider*( - _: typedesc[`typeIdent`], handler: `zeroArgProviderName` + _: typedesc[`typeIdent`], + brokerCtx: BrokerContext, + handler: `zeroArgProviderName`, ): Result[`providerHandleIdent`, string] = if handler.isNil(): return err("Provider handler must be provided") let broker = `accessProcIdent`() - if broker.nextId == 0'u64: - broker.nextId = 1'u64 - for existingId, existing in broker.`zeroArgFieldName`.pairs: - if existing == handler: - return ok(`providerHandleIdent`(id: existingId, kind: `zeroKindIdent`)) - let newId = broker.nextId - inc 
broker.nextId - broker.`zeroArgFieldName`[newId] = handler - return ok(`providerHandleIdent`(id: newId, kind: `zeroKindIdent`)) + let bucketIdx = `getOrCreateBucketIdxIdent`(broker, brokerCtx) + for i, existing in broker.buckets[bucketIdx].`zeroArgFieldName`: + if not existing.isNil() and existing == handler: + return ok(`providerHandleIdent`(id: uint64(i + 1), kind: `zeroKindIdent`)) + broker.buckets[bucketIdx].`zeroArgFieldName`.add(handler) + return ok( + `providerHandleIdent`( + id: uint64(broker.buckets[bucketIdx].`zeroArgFieldName`.len), + kind: `zeroKindIdent`, + ) + ) + + proc setProvider*( + _: typedesc[`typeIdent`], handler: `zeroArgProviderName` + ): Result[`providerHandleIdent`, string] = + return setProvider(`typeIdent`, DefaultBrokerContext, handler) - ) - clearBody.add( - quote do: - let broker = `accessProcIdent`() - if not broker.isNil() and broker.`zeroArgFieldName`.len > 0: - broker.`zeroArgFieldName`.clear() ) result.add( quote do: proc request*( - _: typedesc[`typeIdent`] + _: typedesc[`typeIdent`], brokerCtx: BrokerContext ): Future[Result[seq[`typeIdent`], string]] {.async: (raises: []), gcsafe.} = var aggregated: seq[`typeIdent`] = @[] - let providers = `accessProcIdent`().`zeroArgFieldName` + let broker = `accessProcIdent`() + let bucketIdx = `findBucketIdxIdent`(broker, brokerCtx) + if bucketIdx < 0: + return ok(aggregated) + let providers = broker.buckets[bucketIdx].`zeroArgFieldName` if providers.len == 0: return ok(aggregated) # var providersFut: seq[Future[Result[`typeIdent`, string]]] = collect: var providersFut = collect(newSeq): - for provider in providers.values: + for provider in providers: if provider.isNil(): continue provider() @@ -435,32 +425,40 @@ macro MultiRequestBroker*(body: untyped): untyped = return ok(aggregated) + proc request*( + _: typedesc[`typeIdent`] + ): Future[Result[seq[`typeIdent`], string]] = + return request(`typeIdent`, DefaultBrokerContext) + ) if not argSig.isNil(): result.add( quote do: proc 
setProvider*( - _: typedesc[`typeIdent`], handler: `argProviderName` + _: typedesc[`typeIdent`], + brokerCtx: BrokerContext, + handler: `argProviderName`, ): Result[`providerHandleIdent`, string] = if handler.isNil(): return err("Provider handler must be provided") let broker = `accessProcIdent`() - if broker.nextId == 0'u64: - broker.nextId = 1'u64 - for existingId, existing in broker.`argFieldName`.pairs: - if existing == handler: - return ok(`providerHandleIdent`(id: existingId, kind: `argKindIdent`)) - let newId = broker.nextId - inc broker.nextId - broker.`argFieldName`[newId] = handler - return ok(`providerHandleIdent`(id: newId, kind: `argKindIdent`)) + let bucketIdx = `getOrCreateBucketIdxIdent`(broker, brokerCtx) + for i, existing in broker.buckets[bucketIdx].`argFieldName`: + if not existing.isNil() and existing == handler: + return ok(`providerHandleIdent`(id: uint64(i + 1), kind: `argKindIdent`)) + broker.buckets[bucketIdx].`argFieldName`.add(handler) + return ok( + `providerHandleIdent`( + id: uint64(broker.buckets[bucketIdx].`argFieldName`.len), + kind: `argKindIdent`, + ) + ) + + proc setProvider*( + _: typedesc[`typeIdent`], handler: `argProviderName` + ): Result[`providerHandleIdent`, string] = + return setProvider(`typeIdent`, DefaultBrokerContext, handler) - ) - clearBody.add( - quote do: - let broker = `accessProcIdent`() - if not broker.isNil() and broker.`argFieldName`.len > 0: - broker.`argFieldName`.clear() ) let requestParamDefs = cloneParams(argParams) let argNameIdents = collectParamNames(requestParamDefs) @@ -481,17 +479,24 @@ macro MultiRequestBroker*(body: untyped): untyped = newEmptyNode(), ) ) + formalParams.add( + newTree(nnkIdentDefs, ident("brokerCtx"), ident("BrokerContext"), newEmptyNode()) + ) for paramDef in requestParamDefs: formalParams.add(paramDef) let requestPragmas = quote: {.async: (raises: []), gcsafe.} let requestBody = quote: var aggregated: seq[`typeIdent`] = @[] - let providers = `accessProcIdent`().`argFieldName` 
+ let broker = `accessProcIdent`() + let bucketIdx = `findBucketIdxIdent`(broker, brokerCtx) + if bucketIdx < 0: + return ok(aggregated) + let providers = broker.buckets[bucketIdx].`argFieldName` if providers.len == 0: return ok(aggregated) var providersFut = collect(newSeq): - for provider in providers.values: + for provider in providers: if provider.isNil(): continue let `providerSym` = provider @@ -531,53 +536,208 @@ macro MultiRequestBroker*(body: untyped): untyped = ) ) - result.add( - quote do: - proc clearProviders*(_: typedesc[`typeIdent`]) = - `clearBody` - let broker = `accessProcIdent`() - if not broker.isNil(): - broker.nextId = 1'u64 - - ) - - let removeHandleSym = genSym(nskParam, "handle") - let removeBrokerSym = genSym(nskLet, "broker") - var removeBody = newStmtList() - removeBody.add( - quote do: - if `removeHandleSym`.id == 0'u64: - return - let `removeBrokerSym` = `accessProcIdent`() - if `removeBrokerSym`.isNil(): - return - ) - if not zeroArgSig.isNil(): - removeBody.add( + # Backward-compatible default-context overload (no brokerCtx parameter). 
+ var formalParamsDefault = newTree(nnkFormalParams) + formalParamsDefault.add( quote do: - if `removeHandleSym`.kind == `zeroKindIdent`: - `removeBrokerSym`.`zeroArgFieldName`.del(`removeHandleSym`.id) - return + Future[Result[seq[`typeIdent`], string]] ) - if not argSig.isNil(): - removeBody.add( - quote do: - if `removeHandleSym`.kind == `argKindIdent`: - `removeBrokerSym`.`argFieldName`.del(`removeHandleSym`.id) - return + formalParamsDefault.add( + newTree( + nnkIdentDefs, + ident("_"), + newTree(nnkBracketExpr, ident("typedesc"), copyNimTree(typeIdent)), + newEmptyNode(), + ) ) - removeBody.add( - quote do: - discard - ) - result.add( - quote do: - proc removeProvider*( - _: typedesc[`typeIdent`], `removeHandleSym`: `providerHandleIdent` - ) = - `removeBody` + for paramDef in requestParamDefs: + formalParamsDefault.add(copyNimTree(paramDef)) - ) + var wrapperCall = newCall(ident("request")) + wrapperCall.add(copyNimTree(typeIdent)) + wrapperCall.add(ident("DefaultBrokerContext")) + for argName in argNameIdents: + wrapperCall.add(copyNimTree(argName)) + + result.add( + newTree( + nnkProcDef, + postfix(ident("request"), "*"), + newEmptyNode(), + newEmptyNode(), + formalParamsDefault, + newEmptyNode(), + newEmptyNode(), + newStmtList(newTree(nnkReturnStmt, wrapperCall)), + ) + ) + let removeHandleCtxSym = genSym(nskParam, "handle") + let removeHandleDefaultSym = genSym(nskParam, "handle") + + when true: + # Generate clearProviders / removeProvider with macro-time knowledge about which + # provider lists exist (zero-arg and/or arg providers). 
+ if not zeroArgSig.isNil() and not argSig.isNil(): + result.add( + quote do: + proc clearProviders*(_: typedesc[`typeIdent`], brokerCtx: BrokerContext) = + let broker = `accessProcIdent`() + if broker.isNil(): + return + let bucketIdx = `findBucketIdxIdent`(broker, brokerCtx) + if bucketIdx < 0: + return + broker.buckets[bucketIdx].`zeroArgFieldName`.setLen(0) + broker.buckets[bucketIdx].`argFieldName`.setLen(0) + if brokerCtx != DefaultBrokerContext: + broker.buckets.delete(bucketIdx) + + proc clearProviders*(_: typedesc[`typeIdent`]) = + clearProviders(`typeIdent`, DefaultBrokerContext) + + proc removeProvider*( + _: typedesc[`typeIdent`], + brokerCtx: BrokerContext, + `removeHandleCtxSym`: `providerHandleIdent`, + ) = + if `removeHandleCtxSym`.id == 0'u64: + return + let broker = `accessProcIdent`() + if broker.isNil(): + return + let bucketIdx = `findBucketIdxIdent`(broker, brokerCtx) + if bucketIdx < 0: + return + + if `removeHandleCtxSym`.kind == `zeroKindIdent`: + let idx = int(`removeHandleCtxSym`.id) - 1 + if idx >= 0 and idx < broker.buckets[bucketIdx].`zeroArgFieldName`.len: + broker.buckets[bucketIdx].`zeroArgFieldName`[idx] = nil + elif `removeHandleCtxSym`.kind == `argKindIdent`: + let idx = int(`removeHandleCtxSym`.id) - 1 + if idx >= 0 and idx < broker.buckets[bucketIdx].`argFieldName`.len: + broker.buckets[bucketIdx].`argFieldName`[idx] = nil + + if brokerCtx != DefaultBrokerContext: + var hasAny = false + for p in broker.buckets[bucketIdx].`zeroArgFieldName`: + if not p.isNil(): + hasAny = true + break + if not hasAny: + for p in broker.buckets[bucketIdx].`argFieldName`: + if not p.isNil(): + hasAny = true + break + if not hasAny: + broker.buckets.delete(bucketIdx) + + proc removeProvider*( + _: typedesc[`typeIdent`], `removeHandleDefaultSym`: `providerHandleIdent` + ) = + removeProvider(`typeIdent`, DefaultBrokerContext, `removeHandleDefaultSym`) + + ) + elif not zeroArgSig.isNil(): + result.add( + quote do: + proc clearProviders*(_: 
typedesc[`typeIdent`], brokerCtx: BrokerContext) = + let broker = `accessProcIdent`() + if broker.isNil(): + return + let bucketIdx = `findBucketIdxIdent`(broker, brokerCtx) + if bucketIdx < 0: + return + broker.buckets[bucketIdx].`zeroArgFieldName`.setLen(0) + if brokerCtx != DefaultBrokerContext: + broker.buckets.delete(bucketIdx) + + proc clearProviders*(_: typedesc[`typeIdent`]) = + clearProviders(`typeIdent`, DefaultBrokerContext) + + proc removeProvider*( + _: typedesc[`typeIdent`], + brokerCtx: BrokerContext, + `removeHandleCtxSym`: `providerHandleIdent`, + ) = + if `removeHandleCtxSym`.id == 0'u64: + return + let broker = `accessProcIdent`() + if broker.isNil(): + return + let bucketIdx = `findBucketIdxIdent`(broker, brokerCtx) + if bucketIdx < 0: + return + if `removeHandleCtxSym`.kind != `zeroKindIdent`: + return + let idx = int(`removeHandleCtxSym`.id) - 1 + if idx >= 0 and idx < broker.buckets[bucketIdx].`zeroArgFieldName`.len: + broker.buckets[bucketIdx].`zeroArgFieldName`[idx] = nil + if brokerCtx != DefaultBrokerContext: + var hasAny = false + for p in broker.buckets[bucketIdx].`zeroArgFieldName`: + if not p.isNil(): + hasAny = true + break + if not hasAny: + broker.buckets.delete(bucketIdx) + + proc removeProvider*( + _: typedesc[`typeIdent`], `removeHandleDefaultSym`: `providerHandleIdent` + ) = + removeProvider(`typeIdent`, DefaultBrokerContext, `removeHandleDefaultSym`) + + ) + else: + result.add( + quote do: + proc clearProviders*(_: typedesc[`typeIdent`], brokerCtx: BrokerContext) = + let broker = `accessProcIdent`() + if broker.isNil(): + return + let bucketIdx = `findBucketIdxIdent`(broker, brokerCtx) + if bucketIdx < 0: + return + broker.buckets[bucketIdx].`argFieldName`.setLen(0) + if brokerCtx != DefaultBrokerContext: + broker.buckets.delete(bucketIdx) + + proc clearProviders*(_: typedesc[`typeIdent`]) = + clearProviders(`typeIdent`, DefaultBrokerContext) + + proc removeProvider*( + _: typedesc[`typeIdent`], + brokerCtx: BrokerContext, + 
`removeHandleCtxSym`: `providerHandleIdent`, + ) = + if `removeHandleCtxSym`.id == 0'u64: + return + let broker = `accessProcIdent`() + if broker.isNil(): + return + let bucketIdx = `findBucketIdxIdent`(broker, brokerCtx) + if bucketIdx < 0: + return + if `removeHandleCtxSym`.kind != `argKindIdent`: + return + let idx = int(`removeHandleCtxSym`.id) - 1 + if idx >= 0 and idx < broker.buckets[bucketIdx].`argFieldName`.len: + broker.buckets[bucketIdx].`argFieldName`[idx] = nil + if brokerCtx != DefaultBrokerContext: + var hasAny = false + for p in broker.buckets[bucketIdx].`argFieldName`: + if not p.isNil(): + hasAny = true + break + if not hasAny: + broker.buckets.delete(bucketIdx) + + proc removeProvider*( + _: typedesc[`typeIdent`], `removeHandleDefaultSym`: `providerHandleIdent` + ) = + removeProvider(`typeIdent`, DefaultBrokerContext, `removeHandleDefaultSym`) + + ) when defined(requestBrokerDebug): echo result.repr diff --git a/waku/common/broker/request_broker.nim b/waku/common/broker/request_broker.nim index dece77381..46f7d7d16 100644 --- a/waku/common/broker/request_broker.nim +++ b/waku/common/broker/request_broker.nim @@ -16,6 +16,18 @@ ## `async` mode is better to be used when you request date that may involve some long IO operation ## or action. ## +## Default vs. context aware use: +## Every generated broker is a thread-local global instance. This means each RequestBroker enables decoupled +## data exchange threadwise. Sometimes we use brokers inside a context - like inside a component that has many modules or subsystems. +## In case you would instantiate multiple such components in a single thread, and each component must has its own provider for the same RequestBroker type, +## in order to avoid provider collision, you can use context aware RequestBroker. +## Context awareness is supported through the `BrokerContext` argument for `setProvider`, `request`, `clearProvider` interfaces. 
+## Such use requires generating a new unique `BrokerContext` value per component instance and spreading it to all modules that use the brokers.
+## For example, store the `BrokerContext` as a field inside the top-level component instance and pass it to the subcomponents at initialization.
+##
+## The default broker context is defined as the `DefaultBrokerContext` constant. If you don't need context awareness, you can use the
+## interfaces without the context argument.
+##
## Usage:
##    Declare your desired request type inside a `RequestBroker` macro, add any number of fields.
##    Define the provider signature, that is enforced at compile time.
@@ -89,7 +101,13 @@
## After this, you can register a provider anywhere in your code with
## `TypeName.setProvider(...)`, which returns error if already having a provider.
## Providers are async procs/lambdas in default mode and sync procs in sync mode.
-## Only one provider can be registered at a time per signature type (zero arg and/or multi arg).
+##
+## Providers are stored as a broker-context keyed list:
+## - the default provider is always stored at index 0 (reserved broker context: 0)
+## - additional providers can be registered under arbitrary non-zero broker contexts
+##
+## The original `setProvider(handler)` / `request(...)` APIs continue to operate
+## on the default provider (broker context 0) for backward compatibility.
##
## Requests can be made from anywhere with no direct dependency on the provider by
## calling `TypeName.request()` - with arguments respecting the signature(s).
@@ -139,11 +157,12 @@
## automatically, so the caller only needs to provide the type definition.
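A minimal sketch of the per-component pattern described in the doc comments above. The type `NodeHealth`, its field, and the integer construction of `BrokerContext` are invented for illustration; only the call shapes (context-aware `setProvider`/`request` overloads, `Result[void, string]` from `setProvider`, single provider per context) follow what this patch generates:

```nim
# Hypothetical sketch -- `NodeHealth` and `BrokerContext(1)` are assumed.
let componentCtx = BrokerContext(1) # unique per component instance

# Register this component's provider; a second registration under the same
# context returns an error, mirroring the single-provider-per-context rule.
discard NodeHealth.setProvider(
  componentCtx,
  proc(): Future[Result[NodeHealth, string]] {.async.} =
    return ok(NodeHealth(healthy: true)),
)

# Context-aware request resolves only this component's provider.
let res = await NodeHealth.request(componentCtx)

# The original API keeps targeting the default provider (broker context 0).
discard await NodeHealth.request()
```

Two component instances in the same thread can thus each hold their own `BrokerContext` and never collide on the shared thread-local broker.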
import std/[macros, strutils] +from std/sequtils import keepItIf import chronos import results -import ./helper/broker_utils +import ./helper/broker_utils, broker_context -export results, chronos +export results, chronos, keepItIf, broker_context proc errorFuture[T](message: string): Future[Result[T, string]] {.inline.} = ## Build a future that is already completed with an error result. @@ -187,23 +206,6 @@ proc isReturnTypeValid(returnType, typeIdent: NimNode, mode: RequestBrokerMode): of rbSync: isSyncReturnTypeValid(returnType, typeIdent) -proc cloneParams(params: seq[NimNode]): seq[NimNode] = - ## Deep copy parameter definitions so they can be inserted in multiple places. - result = @[] - for param in params: - result.add(copyNimTree(param)) - -proc collectParamNames(params: seq[NimNode]): seq[NimNode] = - ## Extract all identifier symbols declared across IdentDefs nodes. - result = @[] - for param in params: - assert param.kind == nnkIdentDefs - for i in 0 ..< param.len - 2: - let nameNode = param[i] - if nameNode.kind == nnkEmpty: - continue - result.add(ident($nameNode)) - proc makeProcType( returnType: NimNode, params: seq[NimNode], mode: RequestBrokerMode ): NimNode = @@ -234,92 +236,13 @@ proc parseMode(modeNode: NimNode): RequestBrokerMode = else: error("RequestBroker mode must be `sync` or `async` (default is async)", modeNode) -proc ensureDistinctType(rhs: NimNode): NimNode = - ## For PODs / aliases / externally-defined types, wrap in `distinct` unless - ## it's already distinct. 
- if rhs.kind == nnkDistinctTy: - return copyNimTree(rhs) - newTree(nnkDistinctTy, copyNimTree(rhs)) - proc generateRequestBroker(body: NimNode, mode: RequestBrokerMode): NimNode = when defined(requestBrokerDebug): echo body.treeRepr echo "RequestBroker mode: ", $mode - var typeIdent: NimNode = nil - var objectDef: NimNode = nil - for stmt in body: - if stmt.kind == nnkTypeSection: - for def in stmt: - if def.kind != nnkTypeDef: - continue - if not typeIdent.isNil(): - error("Only one type may be declared inside RequestBroker", def) - - typeIdent = baseTypeIdent(def[0]) - let rhs = def[2] - - ## Support inline object types (fields are auto-exported) - ## AND non-object types / aliases (e.g. `string`, `int`, `OtherType`). - case rhs.kind - of nnkObjectTy: - let recList = rhs[2] - if recList.kind != nnkRecList: - error("RequestBroker object must declare a standard field list", rhs) - var exportedRecList = newTree(nnkRecList) - for field in recList: - case field.kind - of nnkIdentDefs: - ensureFieldDef(field) - var cloned = copyNimTree(field) - for i in 0 ..< cloned.len - 2: - cloned[i] = exportIdentNode(cloned[i]) - exportedRecList.add(cloned) - of nnkEmpty: - discard - else: - error( - "RequestBroker object definition only supports simple field declarations", - field, - ) - objectDef = newTree( - nnkObjectTy, copyNimTree(rhs[0]), copyNimTree(rhs[1]), exportedRecList - ) - of nnkRefTy: - if rhs.len != 1: - error("RequestBroker ref type must have a single base", rhs) - if rhs[0].kind == nnkObjectTy: - let obj = rhs[0] - let recList = obj[2] - if recList.kind != nnkRecList: - error("RequestBroker object must declare a standard field list", obj) - var exportedRecList = newTree(nnkRecList) - for field in recList: - case field.kind - of nnkIdentDefs: - ensureFieldDef(field) - var cloned = copyNimTree(field) - for i in 0 ..< cloned.len - 2: - cloned[i] = exportIdentNode(cloned[i]) - exportedRecList.add(cloned) - of nnkEmpty: - discard - else: - error( - "RequestBroker 
object definition only supports simple field declarations", - field, - ) - let exportedObjectType = newTree( - nnkObjectTy, copyNimTree(obj[0]), copyNimTree(obj[1]), exportedRecList - ) - objectDef = newTree(nnkRefTy, exportedObjectType) - else: - ## `ref SomeType` (SomeType can be defined elsewhere) - objectDef = ensureDistinctType(rhs) - else: - ## Non-object type / alias (e.g. `string`, `int`, `SomeExternalType`). - objectDef = ensureDistinctType(rhs) - if typeIdent.isNil(): - error("RequestBroker body must declare exactly one type", body) + let parsed = parseSingleTypeDef(body, "RequestBroker", allowRefToNonObject = true) + let typeIdent = parsed.typeIdent + let objectDef = parsed.objectDef when defined(requestBrokerDebug): echo "RequestBroker generating type: ", $typeIdent @@ -329,11 +252,9 @@ proc generateRequestBroker(body: NimNode, mode: RequestBrokerMode): NimNode = let typeNameLit = newLit(typeDisplayName) var zeroArgSig: NimNode = nil var zeroArgProviderName: NimNode = nil - var zeroArgFieldName: NimNode = nil var argSig: NimNode = nil var argParams: seq[NimNode] = @[] var argProviderName: NimNode = nil - var argFieldName: NimNode = nil for stmt in body: case stmt.kind @@ -368,7 +289,6 @@ proc generateRequestBroker(body: NimNode, mode: RequestBrokerMode): NimNode = error("Only one zero-argument signature is allowed", stmt) zeroArgSig = stmt zeroArgProviderName = ident(sanitizeIdentName(typeIdent) & "ProviderNoArgs") - zeroArgFieldName = ident("providerNoArgs") elif paramCount >= 1: if argSig != nil: error("Only one argument-based signature is allowed", stmt) @@ -391,7 +311,6 @@ proc generateRequestBroker(body: NimNode, mode: RequestBrokerMode): NimNode = error("Signature parameter must declare a name", paramDef) argParams.add(copyNimTree(paramDef)) argProviderName = ident(sanitizeIdentName(typeIdent) & "ProviderWithArgs") - argFieldName = ident("providerWithArgs") of nnkTypeSection, nnkEmpty: discard else: @@ -400,7 +319,6 @@ proc 
generateRequestBroker(body: NimNode, mode: RequestBrokerMode): NimNode = if zeroArgSig.isNil() and argSig.isNil(): zeroArgSig = newEmptyNode() zeroArgProviderName = ident(sanitizeIdentName(typeIdent) & "ProviderNoArgs") - zeroArgFieldName = ident("providerNoArgs") var typeSection = newTree(nnkTypeSection) typeSection.add(newTree(nnkTypeDef, exportedTypeIdent, newEmptyNode(), objectDef)) @@ -423,12 +341,29 @@ proc generateRequestBroker(body: NimNode, mode: RequestBrokerMode): NimNode = var brokerRecList = newTree(nnkRecList) if not zeroArgSig.isNil(): + let zeroArgProvidersFieldName = ident("providersNoArgs") + let zeroArgProvidersTupleTy = newTree( + nnkTupleTy, + newTree(nnkIdentDefs, ident("brokerCtx"), ident("BrokerContext"), newEmptyNode()), + newTree(nnkIdentDefs, ident("handler"), zeroArgProviderName, newEmptyNode()), + ) + let zeroArgProvidersSeqTy = + newTree(nnkBracketExpr, ident("seq"), zeroArgProvidersTupleTy) brokerRecList.add( - newTree(nnkIdentDefs, zeroArgFieldName, zeroArgProviderName, newEmptyNode()) + newTree( + nnkIdentDefs, zeroArgProvidersFieldName, zeroArgProvidersSeqTy, newEmptyNode() + ) ) if not argSig.isNil(): + let argProvidersFieldName = ident("providersWithArgs") + let argProvidersTupleTy = newTree( + nnkTupleTy, + newTree(nnkIdentDefs, ident("brokerCtx"), ident("BrokerContext"), newEmptyNode()), + newTree(nnkIdentDefs, ident("handler"), argProviderName, newEmptyNode()), + ) + let argProvidersSeqTy = newTree(nnkBracketExpr, ident("seq"), argProvidersTupleTy) brokerRecList.add( - newTree(nnkIdentDefs, argFieldName, argProviderName, newEmptyNode()) + newTree(nnkIdentDefs, argProvidersFieldName, argProvidersSeqTy, newEmptyNode()) ) let brokerTypeIdent = ident(sanitizeIdentName(typeIdent) & "Broker") let brokerTypeDef = newTree( @@ -443,31 +378,97 @@ proc generateRequestBroker(body: NimNode, mode: RequestBrokerMode): NimNode = let globalVarIdent = ident("g" & sanitizeIdentName(typeIdent) & "Broker") let accessProcIdent = ident("access" & 
sanitizeIdentName(typeIdent) & "Broker") + + var brokerNewBody = newStmtList() + if not zeroArgSig.isNil(): + brokerNewBody.add( + quote do: + result.providersNoArgs = + @[(brokerCtx: DefaultBrokerContext, handler: default(`zeroArgProviderName`))] + ) + if not argSig.isNil(): + brokerNewBody.add( + quote do: + result.providersWithArgs = + @[(brokerCtx: DefaultBrokerContext, handler: default(`argProviderName`))] + ) + + var brokerInitChecks = newStmtList() + if not zeroArgSig.isNil(): + brokerInitChecks.add( + quote do: + if `globalVarIdent`.providersNoArgs.len == 0: + `globalVarIdent` = `brokerTypeIdent`.new() + ) + if not argSig.isNil(): + brokerInitChecks.add( + quote do: + if `globalVarIdent`.providersWithArgs.len == 0: + `globalVarIdent` = `brokerTypeIdent`.new() + ) + result.add( quote do: var `globalVarIdent` {.threadvar.}: `brokerTypeIdent` + proc new(_: type `brokerTypeIdent`): `brokerTypeIdent` = + result = `brokerTypeIdent`() + `brokerNewBody` + proc `accessProcIdent`(): var `brokerTypeIdent` = + `brokerInitChecks` `globalVarIdent` ) - var clearBody = newStmtList() + var clearBodyKeyed = newStmtList() + let brokerCtxParamIdent = ident("brokerCtx") if not zeroArgSig.isNil(): + let zeroArgProvidersFieldName = ident("providersNoArgs") result.add( quote do: proc setProvider*( _: typedesc[`typeIdent`], handler: `zeroArgProviderName` ): Result[void, string] = - if not `accessProcIdent`().`zeroArgFieldName`.isNil(): + if not `accessProcIdent`().`zeroArgProvidersFieldName`[0].handler.isNil(): return err("Zero-arg provider already set") - `accessProcIdent`().`zeroArgFieldName` = handler + `accessProcIdent`().`zeroArgProvidersFieldName`[0].handler = handler return ok() ) - clearBody.add( + + result.add( quote do: - `accessProcIdent`().`zeroArgFieldName` = nil + proc setProvider*( + _: typedesc[`typeIdent`], + brokerCtx: BrokerContext, + handler: `zeroArgProviderName`, + ): Result[void, string] = + if brokerCtx == DefaultBrokerContext: + return 
setProvider(`typeIdent`, handler) + + for entry in `accessProcIdent`().`zeroArgProvidersFieldName`: + if entry.brokerCtx == brokerCtx: + return err( + "RequestBroker(" & `typeNameLit` & + "): provider already set for broker context " & $brokerCtx + ) + + `accessProcIdent`().`zeroArgProvidersFieldName`.add( + (brokerCtx: brokerCtx, handler: handler) + ) + return ok() + + ) + clearBodyKeyed.add( + quote do: + if `brokerCtxParamIdent` == DefaultBrokerContext: + `accessProcIdent`().`zeroArgProvidersFieldName`[0].handler = + default(`zeroArgProviderName`) + else: + `accessProcIdent`().`zeroArgProvidersFieldName`.keepItIf( + it.brokerCtx != `brokerCtxParamIdent` + ) ) case mode of rbAsync: @@ -476,11 +477,34 @@ proc generateRequestBroker(body: NimNode, mode: RequestBrokerMode): NimNode = proc request*( _: typedesc[`typeIdent`] ): Future[Result[`typeIdent`, string]] {.async: (raises: []).} = - let provider = `accessProcIdent`().`zeroArgFieldName` + return await request(`typeIdent`, DefaultBrokerContext) + + ) + + result.add( + quote do: + proc request*( + _: typedesc[`typeIdent`], brokerCtx: BrokerContext + ): Future[Result[`typeIdent`, string]] {.async: (raises: []).} = + var provider: `zeroArgProviderName` + if brokerCtx == DefaultBrokerContext: + provider = `accessProcIdent`().`zeroArgProvidersFieldName`[0].handler + else: + for entry in `accessProcIdent`().`zeroArgProvidersFieldName`: + if entry.brokerCtx == brokerCtx: + provider = entry.handler + break + if provider.isNil(): + if brokerCtx == DefaultBrokerContext: + return err( + "RequestBroker(" & `typeNameLit` & "): no zero-arg provider registered" + ) return err( - "RequestBroker(" & `typeNameLit` & "): no zero-arg provider registered" + "RequestBroker(" & `typeNameLit` & + "): no provider registered for broker context " & $brokerCtx ) + let catchedRes = catch: await provider() @@ -507,10 +531,32 @@ proc generateRequestBroker(body: NimNode, mode: RequestBrokerMode): NimNode = proc request*( _: 
typedesc[`typeIdent`] ): Result[`typeIdent`, string] {.gcsafe, raises: [].} = - let provider = `accessProcIdent`().`zeroArgFieldName` + return request(`typeIdent`, DefaultBrokerContext) + + ) + + result.add( + quote do: + proc request*( + _: typedesc[`typeIdent`], brokerCtx: BrokerContext + ): Result[`typeIdent`, string] {.gcsafe, raises: [].} = + var provider: `zeroArgProviderName` + if brokerCtx == DefaultBrokerContext: + provider = `accessProcIdent`().`zeroArgProvidersFieldName`[0].handler + else: + for entry in `accessProcIdent`().`zeroArgProvidersFieldName`: + if entry.brokerCtx == brokerCtx: + provider = entry.handler + break + if provider.isNil(): + if brokerCtx == DefaultBrokerContext: + return err( + "RequestBroker(" & `typeNameLit` & "): no zero-arg provider registered" + ) return err( - "RequestBroker(" & `typeNameLit` & "): no zero-arg provider registered" + "RequestBroker(" & `typeNameLit` & + "): no provider registered for broker context " & $brokerCtx ) var providerRes: Result[`typeIdent`, string] @@ -533,24 +579,54 @@ proc generateRequestBroker(body: NimNode, mode: RequestBrokerMode): NimNode = ) if not argSig.isNil(): + let argProvidersFieldName = ident("providersWithArgs") result.add( quote do: proc setProvider*( _: typedesc[`typeIdent`], handler: `argProviderName` ): Result[void, string] = - if not `accessProcIdent`().`argFieldName`.isNil(): + if not `accessProcIdent`().`argProvidersFieldName`[0].handler.isNil(): return err("Provider already set") - `accessProcIdent`().`argFieldName` = handler + `accessProcIdent`().`argProvidersFieldName`[0].handler = handler return ok() ) - clearBody.add( + + result.add( quote do: - `accessProcIdent`().`argFieldName` = nil + proc setProvider*( + _: typedesc[`typeIdent`], + brokerCtx: BrokerContext, + handler: `argProviderName`, + ): Result[void, string] = + if brokerCtx == DefaultBrokerContext: + return setProvider(`typeIdent`, handler) + + for entry in `accessProcIdent`().`argProvidersFieldName`: + if 
entry.brokerCtx == brokerCtx: + return err( + "RequestBroker(" & `typeNameLit` & + "): provider already set for broker context " & $brokerCtx + ) + + `accessProcIdent`().`argProvidersFieldName`.add( + (brokerCtx: brokerCtx, handler: handler) + ) + return ok() + + ) + clearBodyKeyed.add( + quote do: + if `brokerCtxParamIdent` == DefaultBrokerContext: + `accessProcIdent`().`argProvidersFieldName`[0].handler = + default(`argProviderName`) + else: + `accessProcIdent`().`argProvidersFieldName`.keepItIf( + it.brokerCtx != `brokerCtxParamIdent` + ) ) let requestParamDefs = cloneParams(argParams) let argNameIdents = collectParamNames(requestParamDefs) - let providerSym = genSym(nskLet, "provider") var formalParams = newTree(nnkFormalParams) formalParams.add(copyNimTree(returnType)) formalParams.add( @@ -572,29 +648,96 @@ proc generateRequestBroker(body: NimNode, mode: RequestBrokerMode): NimNode = of rbSync: quote: {.gcsafe, raises: [].} - var providerCall = newCall(providerSym) + + var forwardCall = newCall(ident("request")) + forwardCall.add(copyNimTree(typeIdent)) + forwardCall.add(ident("DefaultBrokerContext")) for argName in argNameIdents: - providerCall.add(argName) + forwardCall.add(argName) + var requestBody = newStmtList() - requestBody.add( - quote do: - let `providerSym` = `accessProcIdent`().`argFieldName` + case mode + of rbAsync: + requestBody.add( + quote do: + return await `forwardCall` + ) + of rbSync: + requestBody.add( + quote do: + return `forwardCall` + ) + + result.add( + newTree( + nnkProcDef, + postfix(ident("request"), "*"), + newEmptyNode(), + newEmptyNode(), + formalParams, + requestPragmas, + newEmptyNode(), + requestBody, + ) ) - requestBody.add( + + # Keyed request variant for the argument-based signature. 
+ let requestParamDefsKeyed = cloneParams(argParams) + let argNameIdentsKeyed = collectParamNames(requestParamDefsKeyed) + let providerSymKeyed = genSym(nskVar, "provider") + var formalParamsKeyed = newTree(nnkFormalParams) + formalParamsKeyed.add(copyNimTree(returnType)) + formalParamsKeyed.add( + newTree( + nnkIdentDefs, + ident("_"), + newTree(nnkBracketExpr, ident("typedesc"), copyNimTree(typeIdent)), + newEmptyNode(), + ) + ) + formalParamsKeyed.add( + newTree(nnkIdentDefs, ident("brokerCtx"), ident("BrokerContext"), newEmptyNode()) + ) + for paramDef in requestParamDefsKeyed: + formalParamsKeyed.add(paramDef) + + let requestPragmasKeyed = requestPragmas + var providerCallKeyed = newCall(providerSymKeyed) + for argName in argNameIdentsKeyed: + providerCallKeyed.add(argName) + + var requestBodyKeyed = newStmtList() + requestBodyKeyed.add( quote do: - if `providerSym`.isNil(): + var `providerSymKeyed`: `argProviderName` + if brokerCtx == DefaultBrokerContext: + `providerSymKeyed` = `accessProcIdent`().`argProvidersFieldName`[0].handler + else: + for entry in `accessProcIdent`().`argProvidersFieldName`: + if entry.brokerCtx == brokerCtx: + `providerSymKeyed` = entry.handler + break + ) + requestBodyKeyed.add( + quote do: + if `providerSymKeyed`.isNil(): + if brokerCtx == DefaultBrokerContext: + return err( + "RequestBroker(" & `typeNameLit` & + "): no provider registered for input signature" + ) return err( "RequestBroker(" & `typeNameLit` & - "): no provider registered for input signature" + "): no provider registered for broker context " & $brokerCtx ) ) case mode of rbAsync: - requestBody.add( + requestBodyKeyed.add( quote do: let catchedRes = catch: - await `providerCall` + await `providerCallKeyed` if catchedRes.isErr(): return err( "RequestBroker(" & `typeNameLit` & "): provider threw exception: " & @@ -612,11 +755,11 @@ proc generateRequestBroker(body: NimNode, mode: RequestBrokerMode): NimNode = return providerRes ) of rbSync: - requestBody.add( + 
requestBodyKeyed.add( quote do: var providerRes: Result[`typeIdent`, string] try: - providerRes = `providerCall` + providerRes = `providerCallKeyed` except CatchableError as e: return err( "RequestBroker(" & `typeNameLit` & "): provider threw exception: " & e.msg @@ -631,24 +774,52 @@ proc generateRequestBroker(body: NimNode, mode: RequestBrokerMode): NimNode = ) return providerRes ) - # requestBody.add(providerCall) + result.add( newTree( nnkProcDef, postfix(ident("request"), "*"), newEmptyNode(), newEmptyNode(), - formalParams, - requestPragmas, + formalParamsKeyed, + requestPragmasKeyed, newEmptyNode(), - requestBody, + requestBodyKeyed, + ) + ) + + block: + var formalParamsClearKeyed = newTree(nnkFormalParams) + formalParamsClearKeyed.add(newEmptyNode()) + formalParamsClearKeyed.add( + newTree( + nnkIdentDefs, + ident("_"), + newTree(nnkBracketExpr, ident("typedesc"), copyNimTree(typeIdent)), + newEmptyNode(), + ) + ) + formalParamsClearKeyed.add( + newTree(nnkIdentDefs, brokerCtxParamIdent, ident("BrokerContext"), newEmptyNode()) + ) + + result.add( + newTree( + nnkProcDef, + postfix(ident("clearProvider"), "*"), + newEmptyNode(), + newEmptyNode(), + formalParamsClearKeyed, + newEmptyNode(), + newEmptyNode(), + clearBodyKeyed, ) ) result.add( quote do: proc clearProvider*(_: typedesc[`typeIdent`]) = - `clearBody` + clearProvider(`typeIdent`, DefaultBrokerContext) ) From 91b4c5f52ee37f0db7ea2714ea591bcf4f26b5fe Mon Sep 17 00:00:00 2001 From: Darshan <35736874+darshankabariya@users.noreply.github.com> Date: Sat, 17 Jan 2026 17:05:25 +0530 Subject: [PATCH 36/70] fix: store protocol issue in v0.37.0 (#3657) --- tests/testlib/wakunode.nim | 4 +- tools/confutils/cli_args.nim | 18 +++------ .../conf_builder/waku_conf_builder.nim | 16 +++++--- waku/node/peer_manager/peer_manager.nim | 2 +- waku/waku_archive/archive.nim | 10 +++++ waku/waku_store/client.nim | 39 +++++++++++++++---- 6 files changed, 59 insertions(+), 30 deletions(-) diff --git 
a/tests/testlib/wakunode.nim b/tests/testlib/wakunode.nim index ef6ba2b24..36aacce03 100644 --- a/tests/testlib/wakunode.nim +++ b/tests/testlib/wakunode.nim @@ -34,8 +34,8 @@ proc defaultTestWakuConfBuilder*(): WakuConfBuilder = @[parseIpAddress("1.1.1.1"), parseIpAddress("1.0.0.1")] ) builder.withNatStrategy("any") - builder.withMaxConnections(50) - builder.withRelayServiceRatio("60:40") + builder.withMaxConnections(150) + builder.withRelayServiceRatio("50:50") builder.withMaxMessageSize("1024 KiB") builder.withClusterId(DefaultClusterId) builder.withSubscribeShards(@[DefaultShardId]) diff --git a/tools/confutils/cli_args.nim b/tools/confutils/cli_args.nim index e6b3fc97d..6811e335f 100644 --- a/tools/confutils/cli_args.nim +++ b/tools/confutils/cli_args.nim @@ -206,22 +206,17 @@ type WakuNodeConf* = object .}: bool maxConnections* {. - desc: "Maximum allowed number of libp2p connections.", - defaultValue: 50, + desc: + "Maximum allowed number of libp2p connections. The default (150) is the recommended value for better connectivity.", + defaultValue: 150, name: "max-connections" .}: int - maxRelayPeers* {. - desc: - "Deprecated. Use relay-service-ratio instead. It represents the maximum allowed number of relay peers.", - name: "max-relay-peers" - .}: Option[int] relayServiceRatio* {. desc: "This percentage ratio represents the relay peers to service peers. For example, 60:40, tells that 60% of the max-connections will be used for relay protocol and the other 40% of max-connections will be reserved for other service protocols (e.g., filter, lightpush, store, metadata, etc.)", - name: "relay-service-ratio", - defaultValue: "60:40" # 60:40 ratio of relay to service peers + defaultValue: "50:50", + name: "relay-service-ratio" .}: string colocationLimit* {.
@@ -957,9 +952,6 @@ proc toWakuConf*(n: WakuNodeConf): ConfResult[WakuConf] = b.withExtMultiAddrsOnly(n.extMultiAddrsOnly) b.withMaxConnections(n.maxConnections) - if n.maxRelayPeers.isSome(): - b.withMaxRelayPeers(n.maxRelayPeers.get()) - if n.relayServiceRatio != "": b.withRelayServiceRatio(n.relayServiceRatio) b.withColocationLimit(n.colocationLimit) diff --git a/waku/factory/conf_builder/waku_conf_builder.nim b/waku/factory/conf_builder/waku_conf_builder.nim index f3f942ecc..b952e711e 100644 --- a/waku/factory/conf_builder/waku_conf_builder.nim +++ b/waku/factory/conf_builder/waku_conf_builder.nim @@ -30,6 +30,8 @@ import logScope: topics = "waku conf builder" +const DefaultMaxConnections* = 150 + type MaxMessageSizeKind* = enum mmskNone mmskStr @@ -248,9 +250,6 @@ proc withAgentString*(b: var WakuConfBuilder, agentString: string) = proc withColocationLimit*(b: var WakuConfBuilder, colocationLimit: int) = b.colocationLimit = some(colocationLimit) -proc withMaxRelayPeers*(b: var WakuConfBuilder, maxRelayPeers: int) = - b.maxRelayPeers = some(maxRelayPeers) - proc withRelayServiceRatio*(b: var WakuConfBuilder, relayServiceRatio: string) = b.relayServiceRatio = some(relayServiceRatio) @@ -592,8 +591,13 @@ proc build*( if builder.maxConnections.isSome(): builder.maxConnections.get() else: - warn "Max Connections was not specified, defaulting to 300" - 300 + warn "Max connections not specified, defaulting to DefaultMaxConnections", + default = DefaultMaxConnections + DefaultMaxConnections + + if maxConnections < DefaultMaxConnections: + warn "max-connections less than DefaultMaxConnections; we suggest using DefaultMaxConnections or more for better connectivity", + provided = maxConnections, recommended = DefaultMaxConnections # TODO: Do the git version thing here let agentString = builder.agentString.get("nwaku") @@ -663,7 +667,7 @@ proc build*( agentString: agentString, colocationLimit: colocationLimit, maxRelayPeers: builder.maxRelayPeers, - relayServiceRatio: 
builder.relayServiceRatio.get("60:40"), + relayServiceRatio: builder.relayServiceRatio.get("50:50"), rateLimit: rateLimit, circuitRelayClient: builder.circuitRelayClient.get(false), staticNodes: builder.staticNodes, diff --git a/waku/node/peer_manager/peer_manager.nim b/waku/node/peer_manager/peer_manager.nim index 487d3894d..bdb68905e 100644 --- a/waku/node/peer_manager/peer_manager.nim +++ b/waku/node/peer_manager/peer_manager.nim @@ -1041,7 +1041,7 @@ proc new*( wakuMetadata: WakuMetadata = nil, maxRelayPeers: Option[int] = none(int), maxServicePeers: Option[int] = none(int), - relayServiceRatio: string = "60:40", + relayServiceRatio: string = "50:50", storage: PeerStorage = nil, initialBackoffInSec = InitialBackoffInSec, backoffFactor = BackoffFactor, diff --git a/waku/waku_archive/archive.nim b/waku/waku_archive/archive.nim index 707c757a3..8eb1fc051 100644 --- a/waku/waku_archive/archive.nim +++ b/waku/waku_archive/archive.nim @@ -61,9 +61,19 @@ proc validate*(msg: WakuMessage): Result[void, string] = upperBound = now + MaxMessageTimestampVariance if msg.timestamp < lowerBound: + warn "rejecting message with old timestamp", + msgTimestamp = msg.timestamp, + lowerBound = lowerBound, + now = now, + drift = (now - msg.timestamp) div 1_000_000_000 return err(invalidMessageOld) if upperBound < msg.timestamp: + warn "rejecting message with future timestamp", + msgTimestamp = msg.timestamp, + upperBound = upperBound, + now = now, + drift = (msg.timestamp - now) div 1_000_000_000 return err(invalidMessageFuture) return ok() diff --git a/waku/waku_store/client.nim b/waku/waku_store/client.nim index 308d7f98e..5b261af47 100644 --- a/waku/waku_store/client.nim +++ b/waku/waku_store/client.nim @@ -1,6 +1,12 @@ {.push raises: [].} -import std/[options, tables], results, chronicles, chronos, metrics, bearssl/rand +import + std/[options, tables, sequtils, algorithm], + results, + chronicles, + chronos, + metrics, + bearssl/rand import ../node/peer_manager, 
../utils/requests, ./protocol_metrics, ./common, ./rpc_codec @@ -10,6 +16,8 @@ logScope: const DefaultPageSize*: uint = 20 # A recommended default number of waku messages per page +const MaxQueryRetries = 5 # Maximum number of store peers to try before giving up + type WakuStoreClient* = ref object peerManager: PeerManager rng: ref rand.HmacDrbgContext @@ -79,18 +87,33 @@ proc query*( proc queryToAny*( self: WakuStoreClient, request: StoreQueryRequest, peerId = none(PeerId) ): Future[StoreQueryResult] {.async.} = - ## This proc is similar to the query one but in this case - ## we don't specify a particular peer and instead we get it from peer manager + ## we don't specify a particular peer and instead we get it from peer manager. + ## It will retry with different store peers if the dial fails. if request.paginationCursor.isSome() and request.paginationCursor.get() == EmptyCursor: return err(StoreError(kind: ErrorCode.BAD_REQUEST, cause: "invalid cursor")) - let peer = self.peerManager.selectPeer(WakuStoreCodec).valueOr: + # Get all available store peers + var peers = self.peerManager.switch.peerStore.getPeersByProtocol(WakuStoreCodec) + if peers.len == 0: return err(StoreError(kind: BAD_RESPONSE, cause: "no service store peer connected")) - let connection = (await self.peerManager.dialPeer(peer, WakuStoreCodec)).valueOr: - waku_store_errors.inc(labelValues = [DialFailure]) + # Shuffle to distribute load and limit retries + let peersToTry = peers[0 ..< min(peers.len, MaxQueryRetries)] - return err(StoreError(kind: ErrorCode.PEER_DIAL_FAILURE, address: $peer)) + var lastError: StoreError + for peer in peersToTry: + let connection = (await self.peerManager.dialPeer(peer, WakuStoreCodec)).valueOr: + waku_store_errors.inc(labelValues = [DialFailure]) + warn "failed to dial store peer, trying next" + lastError = StoreError(kind: ErrorCode.PEER_DIAL_FAILURE, address: $peer) + continue - return await self.sendStoreRequest(request, connection) + let response = (await 
self.sendStoreRequest(request, connection)).valueOr: + warn "store query failed, trying next peer", peerId = peer.peerId, error = $error + lastError = error + continue + + return ok(response) + + return err(lastError) From a561ec3a3843defcb1dce01bd7878a1f6a7c4743 Mon Sep 17 00:00:00 2001 From: markoburcul Date: Tue, 20 Jan 2026 09:29:07 +0100 Subject: [PATCH 37/70] nix: add libwaku target, fix compiling Nim using NBS Use Nim built by NBS otherwise it doesn't work for both libwaku and wakucanary. Referenced issue: * https://github.com/status-im/status-go/issues/7152 --- flake.nix | 13 ++++++++++++- nix/checksums.nix | 2 +- nix/default.nix | 35 +++++++++++++++++++++++++---------- nix/nimble.nix | 2 +- nix/shell.nix | 1 + nix/zippy.nix | 9 +++++++++ 6 files changed, 49 insertions(+), 13 deletions(-) create mode 100644 nix/zippy.nix diff --git a/flake.nix b/flake.nix index 72eaebef1..93d7d23f0 100644 --- a/flake.nix +++ b/flake.nix @@ -1,5 +1,5 @@ { - description = "NWaku build flake"; + description = "Logos Messaging Nim build flake"; nixConfig = { extra-substituters = [ "https://nix-cache.status.im/" ]; @@ -54,10 +54,21 @@ zerokitRln = zerokit.packages.${system}.rln-android-arm64; }; + libwaku = pkgs.callPackage ./nix/default.nix { + inherit stableSystems; + src = self; + targets = ["libwaku"]; + # We are not able to compile the code with nim-unwrapped-2_0 + useSystemNim = false; + zerokitRln = zerokit.packages.${system}.rln; + }; + wakucanary = pkgs.callPackage ./nix/default.nix { inherit stableSystems; src = self; targets = ["wakucanary"]; + # We are not able to compile the code with nim-unwrapped-2_0 + useSystemNim = false; zerokitRln = zerokit.packages.${system}.rln; }; diff --git a/nix/checksums.nix b/nix/checksums.nix index 510f2b41a..c9c9f3d45 100644 --- a/nix/checksums.nix +++ b/nix/checksums.nix @@ -8,5 +8,5 @@ in pkgs.fetchFromGitHub { repo = "checksums"; rev = tools.findKeyValue "^ +ChecksumsStableCommit = \"([a-f0-9]+)\".*$" sourceFile; # WARNING: 
Requires manual updates when Nim compiler version changes. - hash = "sha256-Bm5iJoT2kAvcTexiLMFBa9oU5gf7d4rWjo3OiN7obWQ="; + hash = "sha256-JZhWqn4SrAgNw/HLzBK0rrj3WzvJ3Tv1nuDMn83KoYY="; } diff --git a/nix/default.nix b/nix/default.nix index d78f9935f..73838a4a1 100644 --- a/nix/default.nix +++ b/nix/default.nix @@ -23,7 +23,7 @@ let in stdenv.mkDerivation rec { - pname = "nwaku"; + pname = "logos-messaging-nim"; version = "1.0.0-${revision}"; @@ -70,6 +70,7 @@ in stdenv.mkDerivation rec { "QUICK_AND_DIRTY_COMPILER=${if quickAndDirty then "1" else "0"}" "QUICK_AND_DIRTY_NIMBLE=${if quickAndDirty then "1" else "0"}" "USE_SYSTEM_NIM=${if useSystemNim then "1" else "0"}" + "LIBRLN_FILE=${zerokitRln}/target/release/librln.a" ]; configurePhase = '' @@ -78,19 +79,28 @@ in stdenv.mkDerivation rec { make nimbus-build-system-nimble-dir ''; + # For the Nim v2.2.4 built with NBS we added sat and zippy preBuild = '' ln -s waku.nimble waku.nims + + ${lib.optionalString (!useSystemNim) '' pushd vendor/nimbus-build-system/vendor/Nim + mkdir dist - cp -r ${callPackage ./nimble.nix {}} dist/nimble - cp -r ${callPackage ./checksums.nix {}} dist/checksums - cp -r ${callPackage ./csources.nix {}} csources_v2 + mkdir -p dist/nimble/vendor/sat + mkdir -p dist/nimble/vendor/checksums + mkdir -p dist/nimble/vendor/zippy + + cp -r ${callPackage ./nimble.nix {}}/. dist/nimble + cp -r ${callPackage ./checksums.nix {}}/. dist/checksums + cp -r ${callPackage ./csources.nix {}}/. csources_v2 + cp -r ${callPackage ./sat.nix {}}/. dist/nimble/vendor/sat + cp -r ${callPackage ./checksums.nix {}}/. dist/nimble/vendor/checksums + cp -r ${callPackage ./zippy.nix {}}/. 
dist/nimble/vendor/zippy chmod 777 -R dist/nimble csources_v2 + popd - cp -r ${zerokitRln}/target vendor/zerokit/ - find vendor/zerokit/target - # FIXME - cp vendor/zerokit/target/*/release/librln.a librln_v${zerokitRln.version}.a + ''} ''; installPhase = if abidir != null then '' @@ -99,8 +109,13 @@ in stdenv.mkDerivation rec { echo '${androidManifest}' > $out/jni/AndroidManifest.xml cd $out && zip -r libwaku.aar * '' else '' - mkdir -p $out/bin - cp -r build/* $out/bin + mkdir -p $out/bin $out/include + + # Copy library files + cp build/* $out/bin/ 2>/dev/null || true + + # Copy the header file + cp library/libwaku.h $out/include/ ''; meta = with pkgs.lib; { diff --git a/nix/nimble.nix b/nix/nimble.nix index f9d87da6d..337ecd672 100644 --- a/nix/nimble.nix +++ b/nix/nimble.nix @@ -8,5 +8,5 @@ in pkgs.fetchFromGitHub { repo = "nimble"; rev = tools.findKeyValue "^ +NimbleStableCommit = \"([a-f0-9]+)\".*$" sourceFile; # WARNING: Requires manual updates when Nim compiler version changes. - hash = "sha256-MVHf19UbOWk8Zba2scj06PxdYYOJA6OXrVyDQ9Ku6Us="; + hash = "sha256-8iutVgNzDtttZ7V+7S11KfLEuwhKA9TsgS51mlUI08k="; } diff --git a/nix/shell.nix b/nix/shell.nix index 0db73dc25..fe0b065b4 100644 --- a/nix/shell.nix +++ b/nix/shell.nix @@ -16,6 +16,7 @@ pkgs.mkShell { git cargo rustup + rustc cmake nim-unwrapped-2_0 ]; diff --git a/nix/zippy.nix b/nix/zippy.nix new file mode 100644 index 000000000..ec59dfc07 --- /dev/null +++ b/nix/zippy.nix @@ -0,0 +1,9 @@ +{ pkgs }: + +pkgs.fetchFromGitHub { + owner = "guzba"; + repo = "zippy"; + rev = "a99f6a7d8a8e3e0213b3cad0daf0ea974bf58e3f"; + # WARNING: Requires manual updates when Nim compiler version changes. 
+ hash = "sha256-e2ma2Oyp0dlNx8pJsdZl5o5KnaoAX87tqfY0RLG3DZs="; +} \ No newline at end of file From a02aaab53c90087f2477b8f77b1a1da9b4bf98c6 Mon Sep 17 00:00:00 2001 From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> Date: Mon, 26 Jan 2026 16:32:07 +0100 Subject: [PATCH 38/70] bump nim-ffi to v0.1.3 (#3696) --- vendor/nim-ffi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/vendor/nim-ffi b/vendor/nim-ffi index d7a549212..06111de15 160000 --- a/vendor/nim-ffi +++ b/vendor/nim-ffi @@ -1 +1 @@ -Subproject commit d7a5492121aad190cf549436836e2fa42e34ff9b +Subproject commit 06111de155253b34e47ed2aaed1d61d08d62cc1b From 74b19c05d1fead6cc3b523b2aa1337d70b142bfe Mon Sep 17 00:00:00 2001 From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> Date: Thu, 29 Jan 2026 15:48:34 +0100 Subject: [PATCH 39/70] simple refactor to reduce PRs CI load (#3701) * add discord notification in ci-daily --- .github/workflows/ci-daily.yml | 79 +++++++++++++++++++ .github/workflows/ci.yml | 2 +- Makefile | 6 +- library/kernel_api/debug_node_api.nim | 3 +- .../test_waku_filter_dos_protection.nim | 3 +- tests/waku_lightpush/test_ratelimit.nim | 5 +- 6 files changed, 90 insertions(+), 8 deletions(-) create mode 100644 .github/workflows/ci-daily.yml diff --git a/.github/workflows/ci-daily.yml b/.github/workflows/ci-daily.yml new file mode 100644 index 000000000..236bd5216 --- /dev/null +++ b/.github/workflows/ci-daily.yml @@ -0,0 +1,79 @@ +name: Daily logos-messaging-nim CI + +on: + schedule: + - cron: '30 6 * * *' + +env: + NPROC: 2 + MAKEFLAGS: "-j${NPROC}" + NIMFLAGS: "--parallelBuild:${NPROC} --colors:off -d:chronicles_colors:none" + +jobs: + build: + strategy: + fail-fast: false + matrix: + os: [ubuntu-22.04, macos-15] + runs-on: ${{ matrix.os }} + timeout-minutes: 45 + + name: build-${{ matrix.os }} + steps: + - name: Checkout code + uses: actions/checkout@v4 + + - name: Get submodules hash + id: submodules + run: | + echo "hash=$(git submodule 
status | awk '{print $1}' | sort | shasum -a 256 | sed 's/[ -]*//g')" >> $GITHUB_OUTPUT + + - name: Cache submodules + uses: actions/cache@v3 + with: + path: | + vendor/ + .git/modules + key: ${{ runner.os }}-vendor-modules-${{ steps.submodules.outputs.hash }} + + - name: Make update + run: make update + + - name: Build binaries + run: make V=1 QUICK_AND_DIRTY_COMPILER=1 examples tools + + - name: Notify Discord + if: always() + env: + DISCORD_WEBHOOK_URL: ${{ secrets.DISCORD_WEBHOOK_URL }} + run: | + STATUS="${{ job.status }}" + OS="${{ matrix.os }}" + REPO="${{ github.repository }}" + RUN_URL="https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}" + + if [ "$STATUS" = "success" ]; then + COLOR=3066993 + TITLE="✅ CI Success" + else + COLOR=15158332 + TITLE="❌ CI Failed" + fi + + curl -H "Content-Type: application/json" \ + -X POST \ + -d "{ + \"embeds\": [{ + \"title\": \"$TITLE\", + \"color\": $COLOR, + \"fields\": [ + {\"name\": \"Repository\", \"value\": \"$REPO\", \"inline\": true}, + {\"name\": \"OS\", \"value\": \"$OS\", \"inline\": true}, + {\"name\": \"Status\", \"value\": \"$STATUS\", \"inline\": true} + ], + \"url\": \"$RUN_URL\", + \"footer\": {\"text\": \"Daily logos-messaging-nim CI\"} + }] + }" \ + "$DISCORD_WEBHOOK_URL" + diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index da8383e43..3de6eb4f5 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -80,7 +80,7 @@ jobs: run: make update - name: Build binaries - run: make V=1 QUICK_AND_DIRTY_COMPILER=1 all tools + run: make V=1 QUICK_AND_DIRTY_COMPILER=1 all build-windows: needs: changes diff --git a/Makefile b/Makefile index 87bd7bc74..9e241bbfa 100644 --- a/Makefile +++ b/Makefile @@ -51,10 +51,12 @@ endif ########## ## Main ## ########## -.PHONY: all test update clean +.PHONY: all test update clean examples # default target, because it's the first one that doesn't start with '.' 
-all: | wakunode2 example2 chat2 chat2bridge libwaku +all: | wakunode2 libwaku + +examples: | example2 chat2 chat2bridge test_file := $(word 2,$(MAKECMDGOALS)) define test_name diff --git a/library/kernel_api/debug_node_api.nim b/library/kernel_api/debug_node_api.nim index 98f5332b4..9d5a7f134 100644 --- a/library/kernel_api/debug_node_api.nim +++ b/library/kernel_api/debug_node_api.nim @@ -8,7 +8,8 @@ import libp2p/peerid, metrics, ffi -import waku/factory/waku, waku/node/waku_node, waku/node/health_monitor, library/declare_lib +import + waku/factory/waku, waku/node/waku_node, waku/node/health_monitor, library/declare_lib proc getMultiaddresses(node: WakuNode): seq[string] = return node.info().listenAddresses diff --git a/tests/waku_filter_v2/test_waku_filter_dos_protection.nim b/tests/waku_filter_v2/test_waku_filter_dos_protection.nim index fd3d8c837..be92fc409 100644 --- a/tests/waku_filter_v2/test_waku_filter_dos_protection.nim +++ b/tests/waku_filter_v2/test_waku_filter_dos_protection.nim @@ -217,7 +217,8 @@ suite "Waku Filter - DOS protection": for fut in finished: check not fut.failed() let pingRes = fut.read() - if pingRes.isErr() and pingRes.error().kind == FilterSubscribeErrorKind.TOO_MANY_REQUESTS: + if pingRes.isErr() and + pingRes.error().kind == FilterSubscribeErrorKind.TOO_MANY_REQUESTS: gotTooMany = true break diff --git a/tests/waku_lightpush/test_ratelimit.nim b/tests/waku_lightpush/test_ratelimit.nim index bdab3f074..ffbd1a06d 100644 --- a/tests/waku_lightpush/test_ratelimit.nim +++ b/tests/waku_lightpush/test_ratelimit.nim @@ -122,9 +122,8 @@ suite "Rate limited push service": # ensure period of time has passed and the client can again use the service await sleepAsync(tokenPeriod + 100.millis) - let recoveryRes = await client.publish( - some(DefaultPubsubTopic), fakeWakuMessage(), serverPeerId - ) + let recoveryRes = + await client.publish(some(DefaultPubsubTopic), fakeWakuMessage(), serverPeerId) check recoveryRes.isOk() ## Cleanup From 
361d914f8794948d094867e51884910467643838 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Jakub=20Soko=C5=82owski?= Date: Fri, 23 Jan 2026 14:05:58 +0100 Subject: [PATCH 40/70] nix: pin nixpkgs commit to same as status-go MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This avoids fetching different nixpkgs versions. Signed-off-by: Jakub Sokołowski --- flake.nix | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/flake.nix b/flake.nix index 93d7d23f0..3427ff6ac 100644 --- a/flake.nix +++ b/flake.nix @@ -7,7 +7,9 @@ }; inputs = { - nixpkgs.url = "github:NixOS/nixpkgs?rev=f44bd8ca21e026135061a0a57dcf3d0775b67a49"; + # We are pinning the commit because ultimately we want to use the same commit across different projects. + # A commit from the nixpkgs 24.11 release: https://github.com/NixOS/nixpkgs/tree/release-24.11 + nixpkgs.url = "github:NixOS/nixpkgs/0ef228213045d2cdb5a169a95d63ded38670b293"; zerokit = { url = "github:vacp2p/zerokit?rev=dc0b31752c91e7b4fefc441cfa6a8210ad7dba7b"; inputs.nixpkgs.follows = "nixpkgs"; From 538b279b947e997606ab43dd556c0f40e40166d1 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Jakub=20Soko=C5=82owski?= Date: Fri, 23 Jan 2026 14:06:30 +0100 Subject: [PATCH 41/70] nix: drop unnecessary assert for Android SDK on macOS MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Newer nixpkgs should have Android SDK for aarch64.
Signed-off-by: Jakub Sokołowski --- .github/workflows/ci-nix.yml | 48 +++++++++++++++++++++++++++ Makefile | 17 ++++++---- flake.lock | 28 ++++++++-------- flake.nix | 9 ++--- nix/default.nix | 56 ++++++++++---------------- nix/pkgs/android-sdk/compose.nix | 7 ++-- nix/shell.nix | 19 ++++------- scripts/build_rln_android.sh | 1 - vendor/zerokit | 2 +- 9 files changed, 104 insertions(+), 83 deletions(-) create mode 100644 .github/workflows/ci-nix.yml diff --git a/.github/workflows/ci-nix.yml b/.github/workflows/ci-nix.yml new file mode 100644 index 000000000..8fc7ac985 --- /dev/null +++ b/.github/workflows/ci-nix.yml @@ -0,0 +1,48 @@ +name: ci / nix +permissions: + contents: read + pull-requests: read + checks: write +on: + pull_request: + branches: [master] + +jobs: + build: + strategy: + fail-fast: false + matrix: + system: + - aarch64-darwin + - x86_64-linux + nixpkg: + - libwaku + - libwaku-android-arm64 + - wakucanary + + exclude: + # Android SDK limitation + - system: aarch64-darwin + nixpkg: libwaku-android-arm64 + + include: + - system: aarch64-darwin + runs_on: [self-hosted, macOS, ARM64] + + - system: x86_64-linux + runs_on: [self-hosted, Linux, X64] + + name: '${{ matrix.system }} / ${{ matrix.nixpkg }}' + runs-on: ${{ matrix.runs_on }} + steps: + - uses: actions/checkout@v4 + with: + submodules: recursive + + - name: 'Run Nix build for ${{ matrix.nixpkg }}' + shell: bash + run: nix build -L '.?submodules=1#${{ matrix.nixpkg }}' + + - name: 'Show result contents' + shell: bash + run: find result -type f diff --git a/Makefile b/Makefile index 9e241bbfa..b19d5eaf8 100644 --- a/Makefile +++ b/Makefile @@ -153,7 +153,7 @@ NIM_PARAMS := $(NIM_PARAMS) -d:disable_libbacktrace endif # enable experimental exit is dest feature in libp2p mix -NIM_PARAMS := $(NIM_PARAMS) -d:libp2p_mix_experimental_exit_is_dest +NIM_PARAMS := $(NIM_PARAMS) -d:libp2p_mix_experimental_exit_is_dest libbacktrace: + $(MAKE) -C vendor/nim-libbacktrace --no-print-directory
BUILD_CXX_LIB=0 @@ -192,9 +192,9 @@ LIBRLN_BUILDDIR := $(CURDIR)/vendor/zerokit LIBRLN_VERSION := v0.9.0 ifeq ($(detected_OS),Windows) -LIBRLN_FILE := rln.lib +LIBRLN_FILE ?= rln.lib else -LIBRLN_FILE := librln_$(LIBRLN_VERSION).a +LIBRLN_FILE ?= librln_$(LIBRLN_VERSION).a endif $(LIBRLN_FILE): @@ -481,8 +481,13 @@ ifndef ANDROID_NDK_HOME endif build-libwaku-for-android-arch: - $(MAKE) rebuild-nat-libs CC=$(ANDROID_TOOLCHAIN_DIR)/bin/$(ANDROID_COMPILER) && \ - ./scripts/build_rln_android.sh $(CURDIR)/build $(LIBRLN_BUILDDIR) $(LIBRLN_VERSION) $(CROSS_TARGET) $(ABIDIR) && \ +ifneq ($(findstring /nix/store,$(LIBRLN_FILE)),) + mkdir -p $(CURDIR)/build/android/$(ABIDIR)/ + cp $(LIBRLN_FILE) $(CURDIR)/build/android/$(ABIDIR)/ +else + ./scripts/build_rln_android.sh $(CURDIR)/build $(LIBRLN_BUILDDIR) $(LIBRLN_VERSION) $(CROSS_TARGET) $(ABIDIR) +endif + $(MAKE) rebuild-nat-libs CC=$(ANDROID_TOOLCHAIN_DIR)/bin/$(ANDROID_COMPILER) CPU=$(CPU) ABIDIR=$(ABIDIR) ANDROID_ARCH=$(ANDROID_ARCH) ANDROID_COMPILER=$(ANDROID_COMPILER) ANDROID_TOOLCHAIN_DIR=$(ANDROID_TOOLCHAIN_DIR) $(ENV_SCRIPT) nim libWakuAndroid $(NIM_PARAMS) waku.nims libwaku-android-arm64: ANDROID_ARCH=aarch64-linux-android @@ -541,7 +546,7 @@ else $(error iOS builds are only supported on macOS) endif -# Build for iOS architecture +# Build for iOS architecture build-libwaku-for-ios-arch: IOS_SDK=$(IOS_SDK) IOS_ARCH=$(IOS_ARCH) IOS_SDK_PATH=$(IOS_SDK_PATH) $(ENV_SCRIPT) nim libWakuIOS $(NIM_PARAMS) waku.nims diff --git a/flake.lock b/flake.lock index 0700e6a43..b927e8807 100644 --- a/flake.lock +++ b/flake.lock @@ -2,17 +2,17 @@ "nodes": { "nixpkgs": { "locked": { - "lastModified": 1740603184, - "narHash": "sha256-t+VaahjQAWyA+Ctn2idyo1yxRIYpaDxMgHkgCNiMJa4=", + "lastModified": 1757590060, + "narHash": "sha256-EWwwdKLMZALkgHFyKW7rmyhxECO74+N+ZO5xTDnY/5c=", "owner": "NixOS", "repo": "nixpkgs", - "rev": "f44bd8ca21e026135061a0a57dcf3d0775b67a49", + "rev": "0ef228213045d2cdb5a169a95d63ded38670b293", "type": "github" }, 
"original": { "owner": "NixOS", "repo": "nixpkgs", - "rev": "f44bd8ca21e026135061a0a57dcf3d0775b67a49", + "rev": "0ef228213045d2cdb5a169a95d63ded38670b293", "type": "github" } }, @@ -51,18 +51,18 @@ "rust-overlay": "rust-overlay" }, "locked": { - "lastModified": 1749115386, - "narHash": "sha256-UexIE2D7zr6aRajwnKongXwCZCeRZDXOL0kfjhqUFSU=", - "owner": "vacp2p", - "repo": "zerokit", - "rev": "dc0b31752c91e7b4fefc441cfa6a8210ad7dba7b", - "type": "github" + "lastModified": 1762211504, + "narHash": "sha256-SbDoBElFYJ4cYebltxlO2lYnz6qOaDAVY6aNJ5bqHDE=", + "ref": "refs/heads/master", + "rev": "3160d9504d07791f2fc9b610948a6cf9a58ed488", + "revCount": 342, + "type": "git", + "url": "https://github.com/vacp2p/zerokit" }, "original": { - "owner": "vacp2p", - "repo": "zerokit", - "rev": "dc0b31752c91e7b4fefc441cfa6a8210ad7dba7b", - "type": "github" + "rev": "3160d9504d07791f2fc9b610948a6cf9a58ed488", + "type": "git", + "url": "https://github.com/vacp2p/zerokit" } } }, diff --git a/flake.nix b/flake.nix index 3427ff6ac..88229a826 100644 --- a/flake.nix +++ b/flake.nix @@ -10,8 +10,9 @@ # We are pinning the commit because ultimately we want to use same commit across different projects. # A commit from nixpkgs 24.11 release : https://github.com/NixOS/nixpkgs/tree/release-24.11 nixpkgs.url = "github:NixOS/nixpkgs/0ef228213045d2cdb5a169a95d63ded38670b293"; + # WARNING: Remember to update commit and use 'nix flake update' to update flake.lock. 
zerokit = { - url = "github:vacp2p/zerokit?rev=dc0b31752c91e7b4fefc441cfa6a8210ad7dba7b"; + url = "git+https://github.com/vacp2p/zerokit?rev=3160d9504d07791f2fc9b610948a6cf9a58ed488"; inputs.nixpkgs.follows = "nixpkgs"; }; }; @@ -60,8 +61,6 @@ inherit stableSystems; src = self; targets = ["libwaku"]; - # We are not able to compile the code with nim-unwrapped-2_0 - useSystemNim = false; zerokitRln = zerokit.packages.${system}.rln; }; @@ -69,12 +68,10 @@ inherit stableSystems; src = self; targets = ["wakucanary"]; - # We are not able to compile the code with nim-unwrapped-2_0 - useSystemNim = false; zerokitRln = zerokit.packages.${system}.rln; }; - default = libwaku-android-arm64; + default = libwaku; }); devShells = forAllSystems (system: { diff --git a/nix/default.nix b/nix/default.nix index 73838a4a1..d77862e8f 100644 --- a/nix/default.nix +++ b/nix/default.nix @@ -1,9 +1,8 @@ { - config ? {}, - pkgs ? import { }, + pkgs, src ? ../., targets ? ["libwaku-android-arm64"], - verbosity ? 2, + verbosity ? 1, useSystemNim ? true, quickAndDirty ? true, stableSystems ? [ @@ -19,43 +18,31 @@ assert pkgs.lib.assertMsg ((src.submodules or true) == true) let inherit (pkgs) stdenv lib writeScriptBin callPackage; - revision = lib.substring 0 8 (src.rev or "dirty"); + androidManifest = ""; -in stdenv.mkDerivation rec { + tools = pkgs.callPackage ./tools.nix {}; + version = tools.findKeyValue "^version = \"([a-f0-9.-]+)\"$" ../waku.nimble; + revision = lib.substring 0 8 (src.rev or src.dirtyRev or "00000000"); +in stdenv.mkDerivation { pname = "logos-messaging-nim"; - - version = "1.0.0-${revision}"; + version = "${version}-${revision}"; inherit src; + # Runtime dependencies buildInputs = with pkgs; [ - openssl - gmp - zip + openssl gmp zip ]; # Dependencies that should only exist in the build environment. nativeBuildInputs = let # Fix for Nim compiler calling 'git rev-parse' and 'lsb_release'. 
fakeGit = writeScriptBin "git" "echo ${version}"; - # Fix for the zerokit package that is built with cargo/rustup/cross. - fakeCargo = writeScriptBin "cargo" "echo ${version}"; - # Fix for the zerokit package that is built with cargo/rustup/cross. - fakeRustup = writeScriptBin "rustup" "echo ${version}"; - # Fix for the zerokit package that is built with cargo/rustup/cross. - fakeCross = writeScriptBin "cross" "echo ${version}"; - in - with pkgs; [ - cmake - which - lsb-release - zerokitRln - nim-unwrapped-2_0 - fakeGit - fakeCargo - fakeRustup - fakeCross + in with pkgs; [ + cmake which zerokitRln nim-unwrapped-2_2 fakeGit + ] ++ lib.optionals stdenv.isDarwin [ + pkgs.darwin.cctools gcc # Necessary for libbacktrace ]; # Environment variables required for Android builds @@ -63,14 +50,13 @@ in stdenv.mkDerivation rec { ANDROID_NDK_HOME="${pkgs.androidPkgs.ndk}"; NIMFLAGS = "-d:disableMarchNative -d:git_revision_override=${revision}"; XDG_CACHE_HOME = "/tmp"; - androidManifest = ""; makeFlags = targets ++ [ "V=${toString verbosity}" "QUICK_AND_DIRTY_COMPILER=${if quickAndDirty then "1" else "0"}" "QUICK_AND_DIRTY_NIMBLE=${if quickAndDirty then "1" else "0"}" "USE_SYSTEM_NIM=${if useSystemNim then "1" else "0"}" - "LIBRLN_FILE=${zerokitRln}/target/release/librln.a" + "LIBRLN_FILE=${zerokitRln}/lib/librln.${if abidir != null then "so" else "a"}" ]; configurePhase = '' @@ -80,12 +66,8 @@ in stdenv.mkDerivation rec { ''; # For the Nim v2.2.4 built with NBS we added sat and zippy - preBuild = '' - ln -s waku.nimble waku.nims - - ${lib.optionalString (!useSystemNim) '' + preBuild = lib.optionalString (!useSystemNim) '' pushd vendor/nimbus-build-system/vendor/Nim - mkdir dist mkdir -p dist/nimble/vendor/sat mkdir -p dist/nimble/vendor/checksums @@ -98,9 +80,7 @@ in stdenv.mkDerivation rec { cp -r ${callPackage ./checksums.nix {}}/. dist/nimble/vendor/checksums cp -r ${callPackage ./zippy.nix {}}/. 
dist/nimble/vendor/zippy chmod 777 -R dist/nimble csources_v2 - popd - ''} ''; installPhase = if abidir != null then '' @@ -110,10 +90,10 @@ in stdenv.mkDerivation rec { cd $out && zip -r libwaku.aar * '' else '' mkdir -p $out/bin $out/include - + # Copy library files cp build/* $out/bin/ 2>/dev/null || true - + # Copy the header file cp library/libwaku.h $out/include/ ''; diff --git a/nix/pkgs/android-sdk/compose.nix b/nix/pkgs/android-sdk/compose.nix index c73aaee43..9a8536ddb 100644 --- a/nix/pkgs/android-sdk/compose.nix +++ b/nix/pkgs/android-sdk/compose.nix @@ -5,19 +5,16 @@ { androidenv, lib, stdenv }: -assert lib.assertMsg (stdenv.system != "aarch64-darwin") - "aarch64-darwin not supported for Android SDK. Use: NIXPKGS_SYSTEM_OVERRIDE=x86_64-darwin"; - # The "android-sdk-license" license is accepted # by setting android_sdk.accept_license = true. androidenv.composeAndroidPackages { cmdLineToolsVersion = "9.0"; toolsVersion = "26.1.1"; - platformToolsVersion = "33.0.3"; + platformToolsVersion = "34.0.5"; buildToolsVersions = [ "34.0.0" ]; platformVersions = [ "34" ]; cmakeVersions = [ "3.22.1" ]; - ndkVersion = "25.2.9519653"; + ndkVersion = "27.2.12479018"; includeNDK = true; includeExtras = [ "extras;android;m2repository" diff --git a/nix/shell.nix b/nix/shell.nix index fe0b065b4..3b83ac93d 100644 --- a/nix/shell.nix +++ b/nix/shell.nix @@ -1,16 +1,12 @@ -{ - pkgs ? 
import { }, -}: -let - optionalDarwinDeps = pkgs.lib.optionals pkgs.stdenv.isDarwin [ - pkgs.libiconv - pkgs.darwin.apple_sdk.frameworks.Security - ]; -in +{ pkgs }: + pkgs.mkShell { inputsFrom = [ pkgs.androidShell - ] ++ optionalDarwinDeps; + ] ++ pkgs.lib.optionals pkgs.stdenv.isDarwin [ + pkgs.libiconv + pkgs.darwin.apple_sdk.frameworks.Security + ]; buildInputs = with pkgs; [ git @@ -18,7 +14,6 @@ pkgs.mkShell { rustup rustc cmake - nim-unwrapped-2_0 + nim-unwrapped-2_2 ]; - } diff --git a/scripts/build_rln_android.sh b/scripts/build_rln_android.sh index 93a8c47ff..15b81ce9c 100755 --- a/scripts/build_rln_android.sh +++ b/scripts/build_rln_android.sh @@ -25,4 +25,3 @@ cargo clean cross rustc --release --lib --target=${android_arch} --crate-type=cdylib cp ../target/${android_arch}/release/librln.so ${output_dir}/. popd - diff --git a/vendor/zerokit b/vendor/zerokit index a4bb3feb5..70c79fbc9 160000 --- a/vendor/zerokit +++ b/vendor/zerokit @@ -1 +1 @@ -Subproject commit a4bb3feb5054e6fd24827adf204493e6e173437b +Subproject commit 70c79fbc989d4f87d9352b2f4bddcb60ebe55b19 From 1fd25355e0ea23f7e456b5fe95a702059fbaedb5 Mon Sep 17 00:00:00 2001 From: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com> Date: Fri, 30 Jan 2026 01:06:00 +0100 Subject: [PATCH 42/70] feat: waku api send (#3669) * Introduce api/send Added events and requests for support. 
Reworked delivery_monitor into a full-featured delivery_service that - supports relay publish and lightpush depending on configuration, with fallback options - if available and configured, utilizes the store API to confirm message delivery - emits message delivery events accordingly Prepare for use in api_example * Fix edge mode config and add test * Fix some import issues; start and stop of waku shall not throw exceptions but properly return a result * Utilize sync RequestBroker, adapt to non-async broker usage and gcsafe where appropriate, remove leftovers * add api_example app to examples2 * Adapt after merge from master * Adapt code for using broker context * Fix brokerCtx settings for all used brokers, cover locked node init * Various fixes upon test failures. Added initial version of the subscribe API and auto-subscribe for the send API * More tests added * Fix multi propagate event emit, fix failing send test case * Fix rebase * Fix PushMessageHandlers in tests * adapt libwaku to api changes * Fix relay test by adapting publish to return an error in case of NoPeersToPublish * Address all remaining review findings. Removed leftovers.
Fixed logging and typos * Fix rln relay broker, missed brokerCtx * Fix REST relay test failure, as publish will fail if no peer is available * ignore anvil test state file * Make test_wakunode_rln_relay broker-context aware * Fix waku rln tests by having them broker context aware * fix typo in test_app.nim --- .gitignore | 2 + Makefile | 4 + .../liteprotocoltester/liteprotocoltester.nim | 6 +- apps/wakunode2/wakunode2.nim | 6 +- examples/api_example/api_example.nim | 89 +++ examples/waku_example.nim | 40 -- library/kernel_api/node_lifecycle_api.nim | 8 +- tests/api/test_api_send.nim | 431 +++++++++++++++ tests/api/test_node_conf.nim | 21 + tests/node/test_wakunode_legacy_lightpush.nim | 20 - tests/node/test_wakunode_lightpush.nim | 17 - tests/waku_lightpush/test_client.nim | 4 +- tests/waku_lightpush/test_ratelimit.nim | 4 +- tests/waku_lightpush_legacy/test_client.nim | 4 +- .../waku_lightpush_legacy/test_ratelimit.nim | 4 +- tests/waku_relay/test_wakunode_relay.nim | 5 +- tests/waku_rln_relay/test_waku_rln_relay.nim | 32 +- .../test_wakunode_rln_relay.nim | 510 ++++++++++-------- tests/wakunode2/test_app.nim | 6 +- tests/wakunode_rest/test_rest_relay.nim | 41 +- waku.nim | 6 +- waku.nimble | 6 +- waku/api.nim | 3 +- waku/api/api.nim | 57 +- waku/api/api_conf.nim | 15 +- waku/api/send_api.md | 46 ++ waku/api/types.nim | 65 +++ waku/events/delivery_events.nim | 27 + waku/events/events.nim | 3 + waku/events/message_events.nim | 30 ++ waku/factory/waku.nim | 173 ++++-- .../delivery_monitor/delivery_callback.nim | 17 - .../delivery_monitor/delivery_monitor.nim | 43 -- .../delivery_monitor/publish_observer.nim | 9 - waku/node/delivery_monitor/send_monitor.nim | 212 -------- .../subscriptions_observer.nim | 13 - .../delivery_service/delivery_service.nim | 46 ++ .../not_delivered_storage/migrations.nim | 2 +- .../not_delivered_storage.nim | 8 +- waku/node/delivery_service/recv_service.nim | 3 + .../recv_service/recv_service.nim} | 117 ++--
waku/node/delivery_service/send_service.nim | 6 + .../send_service/delivery_task.nim | 74 +++ .../send_service/lightpush_processor.nim | 81 +++ .../send_service/relay_processor.nim | 78 +++ .../send_service/send_processor.nim | 36 ++ .../send_service/send_service.nim | 269 +++++++++ .../delivery_service/subscription_service.nim | 64 +++ .../health_monitor}/topic_health.nim | 4 +- waku/node/kernel_api/lightpush.nim | 33 +- waku/node/kernel_api/relay.nim | 85 +-- waku/node/waku_node.nim | 108 ++-- waku/requests/health_request.nim | 21 + waku/requests/node_requests.nim | 11 + waku/requests/requests.nim | 3 + waku/requests/rln_requests.nim | 9 + waku/waku_core/message/digest.nim | 5 + waku/waku_filter_v2/client.nim | 21 +- waku/waku_lightpush/callbacks.nim | 4 +- waku/waku_lightpush/client.nim | 84 +-- waku/waku_lightpush/common.nim | 4 +- waku/waku_lightpush/protocol.nim | 2 +- waku/waku_lightpush_legacy/callbacks.nim | 4 +- waku/waku_lightpush_legacy/client.nim | 11 - waku/waku_lightpush_legacy/common.nim | 2 +- waku/waku_lightpush_legacy/protocol.nim | 2 +- waku/waku_relay.nim | 3 +- waku/waku_relay/protocol.nim | 33 +- waku/waku_rln_relay/rln_relay.nim | 47 +- 69 files changed, 2331 insertions(+), 928 deletions(-) create mode 100644 examples/api_example/api_example.nim delete mode 100644 examples/waku_example.nim create mode 100644 tests/api/test_api_send.nim create mode 100644 waku/api/send_api.md create mode 100644 waku/api/types.nim create mode 100644 waku/events/delivery_events.nim create mode 100644 waku/events/events.nim create mode 100644 waku/events/message_events.nim delete mode 100644 waku/node/delivery_monitor/delivery_callback.nim delete mode 100644 waku/node/delivery_monitor/delivery_monitor.nim delete mode 100644 waku/node/delivery_monitor/publish_observer.nim delete mode 100644 waku/node/delivery_monitor/send_monitor.nim delete mode 100644 waku/node/delivery_monitor/subscriptions_observer.nim create mode 100644 
waku/node/delivery_service/delivery_service.nim rename waku/node/{delivery_monitor => delivery_service}/not_delivered_storage/migrations.nim (95%) rename waku/node/{delivery_monitor => delivery_service}/not_delivered_storage/not_delivered_storage.nim (93%) create mode 100644 waku/node/delivery_service/recv_service.nim rename waku/node/{delivery_monitor/recv_monitor.nim => delivery_service/recv_service/recv_service.nim} (67%) create mode 100644 waku/node/delivery_service/send_service.nim create mode 100644 waku/node/delivery_service/send_service/delivery_task.nim create mode 100644 waku/node/delivery_service/send_service/lightpush_processor.nim create mode 100644 waku/node/delivery_service/send_service/relay_processor.nim create mode 100644 waku/node/delivery_service/send_service/send_processor.nim create mode 100644 waku/node/delivery_service/send_service/send_service.nim create mode 100644 waku/node/delivery_service/subscription_service.nim rename waku/{waku_relay => node/health_monitor}/topic_health.nim (84%) create mode 100644 waku/requests/health_request.nim create mode 100644 waku/requests/node_requests.nim create mode 100644 waku/requests/requests.nim create mode 100644 waku/requests/rln_requests.nim diff --git a/.gitignore b/.gitignore index f03c4ebaf..5222a0d5e 100644 --- a/.gitignore +++ b/.gitignore @@ -89,3 +89,5 @@ AGENTS.md nimble.develop nimble.paths nimbledeps + +**/anvil_state/state-deployed-contracts-mint-and-approved.json diff --git a/Makefile b/Makefile index b19d5eaf8..13882253e 100644 --- a/Makefile +++ b/Makefile @@ -272,6 +272,10 @@ lightpushwithmix: | build deps librln echo -e $(BUILD_MSG) "build/$@" && \ $(ENV_SCRIPT) nim lightpushwithmix $(NIM_PARAMS) waku.nims +api_example: | build deps librln + echo -e $(BUILD_MSG) "build/$@" && \ + $(ENV_SCRIPT) nim api_example $(NIM_PARAMS) waku.nims + build/%: | build deps librln echo -e $(BUILD_MSG) "build/$*" && \ $(ENV_SCRIPT) nim buildone $(NIM_PARAMS) waku.nims $* diff --git 
a/apps/liteprotocoltester/liteprotocoltester.nim b/apps/liteprotocoltester/liteprotocoltester.nim index adb1b0f8a..46c85e910 100644 --- a/apps/liteprotocoltester/liteprotocoltester.nim +++ b/apps/liteprotocoltester/liteprotocoltester.nim @@ -130,7 +130,8 @@ when isMainModule: info "Setting up shutdown hooks" proc asyncStopper(waku: Waku) {.async: (raises: [Exception]).} = - await waku.stop() + (await waku.stop()).isOkOr: + error "Waku shutdown failed", error = error quit(QuitSuccess) # Handle Ctrl-C SIGINT @@ -160,7 +161,8 @@ when isMainModule: # Not available in -d:release mode writeStackTrace() - waitFor waku.stop() + (waitFor waku.stop()).isOkOr: + error "Waku shutdown failed", error = error quit(QuitFailure) c_signal(ansi_c.SIGSEGV, handleSigsegv) diff --git a/apps/wakunode2/wakunode2.nim b/apps/wakunode2/wakunode2.nim index b50c7113b..c8132ff4e 100644 --- a/apps/wakunode2/wakunode2.nim +++ b/apps/wakunode2/wakunode2.nim @@ -62,7 +62,8 @@ when isMainModule: info "Setting up shutdown hooks" proc asyncStopper(waku: Waku) {.async: (raises: [Exception]).} = - await waku.stop() + (await waku.stop()).isOkOr: + error "Waku shutdown failed", error = error quit(QuitSuccess) # Handle Ctrl-C SIGINT @@ -92,7 +93,8 @@ when isMainModule: # Not available in -d:release mode writeStackTrace() - waitFor waku.stop() + (waitFor waku.stop()).isOkOr: + error "Waku shutdown failed", error = error quit(QuitFailure) c_signal(ansi_c.SIGSEGV, handleSigsegv) diff --git a/examples/api_example/api_example.nim b/examples/api_example/api_example.nim new file mode 100644 index 000000000..37dd5d34b --- /dev/null +++ b/examples/api_example/api_example.nim @@ -0,0 +1,89 @@ +import std/options +import chronos, results, confutils, confutils/defs +import waku + +type CliArgs = object + ethRpcEndpoint* {. 
+ defaultValue: "", desc: "ETH RPC Endpoint, if passed, RLN is enabled" + .}: string + +proc periodicSender(w: Waku): Future[void] {.async.} = + let sentListener = MessageSentEvent.listen( + proc(event: MessageSentEvent) {.async: (raises: []).} = + echo "Message sent with request ID: ", + event.requestId, " hash: ", event.messageHash + ).valueOr: + echo "Failed to listen to message sent event: ", error + return + + let errorListener = MessageErrorEvent.listen( + proc(event: MessageErrorEvent) {.async: (raises: []).} = + echo "Message failed to send with request ID: ", + event.requestId, " error: ", event.error + ).valueOr: + echo "Failed to listen to message error event: ", error + return + + let propagatedListener = MessagePropagatedEvent.listen( + proc(event: MessagePropagatedEvent) {.async: (raises: []).} = + echo "Message propagated with request ID: ", + event.requestId, " hash: ", event.messageHash + ).valueOr: + echo "Failed to listen to message propagated event: ", error + return + + defer: + MessageSentEvent.dropListener(sentListener) + MessageErrorEvent.dropListener(errorListener) + MessagePropagatedEvent.dropListener(propagatedListener) + + ## Periodically sends a Waku message every 30 seconds + var counter = 0 + while true: + let envelope = MessageEnvelope.init( + contentTopic = "example/content/topic", + payload = "Hello Waku! Message number: " & $counter, + ) + + let sendRequestId = (await w.send(envelope)).valueOr: + echo "Failed to send message: ", error + quit(QuitFailure) + + echo "Sending message with request ID: ", sendRequestId, " counter: ", counter + + counter += 1 + await sleepAsync(30.seconds) + +when isMainModule: + let args = CliArgs.load() + + echo "Starting Waku node..." 
+ + let config = + if (args.ethRpcEndpoint == ""): + # Create a basic configuration for the Waku node + # No RLN as we don't have an ETH RPC Endpoint + NodeConfig.init( + protocolsConfig = ProtocolsConfig.init(entryNodes = @[], clusterId = 42) + ) + else: + # Connect to TWN, use ETH RPC Endpoint for RLN + NodeConfig.init(mode = WakuMode.Core, ethRpcEndpoints = @[args.ethRpcEndpoint]) + + # Create the node using the library API's createNode function + let node = (waitFor createNode(config)).valueOr: + echo "Failed to create node: ", error + quit(QuitFailure) + + echo("Waku node created successfully!") + + # Start the node + (waitFor startWaku(addr node)).isOkOr: + echo "Failed to start node: ", error + quit(QuitFailure) + + echo "Node started successfully!" + + asyncSpawn periodicSender(node) + + runForever() diff --git a/examples/waku_example.nim b/examples/waku_example.nim deleted file mode 100644 index ebac0b466..000000000 --- a/examples/waku_example.nim +++ /dev/null @@ -1,40 +0,0 @@ -import std/options -import chronos, results, confutils, confutils/defs -import waku - -type CliArgs = object - ethRpcEndpoint* {. - defaultValue: "", desc: "ETH RPC Endpoint, if passed, RLN is enabled" - .}: string - -when isMainModule: - let args = CliArgs.load() - - echo "Starting Waku node..." 
- - let config = - if (args.ethRpcEndpoint == ""): - # Create a basic configuration for the Waku node - # No RLN as we don't have an ETH RPC Endpoint - NodeConfig.init( - protocolsConfig = ProtocolsConfig.init(entryNodes = @[], clusterId = 42) - ) - else: - # Connect to TWN, use ETH RPC Endpoint for RLN - NodeConfig.init(ethRpcEndpoints = @[args.ethRpcEndpoint]) - - # Create the node using the library API's createNode function - let node = (waitFor createNode(config)).valueOr: - echo "Failed to create node: ", error - quit(QuitFailure) - - echo("Waku node created successfully!") - - # Start the node - (waitFor startWaku(addr node)).isOkOr: - echo "Failed to start node: ", error - quit(QuitFailure) - - echo "Node started successfully!" - - runForever() diff --git a/library/kernel_api/node_lifecycle_api.nim b/library/kernel_api/node_lifecycle_api.nim index a2bb25609..8f3e99b24 100644 --- a/library/kernel_api/node_lifecycle_api.nim +++ b/library/kernel_api/node_lifecycle_api.nim @@ -79,9 +79,7 @@ proc waku_start( proc waku_stop( ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer ) {.ffi.} = - try: - await ctx.myLib[].stop() - except Exception as exc: - error "STOP_NODE failed", error = exc.msg - return err("failed to stop: " & exc.msg) + (await ctx.myLib[].stop()).isOkOr: + error "STOP_NODE failed", error = error + return err("failed to stop: " & $error) return ok("") diff --git a/tests/api/test_api_send.nim b/tests/api/test_api_send.nim new file mode 100644 index 000000000..e247c65ce --- /dev/null +++ b/tests/api/test_api_send.nim @@ -0,0 +1,431 @@ +{.used.} + +import std/strutils +import chronos, testutils/unittests, stew/byteutils, libp2p/[switch, peerinfo] +import ../testlib/[common, wakucore, wakunode, testasync] +import ../waku_archive/archive_utils +import + waku, waku/[waku_node, waku_core, waku_relay/protocol, common/broker/broker_context] +import waku/api/api_conf, waku/factory/waku_conf + +type SendEventOutcome {.pure.} = enum + Sent + 
Propagated + Error + +type SendEventListenerManager = ref object + brokerCtx: BrokerContext + sentListener: MessageSentEventListener + errorListener: MessageErrorEventListener + propagatedListener: MessagePropagatedEventListener + sentFuture: Future[void] + errorFuture: Future[void] + propagatedFuture: Future[void] + sentCount: int + errorCount: int + propagatedCount: int + sentRequestIds: seq[RequestId] + errorRequestIds: seq[RequestId] + propagatedRequestIds: seq[RequestId] + +proc newSendEventListenerManager(brokerCtx: BrokerContext): SendEventListenerManager = + let manager = SendEventListenerManager(brokerCtx: brokerCtx) + manager.sentFuture = newFuture[void]("sentEvent") + manager.errorFuture = newFuture[void]("errorEvent") + manager.propagatedFuture = newFuture[void]("propagatedEvent") + + manager.sentListener = MessageSentEvent.listen( + brokerCtx, + proc(event: MessageSentEvent) {.async: (raises: []).} = + inc manager.sentCount + manager.sentRequestIds.add(event.requestId) + echo "SENT EVENT TRIGGERED (#", + manager.sentCount, "): requestId=", event.requestId + if not manager.sentFuture.finished(): + manager.sentFuture.complete() + , + ).valueOr: + raiseAssert error + + manager.errorListener = MessageErrorEvent.listen( + brokerCtx, + proc(event: MessageErrorEvent) {.async: (raises: []).} = + inc manager.errorCount + manager.errorRequestIds.add(event.requestId) + echo "ERROR EVENT TRIGGERED (#", manager.errorCount, "): ", event.error + if not manager.errorFuture.finished(): + manager.errorFuture.fail( + newException(CatchableError, "Error event triggered: " & event.error) + ) + , + ).valueOr: + raiseAssert error + + manager.propagatedListener = MessagePropagatedEvent.listen( + brokerCtx, + proc(event: MessagePropagatedEvent) {.async: (raises: []).} = + inc manager.propagatedCount + manager.propagatedRequestIds.add(event.requestId) + echo "PROPAGATED EVENT TRIGGERED (#", + manager.propagatedCount, "): requestId=", event.requestId + if not 
manager.propagatedFuture.finished(): + manager.propagatedFuture.complete() + , + ).valueOr: + raiseAssert error + + return manager + +proc teardown(manager: SendEventListenerManager) = + MessageSentEvent.dropListener(manager.brokerCtx, manager.sentListener) + MessageErrorEvent.dropListener(manager.brokerCtx, manager.errorListener) + MessagePropagatedEvent.dropListener(manager.brokerCtx, manager.propagatedListener) + +proc waitForEvents( + manager: SendEventListenerManager, timeout: Duration +): Future[bool] {.async.} = + return await allFutures( + manager.sentFuture, manager.propagatedFuture, manager.errorFuture + ) + .withTimeout(timeout) + +proc outcomes(manager: SendEventListenerManager): set[SendEventOutcome] = + if manager.sentFuture.completed(): + result.incl(SendEventOutcome.Sent) + if manager.propagatedFuture.completed(): + result.incl(SendEventOutcome.Propagated) + if manager.errorFuture.failed(): + result.incl(SendEventOutcome.Error) + +proc validate(manager: SendEventListenerManager, expected: set[SendEventOutcome]) = + echo "EVENT COUNTS: sent=", + manager.sentCount, ", propagated=", manager.propagatedCount, ", error=", + manager.errorCount + check manager.outcomes() == expected + +proc validate( + manager: SendEventListenerManager, + expected: set[SendEventOutcome], + expectedRequestId: RequestId, +) = + manager.validate(expected) + for requestId in manager.sentRequestIds: + check requestId == expectedRequestId + for requestId in manager.propagatedRequestIds: + check requestId == expectedRequestId + for requestId in manager.errorRequestIds: + check requestId == expectedRequestId + +proc createApiNodeConf(mode: WakuMode = WakuMode.Core): NodeConfig = + result = NodeConfig.init( + mode = mode, + protocolsConfig = ProtocolsConfig.init( + entryNodes = @[], + clusterId = 1, + autoShardingConfig = AutoShardingConfig(numShardsInCluster: 1), + ), + p2pReliability = true, + ) + +suite "Waku API - Send": + var + relayNode1 {.threadvar.}: WakuNode + 
relayNode1PeerInfo {.threadvar.}: RemotePeerInfo + relayNode1PeerId {.threadvar.}: PeerId + + relayNode2 {.threadvar.}: WakuNode + relayNode2PeerInfo {.threadvar.}: RemotePeerInfo + relayNode2PeerId {.threadvar.}: PeerId + + lightpushNode {.threadvar.}: WakuNode + lightpushNodePeerInfo {.threadvar.}: RemotePeerInfo + lightpushNodePeerId {.threadvar.}: PeerId + + storeNode {.threadvar.}: WakuNode + storeNodePeerInfo {.threadvar.}: RemotePeerInfo + storeNodePeerId {.threadvar.}: PeerId + + asyncSetup: + lockNewGlobalBrokerContext: + relayNode1 = + newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) + relayNode1.mountMetadata(1, @[0'u16]).isOkOr: + raiseAssert "Failed to mount metadata: " & error + (await relayNode1.mountRelay()).isOkOr: + raiseAssert "Failed to mount relay" + await relayNode1.mountLibp2pPing() + await relayNode1.start() + + lockNewGlobalBrokerContext: + relayNode2 = + newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) + relayNode2.mountMetadata(1, @[0'u16]).isOkOr: + raiseAssert "Failed to mount metadata: " & error + (await relayNode2.mountRelay()).isOkOr: + raiseAssert "Failed to mount relay" + await relayNode2.mountLibp2pPing() + await relayNode2.start() + + lockNewGlobalBrokerContext: + lightpushNode = + newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) + lightpushNode.mountMetadata(1, @[0'u16]).isOkOr: + raiseAssert "Failed to mount metadata: " & error + (await lightpushNode.mountRelay()).isOkOr: + raiseAssert "Failed to mount relay" + (await lightpushNode.mountLightPush()).isOkOr: + raiseAssert "Failed to mount lightpush" + await lightpushNode.mountLibp2pPing() + await lightpushNode.start() + + lockNewGlobalBrokerContext: + storeNode = + newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) + storeNode.mountMetadata(1, @[0'u16]).isOkOr: + raiseAssert "Failed to mount metadata: " & error + (await storeNode.mountRelay()).isOkOr: + raiseAssert "Failed 
to mount relay" + # Mount archive so store can persist messages + let archiveDriver = newSqliteArchiveDriver() + storeNode.mountArchive(archiveDriver).isOkOr: + raiseAssert "Failed to mount archive: " & error + await storeNode.mountStore() + await storeNode.mountLibp2pPing() + await storeNode.start() + + relayNode1PeerInfo = relayNode1.peerInfo.toRemotePeerInfo() + relayNode1PeerId = relayNode1.peerInfo.peerId + + relayNode2PeerInfo = relayNode2.peerInfo.toRemotePeerInfo() + relayNode2PeerId = relayNode2.peerInfo.peerId + + lightpushNodePeerInfo = lightpushNode.peerInfo.toRemotePeerInfo() + lightpushNodePeerId = lightpushNode.peerInfo.peerId + + storeNodePeerInfo = storeNode.peerInfo.toRemotePeerInfo() + storeNodePeerId = storeNode.peerInfo.peerId + + # Subscribe all relay nodes to the default shard topic + const testPubsubTopic = PubsubTopic("/waku/2/rs/1/0") + proc dummyHandler( + topic: PubsubTopic, msg: WakuMessage + ): Future[void] {.async, gcsafe.} = + discard + + relayNode1.subscribe((kind: PubsubSub, topic: testPubsubTopic), dummyHandler).isOkOr: + raiseAssert "Failed to subscribe relayNode1: " & error + relayNode2.subscribe((kind: PubsubSub, topic: testPubsubTopic), dummyHandler).isOkOr: + raiseAssert "Failed to subscribe relayNode2: " & error + + lightpushNode.subscribe((kind: PubsubSub, topic: testPubsubTopic), dummyHandler).isOkOr: + raiseAssert "Failed to subscribe lightpushNode: " & error + storeNode.subscribe((kind: PubsubSub, topic: testPubsubTopic), dummyHandler).isOkOr: + raiseAssert "Failed to subscribe storeNode: " & error + + # Subscribe all relay nodes to the default shard topic + await relayNode1.connectToNodes(@[relayNode2PeerInfo, storeNodePeerInfo]) + await lightpushNode.connectToNodes(@[relayNode2PeerInfo]) + + asyncTeardown: + await allFutures( + relayNode1.stop(), relayNode2.stop(), lightpushNode.stop(), storeNode.stop() + ) + + asyncTest "Check API availability (unhealthy node)": + var node: Waku + lockNewGlobalBrokerContext: + node = 
(await createNode(createApiNodeConf())).valueOr: + raiseAssert error + (await startWaku(addr node)).isOkOr: + raiseAssert "Failed to start Waku node: " & error + # node is not connected to any peers, so send is expected to fail + + let envelope = MessageEnvelope.init( + ContentTopic("/waku/2/default-content/proto"), "test payload" + ) + + let sendResult = await node.send(envelope) + + check sendResult.isErr() # depending on the implementation, the error may say "not healthy" + check sendResult.error().contains("not healthy") + + (await node.stop()).isOkOr: + raiseAssert "Failed to stop node: " & error + + asyncTest "Send fully validated": + var node: Waku + lockNewGlobalBrokerContext: + node = (await createNode(createApiNodeConf())).valueOr: + raiseAssert error + (await startWaku(addr node)).isOkOr: + raiseAssert "Failed to start Waku node: " & error + + await node.node.connectToNodes( + @[relayNode1PeerInfo, lightpushNodePeerInfo, storeNodePeerInfo] + ) + + let eventManager = newSendEventListenerManager(node.brokerCtx) + defer: + eventManager.teardown() + + let envelope = MessageEnvelope.init( + ContentTopic("/waku/2/default-content/proto"), "test payload" + ) + + let requestId = (await node.send(envelope)).valueOr: + raiseAssert error + + # Wait for events with timeout + const eventTimeout = 10.seconds + discard await eventManager.waitForEvents(eventTimeout) + + eventManager.validate( + {SendEventOutcome.Sent, SendEventOutcome.Propagated}, requestId + ) + + (await node.stop()).isOkOr: + raiseAssert "Failed to stop node: " & error + + asyncTest "Send only propagates": + var node: Waku + lockNewGlobalBrokerContext: + node = (await createNode(createApiNodeConf())).valueOr: + raiseAssert error + (await startWaku(addr node)).isOkOr: + raiseAssert "Failed to start Waku node: " & error + + await node.node.connectToNodes(@[relayNode1PeerInfo]) + + let eventManager = newSendEventListenerManager(node.brokerCtx) + defer: + eventManager.teardown() + + let envelope = MessageEnvelope.init( + 
ContentTopic("/waku/2/default-content/proto"), "test payload" + ) + + let requestId = (await node.send(envelope)).valueOr: + raiseAssert error + + # Wait for events with timeout + const eventTimeout = 10.seconds + discard await eventManager.waitForEvents(eventTimeout) + + eventManager.validate({SendEventOutcome.Propagated}, requestId) + + (await node.stop()).isOkOr: + raiseAssert "Failed to stop node: " & error + + asyncTest "Send only propagates fallback to lightpush": + var node: Waku + lockNewGlobalBrokerContext: + node = (await createNode(createApiNodeConf())).valueOr: + raiseAssert error + (await startWaku(addr node)).isOkOr: + raiseAssert "Failed to start Waku node: " & error + + await node.node.connectToNodes(@[lightpushNodePeerInfo]) + + let eventManager = newSendEventListenerManager(node.brokerCtx) + defer: + eventManager.teardown() + + let envelope = MessageEnvelope.init( + ContentTopic("/waku/2/default-content/proto"), "test payload" + ) + + let requestId = (await node.send(envelope)).valueOr: + raiseAssert error + + # Wait for events with timeout + const eventTimeout = 10.seconds + discard await eventManager.waitForEvents(eventTimeout) + + eventManager.validate({SendEventOutcome.Propagated}, requestId) + + (await node.stop()).isOkOr: + raiseAssert "Failed to stop node: " & error + + asyncTest "Send fully validates fallback to lightpush": + var node: Waku + lockNewGlobalBrokerContext: + node = (await createNode(createApiNodeConf())).valueOr: + raiseAssert error + (await startWaku(addr node)).isOkOr: + raiseAssert "Failed to start Waku node: " & error + + await node.node.connectToNodes(@[lightpushNodePeerInfo, storeNodePeerInfo]) + + let eventManager = newSendEventListenerManager(node.brokerCtx) + defer: + eventManager.teardown() + + let envelope = MessageEnvelope.init( + ContentTopic("/waku/2/default-content/proto"), "test payload" + ) + + let requestId = (await node.send(envelope)).valueOr: + raiseAssert error + + # Wait for events with timeout + const 
eventTimeout = 10.seconds + discard await eventManager.waitForEvents(eventTimeout) + + eventManager.validate( + {SendEventOutcome.Propagated, SendEventOutcome.Sent}, requestId + ) + (await node.stop()).isOkOr: + raiseAssert "Failed to stop node: " & error + + asyncTest "Send fails with event": + var fakeLightpushNode: WakuNode + lockNewGlobalBrokerContext: + fakeLightpushNode = + newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) + fakeLightpushNode.mountMetadata(1, @[0'u16]).isOkOr: + raiseAssert "Failed to mount metadata: " & error + (await fakeLightpushNode.mountRelay()).isOkOr: + raiseAssert "Failed to mount relay" + (await fakeLightpushNode.mountLightPush()).isOkOr: + raiseAssert "Failed to mount lightpush" + await fakeLightpushNode.mountLibp2pPing() + await fakeLightpushNode.start() + let fakeLightpushNodePeerInfo = fakeLightpushNode.peerInfo.toRemotePeerInfo() + proc dummyHandler( + topic: PubsubTopic, msg: WakuMessage + ): Future[void] {.async, gcsafe.} = + discard + + fakeLightpushNode.subscribe( + (kind: PubsubSub, topic: PubsubTopic("/waku/2/rs/1/0")), dummyHandler + ).isOkOr: + raiseAssert "Failed to subscribe fakeLightpushNode: " & error + + var node: Waku + lockNewGlobalBrokerContext: + node = (await createNode(createApiNodeConf(WakuMode.Edge))).valueOr: + raiseAssert error + (await startWaku(addr node)).isOkOr: + raiseAssert "Failed to start Waku node: " & error + + await node.node.connectToNodes(@[fakeLightpushNodePeerInfo]) + + let eventManager = newSendEventListenerManager(node.brokerCtx) + defer: + eventManager.teardown() + + let envelope = MessageEnvelope.init( + ContentTopic("/waku/2/default-content/proto"), "test payload" + ) + + let requestId = (await node.send(envelope)).valueOr: + raiseAssert error + + echo "Sent message with requestId=", requestId + # Wait for events with timeout + const eventTimeout = 62.seconds + discard await eventManager.waitForEvents(eventTimeout) + + 
eventManager.validate({SendEventOutcome.Error}, requestId) + (await node.stop()).isOkOr: + raiseAssert "Failed to stop node: " & error diff --git a/tests/api/test_node_conf.nim b/tests/api/test_node_conf.nim index 232ffc7d2..4dfbd4b51 100644 --- a/tests/api/test_node_conf.nim +++ b/tests/api/test_node_conf.nim @@ -21,6 +21,27 @@ suite "LibWaku Conf - toWakuConf": wakuConf.shardingConf.numShardsInCluster == 8 wakuConf.staticNodes.len == 0 + test "Edge mode configuration": + ## Given + let protocolsConfig = ProtocolsConfig.init(entryNodes = @[], clusterId = 1) + + let nodeConfig = NodeConfig.init(mode = Edge, protocolsConfig = protocolsConfig) + + ## When + let wakuConfRes = toWakuConf(nodeConfig) + + ## Then + require wakuConfRes.isOk() + let wakuConf = wakuConfRes.get() + require wakuConf.validate().isOk() + check: + wakuConf.relay == false + wakuConf.lightPush == false + wakuConf.filterServiceConf.isSome() == false + wakuConf.storeServiceConf.isSome() == false + wakuConf.peerExchangeService == true + wakuConf.clusterId == 1 + test "Core mode configuration": ## Given let protocolsConfig = ProtocolsConfig.init(entryNodes = @[], clusterId = 1) diff --git a/tests/node/test_wakunode_legacy_lightpush.nim b/tests/node/test_wakunode_legacy_lightpush.nim index 4aedd7d4b..902464bcd 100644 --- a/tests/node/test_wakunode_legacy_lightpush.nim +++ b/tests/node/test_wakunode_legacy_lightpush.nim @@ -25,9 +25,6 @@ import suite "Waku Legacy Lightpush - End To End": var - handlerFuture {.threadvar.}: Future[(PubsubTopic, WakuMessage)] - handler {.threadvar.}: PushMessageHandler - server {.threadvar.}: WakuNode client {.threadvar.}: WakuNode @@ -37,13 +34,6 @@ suite "Waku Legacy Lightpush - End To End": message {.threadvar.}: WakuMessage asyncSetup: - handlerFuture = newPushHandlerFuture() - handler = proc( - peer: PeerId, pubsubTopic: PubsubTopic, message: WakuMessage - ): Future[WakuLightPushResult[void]] {.async.} = - handlerFuture.complete((pubsubTopic, message)) - return ok() - 
let serverKey = generateSecp256k1Key() clientKey = generateSecp256k1Key() @@ -108,9 +98,6 @@ suite "Waku Legacy Lightpush - End To End": suite "RLN Proofs as a Lightpush Service": var - handlerFuture {.threadvar.}: Future[(PubsubTopic, WakuMessage)] - handler {.threadvar.}: PushMessageHandler - server {.threadvar.}: WakuNode client {.threadvar.}: WakuNode anvilProc {.threadvar.}: Process @@ -122,13 +109,6 @@ suite "RLN Proofs as a Lightpush Service": message {.threadvar.}: WakuMessage asyncSetup: - handlerFuture = newPushHandlerFuture() - handler = proc( - peer: PeerId, pubsubTopic: PubsubTopic, message: WakuMessage - ): Future[WakuLightPushResult[void]] {.async.} = - handlerFuture.complete((pubsubTopic, message)) - return ok() - let serverKey = generateSecp256k1Key() clientKey = generateSecp256k1Key() diff --git a/tests/node/test_wakunode_lightpush.nim b/tests/node/test_wakunode_lightpush.nim index 7b4da6d4c..66b87b85e 100644 --- a/tests/node/test_wakunode_lightpush.nim +++ b/tests/node/test_wakunode_lightpush.nim @@ -37,13 +37,6 @@ suite "Waku Lightpush - End To End": message {.threadvar.}: WakuMessage asyncSetup: - handlerFuture = newPushHandlerFuture() - handler = proc( - peer: PeerId, pubsubTopic: PubsubTopic, message: WakuMessage - ): Future[WakuLightPushResult] {.async.} = - handlerFuture.complete((pubsubTopic, message)) - return ok(PublishedToOnePeer) - let serverKey = generateSecp256k1Key() clientKey = generateSecp256k1Key() @@ -108,9 +101,6 @@ suite "Waku Lightpush - End To End": suite "RLN Proofs as a Lightpush Service": var - handlerFuture {.threadvar.}: Future[(PubsubTopic, WakuMessage)] - handler {.threadvar.}: PushMessageHandler - server {.threadvar.}: WakuNode client {.threadvar.}: WakuNode anvilProc {.threadvar.}: Process @@ -122,13 +112,6 @@ suite "RLN Proofs as a Lightpush Service": message {.threadvar.}: WakuMessage asyncSetup: - handlerFuture = newPushHandlerFuture() - handler = proc( - peer: PeerId, pubsubTopic: PubsubTopic, message: 
WakuMessage - ): Future[WakuLightPushResult] {.async.} = - handlerFuture.complete((pubsubTopic, message)) - return ok(PublishedToOnePeer) - let serverKey = generateSecp256k1Key() clientKey = generateSecp256k1Key() diff --git a/tests/waku_lightpush/test_client.nim b/tests/waku_lightpush/test_client.nim index af22ffa5d..0bc9afdd4 100644 --- a/tests/waku_lightpush/test_client.nim +++ b/tests/waku_lightpush/test_client.nim @@ -38,7 +38,7 @@ suite "Waku Lightpush Client": asyncSetup: handlerFuture = newPushHandlerFuture() handler = proc( - peer: PeerId, pubsubTopic: PubsubTopic, message: WakuMessage + pubsubTopic: PubsubTopic, message: WakuMessage ): Future[WakuLightPushResult] {.async.} = let msgLen = message.encode().buffer.len if msgLen > int(DefaultMaxWakuMessageSize) + 64 * 1024: @@ -287,7 +287,7 @@ suite "Waku Lightpush Client": handlerError = "handler-error" handlerFuture2 = newFuture[void]() handler2 = proc( - peer: PeerId, pubsubTopic: PubsubTopic, message: WakuMessage + pubsubTopic: PubsubTopic, message: WakuMessage ): Future[WakuLightPushResult] {.async.} = handlerFuture2.complete() return lighpushErrorResult(LightPushErrorCode.PAYLOAD_TOO_LARGE, handlerError) diff --git a/tests/waku_lightpush/test_ratelimit.nim b/tests/waku_lightpush/test_ratelimit.nim index ffbd1a06d..e023bf3f5 100644 --- a/tests/waku_lightpush/test_ratelimit.nim +++ b/tests/waku_lightpush/test_ratelimit.nim @@ -19,7 +19,7 @@ suite "Rate limited push service": ## Given var handlerFuture = newFuture[(string, WakuMessage)]() let handler: PushMessageHandler = proc( - peer: PeerId, pubsubTopic: PubsubTopic, message: WakuMessage + pubsubTopic: PubsubTopic, message: WakuMessage ): Future[WakuLightPushResult] {.async.} = handlerFuture.complete((pubsubTopic, message)) return lightpushSuccessResult(1) # succeed to publish to 1 peer. @@ -84,7 +84,7 @@ suite "Rate limited push service": # CI can be slow enough that sequential requests accidentally refill tokens. 
# Instead we issue a small burst and assert we observe at least one rejection. let handler = proc( - peer: PeerId, pubsubTopic: PubsubTopic, message: WakuMessage + pubsubTopic: PubsubTopic, message: WakuMessage ): Future[WakuLightPushResult] {.async.} = return lightpushSuccessResult(1) diff --git a/tests/waku_lightpush_legacy/test_client.nim b/tests/waku_lightpush_legacy/test_client.nim index 1dcb466c9..3d3027e9c 100644 --- a/tests/waku_lightpush_legacy/test_client.nim +++ b/tests/waku_lightpush_legacy/test_client.nim @@ -35,7 +35,7 @@ suite "Waku Legacy Lightpush Client": asyncSetup: handlerFuture = newPushHandlerFuture() handler = proc( - peer: PeerId, pubsubTopic: PubsubTopic, message: WakuMessage + pubsubTopic: PubsubTopic, message: WakuMessage ): Future[WakuLightPushResult[void]] {.async.} = let msgLen = message.encode().buffer.len if msgLen > int(DefaultMaxWakuMessageSize) + 64 * 1024: @@ -282,7 +282,7 @@ suite "Waku Legacy Lightpush Client": handlerError = "handler-error" handlerFuture2 = newFuture[void]() handler2 = proc( - peer: PeerId, pubsubTopic: PubsubTopic, message: WakuMessage + pubsubTopic: PubsubTopic, message: WakuMessage ): Future[WakuLightPushResult[void]] {.async.} = handlerFuture2.complete() return err(handlerError) diff --git a/tests/waku_lightpush_legacy/test_ratelimit.nim b/tests/waku_lightpush_legacy/test_ratelimit.nim index 37c43a066..ae5f5ed28 100644 --- a/tests/waku_lightpush_legacy/test_ratelimit.nim +++ b/tests/waku_lightpush_legacy/test_ratelimit.nim @@ -25,7 +25,7 @@ suite "Rate limited push service": ## Given var handlerFuture = newFuture[(string, WakuMessage)]() let handler: PushMessageHandler = proc( - peer: PeerId, pubsubTopic: PubsubTopic, message: WakuMessage + pubsubTopic: PubsubTopic, message: WakuMessage ): Future[WakuLightPushResult[void]] {.async.} = handlerFuture.complete((pubsubTopic, message)) return ok() @@ -87,7 +87,7 @@ suite "Rate limited push service": ## Given let handler = proc( - peer: PeerId, pubsubTopic: 
PubsubTopic, message: WakuMessage + pubsubTopic: PubsubTopic, message: WakuMessage ): Future[WakuLightPushResult[void]] {.async.} = return ok() diff --git a/tests/waku_relay/test_wakunode_relay.nim b/tests/waku_relay/test_wakunode_relay.nim index 2b4f32617..a687119bd 100644 --- a/tests/waku_relay/test_wakunode_relay.nim +++ b/tests/waku_relay/test_wakunode_relay.nim @@ -1,7 +1,7 @@ {.used.} import - std/[os, sequtils, sysrand, math], + std/[os, strutils, sequtils, sysrand, math], stew/byteutils, testutils/unittests, chronos, @@ -450,7 +450,8 @@ suite "WakuNode - Relay": await sleepAsync(500.millis) let res = await node2.publish(some($shard), message) - assert res.isOk(), $res.error + check res.isErr() + check contains($res.error, "NoPeersToPublish") await sleepAsync(500.millis) diff --git a/tests/waku_rln_relay/test_waku_rln_relay.nim b/tests/waku_rln_relay/test_waku_rln_relay.nim index 3430657ad..d9fe0d890 100644 --- a/tests/waku_rln_relay/test_waku_rln_relay.nim +++ b/tests/waku_rln_relay/test_waku_rln_relay.nim @@ -15,6 +15,7 @@ import waku_rln_relay/rln, waku_rln_relay/protocol_metrics, waku_keystore, + common/broker/broker_context, ], ./rln/waku_rln_relay_utils, ./utils_onchain, @@ -233,8 +234,10 @@ suite "Waku rln relay": let index = MembershipIndex(5) let wakuRlnConfig = getWakuRlnConfig(manager = manager, index = index) - let wakuRlnRelay = (await WakuRlnRelay.new(wakuRlnConfig)).valueOr: - raiseAssert $error + var wakuRlnRelay: WakuRlnRelay + lockNewGlobalBrokerContext: + wakuRlnRelay = (await WakuRlnRelay.new(wakuRlnConfig)).valueOr: + raiseAssert $error let manager = cast[OnchainGroupManager](wakuRlnRelay.groupManager) let idCredentials = generateCredentials() @@ -290,8 +293,10 @@ suite "Waku rln relay": let wakuRlnConfig = getWakuRlnConfig(manager = manager, index = index) - let wakuRlnRelay = (await WakuRlnRelay.new(wakuRlnConfig)).valueOr: - raiseAssert $error + var wakuRlnRelay: WakuRlnRelay + lockNewGlobalBrokerContext: + wakuRlnRelay = (await 
WakuRlnRelay.new(wakuRlnConfig)).valueOr: + raiseAssert $error let manager = cast[OnchainGroupManager](wakuRlnRelay.groupManager) let idCredentials = generateCredentials() @@ -340,8 +345,10 @@ suite "Waku rln relay": asyncTest "multiple senders with same external nullifier": let index1 = MembershipIndex(5) let rlnConf1 = getWakuRlnConfig(manager = manager, index = index1) - let wakuRlnRelay1 = (await WakuRlnRelay.new(rlnConf1)).valueOr: - raiseAssert "failed to create waku rln relay: " & $error + var wakuRlnRelay1: WakuRlnRelay + lockNewGlobalBrokerContext: + wakuRlnRelay1 = (await WakuRlnRelay.new(rlnConf1)).valueOr: + raiseAssert "failed to create waku rln relay: " & $error let manager1 = cast[OnchainGroupManager](wakuRlnRelay1.groupManager) let idCredentials1 = generateCredentials() @@ -354,8 +361,10 @@ suite "Waku rln relay": let index2 = MembershipIndex(6) let rlnConf2 = getWakuRlnConfig(manager = manager, index = index2) - let wakuRlnRelay2 = (await WakuRlnRelay.new(rlnConf2)).valueOr: - raiseAssert "failed to create waku rln relay: " & $error + var wakuRlnRelay2: WakuRlnRelay + lockNewGlobalBrokerContext: + wakuRlnRelay2 = (await WakuRlnRelay.new(rlnConf2)).valueOr: + raiseAssert "failed to create waku rln relay: " & $error let manager2 = cast[OnchainGroupManager](wakuRlnRelay2.groupManager) let idCredentials2 = generateCredentials() @@ -486,9 +495,10 @@ suite "Waku rln relay": let wakuRlnConfig = getWakuRlnConfig( manager = manager, index = index, epochSizeSec = rlnEpochSizeSec.uint64 ) - - let wakuRlnRelay = (await WakuRlnRelay.new(wakuRlnConfig)).valueOr: - raiseAssert $error + var wakuRlnRelay: WakuRlnRelay + lockNewGlobalBrokerContext: + wakuRlnRelay = (await WakuRlnRelay.new(wakuRlnConfig)).valueOr: + raiseAssert $error let rlnMaxEpochGap = wakuRlnRelay.rlnMaxEpochGap let testProofMetadata = default(ProofMetadata) diff --git a/tests/waku_rln_relay/test_wakunode_rln_relay.nim b/tests/waku_rln_relay/test_wakunode_rln_relay.nim index 1850b5277..fcf97a671 
100644 --- a/tests/waku_rln_relay/test_wakunode_rln_relay.nim +++ b/tests/waku_rln_relay/test_wakunode_rln_relay.nim @@ -12,7 +12,8 @@ import waku/[waku_core, waku_node, waku_rln_relay], ../testlib/[wakucore, futures, wakunode, testutils], ./utils_onchain, - ./rln/waku_rln_relay_utils + ./rln/waku_rln_relay_utils, + waku/common/broker/broker_context from std/times import epochTime @@ -37,68 +38,70 @@ procSuite "WakuNode - RLN relay": stopAnvil(anvilProc) asyncTest "testing rln-relay with valid proof": - let - # publisher node - nodeKey1 = generateSecp256k1Key() + var node1, node2, node3: WakuNode # publisher node + let contentTopic = ContentTopic("/waku/2/default-content/proto") + # set up three nodes + lockNewGlobalBrokerContext: + let nodeKey1 = generateSecp256k1Key() node1 = newTestWakuNode(nodeKey1, parseIpAddress("0.0.0.0"), Port(0)) + (await node1.mountRelay()).isOkOr: + assert false, "Failed to mount relay" + + # mount rlnrelay in off-chain mode + let wakuRlnConfig1 = + getWakuRlnConfig(manager = manager, index = MembershipIndex(1)) + + await node1.mountRlnRelay(wakuRlnConfig1) + await node1.start() + + # Registration is mandatory before sending messages with rln-relay + let manager1 = cast[OnchainGroupManager](node1.wakuRlnRelay.groupManager) + let idCredentials1 = generateCredentials() + + try: + waitFor manager1.register(idCredentials1, UserMessageLimit(20)) + except Exception, CatchableError: + assert false, + "exception raised when calling register: " & getCurrentExceptionMsg() + + let rootUpdated1 = waitFor manager1.updateRoots() + info "Updated root for node1", rootUpdated1 + + lockNewGlobalBrokerContext: # Relay node - nodeKey2 = generateSecp256k1Key() + let nodeKey2 = generateSecp256k1Key() node2 = newTestWakuNode(nodeKey2, parseIpAddress("0.0.0.0"), Port(0)) + + (await node2.mountRelay()).isOkOr: + assert false, "Failed to mount relay" + # mount rlnrelay in off-chain mode + let wakuRlnConfig2 = + getWakuRlnConfig(manager = manager, index = 
MembershipIndex(2)) + + await node2.mountRlnRelay(wakuRlnConfig2) + await node2.start() + + let manager2 = cast[OnchainGroupManager](node2.wakuRlnRelay.groupManager) + let rootUpdated2 = waitFor manager2.updateRoots() + info "Updated root for node2", rootUpdated2 + + lockNewGlobalBrokerContext: # Subscriber - nodeKey3 = generateSecp256k1Key() + let nodeKey3 = generateSecp256k1Key() node3 = newTestWakuNode(nodeKey3, parseIpAddress("0.0.0.0"), Port(0)) - contentTopic = ContentTopic("/waku/2/default-content/proto") + (await node3.mountRelay()).isOkOr: + assert false, "Failed to mount relay" - # set up three nodes - # node1 - (await node1.mountRelay()).isOkOr: - assert false, "Failed to mount relay" + let wakuRlnConfig3 = + getWakuRlnConfig(manager = manager, index = MembershipIndex(3)) - # mount rlnrelay in off-chain mode - let wakuRlnConfig1 = getWakuRlnConfig(manager = manager, index = MembershipIndex(1)) + await node3.mountRlnRelay(wakuRlnConfig3) + await node3.start() - await node1.mountRlnRelay(wakuRlnConfig1) - await node1.start() - - # Registration is mandatory before sending messages with rln-relay - let manager1 = cast[OnchainGroupManager](node1.wakuRlnRelay.groupManager) - let idCredentials1 = generateCredentials() - - try: - waitFor manager1.register(idCredentials1, UserMessageLimit(20)) - except Exception, CatchableError: - assert false, - "exception raised when calling register: " & getCurrentExceptionMsg() - - let rootUpdated1 = waitFor manager1.updateRoots() - info "Updated root for node1", rootUpdated1 - - # node 2 - (await node2.mountRelay()).isOkOr: - assert false, "Failed to mount relay" - # mount rlnrelay in off-chain mode - let wakuRlnConfig2 = getWakuRlnConfig(manager = manager, index = MembershipIndex(2)) - - await node2.mountRlnRelay(wakuRlnConfig2) - await node2.start() - - let manager2 = cast[OnchainGroupManager](node2.wakuRlnRelay.groupManager) - let rootUpdated2 = waitFor manager2.updateRoots() - info "Updated root for node2", rootUpdated2 
- - # node 3 - (await node3.mountRelay()).isOkOr: - assert false, "Failed to mount relay" - - let wakuRlnConfig3 = getWakuRlnConfig(manager = manager, index = MembershipIndex(3)) - - await node3.mountRlnRelay(wakuRlnConfig3) - await node3.start() - - let manager3 = cast[OnchainGroupManager](node3.wakuRlnRelay.groupManager) - let rootUpdated3 = waitFor manager3.updateRoots() - info "Updated root for node3", rootUpdated3 + let manager3 = cast[OnchainGroupManager](node3.wakuRlnRelay.groupManager) + let rootUpdated3 = waitFor manager3.updateRoots() + info "Updated root for node3", rootUpdated3 # connect them together await node1.connectToNodes(@[node2.switch.peerInfo.toRemotePeerInfo()]) @@ -156,10 +159,67 @@ procSuite "WakuNode - RLN relay": asyncTest "testing rln-relay is applied in all rln shards/content topics": # create 3 nodes - let nodes = toSeq(0 ..< 3).mapIt( - newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) - ) - await allFutures(nodes.mapIt(it.start())) + var node1, node2, node3: WakuNode + lockNewGlobalBrokerContext: + let nodeKey1 = generateSecp256k1Key() + node1 = newTestWakuNode(nodeKey1, parseIpAddress("0.0.0.0"), Port(0)) + (await node1.mountRelay()).isOkOr: + assert false, "Failed to mount relay" + let wakuRlnConfig1 = + getWakuRlnConfig(manager = manager, index = MembershipIndex(1)) + await node1.mountRlnRelay(wakuRlnConfig1) + await node1.start() + let manager1 = cast[OnchainGroupManager](node1.wakuRlnRelay.groupManager) + let idCredentials1 = generateCredentials() + + try: + waitFor manager1.register(idCredentials1, UserMessageLimit(20)) + except Exception, CatchableError: + assert false, + "exception raised when calling register: " & getCurrentExceptionMsg() + + let rootUpdated1 = waitFor manager1.updateRoots() + info "Updated root for node", node = 1, rootUpdated = rootUpdated1 + lockNewGlobalBrokerContext: + let nodeKey2 = generateSecp256k1Key() + node2 = newTestWakuNode(nodeKey2, parseIpAddress("0.0.0.0"), Port(0)) + 
(await node2.mountRelay()).isOkOr: + assert false, "Failed to mount relay" + let wakuRlnConfig2 = + getWakuRlnConfig(manager = manager, index = MembershipIndex(2)) + await node2.mountRlnRelay(wakuRlnConfig2) + await node2.start() + let manager2 = cast[OnchainGroupManager](node2.wakuRlnRelay.groupManager) + let idCredentials2 = generateCredentials() + + try: + waitFor manager2.register(idCredentials2, UserMessageLimit(20)) + except Exception, CatchableError: + assert false, + "exception raised when calling register: " & getCurrentExceptionMsg() + + let rootUpdated2 = waitFor manager2.updateRoots() + info "Updated root for node", node = 2, rootUpdated = rootUpdated2 + lockNewGlobalBrokerContext: + let nodeKey3 = generateSecp256k1Key() + node3 = newTestWakuNode(nodeKey3, parseIpAddress("0.0.0.0"), Port(0)) + (await node3.mountRelay()).isOkOr: + assert false, "Failed to mount relay" + let wakuRlnConfig3 = + getWakuRlnConfig(manager = manager, index = MembershipIndex(3)) + await node3.mountRlnRelay(wakuRlnConfig3) + await node3.start() + let manager3 = cast[OnchainGroupManager](node3.wakuRlnRelay.groupManager) + let idCredentials3 = generateCredentials() + + try: + waitFor manager3.register(idCredentials3, UserMessageLimit(20)) + except Exception, CatchableError: + assert false, + "exception raised when calling register: " & getCurrentExceptionMsg() + + let rootUpdated3 = waitFor manager3.updateRoots() + info "Updated root for node", node = 3, rootUpdated = rootUpdated3 let shards = @[RelayShard(clusterId: 0, shardId: 0), RelayShard(clusterId: 0, shardId: 1)] @@ -169,31 +229,9 @@ procSuite "WakuNode - RLN relay": ContentTopic("/waku/2/content-topic-b/proto"), ] - # set up three nodes - await allFutures(nodes.mapIt(it.mountRelay())) - - # mount rlnrelay in off-chain mode - for index, node in nodes: - let wakuRlnConfig = - getWakuRlnConfig(manager = manager, index = MembershipIndex(index + 1)) - - await node.mountRlnRelay(wakuRlnConfig) - await node.start() - let manager 
= cast[OnchainGroupManager](node.wakuRlnRelay.groupManager) - let idCredentials = generateCredentials() - - try: - waitFor manager.register(idCredentials, UserMessageLimit(20)) - except Exception, CatchableError: - assert false, - "exception raised when calling register: " & getCurrentExceptionMsg() - - let rootUpdated = waitFor manager.updateRoots() - info "Updated root for node", node = index + 1, rootUpdated = rootUpdated - # connect them together - await nodes[0].connectToNodes(@[nodes[1].switch.peerInfo.toRemotePeerInfo()]) - await nodes[2].connectToNodes(@[nodes[1].switch.peerInfo.toRemotePeerInfo()]) + await node1.connectToNodes(@[node2.switch.peerInfo.toRemotePeerInfo()]) + await node3.connectToNodes(@[node2.switch.peerInfo.toRemotePeerInfo()]) var rxMessagesTopic1 = 0 var rxMessagesTopic2 = 0 @@ -211,15 +249,15 @@ procSuite "WakuNode - RLN relay": ): Future[void] {.async, gcsafe.} = await sleepAsync(0.milliseconds) - nodes[0].subscribe((kind: PubsubSub, topic: DefaultPubsubTopic), simpleHandler).isOkOr: - assert false, "Failed to subscribe to pubsub topic in nodes[0]: " & $error - nodes[1].subscribe((kind: PubsubSub, topic: DefaultPubsubTopic), simpleHandler).isOkOr: - assert false, "Failed to subscribe to pubsub topic in nodes[1]: " & $error + node1.subscribe((kind: PubsubSub, topic: DefaultPubsubTopic), simpleHandler).isOkOr: + assert false, "Failed to subscribe to pubsub topic in node1: " & $error + node2.subscribe((kind: PubsubSub, topic: DefaultPubsubTopic), simpleHandler).isOkOr: + assert false, "Failed to subscribe to pubsub topic in node2: " & $error # mount the relay handlers - nodes[2].subscribe((kind: PubsubSub, topic: $shards[0]), relayHandler).isOkOr: + node3.subscribe((kind: PubsubSub, topic: $shards[0]), relayHandler).isOkOr: assert false, "Failed to subscribe to pubsub topic: " & $error - nodes[2].subscribe((kind: PubsubSub, topic: $shards[1]), relayHandler).isOkOr: + node3.subscribe((kind: PubsubSub, topic: $shards[1]), 
relayHandler).isOkOr: assert false, "Failed to subscribe to pubsub topic: " & $error await sleepAsync(1000.millis) @@ -236,8 +274,8 @@ procSuite "WakuNode - RLN relay": contentTopic: contentTopics[0], ) - nodes[0].wakuRlnRelay.unsafeAppendRLNProof( - message, nodes[0].wakuRlnRelay.getCurrentEpoch(), MessageId(i.uint8) + node1.wakuRlnRelay.unsafeAppendRLNProof( + message, node1.wakuRlnRelay.getCurrentEpoch(), MessageId(i.uint8) ).isOkOr: raiseAssert $error messages1.add(message) @@ -249,8 +287,8 @@ procSuite "WakuNode - RLN relay": contentTopic: contentTopics[1], ) - nodes[1].wakuRlnRelay.unsafeAppendRLNProof( - message, nodes[1].wakuRlnRelay.getCurrentEpoch(), MessageId(i.uint8) + node2.wakuRlnRelay.unsafeAppendRLNProof( + message, node2.wakuRlnRelay.getCurrentEpoch(), MessageId(i.uint8) ).isOkOr: raiseAssert $error messages2.add(message) @@ -258,9 +296,9 @@ procSuite "WakuNode - RLN relay": # publish 3 messages from node[0] (last 2 are spam, window is 10 secs) # publish 3 messages from node[1] (last 2 are spam, window is 10 secs) for msg in messages1: - discard await nodes[0].publish(some($shards[0]), msg) + discard await node1.publish(some($shards[0]), msg) for msg in messages2: - discard await nodes[1].publish(some($shards[1]), msg) + discard await node2.publish(some($shards[1]), msg) # wait for gossip to propagate await sleepAsync(5000.millis) @@ -271,70 +309,70 @@ procSuite "WakuNode - RLN relay": rxMessagesTopic1 == 3 rxMessagesTopic2 == 3 - await allFutures(nodes.mapIt(it.stop())) + await node1.stop() + await node2.stop() + await node3.stop() asyncTest "testing rln-relay with invalid proof": - let + var node1, node2, node3: WakuNode + let contentTopic = ContentTopic("/waku/2/default-content/proto") + lockNewGlobalBrokerContext: # publisher node - nodeKey1 = generateSecp256k1Key() + let nodeKey1 = generateSecp256k1Key() node1 = newTestWakuNode(nodeKey1, parseIpAddress("0.0.0.0"), Port(0)) + (await node1.mountRelay()).isOkOr: + assert false, "Failed to mount 
relay" + + # mount rlnrelay in off-chain mode + let wakuRlnConfig1 = + getWakuRlnConfig(manager = manager, index = MembershipIndex(1)) + + await node1.mountRlnRelay(wakuRlnConfig1) + await node1.start() + + let manager1 = cast[OnchainGroupManager](node1.wakuRlnRelay.groupManager) + let idCredentials1 = generateCredentials() + + try: + waitFor manager1.register(idCredentials1, UserMessageLimit(20)) + except Exception, CatchableError: + assert false, + "exception raised when calling register: " & getCurrentExceptionMsg() + + let rootUpdated1 = waitFor manager1.updateRoots() + info "Updated root for node1", rootUpdated1 + lockNewGlobalBrokerContext: # Relay node - nodeKey2 = generateSecp256k1Key() + let nodeKey2 = generateSecp256k1Key() node2 = newTestWakuNode(nodeKey2, parseIpAddress("0.0.0.0"), Port(0)) + (await node2.mountRelay()).isOkOr: + assert false, "Failed to mount relay" + # mount rlnrelay in off-chain mode + let wakuRlnConfig2 = + getWakuRlnConfig(manager = manager, index = MembershipIndex(2)) + + await node2.mountRlnRelay(wakuRlnConfig2) + await node2.start() + + let manager2 = cast[OnchainGroupManager](node2.wakuRlnRelay.groupManager) + let rootUpdated2 = waitFor manager2.updateRoots() + info "Updated root for node2", rootUpdated2 + lockNewGlobalBrokerContext: # Subscriber - nodeKey3 = generateSecp256k1Key() + let nodeKey3 = generateSecp256k1Key() node3 = newTestWakuNode(nodeKey3, parseIpAddress("0.0.0.0"), Port(0)) + (await node3.mountRelay()).isOkOr: + assert false, "Failed to mount relay" - contentTopic = ContentTopic("/waku/2/default-content/proto") + let wakuRlnConfig3 = + getWakuRlnConfig(manager = manager, index = MembershipIndex(3)) - # set up three nodes - # node1 - (await node1.mountRelay()).isOkOr: - assert false, "Failed to mount relay" + await node3.mountRlnRelay(wakuRlnConfig3) + await node3.start() - # mount rlnrelay in off-chain mode - let wakuRlnConfig1 = getWakuRlnConfig(manager = manager, index = MembershipIndex(1)) - - await 
node1.mountRlnRelay(wakuRlnConfig1) - await node1.start() - - let manager1 = cast[OnchainGroupManager](node1.wakuRlnRelay.groupManager) - let idCredentials1 = generateCredentials() - - try: - waitFor manager1.register(idCredentials1, UserMessageLimit(20)) - except Exception, CatchableError: - assert false, - "exception raised when calling register: " & getCurrentExceptionMsg() - - let rootUpdated1 = waitFor manager1.updateRoots() - info "Updated root for node1", rootUpdated1 - - # node 2 - (await node2.mountRelay()).isOkOr: - assert false, "Failed to mount relay" - # mount rlnrelay in off-chain mode - let wakuRlnConfig2 = getWakuRlnConfig(manager = manager, index = MembershipIndex(2)) - - await node2.mountRlnRelay(wakuRlnConfig2) - await node2.start() - - let manager2 = cast[OnchainGroupManager](node2.wakuRlnRelay.groupManager) - let rootUpdated2 = waitFor manager2.updateRoots() - info "Updated root for node2", rootUpdated2 - - # node 3 - (await node3.mountRelay()).isOkOr: - assert false, "Failed to mount relay" - - let wakuRlnConfig3 = getWakuRlnConfig(manager = manager, index = MembershipIndex(3)) - - await node3.mountRlnRelay(wakuRlnConfig3) - await node3.start() - - let manager3 = cast[OnchainGroupManager](node3.wakuRlnRelay.groupManager) - let rootUpdated3 = waitFor manager3.updateRoots() - info "Updated root for node3", rootUpdated3 + let manager3 = cast[OnchainGroupManager](node3.wakuRlnRelay.groupManager) + let rootUpdated3 = waitFor manager3.updateRoots() + info "Updated root for node3", rootUpdated3 # connect them together await node1.connectToNodes(@[node2.switch.peerInfo.toRemotePeerInfo()]) @@ -390,72 +428,70 @@ procSuite "WakuNode - RLN relay": await node3.stop() asyncTest "testing rln-relay double-signaling detection": - let + var node1, node2, node3: WakuNode + let contentTopic = ContentTopic("/waku/2/default-content/proto") + lockNewGlobalBrokerContext: # publisher node - nodeKey1 = generateSecp256k1Key() + let nodeKey1 = generateSecp256k1Key() 
node1 = newTestWakuNode(nodeKey1, parseIpAddress("0.0.0.0"), Port(0)) + (await node1.mountRelay()).isOkOr: + assert false, "Failed to mount relay" + + # mount rlnrelay in off-chain mode + let wakuRlnConfig1 = + getWakuRlnConfig(manager = manager, index = MembershipIndex(1)) + + await node1.mountRlnRelay(wakuRlnConfig1) + await node1.start() + + # Registration is mandatory before sending messages with rln-relay + let manager1 = cast[OnchainGroupManager](node1.wakuRlnRelay.groupManager) + let idCredentials1 = generateCredentials() + + try: + waitFor manager1.register(idCredentials1, UserMessageLimit(20)) + except Exception, CatchableError: + assert false, + "exception raised when calling register: " & getCurrentExceptionMsg() + + let rootUpdated1 = waitFor manager1.updateRoots() + info "Updated root for node1", rootUpdated1 + lockNewGlobalBrokerContext: # Relay node - nodeKey2 = generateSecp256k1Key() + let nodeKey2 = generateSecp256k1Key() node2 = newTestWakuNode(nodeKey2, parseIpAddress("0.0.0.0"), Port(0)) + (await node2.mountRelay()).isOkOr: + assert false, "Failed to mount relay" + + # mount rlnrelay in off-chain mode + let wakuRlnConfig2 = + getWakuRlnConfig(manager = manager, index = MembershipIndex(2)) + + await node2.mountRlnRelay(wakuRlnConfig2) + await node2.start() + + # Registration is mandatory before sending messages with rln-relay + let manager2 = cast[OnchainGroupManager](node2.wakuRlnRelay.groupManager) + let rootUpdated2 = waitFor manager2.updateRoots() + info "Updated root for node2", rootUpdated2 + lockNewGlobalBrokerContext: # Subscriber - nodeKey3 = generateSecp256k1Key() + let nodeKey3 = generateSecp256k1Key() node3 = newTestWakuNode(nodeKey3, parseIpAddress("0.0.0.0"), Port(0)) + (await node3.mountRelay()).isOkOr: + assert false, "Failed to mount relay" - contentTopic = ContentTopic("/waku/2/default-content/proto") + # mount rlnrelay in off-chain mode + let wakuRlnConfig3 = + getWakuRlnConfig(manager = manager, index = MembershipIndex(3)) - # 
set up three nodes - # node1 - (await node1.mountRelay()).isOkOr: - assert false, "Failed to mount relay" + await node3.mountRlnRelay(wakuRlnConfig3) + await node3.start() - # mount rlnrelay in off-chain mode - let wakuRlnConfig1 = getWakuRlnConfig(manager = manager, index = MembershipIndex(1)) - - await node1.mountRlnRelay(wakuRlnConfig1) - await node1.start() - - # Registration is mandatory before sending messages with rln-relay - let manager1 = cast[OnchainGroupManager](node1.wakuRlnRelay.groupManager) - let idCredentials1 = generateCredentials() - - try: - waitFor manager1.register(idCredentials1, UserMessageLimit(20)) - except Exception, CatchableError: - assert false, - "exception raised when calling register: " & getCurrentExceptionMsg() - - let rootUpdated1 = waitFor manager1.updateRoots() - info "Updated root for node1", rootUpdated1 - - # node 2 - (await node2.mountRelay()).isOkOr: - assert false, "Failed to mount relay" - - # mount rlnrelay in off-chain mode - let wakuRlnConfig2 = getWakuRlnConfig(manager = manager, index = MembershipIndex(2)) - - await node2.mountRlnRelay(wakuRlnConfig2) - await node2.start() - - # Registration is mandatory before sending messages with rln-relay - let manager2 = cast[OnchainGroupManager](node2.wakuRlnRelay.groupManager) - let rootUpdated2 = waitFor manager2.updateRoots() - info "Updated root for node2", rootUpdated2 - - # node 3 - (await node3.mountRelay()).isOkOr: - assert false, "Failed to mount relay" - - # mount rlnrelay in off-chain mode - let wakuRlnConfig3 = getWakuRlnConfig(manager = manager, index = MembershipIndex(3)) - - await node3.mountRlnRelay(wakuRlnConfig3) - await node3.start() - - # Registration is mandatory before sending messages with rln-relay - let manager3 = cast[OnchainGroupManager](node3.wakuRlnRelay.groupManager) - let rootUpdated3 = waitFor manager3.updateRoots() - info "Updated root for node3", rootUpdated3 + # Registration is mandatory before sending messages with rln-relay + let manager3 = 
cast[OnchainGroupManager](node3.wakuRlnRelay.groupManager) + let rootUpdated3 = waitFor manager3.updateRoots() + info "Updated root for node3", rootUpdated3 # connect the nodes together node1 <-> node2 <-> node3 await node1.connectToNodes(@[node2.switch.peerInfo.toRemotePeerInfo()]) @@ -565,49 +601,49 @@ procSuite "WakuNode - RLN relay": xasyncTest "clearNullifierLog: should clear epochs > MaxEpochGap": ## This is skipped because is flaky and made CI randomly fail but is useful to run manually # Given two nodes + var node1, node2: WakuNode let contentTopic = ContentTopic("/waku/2/default-content/proto") shardSeq = @[DefaultRelayShard] - nodeKey1 = generateSecp256k1Key() - node1 = newTestWakuNode(nodeKey1, parseIpAddress("0.0.0.0"), Port(0)) - nodeKey2 = generateSecp256k1Key() - node2 = newTestWakuNode(nodeKey2, parseIpAddress("0.0.0.0"), Port(0)) epochSizeSec: uint64 = 5 # This means rlnMaxEpochGap = 4 + lockNewGlobalBrokerContext: + let nodeKey1 = generateSecp256k1Key() + node1 = newTestWakuNode(nodeKey1, parseIpAddress("0.0.0.0"), Port(0)) + (await node1.mountRelay()).isOkOr: + assert false, "Failed to mount relay" + let wakuRlnConfig1 = + getWakuRlnConfig(manager = manager, index = MembershipIndex(1)) + await node1.mountRlnRelay(wakuRlnConfig1) + await node1.start() - # Given both nodes mount relay and rlnrelay - (await node1.mountRelay()).isOkOr: - assert false, "Failed to mount relay" - let wakuRlnConfig1 = getWakuRlnConfig(manager = manager, index = MembershipIndex(1)) - await node1.mountRlnRelay(wakuRlnConfig1) - await node1.start() + # Registration is mandatory before sending messages with rln-relay + let manager1 = cast[OnchainGroupManager](node1.wakuRlnRelay.groupManager) + let idCredentials1 = generateCredentials() - # Registration is mandatory before sending messages with rln-relay - let manager1 = cast[OnchainGroupManager](node1.wakuRlnRelay.groupManager) - let idCredentials1 = generateCredentials() + try: + waitFor manager1.register(idCredentials1, 
UserMessageLimit(20)) + except Exception, CatchableError: + assert false, + "exception raised when calling register: " & getCurrentExceptionMsg() - try: - waitFor manager1.register(idCredentials1, UserMessageLimit(20)) - except Exception, CatchableError: - assert false, - "exception raised when calling register: " & getCurrentExceptionMsg() + let rootUpdated1 = waitFor manager1.updateRoots() + info "Updated root for node1", rootUpdated1 + lockNewGlobalBrokerContext: + let nodeKey2 = generateSecp256k1Key() + node2 = newTestWakuNode(nodeKey2, parseIpAddress("0.0.0.0"), Port(0)) + (await node2.mountRelay()).isOkOr: + assert false, "Failed to mount relay" + let wakuRlnConfig2 = + getWakuRlnConfig(manager = manager, index = MembershipIndex(2)) + await node2.mountRlnRelay(wakuRlnConfig2) + await node2.start() - let rootUpdated1 = waitFor manager1.updateRoots() - info "Updated root for node1", rootUpdated1 - - # Mount rlnrelay in node2 in off-chain mode - (await node2.mountRelay()).isOkOr: - assert false, "Failed to mount relay" - let wakuRlnConfig2 = getWakuRlnConfig(manager = manager, index = MembershipIndex(2)) - await node2.mountRlnRelay(wakuRlnConfig2) - await node2.start() - - # Registration is mandatory before sending messages with rln-relay - let manager2 = cast[OnchainGroupManager](node2.wakuRlnRelay.groupManager) - let rootUpdated2 = waitFor manager2.updateRoots() - info "Updated root for node2", rootUpdated2 + # Registration is mandatory before sending messages with rln-relay + let manager2 = cast[OnchainGroupManager](node2.wakuRlnRelay.groupManager) + let rootUpdated2 = waitFor manager2.updateRoots() + info "Updated root for node2", rootUpdated2 # Given the two nodes are started and connected - waitFor allFutures(node1.start(), node2.start()) await node1.connectToNodes(@[node2.switch.peerInfo.toRemotePeerInfo()]) # Given some messages diff --git a/tests/wakunode2/test_app.nim b/tests/wakunode2/test_app.nim index b16880787..e94a3b21d 100644 --- 
a/tests/wakunode2/test_app.nim +++ b/tests/wakunode2/test_app.nim @@ -60,7 +60,8 @@ suite "Wakunode2 - Waku initialization": not node.wakuRendezvous.isNil() ## Cleanup - waitFor waku.stop() + (waitFor waku.stop()).isOkOr: + raiseAssert error test "app properly handles dynamic port configuration": ## Given @@ -96,4 +97,5 @@ suite "Wakunode2 - Waku initialization": typedNodeEnr.get().tcp.get() != 0 ## Cleanup - waitFor waku.stop() + (waitFor waku.stop()).isOkOr: + raiseAssert error diff --git a/tests/wakunode_rest/test_rest_relay.nim b/tests/wakunode_rest/test_rest_relay.nim index ca9f7cb17..f16e5c4f4 100644 --- a/tests/wakunode_rest/test_rest_relay.nim +++ b/tests/wakunode_rest/test_rest_relay.nim @@ -21,6 +21,7 @@ import rest_api/endpoint/relay/client as relay_rest_client, waku_relay, waku_rln_relay, + common/broker/broker_context, ], ../testlib/wakucore, ../testlib/wakunode, @@ -505,15 +506,41 @@ suite "Waku v2 Rest API - Relay": asyncTest "Post a message to a content topic - POST /relay/v1/auto/messages/{topic}": ## "Relay API: publish and subscribe/unsubscribe": # Given - let node = testWakuNode() - (await node.mountRelay()).isOkOr: - assert false, "Failed to mount relay" - require node.mountAutoSharding(1, 8).isOk + var meshNode: WakuNode + lockNewGlobalBrokerContext: + meshNode = testWakuNode() + (await meshNode.mountRelay()).isOkOr: + assert false, "Failed to mount relay" + require meshNode.mountAutoSharding(1, 8).isOk - let wakuRlnConfig = getWakuRlnConfig(manager = manager, index = MembershipIndex(1)) + let wakuRlnConfig = + getWakuRlnConfig(manager = manager, index = MembershipIndex(1)) + + await meshNode.mountRlnRelay(wakuRlnConfig) + await meshNode.start() + const testPubsubTopic = PubsubTopic("/waku/2/rs/1/0") + proc dummyHandler( + topic: PubsubTopic, msg: WakuMessage + ): Future[void] {.async, gcsafe.} = + discard + + meshNode.subscribe((kind: ContentSub, topic: DefaultContentTopic), dummyHandler).isOkOr: + raiseAssert "Failed to subscribe meshNode: " 
& error + + var node: WakuNode + lockNewGlobalBrokerContext: + node = testWakuNode() + (await node.mountRelay()).isOkOr: + assert false, "Failed to mount relay" + require node.mountAutoSharding(1, 8).isOk + + let wakuRlnConfig = + getWakuRlnConfig(manager = manager, index = MembershipIndex(1)) + + await node.mountRlnRelay(wakuRlnConfig) + await node.start() + await node.connectToNodes(@[meshNode.peerInfo.toRemotePeerInfo()]) - await node.mountRlnRelay(wakuRlnConfig) - await node.start() # Registration is mandatory before sending messages with rln-relay let manager = cast[OnchainGroupManager](node.wakuRlnRelay.groupManager) let idCredentials = generateCredentials() diff --git a/waku.nim b/waku.nim index 18d52741e..65a017c5a 100644 --- a/waku.nim +++ b/waku.nim @@ -1,10 +1,10 @@ ## Main module for using nwaku as a Nimble library -## +## ## This module re-exports the public API for creating and managing Waku nodes ## when using nwaku as a library dependency. -import waku/api/[api, api_conf] -export api, api_conf +import waku/api +export api import waku/factory/waku export waku diff --git a/waku.nimble b/waku.nimble index afc0ad634..3ff41c5bd 100644 --- a/waku.nimble +++ b/waku.nimble @@ -136,7 +136,7 @@ task testwakunode2, "Build & run wakunode2 app tests": test "all_tests_wakunode2" task example2, "Build Waku examples": - buildBinary "waku_example", "examples/" + buildBinary "api_example", "examples/api_example/" buildBinary "publisher", "examples/" buildBinary "subscriber", "examples/" buildBinary "filter_subscriber", "examples/" @@ -176,6 +176,10 @@ task lightpushwithmix, "Build lightpushwithmix": let name = "lightpush_publisher_mix" buildBinary name, "examples/lightpush_mix/" +task api_example, "Build api_example": + let name = "api_example" + buildBinary name, "examples/api_example/" + task buildone, "Build custom target": let filepath = paramStr(paramCount()) discard buildModule filepath diff --git a/waku/api.nim b/waku/api.nim index c3211867d..110a8f431 100644 
--- a/waku/api.nim +++ b/waku/api.nim @@ -1,3 +1,4 @@ import ./api/[api, api_conf, entry_nodes] +import ./events/message_events -export api, api_conf, entry_nodes +export api, api_conf, entry_nodes, message_events diff --git a/waku/api/api.nim b/waku/api/api.nim index 5bab06188..41f4fd240 100644 --- a/waku/api/api.nim +++ b/waku/api/api.nim @@ -1,8 +1,13 @@ import chronicles, chronos, results import waku/factory/waku +import waku/[requests/health_request, waku_core, waku_node] +import waku/node/delivery_service/send_service +import waku/node/delivery_service/subscription_service +import ./[api_conf, types] -import ./api_conf +logScope: + topics = "api" # TODO: Specs says it should return a `WakuNode`. As `send` and other APIs are defined, we can align. proc createNode*(config: NodeConfig): Future[Result[Waku, string]] {.async.} = @@ -15,3 +20,53 @@ proc createNode*(config: NodeConfig): Future[Result[Waku, string]] {.async.} = return err("Failed setting up Waku: " & $error) return ok(wakuRes) + +proc checkApiAvailability(w: Waku): Result[void, string] = + if w.isNil(): + return err("Waku node is not initialized") + + # check if health is satisfactory + # If Node is not healthy, return err("Waku node is not healthy") + let healthStatus = RequestNodeHealth.request(w.brokerCtx) + + if healthStatus.isErr(): + warn "Failed to get Waku node health status: ", error = healthStatus.error + # Assume the node is healthy enough and proceed + else: + if healthStatus.get().healthStatus == NodeHealth.Unhealthy: + return err("Waku node is not healthy: it has no connections.") + + return ok() + +proc subscribe*( + w: Waku, contentTopic: ContentTopic ): Future[Result[void, string]] {.async.} = + ?checkApiAvailability(w) + + return w.deliveryService.subscriptionService.subscribe(contentTopic) + +proc unsubscribe*(w: Waku, contentTopic: ContentTopic): Result[void, string] = + ?checkApiAvailability(w) + + return w.deliveryService.subscriptionService.unsubscribe(contentTopic) + 
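+# Illustrative usage sketch (hypothetical caller code, not part of this module;
+# `myConfig` is a placeholder NodeConfig and error handling is simplified):
+#
+#   let waku = (await createNode(myConfig)).valueOr:
+#     quit("createNode failed: " & error)
+#   (await waku.subscribe(ContentTopic("/my-app/1/chat/proto"))).isOkOr:
+#     quit("subscribe failed: " & error)
+#   let envelope = MessageEnvelope.init(ContentTopic("/my-app/1/chat/proto"), "hello")
+#   let requestId = (await waku.send(envelope)).valueOr:
+#     quit("send failed: " & error)
+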
+proc send*( + w: Waku, envelope: MessageEnvelope +): Future[Result[RequestId, string]] {.async.} = + ?checkApiAvailability(w) + + let requestId = RequestId.new(w.rng) + + let deliveryTask = DeliveryTask.new(requestId, envelope, w.brokerCtx).valueOr: + return err("API send: Failed to create delivery task: " & error) + + info "API send: scheduling delivery task", + requestId = $requestId, + pubsubTopic = deliveryTask.pubsubTopic, + contentTopic = deliveryTask.msg.contentTopic, + msgHash = deliveryTask.msgHash.to0xHex(), + myPeerId = w.node.peerId() + + asyncSpawn w.deliveryService.sendService.send(deliveryTask) + + return ok(requestId) diff --git a/waku/api/api_conf.nim b/waku/api/api_conf.nim index 155554dfd..47aa9e7d8 100644 --- a/waku/api/api_conf.nim +++ b/waku/api/api_conf.nim @@ -86,6 +86,7 @@ type NodeConfig* {.requiresInit.} = object protocolsConfig: ProtocolsConfig networkingConfig: NetworkingConfig ethRpcEndpoints: seq[string] + p2pReliability: bool proc init*( T: typedesc[NodeConfig], @@ -93,12 +94,14 @@ proc init*( protocolsConfig: ProtocolsConfig = TheWakuNetworkPreset, networkingConfig: NetworkingConfig = DefaultNetworkingConfig, ethRpcEndpoints: seq[string] = @[], + p2pReliability: bool = false, ): T = return T( mode: mode, protocolsConfig: protocolsConfig, networkingConfig: networkingConfig, ethRpcEndpoints: ethRpcEndpoints, + p2pReliability: p2pReliability, ) proc toWakuConf*(nodeConfig: NodeConfig): Result[WakuConf, string] = @@ -131,7 +134,16 @@ proc toWakuConf*(nodeConfig: NodeConfig): Result[WakuConf, string] = b.rateLimitConf.withRateLimits(@["filter:100/1s", "lightpush:5/1s", "px:5/1s"]) of Edge: - return err("Edge mode is not implemented") + # All client side protocols are mounted by default + # Peer exchange client is always enabled and start_node will start the px loop + # Metadata is always mounted + b.withPeerExchange(true) + # switch off all service side protocols and relay + b.withRelay(false) + b.filterServiceConf.withEnabled(false) + 
b.withLightPush(false) + b.storeServiceConf.withEnabled(false) + # Leave discv5 and rendezvous for user choice ## Network Conf let protocolsConfig = nodeConfig.protocolsConfig @@ -193,6 +205,7 @@ proc toWakuConf*(nodeConfig: NodeConfig): Result[WakuConf, string] = ## Various configurations b.withNatStrategy("any") + b.withP2PReliability(nodeConfig.p2pReliability) let wakuConf = b.build().valueOr: return err("Failed to build configuration: " & error) diff --git a/waku/api/send_api.md b/waku/api/send_api.md new file mode 100644 index 000000000..2a5a2f8a4 --- /dev/null +++ b/waku/api/send_api.md @@ -0,0 +1,46 @@ +# SEND API + +**THIS IS TO BE REMOVED BEFORE PR MERGE** + +This document collects logic and TODOs around the Send API. + +## Overview + +The Send API hides the complex logic of using raw protocols for reliable message delivery. +The delivery method is chosen based on the node configuration and the actual availability of peers. + +## Delivery task + +Each message send request is bundled into a task that holds not just the composed message but also the state of its delivery. + +## Delivery methods + +Depending on the configuration and the availability of the store client protocol plus configured and/or discovered store nodes: +- P2PReliability validation - querying network store nodes to check whether the message has reached at least one store node. +- Simple retry until the message is propagated to the network + - Relay reports >0 peers as the publish result + - LightpushClient returns success + +Depending on node config: +- Relay +- Lightpush + +These methods are used in combination to achieve the best reliability. +A fallback mechanism is used to switch between methods if the current one fails. + +Relay+StoreCheck -> Relay+simple retry -> Lightpush+StoreCheck -> Lightpush simple retry -> Error + +The combination is chosen dynamically based on node configuration. Levels can be skipped depending on actual connectivity. 
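+
+The fallback chain above can be sketched as follows (illustrative pseudocode only; the proc and enum names are placeholders, not the actual implementation):
+
+```nim
+proc deliver(task: DeliveryTask): Future[Result[void, string]] {.async.} =
+  # Try methods in order of preference; skip any whose connectivity
+  # prerequisites are not met at the moment of sending.
+  for m in [RelayStoreCheck, RelayRetry, LightpushStoreCheck, LightpushRetry]:
+    if not m.isAvailable():
+      continue
+    if (await m.attempt(task)).isOk():
+      return ok()
+  return err("all delivery methods failed")
+```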
+Actual connectivity is checked: +- Relay's topic health check - at least dLow peers in the mesh for the topic +- Store node availability - at least one store service node is available in the peer manager +- Lightpush client availability - at least one lightpush service node is available in the peer manager + +## Delivery processing + +At every send request, immediate delivery of the task is attempted. +Any further retries and store checks are done by a background task that loops at predefined intervals. +Each task has a maximum number of retries and/or a maximum time to live. + +In each round of store checking and retrying, send tasks are selected based on their state. +The state is updated based on the result of the delivery method. diff --git a/waku/api/types.nim b/waku/api/types.nim new file mode 100644 index 000000000..a0626e98c 100644 --- /dev/null +++ b/waku/api/types.nim @@ -0,0 +1,65 @@ +{.push raises: [].} + +import bearssl/rand, std/times, chronos +import stew/byteutils +import waku/utils/requests as request_utils +import waku/waku_core/[topics/content_topic, message/message, time] +import waku/requests/requests + +type + MessageEnvelope* = object + contentTopic*: ContentTopic + payload*: seq[byte] + ephemeral*: bool + + RequestId* = distinct string + + NodeHealth* {.pure.} = enum + Healthy + MinimallyHealthy + Unhealthy + +proc new*(T: typedesc[RequestId], rng: ref HmacDrbgContext): T = + ## Generate a new RequestId using the provided RNG.
+ RequestId(request_utils.generateRequestId(rng)) + +proc `$`*(r: RequestId): string {.inline.} = + string(r) + +proc `==`*(a, b: RequestId): bool {.inline.} = + string(a) == string(b) + +proc init*( + T: type MessageEnvelope, + contentTopic: ContentTopic, + payload: seq[byte] | string, + ephemeral: bool = false, +): MessageEnvelope = + when payload is seq[byte]: + MessageEnvelope(contentTopic: contentTopic, payload: payload, ephemeral: ephemeral) + else: + MessageEnvelope( + contentTopic: contentTopic, payload: payload.toBytes(), ephemeral: ephemeral + ) + +proc toWakuMessage*(envelope: MessageEnvelope): WakuMessage = + ## Convert a MessageEnvelope to a WakuMessage. + var wm = WakuMessage( + contentTopic: envelope.contentTopic, + payload: envelope.payload, + ephemeral: envelope.ephemeral, + timestamp: getNowInNanosecondTime(), + ) + + ## TODO: First find out if proof is needed at all + ## Follow up: left it to the send logic to add RLN proof if needed and possible + # let requestedProof = ( + # waitFor RequestGenerateRlnProof.request(wm, getTime().toUnixFloat()) + # ).valueOr: + # warn "Failed to add RLN proof to WakuMessage: ", error = error + # return wm + + # wm.proof = requestedProof.proof + return wm + +{.pop.} diff --git a/waku/events/delivery_events.nim b/waku/events/delivery_events.nim new file mode 100644 index 000000000..f8eb0f48d --- /dev/null +++ b/waku/events/delivery_events.nim @@ -0,0 +1,27 @@ +import waku/waku_core/[message/message, message/digest], waku/common/broker/event_broker + +type DeliveryDirection* {.pure.} = enum + PUBLISHING + RECEIVING + +type DeliverySuccess* {.pure.} = enum + SUCCESSFUL + UNSUCCESSFUL + +EventBroker: + type DeliveryFeedbackEvent* = ref object + success*: DeliverySuccess + dir*: DeliveryDirection + comment*: string + msgHash*: WakuMessageHash + msg*: WakuMessage + +EventBroker: + type OnFilterSubscribeEvent* = object + pubsubTopic*: string + contentTopics*: seq[string] + +EventBroker: + type OnFilterUnSubscribeEvent* = 
object + pubsubTopic*: string + contentTopics*: seq[string] diff --git a/waku/events/events.nim b/waku/events/events.nim new file mode 100644 index 000000000..2a0af8828 --- /dev/null +++ b/waku/events/events.nim @@ -0,0 +1,3 @@ +import ./[message_events, delivery_events] + +export message_events, delivery_events diff --git a/waku/events/message_events.nim b/waku/events/message_events.nim new file mode 100644 index 000000000..cf3dac9b7 --- /dev/null +++ b/waku/events/message_events.nim @@ -0,0 +1,30 @@ +import waku/common/broker/event_broker +import waku/api/types +import waku/waku_core/message + +export types + +EventBroker: + # Event emitted when a message is sent to the network + type MessageSentEvent* = object + requestId*: RequestId + messageHash*: string + +EventBroker: + # Event emitted when a message send operation fails + type MessageErrorEvent* = object + requestId*: RequestId + messageHash*: string + error*: string + +EventBroker: + # Confirmation that a message has been correctly delivered to some neighbouring nodes. 
+ type MessagePropagatedEvent* = object + requestId*: RequestId + messageHash*: string + +EventBroker: + # Event emitted when a message is received via Waku + type MessageReceivedEvent* = object + messageHash*: string + message*: WakuMessage diff --git a/waku/factory/waku.nim b/waku/factory/waku.nim index d55206f97..c452d44c5 100644 --- a/waku/factory/waku.nim +++ b/waku/factory/waku.nim @@ -25,7 +25,7 @@ import ../node/peer_manager, ../node/health_monitor, ../node/waku_metrics, - ../node/delivery_monitor/delivery_monitor, + ../node/delivery_service/delivery_service, ../rest_api/message_cache, ../rest_api/endpoint/server, ../rest_api/endpoint/builder as rest_server_builder, @@ -42,7 +42,10 @@ import ../factory/internal_config, ../factory/app_callbacks, ../waku_enr/multiaddr, - ./waku_conf + ./waku_conf, + ../common/broker/broker_context, + ../requests/health_request, + ../api/types logScope: topics = "wakunode waku" @@ -66,12 +69,14 @@ type Waku* = ref object healthMonitor*: NodeHealthMonitor - deliveryMonitor: DeliveryMonitor + deliveryService*: DeliveryService restServer*: WakuRestServerRef metricsServer*: MetricsHttpServerRef appCallbacks*: AppCallbacks + brokerCtx*: BrokerContext + func version*(waku: Waku): string = waku.version @@ -160,6 +165,7 @@ proc new*( T: type Waku, wakuConf: WakuConf, appCallbacks: AppCallbacks = nil ): Future[Result[Waku, string]] {.async.} = let rng = crypto.newRng() + let brokerCtx = globalBrokerContext() logging.setupLog(wakuConf.logLevel, wakuConf.logFormat) @@ -197,16 +203,8 @@ proc new*( return err("Failed setting up app callbacks: " & $error) ## Delivery Monitor - var deliveryMonitor: DeliveryMonitor - if wakuConf.p2pReliability: - if wakuConf.remoteStoreNode.isNone(): - return err("A storenode should be set when reliability mode is on") - - let deliveryMonitor = DeliveryMonitor.new( - node.wakuStoreClient, node.wakuRelay, node.wakuLightpushClient, - node.wakuFilterClient, - ).valueOr: - return err("could not create delivery 
monitor: " & $error) + let deliveryService = DeliveryService.new(wakuConf.p2pReliability, node).valueOr: + return err("could not create delivery service: " & $error) var waku = Waku( version: git_version, @@ -215,9 +213,10 @@ proc new*( key: wakuConf.nodeKey, node: node, healthMonitor: healthMonitor, - deliveryMonitor: deliveryMonitor, + deliveryService: deliveryService, appCallbacks: appCallbacks, restServer: restServer, + brokerCtx: brokerCtx, ) waku.setupSwitchServices(wakuConf, relay, rng) @@ -353,7 +352,7 @@ proc startDnsDiscoveryRetryLoop(waku: ptr Waku): Future[void] {.async.} = error "failed to connect to dynamic bootstrap nodes: " & getCurrentExceptionMsg() return -proc startWaku*(waku: ptr Waku): Future[Result[void, string]] {.async.} = +proc startWaku*(waku: ptr Waku): Future[Result[void, string]] {.async: (raises: []).} = if waku[].node.started: warn "startWaku: waku node already started" return ok() @@ -363,9 +362,15 @@ proc startWaku*(waku: ptr Waku): Future[Result[void, string]] {.async.} = if conf.dnsDiscoveryConf.isSome(): let dnsDiscoveryConf = waku.conf.dnsDiscoveryConf.get() - let dynamicBootstrapNodesRes = await waku_dnsdisc.retrieveDynamicBootstrapNodes( - dnsDiscoveryConf.enrTreeUrl, dnsDiscoveryConf.nameServers - ) + let dynamicBootstrapNodesRes = + try: + await waku_dnsdisc.retrieveDynamicBootstrapNodes( + dnsDiscoveryConf.enrTreeUrl, dnsDiscoveryConf.nameServers + ) + except CatchableError as exc: + Result[seq[RemotePeerInfo], string].err( + "Retrieving dynamic bootstrap nodes failed: " & exc.msg + ) if dynamicBootstrapNodesRes.isErr(): error "Retrieving dynamic bootstrap nodes failed", @@ -379,8 +384,11 @@ proc startWaku*(waku: ptr Waku): Future[Result[void, string]] {.async.} = return err("error while calling startNode: " & $error) ## Update waku data that is set dynamically on node start - (await updateWaku(waku)).isOkOr: - return err("Error in updateApp: " & $error) + try: + (await updateWaku(waku)).isOkOr: + return err("Error in 
updateApp: " & $error) + except CatchableError: + return err("Caught exception in updateApp: " & getCurrentExceptionMsg()) ## Discv5 if conf.discv5Conf.isSome(): @@ -400,13 +408,68 @@ proc startWaku*(waku: ptr Waku): Future[Result[void, string]] {.async.} = return err("failed to start waku discovery v5: " & $error) ## Reliability - if not waku[].deliveryMonitor.isNil(): - waku[].deliveryMonitor.startDeliveryMonitor() + if not waku[].deliveryService.isNil(): + waku[].deliveryService.startDeliveryService() ## Health Monitor waku[].healthMonitor.startHealthMonitor().isOkOr: return err("failed to start health monitor: " & $error) + ## Setup RequestNodeHealth provider + + RequestNodeHealth.setProvider( + globalBrokerContext(), + proc(): Result[RequestNodeHealth, string] = + let healthReportFut = waku[].healthMonitor.getNodeHealthReport() + if not healthReportFut.completed(): + return err("Health report not available") + try: + let healthReport = healthReportFut.read() + + # Check if Relay or Lightpush Client is ready (MinimallyHealthy condition) + var relayReady = false + var lightpushClientReady = false + var storeClientReady = false + var filterClientReady = false + + for protocolHealth in healthReport.protocolsHealth: + if protocolHealth.protocol == "Relay" and + protocolHealth.health == HealthStatus.READY: + relayReady = true + elif protocolHealth.protocol == "Lightpush Client" and + protocolHealth.health == HealthStatus.READY: + lightpushClientReady = true + elif protocolHealth.protocol == "Store Client" and + protocolHealth.health == HealthStatus.READY: + storeClientReady = true + elif protocolHealth.protocol == "Filter Client" and + protocolHealth.health == HealthStatus.READY: + filterClientReady = true + + # Determine node health based on protocol states + let isMinimallyHealthy = relayReady or lightpushClientReady + let nodeHealth = + if isMinimallyHealthy and storeClientReady and filterClientReady: + NodeHealth.Healthy + elif isMinimallyHealthy: + 
NodeHealth.MinimallyHealthy + else: + NodeHealth.Unhealthy + + debug "Providing health report", + nodeHealth = $nodeHealth, + relayReady = relayReady, + lightpushClientReady = lightpushClientReady, + storeClientReady = storeClientReady, + filterClientReady = filterClientReady, + details = $(healthReport) + + return ok(RequestNodeHealth(healthStatus: nodeHealth)) + except CatchableError as exc: + err("Failed to read health report: " & exc.msg), + ).isOkOr: + error "Failed to set RequestNodeHealth provider", error = error + if conf.restServerConf.isSome(): rest_server_builder.startRestServerProtocolSupport( waku[].restServer, @@ -422,41 +485,65 @@ proc startWaku*(waku: ptr Waku): Future[Result[void, string]] {.async.} = return err ("Starting protocols support REST server failed: " & $error) if conf.metricsServerConf.isSome(): - waku[].metricsServer = ( - await ( - waku_metrics.startMetricsServerAndLogging( - conf.metricsServerConf.get(), conf.portsShift + try: + waku[].metricsServer = ( + await ( + waku_metrics.startMetricsServerAndLogging( + conf.metricsServerConf.get(), conf.portsShift + ) ) + ).valueOr: + return err("Starting monitoring and external interfaces failed: " & error) + except CatchableError: + return err( + "Caught exception while starting monitoring and external interfaces: " & + getCurrentExceptionMsg() ) - ).valueOr: - return err("Starting monitoring and external interfaces failed: " & error) - waku[].healthMonitor.setOverallHealth(HealthStatus.READY) return ok() -proc stop*(waku: Waku): Future[void] {.async: (raises: [Exception]).} = +proc stop*(waku: Waku): Future[Result[void, string]] {.async: (raises: []).} = ## Waku shutdown if not waku.node.started: warn "stop: attempting to stop node that isn't running" - waku.healthMonitor.setOverallHealth(HealthStatus.SHUTTING_DOWN) + try: + waku.healthMonitor.setOverallHealth(HealthStatus.SHUTTING_DOWN) - if not 
waku.metricsServer.isNil(): + await waku.metricsServer.stop() - if not waku.wakuDiscv5.isNil(): - await waku.wakuDiscv5.stop() + if not waku.wakuDiscv5.isNil(): + await waku.wakuDiscv5.stop() - if not waku.node.isNil(): - await waku.node.stop() + if not waku.node.isNil(): + await waku.node.stop() - if not waku.dnsRetryLoopHandle.isNil(): - await waku.dnsRetryLoopHandle.cancelAndWait() + if not waku.dnsRetryLoopHandle.isNil(): + await waku.dnsRetryLoopHandle.cancelAndWait() - if not waku.healthMonitor.isNil(): - await waku.healthMonitor.stopHealthMonitor() + if not waku.healthMonitor.isNil(): + await waku.healthMonitor.stopHealthMonitor() - if not waku.restServer.isNil(): - await waku.restServer.stop() + ## Clear RequestNodeHealth provider + RequestNodeHealth.clearProvider(waku.brokerCtx) + + if not waku.restServer.isNil(): + await waku.restServer.stop() + except Exception: + error "waku stop failed: " & getCurrentExceptionMsg() + return err("waku stop failed: " & getCurrentExceptionMsg()) + + return ok() + +proc isModeCoreAvailable*(waku: Waku): bool = + return not waku.node.wakuRelay.isNil() + +proc isModeEdgeAvailable*(waku: Waku): bool = + return + waku.node.wakuRelay.isNil() and not waku.node.wakuStoreClient.isNil() and + not waku.node.wakuFilterClient.isNil() and not waku.node.wakuLightPushClient.isNil() + +{.pop.} diff --git a/waku/node/delivery_monitor/delivery_callback.nim b/waku/node/delivery_monitor/delivery_callback.nim deleted file mode 100644 index c996bc7b0..000000000 --- a/waku/node/delivery_monitor/delivery_callback.nim +++ /dev/null @@ -1,17 +0,0 @@ -import ../../waku_core - -type DeliveryDirection* {.pure.} = enum - PUBLISHING - RECEIVING - -type DeliverySuccess* {.pure.} = enum - SUCCESSFUL - UNSUCCESSFUL - -type DeliveryFeedbackCallback* = proc( - success: DeliverySuccess, - dir: DeliveryDirection, - comment: string, - msgHash: WakuMessageHash, - msg: WakuMessage, -) {.gcsafe, raises: [].} diff --git 
a/waku/node/delivery_monitor/delivery_monitor.nim b/waku/node/delivery_monitor/delivery_monitor.nim deleted file mode 100644 index 4dda542cc..000000000 --- a/waku/node/delivery_monitor/delivery_monitor.nim +++ /dev/null @@ -1,43 +0,0 @@ -## This module helps to ensure the correct transmission and reception of messages - -import results -import chronos -import - ./recv_monitor, - ./send_monitor, - ./delivery_callback, - ../../waku_core, - ../../waku_store/client, - ../../waku_relay/protocol, - ../../waku_lightpush/client, - ../../waku_filter_v2/client - -type DeliveryMonitor* = ref object - sendMonitor: SendMonitor - recvMonitor: RecvMonitor - -proc new*( - T: type DeliveryMonitor, - storeClient: WakuStoreClient, - wakuRelay: protocol.WakuRelay, - wakuLightpushClient: WakuLightpushClient, - wakuFilterClient: WakuFilterClient, -): Result[T, string] = - ## storeClient is needed to give store visitility to DeliveryMonitor - ## wakuRelay and wakuLightpushClient are needed to give a mechanism to SendMonitor to re-publish - let sendMonitor = ?SendMonitor.new(storeClient, wakuRelay, wakuLightpushClient) - let recvMonitor = RecvMonitor.new(storeClient, wakuFilterClient) - return ok(DeliveryMonitor(sendMonitor: sendMonitor, recvMonitor: recvMonitor)) - -proc startDeliveryMonitor*(self: DeliveryMonitor) = - self.sendMonitor.startSendMonitor() - self.recvMonitor.startRecvMonitor() - -proc stopDeliveryMonitor*(self: DeliveryMonitor) {.async.} = - self.sendMonitor.stopSendMonitor() - await self.recvMonitor.stopRecvMonitor() - -proc setDeliveryCallback*(self: DeliveryMonitor, deliveryCb: DeliveryFeedbackCallback) = - ## The deliveryCb is a proc defined by the api client so that it can get delivery feedback - self.sendMonitor.setDeliveryCallback(deliveryCb) - self.recvMonitor.setDeliveryCallback(deliveryCb) diff --git a/waku/node/delivery_monitor/publish_observer.nim b/waku/node/delivery_monitor/publish_observer.nim deleted file mode 100644 index 1f517f8bd..000000000 --- 
a/waku/node/delivery_monitor/publish_observer.nim +++ /dev/null @@ -1,9 +0,0 @@ -import chronicles -import ../../waku_core/message/message - -type PublishObserver* = ref object of RootObj - -method onMessagePublished*( - self: PublishObserver, pubsubTopic: string, message: WakuMessage -) {.base, gcsafe, raises: [].} = - error "onMessagePublished not implemented" diff --git a/waku/node/delivery_monitor/send_monitor.nim b/waku/node/delivery_monitor/send_monitor.nim deleted file mode 100644 index 15b16065f..000000000 --- a/waku/node/delivery_monitor/send_monitor.nim +++ /dev/null @@ -1,212 +0,0 @@ -## This module reinforces the publish operation with regular store-v3 requests. -## - -import std/[sequtils, tables] -import chronos, chronicles, libp2p/utility -import - ./delivery_callback, - ./publish_observer, - ../../waku_core, - ./not_delivered_storage/not_delivered_storage, - ../../waku_store/[client, common], - ../../waku_archive/archive, - ../../waku_relay/protocol, - ../../waku_lightpush/client - -const MaxTimeInCache* = chronos.minutes(1) - ## Messages older than this time will get completely forgotten on publication and a - ## feedback will be given when that happens - -const SendCheckInterval* = chronos.seconds(3) - ## Interval at which we check that messages have been properly received by a store node - -const MaxMessagesToCheckAtOnce = 100 - ## Max number of messages to check if they were properly archived by a store node - -const ArchiveTime = chronos.seconds(3) - ## Estimation of the time we wait until we start confirming that a message has been properly - ## received and archived by a store node - -type DeliveryInfo = object - pubsubTopic: string - msg: WakuMessage - -type SendMonitor* = ref object of PublishObserver - publishedMessages: Table[WakuMessageHash, DeliveryInfo] - ## Cache that contains the delivery info per message hash. 
- ## This is needed to make sure the published messages are properly published - - msgStoredCheckerHandle: Future[void] ## handle that allows to stop the async task - - notDeliveredStorage: NotDeliveredStorage - ## NOTE: this is not fully used because that might be tackled by higher abstraction layers - - storeClient: WakuStoreClient - deliveryCb: DeliveryFeedbackCallback - - wakuRelay: protocol.WakuRelay - wakuLightpushClient: WakuLightPushClient - -proc new*( - T: type SendMonitor, - storeClient: WakuStoreClient, - wakuRelay: protocol.WakuRelay, - wakuLightpushClient: WakuLightPushClient, -): Result[T, string] = - if wakuRelay.isNil() and wakuLightpushClient.isNil(): - return err( - "Could not create SendMonitor. wakuRelay or wakuLightpushClient should be set" - ) - - let notDeliveredStorage = ?NotDeliveredStorage.new() - - let sendMonitor = SendMonitor( - notDeliveredStorage: notDeliveredStorage, - storeClient: storeClient, - wakuRelay: wakuRelay, - wakuLightpushClient: wakuLightPushClient, - ) - - if not wakuRelay.isNil(): - wakuRelay.addPublishObserver(sendMonitor) - - if not wakuLightpushClient.isNil(): - wakuLightpushClient.addPublishObserver(sendMonitor) - - return ok(sendMonitor) - -proc performFeedbackAndCleanup( - self: SendMonitor, - msgsToDiscard: Table[WakuMessageHash, DeliveryInfo], - success: DeliverySuccess, - dir: DeliveryDirection, - comment: string, -) = - ## This procs allows to bring delivery feedback to the API client - ## It requires a 'deliveryCb' to be registered beforehand. 
- if self.deliveryCb.isNil(): - error "deliveryCb is nil in performFeedbackAndCleanup", - success, dir, comment, hashes = toSeq(msgsToDiscard.keys).mapIt(shortLog(it)) - return - - for hash, deliveryInfo in msgsToDiscard: - info "send monitor performFeedbackAndCleanup", - success, dir, comment, msg_hash = shortLog(hash) - - self.deliveryCb(success, dir, comment, hash, deliveryInfo.msg) - self.publishedMessages.del(hash) - -proc checkMsgsInStore( - self: SendMonitor, msgsToValidate: Table[WakuMessageHash, DeliveryInfo] -): Future[ - Result[ - tuple[ - publishedCorrectly: Table[WakuMessageHash, DeliveryInfo], - notYetPublished: Table[WakuMessageHash, DeliveryInfo], - ], - void, - ] -] {.async.} = - let hashesToValidate = toSeq(msgsToValidate.keys) - - let storeResp: StoreQueryResponse = ( - await self.storeClient.queryToAny( - StoreQueryRequest(includeData: false, messageHashes: hashesToValidate) - ) - ).valueOr: - error "checkMsgsInStore failed to get remote msgHashes", - hashes = hashesToValidate.mapIt(shortLog(it)), error = $error - return err() - - let publishedHashes = storeResp.messages.mapIt(it.messageHash) - - var notYetPublished: Table[WakuMessageHash, DeliveryInfo] - var publishedCorrectly: Table[WakuMessageHash, DeliveryInfo] - - for msgHash, deliveryInfo in msgsToValidate.pairs: - if publishedHashes.contains(msgHash): - publishedCorrectly[msgHash] = deliveryInfo - self.publishedMessages.del(msgHash) ## we will no longer track that message - else: - notYetPublished[msgHash] = deliveryInfo - - return ok((publishedCorrectly: publishedCorrectly, notYetPublished: notYetPublished)) - -proc processMessages(self: SendMonitor) {.async.} = - var msgsToValidate: Table[WakuMessageHash, DeliveryInfo] - var msgsToDiscard: Table[WakuMessageHash, DeliveryInfo] - - let now = getNowInNanosecondTime() - let timeToCheckThreshold = now - ArchiveTime.nanos - let maxLifeTime = now - MaxTimeInCache.nanos - - for hash, deliveryInfo in self.publishedMessages.pairs: - if 
deliveryInfo.msg.timestamp < maxLifeTime: - ## message is too old - msgsToDiscard[hash] = deliveryInfo - - if deliveryInfo.msg.timestamp < timeToCheckThreshold: - msgsToValidate[hash] = deliveryInfo - - ## Discard the messages that are too old - self.performFeedbackAndCleanup( - msgsToDiscard, DeliverySuccess.UNSUCCESSFUL, DeliveryDirection.PUBLISHING, - "Could not publish messages. Please try again.", - ) - - let (publishedCorrectly, notYetPublished) = ( - await self.checkMsgsInStore(msgsToValidate) - ).valueOr: - return ## the error log is printed in checkMsgsInStore - - ## Give positive feedback for the correctly published messages - self.performFeedbackAndCleanup( - publishedCorrectly, DeliverySuccess.SUCCESSFUL, DeliveryDirection.PUBLISHING, - "messages published correctly", - ) - - ## Try to publish again - for msgHash, deliveryInfo in notYetPublished.pairs: - let pubsubTopic = deliveryInfo.pubsubTopic - let msg = deliveryInfo.msg - if not self.wakuRelay.isNil(): - info "trying to publish again with wakuRelay", msgHash, pubsubTopic - (await self.wakuRelay.publish(pubsubTopic, msg)).isOkOr: - error "could not publish with wakuRelay.publish", - msgHash, pubsubTopic, error = $error - continue - - if not self.wakuLightpushClient.isNil(): - info "trying to publish again with wakuLightpushClient", msgHash, pubsubTopic - (await self.wakuLightpushClient.publishToAny(pubsubTopic, msg)).isOkOr: - error "could not publish with publishToAny", error = $error - continue - -proc checkIfMessagesStored(self: SendMonitor) {.async.} = - ## Continuously monitors that the sent messages have been received by a store node - while true: - await self.processMessages() - await sleepAsync(SendCheckInterval) - -method onMessagePublished( - self: SendMonitor, pubsubTopic: string, msg: WakuMessage -) {.gcsafe, raises: [].} = - ## Implementation of the PublishObserver interface. 
- ## - ## When publishing a message either through relay or lightpush, we want to add some extra effort - ## to make sure it is received to one store node. Hence, keep track of those published messages. - - info "onMessagePublished" - let msgHash = computeMessageHash(pubSubTopic, msg) - - if not self.publishedMessages.hasKey(msgHash): - self.publishedMessages[msgHash] = DeliveryInfo(pubsubTopic: pubsubTopic, msg: msg) - -proc startSendMonitor*(self: SendMonitor) = - self.msgStoredCheckerHandle = self.checkIfMessagesStored() - -proc stopSendMonitor*(self: SendMonitor) = - discard self.msgStoredCheckerHandle.cancelAndWait() - -proc setDeliveryCallback*(self: SendMonitor, deliveryCb: DeliveryFeedbackCallback) = - self.deliveryCb = deliveryCb diff --git a/waku/node/delivery_monitor/subscriptions_observer.nim b/waku/node/delivery_monitor/subscriptions_observer.nim deleted file mode 100644 index 800117ae9..000000000 --- a/waku/node/delivery_monitor/subscriptions_observer.nim +++ /dev/null @@ -1,13 +0,0 @@ -import chronicles - -type SubscriptionObserver* = ref object of RootObj - -method onSubscribe*( - self: SubscriptionObserver, pubsubTopic: string, contentTopics: seq[string] -) {.base, gcsafe, raises: [].} = - error "onSubscribe not implemented" - -method onUnsubscribe*( - self: SubscriptionObserver, pubsubTopic: string, contentTopics: seq[string] -) {.base, gcsafe, raises: [].} = - error "onUnsubscribe not implemented" diff --git a/waku/node/delivery_service/delivery_service.nim b/waku/node/delivery_service/delivery_service.nim new file mode 100644 index 000000000..8106cba9f --- /dev/null +++ b/waku/node/delivery_service/delivery_service.nim @@ -0,0 +1,46 @@ +## This module helps to ensure the correct transmission and reception of messages + +import results +import chronos +import + ./recv_service, + ./send_service, + ./subscription_service, + waku/[ + waku_core, + waku_node, + waku_store/client, + waku_relay/protocol, + waku_lightpush/client, + waku_filter_v2/client, 
+  ]
+
+type DeliveryService* = ref object
+  sendService*: SendService
+  recvService: RecvService
+  subscriptionService*: SubscriptionService
+
+proc new*(
+    T: type DeliveryService, useP2PReliability: bool, w: WakuNode
+): Result[T, string] =
+  ## The WakuNode provides store visibility (via its store client) to DeliveryService
+  ## and the relay and lightpush clients that give SendService a mechanism to re-publish
+  let subscriptionService = SubscriptionService.new(w)
+  let sendService = ?SendService.new(useP2PReliability, w, subscriptionService)
+  let recvService = RecvService.new(w, subscriptionService)
+
+  return ok(
+    DeliveryService(
+      sendService: sendService,
+      recvService: recvService,
+      subscriptionService: subscriptionService,
+    )
+  )
+
+proc startDeliveryService*(self: DeliveryService) =
+  self.sendService.startSendService()
+  self.recvService.startRecvService()
+
+proc stopDeliveryService*(self: DeliveryService) {.async.} =
+  self.sendService.stopSendService()
+  await self.recvService.stopRecvService()
diff --git a/waku/node/delivery_monitor/not_delivered_storage/migrations.nim b/waku/node/delivery_service/not_delivered_storage/migrations.nim
similarity index 95%
rename from waku/node/delivery_monitor/not_delivered_storage/migrations.nim
rename to waku/node/delivery_service/not_delivered_storage/migrations.nim
index 8175aea62..807074d64 100644
--- a/waku/node/delivery_monitor/not_delivered_storage/migrations.nim
+++ b/waku/node/delivery_service/not_delivered_storage/migrations.nim
@@ -4,7 +4,7 @@ import std/[tables, strutils, os], results, chronicles
 import ../../../common/databases/db_sqlite, ../../../common/databases/common
 
 logScope:
-  topics = "waku node delivery_monitor"
+  topics = "waku node delivery_service"
 
 const TargetSchemaVersion* = 1 # increase this when there is an update in the database schema
 
diff --git a/waku/node/delivery_monitor/not_delivered_storage/not_delivered_storage.nim
b/waku/node/delivery_service/not_delivered_storage/not_delivered_storage.nim similarity index 93% rename from waku/node/delivery_monitor/not_delivered_storage/not_delivered_storage.nim rename to waku/node/delivery_service/not_delivered_storage/not_delivered_storage.nim index 85611310b..b0f5f5828 100644 --- a/waku/node/delivery_monitor/not_delivered_storage/not_delivered_storage.nim +++ b/waku/node/delivery_service/not_delivered_storage/not_delivered_storage.nim @@ -1,17 +1,17 @@ ## This module is aimed to keep track of the sent/published messages that are considered ## not being properly delivered. -## +## ## The archiving of such messages will happen in a local sqlite database. -## +## ## In the very first approach, we consider that a message is sent properly is it has been ## received by any store node. -## +## import results import ../../../common/databases/db_sqlite, ../../../waku_core/message/message, - ../../../node/delivery_monitor/not_delivered_storage/migrations + ../../../node/delivery_service/not_delivered_storage/migrations const NotDeliveredMessagesDbUrl = "not-delivered-messages.db" diff --git a/waku/node/delivery_service/recv_service.nim b/waku/node/delivery_service/recv_service.nim new file mode 100644 index 000000000..c4dcf4fef --- /dev/null +++ b/waku/node/delivery_service/recv_service.nim @@ -0,0 +1,3 @@ +import ./recv_service/recv_service + +export recv_service diff --git a/waku/node/delivery_monitor/recv_monitor.nim b/waku/node/delivery_service/recv_service/recv_service.nim similarity index 67% rename from waku/node/delivery_monitor/recv_monitor.nim rename to waku/node/delivery_service/recv_service/recv_service.nim index 6ea35d301..12780033a 100644 --- a/waku/node/delivery_monitor/recv_monitor.nim +++ b/waku/node/delivery_service/recv_service/recv_service.nim @@ -4,13 +4,18 @@ import std/[tables, sequtils, options] import chronos, chronicles, libp2p/utility +import ../[subscription_service] import - ../../waku_core, - ./delivery_callback, - 
./subscriptions_observer, - ../../waku_store/[client, common], - ../../waku_filter_v2/client, - ../../waku_core/topics + waku/[ + waku_core, + waku_store/client, + waku_store/common, + waku_filter_v2/client, + waku_core/topics, + events/delivery_events, + waku_node, + common/broker/broker_context, + ] const StoreCheckPeriod = chronos.minutes(5) ## How often to perform store queries @@ -28,14 +33,16 @@ type RecvMessage = object rxTime: Timestamp ## timestamp of the rx message. We will not keep the rx messages forever -type RecvMonitor* = ref object of SubscriptionObserver +type RecvService* = ref object of RootObj + brokerCtx: BrokerContext topicsInterest: Table[PubsubTopic, seq[ContentTopic]] ## Tracks message verification requests and when was the last time a ## pubsub topic was verified for missing messages ## The key contains pubsub-topics - - storeClient: WakuStoreClient - deliveryCb: DeliveryFeedbackCallback + node: WakuNode + onSubscribeListener: OnFilterSubscribeEventListener + onUnsubscribeListener: OnFilterUnsubscribeEventListener + subscriptionService: SubscriptionService recentReceivedMsgs: seq[RecvMessage] @@ -46,10 +53,10 @@ type RecvMonitor* = ref object of SubscriptionObserver endTimeToCheck: Timestamp proc getMissingMsgsFromStore( - self: RecvMonitor, msgHashes: seq[WakuMessageHash] + self: RecvService, msgHashes: seq[WakuMessageHash] ): Future[Result[seq[TupleHashAndMsg], string]] {.async.} = let storeResp: StoreQueryResponse = ( - await self.storeClient.queryToAny( + await self.node.wakuStoreClient.queryToAny( StoreQueryRequest(includeData: true, messageHashes: msgHashes) ) ).valueOr: @@ -62,35 +69,35 @@ proc getMissingMsgsFromStore( ) proc performDeliveryFeedback( - self: RecvMonitor, + self: RecvService, success: DeliverySuccess, dir: DeliveryDirection, comment: string, msgHash: WakuMessageHash, msg: WakuMessage, ) {.gcsafe, raises: [].} = - ## This procs allows to bring delivery feedback to the API client - ## It requires a 'deliveryCb' to be 
registered beforehand. - if self.deliveryCb.isNil(): - error "deliveryCb is nil in performDeliveryFeedback", - success, dir, comment, msg_hash - return - info "recv monitor performDeliveryFeedback", success, dir, comment, msg_hash = shortLog(msgHash) - self.deliveryCb(success, dir, comment, msgHash, msg) -proc msgChecker(self: RecvMonitor) {.async.} = + DeliveryFeedbackEvent.emit( + brokerCtx = self.brokerCtx, + success = success, + dir = dir, + comment = comment, + msgHash = msgHash, + msg = msg, + ) + +proc msgChecker(self: RecvService) {.async.} = ## Continuously checks if a message has been received while true: await sleepAsync(StoreCheckPeriod) - self.endTimeToCheck = getNowInNanosecondTime() var msgHashesInStore = newSeq[WakuMessageHash](0) for pubsubTopic, cTopics in self.topicsInterest.pairs: let storeResp: StoreQueryResponse = ( - await self.storeClient.queryToAny( + await self.node.wakuStoreClient.queryToAny( StoreQueryRequest( includeData: false, pubsubTopic: some(PubsubTopic(pubsubTopic)), @@ -126,8 +133,8 @@ proc msgChecker(self: RecvMonitor) {.async.} = ## update next check times self.startTimeToCheck = self.endTimeToCheck -method onSubscribe( - self: RecvMonitor, pubsubTopic: string, contentTopics: seq[string] +proc onSubscribe( + self: RecvService, pubsubTopic: string, contentTopics: seq[string] ) {.gcsafe, raises: [].} = info "onSubscribe", pubsubTopic, contentTopics self.topicsInterest.withValue(pubsubTopic, contentTopicsOfInterest): @@ -135,8 +142,8 @@ method onSubscribe( do: self.topicsInterest[pubsubTopic] = contentTopics -method onUnsubscribe( - self: RecvMonitor, pubsubTopic: string, contentTopics: seq[string] +proc onUnsubscribe( + self: RecvService, pubsubTopic: string, contentTopics: seq[string] ) {.gcsafe, raises: [].} = info "onUnsubscribe", pubsubTopic, contentTopics @@ -150,47 +157,63 @@ method onUnsubscribe( do: error "onUnsubscribe unsubscribing from wrong topic", pubsubTopic, contentTopics -proc new*( - T: type RecvMonitor, - 
storeClient: WakuStoreClient, - wakuFilterClient: WakuFilterClient, -): T = +proc new*(T: typedesc[RecvService], node: WakuNode, s: SubscriptionService): T = ## The storeClient will help to acquire any possible missed messages let now = getNowInNanosecondTime() - var recvMonitor = RecvMonitor(storeClient: storeClient, startTimeToCheck: now) - - if not wakuFilterClient.isNil(): - wakuFilterClient.addSubscrObserver(recvMonitor) + var recvService = RecvService( + node: node, + startTimeToCheck: now, + brokerCtx: node.brokerCtx, + subscriptionService: s, + topicsInterest: initTable[PubsubTopic, seq[ContentTopic]](), + recentReceivedMsgs: @[], + ) + if not node.wakuFilterClient.isNil(): let filterPushHandler = proc( pubsubTopic: PubsubTopic, message: WakuMessage ) {.async, closure.} = - ## Captures all the messages recived through filter + ## Captures all the messages received through filter let msgHash = computeMessageHash(pubSubTopic, message) let rxMsg = RecvMessage(msgHash: msgHash, rxTime: message.timestamp) - recvMonitor.recentReceivedMsgs.add(rxMsg) + recvService.recentReceivedMsgs.add(rxMsg) - wakuFilterClient.registerPushHandler(filterPushHandler) + node.wakuFilterClient.registerPushHandler(filterPushHandler) - return recvMonitor + return recvService -proc loopPruneOldMessages(self: RecvMonitor) {.async.} = +proc loopPruneOldMessages(self: RecvService) {.async.} = while true: let oldestAllowedTime = getNowInNanosecondTime() - MaxMessageLife.nanos self.recentReceivedMsgs.keepItIf(it.rxTime > oldestAllowedTime) await sleepAsync(PruneOldMsgsPeriod) -proc startRecvMonitor*(self: RecvMonitor) = +proc startRecvService*(self: RecvService) = self.msgCheckerHandler = self.msgChecker() self.msgPrunerHandler = self.loopPruneOldMessages() -proc stopRecvMonitor*(self: RecvMonitor) {.async.} = + self.onSubscribeListener = OnFilterSubscribeEvent.listen( + self.brokerCtx, + proc(subsEv: OnFilterSubscribeEvent) {.async: (raises: []).} = + self.onSubscribe(subsEv.pubsubTopic, 
subsEv.contentTopics), + ).valueOr: + error "Failed to set OnFilterSubscribeEvent listener", error = error + quit(QuitFailure) + + self.onUnsubscribeListener = OnFilterUnsubscribeEvent.listen( + self.brokerCtx, + proc(subsEv: OnFilterUnsubscribeEvent) {.async: (raises: []).} = + self.onUnsubscribe(subsEv.pubsubTopic, subsEv.contentTopics), + ).valueOr: + error "Failed to set OnFilterUnsubscribeEvent listener", error = error + quit(QuitFailure) + +proc stopRecvService*(self: RecvService) {.async.} = + OnFilterSubscribeEvent.dropListener(self.brokerCtx, self.onSubscribeListener) + OnFilterUnsubscribeEvent.dropListener(self.brokerCtx, self.onUnsubscribeListener) if not self.msgCheckerHandler.isNil(): await self.msgCheckerHandler.cancelAndWait() if not self.msgPrunerHandler.isNil(): await self.msgPrunerHandler.cancelAndWait() - -proc setDeliveryCallback*(self: RecvMonitor, deliveryCb: DeliveryFeedbackCallback) = - self.deliveryCb = deliveryCb diff --git a/waku/node/delivery_service/send_service.nim b/waku/node/delivery_service/send_service.nim new file mode 100644 index 000000000..de0dbf6a3 --- /dev/null +++ b/waku/node/delivery_service/send_service.nim @@ -0,0 +1,6 @@ +## This module reinforces the publish operation with regular store-v3 requests. 
+##
+
+import ./send_service/[send_service, delivery_task]
+
+export send_service, delivery_task
diff --git a/waku/node/delivery_service/send_service/delivery_task.nim b/waku/node/delivery_service/send_service/delivery_task.nim
new file mode 100644
index 000000000..0ff151f6e
--- /dev/null
+++ b/waku/node/delivery_service/send_service/delivery_task.nim
@@ -0,0 +1,74 @@
+import std/[options, times], chronos
+import waku/waku_core, waku/api/types, waku/requests/node_requests
+import waku/common/broker/broker_context
+
+type DeliveryState* {.pure.} = enum
+  Entry
+  SuccessfullyPropagated
+    # message is known to be sent to the network but not yet validated
+  SuccessfullyValidated
+    # message is known to be stored at least on one store node, thus validated
+  FallbackRetry # retry sending with fallback processor if available
+  NextRoundRetry # try sending in next loop
+  FailedToDeliver # final state of failed delivery
+
+type DeliveryTask* = ref object
+  requestId*: RequestId
+  pubsubTopic*: PubsubTopic
+  msg*: WakuMessage
+  msgHash*: WakuMessageHash
+  tryCount*: int
+  state*: DeliveryState
+  deliveryTime*: Moment
+  propagateEventEmitted*: bool
+  errorDesc*: string
+
+proc new*(
+    T: typedesc[DeliveryTask],
+    requestId: RequestId,
+    envelop: MessageEnvelope,
+    brokerCtx: BrokerContext,
+): Result[T, string] =
+  let msg = envelop.toWakuMessage()
+  # TODO: use a sync request for this as soon as one is available
+  let relayShardRes = (
+    RequestRelayShard.request(brokerCtx, none[PubsubTopic](), envelop.contentTopic)
+  ).valueOr:
+    error "RequestRelayShard.request failed", error = error
+    return err("Failed to create DeliveryTask: " & $error)
+
+  let pubsubTopic = relayShardRes.relayShard.toPubsubTopic()
+  let msgHash = computeMessageHash(pubsubTopic, msg)
+
+  return ok(
+    T(
+      requestId: requestId,
+      pubsubTopic: pubsubTopic,
+      msg: msg,
+      msgHash: msgHash,
+      tryCount: 0,
+      state: DeliveryState.Entry,
+    )
+  )
+
+func `==`*(r, l: DeliveryTask): bool =
+  if r.isNil() == l.isNil():
r.isNil() or r.msgHash == l.msgHash + else: + false + +proc messageAge*(self: DeliveryTask): timer.Duration = + let actual = getNanosecondTime(getTime().toUnixFloat()) + if self.msg.timestamp >= 0 and self.msg.timestamp < actual: + nanoseconds(actual - self.msg.timestamp) + else: + ZeroDuration + +proc deliveryAge*(self: DeliveryTask): timer.Duration = + if self.state == DeliveryState.SuccessfullyPropagated: + timer.Moment.now() - self.deliveryTime + else: + ZeroDuration + +proc isEphemeral*(self: DeliveryTask): bool = + return self.msg.ephemeral diff --git a/waku/node/delivery_service/send_service/lightpush_processor.nim b/waku/node/delivery_service/send_service/lightpush_processor.nim new file mode 100644 index 000000000..40a754757 --- /dev/null +++ b/waku/node/delivery_service/send_service/lightpush_processor.nim @@ -0,0 +1,81 @@ +import chronicles, chronos, results +import std/options + +import + waku/node/peer_manager, + waku/waku_core, + waku/waku_lightpush/[common, client, rpc], + waku/common/broker/broker_context + +import ./[delivery_task, send_processor] + +logScope: + topics = "send service lightpush processor" + +type LightpushSendProcessor* = ref object of BaseSendProcessor + peerManager: PeerManager + lightpushClient: WakuLightPushClient + +proc new*( + T: typedesc[LightpushSendProcessor], + peerManager: PeerManager, + lightpushClient: WakuLightPushClient, + brokerCtx: BrokerContext, +): T = + return + T(peerManager: peerManager, lightpushClient: lightpushClient, brokerCtx: brokerCtx) + +proc isLightpushPeerAvailable( + self: LightpushSendProcessor, pubsubTopic: PubsubTopic +): bool = + return self.peerManager.selectPeer(WakuLightPushCodec, some(pubsubTopic)).isSome() + +method isValidProcessor*( + self: LightpushSendProcessor, task: DeliveryTask +): bool {.gcsafe.} = + return self.isLightpushPeerAvailable(task.pubsubTopic) + +method sendImpl*( + self: LightpushSendProcessor, task: DeliveryTask +): Future[void] {.async.} = + task.tryCount.inc() + info 
"Trying message delivery via Lightpush", + requestId = task.requestId, + msgHash = task.msgHash.to0xHex(), + tryCount = task.tryCount + + let peer = self.peerManager.selectPeer(WakuLightPushCodec, some(task.pubsubTopic)).valueOr: + debug "No peer available for Lightpush, request pushed back for next round", + requestId = task.requestId + task.state = DeliveryState.NextRoundRetry + return + + let numLightpushServers = ( + await self.lightpushClient.publish(some(task.pubsubTopic), task.msg, peer) + ).valueOr: + error "LightpushSendProcessor.sendImpl failed", error = error.desc.get($error.code) + case error.code + of LightPushErrorCode.NO_PEERS_TO_RELAY, LightPushErrorCode.TOO_MANY_REQUESTS, + LightPushErrorCode.OUT_OF_RLN_PROOF, LightPushErrorCode.SERVICE_NOT_AVAILABLE, + LightPushErrorCode.INTERNAL_SERVER_ERROR: + task.state = DeliveryState.NextRoundRetry + else: + # the message is malformed, send error + task.state = DeliveryState.FailedToDeliver + task.errorDesc = error.desc.get($error.code) + task.deliveryTime = Moment.now() + return + + if numLightpushServers > 0: + info "Message propagated via Lightpush", + requestId = task.requestId, msgHash = task.msgHash.to0xHex() + task.state = DeliveryState.SuccessfullyPropagated + task.deliveryTime = Moment.now() + # TODO: with a simple retry processor it might be more accurate to say `Sent` + else: + # Controversial state, publish says ok but no peer. It should not happen. 
+ debug "Lightpush publish returned zero peers, request pushed back for next round", + requestId = task.requestId + task.state = DeliveryState.NextRoundRetry + + return diff --git a/waku/node/delivery_service/send_service/relay_processor.nim b/waku/node/delivery_service/send_service/relay_processor.nim new file mode 100644 index 000000000..94cb63776 --- /dev/null +++ b/waku/node/delivery_service/send_service/relay_processor.nim @@ -0,0 +1,78 @@ +import std/options +import chronos, chronicles +import waku/[waku_core], waku/waku_lightpush/[common, rpc] +import waku/requests/health_request +import waku/common/broker/broker_context +import waku/api/types +import ./[delivery_task, send_processor] + +logScope: + topics = "send service relay processor" + +type RelaySendProcessor* = ref object of BaseSendProcessor + publishProc: PushMessageHandler + fallbackStateToSet: DeliveryState + +proc new*( + T: typedesc[RelaySendProcessor], + lightpushAvailable: bool, + publishProc: PushMessageHandler, + brokerCtx: BrokerContext, +): RelaySendProcessor = + let fallbackStateToSet = + if lightpushAvailable: + DeliveryState.FallbackRetry + else: + DeliveryState.FailedToDeliver + + return RelaySendProcessor( + publishProc: publishProc, + fallbackStateToSet: fallbackStateToSet, + brokerCtx: brokerCtx, + ) + +proc isTopicHealthy(self: RelaySendProcessor, topic: PubsubTopic): bool {.gcsafe.} = + let healthReport = RequestRelayTopicsHealth.request(self.brokerCtx, @[topic]).valueOr: + error "isTopicHealthy: failed to get health report", topic = topic, error = error + return false + + if healthReport.topicHealth.len() < 1: + warn "isTopicHealthy: no topic health entries", topic = topic + return false + let health = healthReport.topicHealth[0].health + debug "isTopicHealthy: topic health is ", topic = topic, health = health + return health == MINIMALLY_HEALTHY or health == SUFFICIENTLY_HEALTHY + +method isValidProcessor*( + self: RelaySendProcessor, task: DeliveryTask +): bool {.gcsafe.} = + # 
Topic health query is not reliable enough after a fresh subscribe...
+  # return self.isTopicHealthy(task.pubsubTopic)
+  return true
+
+method sendImpl*(self: RelaySendProcessor, task: DeliveryTask) {.async.} =
+  task.tryCount.inc()
+  info "Trying message delivery via Relay",
+    requestId = task.requestId,
+    msgHash = task.msgHash.to0xHex(),
+    tryCount = task.tryCount
+
+  let noOfPublishedPeers = (await self.publishProc(task.pubsubTopic, task.msg)).valueOr:
+    let errorMessage = error.desc.get($error.code)
+    error "Failed to publish message with relay",
+      request = task.requestId, msgHash = task.msgHash.to0xHex(), error = errorMessage
+    if error.code != LightPushErrorCode.NO_PEERS_TO_RELAY:
+      task.state = DeliveryState.FailedToDeliver
+      task.errorDesc = errorMessage
+    else:
+      task.state = self.fallbackStateToSet
+    return
+
+  if noOfPublishedPeers > 0:
+    info "Message propagated via Relay",
+      requestId = task.requestId, msgHash = task.msgHash.to0xHex(), noOfPeers = noOfPublishedPeers
+    task.state = DeliveryState.SuccessfullyPropagated
+    task.deliveryTime = Moment.now()
+  else:
+    # This should not happen, but handle it defensively
+    task.state = self.fallbackStateToSet
diff --git a/waku/node/delivery_service/send_service/send_processor.nim b/waku/node/delivery_service/send_service/send_processor.nim
new file mode 100644
index 000000000..0108eacd0
--- /dev/null
+++ b/waku/node/delivery_service/send_service/send_processor.nim
@@ -0,0 +1,36 @@
+import chronos
+import ./delivery_task
+import waku/common/broker/broker_context
+
+{.push raises: [].}
+
+type BaseSendProcessor* = ref object of RootObj
+  fallbackProcessor*: BaseSendProcessor
+  brokerCtx*: BrokerContext
+
+proc chain*(self: BaseSendProcessor, next: BaseSendProcessor) =
+  self.fallbackProcessor = next
+
+method isValidProcessor*(
+    self: BaseSendProcessor, task: DeliveryTask
+): bool {.base, gcsafe.} =
+  return false
+
+method sendImpl*(
+    self: BaseSendProcessor, task: DeliveryTask
+): Future[void] {.async, base.} =
assert false, "Not implemented"
+
+method process*(
+    self: BaseSendProcessor, task: DeliveryTask
+): Future[void] {.async, base.} =
+  var currentProcessor: BaseSendProcessor = self
+  var keepTrying = true
+  while not currentProcessor.isNil() and keepTrying:
+    if currentProcessor.isValidProcessor(task):
+      await currentProcessor.sendImpl(task)
+    currentProcessor = currentProcessor.fallbackProcessor
+    keepTrying = task.state == DeliveryState.FallbackRetry
+
+  if task.state == DeliveryState.FallbackRetry:
+    task.state = DeliveryState.NextRoundRetry
diff --git a/waku/node/delivery_service/send_service/send_service.nim b/waku/node/delivery_service/send_service/send_service.nim
new file mode 100644
index 000000000..f6a6ac94c
--- /dev/null
+++ b/waku/node/delivery_service/send_service/send_service.nim
@@ -0,0 +1,269 @@
+## This module reinforces the publish operation with regular store-v3 requests.
+##
+
+import std/[sequtils, tables, options]
+import chronos, chronicles, libp2p/utility
+import
+  ./[send_processor, relay_processor, lightpush_processor, delivery_task],
+  ../[subscription_service],
+  waku/[
+    waku_core,
+    node/waku_node,
+    node/peer_manager,
+    waku_store/client,
+    waku_store/common,
+    waku_relay/protocol,
+    waku_rln_relay/rln_relay,
+    waku_lightpush/client,
+    waku_lightpush/callbacks,
+    events/message_events,
+    common/broker/broker_context,
+  ]
+
+logScope:
+  topics = "send service"
+
+# This useful utility is missing from sequtils; it extends applyIt with a predicate.
+template applyItIf*(varSeq, pred, op: untyped) =
+  for i in low(varSeq) .. high(varSeq):
+    let it {.inject.} = varSeq[i]
+    if pred:
+      op
+      varSeq[i] = it
+
+template forEach*(varSeq, op: untyped) =
+  for i in low(varSeq) .. high(varSeq):
+    let it {.inject.} = varSeq[i]
+    op
+
+const MaxTimeInCache* = chronos.minutes(1)
+  ## Messages older than this time will get completely forgotten on publication and a
+  ## feedback will be given when that happens
+
+const ServiceLoopInterval* = chronos.seconds(1)
+  ## Interval at which we check that messages have been properly received by a store node
+
+const ArchiveTime = chronos.seconds(3)
+  ## Estimation of the time we wait until we start confirming that a message has been properly
+  ## received and archived by a store node
+
+type SendService* = ref object of RootObj
+  brokerCtx: BrokerContext
+  taskCache: seq[DeliveryTask]
+    ## Cache that contains the delivery task per message hash.
+    ## This is needed to make sure the published messages are actually delivered
+
+  serviceLoopHandle: Future[void] ## handle that allows to stop the async task
+  sendProcessor: BaseSendProcessor
+
+  node: WakuNode
+  checkStoreForMessages: bool
+  subscriptionService: SubscriptionService
+
+proc setupSendProcessorChain(
+    peerManager: PeerManager,
+    lightpushClient: WakuLightPushClient,
+    relay: WakuRelay,
+    rlnRelay: WakuRLNRelay,
+    brokerCtx: BrokerContext,
+): Result[BaseSendProcessor, string] =
+  let isRelayAvail = not relay.isNil()
+  let isLightPushAvail = not lightpushClient.isNil()
+
+  if not isRelayAvail and not isLightPushAvail:
+    return err("No valid send processor found for the delivery task")
+
+  var processors = newSeq[BaseSendProcessor]()
+
+  if isRelayAvail:
+    let rln: Option[WakuRLNRelay] =
+      if rlnRelay.isNil():
+        none[WakuRLNRelay]()
+      else:
+        some(rlnRelay)
+    let publishProc = getRelayPushHandler(relay, rln)
+
+    processors.add(RelaySendProcessor.new(isLightPushAvail, publishProc, brokerCtx))
+  if isLightPushAvail:
+    processors.add(LightpushSendProcessor.new(peerManager, lightpushClient, brokerCtx))
+
+  var currentProcessor: BaseSendProcessor = processors[0]
+  for i in 1 ..< processors.len:
+    currentProcessor.chain(processors[i])
+    currentProcessor
= processors[i] + + return ok(processors[0]) + +proc new*( + T: typedesc[SendService], + preferP2PReliability: bool, + w: WakuNode, + s: SubscriptionService, +): Result[T, string] = + if w.wakuRelay.isNil() and w.wakuLightpushClient.isNil(): + return err( + "Could not create SendService. wakuRelay or wakuLightpushClient should be set" + ) + + let checkStoreForMessages = preferP2PReliability and not w.wakuStoreClient.isNil() + + let sendProcessorChain = setupSendProcessorChain( + w.peerManager, w.wakuLightPushClient, w.wakuRelay, w.wakuRlnRelay, w.brokerCtx + ).valueOr: + return err("failed to setup SendProcessorChain: " & $error) + + let sendService = SendService( + brokerCtx: w.brokerCtx, + taskCache: newSeq[DeliveryTask](), + serviceLoopHandle: nil, + sendProcessor: sendProcessorChain, + node: w, + checkStoreForMessages: checkStoreForMessages, + subscriptionService: s, + ) + + return ok(sendService) + +proc addTask(self: SendService, task: DeliveryTask) = + self.taskCache.addUnique(task) + +proc isStorePeerAvailable*(sendService: SendService): bool = + return sendService.node.peerManager.selectPeer(WakuStoreCodec).isSome() + +proc checkMsgsInStore(self: SendService, tasksToValidate: seq[DeliveryTask]) {.async.} = + if tasksToValidate.len() == 0: + return + + if not isStorePeerAvailable(self): + warn "Skipping store validation for ", + messageCount = tasksToValidate.len(), error = "no store peer available" + return + + var hashesToValidate = tasksToValidate.mapIt(it.msgHash) + # TODO: confirm hash format for store query!!! 
+
+  let storeResp: StoreQueryResponse = (
+    await self.node.wakuStoreClient.queryToAny(
+      StoreQueryRequest(includeData: false, messageHashes: hashesToValidate)
+    )
+  ).valueOr:
+    error "Failed to get store validation for messages",
+      hashes = hashesToValidate.mapIt(shortLog(it)), error = $error
+    return
+
+  let storedItems = storeResp.messages.mapIt(it.messageHash)
+
+  # Set success state for messages found in store
+  self.taskCache.applyItIf(storedItems.contains(it.msgHash)):
+    it.state = DeliveryState.SuccessfullyValidated
+
+  # Set retry state for messages not found in store
+  hashesToValidate.keepItIf(not storedItems.contains(it))
+  self.taskCache.applyItIf(hashesToValidate.contains(it.msgHash)):
+    it.state = DeliveryState.NextRoundRetry
+
+proc checkStoredMessages(self: SendService) {.async.} =
+  if not self.checkStoreForMessages:
+    return
+
+  let tasksToValidate = self.taskCache.filterIt(
+    it.state == DeliveryState.SuccessfullyPropagated and it.deliveryAge() > ArchiveTime and
+      not it.isEphemeral()
+  )
+
+  await self.checkMsgsInStore(tasksToValidate)
+
+proc reportTaskResult(self: SendService, task: DeliveryTask) =
+  case task.state
+  of DeliveryState.SuccessfullyPropagated:
+    # TODO: if we are unable to check messages in the store, should we report success instead?
+    if not task.propagateEventEmitted:
+      info "Message successfully propagated",
+        requestId = task.requestId, msgHash = task.msgHash.to0xHex()
+      MessagePropagatedEvent.emit(
+        self.brokerCtx, task.requestId, task.msgHash.to0xHex()
+      )
+      task.propagateEventEmitted = true
+    return
+  of DeliveryState.SuccessfullyValidated:
+    info "Message successfully sent",
+      requestId = task.requestId, msgHash = task.msgHash.to0xHex()
+    MessageSentEvent.emit(self.brokerCtx, task.requestId, task.msgHash.to0xHex())
+    return
+  of DeliveryState.FailedToDeliver:
+    error "Failed to send message",
+      requestId = task.requestId,
+      msgHash = task.msgHash.to0xHex(),
+      error = task.errorDesc
+    MessageErrorEvent.emit(
+      self.brokerCtx, task.requestId, task.msgHash.to0xHex(), task.errorDesc
+    )
+    return
+  else:
+    # The remaining states are intermediate and do not translate to an event
+    discard
+
+  if task.messageAge() > MaxTimeInCache:
+    error "Failed to send message",
+      requestId = task.requestId,
+      msgHash = task.msgHash.to0xHex(),
+      error = "Message too old",
+      age = task.messageAge()
+    task.state = DeliveryState.FailedToDeliver
+    MessageErrorEvent.emit(
+      self.brokerCtx,
+      task.requestId,
+      task.msgHash.to0xHex(),
+      "Unable to send within retry time window",
+    )
+
+proc evaluateAndCleanUp(self: SendService) =
+  self.taskCache.forEach(self.reportTaskResult(it))
+  self.taskCache.keepItIf(
+    it.state != DeliveryState.SuccessfullyValidated and
+      it.state != DeliveryState.FailedToDeliver
+  )
+
+  # Remove propagated ephemeral messages as no store check is possible
+  self.taskCache.keepItIf(
+    not (it.isEphemeral() and it.state == DeliveryState.SuccessfullyPropagated)
+  )
+
+proc trySendMessages(self: SendService) {.async.} =
+  let tasksToSend = self.taskCache.filterIt(it.state == DeliveryState.NextRoundRetry)
+
+  for task in tasksToSend:
+    # TODO: check whether running these concurrently yields any performance gain
+ await self.sendProcessor.process(task) + +proc serviceLoop(self: SendService) {.async.} = + ## Continuously monitors that the sent messages have been received by a store node + while true: + await self.trySendMessages() + await self.checkStoredMessages() + self.evaluateAndCleanUp() + ## TODO: add circuit breaker to avoid infinite looping in case of persistent failures + ## Use OnlineStateChange observers to pause/resume the loop + await sleepAsync(ServiceLoopInterval) + +proc startSendService*(self: SendService) = + self.serviceLoopHandle = self.serviceLoop() + +proc stopSendService*(self: SendService) = + if not self.serviceLoopHandle.isNil(): + discard self.serviceLoopHandle.cancelAndWait() + +proc send*(self: SendService, task: DeliveryTask) {.async.} = + assert(not task.isNil(), "task for send must not be nil") + + info "SendService.send: processing delivery task", + requestId = task.requestId, msgHash = task.msgHash.to0xHex() + + self.subscriptionService.subscribe(task.msg.contentTopic).isOkOr: + error "SendService.send: failed to subscribe to content topic", + contentTopic = task.msg.contentTopic, error = error + + await self.sendProcessor.process(task) + reportTaskResult(self, task) + if task.state != DeliveryState.FailedToDeliver: + self.addTask(task) diff --git a/waku/node/delivery_service/subscription_service.nim b/waku/node/delivery_service/subscription_service.nim new file mode 100644 index 000000000..78763161b --- /dev/null +++ b/waku/node/delivery_service/subscription_service.nim @@ -0,0 +1,64 @@ +import chronos, chronicles +import + waku/[ + waku_core, + waku_core/topics, + events/message_events, + waku_node, + common/broker/broker_context, + ] + +type SubscriptionService* = ref object of RootObj + brokerCtx: BrokerContext + node: WakuNode + +proc new*(T: typedesc[SubscriptionService], node: WakuNode): T = + ## The storeClient will help to acquire any possible missed messages + + return SubscriptionService(brokerCtx: node.brokerCtx, node: node) + 
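The `BaseSendProcessor.process` walk introduced earlier (follow `fallbackProcessor` links while the task stays in `FallbackRetry`, then downgrade to `NextRoundRetry`) is a chain-of-responsibility. The following is a minimal Python sketch of that control flow only, not the Nim implementation; the names and string-valued states are hypothetical stand-ins for the `DeliveryState` enum:

```python
class SendProcessor:
    """Chain-of-responsibility sketch: each processor either handles the task
    or defers to its fallback; a FALLBACK_RETRY outcome advances the walk."""

    def __init__(self, name, can_handle, outcome):
        self.name = name
        self.can_handle = can_handle  # predicate over the task
        self.outcome = outcome        # state this processor would set
        self.fallback = None

    def chain(self, nxt):
        self.fallback = nxt
        return nxt

    def process(self, task):
        current = self
        keep_trying = True
        while current is not None and keep_trying:
            if current.can_handle(task):
                task["state"] = current.outcome
            current = current.fallback
            keep_trying = task["state"] == "FALLBACK_RETRY"
        # exhausted the chain without success: retry in a later service round
        if task["state"] == "FALLBACK_RETRY":
            task["state"] = "NEXT_ROUND_RETRY"
```

With two links (relay falling back to lightpush, mirroring `setupSendProcessorChain`), a relay failure marked `FALLBACK_RETRY` hands the task to the next processor in the same round.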
+proc isSubscribed*( + self: SubscriptionService, topic: ContentTopic +): Result[bool, string] = + var isSubscribed = false + if self.node.wakuRelay.isNil() == false: + return self.node.isSubscribed((kind: ContentSub, topic: topic)) + + # TODO: Add support for edge mode with Filter subscription management + return ok(isSubscribed) + +#TODO: later PR may consider to refactor or place this function elsewhere +# The only important part is that it emits MessageReceivedEvent +proc getReceiveHandler(self: SubscriptionService): WakuRelayHandler = + return proc(topic: PubsubTopic, msg: WakuMessage): Future[void] {.async, gcsafe.} = + let msgHash = computeMessageHash(topic, msg).to0xHex() + info "API received message", + pubsubTopic = topic, contentTopic = msg.contentTopic, msgHash = msgHash + + MessageReceivedEvent.emit(self.brokerCtx, msgHash, msg) + +proc subscribe*(self: SubscriptionService, topic: ContentTopic): Result[void, string] = + let isSubscribed = self.isSubscribed(topic).valueOr: + error "Failed to check subscription status: ", error = error + return err("Failed to check subscription status: " & error) + + if isSubscribed == false: + if self.node.wakuRelay.isNil() == false: + self.node.subscribe((kind: ContentSub, topic: topic), self.getReceiveHandler()).isOkOr: + error "Failed to subscribe: ", error = error + return err("Failed to subscribe: " & error) + + # TODO: Add support for edge mode with Filter subscription management + + return ok() + +proc unsubscribe*( + self: SubscriptionService, topic: ContentTopic +): Result[void, string] = + if self.node.wakuRelay.isNil() == false: + self.node.unsubscribe((kind: ContentSub, topic: topic)).isOkOr: + error "Failed to unsubscribe: ", error = error + return err("Failed to unsubscribe: " & error) + + # TODO: Add support for edge mode with Filter subscription management + return ok() diff --git a/waku/waku_relay/topic_health.nim b/waku/node/health_monitor/topic_health.nim similarity index 84% rename from 
waku/waku_relay/topic_health.nim rename to waku/node/health_monitor/topic_health.nim index 774abc584..5a1ea0a16 100644 --- a/waku/waku_relay/topic_health.nim +++ b/waku/node/health_monitor/topic_health.nim @@ -1,11 +1,12 @@ import chronos -import ../waku_core +import waku/waku_core type TopicHealth* = enum UNHEALTHY MINIMALLY_HEALTHY SUFFICIENTLY_HEALTHY + NOT_SUBSCRIBED proc `$`*(t: TopicHealth): string = result = @@ -13,6 +14,7 @@ proc `$`*(t: TopicHealth): string = of UNHEALTHY: "UnHealthy" of MINIMALLY_HEALTHY: "MinimallyHealthy" of SUFFICIENTLY_HEALTHY: "SufficientlyHealthy" + of NOT_SUBSCRIBED: "NotSubscribed" type TopicHealthChangeHandler* = proc( pubsubTopic: PubsubTopic, topicHealth: TopicHealth diff --git a/waku/node/kernel_api/lightpush.nim b/waku/node/kernel_api/lightpush.nim index 2a5f6acbb..ffe2afdac 100644 --- a/waku/node/kernel_api/lightpush.nim +++ b/waku/node/kernel_api/lightpush.nim @@ -193,7 +193,6 @@ proc lightpushPublishHandler( mixify: bool = false, ): Future[lightpush_protocol.WakuLightPushResult] {.async.} = let msgHash = pubsubTopic.computeMessageHash(message).to0xHex() - if not node.wakuLightpushClient.isNil(): notice "publishing message with lightpush", pubsubTopic = pubsubTopic, @@ -201,21 +200,23 @@ proc lightpushPublishHandler( target_peer_id = peer.peerId, msg_hash = msgHash, mixify = mixify - if mixify: #indicates we want to use mix to send the message - #TODO: How to handle multiple addresses? 
- let conn = node.wakuMix.toConnection( - MixDestination.exitNode(peer.peerId), - WakuLightPushCodec, - MixParameters(expectReply: Opt.some(true), numSurbs: Opt.some(byte(1))), - # indicating we only want a single path to be used for reply hence numSurbs = 1 - ).valueOr: - error "could not create mix connection" - return lighpushErrorResult( - LightPushErrorCode.SERVICE_NOT_AVAILABLE, - "Waku lightpush with mix not available", - ) + if defined(libp2p_mix_experimental_exit_is_dest) and mixify: + #indicates we want to use mix to send the message + when defined(libp2p_mix_experimental_exit_is_dest): + #TODO: How to handle multiple addresses? + let conn = node.wakuMix.toConnection( + MixDestination.exitNode(peer.peerId), + WakuLightPushCodec, + MixParameters(expectReply: Opt.some(true), numSurbs: Opt.some(byte(1))), + # indicating we only want a single path to be used for reply hence numSurbs = 1 + ).valueOr: + error "could not create mix connection" + return lighpushErrorResult( + LightPushErrorCode.SERVICE_NOT_AVAILABLE, + "Waku lightpush with mix not available", + ) - return await node.wakuLightpushClient.publish(some(pubsubTopic), message, conn) + return await node.wakuLightpushClient.publish(some(pubsubTopic), message, conn) else: return await node.wakuLightpushClient.publish(some(pubsubTopic), message, peer) @@ -264,7 +265,7 @@ proc lightpushPublish*( LightPushErrorCode.NO_PEERS_TO_RELAY, "no suitable remote peers" ) - let pubsubForPublish = pubSubTopic.valueOr: + let pubsubForPublish = pubsubTopic.valueOr: if node.wakuAutoSharding.isNone(): let msg = "Pubsub topic must be specified when static sharding is enabled" error "lightpush publish error", error = msg diff --git a/waku/node/kernel_api/relay.nim b/waku/node/kernel_api/relay.nim index 827cc1e5f..a0a128449 100644 --- a/waku/node/kernel_api/relay.nim +++ b/waku/node/kernel_api/relay.nim @@ -30,6 +30,8 @@ import ../peer_manager, ../../waku_rln_relay +export waku_relay.WakuRelayHandler + declarePublicHistogram 
waku_histogram_message_size, "message size histogram in kB", buckets = [ @@ -91,6 +93,23 @@ proc registerRelayHandler( node.wakuRelay.subscribe(topic, uniqueTopicHandler) +proc getTopicOfSubscriptionEvent( + node: WakuNode, subscription: SubscriptionEvent +): Result[(PubsubTopic, Option[ContentTopic]), string] = + case subscription.kind + of ContentSub, ContentUnsub: + if node.wakuAutoSharding.isSome(): + let shard = node.wakuAutoSharding.get().getShard((subscription.topic)).valueOr: + return err("Autosharding error: " & error) + return ok(($shard, some(subscription.topic))) + else: + return + err("Static sharding is used, relay subscriptions must specify a pubsub topic") + of PubsubSub, PubsubUnsub: + return ok((subscription.topic, none[ContentTopic]())) + else: + return err("Unsupported subscription type in relay getTopicOfSubscriptionEvent") + proc subscribe*( node: WakuNode, subscription: SubscriptionEvent, handler: WakuRelayHandler ): Result[void, string] = @@ -101,27 +120,15 @@ proc subscribe*( error "Invalid API call to `subscribe`. WakuRelay not mounted." return err("Invalid API call to `subscribe`. 
WakuRelay not mounted.") - let (pubsubTopic, contentTopicOp) = - case subscription.kind - of ContentSub: - if node.wakuAutoSharding.isSome(): - let shard = node.wakuAutoSharding.get().getShard((subscription.topic)).valueOr: - error "Autosharding error", error = error - return err("Autosharding error: " & error) - ($shard, some(subscription.topic)) - else: - return err( - "Static sharding is used, relay subscriptions must specify a pubsub topic" - ) - of PubsubSub: - (subscription.topic, none(ContentTopic)) - else: - return err("Unsupported subscription type in relay subscribe") + let (pubsubTopic, contentTopicOp) = getTopicOfSubscriptionEvent(node, subscription).valueOr: + error "Failed to decode subscription event", error = error + return err("Failed to decode subscription event: " & error) if node.wakuRelay.isSubscribed(pubsubTopic): warn "No-effect API call to subscribe. Already subscribed to topic", pubsubTopic return ok() + info "subscribe", pubsubTopic, contentTopicOp node.registerRelayHandler(pubsubTopic, handler) node.topicSubscriptionQueue.emit((kind: PubsubSub, topic: pubsubTopic)) @@ -136,22 +143,9 @@ proc unsubscribe*( error "Invalid API call to `unsubscribe`. WakuRelay not mounted." return err("Invalid API call to `unsubscribe`. 
WakuRelay not mounted.") - let (pubsubTopic, contentTopicOp) = - case subscription.kind - of ContentUnsub: - if node.wakuAutoSharding.isSome(): - let shard = node.wakuAutoSharding.get().getShard((subscription.topic)).valueOr: - error "Autosharding error", error = error - return err("Autosharding error: " & error) - ($shard, some(subscription.topic)) - else: - return err( - "Static sharding is used, relay subscriptions must specify a pubsub topic" - ) - of PubsubUnsub: - (subscription.topic, none(ContentTopic)) - else: - return err("Unsupported subscription type in relay unsubscribe") + let (pubsubTopic, contentTopicOp) = getTopicOfSubscriptionEvent(node, subscription).valueOr: + error "Failed to decode unsubscribe event", error = error + return err("Failed to decode unsubscribe event: " & error) if not node.wakuRelay.isSubscribed(pubsubTopic): warn "No-effect API call to `unsubscribe`. Was not subscribed", pubsubTopic @@ -163,9 +157,22 @@ proc unsubscribe*( return ok() +proc isSubscribed*( + node: WakuNode, subscription: SubscriptionEvent +): Result[bool, string] = + if node.wakuRelay.isNil(): + error "Invalid API call to `isSubscribed`. WakuRelay not mounted." + return err("Invalid API call to `isSubscribed`. WakuRelay not mounted.") + + let (pubsubTopic, contentTopicOp) = getTopicOfSubscriptionEvent(node, subscription).valueOr: + error "Failed to decode subscription event", error = error + return err("Failed to decode subscription event: " & error) + + return ok(node.wakuRelay.isSubscribed(pubsubTopic)) + proc publish*( node: WakuNode, pubsubTopicOp: Option[PubsubTopic], message: WakuMessage -): Future[Result[void, string]] {.async, gcsafe.} = +): Future[Result[int, string]] {.async, gcsafe.} = ## Publish a `WakuMessage`. Pubsub topic contains; none, a named or static shard. ## `WakuMessage` should contain a `contentTopic` field for light node functionality. ## It is also used to determine the shard. 
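The relay.nim refactor above funnels `subscribe`, `unsubscribe`, and the new `isSubscribed` through one `getTopicOfSubscriptionEvent` helper. A rough Python sketch of that decision follows; the function name and the `autoshard` callable are hypothetical stand-ins (`autoshard` plays the role of `wakuAutoSharding.getShard`):

```python
def resolve_subscription(kind, topic, autoshard=None):
    """Map a subscription event to (pubsub_topic, content_topic or None).

    Content subscriptions need autosharding to derive the shard; with static
    sharding (no autoshard function) they must name a pubsub topic instead.
    """
    if kind in ("ContentSub", "ContentUnsub"):
        if autoshard is None:
            raise ValueError(
                "Static sharding is used, relay subscriptions must specify a pubsub topic"
            )
        return autoshard(topic), topic
    if kind in ("PubsubSub", "PubsubUnsub"):
        return topic, None
    raise ValueError(f"unsupported subscription kind: {kind}")
```

Centralizing this mapping keeps the three call sites from drifting apart, which is exactly what the duplicated `case` blocks in the old `subscribe`/`unsubscribe` had started to do.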
@@ -184,16 +191,20 @@ proc publish*( let msg = "Autosharding error: " & error return err(msg) - #TODO instead of discard return error when 0 peers received the message - discard await node.wakuRelay.publish(pubsubTopic, message) + let numPeers = (await node.wakuRelay.publish(pubsubTopic, message)).valueOr: + warn "waku.relay did not publish", error = error + # Todo: If NoPeersToPublish, we might want to return ok(0) instead!!! + return err("publish failed in relay: " & $error) notice "waku.relay published", peerId = node.peerId, pubsubTopic = pubsubTopic, msg_hash = pubsubTopic.computeMessageHash(message).to0xHex(), - publishTime = getNowInNanosecondTime() + publishTime = getNowInNanosecondTime(), + numPeers = numPeers - return ok() + # TODO: investigate if we can return error in case numPeers is 0 + ok(numPeers) proc mountRelay*( node: WakuNode, diff --git a/waku/node/waku_node.nim b/waku/node/waku_node.nim index 07e36dd13..d556811ac 100644 --- a/waku/node/waku_node.nim +++ b/waku/node/waku_node.nim @@ -27,38 +27,42 @@ import libp2p/protocols/mix/mix_protocol import - ../waku_core, - ../waku_core/topics/sharding, - ../waku_relay, - ../waku_archive, - ../waku_archive_legacy, - ../waku_store_legacy/protocol as legacy_store, - ../waku_store_legacy/client as legacy_store_client, - ../waku_store_legacy/common as legacy_store_common, - ../waku_store/protocol as store, - ../waku_store/client as store_client, - ../waku_store/common as store_common, - ../waku_store/resume, - ../waku_store_sync, - ../waku_filter_v2, - ../waku_filter_v2/client as filter_client, - ../waku_metadata, - ../waku_rendezvous/protocol, - ../waku_rendezvous/client as rendezvous_client, - ../waku_rendezvous/waku_peer_record, - ../waku_lightpush_legacy/client as legacy_ligntpuhs_client, - ../waku_lightpush_legacy as legacy_lightpush_protocol, - ../waku_lightpush/client as ligntpuhs_client, - ../waku_lightpush as lightpush_protocol, - ../waku_enr, - ../waku_peer_exchange, - ../waku_rln_relay, + waku/[ + 
waku_core, + waku_core/topics/sharding, + waku_relay, + waku_archive, + waku_archive_legacy, + waku_store_legacy/protocol as legacy_store, + waku_store_legacy/client as legacy_store_client, + waku_store_legacy/common as legacy_store_common, + waku_store/protocol as store, + waku_store/client as store_client, + waku_store/common as store_common, + waku_store/resume, + waku_store_sync, + waku_filter_v2, + waku_filter_v2/client as filter_client, + waku_metadata, + waku_rendezvous/protocol, + waku_rendezvous/client as rendezvous_client, + waku_rendezvous/waku_peer_record, + waku_lightpush_legacy/client as legacy_ligntpuhs_client, + waku_lightpush_legacy as legacy_lightpush_protocol, + waku_lightpush/client as ligntpuhs_client, + waku_lightpush as lightpush_protocol, + waku_enr, + waku_peer_exchange, + waku_rln_relay, + common/rate_limit/setting, + common/callbacks, + common/nimchronos, + waku_mix, + requests/node_requests, + common/broker/broker_context, + ], ./net_config, - ./peer_manager, - ../common/rate_limit/setting, - ../common/callbacks, - ../common/nimchronos, - ../waku_mix + ./peer_manager declarePublicCounter waku_node_messages, "number of messages received", ["type"] @@ -123,6 +127,7 @@ type enr*: enr.Record libp2pPing*: Ping rng*: ref rand.HmacDrbgContext + brokerCtx*: BrokerContext wakuRendezvous*: WakuRendezVous wakuRendezvousClient*: rendezvous_client.WakuRendezVousClient announcedAddresses*: seq[MultiAddress] @@ -131,6 +136,23 @@ type rateLimitSettings*: ProtocolRateLimitSettings wakuMix*: WakuMix +proc deduceRelayShard( + node: WakuNode, + contentTopic: ContentTopic, + pubsubTopicOp: Option[PubsubTopic] = none[PubsubTopic](), +): Result[RelayShard, string] = + let pubsubTopic = pubsubTopicOp.valueOr: + if node.wakuAutoSharding.isNone(): + return err("Pubsub topic must be specified when static sharding is enabled.") + let shard = node.wakuAutoSharding.get().getShard(contentTopic).valueOr: + let msg = "Deducing shard failed: " & error + return err(msg) + 
return ok(shard) + + let shard = RelayShard.parse(pubsubTopic).valueOr: + return err("Invalid topic:" & pubsubTopic & " " & $error) + return ok(shard) + proc getShardsGetter(node: WakuNode): GetShards = return proc(): seq[uint16] {.closure, gcsafe, raises: [].} = # fetch pubsubTopics subscribed to relay and convert them to shards @@ -177,11 +199,14 @@ proc new*( info "Initializing networking", addrs = $netConfig.announcedAddresses + let brokerCtx = globalBrokerContext() + let queue = newAsyncEventQueue[SubscriptionEvent](0) let node = WakuNode( peerManager: peerManager, switch: switch, rng: rng, + brokerCtx: brokerCtx, enr: enr, announcedAddresses: netConfig.announcedAddresses, topicSubscriptionQueue: queue, @@ -252,6 +277,7 @@ proc mountAutoSharding*( info "mounting auto sharding", clusterId = clusterId, shardCount = shardCount node.wakuAutoSharding = some(Sharding(clusterId: clusterId, shardCountGenZero: shardCount)) + return ok() proc getMixNodePoolSize*(node: WakuNode): int = @@ -443,6 +469,21 @@ proc updateAnnouncedAddrWithPrimaryIpAddr*(node: WakuNode): Result[void, string] return ok() +proc startProvidersAndListeners(node: WakuNode) = + RequestRelayShard.setProvider( + node.brokerCtx, + proc( + pubsubTopic: Option[PubsubTopic], contentTopic: ContentTopic + ): Result[RequestRelayShard, string] = + let shard = node.deduceRelayShard(contentTopic, pubsubTopic).valueOr: + return err($error) + return ok(RequestRelayShard(relayShard: shard)), + ).isOkOr: + error "Can't set provider for RequestRelayShard", error = error + +proc stopProvidersAndListeners(node: WakuNode) = + RequestRelayShard.clearProvider(node.brokerCtx) + proc start*(node: WakuNode) {.async.} = ## Starts a created Waku Node and ## all its mounted protocols. 
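`startProvidersAndListeners`/`stopProvidersAndListeners` register a `RequestRelayShard` provider on the broker context at node start and clear it at stop. The sketch below is a toy Python registry illustrating that provider pattern under stated assumptions; the real `RequestBroker` is a Nim macro-generated API and this class is hypothetical:

```python
class RequestBroker:
    """Toy provider registry: one provider callable per request type,
    registered at start() and cleared at stop()."""

    def __init__(self):
        self._providers = {}

    def set_provider(self, request_type, fn):
        # Refuse to silently overwrite an existing provider.
        if request_type in self._providers:
            return False
        self._providers[request_type] = fn
        return True

    def clear_provider(self, request_type):
        self._providers.pop(request_type, None)

    def request(self, request_type, *args):
        fn = self._providers.get(request_type)
        if fn is None:
            raise LookupError(f"no provider for {request_type}")
        return fn(*args)
```

Decoupling requesters from the node through such a registry is what lets modules like the send service ask for a relay shard without importing `waku_node` directly.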
@@ -491,6 +532,8 @@ proc start*(node: WakuNode) {.async.} = ## The switch will update addresses after start using the addressMapper await node.switch.start() + node.startProvidersAndListeners() + node.started = true if not zeroPortPresent: @@ -503,6 +546,9 @@ proc start*(node: WakuNode) {.async.} = proc stop*(node: WakuNode) {.async.} = ## By stopping the switch we are stopping all the underlying mounted protocols + + node.stopProvidersAndListeners() + await node.switch.stop() node.peerManager.stop() diff --git a/waku/requests/health_request.nim b/waku/requests/health_request.nim new file mode 100644 index 000000000..9f98eba67 --- /dev/null +++ b/waku/requests/health_request.nim @@ -0,0 +1,21 @@ +import waku/common/broker/[request_broker, multi_request_broker] + +import waku/api/types +import waku/node/health_monitor/[protocol_health, topic_health] +import waku/waku_core/topics + +export protocol_health, topic_health + +RequestBroker(sync): + type RequestNodeHealth* = object + healthStatus*: NodeHealth + +RequestBroker(sync): + type RequestRelayTopicsHealth* = object + topicHealth*: seq[tuple[topic: PubsubTopic, health: TopicHealth]] + + proc signature(topics: seq[PubsubTopic]): Result[RequestRelayTopicsHealth, string] + +MultiRequestBroker: + type RequestProtocolHealth* = object + healthStatus*: ProtocolHealth diff --git a/waku/requests/node_requests.nim b/waku/requests/node_requests.nim new file mode 100644 index 000000000..a4ccc6de4 --- /dev/null +++ b/waku/requests/node_requests.nim @@ -0,0 +1,11 @@ +import std/options +import waku/common/broker/[request_broker, multi_request_broker] +import waku/waku_core/[topics] + +RequestBroker(sync): + type RequestRelayShard* = object + relayShard*: RelayShard + + proc signature( + pubsubTopic: Option[PubsubTopic], contentTopic: ContentTopic + ): Result[RequestRelayShard, string] diff --git a/waku/requests/requests.nim b/waku/requests/requests.nim new file mode 100644 index 000000000..03e10f882 --- /dev/null +++ 
b/waku/requests/requests.nim @@ -0,0 +1,3 @@ +import ./[health_request, rln_requests, node_requests] + +export health_request, rln_requests, node_requests diff --git a/waku/requests/rln_requests.nim b/waku/requests/rln_requests.nim new file mode 100644 index 000000000..8b61f9fcd --- /dev/null +++ b/waku/requests/rln_requests.nim @@ -0,0 +1,9 @@ +import waku/common/broker/request_broker, waku/waku_core/message/message + +RequestBroker: + type RequestGenerateRlnProof* = object + proof*: seq[byte] + + proc signature( + message: WakuMessage, senderEpoch: float64 + ): Future[Result[RequestGenerateRlnProof, string]] {.async.} diff --git a/waku/waku_core/message/digest.nim b/waku/waku_core/message/digest.nim index 8b99abd7e..3f82ce8f6 100644 --- a/waku/waku_core/message/digest.nim +++ b/waku/waku_core/message/digest.nim @@ -19,6 +19,11 @@ func shortLog*(hash: WakuMessageHash): string = func `$`*(hash: WakuMessageHash): string = shortLog(hash) +func to0xHex*(hash: WakuMessageHash): string = + var hexhash = newStringOfCap(64) + hexhash &= hash.toOpenArray(hash.low, hash.high).to0xHex() + hexhash + const EmptyWakuMessageHash*: WakuMessageHash = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, diff --git a/waku/waku_filter_v2/client.nim b/waku/waku_filter_v2/client.nim index c42bca3db..323cc6da8 100644 --- a/waku/waku_filter_v2/client.nim +++ b/waku/waku_filter_v2/client.nim @@ -10,9 +10,8 @@ import bearssl/rand, stew/byteutils import - ../node/peer_manager, - ../node/delivery_monitor/subscriptions_observer, - ../waku_core, + waku/ + [node/peer_manager, waku_core, events/delivery_events, common/broker/broker_context], ./common, ./protocol_metrics, ./rpc_codec, @@ -22,19 +21,16 @@ logScope: topics = "waku filter client" type WakuFilterClient* = ref object of LPProtocol + brokerCtx: BrokerContext rng: ref HmacDrbgContext peerManager: PeerManager pushHandlers: seq[FilterPushHandler] - subscrObservers: seq[SubscriptionObserver] 
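The `WakuFilterClient` change below replaces the `subscrObservers` list with events emitted on a shared `BrokerContext`. A tiny Python sketch of that emitter pattern, with hypothetical names (the Nim side uses typed events such as `OnFilterSubscribeEvent`):

```python
class EventBus:
    """Observer-to-event refactor in miniature: instead of each client keeping
    its own observer list, handlers register for named events on a shared bus
    and emitters fire them without knowing who listens."""

    def __init__(self):
        self._handlers = {}

    def on(self, event, handler):
        self._handlers.setdefault(event, []).append(handler)

    def emit(self, event, *args):
        # Emitting an event nobody listens to is a harmless no-op.
        for handler in self._handlers.get(event, []):
            handler(*args)
```

This removes the `addSubscrObserver` wiring step: subscribers attach to the bus once, and the filter client only needs the bus reference.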
 func generateRequestId(rng: ref HmacDrbgContext): string =
   var bytes: array[10, byte]
   hmacDrbgGenerate(rng[], bytes)
   return toHex(bytes)
 
-proc addSubscrObserver*(wfc: WakuFilterClient, obs: SubscriptionObserver) =
-  wfc.subscrObservers.add(obs)
-
 proc sendSubscribeRequest(
   wfc: WakuFilterClient,
   servicePeer: RemotePeerInfo,
@@ -132,8 +128,7 @@ proc subscribe*(
 
   ?await wfc.sendSubscribeRequest(servicePeer, filterSubscribeRequest)
 
-  for obs in wfc.subscrObservers:
-    obs.onSubscribe(pubSubTopic, contentTopicSeq)
+  OnFilterSubscribeEvent.emit(wfc.brokerCtx, pubsubTopic, contentTopicSeq)
 
   return ok()
 
@@ -156,8 +151,7 @@ proc unsubscribe*(
 
   ?await wfc.sendSubscribeRequest(servicePeer, filterSubscribeRequest)
 
-  for obs in wfc.subscrObservers:
-    obs.onUnsubscribe(pubSubTopic, contentTopicSeq)
+  OnFilterUnSubscribeEvent.emit(wfc.brokerCtx, pubsubTopic, contentTopicSeq)
 
   return ok()
 
@@ -210,6 +204,9 @@ proc initProtocolHandler(wfc: WakuFilterClient) =
 proc new*(
     T: type WakuFilterClient, peerManager: PeerManager, rng: ref HmacDrbgContext
 ): T =
-  let wfc = WakuFilterClient(rng: rng, peerManager: peerManager, pushHandlers: @[])
+  let brokerCtx = globalBrokerContext()
+  let wfc = WakuFilterClient(
+    brokerCtx: brokerCtx, rng: rng, peerManager: peerManager, pushHandlers: @[]
+  )
   wfc.initProtocolHandler()
   wfc
diff --git a/waku/waku_lightpush/callbacks.nim b/waku/waku_lightpush/callbacks.nim
index bde4e3e26..ac2e562b6 100644
--- a/waku/waku_lightpush/callbacks.nim
+++ b/waku/waku_lightpush/callbacks.nim
@@ -31,7 +31,7 @@ proc checkAndGenerateRLNProof*(
 
 proc getNilPushHandler*(): PushMessageHandler =
   return proc(
-    peer: PeerId, pubsubTopic: string, message: WakuMessage
+    pubsubTopic: string, message: WakuMessage
   ): Future[WakuLightPushResult] {.async.} =
     return lightpushResultInternalError("no waku relay found")
 
@@ -39,7 +39,7 @@ proc getRelayPushHandler*(
   wakuRelay: WakuRelay, rlnPeer: Option[WakuRLNRelay] = none[WakuRLNRelay]()
 ): PushMessageHandler =
   return proc(
-    peer: PeerId, pubsubTopic: string, message: WakuMessage
+    pubsubTopic: string, message: WakuMessage
   ): Future[WakuLightPushResult] {.async.} =
     # append RLN proof
     let msgWithProof = checkAndGenerateRLNProof(rlnPeer, message).valueOr:
diff --git a/waku/waku_lightpush/client.nim b/waku/waku_lightpush/client.nim
index b528b4c76..fd12c49d2 100644
--- a/waku/waku_lightpush/client.nim
+++ b/waku/waku_lightpush/client.nim
@@ -5,7 +5,6 @@ import libp2p/peerid, libp2p/stream/connection
 import
   ../waku_core/peers,
   ../node/peer_manager,
-  ../node/delivery_monitor/publish_observer,
   ../utils/requests,
   ../waku_core,
   ./common,
@@ -19,16 +18,12 @@ logScope:
 type WakuLightPushClient* = ref object
   rng*: ref rand.HmacDrbgContext
   peerManager*: PeerManager
-  publishObservers: seq[PublishObserver]
 
 proc new*(
     T: type WakuLightPushClient, peerManager: PeerManager, rng: ref rand.HmacDrbgContext
 ): T =
   WakuLightPushClient(peerManager: peerManager, rng: rng)
 
-proc addPublishObserver*(wl: WakuLightPushClient, obs: PublishObserver) =
-  wl.publishObservers.add(obs)
-
 proc ensureTimestampSet(message: var WakuMessage) =
   if message.timestamp == 0:
     message.timestamp = getNowInNanosecondTime()
@@ -40,36 +35,43 @@ func shortPeerId(peer: PeerId): string =
 func shortPeerId(peer: RemotePeerInfo): string =
   shortLog(peer.peerId)
 
-proc sendPushRequestToConn(
-    wl: WakuLightPushClient, request: LightPushRequest, conn: Connection
+proc sendPushRequest(
+    wl: WakuLightPushClient,
+    req: LightPushRequest,
+    peer: PeerId | RemotePeerInfo,
+    conn: Option[Connection] = none(Connection),
 ): Future[WakuLightPushResult] {.async.} =
-  try:
-    await conn.writeLp(request.encode().buffer)
-  except LPStreamRemoteClosedError:
-    error "Failed to write request to peer", error = getCurrentExceptionMsg()
-    return lightpushResultInternalError(
-      "Failed to write request to peer: " & getCurrentExceptionMsg()
-    )
+  let connection = conn.valueOr:
+    (await wl.peerManager.dialPeer(peer, WakuLightPushCodec)).valueOr:
+      waku_lightpush_v3_errors.inc(labelValues = [dialFailure])
+      return lighpushErrorResult(
+        LightPushErrorCode.NO_PEERS_TO_RELAY,
+        dialFailure & ": " & $peer & " is not accessible",
+      )
+
+  defer:
+    await connection.closeWithEOF()
+
+  await connection.writeLP(req.encode().buffer)
 
   var buffer: seq[byte]
   try:
-    buffer = await conn.readLp(DefaultMaxRpcSize.int)
+    buffer = await connection.readLp(DefaultMaxRpcSize.int)
   except LPStreamRemoteClosedError:
     error "Failed to read response from peer", error = getCurrentExceptionMsg()
     return lightpushResultInternalError(
       "Failed to read response from peer: " & getCurrentExceptionMsg()
     )
+
   let response = LightpushResponse.decode(buffer).valueOr:
-    error "failed to decode response", error = $error
+    error "failed to decode response"
     waku_lightpush_v3_errors.inc(labelValues = [decodeRpcFailure])
     return lightpushResultInternalError(decodeRpcFailure)
 
-  let requestIdMismatch = response.requestId != request.requestId
-  let tooManyRequests = response.statusCode == LightPushErrorCode.TOO_MANY_REQUESTS
-  if requestIdMismatch and (not tooManyRequests):
-    # response with TOO_MANY_REQUESTS error code has no requestId by design
+  if response.requestId != req.requestId and
+      response.statusCode != LightPushErrorCode.TOO_MANY_REQUESTS:
     error "response failure, requestId mismatch",
-      requestId = request.requestId, responseRequestId = response.requestId
+      requestId = req.requestId, responseRequestId = response.requestId
     return lightpushResultInternalError("response failure, requestId mismatch")
 
   return toPushResult(response)
@@ -80,37 +82,32 @@ proc publish*(
     wakuMessage: WakuMessage,
     dest: Connection | PeerId | RemotePeerInfo,
 ): Future[WakuLightPushResult] {.async, gcsafe.} =
-  let conn =
-    when dest is Connection:
-      dest
-    else:
-      (await wl.peerManager.dialPeer(dest, WakuLightPushCodec)).valueOr:
-        waku_lightpush_v3_errors.inc(labelValues = [dialFailure])
-        return lighpushErrorResult(
-          LightPushErrorCode.NO_PEERS_TO_RELAY,
-          "Peer is not accessible: " & dialFailure & " - " & $dest,
-        )
-
-  defer:
-    await conn.closeWithEOF()
-
   var message = wakuMessage
   ensureTimestampSet(message)
 
   let msgHash = computeMessageHash(pubSubTopic.get(""), message).to0xHex()
+
+  let peerIdStr =
+    when dest is Connection:
+      shortPeerId(dest.peerId)
+    else:
+      shortPeerId(dest)
+
   info "publish",
     myPeerId = wl.peerManager.switch.peerInfo.peerId,
-    peerId = shortPeerId(conn.peerId),
+    peerId = peerIdStr,
     msgHash = msgHash,
     sentTime = getNowInNanosecondTime()
 
   let request = LightpushRequest(
     requestId: generateRequestId(wl.rng), pubsubTopic: pubSubTopic, message: message
   )
 
-  let relayPeerCount = ?await wl.sendPushRequestToConn(request, conn)
-
-  for obs in wl.publishObservers:
-    obs.onMessagePublished(pubSubTopic.get(""), message)
+  let relayPeerCount =
+    when dest is Connection:
+      ?await wl.sendPushRequest(request, dest.peerId, some(dest))
+    else:
+      ?await wl.sendPushRequest(request, dest)
 
   return lightpushSuccessResult(relayPeerCount)
 
@@ -124,3 +121,12 @@ proc publishToAny*(
       LightPushErrorCode.NO_PEERS_TO_RELAY, "no suitable remote peers"
     )
   return await wl.publish(some(pubsubTopic), wakuMessage, peer)
+
+proc publishWithConn*(
+    wl: WakuLightPushClient,
+    pubSubTopic: PubsubTopic,
+    message: WakuMessage,
+    conn: Connection,
+    destPeer: PeerId,
+): Future[WakuLightPushResult] {.async, gcsafe.} =
+  return await wl.publish(some(pubSubTopic), message, conn)
diff --git a/waku/waku_lightpush/common.nim b/waku/waku_lightpush/common.nim
index 9c2ea7ced..f0762e2d2 100644
--- a/waku/waku_lightpush/common.nim
+++ b/waku/waku_lightpush/common.nim
@@ -25,7 +25,7 @@ type ErrorStatus* = tuple[code: LightpushStatusCode, desc: Option[string]]
 type WakuLightPushResult* = Result[uint32, ErrorStatus]
 
 type PushMessageHandler* = proc(
-  peer: PeerId, pubsubTopic: PubsubTopic, message: WakuMessage
+  pubsubTopic: PubsubTopic, message: WakuMessage
 ): Future[WakuLightPushResult] {.async.}
 
 const TooManyRequestsMessage* = "Request rejected due to too many requests"
@@ -39,7 +39,7 @@ func toPushResult*(response: LightPushResponse): WakuLightPushResult =
   return (
     if (relayPeerCount == 0):
       # Consider publishing to zero peers an error even if the service node
-      # sent us a "successful" response with zero peers
+      # sent us a "successful" response with zero peers
       err((LightPushErrorCode.NO_PEERS_TO_RELAY, response.statusDesc))
     else:
       ok(relayPeerCount)
diff --git a/waku/waku_lightpush/protocol.nim b/waku/waku_lightpush/protocol.nim
index 95bfc003e..ecbff8461 100644
--- a/waku/waku_lightpush/protocol.nim
+++ b/waku/waku_lightpush/protocol.nim
@@ -71,7 +71,7 @@ proc handleRequest(
     msg_hash = msg_hash,
     receivedTime = getNowInNanosecondTime()
 
-  let res = (await wl.pushHandler(peerId, pubsubTopic, pushRequest.message)).valueOr:
+  let res = (await wl.pushHandler(pubsubTopic, pushRequest.message)).valueOr:
     return err((code: error.code, desc: error.desc))
 
   return ok(res)
diff --git a/waku/waku_lightpush_legacy/callbacks.nim b/waku/waku_lightpush_legacy/callbacks.nim
index 1fe4cf302..a5b88b5b8 100644
--- a/waku/waku_lightpush_legacy/callbacks.nim
+++ b/waku/waku_lightpush_legacy/callbacks.nim
@@ -30,7 +30,7 @@ proc checkAndGenerateRLNProof*(
 
 proc getNilPushHandler*(): PushMessageHandler =
   return proc(
-    peer: PeerId, pubsubTopic: string, message: WakuMessage
+    pubsubTopic: string, message: WakuMessage
   ): Future[WakuLightPushResult[void]] {.async.} =
     return err("no waku relay found")
 
@@ -38,7 +38,7 @@ proc getRelayPushHandler*(
   wakuRelay: WakuRelay, rlnPeer: Option[WakuRLNRelay] = none[WakuRLNRelay]()
 ): PushMessageHandler =
   return proc(
-    peer: PeerId, pubsubTopic: string, message: WakuMessage
+    pubsubTopic: string, message: WakuMessage
   ): Future[WakuLightPushResult[void]] {.async.} =
     # append RLN proof
     let msgWithProof = ?checkAndGenerateRLNProof(rlnPeer, message)
diff --git a/waku/waku_lightpush_legacy/client.nim b/waku/waku_lightpush_legacy/client.nim
index 0e3c9bd6f..ab489bec9 100644
--- a/waku/waku_lightpush_legacy/client.nim
+++ b/waku/waku_lightpush_legacy/client.nim
@@ -5,7 +5,6 @@ import libp2p/peerid
 import
   ../waku_core/peers,
   ../node/peer_manager,
-  ../node/delivery_monitor/publish_observer,
   ../utils/requests,
   ../waku_core,
   ./common,
@@ -19,7 +18,6 @@ logScope:
 type WakuLegacyLightPushClient* = ref object
   peerManager*: PeerManager
   rng*: ref rand.HmacDrbgContext
-  publishObservers: seq[PublishObserver]
 
 proc new*(
     T: type WakuLegacyLightPushClient,
@@ -28,9 +26,6 @@ proc new*(
 ): T =
   WakuLegacyLightPushClient(peerManager: peerManager, rng: rng)
 
-proc addPublishObserver*(wl: WakuLegacyLightPushClient, obs: PublishObserver) =
-  wl.publishObservers.add(obs)
-
 proc sendPushRequest(
     wl: WakuLegacyLightPushClient, req: PushRequest, peer: PeerId | RemotePeerInfo
 ): Future[WakuLightPushResult[void]] {.async, gcsafe.} =
@@ -86,9 +81,6 @@ proc publish*(
   let pushRequest = PushRequest(pubSubTopic: pubSubTopic, message: message)
   ?await wl.sendPushRequest(pushRequest, peer)
 
-  for obs in wl.publishObservers:
-    obs.onMessagePublished(pubSubTopic, message)
-
   notice "publishing message with lightpush",
     pubsubTopic = pubsubTopic,
     contentTopic = message.contentTopic,
@@ -111,7 +103,4 @@ proc publishToAny*(
   let pushRequest = PushRequest(pubSubTopic: pubSubTopic, message: message)
   ?await wl.sendPushRequest(pushRequest, peer)
 
-  for obs in wl.publishObservers:
-    obs.onMessagePublished(pubSubTopic, message)
-
   return ok()
diff --git a/waku/waku_lightpush_legacy/common.nim b/waku/waku_lightpush_legacy/common.nim
index fcdf1814c..1b40ba72b 100644
--- a/waku/waku_lightpush_legacy/common.nim
+++ b/waku/waku_lightpush_legacy/common.nim
@@ -9,7 +9,7 @@ export WakuLegacyLightPushCodec
 
 type WakuLightPushResult*[T] = Result[T, string]
 
 type PushMessageHandler* = proc(
-  peer: PeerId, pubsubTopic: PubsubTopic, message: WakuMessage
+  pubsubTopic: PubsubTopic, message: WakuMessage
 ): Future[WakuLightPushResult[void]] {.async.}
 
 const TooManyRequestsMessage* = "TOO_MANY_REQUESTS"
diff --git a/waku/waku_lightpush_legacy/protocol.nim b/waku/waku_lightpush_legacy/protocol.nim
index d51943cff..72fc963ee 100644
--- a/waku/waku_lightpush_legacy/protocol.nim
+++ b/waku/waku_lightpush_legacy/protocol.nim
@@ -53,7 +53,7 @@ proc handleRequest*(
     msg_hash = msg_hash,
     receivedTime = getNowInNanosecondTime()
 
-  let handleRes = await wl.pushHandler(peerId, pubsubTopic, message)
+  let handleRes = await wl.pushHandler(pubsubTopic, message)
 
   isSuccess = handleRes.isOk()
   pushResponseInfo = (if isSuccess: "OK" else: handleRes.error)
diff --git a/waku/waku_relay.nim b/waku/waku_relay.nim
index 96328d984..a91033cf1 100644
--- a/waku/waku_relay.nim
+++ b/waku/waku_relay.nim
@@ -1,3 +1,4 @@
-import ./waku_relay/[protocol, topic_health]
+import ./waku_relay/protocol
+import waku/node/health_monitor/topic_health
 
 export protocol, topic_health
diff --git a/waku/waku_relay/protocol.nim b/waku/waku_relay/protocol.nim
index cbf9123dd..3f343269a 100644
--- a/waku/waku_relay/protocol.nim
+++ b/waku/waku_relay/protocol.nim
@@ -17,8 +17,13 @@ import
   libp2p/protocols/pubsub/rpc/messages,
   libp2p/stream/connection,
   libp2p/switch
+
 import
-  ../waku_core, ./message_id, ./topic_health, ../node/delivery_monitor/publish_observer
+  waku/waku_core,
+  waku/node/health_monitor/topic_health,
+  waku/requests/health_request,
+  ./message_id,
+  waku/common/broker/broker_context
 
 from ../waku_core/codecs import WakuRelayCodec
 export WakuRelayCodec
@@ -157,7 +162,6 @@ type
     # map topic with its assigned validator within pubsub
     topicHandlers: Table[PubsubTopic, TopicHandler]
     # map topic with the TopicHandler proc in charge of attending topic's incoming message events
-    publishObservers: seq[PublishObserver]
     topicsHealth*: Table[string, TopicHealth]
     onTopicHealthChange*: TopicHealthChangeHandler
     topicHealthLoopHandle*: Future[void]
@@ -321,6 +325,18 @@ proc initRelayObservers(w: WakuRelay) =
 
   w.addObserver(administrativeObserver)
 
+proc initRequestProviders(w: WakuRelay) =
+  RequestRelayTopicsHealth.setProvider(
+    globalBrokerContext(),
+    proc(topics: seq[PubsubTopic]): Result[RequestRelayTopicsHealth, string] =
+      var collectedRes: RequestRelayTopicsHealth
+      for topic in topics:
+        let health = w.topicsHealth.getOrDefault(topic, TopicHealth.NOT_SUBSCRIBED)
+        collectedRes.topicHealth.add((topic, health))
+      return ok(collectedRes),
+  ).isOkOr:
+    error "Cannot set Relay Topics Health request provider", error = error
+
 proc new*(
     T: type WakuRelay, switch: Switch, maxMessageSize = int(DefaultMaxWakuMessageSize)
 ): WakuRelayResult[T] =
@@ -340,9 +356,10 @@ proc new*(
     )
 
     procCall GossipSub(w).initPubSub()
+    w.topicsHealth = initTable[string, TopicHealth]()
     w.initProtocolHandler()
     w.initRelayObservers()
-    w.topicsHealth = initTable[string, TopicHealth]()
+    w.initRequestProviders()
   except InitializationError:
     return err("initialization error: " & getCurrentExceptionMsg())
 
@@ -353,12 +370,6 @@ proc addValidator*(
 ) {.gcsafe.} =
   w.wakuValidators.add((handler, errorMessage))
 
-proc addPublishObserver*(w: WakuRelay, obs: PublishObserver) =
-  ## Observer when the api client performed a publish operation. This
-  ## is initially aimed for bringing an additional layer of delivery reliability thanks
-  ## to store
-  w.publishObservers.add(obs)
-
 proc addObserver*(w: WakuRelay, observer: PubSubObserver) {.gcsafe.} =
   ## Observes when a message is sent/received from the GossipSub PoV
   procCall GossipSub(w).addObserver(observer)
@@ -573,6 +584,7 @@ proc subscribe*(w: WakuRelay, pubsubTopic: PubsubTopic, handler: WakuRelayHandle
   procCall GossipSub(w).subscribe(pubsubTopic, topicHandler)
   w.topicHandlers[pubsubTopic] = topicHandler
+
   asyncSpawn w.updateTopicsHealth()
 
 proc unsubscribeAll*(w: WakuRelay, pubsubTopic: PubsubTopic) =
   ## Unsubscribe all handlers on this pubsub topic
@@ -628,9 +640,6 @@ proc publish*(
   if relayedPeerCount <= 0:
     return err(NoPeersToPublish)
 
-  for obs in w.publishObservers:
-    obs.onMessagePublished(pubSubTopic, message)
-
   return ok(relayedPeerCount)
 
 proc getConnectedPubSubPeers*(
diff --git a/waku/waku_rln_relay/rln_relay.nim b/waku/waku_rln_relay/rln_relay.nim
index 6a8fea2b5..8758a7bcd 100644
--- a/waku/waku_rln_relay/rln_relay.nim
+++ b/waku/waku_rln_relay/rln_relay.nim
@@ -24,10 +24,14 @@ import
 import ./nonce_manager
 
 import
-  ../common/error_handling,
-  ../waku_relay, # for WakuRelayHandler
-  ../waku_core,
-  ../waku_keystore
+  waku/[
+    common/error_handling,
+    waku_relay, # for WakuRelayHandler
+    waku_core,
+    requests/rln_requests,
+    waku_keystore,
+    common/broker/broker_context,
+  ]
 
 logScope:
   topics = "waku rln_relay"
@@ -65,6 +69,7 @@ type WakuRLNRelay* = ref object of RootObj
   nonceManager*: NonceManager
   epochMonitorFuture*: Future[void]
   rootChangesFuture*: Future[void]
+  brokerCtx*: BrokerContext
 
 proc calcEpoch*(rlnPeer: WakuRLNRelay, t: float64): Epoch =
   ## gets time `t` as `flaot64` with subseconds resolution in the fractional part
@@ -91,6 +96,7 @@ proc stop*(rlnPeer: WakuRLNRelay) {.async: (raises: [Exception]).} =
   # stop the group sync, and flush data to tree db
   info "stopping rln-relay"
+  RequestGenerateRlnProof.clearProvider(rlnPeer.brokerCtx)
   await rlnPeer.groupManager.stop()
 
 proc hasDuplicate*(
@@ -275,11 +281,11 @@ proc validateMessageAndUpdateLog*(
 
   return isValidMessage
 
-proc appendRLNProof*(
-    rlnPeer: WakuRLNRelay, msg: var WakuMessage, senderEpochTime: float64
-): RlnRelayResult[void] =
-  ## returns true if it can create and append a `RateLimitProof` to the supplied `msg`
-  ## returns false otherwise
+proc createRlnProof(
+    rlnPeer: WakuRLNRelay, msg: WakuMessage, senderEpochTime: float64
+): RlnRelayResult[seq[byte]] =
+  ## returns a new `RateLimitProof` for the supplied `msg`
+  ## returns an error if it cannot create the proof
   ## `senderEpochTime` indicates the number of seconds passed since Unix epoch. The fractional part holds sub-seconds.
   ## The `epoch` field of `RateLimitProof` is derived from the provided `senderEpochTime` (using `calcEpoch()`)
@@ -291,7 +297,14 @@
   let proof = rlnPeer.groupManager.generateProof(input, epoch, nonce).valueOr:
     return err("could not generate rln-v2 proof: " & $error)
 
-  msg.proof = proof.encode().buffer
+  return ok(proof.encode().buffer)
+
+proc appendRLNProof*(
+    rlnPeer: WakuRLNRelay, msg: var WakuMessage, senderEpochTime: float64
+): RlnRelayResult[void] =
+  msg.proof = rlnPeer.createRlnProof(msg, senderEpochTime).valueOr:
+    return err($error)
+
   return ok()
 
 proc clearNullifierLog*(rlnPeer: WakuRlnRelay) =
@@ -429,6 +442,7 @@ proc mount(
     rlnMaxEpochGap: max(uint64(MaxClockGapSeconds / float64(conf.epochSizeSec)), 1),
     rlnMaxTimestampGap: uint64(MaxClockGapSeconds),
     onFatalErrorAction: conf.onFatalErrorAction,
+    brokerCtx: globalBrokerContext(),
   )
 
   # track root changes on smart contract merkle tree
@@ -438,6 +452,19 @@ proc mount(
 
   # Start epoch monitoring in the background
   wakuRlnRelay.epochMonitorFuture = monitorEpochs(wakuRlnRelay)
+
+  RequestGenerateRlnProof.setProvider(
+    wakuRlnRelay.brokerCtx,
+    proc(
+        msg: WakuMessage, senderEpochTime: float64
+    ): Future[Result[RequestGenerateRlnProof, string]] {.async.} =
+      let proof = createRlnProof(wakuRlnRelay, msg, senderEpochTime).valueOr:
+        return err("Could not create RLN proof: " & $error)
+
+      return ok(RequestGenerateRlnProof(proof: proof)),
+  ).isOkOr:
+    return err("Proof generator provider cannot be set: " & $error)
+
   return ok(wakuRlnRelay)
 
 proc isReady*(rlnPeer: WakuRLNRelay): Future[bool] {.async: (raises: [Exception]).} =
From beb1dde1b5bd27018e6bd224c4bd15d6a2c7e65f Mon Sep 17 00:00:00 2001
From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
Date: Fri, 30 Jan 2026 09:06:51 +0100
Subject: [PATCH 43/70] force epoll is used in chronos for Android (#3705)

---
 waku.nimble | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/waku.nimble b/waku.nimble
index 3ff41c5bd..7368ba74b 100644
--- a/waku.nimble
+++ b/waku.nimble
@@ -93,7 +93,7 @@ proc buildMobileAndroid(srcDir = ".", params = "") =
       extra_params &= " " & paramStr(i)
 
   exec "nim c" & " --out:" & outDir &
-    "/libwaku.so --threads:on --app:lib --opt:size --noMain --mm:refc -d:chronicles_sinks=textlines[dynamic] --header --passL:-L" &
+    "/libwaku.so --threads:on --app:lib --opt:size --noMain --mm:refc -d:chronicles_sinks=textlines[dynamic] --header -d:chronosEventEngine=epoll --passL:-L" &
     outdir & " --passL:-lrln --passL:-llog --cpu:" & cpu &
     " --os:android -d:androidNDK " & extra_params & " " & srcDir & "/libwaku.nim"
From 77f6bc6d727eb28e3eec074d2f2a47f62482d6c7 Mon Sep 17 00:00:00 2001
From: Igor Sirotin
Date: Fri, 30 Jan 2026 11:52:25 +0000
Subject: [PATCH 44/70] docs: update licenses

---
 LICENSE-APACHEv2 => LICENSE-APACHE-2.0 |  5 +----
 LICENSE-MIT                            | 18 +++++++-----------
 2 files changed, 8 insertions(+), 15 deletions(-)
 rename LICENSE-APACHEv2 => LICENSE-APACHE-2.0 (98%)

diff --git a/LICENSE-APACHEv2 b/LICENSE-APACHE-2.0
similarity index 98%
rename from LICENSE-APACHEv2
rename to LICENSE-APACHE-2.0
index 7b6a3cb27..d64569567 100644
--- a/LICENSE-APACHEv2
+++ b/LICENSE-APACHE-2.0
@@ -1,6 +1,3 @@
-nim-waku is licensed under the Apache License version 2
-Copyright (c) 2018 Status Research & Development GmbH
------------------------------------------------------
 
                                  Apache License
                            Version 2.0, January 2004
@@ -190,7 +187,7 @@
       same "printed page" as the copyright notice for easier
       identification within third-party archives.
 
-   Copyright 2018 Status Research & Development GmbH
+   Copyright [yyyy] [name of copyright owner]
 
    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
diff --git a/LICENSE-MIT b/LICENSE-MIT
index aab8020f0..d4c697062 100644
--- a/LICENSE-MIT
+++ b/LICENSE-MIT
@@ -1,25 +1,21 @@
-nim-waku is licensed under the MIT License
-Copyright (c) 2018 Status Research & Development GmbH
------------------------------------------------------
-
 The MIT License (MIT)
 
-Copyright (c) 2018 Status Research & Development GmbH
+Copyright © 2025-2026 Logos
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
+of this software and associated documentation files (the “Software”), to deal
 in the Software without restriction, including without limitation the rights
 to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 copies of the Software, and to permit persons to whom the Software is
 furnished to do so, subject to the following conditions:
 
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
 
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
\ No newline at end of file
From 09034837e64146f51f05266774099c0ef8283c60 Mon Sep 17 00:00:00 2001
From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
Date: Fri, 30 Jan 2026 15:35:37 +0100
Subject: [PATCH 45/70] fix build_rln.sh script (#3704)

---
 scripts/build_rln.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/build_rln.sh b/scripts/build_rln.sh
index 5e1b0caa5..1a8b63177 100755
--- a/scripts/build_rln.sh
+++ b/scripts/build_rln.sh
@@ -18,7 +18,7 @@ output_filename=$3
 host_triplet=$(rustc --version --verbose | awk '/host:/{print $2}')
 
 tarball="${host_triplet}"
-
+tarball+="-stateless"
 tarball+="-rln.tar.gz"
 
 # Download the prebuilt rln library if it is available
From 2c2d8e1c1512921353cde7582a1eb7faf2417423 Mon Sep 17 00:00:00 2001
From: Igor Sirotin
Date: Thu, 5 Feb 2026 00:30:16 +0000
Subject: [PATCH 46/70] chore: update license files to comply with Logos licensing requirements

---
 LICENSE-APACHE-2.0 => LICENSE-APACHE | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename LICENSE-APACHE-2.0 => LICENSE-APACHE (100%)

diff --git a/LICENSE-APACHE-2.0 b/LICENSE-APACHE
similarity index 100%
rename from LICENSE-APACHE-2.0
rename to LICENSE-APACHE
From 6421685ecad9039c4252c9b362531d54801715cc Mon Sep 17 00:00:00 2001
From: Darshan <35736874+darshankabariya@users.noreply.github.com>
Date: Wed, 11 Feb 2026 03:00:57 +0530
Subject: [PATCH 47/70] chore: bump v0.38.0 (#3712)

---
 .github/workflows/windows-build.yml     |   9 +-
 Makefile                                |   1 +
 scripts/build_rln.sh                    |  58 ++++------
 scripts/install_nasm_in_windows.sh      |  37 +++++++
 scripts/regenerate_anvil_state.sh       | 104 ++++++++++++++++++
 tests/testlib/wakunode.nim              |   4 +-
 tests/waku_core/test_message_digest.nim |   8 +-
 ...ployed-contracts-mint-and-approved.json.gz | Bin 118346 -> 118972 bytes
 tests/waku_store/test_wakunode_store.nim |   6 +-
 vendor/nim-dnsdisc                      |   2 +-
 vendor/nim-faststreams                  |   2 +-
 vendor/nim-http-utils                   |   2 +-
 vendor/nim-json-serialization           |   2 +-
 vendor/nim-libp2p                       |   2 +-
 vendor/nim-lsquic                       |   2 +-
 vendor/nim-metrics                      |   2 +-
 vendor/nim-presto                       |   2 +-
 vendor/nim-serialization                |   2 +-
 vendor/nim-sqlite3-abi                  |   2 +-
 vendor/nim-stew                         |   2 +-
 vendor/nim-testutils                    |   2 +-
 vendor/nim-toml-serialization           |   2 +-
 vendor/nim-unittest2                    |   2 +-
 vendor/nim-websock                      |   2 +-
 vendor/waku-rlnv2-contract              |   2 +-
 vendor/zerokit                          |   2 +-
 .../send_service/relay_processor.nim    |   4 +-
 waku/utils/requests.nim                 |   2 +-
 .../postgres_driver/postgres_driver.nim |  14 +--
 .../postgres_driver/postgres_driver.nim |  14 +--
 waku/waku_filter_v2/client.nim          |   2 +-
 waku/waku_rln_relay/rln_relay.nim       |   2 +-
 waku/waku_store_sync/reconciliation.nim |  11 +-
 33 files changed, 225 insertions(+), 85 deletions(-)
 create mode 100644 scripts/install_nasm_in_windows.sh
 create mode 100755 scripts/regenerate_anvil_state.sh

diff --git a/.github/workflows/windows-build.yml b/.github/workflows/windows-build.yml
index ed6d2cb17..9c1b1eab0 100644
--- a/.github/workflows/windows-build.yml
+++ b/.github/workflows/windows-build.yml
@@ -33,6 +33,7 @@ jobs:
             make
             cmake
             upx
+            unzip
             mingw-w64-x86_64-rust
             mingw-w64-x86_64-postgresql
             mingw-w64-x86_64-gcc
@@ -44,6 +45,12 @@ jobs:
             mingw-w64-x86_64-cmake
             mingw-w64-x86_64-llvm
             mingw-w64-x86_64-clang
+            mingw-w64-x86_64-nasm
+
+      - name: Manually install nasm
+        run: |
+          bash scripts/install_nasm_in_windows.sh
+          source $HOME/.bashrc
 
       - name: Add UPX to PATH
         run: |
@@ -54,7 +61,7 @@ jobs:
 
       - name: Verify dependencies
         run: |
-          which upx gcc g++ make cmake cargo rustc python
+          which upx gcc g++ make cmake cargo rustc python nasm
 
       - name: Updating submodules
         run: git submodule update --init --recursive
diff --git a/Makefile b/Makefile
index 13882253e..6457b3c0f 100644
--- a/Makefile
+++ b/Makefile
@@ -43,6 +43,7 @@ ifeq ($(detected_OS),Windows)
 
 LIBS = -lws2_32 -lbcrypt -liphlpapi -luserenv -lntdll -lminiupnpc -lnatpmp -lpq
 NIM_PARAMS += $(foreach lib,$(LIBS),--passL:"$(lib)")
+NIM_PARAMS += --passL:"-Wl,--allow-multiple-definition"
 
 export PATH := /c/msys64/usr/bin:/c/msys64/mingw64/bin:/c/msys64/usr/lib:/c/msys64/mingw64/lib:$(PATH)
 
diff --git a/scripts/build_rln.sh b/scripts/build_rln.sh
index 1a8b63177..b36ebe807 100755
--- a/scripts/build_rln.sh
+++ b/scripts/build_rln.sh
@@ -1,7 +1,8 @@
 #!/usr/bin/env bash
 
-# This script is used to build the rln library for the current platform, or download it from the
-# release page if it is available.
+# This script is used to build the rln library for the current platform.
+# Previously downloaded prebuilt binaries, but due to compatibility issues
+# we now always build from source.
 
 set -e
 
@@ -14,41 +15,26 @@ output_filename=$3
 [[ -z "${rln_version}" ]] && { echo "No rln version specified"; exit 1; }
 [[ -z "${output_filename}" ]] && { echo "No output filename specified"; exit 1; }
 
-# Get the host triplet
-host_triplet=$(rustc --version --verbose | awk '/host:/{print $2}')
+echo "Building RLN library from source (version ${rln_version})..."
-
-tarball="${host_triplet}"
-tarball+="-stateless"
-tarball+="-rln.tar.gz"
+# Check if submodule version = version in Makefile
+cargo metadata --format-version=1 --no-deps --manifest-path "${build_dir}/rln/Cargo.toml"
 
-# Download the prebuilt rln library if it is available
-if curl --silent --fail-with-body -L \
-  "https://github.com/vacp2p/zerokit/releases/download/$rln_version/$tarball" \
-  -o "${tarball}";
-then
-  echo "Downloaded ${tarball}"
-  tar -xzf "${tarball}"
-  mv "release/librln.a" "${output_filename}"
-  rm -rf "${tarball}" release
+detected_OS=$(uname -s)
+if [[ "$detected_OS" == MINGW* || "$detected_OS" == MSYS* ]]; then
+  submodule_version=$(cargo metadata --format-version=1 --no-deps --manifest-path "${build_dir}/rln/Cargo.toml" | sed -n 's/.*"name":"rln","version":"\([^"]*\)".*/\1/p')
 else
-  echo "Failed to download ${tarball}"
-  # Build rln instead
-  # first, check if submodule version = version in Makefile
-  cargo metadata --format-version=1 --no-deps --manifest-path "${build_dir}/rln/Cargo.toml"
-
-  detected_OS=$(uname -s)
-  if [[ "$detected_OS" == MINGW* || "$detected_OS" == MSYS* ]]; then
-    submodule_version=$(cargo metadata --format-version=1 --no-deps --manifest-path "${build_dir}/rln/Cargo.toml" | sed -n 's/.*"name":"rln","version":"\([^"]*\)".*/\1/p')
-  else
-    submodule_version=$(cargo metadata --format-version=1 --no-deps --manifest-path "${build_dir}/rln/Cargo.toml" | jq -r '.packages[] | select(.name == "rln") | .version')
-  fi
-
-  if [[ "v${submodule_version}" != "${rln_version}" ]]; then
-    echo "Submodule version (v${submodule_version}) does not match version in Makefile (${rln_version})"
-    echo "Please update the submodule to ${rln_version}"
-    exit 1
-  fi
-  # if submodule version = version in Makefile, build rln
-  cargo build --release -p rln --manifest-path "${build_dir}/rln/Cargo.toml"
-  cp "${build_dir}/target/release/librln.a" "${output_filename}"
+  submodule_version=$(cargo metadata --format-version=1 --no-deps --manifest-path "${build_dir}/rln/Cargo.toml" | jq -r '.packages[] | select(.name == "rln") | .version')
 fi
+
+if [[ "v${submodule_version}" != "${rln_version}" ]]; then
+  echo "Submodule version (v${submodule_version}) does not match version in Makefile (${rln_version})"
+  echo "Please update the submodule to ${rln_version}"
+  exit 1
+fi
+
+# Build rln from source
+cargo build --release -p rln --manifest-path "${build_dir}/rln/Cargo.toml"
+cp "${build_dir}/target/release/librln.a" "${output_filename}"
+
+echo "Successfully built ${output_filename}"
diff --git a/scripts/install_nasm_in_windows.sh b/scripts/install_nasm_in_windows.sh
new file mode 100644
index 000000000..2bba5ecd4
--- /dev/null
+++ b/scripts/install_nasm_in_windows.sh
@@ -0,0 +1,37 @@
+#!/usr/bin/env sh
+set -e
+
+NASM_VERSION="2.16.01"
+NASM_ZIP="nasm-${NASM_VERSION}-win64.zip"
+NASM_URL="https://www.nasm.us/pub/nasm/releasebuilds/${NASM_VERSION}/win64/${NASM_ZIP}"
+
+INSTALL_DIR="$HOME/.local/nasm"
+BIN_DIR="$INSTALL_DIR/bin"
+
+echo "Installing NASM ${NASM_VERSION}..."
+
+# Create directories
+mkdir -p "$BIN_DIR"
+cd "$INSTALL_DIR"
+
+# Download
+if [ ! -f "$NASM_ZIP" ]; then
+  echo "Downloading NASM..."
+  curl -LO "$NASM_URL"
+fi
+
+# Extract
+echo "Extracting..."
+unzip -o "$NASM_ZIP"
+
+# Move binaries
+cp nasm-*/nasm.exe "$BIN_DIR/"
+cp nasm-*/ndisasm.exe "$BIN_DIR/"
+
+# Add to PATH in bashrc (idempotent)
+if ! grep -q 'nasm/bin' "$HOME/.bashrc"; then
+  echo '' >> "$HOME/.bashrc"
+  echo '# NASM' >> "$HOME/.bashrc"
+  echo 'export PATH="$HOME/.local/nasm/bin:$PATH"' >> "$HOME/.bashrc"
+fi
+
diff --git a/scripts/regenerate_anvil_state.sh b/scripts/regenerate_anvil_state.sh
new file mode 100755
index 000000000..9474591d9
--- /dev/null
+++ b/scripts/regenerate_anvil_state.sh
@@ -0,0 +1,104 @@
+#!/usr/bin/env bash
+
+# Simple script to regenerate the Anvil state file
+# This creates a state file compatible with the current Foundry version
+
+set -e
+
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
+STATE_DIR="$PROJECT_ROOT/tests/waku_rln_relay/anvil_state"
+STATE_FILE="$STATE_DIR/state-deployed-contracts-mint-and-approved.json"
+STATE_FILE_GZ="${STATE_FILE}.gz"
+
+echo "==================================="
+echo "Anvil State File Regeneration Tool"
+echo "==================================="
+echo ""
+
+# Check if Foundry is installed
+if ! command -v anvil &> /dev/null; then
+  echo "ERROR: anvil is not installed!"
+  echo "Please run: make rln-deps"
+  exit 1
+fi
+
+ANVIL_VERSION=$(anvil --version 2>/dev/null | head -n1)
+echo "Using Foundry: $ANVIL_VERSION"
+echo ""
+
+# Backup existing state file
+if [ -f "$STATE_FILE_GZ" ]; then
+  BACKUP_FILE="${STATE_FILE_GZ}.backup-$(date +%Y%m%d-%H%M%S)"
+  echo "Backing up existing state file to: $(basename $BACKUP_FILE)"
+  cp "$STATE_FILE_GZ" "$BACKUP_FILE"
+fi
+
+# Remove old state files
+rm -f "$STATE_FILE" "$STATE_FILE_GZ"
+
+echo ""
+echo "Running test to generate fresh state file..."
+echo "This will:"
+echo "  1. Build RLN library"
+echo "  2. Start Anvil with state dump enabled"
+echo "  3. Deploy contracts"
+echo "  4. Save state and compress it"
+echo ""
+
+cd "$PROJECT_ROOT"
+
+# Run a single test that deploys contracts
+# The test framework will handle state dump
+make test tests/waku_rln_relay/test_rln_group_manager_onchain.nim "RLN instances" || {
+  echo ""
+  echo "Test execution completed (exit status: $?)"
+  echo "Checking if state file was generated..."
+}
+
+# Check if state file was created
+if [ -f "$STATE_FILE" ]; then
+  echo ""
+  echo "✓ State file generated: $STATE_FILE"
+
+  # Compress it
+  gzip -c "$STATE_FILE" > "$STATE_FILE_GZ"
+  echo "✓ Compressed: $STATE_FILE_GZ"
+
+  # File sizes
+  STATE_SIZE=$(du -h "$STATE_FILE" | cut -f1)
+  GZ_SIZE=$(du -h "$STATE_FILE_GZ" | cut -f1)
+  echo ""
+  echo "File sizes:"
+  echo "  Uncompressed: $STATE_SIZE"
+  echo "  Compressed: $GZ_SIZE"
+
+  # Optionally remove uncompressed
+  echo ""
+  read -p "Remove uncompressed state file? [y/N] " -n 1 -r
+  echo
+  if [[ $REPLY =~ ^[Yy]$ ]]; then
+    rm "$STATE_FILE"
+    echo "✓ Removed uncompressed file"
+  fi
+
+  echo ""
+  echo "============================================"
+  echo "✓ SUCCESS! State file regenerated"
+  echo "============================================"
+  echo ""
+  echo "Next steps:"
+  echo "  1. Test locally: make test tests/node/test_wakunode_lightpush.nim"
+  echo "  2. If tests pass, commit: git add $STATE_FILE_GZ"
+  echo "  3. Push and verify CI passes"
+  echo ""
+else
+  echo ""
+  echo "============================================"
+  echo "✗ ERROR: State file was not generated"
+  echo "============================================"
+  echo ""
+  echo "The state file should have been created at: $STATE_FILE"
+  echo "Please check the test output above for errors."
+  exit 1
+fi
diff --git a/tests/testlib/wakunode.nim b/tests/testlib/wakunode.nim
index 36aacce03..f59546ec8 100644
--- a/tests/testlib/wakunode.nim
+++ b/tests/testlib/wakunode.nim
@@ -27,7 +27,7 @@ import
 # TODO: migrate to usage of a test cluster conf
 proc defaultTestWakuConfBuilder*(): WakuConfBuilder =
   var builder = WakuConfBuilder.init()
-  builder.withP2pTcpPort(Port(60000))
+  builder.withP2pTcpPort(Port(0))
   builder.withP2pListenAddress(parseIpAddress("0.0.0.0"))
   builder.restServerConf.withListenAddress(parseIpAddress("127.0.0.1"))
   builder.withDnsAddrsNameServers(
@@ -80,7 +80,7 @@ proc newTestWakuNode*(
   # Update extPort to default value if it's missing and there's an extIp or a DNS domain
   let extPort =
     if (extIp.isSome() or dns4DomainName.isSome()) and extPort.isNone():
-      some(Port(60000))
+      some(Port(0))
     else:
       extPort
diff --git a/tests/waku_core/test_message_digest.nim b/tests/waku_core/test_message_digest.nim
index 1d1f71225..22a10d84d 100644
--- a/tests/waku_core/test_message_digest.nim
+++ b/tests/waku_core/test_message_digest.nim
@@ -35,7 +35,7 @@ suite "Waku Message - Deterministic hashing":
       byteutils.toHex(message.payload) == "010203045445535405060708"
       byteutils.toHex(message.meta) == ""
       byteutils.toHex(toBytesBE(uint64(message.timestamp))) == "175789bfa23f8400"
-      messageHash.toHex() ==
+      byteutils.toHex(messageHash) ==
        "cccab07fed94181c83937c8ca8340c9108492b7ede354a6d95421ad34141fd37"
 
   test "digest computation - meta field (12 bytes)":
@@ -69,7 +69,7 @@ suite "Waku Message - Deterministic hashing":
       byteutils.toHex(message.payload) == "010203045445535405060708"
       byteutils.toHex(message.meta) == "73757065722d736563726574"
       byteutils.toHex(toBytesBE(uint64(message.timestamp))) == "175789bfa23f8400"
-      messageHash.toHex() ==
+      byteutils.toHex(messageHash) ==
        "b9b4852f9d8c489846e8bfc6c5ca6a1a8d460a40d28832a966e029eb39619199"
 
   test "digest computation - meta field (64 bytes)":
@@ -104,7 +104,7 @@
byteutils.toHex(message.meta) == "000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f303132333435363738393a3b3c3d3e3f" byteutils.toHex(toBytesBE(uint64(message.timestamp))) == "175789bfa23f8400" - messageHash.toHex() == + byteutils.toHex(messageHash) == "653460d04f66c5b11814d235152f4f246e6f03ef80a305a825913636fbafd0ba" test "digest computation - zero length payload": @@ -132,7 +132,7 @@ suite "Waku Message - Deterministic hashing": ## Then check: - messageHash.toHex() == + byteutils.toHex(messageHash) == "0f6448cc23b2db6c696aa6ab4b693eff4cf3549ff346fe1dbeb281697396a09f" test "waku message - check meta size is enforced": diff --git a/tests/waku_rln_relay/anvil_state/state-deployed-contracts-mint-and-approved.json.gz b/tests/waku_rln_relay/anvil_state/state-deployed-contracts-mint-and-approved.json.gz index ceb081c77788d7b5a3a933b2d0510303247694a1..b5fdebb741f2836e72a37eddc2e4cce6efe79b53 100644 GIT binary patch literal 118972 zcmV)nK%KuIiwFpb=!9ti19Nm?bY(4MWpHe7d1YiRV{dMBa$#e1b1iLYZgeeSZe%TC zaBy;Oc4cHPYIARH0PMZ%lH@pYCHyY^dwu|p`@D<{SJ~K(re!mu&$K4fyYD#w$si*l zvofo)wH38rR(D<)Bmp=a$K3(H{MT>gufNy-_1mBS>$kuC$G?^S_uu~foBGf6+n;_b z{cnBzrT!)V{eRDE>Vsc+^q+tExBicR=70P5`JhkH_89rjf6f2+m%sky-~RH?-~Md2 z+tE<|M<2ca_UG`Y-`ZaW+We2d{&Q;+&;0w}`~Ua%{I`~WpO;E=HHxhvHyOEPn`$&p zdhj8PRJ?7~b>(w)zUx*bYpc9=!NxqS^_3(0>+j`X`hV8`kAM6NdxV$&%fJ8qFSU~j z+x2-5KK$*^YI^{GY&Yru{_j8imTUd}KmPX5fAE=45QlXB{cn8B9Qj}K{GH<(rc^=i z>hEn`!SDb0=kI^d|H^)`N%fx^qYEivkXfpVMGS~rOf$8HnvLjWT1w8n-Ha$ZYIogq zaZ0?_w^@6)Vb_e!c{nRRv=UmV#)oJUt_7WyLVtYrU*9lBsAG@1o8f930X?Xud57Oh zRk;jTME5wpzTupeso9QFlot_QO{(46o?KLtM(Rb-2XkszUmL!(5X?tkXq2{IMdg)M zPP|eUBmfI5Wt|*2U$t{_bUed5vJ0;8o~TB{Z@6aL{2qJUSetYlBIbRH9QBA4ZL5qiYh4>CwiJN(y+| zny!aSW-OBHFr*|^3K(Y8tXxdP>(E21W%MXrVTQ4Gv5Hj%Icz8)>6}anwu9e{Z((Sq zRnT~&5rcMo3I=Tl@6l1^Ow%?t+qFu=4Nw|rk+D{t%WA|_M#dXbHUrG#d{9z`)uG^O z?*{Y0&SnhK24z$L?MDchyO3%Nx`}H5m(Yi{)`@XL&{}mVdh|g1G^#GO#Tbf0CmN=~ 
z#|9cFKgO^s0h6M)an7&9q>>`Ia|{ET^_Xi)AebbQrm2ymD#@rCBnG2O$i?{DLET|u zQoW9VRT04rd3#%Q5mTVhpJ-yl|7cUC1jfge`AJgS@3&9&yyE9`z)>Asd}`o^ww9m_e{E2&4p5xbgUj*aT&|NNb+c2mzY|xrp6iy zFp~T!Mr)7h(4&hvdS(NMGkgUjM~^;@4E!1{;FYm9Dd;aS1q1M+#%S4QESsQeRaiX( zbPnhhsE3&zI(S@17m~{j3=|~?)dz-?{Qw$c^r36%E`rZP@kM(8SuQp9E=iLLvsZ#4 zU`-}z6^xIzn~5#!07k63)&kOu2?1_&V}OCejy1^+69gPYuQ;a)y@J4v<(OXsDyH?< zq$qzc7v4xG-r(=$!W(=aXCQwKZ^XhiobcUh4J7E8#v-IH#0#WK#%v{9dZ=1M##<92 zz@Y(TT#}L_C{|)1%_h}i)OdaFCHI`s_iTc1BL+zO>>*OoTp0pw0g8f`6#EFpL|%(P zyS8yzBJI`~zkcYX(8y- z7=u8&F$GV6#L5hyE!2hBb=A$PCdwEf5P-Wv6Dk#xmOzNs+DNs8h%vf3#&FEzkMit3 zkJc!N-wOBuq(XEx2f}%0Js1(t+woXPU4W6r1ZRwRfdIiBSr?tdX^h79Y8}WDFj_3k zU~|Drr2d8}XuglqAWjSf6L7)P(;8nGHBN(HdW7JU&o+qdH*~ZS>^RMkwfBJOM z$8jxgpSshhc3TDhi&jDWqE%pb>H(dcIM4>H21tW)6une1%n0}o@}X#jXPnB$;Kj9K zj)8v^C81d%hnQ^@_%B)o@rzc$F-FyJXKucwZVN=O>kV?e0Y9N6R0!{`AVFi(1yiSu zw4ks$I;13IupW?(8`gzm9)ENd#4lO}w|T_ClN+Oo*3fS=v@R$_4#PI1KwatCcDy1E z56LxpO<3Fxidwdz+XiOT8)93o(2eK+pOFO``m!ou_5kHNx6u57;qI4+1wtU``3<3UuZHJfN8(iUy#;5MTWV(G9bB@EP}QM?BN=!jp&5Ijm`RGZS2-S z@5-q_%S$>M+}0!5+?NDP{1q@+l~^(eN``H~<-!|a!hvX*|2(2lyaW}*fns>nt3c6X zs`a+D`VyF|6D-iAhQ?c9T;T@Zrj@9wLJhF4G}uY`XfkvXP!p4Z#f!x%1yuY6Fxg-! 
zmH<9x1({2QsRHo)sG^~ddq@zEWY!6rlGeQ(WEG0~K z9Eryb6u!Y1_Y#QgaHqvGEX`)JF*IVl)CY^@h?$3CY0w@N4x77!Q598{W~*307*vZ_ zKxD;fZ?JTgm1dF$W4H!L%?5J;aNQev@#SziH>bq0l$)d8#oWsyDB3d?nj znslqCR~-S{wt;9vBn8mWs{;KI$OePx1rT{?smhp?!6T3_k=ya5sNe%nT5L!HVw#Sn zS<IfSt7KU0Dxy<3Y~rV>0v#3w!D<}})-GHgmU7&fS(wrz?9XKp201}ieFq!6>)!tY>^75FRa!3yJhQx)2 z0Ua{&6xTHeVhm+1)^(2oFBR}=!Dl`|M!8{PY2No&z+?dkrL4fO#R$|U>oV30j3Qz& zxDW%Z!9b0UnWy4F;!wc#2~#{V34>&L&7Lf69#Su z89dZ1L02WC9~d{d_2n@cv>Z)?8SwH^g#I-!?L)0SieS!lz|Y#CdL(6IlkS^>F<-OB z6wx{36)+h@tffP)VqR?l6DJR`Q(7HS8?adPXx}9$%$$Wmh=u|srt}1&>X*P|o(U@# zZNSUSfO#`&#saK>+%a74!P5N#M5+QXQfOLy?E@q;Ej#lIU~-4i4z&@pP%sJE4J>Ds zi>(&K9OAP_$GmO8YY8RN3JT5@*LcLL>I-0UkGgWJ+#29Z_vi%FMvyBxOr=40#1%WR zLXClD8M!+Zp;jrnd?-3oy#gYy4P#wPOTi{A<1Qn&4ytwqg;;P61}&J5(y4$AavGq$ z;U6ZxJ|bfkb3+~KRcsl(wWSPhNHv!;NTCe0#0*Rh04O^0BzlBt5FK_R6|Yx7WL*vP zi_x6JqyT+HG-Qr#Hetlv5?5Jc8X%CA55$B<_6mrs6$~3pX9mL1k5Ad; z=unr6?;5SbZY>1^H$BqPuhRZK(u1rQmAe1x$MHW>r21WzKXZmZJnxQo6Vj(^)W(G6Q)4d9@TV-^Z3o6rLH~#`V z7ouJ81$snx3f5hM#iOBabSQ^-kFx^%$H!r%S%Gp>W3p}@ z7HtUPx{car0NAk3z<7Z0#fwtc#p|Bv4b~I%3D==f&llL#@=N%^3ABHOo&+B9^*}{J z8bhJ5Ad|)qFi%ErK_awe#8#@%Af|QRQDuGsOjcmel3FztycBhSDn-XXQ@z18;dD^5 z!0TES=ZqcT^AcbNL$QRi_zPe%Bu(==!#apHrj)T(wTJu}mUlnd1%RA zdsqpUG$=5kUI3G4C`i@B0+=k>VxZN7?+VYV z1^5%9koUl_rkJQ}WmuzhB~UM6Fqn|NWRMcZpnU~Q25Um>YCK(`;5SRR5_D7B9BK!- zT@NzP^8D}(o$Jo!0c#$#7|q+4z+{cyLVgU;#Vde~PMG=@$vSk8Nuc|TH?87_b6}7v z-Sp^F#`NZtUmlZtAD#q~0x9cjgcc0LCy+U~RgpnSh6p>@^ESuCO)QYP23Qbv%oBs3 z9+6dtn2Cmlx@TBbEaEkkiyRmg+tCMAsYl{746^}=T&~%;2CPb2cgq?g?^ZY?yFdy3aw=D7xXIY zgO?6&gL);FPWL6f$oZajFwK_(fk0D$K(WD~&p8jMz#bflae4F|{G>t|jeg06Y+{Y3 zp8V>FJd~x^Zou4i@U14N!CnV6k1*1V7#WmU(4Y&b+H?=T+rUf!bnR8I;Rgp5DNqxF zmdrE>T2bg4J^U!XIUZg^lL7&>12kKkdQ`gb#2{7Smd*=cveMZBjJyO`cTkXq+gT0=rw^C8;D{fen=!MTW+3H)fe!aIO8-W4a~xzf#H~#3bNu|pwR1MvUWCu z)u1KtD zx5kf6p8h6Su*UOTB#u~M_Cm#k3T0sQ_iS6XFM-KeY#q|Xd#!5XUU9}qE%Ycr70;|A z%!QykR1lcC7(PT$f?!pV5?K1{V=~XPeD?IEFWFfu0N*YG)e1}w(gTt`8tSljjjMhb zSVfrHo)zgWFX45e20b@HsfxSBfR-2spr<+u^#MQ|J60>~!UEk 
zYhd#5z0*CBvBc1jb9D@mhbcq*w%by$I9=r7PynMbx&n3~jQjz8QU3~laE0;F#z2MQ zW}K@!n_huLrZ!l0KuE2jPHCu=H2w>;E+3nCm=vV#OJvE=VuFWFZ&PzcIZznpJ-QJg zXEn%9j~bAM{>oG-TBPmk9jpZQ;x%@YMcYuwNR_7oFjmo$lY+K1*le_{Z<1LI*%nnj> zL|1YP_(R4B7UmtSI&UR7UuX|l*nH40vc|nYuEacmuaC*xZZb*3{z)(wvT38(0PR9- z14es5cA(9hWLSrli}6RGuw|wk!@$*-z~nwkx4yxQr9}@m^&@4xBjOK`2OK6)uOMV$ z4LMVJ#)r42nyZ2pA+LbRij3Q%GlBbzP0jfbXiNs(H4*j!M5%*nSfEOS)lKmM6r`_7 z#4vt=6F%i~gjSIf^NQtc3O!bNdWpGq#T)MwGfxJG_o1)Qn@7jAsDimv;G0*#WMPU; zTwyf02Ad4f!==&;7H4d^nh^j$6{`|$gc!Mt1<@NM8|jgm6Mue8#*#yO%m9$uXdc`d zWWa=<8A7$^p-GE$m|=P`>JkM;J1pUj|G^vS^)VT&g5kxg9glHq?~tq-YJ<4!bnCUu zGu{r$L+=R8LIf6NR?k|Q=p(-ZCJSbh#~_g&sxaBHWF4A^jB&u;!AQgen;+|y8J5r) z@F`}gTTF7d?Ik?nbZ^WC1^$BYWab?Ou7Z6F4N6iSmtizWeoIfOF!8U!EIQciAYAkV@>ca+nZ%lBAz*`uQo z7{J%BaK;BNh4xF&5mmv-+H_-FiO>Q)BZ%oSl~Y8Q2JlFni_F03`tY@oeO@1tY1l@jm30!=HF_saKhIM=nn-LbA%a8OjRNQ z>=2mU6#xVkL!e>6J^+m%q?3Wb8CblknOa^VT0~_TF!gBZ^fF+>`NWLv-33^z7y+ys ztC;CNTaZ4-=^54v!&fWM{azlA!E+H{3}tFG#eFJBrI9q$CX;&dYB(lX#%i`qkw)&y zf}SYHe#e){*z*o!Jmmd=FAu|joHJffd@D@_VK3CTXgGN^*_^n$6)0;1LxMewtu zF?yu39nLx92P~Zwl#0+7s)jN=atodjKD+ECFPQ#4F>RQ z_pFF8jAmA}tYGRuWvt8<2V0PtF7%8UnrBFz`L)pd7ryiYhLTo^zuca;mn# zo=-J(U9Bl?O59&z*FtKW(LV7x8!)0e0J&o z(j*@Ss}XXxN7#Xe1)&N$sEkYaR*bS^)Jz9f7`LyF$)EtM9MErG`-U0pAQ;+))iWId zN5{lM6tmDtw+t-@HHIsIr<_wBuYk!+i7c1}ix+_iE*Y!ALEf-9kYQ4^Fv>K7GK?d~ zBLkN?Ah2V%{w2KNj8eWA@+QM-qfXhk%%d!5#JTD=oX)+w!fnVtX8;Hin2UrMQ;)8PsEcS@fX9}>n=cV9qOtA(FTMj}(c6wD3OFd`Zm?oa zpyF>Mi~s;IBorfk@N`K5wnB^QSIBdLC8)Ep)g=pB8w<+Wl-Mq$ut0%yc&K8gFuIz= zSF1o-2Wv@sFR$SR7Y%fTkR>!aiK)cA$fJ*p$tinyd4;6TUF^^9FtYA=xq1k z*zQ#pDt%TyR1Qdb;o<=4n_0fpW`#zgOVT8(jm_NkcO zU=3(!(6yNkahpu4te}RmCKEEKgs!_6$vspUG_CbW$`)o^Wz`T&r>-8dB?M!g$6J^+ zi!~Gr-5$Gxlv)Y>e}TOVRip(G3>>PJrhVHA>|PDEOLCH)Rdt{RHDu=Otl%JtURWqG zvvA%kV6q+%D9thla)p_0nOC>623QpfqaZn^Impb4$NZ%9gi~HOjW>4ZrP-IjWX=2? 
zFvy|tL-h3AbAs&(Y7Vj~cWIcdQ&|5g`y3l47c^k@2Ux@A7w}AG^#RpQaih85Yy!WP z;W6x%8S7NQ0|hVz=_~+mryyAZsKX$)PX(~K_OW{kq;q(oJoILrA837<^SsvMf^Q;G#_d-W9VTonMs*R?yH1-Q% zGHmn=`-cIFQdmAPY8yk7S-vF6nt}~4VjgSnckgctc!*`;t?*jk8U&?1VH@)`QXHfz;2^^HcMW4#wr4)YggRT zz#@iHsZ}=2%9}h3Rj_<*qzUZi7TgP9vKS~)9Xf+%?TOTQV6B2<)U<=fVT_&CkXV4C z_zHcx(q)x}pn|Yc_e)@MsX8+UCm4x!Ff09~nzhck-k>5ohJ1OL3#mbsc7uDbP@5DJ zI7)NBM6?J7S1Vcrt zC3m2e>xDiE+4aN%=mmyAtEE)d4j(b5SwZE+F&U7Woe$n)W_wQ^6X%_@Cc_lnvNBLH z0akVdurxiA2XL%{UHT~WdwmH^&gez9hEXojrzj(p&Qh=z%%y`eMY>0apjjFljqR2U zt6XQU&aPe}f)9%W>(18E!F*VN2Kelu@z-LkL;oWTyr#AAIKY(5MCaaj=uXU$Ko92E z$7Dc~j+hxHweO5A#ZWCflM>imNufa?jQLC>Lw#F{kX5gwk?mflNr$Opx{x zAYFj3e&{-RG-DF*(%e(Ay!H!VGBn-}wK&0s?K;w91|tWRIww@XWW{1(mOfaDqRf*L zTK%>5$|Iy#z+`1C#>H|H(D6MKW~@YXo<{<@D3;#yny!#syS4=IAZw5?S4&IuCwc`; zhAj|S;fG0Sm@;X=j>pu&aE$||KxNsSf+0EPPcAO!%m{e{nFG$r?pN4PCOh&x$n;)C zLuF+3vd$xu#MEi9h0GYub?v}VFaW`hJU=QW4=5h5;F~;@@6}DUF#>(CJkfnjeKA@v zG*i1&+sn+|A!_E$hPl_9f>j7q4lj}Ef~zCIQfE{Mol-5!)_QMak3RGY)fl>!8Z1Hw zGoaXn$y0PnXkhlncf39(H!w_wQP-d$&XST00D~rlUocL`@@_!0Zl#6Ma{^B%n1~MJ z748MR046ip1HFX2=`6Y^K@|<=VZwUTFo@Q9a%p9hpj&ld-tnJw58EAtX;m>_7E|$}UAX<)7KxD5Jj9FLl zY6Z0)!)(HU8Hrfp24LNLd5KIH;0T!PfSXYpxQnW9fT~o3ftpx`RWS6>L{2bvhs{RQ zm9;>hlu>I)uOBhOy$hyC<8JyeEqHMw2Rv0=;&F6Lps$yXiEo&%KA;`rt`R|B1Ny&4 zmd=Q`XH1ncD{p;+Ao2>vy@43UitTd)O2fFv z1i_~FP_`hBKp?Pg87`Q?$7|3p5i60m z{rZ@!G9@m+*BGp75VHnL&jbZ|RPYZLA*Q<&nu86KF`8#Am=|Sdy55(-la}ucHOdkCvyEG^zCcYbpBMlVXPUq*uioMTB8CZxnE%KLMn`T zR;7*|vusDys~#V-TISF({m}$&T9oXnrA#7Mz)p@lRB*2mGg1TOtI)KIV26m5j0V$E3rr;6L*dRkSlq`EUILS)wnRz|$74*88=7JVmxdhd$XS77v7lqI z392;ro^61t1;t{(N*FU|%Zp>O7#`Zv+YQ#b#T3JwZ%Lp^GR1vL849LIu1w>fiA)Y^ zG-Q`%6>IeIB``SwZ!&#oLS`I6ouw5mH!4~K*MV-#Nu!lX@LJz9P#W+I;L5%=$l}g36K{TjGroC94cQYoKLlsJ<11P*M#9 zjexabiO)4#8{*c~-$17#N6}M6=!#{*BzAG;%Ws%a$|Fsz@;Z za!&On^kkt6GVW{;vL0rvG6J)&(2F%#!D8%8U4za>%(h~l0NlbzH{Fc1Y4QE^n5>eA zEsl9qu+L%3VMIAG1XDnC80nb@TQdjgu$T)?Y-Y%9RU?)@eHvdJlc5{;kpaYxnX?!( z5z%AsBmkU3U`_Ux#Z(5Eo6fW9kYX*`!W*O7J1+<1vNKD@;VSxaEFw#Ht-K7ay|6*pHw#nDG4AlK?(6ji*u 
z1SnnLP%!03F?s&AeQHjhF8T;p`u3@uKGi=vp2Jk&*}lN`gQi)Nh|f{=4jYQ6X`SX% zE&zGVss}f4U08;-c3aw*8JvE4Jclu?bqF0Cm#K1=%;Mz@+y~Y&#Y=@PF$^o5tIT?; zi6sra_q|{!>Lu`8`#`@i6RcMrT$y3ls|F0mc(q_2tzbTT>!1>l&olc!y2#V<%1x3F z;`#h*`_!L4UG#B8iS1K&`qcjPvS6^1iDS;eC@xj$#nvI{w=Pg;Qt4o2Sl_V3z;#T7 z#JaVBPN^`#!)u%^CR0^sdco2l01WKuA^=4|6yuOOH@I8v(Sg_)BCv-0LFf*azr~t) z1zDh3`*}pLwhfGSh0xQ4W_t2hgJnKEb2n=St$P<3j$?bqGr43R$47neeo#4;h=8q>u9@r&dd zf~meRfHnf(tblRFLCXIR$O2|F3fs7UbRFQW;GZ8~2X4_%d024D3CnNrw_zNQ;PJav zGe4SP3G-R~m7%H3>!L<;C9+f-bZuzUGCv*XPx&-x%Xgp+P*bC!Fz&vh@br7V9p+fT zc)G=62-HHp>uVp2<#vpnqq1{U?C~6vBWkVD!KPuJEqu)$)a&>F0juBZ)5c;fiUrRW ztrBrh)6-}^3vHYf0?#?l`SH*H_NU+e@z3A?p8vJ~_UHfl@BjYOZ|XlyW%_tbw`WdV zJZYE-{EM~GLfZ|XH%1>Zf{N8N$`Do;vB~*!!*Xr-(!#@he)6j`OP`)3*Cc6#J~v4$ z_{|Rbh(llV%CM_+?V6JajYSr5@h~U14)bKJ4zNvhbw0#AsQY?4=1WV^b*rg{#=^;AN2wpwY9K5hvc=l`<~u&wB-G*C5V8nC96+;2SMI?ht0g(yX!k4i=Om8 z*!`{?=Xdk$E&wfeq15U9P>pVP=J}RjK8hoHim)3MA{8?V0bCPUe@R`&qI<)bbGzPU!jrq4PZw$m6r24n6ZL*q z2$#_bU3d1o#wRHWg-Nc|R1=x^zaTUfv?fK*>I4}}EG8uqjqw|$75^#dac&|*wv$$N z@iuW(kM-WUGN1=?+f#hU3>8g>8nIL&j-SeaW5Be`Pv;8Q`lEQX#di?%qp;H6Y~8Q7O~q;Z zm)8Gl1uH{gVIc3QhOqBJ$@bYa%2 zg=ACjxTXibQ*pz!ThHjJrw)ogpZ||@6zm@lRZ13*&{cz6vQ`hrjyL18=-1!SiB$}1 zRu3%MslqjysT<7N%3+Ofjp!5o(RchEIKFLcvv;#!<)G8tEg;WvH5DSvwFoGrKXtEn|D9gCa`(_Lr)|4@4{K!5{z zD!s5KtjC;S5ip+%RbXxpq=VxoBG~7M@IS?I(d_EHVhZfAVLL1YmfC=x=5*O{X7(1e zntwT(5$i8{O*w!%R0PFDo*tY4tx51*1H>IvONzcC1yD+W{ zP7?o}GXbg|y`(wwpmDSV8JJ1S)115D%p(SE$PtZ|Wf&<}!^df*+jCkuHOI-KVRq8Z z(Sm0Qv5jV=qsI5@p=W}Hw(;5H&E8JU#5uF~Jq*uU>J1jO6kQV9YkDy8H`9<#)epwi z$QcSajU=13Pu%Xau&SBQgIhP;!{m)VR{ZR`T-T0FX#}~i=2_XtC8>>gJk6xU;Kj*} zM4oM)WTJ{$zh$+Lu`YAjFa4z-GI8|-_ZtRcE*Z4sqdVl#lu~NL_F3AC<5}zP{E27U z^T2=nJjMOnw7Q^=Z1kTu~-ar{0&^6)&#!58t{M|Q<*LvnOIdEj6kHxmFrp)cXWIWYQyQ^9g#$JGgsC5zt}0O^t~%XNvp)HNK1@X4ggDmwZ4H zN;EZPnq3+`yceIH|z+2ub1Sv@?J{NI#gv!9s1`R^>9e+YNdXJqz!`c2^4(HyVm)|7kzdrs?Y zt-SAFZC{8yd2L^RkRs+Y6!|9hoGJ1{Nc>gE^Q|3|)2hk+Gn%sh6elsFY=nL7V6k$= zP#+9tYRqT4ab;cB)c*y#*?Yv6f+cKo`v_ZCgwlKgQeSjkzCAp{?a>M!%)=wWZ%E^IQH23m3<+ZUGD65i7iY1o1 
ztdQ9^R>s+NPNUA22+@J|$mcDP^jV3u@VSmpXv(zm-jvR5%4ho#op53C+tQi)h-y-U z`%ZK3qN%#={cbx{b5ppJwiR^?I`; zj~%k8w;i39{2#K%%hXbjd%RtZ%N}p*_5*voeCkK{cuD^c?D5_x^ffz^yX;`E;yMYV z-oFl?d?Xhs7|LS^+S7%ThCD>cvOWcl2%F`u-UG*h8Fm*G8V?_SC~}&D`mnNVV;Ars>EvDZ9-ldrQtW zusoo~uUXUUT>04X*+%Csqk-l4jah^Y2-I;{m)a>tf%hZ>ILkUt#8dBXm#m;V-*LasB1iUyG;nW{;h*ooK+f&!0ez!Q5tR zy$xakxoF5E+e};h~^1vSU>01;p9us5w2v;+x`pJL<3?8ScSdOdTM zeVp^WU-fx~{p1O|UgwbX2)LmMeV9InVQa@q3q74~ekNDhzB#SYNpy3`)36{fPItVL zr)0$EvtM@Zy79CgJvJ_xJIqtZ87H;PAhuT8hWe4AVDWvZz` zah&r`gT7tvnG_@+Pr3YwxsQ5TVE#A>;#+8w3Uz+ATenZ6o2zNp1sE1E-SBCDs9m2u zuQAxO&{eU<*Z%4(bvt(=ranCNTi&(f2HsQC^OKJUjyZpyG>a^+yUnFZ^=>Xrc06S} zIK)rxfkiS$k}lliHtva&d#qdB8F%jW+sNrxcUe28h0EIUDM5#%m#e|=LVa@5*WTV+ z)v+$_yeyuqHLJqisH|3(wKg?bTzM*vM{IM@tzM-G`V%5`{a;vVf z_Klj3xf-Dps`sS$Z~>c|d~yy4fivwkmqiQ@)I3qU2PSfid0&Pk9Wi-P*L|90v0=YG!U= za#4oNJCnYnCn~qVI>0VhFRLfJGjom22rRk^Dl}V7YN?rAYH2`U*aC{!4$s!ZhaHo(z))3qQ_Un9Or#Yxh`U5k^Z$735~3VDG=6C-uA>^a2TSEcHCB zSk)Ot(yGZbxoHEqGVfs1p;auo_;%#Y3i+WQD)85)G7sRq_-};#Fy9c1gZ3iT z4xUYV7L|u)ESwd)f87XAc9<}v)qRB^hME&M6& zp+zZ~z|)Eq{^ahiuE3uySf>Q6*(_UFPThiaYEHe5X4lnr*+RLryFz@ncTEGi_PT{3 znTH@E5oXh0Ay2Bv9zOc!@cl`yOy4*B~MuK&&vbatWN8 z+t$oCgzxnh%8_Mv7t+1%TH5U${XP`!ct;<%ceo>1WgOu>Y8{-(?XJ9s`ha0iAu#r) zg)L{++i~5Sflh9Izl91x&)DCs$L-w@0Zd2Wi<`kl?iMPQx4GTTEK?Bs z6Lqv7Z%{{v{;Y3SuFuCepHH{vOIS~R+J)^-cHQ*cTgcVyMYH#9&A!&_UHukHRN_G^ zcLWIyzlBhUzqy(1^X+)pyTTk=Iys@;ju*LAvF?akc1`>-Oyzo~xgFq<(>8ZEI;X^N zddFoG_0rcS=JMvWu9o)XYF}F}Jj(U%eB7Bx6?>*JGu1KfFhCX>of(hMKnBOd$jGq2 zCegrnun{s8(mPQb1>-Vt?EGT5)I4-u!H*w2R>+rQ1(=KhfNPU&Zn-qq=?d@Z$`Gb) zzrqJZ0m+nO2|v1S@Nm~yb3Rr81Xh0L2J2737_atI_V`D_38ovw1q-+dw>GgBEqwaK zA(4BZ7Gb&W@;E3>r!XHFM)Agx#gjuKu-@+86YM-D;m^O?NF4#jAgD3 zrupGLJRB>B4A@)S4xP6RKyEaj+P`-i!Wo2)&Erb74}a zui3&p+65Qn%t$cH^50%WbTz4VEaL=&QkZ^&iD8RLl?HhM)boO>Sin8as?#ZVY@+{|gaR;h-=0#qp7ULD!AF(X1alemf^*N%Y z%`NgL!ZP3LcVL-kf#Wa4vH)}If@SM~*-zh*lWrF1YITx7=_G$XD|OzOqeJS1pG@j7 
z7tiNsD%Ae9Gd1MdOigR%Uj8t2k@CkZ$C4KAxH%;`j3PH|66u?54Ox!Z_xAImxw>9;MA5d=)-3zu%Ie3>*2{{&<5=IZ3Q}NL$4sT5ZCMuWUAlxx5xfN){4u@39`Y>p@$=~oe(l(tQFaTFqNmA$ zx!;=6EGvN#exrjK@2-bi@00;)$nFfj=?m)o{Bkn-AZ_e7?8F%gihuq+PWTXQ)ZWi0 z`O0=Tx}3^%ILA+pdpRv1ds)i`dAW`s4(&I0_w|O#T|&Lo2FANy*$= z40FeXMgI6c%$M$4(h~0VxMmN1$)EJ466jcUFHu?Qr^IllGu_QHpEhoev*=y7Ja$(O z2S2^;l;wh7IF-5^zvJglf#vSvcE{XOXMgWB*J78%OPl5iec?`Wd{~op_G5j#??LSA zWP3S*oqMq9D?ikO5>^jN_%@ZEScabIB!Mvw{dU``(w9}$pkG|)TP=j%q*&EU1B)jf zsl{z8xhlWY{w=PUt<;&OEM6#AB{O&Em&_dZ4Y$!9w5x9MB8&EXgu|M$_wBAxA8Vk7 zwrKIz>^|Bl6V+=`t0$>kb{s$6>!Ww~G^{nQAdmXPy!W&>WAPt;euV#?=!Tf}$$@^( za%p?sL-OQUp8l>Y z?fV3K61L@}DESC&Ib4`ecWthWOu@qdPS|pBxKR>|8TDLD?YWvE0nc<`Xk*biOu8F` zrN~W=OmiQt^%xS%Ts><;RvWReHZ&dfVNcaow6Nv_t0-qbhKn|bUO}d8Z7r!)AKOqq zWE%pSfan_#v`^bGY7dNCZNy=XG1{5XL4qNjKB=*>?3~OZsG)l1bX5j+Ge#2W)>ed2 z-o(VGZLE|4ondD)fHsWOqG7mbD2nxS1b6ZbKQl9>8$23<1Yr6;V3VskGgX#j+Ti3Q zKW)RU&?8F6IUmq~!2>;FSrs!}P2Om%KL?`0unyU$R9&nIJdm4htPF+57&&H}bLpnCMp0@{+lZ@;lDBaHn7iZD z1FINe3f$-wOzFG@JVAr-MY;r=trkrl1Cw==QyQiMt)8}#wotHbeJF0{ncy2>YBc6d zE=-deQ%@E)nD5s0kZYyefJCgHPH#OlZAWdvhNo@h70{#i^#Qg~y5%6iS1`Ol(FmD7 z6wD3p(?~8R;qFnO$GB+>biv45#>?~CTX<+=3mS6!`WO*7D?&#vFm42JL*ZaOax@Ca z&A$7N74Mm#Hx^fVsUDKAGLba1Y&>hDt~TPfJ_KAES=+uQKopeR1Z5gsFFDpxnb^@w zQLJcM!6!L3rkl&k2F;=m?)my?tBta)4~?Z@wT<9J3r&tmqa7^rjO8)1)`o5a6`Yi3 z0!D@!y9hJ`z?M-)dya{Hwb8frfm@m}z^aj=AISrPFx1|yZIR_0H6wtX1Mg;*ma+uy z*liiMmmXJ$@aU3nRmHtT>=xX9=%794v-RXG_?| zm_A6_^mNX(T5Z^UedJIB^dw(9I;#pZ(t}A`kTN{)0fXU6fC;KBkb~n!5F@S8a_n;QAM{$s@gWo4XJ7a9aCyz9T*VL0kK?b-kM>2NpFlpRvUR+ z9}GBV5(C{+wvbUaF#x*`JjfzgV1_;bVU4LP5?GuEi{R=8T?sE#<7pdawb2jxVKBoU za5|unGe_uvpt2@{aL_S0xnZb=mqcHB&GNQ2Kt8C3#aIglxZE$Ifi#0& z0)5^$fPC)qd_I=^FlLt@g4QCF`ZpHngnU&p0E!au6|;1Dgh8>!qHo+!h6-VXj#CVh zg?aL{jU_)!IOK=u(MwRQnurfW0b-j*DO8x~0)o*dm*>Gz76>zG7Az0v>X_<~@Pup1XW0v`tZ9o?T zNEFOKOHg!-wG>7fo#nY|0tn=gVPNL2p#NanG#CwR(3!x#KgX*jKdgzn&HxFGS+YRb z)NAWJUJsCV-C~&xOgf;QCD-zxqUSUeQ&1X^0?mIs*BNBV59<&83WNrQBBWjDRx-2_ zT6qncyK@?A02(?lTmWUj9q4fpQ(>X=0_So|=Mc*8N<8><&mPmK>4(CyT1Qe5CO5Jx 
zFxam=;@{k9d^}&kjlcZ)!h;>V<{WZ)25mcb_qMCD?SRTDB`@xoVXkYR>T)Li_MWW< zL#_AI8~1zmY_V+m$MKV<@=esg{YJ@up^=#J}8?)4qNpy@Gth?+hUC<{8sURJP6)=J^ZLP0sVCGd(T|-kzN?e$~i1 z&K5VHieq2!@UzgT^%M*}tnGP!k;QiruBI={vAQn1;`An2=O%KZ5KrR6$4~Ze=lc7}{qB9YFoG$_9@chP zd&T_u_S!ysKf|o?GZFL6k5%^5e$dF==4yJ@ms1i3JloxV%KtGx1I;4UU+8DRE?)f% z=5Aqx8uJvdPn3#w;WK`Ih?lZHc^hL`W^&MD3;x4iO6DV8N|S2Lkh46(!)1NNQkPFB zf6wLKYigb~Jht&&YjLJ*R#Uoc?RE3iOY1~oW}jR?Kd_=Eja5G|_d3!foA>JUGy9>F ziljc{U-QjXXisUxB71#Dg~ZB7>*xC`^24hxW%bEbmze(K3s+rQJJhFDH(TlL;KsLj zBCCZ@S-pu*XP%fnd_0%L>Yk@UE2RVzAs${n4~Z1Axup7VlaHB#VR?0>8OboU9luhk zT+TRd2_Le{?gu2yV{Oza1lvq13u!+@? zlJN4$vMf02rdMj8oeE8sM$z1!WLO{P#@q9A9_%(c?)`-I^UOU=rll8ZT+g$&r<`o9 z$eGr+HZ}w|tMr~R{Ed>@cD$G4Dg4Q^_VvCrcN94@D&I+$t;JYYi?N*gc^ebok++lZ znm5b-jTvo4uXh!Fdl%-)+$SKsco(kI_UZdh??j&LgLCq2E$qqN*z0M{w7&Fb{Jhe3 zq2n!5qWYXvhB>1B_(l--hZ72XHpPp-g=L#Pht)YO+Xv&ivQ@8AxLf5#biMbi z)>Yb~^J@3Tm{83Y!oA%)K%Q=5*+RJO-PP%~Xt!#Y1FCd3Ev~D6zZyeRZL9N`<0aG9 zrQI1cI=0a9o<}V$tswENw-D`KKq(%p%&SK3-^DX%R_9&4EnEG`(&iD+oN5df&o9ySdFWwvhHx0#Ar!J*Bs+%`ypjTy-y#d#KVGldvXaT(h$W zuP0$xTK4PeR`Kq3b+$`4hi`u%Fc(hX&6A*%a#nG_jP?PhPq^{wXU(0xc;6c@Get*gz0Ew_2tSIKd8 zRy}HEPdG4|l(u->kNv8XYQ3y^pclejZ;p27Q)!!j2Pr;y!Z@$yEfIo~1{GyXkHFo^ z`*&rCs$$pb6|-@|adkeW)PA)!@AT@IKkltLq!<=Oy?dxIyRh-4$pPlVSZm9r z?J5|*Mk#q+?GN+7X!J6AUcKEp0mv0=%WL(@j#kIj`7W*ZnzAm{(HHI9J_h46Ztshi zoQrR^J1YV1vu7a8L~r$Y!L!>{jG{1E>AX7h{ky;?Fa!26jICeC)!9PW>(lm9a^5Q7 zn!P$Oq<+L?FRn4=ox9tx{v}5>FiUi;a#~^_kaXI-bgrM8KpY zOgL|~Qul-lnC-RcV6r|{tvx4!xV2>;lU#6h%^uFsiqjTLJXfYYuHPq}SUupKw^pKl z98sE|-0IbuoOXvskaH?y>imvu9^a9hXYONr?%7?^iif_`@K#@%zx8G8$qrc0;_`gI ze_qveqZcOn(+bmcB7LCq(V~^dBqBD??{Xv&F-2_$aXem;-E2SVI$bi=^BBWUbHQxC znVD9AzcILZ{to5ToMxF@Nx(UV-7O<52@g*9G2l&h8e@>wj}(Y{zIPgHer+*oX8oM5 z1odr-ox*p>WAQL*InOaBciIA+F3|I+ngQ@mW*%0DH@LYO;kGJ&VFipb80ePab@gq} zL%ISQ&wbqM@7C?}wcWTK2hI8hPkGIb_LOJodsiQ3j*jbcdEV*A=bcRVq03rSA6kXZ z+BZ1gIdkic*^|69d*wst@1IvSpOE6j4d38s7t>okN%6@V?A=_S&yUZ$GU5AVUnemBV<=-KOAqZs9J6k4s&g5!Cwo;*Lf!|U7g9K%}I 
z@7ryF8-e%UP>+|i&^PCIZdBuPKfd6yMQ+M=_yr*ScXFaI5H$D%kBBCA~m1Cr%!$3c?|Y$r2SrhxpMnE672i9|7bbtNBwM#F-m*XoS+N0 z>+|OR`Mu|HucmXBb9Xx4y%hYmjETYDw`a`KIy z+st`{p?(d3-Z#Y`1C_Y>caekVuqU&^qt7*wWvhhzbf%LpZ;dJBC$kmpl`~Vke_qx8 zp51YKGC$f!Yc^5|&t{`taVOO$_Em0QkgDgdpW_iW(rUdvpYET(YM0M6 zgq$la;e1Mmh*{jyN`5aQ$6crAk=gF@^^p2RE+OzYXHu@Tk0d^qN6h1RPoKGWg(|B% zcZ4aGkGsBa7vY*BYdct}X56F4rX9u&c?g9FKRTWyU5a%5a z0k!?Mi?DVLp6z_d-1%_Z?t*!?yYR6ckh%M?KR&4lZDI~i$!TJPKsz16mbFY5Z@pgVy5skCXJo9Zw zIl1(+zO{GrzQDR)__;m#QHK6V!p$G2+EcB#oADK;8y?@Cp3c2qyYRhy#!35+n>W4Q zLuc+ic?_e-zIWcpk!{}v>XnHP9 zXSjMsjL#-4<=6h`&xe<{x|Y;YHpH|WVlF&9qa{(y-q3Qx)3*`RzPBIy3BWZ-rup3dUv(?T=0drucRGF4X4dmlH#tlek6fRgu8?z=(cN|g z>xF|ZI>4-L`$KYh$_RUeps4Hq-_13WvBcU>fq7pnt8SS0 z$4Dr8+5gW=s2RxmGtAZ>D52;+Nmsx_LUH$bvy7C!NT8bxQpU_@j2gQnFc15BU$JV| zEYwdV&NG+9dCtvbI$U(hhcLr3XW~2|-B*S9V^%t-dOthoS$J!vKDd_No(1;>yuI^r zo$jh`vt}8jesPRH9OQ15&OLLiE$3^yl8LL8Y-TwM+O&55I7x4unrQBHe#83I#-=+- zkUw7@;rzAJ^_*%n$R~&wo;w{a@^{yk-2;K5JTncKYRIM|e8qhMN zb7Il(jY!saZJ~AUyLQ}fD@m`DgOy|w-1C_vss2KT`*F>&1V$LykCcpf5}tpPk}j87 zriAxl6&QC}1ys9DOiX3XSeB`WYbNVo!QbF^e$6r``1J6cnyIeS9NA3+Pw{dNvQ9|W z?<Nm31rhezY5|q%mr9rhV_2?>JLZCBX3+M>5j)V{~quiDlTNIq5&41QkEDCT*J>_tK*&1bN$;s|hOGdnh| z5d3n|j`*F?@r!@gnwpyF51vrccH2)Wdftml9XP?zd-u&9iSJv>=beZc@Ac2;I}xAo zB6_yBwtw{S!5d%yzrOm1>Z`f!{tb!2=_b{qoSa%Gy?XP9S?7Q(99LX8ICkB2q%@r~ zWp#pq1M$es}2n`HUD_6tNq%hO{mTP!ktrf?m_Hh3u0$( zz#R?d4B9;f#_8^?w;*r*^nBCZ0x;9^RhfZ|x3kR$n(>TyLygRslod3%@3kD|uGI8V zXHazGJw};wgurjt_UHE0u-Y#Fg3s9ha!uL91Wak}!O$j8Azy5NYRG+-6`MOOhn9HL ze=mDxQx|@Gqw^msNA!+vJW=UKY~dYyHo8q5*=RdY?!Tr5wXG4(|AtjFB$B7plD8yE z{JY`r63qU$Es$!*KYliD>q{mI%L0J!KdU(7#q|W_hJtn)PCBfdj)ihq7&4xfEJ z_1(A@DfpCc#@?S9^!|o#Mm(>cGVYTooYPCpQ~$R-H>e$c)h^5`7UgyjczJ%*$F~5x1ONVzeE`JBo>x#fU8oHnr<@=aW`g zMCFM^wEOyOrCk3we)%qso~>6muK7mVk*58ZrX4wCu2n(rGHaKkx?T5wuJV6McY0@i zqOh7=ZT}GhaQTew%$^&Vu4{g>Qg=;s=l<>7N57@(yTf@ulgS73t=*32U+

Y zt2a3ZPY8lo2|GZOQtWEjJ$v8Lm6Z!aLYn78dV5#NiEv^;M`$~S?UP>Lj0*}|`SFI9@5+1UQd49)!tXQ19?S*TxaCn8^l=Q_JuT@CoeW(Hr% zo4>mfKo3{MYb%{c-;?)}U6ZXerXy!-iJzTkREkm&0WC`O&%CfN?7Cfq`D_`$kU8_K zPdtfDlB2wG8@#@o3L;s~u(*{R>Mex+s2v(J2mnT#335}4z5BkdVlLs^yH138SLQ;n z29TRrQL}j&!j53B_|!|5irHSL0T)H@Nl>+?GpDG1LKLCk-2qRiv~e`aOgwp|;2rVs z80)s#%^7N#fq$#4%HW=3=SAIW0*1VsR#YEFH~h*$kd&o~-X`W`hBFvGw= zisHM3^`@DE$MId0ooxEf(YcUnkhhY5HQ&*XkF5NCXo;zh=q3+~;KuJbrVfH@oO;Jp zz}_SzEUBwPp)AE}t#znHAg+C-;-G4S(^fnQ094sY-;pdOS=K zfBw6vq}xZJyQptC+Km3N$2V|vL(&V+6NoFB32}IJ7^$2eqo9OSBz=JYP#4r7G1uN&mYmKxrUKMish%=`MYYptIsP{>T8SU zWP52jVZt@(#zp440O-H76Q6_(xgq8F^4gAk8F}b9FCIi2+HM`oW=>|(2eTo2ha3h-R(68;#m=dIQKSPI0Kb04_`*~V zX9-4-D|W{W!t;4h4T$Y#5tA3GhA@6Hk&3IX*5m4gYa(M+cCZS4AUIyPns)wgIS_); zXhZr{dTOg?pavU3XnJt34fXH1AhfUE%~Cv#R~1sMVL~FVFnGk|&E%W8wWFOUjE-$n zGQYASnH?h!K#&lR0{Rt^s1jmsW-|@rSUj1h6kwi;M~P8Lr+d?W{xoI1`c?&LN1GB+ z_lU{0p_kvD7aN9l`RME%40`^Q2*;o`LTd~#tU(q8G3onNqs1iFld$)E<-iy zt6s%KV?7=NRoGuWauVV$?W9s9FWe50B?snO61}=T;<=+Z{hm%hW^}-$TO@jS~u^GGJgs zax=&qP)amMm(IVhEiBZir{|d#cvvSMV+p{1ruYWQ<9(u5*jH?2;rV8V)`DL#%!xU7 z5Jrusp>}abCO4Czo7Nj7GyA$U$w#+4cKH?WzU9-(?o`<+j3}0z*nDd@s?!xQ-&inD zm8Gd|K3;-tg<)W#`6ZOY$lN~p6`Qm+91HG;(7=!96~3mIbmT9?u&6gN;sqvKR)sD; z)m1R9#@=9Zd7C*Y@N(>ENH^rKMoU6t9v1*66mvLO538izAM7Mvz#lQ>Y_CtW1!25B z6+;#Yz{*PzAUNzvqd0Fw>FOOEb-vB8hfiHjhn@ga|Ju~TuOTS|!S#gT|@;z}p0eD(PJ zmQ>-gI{YM+g~>6JeeHcl+vl52u&7>C6kJNGrorFj@q3oRX5N}^XS*EE+7{zeH_LvJ zG{C+nG2-|W+}<@`ku`K3?c(%1AQML{m$qsTJ+O}OyAdsk=zmsoD82lHRb^$fjAk&YfTJN6JbE(++HnjmE<$+N{=t*YP$H!{>jmsKoz091%tgLlXl%u0XV4B|< z62$#y&io|_iIpp5t5QK~k8Lo1f`h@p^BG7+uU)5!?skJkb0Hh&7QiD)(tc`XzrEc} zS3h&t2TW3bEC2j!`!ZX7}I;vhesnJ`k`Fm|w?_Ws1#N+8PBc2x-vQr*Y}*TLE~G%L1y@84>mE`ZDCa!UPj) z>6sZGm&C;B5e19Xp7AfAZFp7x$D!ecd-osY?b>21eJ5SeyRTG2k|k^R!q{VqI)E)ElLtHXZkUx3ivR)doJxf zw@6Grg8*j`T!CIRDOB`QK8iKa#|+iTi*)a3GT*u#Tsf zX1PY;BX{F>dsH`Xl}#_O&NXL$%-UWT|Iul7@Kwbar4HIfv%yHx@-0sRVjw-CDJ<%{ z8F(>vtA+q=b~*`1%ROMjp?BYiZPbG`)t?dANdxF2qiQ|Cqukp+)xn%;K9&DU_-ndA 
z7vUH8WUnVMdCfSs*hAr!mXBN?xPVj=(&jFf3?FiBzVqFTmBpVE(@%94Q}2d@PIlgu0_1{JcG+;{ES)pb9gePNM>^|at`%U`SI)JM;KVQoIUw7Jt-`|$MTK}GIUH69tDp`>_(bE}HeaNBGgU)C?DzQW>L?E?l|5pFB z34D&o|J*;`x|Xk}41K<@{zBBsfj@Pl1Fh`)Umw!*FUs>jhlW41Kksf60^iE^p7NhZ zKc9sI9xtChmKVuCo^Fwa-_EbU0C9QX+fl0UyVu*e@Y@aX^Qm#)JNE0Z2&aDJ&llm3 zd(Y3mm-7$BpGi(I{jY7G$HW0VZ+G`e!a0E-53f(%PQvdnY+tVef1Q3kcA!xI%k;eO ztFhOevhc@+U;OcN!h9%@-6WL_4%EXs3iV0ECL7UjP0`jy&B`I$W$Rb>UE*h;>kIX3 zVA|(!exOCw=iRaJ+rUKK`yXtn8|+WekO%DWmcW;Yej$S&PoDMfCy&R%d4W%p0dLZu zr=Pck3eO3FogW7Y0Uy6U+lT`mjQ8#X-+v{@JZ=R(JAK}NK3#uuvSt|@S;7V9$G=<5 zBQK${+RI4aChrgJ?kj82rnzehc8XbFny0^I7r(ClY&_Td?A=Cc za*p2oJ{SC~eTC9wVtnfozeCInwqc;#1unb!k2QKH_*|{>)dj|JW!w4&Ha`|`IthLf z>8_UAJZ!Xz-{~_*z5>M*E{4|v;BL;jA!Jz(pSAf*mJhWgUV>(0sq&*v&D^*y=lIS{ z$*rjEPu&z_8R8{_`s$kCw^_IELsfI_5u0n^6QK_d%>vUq&FulFw9alMdK)9}R#DNcEA}YQfY=-)+$5 zM6|&|c*u6~pmBvcaYxKFJ2fvvYMm@q`#ZYlAi==gplNV|=&g(;+=8H@xn0)vqw*ApC;s+ie$UuJ-aw}MD-o*UJ|%D}+)R7kz{XaZU4DANyXF(ew-!<*mJ+p&%M zSe<7lQhzGrNBs!rYPV=BFi+4!+nOR5mn)3hGxmgCPpTlOop;e(Cgf1`GC&gaEp({E zax@hQMZ`5rU^Pim9Urz~rF^Cxc*UhM@V!|kSW(Na0l4H$>lsGO% zvviQO^;6S6R~c@R1trxM@@=uiOc=;eh#4I)etUdG1cH#=X5uke6ZJ^u1-&QJpO%++ zve?icFUCKmP7mEzJ*NMCC z52!m&I8q7-sYqUc@n9dL?Xn&-5#zbGq8SxHO}Mot>IYE`y_Tutv}Nunci61D zP(gd050y4LV4`M!RbMoBFI_328Lk##Cgj~}xq93b1!P9Sv_ihi#@)ztJ~fdUDSy!C zZ`^nI%u=Xl!tO4#JW<(U^8~UXvuz+$S$+7kc6Si>x|_RVC1ThH0*bjVEm|q4nPwu% zOQWCz5!6my7HGu~0}<_ImA=TyyDopmVJ`z^-}(0l;Iu1CieGgW*~D#;q_%qd)~=6YiOLOpyH z;W;D=xQ-fZKU~F&cd^9b`0sW^wNU|Z4i@B8qFD^#UcC#?fH`Q3_Uafz9YPw{s+ak) zs*=Sv0h7p$q;J?;p3RHzWMZM!A!7|!bSFMu(A7V+WIZ;4#dwuIyW9auO2@d8L>b*O zu2Mn=o${(NExl~PD;B7m`0Hvtbd_~r1iFrzN|)1*%IZ5hQdXzG6h|O0bqtQ4sc$Ss zkPeX^!5Lhq_;~JfB1;sKmZNQrHj-n8s>Z@~rEpy&J4z?b68IC`VUFqf;wd|eOM~f zbYn{P7g}L6VY3mVlx6@FM1%IGBc9d73&_Eg6V>C>mSD2{JBSB`i~yE+!hnT358~ z(>+&$##+SZmJRMrEl+p?Y8Uj^6y;CZNoOwXXM4?5&`5L!1yjPiPRWknxiUM7A`dwz zd}#u^QoIQ?;e3$EZgNRYYSby6xg_%B4vgail;wwwO_PDoCNv?pr(Qt}WG+&feBl-x zBPim>-TuF!@LZ|v=(FPrk>E;u_A&T1Q1K*N1?;#{L<+idOQZ8rpn8=Q3ooO?bq7eI 
zazzlA7t&~!I*}(FsDGTP3o53Z2wI^c>l^pxMYA}n$b%5_-!6pKRk=0*{4{#KYnHS6 z2dSuI7g~b)>T*&VbmQK^XnrtC!Ds&bDsz1zlas#D;h$AXk7&rjbQS*qCmoc6GJzJULa9Bia#l z^DPsICE!k}AU~ml+}bw!mgIjGJ}@ryK%u5O%fgZ7UmmL)o+Gi*abu5bvI8}m9c+sb zDzwb`Fr0luROd=OE2@0R)7V~a?Tw(^RFAv1yD>}5C`;%577*6PCJB})R0~!f5gH9qw^|h?HV)Lm#-=_p#z{{#P%SGP)8$u zb_P19_Pca7qT0a2W=gTtJZ7$Rw#BDfYm~`hjg@EU%7Gx{3%U61^XM zp~(HOlDt3vF=NryQN$Z4Lw%K{R)3AWf=z5&4FEs_sKPCR$)s6qm^^DNk;sd9;3jkb z>b6qPt#jDHelsfA+C-z1pF$>7p2L>e7(WssnFbnxku?FVg~y*6>MF2!h_8Apm{ZjT z8cKWn70y(Jc6H`~i9om4jgB-UQ{wqth|&ViBHDGs-$Ai(NrXYp)mugc&~kZR#7q{T z9{!+v=qi$lSu6SgCnc3XzTR)A_2md`mHfL1o^@yLru+AJr^ca8rm@Y37XEW7 zLFRMAVA)s7T!Q;5r?yBonPX@5KC9ifRB8j~{IFEhBS3e%OE6*~bL$cA$ zY`U~wsD)ge4)R~&=mdXr^nO%zjWM34KnR#sdR5JMB)hI@ySpmPSVlzq$!EFJf97ifXRO! zMQ)h%%33Z8Mj9h>!}I0jdYfmAZeEejxMngTp}@mab&WwxQK1OY+#5_OQo~77+H(z` z81cLTB9_b7x6i@b;<4Vq;2F2=b-2L^z`a7_|JD#1HzKuD77zyR@dgqwhn%Zrd!gH6|HBv_fU2FkIyyqxC&3{l`)F1cF#2v(BXV00@m-?i>nuZ%-Kv?M z2$AE5H|N-H-snPj9&X&bUZZ{cek;S6o2uXSNfrDqOI>1|zA`HZAlp~$ZKeWT$oCS#1 z_!Z@$TO-Hl%8|OrMY=Sn`;@3St~T4FBk@tgex*DeVZ680cf`OE_xgQTG%{9jxh zwy0GYbp)u}y6S|%(6KIrlgnM=z(tjj9|1j$eqs6(kShUDh?O2p;h>8#>#Fl(_$au| zY$(<)qKzR9oS!$Z%Ezn#MPWynn0|#4Wq~MixNp>mm9C4wI>p{Y`%1IyQp!QG9wHH~ zO}We*_FaXp+b0{b#DS4$?zhWi*>_oy-^(iG;l#0bB=VpacIpz-JZZN5Mr@O4OPm>(EjOX&O1bF*Q97+ z=tlNa0{ecnojPs^|GXeGNReML#sgfM)okGuoY_8^)M%d8@r&@sqOlBCJiTTRcKmGZ z`F<~xmE%BBgV`q$3lsFzn&~Uet&Ey*m)e%oYM>nq3C>N|WgE=e`<*$pyh3h~&geG2 z>%u)i=J5fn2OQY?!^c71tDu4$Z&B;8SEr7aXy9?nf{^>DG9V4T!{m8elW$Ry4jRAK zBh4_wlZrDU@63{|VMtPeS2wKsD@6!MPxlCF6uYx+b1%P%9Yl_VMntc-Q?Kq;yIuX- zMa$jt0Ff-=UOyxmy;pTx<2`_#_n{ywn{mE9u`LDA75*YikpkY@62j80S+K+*cl?3} zLaC1C>|?S|%i~se^;*=P+jas;eoq>C6PZ&w_7O>TP3b}1{r*b6qs)@`D)0n~sIa|P zI+d*4;?4Q@;-5DzrtaOH7iqET+Nq}}BQntY1G8t+4qitin`9kJG zvTqUBV)1q&yKw**S6AO46?-@yw;Ce}P{^h7IOFOYzW5}BW=HzrPV`cW(=opox-IRHgi3695(fl$;nyw znp!R-PrT54)`a-i7IUkh7GJ}MxU7mX{6p!wKf zjRelVibQKHWp*%;>465wH!`66-qu_46;>vO+T;%fV305~nu!PAZ5u1GccbKhNJ7O2 
z1xNjCc-ipNr~7++P948)!`_Pd_&_~Z)PUk=72??4^iYu~U9_SKk6I|ban$S=45{^wJFmUFM8$66vodmB z)F(M`_3qXb_EXE&@a@qinL7p&NmQ3jBQj#Zj&b@YUh{FBc8s)%CCLf_u0r(%%=cma zOzSnpNKSCvixEG+#+EdcvyUpWr`=jsayTu=TF!R(rA)z4(VhnK!9T6b*>n@=eUe9+ z(&Dk875X}bX5@Y%p*dxYh7FxXm0iM<*P4jqTtLI67sBA@VM$&1QFdez(jmGNr}uolcWPxV+$>pU^|ta zs>6CVD?ZOK&C9M0RlL93mchrzc|@L*=K@5J=t_)BLdxCH+sJcNSRedk)dvz$fWH`I zbt`p08dKv95-i$a`PCY;ja|AU;)8!dDnH*#5vOdf0o@P*y<69)vZu9^^BSw~s_gYO z%{;9teaDzz_gfZ?*e%n^r9}T`Z^S+668{_^fc8xR4L6|v)Z09)LH`8z`r9nYXB@&6w$>9dt0nb)zYuV$u&WAO)KNm5vHTkS!nQfl@M zQ`b3ieT(LpzO>7zepO-yjBT^Hs)h?mzcPW6(RkmbEc< z^*K5WO7V6ke%3HJ5H|<}frUseRmjh07DerIJ6h5f7?} zmQaf>tyR0VZ8rpkzNq}9e=j+_qlHGels?T?Z6rM zjrr8~ASg1_JE0Aic6MdXL!lYKZ%3JL)$e!oj*pSiYAX2^xKn5cRbAl?IWF9O^4Fa! zW9lPWr9{#4-x7>&)Qm$vh@Cx9wfF=}i?#Z(!kp&6?yC*eS+IA887761S-5yOa?r%3 zt<0YG7*0x_;1_TRnK3 z@o@|0MoV7TajnwtC%A`IpA5Sl!n3Ln4?w5e%KhXjwXQzWRIeqo^6_0rg38V)V1t@A zJIYmzqg(0ofB{=OhPi^|;nRie=IM}-uHsTw&VCsE{wk6MAvSt>-;dUqdk80%C=2P5(QlpVGkIQEE&nRe z7ciKd-c_yZKm22Ud=)+0ROV{6ZM%5%=R&z*cAUA%jz%{+6QLP6)vo+>wXugWgj*Fc zd!;>VUnQhDhE`LRfmAm6PNzYw3H%hM?*zTKxrjtsXRJYwjHx)l1>oZ)2}ztC;;;Px zNJ&7K{sClh!{~7-LB@W%M!p&nS33uk5P;cPDQTlYU)pE#6YoA z+Y&9Ik;rJaLY@npYLLKkwEj^Tn)}Lrf&|oOBU1q1B;HMr4R-W+nFvXLo4haUit@&X zm}-pEmjgy$l8ll(QP0+=BNVGi?*TI_BG|8uz0ujO?()U%@W9j70ar5PQ! 
zrw(Et&bD{0NR71BbNcM0z$Y=X`bgmjy?2l*1hR|oEiy$aMHZW2ehJiI91_mHbyioQ zGFsSMlU+T`Z|t~$S4;SpfbLnvYr#dII)#PCBimZ)BnALahi1>WYCKvD6hu{ehq%@< zec%AGbCR25?;aK(dNty5@{jIIs=1l1x9{fY+VD=&&~P^5_AR zF(gpE*=;)8Q?gvYB~ea|(#W<~(j0`nZjmHNBVS>uy*#=y98Hq|bAD@zHV$ypT{0BR zGgRF(m2AXx-RpqJuQULq=%&07s^UEpbiopTYyLnTCS~Hbhn;NY#{^NVV z+^d6Y72-v_$sE%DG@7LHUX6TT2dN4%-Uiv9*&g^S;Xku{qFi5s~ai*J4pI=SL8K{vPN#i@44^^KsI%<%Q?QH?Li@$(d6fjr`J z&&1y!gV2Fp>5{PcldlgEl9Fh6?rh!y$2}A3D{w(qYR^F93@fXZ-6%og%|`1m&r{Kx zH*D!}k%p!Rj9Sodr9@zzy`af+bUZbRGsK8Vcp>5vk~UB;=T`R(?y$J^ElZpgN3gj^ zTtFhiUHkW_pp(Z(pMKBONo90nTT(&1E$$;nCLS5dJO63GdE@|oZ^f7rUwehiwy&Mn zR~{3Kdhq4x7zhIR$v|S!@2AvvF|9@rfw1g3Ytdtq?v7vkJ#y(n|J07FU8U8;PH8@n z3|ZLIH1n|43TpJIc~XJm3x?#F}kL5rTYl_a6#m-kU4o?0&e%W@4fQ%5j?=oH5+Y*nmM@jcrDwD=D~?Rk~DE zwCil+evx1m^KnA`R#vdL5S&e@SweZ*fO6|;t1rV+NV-;Z)wqIUSlX|v_EN_qDWcYa61jv zgTTU0aVDNiOW!*DicQt_+zURYQNg#?RAREK+w2^bv^B8p#-8A^ z1sb`_2(lcj$k&zaMvFqC-4^%`sg$yCT!G|#ubgD>!M<>mU(lkOD72@$Y|sWNcv%e! z46iM5BQ|ZVGkKKKqC;VB`25(;Hhg7uqZv7d<80Q*j_C>?MAG7qqz-DYSYph}nG1RP$IMuP$27--`XQy6OL1v_&qU!P*>P{Y zcTU_L(v#S*O5MJ49{)>Sesjc?xxQX+8d;sutijHT<5FE|p5vpf{9sP&gqdNzW(deP zuh$6@uiWJKDQ3cqL$FuBsJnHey;=W8H@%ylm5_m>UHoI;Of{HJG;O?4$m+hYC_bTl zg#suSwCctI_XiX-6q2xxZ1mi7n&jFLR5ClFUTGmLyeK8GS}`Etjnv%7Da^T)~Q^Yul- ziEs3jK2RV%X5(ZQwjr)c)t1L+aB1etfmRm=R;Gp5avCd8_6+5Xucl(B`F<-bOoggL zhQ^=sGdClYHxqp4Q5*Wp+4#vWu@mn^_~R|^;qCb}U3<&P=>3OsYU2~FV5 zb_g_Ec5StJm0S+k+iK!o+kgM`uV98*3+bTKsV`y+P^JR=IW4_0Bq7=uycnZLmzOdt zkAk=P(sj{hLvDQw|N!ZDCiRWQ7 z0c+MJ?hs`8sVlE*5abpJ?Rv_h8@9|C=oDK9tEd>U*X3yIvlrafc@yB$}XYp%L zd6DqvjlW9&AvPWi(@Mz${>#>^ruLt=-p~kQ#_F)=<#)}}%6$Or;|k|WRh&+?4WD5B z?p|M&GP%BB$hAmT67$*P-aGpb$Nv62B!csqaro>)dbnrfm*cfg48rjgYptWqRH?U_ z$|+m_StJGWJjM@isfN$XHx^LigAbqk3S4=W;bvxC#f{zyh2rC8iGv>~eA=zQ1K*eX z-mkXu13pgYg+K2uANPiYA9H5JcC|Em6d=;bJirPnKXMB&aFGe^;5$%;BaFvuHY_w! 
zKOs}qZF4%gFy|b1i-ezqjI3;{O1e;d7ZDuw&t=`jG?fUCBEJpjR zaQ4@nob~_V6SE7?M)Y*U(lN!8(S-_;)v`u_l@Obz1DO{C^j7O?`nx5-*6qByWu+LV zxhdCx#CyAjOT<=J9aH4-LxDyj+23f5g-F~{h4q-@RX6^nPn@nSb1R@DBfwUDP~T3j zyB_y|M~T#VEot_LDClc&7s6e9T?~p@w9sVPz?__TSy`jD)lTreU2GknuXcSilxezkQUVI z1@jM{X!-Ns@r34F{Qtlc&Up0jv++1i|KJJyB!lR)H|D%u+N;8Yv7jjQ?jJ({F+-yXJm>u_-`vgc%p-hf>#fGcPU7U)O_OW|+U* z3|4jh)Yk=$z`Js*VjMV=cxUL^nlV>md`|_KtM5h!1lFCndb{QzK?i0q!e6vv)(p2b!`*(8mr~Ia8t&3A83jaM-4AImTWTuQ9Jqv^NQ`d?3re)f^eqg zl4uZ7sxe$p#V)_yvYdEG&evt-dzG|QJAxKI9;=%Ccx*~}s)Hxe@*Er_VRI5FomC4@ zH<0vikNj&zklFq$GP{*Ua`ROZHH@)A(1i)oU9aH>Im0-djX6Y)^gS%7OPcw$KsMrh z*$GG2$A8-ipU9^BqW{WHWTgGeP8{M_*X#VtPVjv%tK#VUk9I;{Bff51bK(9EQeJy4 z*+xsXbZgsPkJDQb=p1|W9HZxATCCrgTUTwFrmrPfEw z{4%+`*qNnaiwOA5#tyhL<^9P>!U@7FXq~Uz$@z>IRW_51x>fq|%T8!i{AWAyJ@g+t zu^Rlp*@f7uECrcxv=%~7Eudxrmuop9~_FYLsC$~@X3@ZbnF$-zL?LKXX=r5p_- zead%z$TSk8HeV4&EtROKq;3C|a08*tBahNUBg#aTJU;l+=Q{7^;({M(#)Fp(Tke~O z#rlxm*{Dy6&i)HKv0wX-(Ep!yA}8w~J25r$WhboM|C60Cg!#YQ ziCm0hMl$t(>_ks?6_X}(@qc9}e2Ty9L?q|`kDWNF`(Ji~@3g>3Nc*1af7yv24FADS zsI>iWc0!`@|Fjb=?}g1&y|&Ww2aShhu>sP#cUOAvuAt|rq9k$6+e!|6qT}s4rViNV z4!)bYfa1LL`?`UW_aD1HK0t1L3EUF54q9V&CvuzE>YqaJ^utr<9!>`yo?Uf61k{J> z{0Al_qg%Owi9U!0{5NFMLx|KohrF#6z>Qe_C0bao5w#nQ%uA}Hy84sa^o(gF%vX*& zbW(O$4h?G;xuKCP2h_C#mmfP51rzDf$8lWbt24yriny~4T*Ldn~IF@D)kZa7pGWK!cSYtl0H-_qqMCm@%d}7 zu^`VbFPFQE!;CsbZQpp8@wU6FT=hP%v6lCkN?JjrO zwr$(CZQHidW!tuG+qS3forjs2h?!q-9?p*3x!214R@E6nDq2RVfZ99bKMsl5^il5G z>C0^wOJRPO`^!RUL-(sQgGZapOP@@Tq?C-%OMVoX+cO0?9A%6Isq=m49NOXxV=(1e zQiQ?o7gQXwuqnaSBH zWPCb=yYtB`TN+$qwnfY_-EbDXI@F9IF>1Kb5FZ7~Nj+?J-Uqf2ZqRQQTx}B(U9Tg8 z!V-iN5dyPpVp_JAWR{MY<57UZP#3hu9})NGqx+N++kZ_F?8Qd(iY$)P7=SYs>JN_e z*;dOP3IZD;ln}-JRK=j)?tK~xug7~x7WeQugTwl&llUeE)Q%`gtQfK; zLo%X_m?YI5_h4105$f21*?SyS^jKj$F*lrEs(q zRi)ctr1WooGUMA^rhntYp(jvJ$nPl@)$%!DYO91SS;1>zP2t)vztZK128=SH#O#!8 zq}9Bo&{%sn8rDRTn8|;iov?e#@df4dR%4$Dy;H|XR_@HY8e`QTI4<^#&u2N1 zY~%`(b1xIuW`}P$eX&hkvx;Fgrw-wERls3S+kjX{SqL^qrmOyU=udxlRR9uGgliT_ 
zm6cXI!CHT9#wV(@WwmV4%ZIy`C`MM_VeTe$xzKITk)4xT%DS{vbXOkFG|4tLme z+dKQ7>l9d1KN#b%B=*$0^$s=P5!!r*WUM~)sZz64Y%sx=&B^s{gGs&>fQcbqIo9Gr z?ZWwFyvHlR`Z^fx&39Bg~tJ2}-J7zV2#*DL(JL`L_K{Hd-XTfGY3U|_fpIDUAcDjbZ0a&>Yw8@r~SS8D{ zMsc(%-F%0gSx4lKV?Q7P0|#2bvC6vJ5}lSfe-=Q4V(pWQvDi=|a3EDv*^;t>Ca~=l zZnrnz;pRK6TiyYgDe7XA!=bPduxV_itO}jEAPd-OkJA@H8lok>)zQC&EpJ4u{x+=q zd`Ac?tk~YC*`xqs=;=YReV4@X(9uR<7s$A@DFpDpRD4e%nUr$Vg$xqU1NHD~(@XWn-+V`!xjb?`A3z%+Yw`;8 z6$q~&X!yhk3KpVwu}2%CqV>o#SX@`;xM1ZqVem9<3s>)0@`dbrK6(J1Rf;iQK->Vr z4S|Er&+$@_+~k^TIPs3vfJ3&q<>J8k3aeJLz{ah2l=+UZoDT&o4T@dNB1ja3+z4SB zQ_m@sURcx7sVvw&wE$02sH`-Xgz=IcChYC=QRh4Iay}%Ef{`Ww6AdIeR;aen$Pn4Bc?zKm{B7 zg{5wxF(k%WLN~_tfz!sDdoIO%hgr`@^2I|=a;0IiBD4HGkhFT0yW>4TFkFccC>t=i zG3DOsszc5RTD`vc4msc9SN_2~4A%e+7Xt`U!L)lU(o{o(R56}`DJ8P$41i}rv23l* z=&oIkch;fiJJNDKm}|`X1+v9tz@toL?se&SkR7*x46TENRaUKtKyf-2fz=hV5(X6g z<~#I!N89*^#tu7>(++{0%eCP3T6Uc3plCPn^pPdJN$e{TF(a{KPsHEEJBsg zO|fI9Du|yQ+ngL-YB<@g;|${HLu??`#|eT|tiw(IF%pjPj=G)?kXhNYN#I8;;@@I} zEW{KL1v$3?YKf8rb{eEA%gVBNIGd~j1zE`4q$Q&)FY}Yu^8wM_(O@XLQ?WznknE*f zJotmCIE*D@fiwIfXmB&gC6MP`1(8op-QJJMKeS%?hr-(eI|x*E0|kE-s)HyhkFSta zV?-DPYaIGYdon}_tr$4P{8*?bH{UV&hxQx)P_nn|1zR+-PR z$5B#XW~yS{RY7M6EUtd@9g}}RomgOHa*M1O2sWn~{8hlHurGjM3XZZTgBfW64G>hI zkLN5CPEV5oytCe>C6j;XYF1i;ZH4TrKw>?hRV=`eYGce?H3h);t!qg074n+~Cx*=6 z4d|P$iGgM}-!b{e=qWPZ0a*-0B4Y*w&G8wG`xcumj!k>qW(&=t&PvQj`|PM6d|{Pg|EoDO zs-d^7GHX?Yix|0U{9)Jh-!=XWi-W$l%1KwAsH~t~i8}+4x5LLYYm`mD!tlSa(&X?r z9pQ1R?RLw@_)#n8I%`;BDmJ^|7H1(3=gDhESev7Lk)4MY562r~L+E2a+p{UV49vyP zM{Kf^^8To*%A8#;|w^(bDY7j76ynhH|g~OQo&66jGqtTQj&*gV=PM#47z@UdpJr-KN6)h zpvK}jGa}rd&R57~#&q)clrN*EhOgoJ9d9~|qgt~V++}%R(>J5E4iIMH$@%kv6E$Eg z+D3EFD~)Q2ULA2}@A6bc@l1TptEbQ$Du@Nn`T`1(eU|3Wmrvxwr!FS(w9@$$ZM?+{$(!e zH>_1m#_YsPv$_X^98P!PN*S??GDWj`U&hsbs3UPTk5?+gq*-fyZx6Sx?>ufaufQ#z zyIgH>W3G)jRDW4#coT~2TekXatbM9x9k!(|+3e6TnGwxm7J0HqW1NE+nO2YvoY|HmV;pg+N*bfDr-Aq{)uBb zxdk%Eteul5ag3zKNc#1W#!gL2p{E??Mn$n@zlxzGD(8NUQ@%9nJ5;Tidsfs^>uKmZ zmpMh3urD@WP!f6Ets4+o<9C!CJb*tBtGEYCHP$~k+<*+vc^UO>#?K=}uVy`|XC6@) 
z6M}`NjBv`Qqh7LpTEZN+gnir@!!bXvkdj6%SiUDm9Sr6$_DIYsnI*+_cduv!@v_j1 zGbNq{64L?$omgzu2v46(&4R10a#rz8A{zgu18ph>%GJn$7KF5p0qR5>Q>4Li~Zwz_9 zG33KM_cd{hw0OcxS~UARJ6f}xV`aIIg}u_3wFghph11(UV&55^NY6enC+%lpt?R~F z4{fHlDL>=q8Mc!;?wKOrvZ3dk*9G->@zj0XeDtk4M%-gPPV?P4+{mbYII;iPQY&uH zk=AQ1sbh|`^V&9HikwYwmkx+AvGJ^!SE(3|tIcVxebGxk^>S}ly*3^4l22_eu8tT& zvvi&8kZY5!VP19X)!?hBOTWbwP8F|C?;RPOeaYi)sh3>i%-BwH$(L>!kfM)cT-EAw zEINeQmv_!KFMS}lt7e{q`pAfbl0f) zlw7V>!(!lJ_MeRQs#|7=ioF@b+?_R(JSyghB{#1wo#*ze&34RI z*|zll?6P$gCd8Fk@Ya^c| zSFO})yE?{_a_#MEV}a5>udZv*F5&hzamm_n_r0Sd-@1DmoJGl{FD`I)xr$X}77ZO& zySy9=cmkPxT|-~ywOt)=@N1vgmUBu=CtAIB=>-c8*zAclI$z?4>K*7Y_ilJ!XB`Q= zD=eo}n~N$CTOVO9!&#Q)H8r9@)t1q(1Uxv#Gjqhr! z=%KS%ykhiqs8xsF`YW9(#?z-@hU~7F-}K1w=Won39=}6U)Q4TBT?cfIIdD@Wii%#$ zbqyHF49y7K`Wga3j&XzDfwJLJ+3$HJyza-Tzp_F#`2@yNq(%xrY4 zMmfor zxN@~~j3QU3=dHYc-l`GHb2^LSp;Ks$xr6!E>U}nPt+-On84s)}=emxwt)>lSbOnkj0o#lN#S>4$SSL8Krvjo%m`Fi7d?gj2V(|>nG z|Hqn#?C7p>PpW?2buaGmA1`n4AE{>rqy2ex{rsQ_!d~o=*Vp(pGwQv`H)hKI+y^dt z)ocC||Gm3AzP~lTQQ`c2xOaYh%s2MT=<`nabocqZKfiQ_=JS{EDR-X7VlP(OF6Eaq zw!gr^-iH0#n}fWT&*mC~s@IYeWZ`;#USB@H&OEGLZH|2ILdLt~g0J(K;N9i>w4PGO z`8{(6=V#+JeOf9&Bz@y#xXX9v>*qrmwJ*y4JY@Fu`D71kJ<82hozHBz+W&24UEIL* zAkLV|!*qKWDUz?xSMS#6%^5OaOgAXE9{UJI{hW=wTBxtHhG>a9@xfyblAgKB$KJ$h z=}3HYr=!dF*5uQT-3sQ6nK4{GFG_n!@7SMA*ZR@ijhOwd-Kb}}6Uz<#D%B6Dn{zSG zeobGJ7~a?;r9ER_IYTMkJ^$%&{%c>)|56)pfBx$HuY9efGma&DL%*7Da_R;(t{2oe z`P=##?S}jF&8@pqOF2IuFP}e4m$xK@lnOQBbf`^;U0iBBzLg?}Ri>x^YK_0Tszahq z`RIFOCr{erRWswECS?>$T&NH)*8^cm@dqOY=;zTaXd)Vcsv|f7bZnLazepcQ7t{D$B-={|`Qa$)m zcy|uFmvXfjFr$`Hk(;kg9z|nq#yWb)!M7?VBt8dalmCp6ATN-xNr8U zGjUc6(o5ulo>$jwc{jWJ%(#T3#X|wwZm}XvtHCYHhZ>d-yI2?WE$hN#7LZ|mSf3x( zlri^(SFhHHQD>;?H2DeOHt_Z8T%6v2zYpu9&MzSN%=ew07axthEvLv^)^lu5d*dQh zsdgP_Ok4fvr)}Nr<{3u6ckBF!NA5 z9y{LLdpWJ}Ek6@6{q6Qm&T$x7y;gEzO4nQWjTl*1D^M$}!hti9PIkme8;?B|m%fR# zJ281iuV*QXh1RcDseGjKCEcQ{M~d-m^-zATkA8pja_=um8D*iEW}%pq9v-PBshHWR zqe0lH)v*m+`v;FU`B=82=cO*ge;4cr;P#5gWyFr!MVn>D_$f+#i&on^jyo1Sn~~6FR`wRN^2?0cGjUq} 
zxJh@sCX$xUSF}&5O}yX)>G@s-$FCKx$GV?UeLu3w?^3rjV&7vi{39mJ>OI3!L;uZ< zuIaeX__NgU^4>p0*5=+{`U~!0z^ob-W~A|AE9aH6>5lBH&OnRw<{qzWR?gUcHFsZ~ zPWs5jHKmPNEWW9k+iT<#G)|s44Z|tL=mD)K1sYl4#%S=JihmK;XhLd!=9cpJCAXA? zkxw)(dNJyomQmXb)Y8Xel~4alTIq|r(DeIGi~D|(=l<Q_u&+1dpZRayLE`HE6v1`A%{y=&Yzjsz;=F(o+J2li<}yD zr{h@JE`i5z@`Fqd$^3m*l@_tfUSk62d`e(yI&>Vr-oZjuNFzEvgz8NZ)pTsWcBX(! zXcpRupyGq}4FB=Sh)eR6o3fu-r{v+x^6w?vQ6)F7UC9e4*bL1=h?5Qqknx{4Vm=k%w>Vgk5f-AtCRj;&Z=ELWMbj0sWRiF4f&(x4i-|3;G?zVa~3$FlWy6VuZxmw1@z2V@tw zZ2g9@^RBJF=-3&1_Cm_dU(V0ADgY78SP!bx`!ByC$Sn*95xv0-?oSE1 zjUHda!m>$;yZn2ynGIR^dPM6U>J9XQY&@vadvxKI_AGE4bYy|;@Z5jQ8^m%(xc>{S z8u^LzQmeF7G~(YCe`l}PzfA^L+y3#hzMn5OsIZy@;O)MOBfU6#Kz1r4BYSx3>t^i6xnsanu^H=pMh5zw*o<&IJ-J_2G&t6J7+(K-ksFi_KgtR-J2%;t z7L*u27Vq{wb2IKqQDX<}0yY?@Qg-Mk$pzJ!f@>9^9a$u`KqIax+Q@7>thZ+!4BG-5 zC~RVt>yDmQXhivjMzqHIEPYkK9lpFNqGy@ajy12;0IAY_Xbq5!=NjSV%ClBJD(l() zbLRhpyE8iLi7sdAtowKGL6?uTouzC&YiEs5W*x0T-D$racJ#e=-UXd^qq4i#uUR{8 zf4%GvZli0_szI+(!&XIYR6$Ws*n&j4fWx$QjEZ4{ueICt=TzI`8RQ~d2rUztyktc4su#c;z}lKDP3R7<_S5im8tpR z=UGm>jvzbB{(CmnklBYvd6&~UVjq=pPdf2#tCg~Af3R}aYI1oTCoaEER!)7(%DH>z zuTJ`uIxD8s>JTQ&Dg4InhQ$9z|RTYn8cP zHv9-{B%|82DLdY>jJ;R`-PwY2(znd22d%cjTlyAzc5ZoZhiszVCOvX?Mt%C`Trx`K z)Ptck1SPM6d$7; zYlWT{DgNe9-D&;m(_3l&&JA9m9AeZT0Y5ufA@t}Ub=X@{@|~km(~#hvJxSMFWhn@N zt$7K_$Cu)2lA~$ds4E|haFr~8aFxTPwaNflG}A); z*^7JP^3hd4D`YO7j^PQ3C%!(t@F%=j{lkyYkC$Q|-s`=c^=y$&2WiO==Eec97}VFj!<9%XC60@E(MPqrR2IMI#MIUZ13{3(N1 zkN&F9%G>KqACjpq|WELipw6Y=bY8mUK;hty45q+$f}X6N;v8oo>=8sdX4i;IeyXm8Yk^pD;Hy> znzo{)I(He;xZWkNr&ehmT%}`w%wU%6VtN4N=*>RP-KcIi&Yj;XYnoBDa`&exqbBv| zZ$}Af?*tZk3D1&u(N(1_Y3H$-3f*I6!*Xf$W+Ep^P2&Oa8o{1;H*h>vin- z9QWY4648VRqWUh%aix`ys&DIGJ0wQV_C_zCqZ`XSU&_2WT6dIb_P%vTnb$iii_t-N zW;adCE*Qm9w`>^&$6@RMe)79KD#R>4p+>({%p<6??7j6lTRp|2ipO(V{z|}YIhsdh*Q3bun~)t8#{4mr%PH!9_m|P{yLih+#@`8(W<>s!9U`mmkXPDK$#X_+FYnKQg;7U}xbxYP zrkCoa__+)3?&^WQcirB(99W8r{fgNy$CPK3iC{f;d|!qf@Q%M2rbw4{*hyq z(+|1Nm9_3>#KCd?w8Kw}@z|YxG&SGgj@R^i-w>`xGjAU)U5^&tK3cgRtz3+Dv4e*( 
zulDvhayyP}Zo=?QBgdsaH;zyWUXQB3H#E~>hL0mZ^3poB`4vZ|!ZEsLqabQIrS=w; zQmeOjT85U2-0!5gINgue0cS}6`d<2+Rmdf-a0W$a4^`q;p}xw$pQ4GO*J~?$7!eRE z#`LY_C3TE_JV}m!H+d=g`hvU^fom_rTdHJFx!nS4dF%Fl*!3c$o`lFUpGW3wiX?C#Q7~H|)$qhkbv1 z@$AMsdgSWW`#7W@x!4SP$UqDqi^ZKbN+Y!~rOI9fXb7V>p&eUz=uh}%-B8Fuj5b7~pObW7v9<(E;}J(FLVKOk;m)$`AGx66@66S>FF z&x_eBj7bHb*)rsBLgH~d^QVQxJsc>m{?1p9HTn#SxBX*BN?R^y#a%m><_L9kp?DNZ z^}gNpA)EBs4t=svPb=9qbl(f553o@Go%1!gE9`KI-E>B7HV4Hr_JcN7HzHJNYdY51 zpbcZ%mi3EurLJR<50cAfDXOg0#7VxZOPP#O;d$-ROsA82Zx58tG7yzT6zKN49b_R zb*sH$@}?9pYvAmlokz-IR;#9?ls@5MgxW1~BVWQU+K3)>&u_EZY-7hboB=~pKO5!L z!apv4a;18Q_>f@_d+jZc;^2Hs82xLgun@b$wmWIVyv363yC=bd^iVtHgz8%N025C> zbP*~tNrvbPjw!SorDKy`^ahKQG}GS{&W=7>1lHQdYu(#nA2`*%jfa@aV_fD- z^JUq+2;xyw$dy?;KuQe~7IV^$;#`j|2_~LEyo$a-tjT!LrUsJZ*fW9i&E^B=E7lV* z&f_`G>uy9H#IspbengUOR@ZU=M?}*OIc($yEs@Z8V{peg1-V$Q?G_)m#j$CnM@Jd{ zJjJouL(JXqBx;CPhv(%n#!f4%&arbJwZ1hD{!TXdx<^X{F@)b9B0HjWl?vwTJOiy- zPO(!HFuGb%p+`AJ6@j#t=Ge;}!?d1;b^mq*@s^Lj#vq1~aW+og2+-hXghQR3LG?~M z_xydWM&;k}^XMBL`re~i{4e?-X8*ZKmx*y&kIL`kCucNkJ1(?-(T0{t#PHsIn%PgB z?VRM6nv-HtKA6HcT&i;GcozijeN(s!sF*RwSgtqNu4qkkKXyCc0e72EjXzi zD4l~>_01WRQLMU}V*K+9x;*y&nfT~Wsms$+qxCsH`Zg@;6EVl15f*3fb~t;Thdp1L ze(q;DkV>UX9 zS)z}At$hNnJn^K3;IT1`tR_1%EtjAEpuZ1!c2-=8)AsQ}HR294Gz+I?W~9Hi=V)ir z$bDF%S?^Hd5x4AnF1|qI$BAr>Y?^(va>|;);T`M~B2vx8K)c0vG(qDEAO6p;Qm-?+^``nu0^RCndC#;wP}!vJhkcT9>CL%vyS7t zWn{8*cCi&IXiI7IbMGFkV>V}E4WT`TnUyN=TbwnAroaA$PSuxgu6zG$#AAyQjeWF?@;7taHaep1v>R=L zAIrL2zh1JPyJ^X|3E{BjoB=v(M{Pr2*p|I>^1NC4)}}#XNDV|%r4ZiVCYp-q1- zWV*Wgb0_}bImPdD^rbU!8zXG<9dF1p+GR$(Xx~e4Bd(UxTbSM3Z+D(`OO)SQ(S#l* z9aijiOMD%!PzzBLe`vSD(9^eiKCyOM(brEOjHEnU(d9Ag>KM&GV)y&xzrc=q+cJJX zlG4t6D>Hi(EbE^3;jJP%V@-^ov0M!3eRCGt^(yX1h@K)s$9eBZd>=UsPgdo76yFD~ z-?H1;;5TfKO1+NlOd9EO)oL#=e(7NN4vYsKcmd<1$NR4^o(W`Extp{{&)tn{FW_O> zA_ZK!r`_Fgu|aQzM>@w_`W)raj>l(b z3g&>Fa1$yfE;dp(dWDIzM}PEn*N0J;9o82e)nD!nuCq|*S&V%iaIXITQ&wbu|AnRhM{jdx^v#tq$J570Hi*qxh<@~rOw{@;d%_fh1Ri{&3{24!( z=|G#39o%GoVg@#jx=0*pKXtch!2AyW2|lw!HPy4$jq{-gKPq{63Dz?{&)BrDnPI7Z 
zk!F+{FK9+-TcpdJ=gynSL%>{*7;3dT4i22@8Po>kWuRn zsM$Yb&&?DwI@&Dp(aV~7<=78HSIf?%dh(7q_Je$3{9L1nahF-D&JG%RCBxE=%1&YT zD9_xzF$WXe4vlmFpP*md5h#oXJVu=4B=DP?oa%gXS~_>|r5^MW zZEQJS&_=t^Mth=-vkJiBzUhltN2z_#M*Q4q)DEKwvQw|tzWV5U~j zx|7S;{oMbB8F3@3let=U!MCp18QItod%fNfmy4i!iCEJ7KHpUq?Eg@eYXJFz*<-6f0A3cml=g;x; z^n?_benoNP&1*fxQP*$&JdSc>&96Km#9Ff?3?H5Yt^C0Kxv~3UzL6r9TPf*TpSYw2 zl(@x_jkq}fq${{u!&wGqF6XG_K)BW7p9iDf$-x($mDJl!|tu+CwNq=l#{^>uyh`+vdN7t_}|Mc!b zdjImbzx@HjG~c;L&3kQ1g8&RV<9w4jv$$v!IfD;bsW|hWzy4gX1o?+AIsKT%@0<5ELAmkOfccP?Ig00r*-yMX)VJnpe%)ID^lnC!=j;js@fzPeC`t*-x{L65A5b*<_iD z6w7-@0oQy||HWvn2$6bijVO%S!akK_r|55xzMxu$6-!l7HeOVfS=uw z8E{h4sra`WVMivP3ZyUpuW-rx3?F~zELg$PN{0luP_&P4&cYTCnESuuqH|a0u`Xua zLdLo6_aTAk8SIYqafbDc z1b`8odMHo*6wZ8fLLOAqEgf$O@>6V@_W=oc@|cWAQ~YyuXfZMvb21;iW~>#n2#Ppx zYctbdgpqX`U5-k@G&;Vd{O}ZA8A5J26~L9;*(`8bjPQEo)Gy~6-QYRPT*Gusshgj(RPDD_<9Cty?<(SU9eZVvaY|6-y6cc!UCInpSHMPq95Vy|`0|JM&qTgB z6WepmjL~&I0KH3Q0JqSpM(A-y;eSw+l=Mf;M7>e$hoq`T4{OyX$FE<1`T!mFA77vS3kXqv=>3QC zmmh!o_sy8FQU;z;{Q1Z9>%aW)(+?8`CG<>6OQVGR{P4g1+UocW0%+iI_~9o|ne@}2 zr$HiuRtT5zLHjFC%KwpmODl2u>4(2=^d&bfKvELU3=TkIl_||O?%HLo*wF$)P3bxn zNGKroP1h=VYAs|aAvlzr_t(uNa=p%+er^9(YyaAQ`7deGADtuKHa`4cKm7Km`b+v> z>Bs%BS=WNZ$(6YjYeCL%o{Vi+PZ3b>n$0G#;Ao@;q%sB2;e2P?K#a3dgdFH<{x1gr zi2;Pte&rF&>pXaM65D@nseJwUpGl4PGyncA{-ynozhFmx-CzwdP!wYnfLdgcCMHlm zQv#Mkwqu|vG$R|-v}MC_8SORX||{KLyJGDo%8-HO594E0~eD8kkp1Dzf2~QK%O2 zbTUlZRMkgoD&3}_=T)l~7{N>KGLH!exo-*rcWconT0${R44NB|70P33fFy+XmSl3w zR&7!Rhu1hA>o_g{{;%nm9}cJTe{R41Y2C*F8N%QdI)l^)8$yU#U?XutP0LIvgzOPU z`ThbX+aZOp@VI1)0LU0-$Ra=d^yl`=uU~%#Q>_2?Yx{Yy|DXTzEV6M($w+#+gPdI~Zh=qIF-B_+Etr zV~A7Cm&1Hb3#~DkNhiSB10}}vO6pU zx{=G?*Z_V`wY_8rQz>QzK?7o91TGbb9~j?4zmLGtTf#xiwH8PvUPsCSkTNXx*a;8W zz$WJLL2uBG{kKhsjZp`XAR~kxXgYEnvFC(?>yYH|-~7bj>f-14GPmwnzT}N-Z~A&wIPlL12L?KjyCWU=^qO?g6p1j-cNH@F zDj+B)W>+L{?~tH zPWfx%fjx==E=TUhPwB7co$?`r=g2fagDCNpeDkiXM>&G*?I<7cdMX6{zx?p)Z)A5z z@nH9+8T&%K3ZH;J=F_bFt;(7^>NzP1Uw~NmSdLhtlV^xn`lX^|h6U&~4bl 
zgu77+24_E?O2G&_LM8BXCi~ETwT2<*gKdlB4M(z|Q~p0IB}o+bcuEok2f3#t4coB0 zVY9ng=155WOda`(q>jrFyrNfY_-xkk zawz#Z&;^Io-VbyUD%TeRU9No)(7CQclkNxnYjH}6*g{@rQ5Ug4Y1S0`CD2M}W$go@ zl0xfv8T_+{C<#513C2&(J((i~KL*>cg>#?~$r*UhvXh(*!>86zd!hYox`Lnkcwwc) zqxbpUfaovjCwd9^3Cn(@B@p;@wr&KUk02!IBJCW6w2tt65E2-u zzZHbEj`&y*67PdXJvh9GBSvO?hho(ZQ8r8lnDI5ZD)kmug1_gR+`x0S@N#Dlu?K^y zzMr~3=g*E}3o{UGhc_1~nM%<#sdO!UPtf3+@z#`)3$I=69V`M=;{-RFSsY)F7-?<$$d0Y6}VL z!_0xoIZ&vQLQl5U)S8xsC8xZI;0Li<-@2>PmzIoEE)I7;&T?o8Aky)BH3xb|G%+}& ziYj9BY3!b&8_pz(3W@=WAq%KX#)b}x2*p8zss#f+mhPDYD~NcitYx8;Q@T{>+rB|q z(mFsbjJ*ix4l+d`4hs3x>s%B?STSxP&`I`-iXaA77`hSIvR;DRUq&&dGcB!1$UyPy*eHv4-V5LG>(X z1Kohy#sJk?_X3GT*=Pzokh0ab!I1@Bh1TH;B%9~$feO+E2zLI5J@7x#9;goXKvx22 z{4Imva0$2cj6rZUNsv+-Foy_Wtoh}tE2(BQ7P_soGTUmiCczIQjAv`l#$f}p0t zkZC>)YE<)>m2MIgh#80E z*J%Z72a!2j<|ST+?3+5iZz1fM<7PUwhdLDJ&ONt6@u5^f-7pP!Ahk?FMV3V{_;6$o z1`FP(q16CZKw8Kpl)PoPQ7_S%ko#1oC=MLyKolF89ORp?CHEFo6P4wBQg&p*SpjQ2 zIDBN1KF&1M>Ix|zs-cIjDYODgW!eWN%Z@+?gMbf-&;$s&lIaUQLp20Or$IhIGGgut zBY-(~cC~jN+G{t`XIzCq6(C?^vI&bc9vk5SdnjC9GkOd{B&cGwO+e6+2Kx>vl>tHp z^wD7kOm!+W+63T=z$7Y*8;1%Q<4|`97Lf$ajhZ>cHQM4B_6*Ti17CAl0}5kS3QNG5jH(va60))~J4kvUA~;0J08Jkx!1kuu zutITSmCa_L<#y)ta8DsDhlnGXxBbGA^ZOn?$P0;nUOW3Ul4ZuUpfE;2pT~V!q$k?cIf-%!lvZ@D4I3JvOh6ZbM&Z3YtkEkE;+OFRKrD0zwap*_4T1xe zF~uG>wET#oAsrOv!vGMq)*ee{l|`Bhu|~h`<9l&--ABWQUk$1l6r8TLfT|v$of$_j zo~0sHeQXLe68zt44NVLT1a~BvC4_)|jo`76JpmSo86F4Io0M@96~qy20qaKR3U;Ew zm_R2S=tYM9FELapPLI0(V~wl?yW%`C#f^8YxIO+_K6QsrCwwf=;_|6Id}@|c;6CUS zgbz9eW`!Q8lVvAO0aOD?>mJgvDnOX-@gMj@mZO0ZLvJxK)cY7g|Hw*wWvd{$tU9b~ zeD^5`A9M<~HA2-0Jylm@Gtj`hZ54dHN{0Ew0_NQWIB2LcW9vp$WP#Jsz$L+hhsAiI zb$;8&_nw0AL8oBf$IK!wy%!~A&!Yn*1R{%N4^Osl$utZRx&!CxtwbDd3qdUz-%JHE z%7tPZp3nv7|F_`<67sUhKz6Sr^ugX@W1b2m3D=9pWWX5`u#Y*g5kpB|Af`gla1(fWd7hfF|a^;`$XYw7!N$8)*j5Q&B4eO@>lJ6$#Q}NU>!oAy9ezI0fH5FOW4l zYdbg^4Q#+7h9Y4~nvoNf;jya#J`)j16=F%+y5HAM+3fEr7HDiA$H?)oX{x$_F>WqZJWj`l_}Yiu<%znXwe z5X?m)+bRKDkyt9bKuymM|`FBi{L6S>OO5t4{`f%~|!vMgzu^JbR3C(E8T`6XUSj(VsTMIyQ 
zmm@Us&p?w!4mkm!MA!lX0iR1{K>Rfa#hp9k@voypC+-VYSEV zu-F>7l9Yk-C2I&$u`vqd6VT*lNAwUBM{GAVPEk7EACnQ#DgZMj17t_AAf{BP992~q z8DNLXE&A_(A}dx-_f0EEQi+mNme3xeR}J%F@qdtIAR|~2*iyu1ht3exLgwpTwx5C` zTij`I40AP!Xbg!MgF0_;9I^8(B#-w&;4rCKC{;leRZUTbk%O$o{QXm8!ELY5bcIn> zM+d~P6(lt=UkJpBmLQ1C5StSq8bPil=97K1NAx*%tmNHZj60~sRFzU!lQ(1scon=-4W2S5YL)2Fav3p*l~da%&l z1q2A-uE$o%rXYF5Tp6BOEHEqy5F`j$Kr+cQE1focyE;D??sB0Zmp|HPn`9DgfV2ieQ)u*oTFI<93E!3P2MIZeqob%KB9} zwh8mVx)~b&*=e$5>Z1fQVDO%m2hD?_Oa`RKrQaxFcupgb$?u?UuC6_t`ZNi+l~rE`k_s{9l*nLc4*gYit0 zW1YNS5)NP){fKR9o>gfAfK(KS5vyM+R~n1T=$fql9njRLJn#vj6YuJew$e{o(#+I>?1NJ+cjuWai zB+EeSRDf6|81liMte=7+PlK^!8iY3y$GEAUY6Dc8fIzHp4GJxgj@&5&4cLKJD9t}~ z`1}-EGrcB8h*zPqz@Evu(;!uCN&tn>kP;JUvIl`;AWvjOm;}(-<%b!9pMWCEq9I@O zYArSe)JG`^o@1(sSTRlFB1vom7?N;48(;`^gJ`fXK(bUn14Whsij8OG7HH%Qcr?*6 zpe|*O@&oDB9hW3H~m)T1px z&(3!5P?O3`PW;|!vf!t!;D|7SG&0pDYN1Oij1x9sp9VXFou_xNfbAC58`K4n7_ggu z2bK%Pgx?G~qFDj$E<)px5H}ixLyTjMg8t(|XJ(cvOm+tON+jEk$Zf@EpvekrVO11X z1ZYMIj|H&R z$YKmwVFVRQMhToxNYW-j#t z_+nY=sC3j?ehQk5)pTHxs;qlKFoV-{P#~ui6F9xASwY5tx(?E03P_WMG*Yx}g1SNa z{4`ls2aVP<8=Acf69`y0R`*e(1_Q+=vy&|{8mzGfjct&q#{eeQ^eN)t6r_KFnXnWz z;6gz{)sS`VEN*}wK%Ruz0z`P1R;Eyd1~6HafgG+kol#-};-PX0%&Eu}lu?kOwG)}fyWas#R>@$Y zg=xE)zG@|z4XKEGKv1yoAD_eR3)Rl$Es&KLJf<+hSEEo-PpZt6^9P zrYTMGr2*Wwc{O}_uDgoKH5RR6K|sl}?dDU^WXVcLR&~IIwbz`C*!miXIt-7AfV;px zy_nQ)JQDr%F>%cTB4aSvd*eB6()THrCQ9lq2yOmRTdq`jn(?1+9v{>d!!t zA>Lc4r<5=#5wMqe~`WXqo-=7KMGviQ)l1=FdmKt4>0;j1{F z0&hvcFUVEKd8b&4nI#?!gYI+uB>p{Vp_Fa1Qwc4k-;a7#PTbEVgi1!9V76Z#&e4LDzlNhwvwC%YN3~);np#ZE;;gE5g>cBnWTyDWWb!XXGXzRVq7%0hU+?NKdf_;)7@9NSs#ag&DG= zj1EUz`r>nV=a7ea&Ry?M~u@>;>r^y}2#nNA#we_r6S6C6eC4enquUP4{Ym+Kw z31tGa!tWt^s0$59@fm0`F2fS;cx)7+$^ zg#HtuE+k!hHf%kpOa+a0ilvL8^G4N;#mU9`1E{cJr5w$~)lWf_ThGn73N@A*J+9jFs%cWT6dNUFhS(sIjCXphc+9K$8U#w*{*`?Tl4~+35iA#B_Cle&E=^ zPC?qh>TzFo9uQ=#Nx(9G2Od853W8J-BkPK#q%$K{>Al2Wvq^`uf|Vz|#dw%2}e;fDW#8}akgWS|O`7mBoW z{2t>!9fi0rFPXj8nm5 zbQV9s2BptHlOcc8+mglIdk2Rs-1KFGE3SkQ<)0UoPhy3iUguL3mMeg>M1 
zIapZ$RRc}cN3hgd6aBOfQ_&Erz+P1I$uLa0d&!0tiACg?$yU7o{4`mD_J+~#2&}gi zRT>Vu^%Y}yoNo<*%D0?>HxRP4_9GP)m=ik_4cw=YlQAC&aRGw>@G%D9T=L+#N!DyZ zjcb8c&&E#Bg_sn>Zet)#R02Kl`ZM_WfTfUr895>fFj?xRwao!ipkoFxBc@Uem=YUK zD{BKQaN5>g$wZ&ePm!tHW)|lG;D7=#vQsryFI^TL;sZmqMT?SczRABdFjgCEXJh@X z`WZZYEd&0eYYx;M8(FGEd7uN^SX2-oh!_eI2J{2K2uwO^Fr1FVE2=K}Q)G*Xq&ieR zYC0`<=x{EwVtcb58Y@-+?ZyaJx=#k6Phmuc)y(wOLciaqr^mp#z}|gqhs1t=SSL_W zLTu8pMJKvtfn}U#!xCx4uB0#%VL9*cDe7}d4`S?;$Sju5g36&lbP{yYkGhpQdU`^V zOVA98!g^Z9)+Pm@mYGHHy;Eb%NF>v(wc2%PI?)ReATJaNVYsL29W%%kcni}rE2k(0 zR9%A|GHG=0`}FiUrko%Rdf;nw?1<2{WB`yrZ@*QvB%ta*WGt)|2VIbrF677=8a|}P z`dXO%C!oh9vGnD{L@;|hp2~TWpa3r1AT3F0lF%S!qJiP&x^r z7c+a9rD_@cWkOW7XOV5{Ev3pZf7NuHJ2*`UPSc?yUtl%r&9r6cF1F_Vs4NOs~aPWP}eIK1d7FHh`2 zseP5vm36B);iYkg$FF~{w404opWROiA!W?%hx5$Ev&Xql6 zo4Qupe7CYpM_C|Zn|___ZSOp^4Y{uw03rK%t{seSPaq2}7>_yd9It9L(j_8%qY~c zne|D`mFQnm0Rh&wt38)==;?Y?rMtOLc6-v;U9i}lCAFfwnX>&-x|Gqo*0VNnoRjVf zA4w{OQHZ)B! z_R!F!Y41;WX{h3=R703fKMS#?)|hhPDXN+!4aK3fN2#IL$AbP(VDG|>uxbc&9BAdm zZ`XkLdSF+u9=5re84Dp~)$E*DP-rDcDA6RGcL$&B4*_K*)j*!m^rX7FQyO41BuGKe zDsy0}il_Rd%7k;pFV7RDal6tje6m;l4hT7jpU|^yTZg+cnnz};T^d#E4AOtHVr?{VcCumGD<>U~ZCWmaWVEx>?<)G4JaBm7`9amI`RXvbV7qs_Nl>|F$&r80(jDQU)! zl8jyy>wdTBolxY=Sg$peLGM9Ey-y3N1F6PSsAR8uASco>-fud8?nQQ{iU34f$s;E$ zFBPnO4<(WXNh~|5HpU54yn|1MqZ|B37Zk&gd|>mb)fF!J;^qiS(7$qc`hpN(gbW2? z8na2TzaRWfDF^jcOP>x4_GSmoU%V?8|aNJqFrA3J)!5C`=H>cNE^ z0=!M>*^;~}x~qtrp4(|lgGCIe^vggOs{&;giSorW%|Wl5{Yq@}*WvIp@8A$XQpu z3Ur|_pmaXkdj&Q*6>tnIKZ)BSbf}CbaU3F21TS2r5P-|l;ZP6~Ns=Z7h9<2fSjAk$ zgsxq_$9{4wQkxr2@|XKqQPy$qZSC%=Putfmft8vp2V%3$;X{$wyA&w-;?Cx5|4cC&BXW5nAyGiX5vZ? 
zYlRbsU*;kZ6K9o9#iW&yeDGrvbJ=Go@AVcw*=U8Os9P?r^cGGZTT-wI0Gb(Dob+?qRCN-va%-n?`LJ+d<7L0YJQ^6##)WkD^op}K7v3w10r31kvc zO-{OmN_4@$mgDDTqsAogP}_YZz4iorGKhBri#r^4`=*Q;dd?_qO=!T#Mq*)(eI!LO zu9Fh6{yF#55z-xevWtm+NlpTcUjWRkL>jLn0Y99i_lj34q}Fa46&sWWiE6d1u6&|9 z_++?1mBNpT(x_z8fyXm-5Z88~0#mX%gDz#&PafIKbVEK8bFduCPhme<=BV?axc3Ym z$SC!)ts|4J(rIv^c+Se(XABAgh&k%~Xbd|5k2~ZhA6NFw7i?^JAC@Q0XX;1vfv&~5 z!%TZzwL5~l>Sjat_HwX7ymGxor3;=whgDc#KB{>*JymYwP4Pg)^#OGh@EZlCTxyJZrth-i5n_yK?s`nFQtA zCtdk)M!Zcrv$O%_jgqMjx>ZhJH%P=P_6z%2_xH&m7q~kEv!+wMES2b~lX;1UFa+zv zB8!LqEZu6G(#T4gpYPX_3^utFo6*jUOSdQ6hiN%R=Q158U}VwSYFixrQOK)IOFRPH0=By7%~f1`iQltIvRi6cqN9m z6=0_3v(SpID_^P*VD#GG-zR%lQdtC|;vn~tFwBbG*9Me_+%rLN`U2P@95Fzc?l`%| zqmR$(Z*gY?QaAE;SnLQ@I?WjX>{S&zXS`Po^kzy_(k%urLVK}J+#7%r;&6|OaH_ha zJBq>829sl;i7|3JD!Vf^IS0EX%w{<$LKFDnINuiJ&@k@%`($T|xH#V!S=9{YtVx*( zj(9Zq4@Sszk0Cm^n2fWE?t+!z(p9ea7CzZZ?I*8@hf>+^w&4W^t>Z(wU@<&F`ABs! zU*!QH^MznFViM!M;?8I9leGbUBy2TS$~vIzl_=`O{*Dr}L$L&nwPk5HWuOB*u!kLw zA5w990zSDwWJ7@QY9}hqp;!!ohazCuv~kZJb)U1%u3TnXF>GQE5dT?kvY*WS(I*^l zy%b}5hQzN_4Q!k@Q`{q#fP*_em$ZQsT{Y7r9IDb<_wdQESG;g#C?&wogu0lrg|0lt z*cim)sx=EE(XdN~9h2*zMu#EgC$M)BCS+c!)NM1%=`{bS$ET{6RyWh%V%4U_usoAu zlB@!r+;pgr?{Q~jR^n?AwQHEeF!IwoG)skiXS zVXm%`I$6h<3^zz|W2Mzd;o%Zm{*l z6hr4%4?szkxbNKnm?79y8Gn;!%D`x1mzRn)ZM=m~?)V#14ox-PBbb-8V$w#%T38+9 zrkXU~#jHnE)~jdq8{x^NQ0}2u-GIC^hem4_z-cH00Sg<9#OTBzU9TzPm2YWUu;Xp|eDr>SKdKt~&ouz$e#UgdtQd!7df7L&q4@9=!CP z`0v;PG-@-4WvdyNDiZY1tj}BEK@DVpyFDAxXCu*BK@hyx!w%vB&AQ$_tJ~0EqDE5pasQDBVNa+TdPux6EME zMSP9+gJ=%RKe1-+5DUDtpHIWIQ`K!(f}WRaRwn<9Eb~)TyV>i~ddXGSand$EW5Y^2 zKQ{;OiN*qVwo|PN<}2wM{|=@Cz=w~rgh4^Rz|oT=6<3J1lM@@7`ntGn+~Qm^_;PTS zbqw8eQR;MNm?X#5QaovgWmz=O`X0J3yqo21Q1k@)Tj3tLB1;v`W_Rwu({#8Tcj=M| z(bkwB5syED&06CuLl+upJr_J48DWW^n6i0 zfAiCVPQM?Y4jrrXXD1dz6$<%9Ki~BwX*(uey-HWF)Z(j8Szhb0cW@fj+2ZGF!Tf&z zZ~>3)etdUHjKxXt>~XBR{GL~)(e+u-xJC$@%bN4||CMK7JbV4@SI^ux^YnnJ5?f+r zz1F0SO>KBDUIMf)^LjD*rJB!aYIZAANDj9f>~D$ zt&xHz>+NfMcw_VJ^H&dVKIL;?@426U_4boK^Tn$_zWMcwSFc`vl1cs7&%J*zi1qxO 
z=kx0q@7$1HdieU~_-mTjSZ=b(>{%C?Y;zewfDusLZ=I`I^_5P3FK7YAcU@yPA zUcoZ=aYepvf5S5QH+yT>8`s+X=2JSChgT1C|KP)TbJ&~x?|c7lzy6KA^WLMIs%7!! zpP#?|a=o@c+lx=S(#DHd<2QdZI{TZs-)Q}f@7kePA3g1_x6M}%yf^>%eKS76xV+x~ z_cb%}=KWajINVGLGIpk)K6SfFb~d_lMh_`elP0~cPK_=}@6@G#JV59H51pTV`+bk6 z$nmqIh!qBOGn8RO1LE)%TdJM|c+8MuSO?T{;+EF*ZhNZaJ`jQiD;+U^u2E#c)@LsZ zh`9ao6GlIRsvM;baE-7El|$VJ{M%Xzsx}h=_NfLVlyCQAQAEXOHWb76U6TgSE4{+D zq?Xud56I#F+(D-~EBgL_4%*erL;i*hT z15<#h0^qas5eGPa-6e{&kNflFfRi8bl%E2}KlH)$;V(b*ZU4Ye`bE$C6pi@FC=x#= zig;%@@*j>Q)&Eo^sRdI`5@EU0M33tBWL5=yGeh#bcu1|ghMGp3+KmaNh7eV0(A)*@ zKQfZQ-ND*s|B57^Ad>v`!xt1ke!Kr@^Dp|>{L-F3{O98cFu~uODF0|fzkB`s-3tY} z#TVM+>)*GlSC8{@#?rPAp`zK*rhL(j;ecJ9LVzgAH)kf({gb`;dcPmz=MP_f{Wzux zRw#AI9~-lD=IjHK<_QAVwN34?NcL-33bvHs=!dHxHxsJei>?BC z;M&(L)q<*L(uNWUeSVfAIkkkJ#|G2!%MlcHCTCT`kbTDY+%_w7X0sq#2+~`#tg^@! zrSy{=G}Jlfm}e5zbXps=+oWRswWP#^{|&cw@42n}klPMlXV&J$uA8;0BA+!Dhmq*E z(c(*3CpE?ZB2XEE3afUT$QNgGD||X`Yr~c+$|*2#Lyo5G&Fk`X-!XVvl9l#mj9qRR z$kKLQIDSZ^4=nMyq7rQ24A)9&Yo!86&kBomGK&Md$YZtTG|8!K6e)OHc8&f|Xm3-2lpN~rX zC&4E_xBWkarhf7E5~~F; zi?k^VQxaSf@5+<_`D_^5c5BT5VTY=fivpMv2~3iXH3CVl^OiKs!D4{^+LbBk z4bG4hSP#|64ks^SCP!1?Xi3axgTi%>SZ2l-LokiW<$x8$>N8e8rf@gaLchS-Rnt{7 z7;9w`ntKGWIT{-GaCC(BrMU`(2qR#n2Faezaung+n3AF$Fd^Ngl+e*!!H8lm28$X1 zpJXJaWXiA>wfkmZPz7KdZS5e~Aj#LiRi>l^uFEioq;v%dtO8dG^T@&Ll4Fv&RX|Dr z`Dn0&W!G4p19(4^&2s05i^&4t%>0 zS*p}98+UgAzgJ92NC-Nz_t>4U8KBEzb!p)d!fFLp(WYvwHo>4Z2xcV$8gqeZ4b}So zx1uWfc>PMR7nu_El&VAC$K{`o80306i0JwDv<1<;u0)H3HF)z4ZUeY z0e1rbu(bU$V)UFqenRr9V4d11=(CEMy&TZDf`3g|Lr)!krC&heMh`HJ-ZOANuQ6I# z;2zuog?<`zXO+~aJKjMfmgq$S%$eXyeGytkKb1CqgK;Ws3-3qI4=nYTX3G-kYZL6_ zBc?wJy!#HEnl8Gy2zaPtq%+1h6jQplkrK!t70=SgEI@23SSJrS7fkW3BdlkqF+tB( zMe=Zo@E#Ses~gvGBw>8k0!PK45yD_SMVY(8^OL|cYTlbD=5FEqc&C)Wb%g(MZEy|> zy-PH;L%WUh)wS~Pf}RS%e#!~mGN6ABe*)o-Kc#^8(+=@>@?se~FJlSjw6~b}4ERM5F?+tf zUPoonbLd82N}c;6KcQvqB>O?xvC?i}A-VU^lDmy>?>7AbF)_gq6h|&$vR%ngM?eYleQ$&|AQQ?S?Rup*Sc(wV()B z@!it%T0;A3Z39N^;S{2UNN&a(BEbzV6ptErX4wJ8qu4s??h$_Mdin;A)POZKV|^v4 
zeZn_~eb*8Dx8#{etA?ZFJ;f*JGu?-!%m%)n{(uhQgwD`drNs--VSns3jsLdzA&3Qw zQ-6_XvB04jo_E-q5{%`sf-9_F#+aV%Y2mgK{Vnu&hvAvB$U#3O=8d8REN})jW+#Ek z(`+PsU+^)00}gckRxlUq`c1aYH>|F+`Ihwd8}PLFjeEt@u6uKn3z*!7#KU&vHDEmO z!78#ybe*=`m4b_pwMS?LTTT|-Q)I!c6uk%_h% z>00Ye&&n^a;KcdIE_aa#uFovI)$=L8Nt-4LC zLr$J@-N22Mu^gpbjGcO0f2eE18VInPz&08Odjf|5OMzkZ1oDt8$43~!Tw{b^3*v&F zkstysNq*pVkXR(}e88vKDa$Gz?cgl|`~dm#H%z!?0jUFvAg~6sLvcrY1%J`~#5#j| z3|HV^_ked?1LAAp8s5Rzgjw2xc_m;ulzZh)z^X2zM0w>((jJ%ri!eHj1lr-5GY%S< zBWT0&EUZ=WkFOO7|CeiJ1L~#OPV9UFDFh=$jRjhbt}B6F*lKs7zMcdqDvmnA3kA*v>=^n{!l`WXA zYelKJi0-q3W}3}_ITD647mYC!adO@Z1xfmK(0}5yZpaa+pCfJ@_0A+ioB~Xreg)PS zMAGst(VxJOJcEC3(6I&QXkcCNh4o;t#$xlAl>bFKM&pG2qkmiK{3g5;pP||J_#2OB zN8)@Ps;1NfxMy?D_~gC*w7nzdNgVn7;EL#HR^&z8vt5xlVdD3~JTGZXHm4@FXE_;YWXxV8uC?$H!XTTFy1CzLokG~ zacw>8{Q=!dwpgG%=6kJJemGT`n<^n@U{zHXF7%>(z~l~?=?qg3=)&Gip7mN0tx3>xKhe^ zc$?@u9dB*s0Ua+s^;R7(uKxo%-gSlEM>9ET2iwsF+pNXW2SJQ}N$#(!ZWs2CRvq(c zy)7(aj7;slWt5ty$B~1z7Ili!qKm$DT zIMyf3#0CepMb!b%1_`oOA~+%#?!d_SF)r55Xqvy>&*-STWjQcJ=GOpfVA-g+!`^#o zqeTk*&GI*f=``fAEp8`&G31T#9k52nvY33{I^qIdc{IB5C$@P{AGNniZK&(A!|L(m znSt@uVS9EXOFsKAtm{coHdY!l=Anjq=VK&>ua23Sgd|#6J-?Ickl+&c2(BdDxoN36 zJ|`PLC)@Ek_l*1*_7u)&gP!8@ah=zm;@{Dus^biN=NTZ^y`MpOLKZmE=;vkt_Y{<> zok#lp!VL6r2I#lI?}r&k`v(2fvola03Hf!K&v6k5@UmL%_v(DaFUy{`k9=pC zUvOk|lORFv^0ncWM3hlU#(RdeZfLEC5o;Uu0q2Ru#);hK(yp(^cF}gUX%{Ia<_a;> zM{V<)v`c0-wC@f}ApgJhblc>m12A;Q9tBx2W~;c@Z#gj-fJMG|YW9!z6&i;b-d?H~ zyha)2(z>^(%M`>x4PQJdHH>X{53L}=G128G_C83Kz}#X9!d5)Cf;>O=TgI=!(rKE! 
z0A`e>46f5(@pQ&Oo&{bNJqby_N*<1+i6DNMeMRA}i8r8~60a>ECy3epr8Ns7EqkeB zIqaoQ7OiXxggAm8jnUW&gX6Z)V+ZsEf*xZTI>UrsH)f80b-S~ptKXd+7bE4RLFfk# z3HeD{U$eH02?55rIQc=J&YBVaFe@X)?yM0f3s>8Os?Bf9DcGFoqo*<^;K_IWlre*K zde01tSANZ5@|kK&Gv2RHdEvMHjx##7zJ!wcqGOXaV`}goVJko&tOWy>T*@6_!w5j+ zt%W49bT*LKR$CMqxa$^Nt4k=UE9gLPOvSo-z!UkFDlh=PG;ek6sRJ{RFwO~1rdL?0 z+Pms8>#Xe&$kt<_q@kLErJoum`whNU-&E>4`d6&8*NA#AkKyk$~3YbVl-3leuvKq{o0ocL# z(zBEmIcZGd0yw4FkaYkN*jSgUl2rXTcvK-sZ!r+$DwNcf+#1$@Q^kVN+Is<+3>r1S z9s?A5Yyt4ML>!?oN3n@LptxfzU>NxkO8S=9ubgz_@cf$+X~>ID^m;Lr)IAeQYB4uf zycJ3sUkoL!Hr67G(<)*WEfChE0_>a<;6PV`%b^4`8DWuZ-(tsVc_7(G4>$VqiUf|b zQHH?!5lZ@>7)q))p`;3ks?1lRq#=7GqUniHQtYcCMKr&zeUlOPFCIF=~&RXAWhthpQaKF#<2LF9|q{ z5n@qB;6+KX6(IC{weG~>UB4WzG1?+w2Y8JUQld`*AxfFnEr+iFLpK9LOo&PIQ@~k_ zs|CHwh7m+`GXqhu)O(x3BWfQIg34qL3Cg=57*auE*)jNAR=515rt61nWrjkmZM0)p z(b7iiYa4b6Jm^cPhF~LzTJK6TY-0rJ4b_)fMNoE3Tum=GgH|m4UBf<7ouMQcUbzWc zIRug{K@64xN9-E*l17jWhDmI~W%#LKWW!42ZnHWVT711a(zmOz8MS{t+XHo)rtJ1rcBVWz+;%Z~@8O?Ie zZdsU!t75imj0Gz(h74$h)G}eTGdiVPG%N)S;i?K4=LUoIK-fG$x6xQj2!-2iKHU5*r zA3$%qYJhmjAN8qZVN;OiAl!%RSTCz-nKV2mse$p#ihEVL%+p98?-^`ae2=p~7?P zpcyIU137|`F?AXfBbg1bl7pQ`Fq4$CDMK1?E<5eZmBIFvC8!D(J`<@$-2mw{TimO=NC|M?gr0jSes25wYjHgqTj9 zmW^ih>899Mf%`G9^2E&E=r6?=;UK+n8z{fOLPZwGLRGJ1?6c<>=hh$xJK%i zp>P^vwIWM5z-59eAMuI(nFRz8`7wO{*a#LWE(3gE@ho_q3ncB=i z8vo4xzC=MW2tc9p41i|Gb=$7xd9HV)g6PpjpTgp$$t=+xLf>gcpcR1N$M^=b(6iY3 zZ;x+qtg+c**%9u7UIPoxeQ8IN;$$@NNmAw_SHE#qOl+2D~^Yudd77LP0H|oV*t~ zYO3j8%Dg^1SkOwQT7xHzNvfWxphL~@NI-$Q9$U@AFS!lBls!7h94RWa0}`P>@JxqY z#;3KVd1g6z%SF4gIBQA1Q?WaB#;uf1FA2To3b5;folcgy(0C7=Yc@>r;@~`iUpP>X zhdEhm9vkCv1fdOWIXUZWM=r4S`3WlCL>gzmkyH1Xir2d5 zl}M*Qd8Mf*mv5KsV2#Rj;^ zmXt8i*7zySjcv;fB{>uyVQTEh9WuxKDy`{ZIQHk1f3kgBO5 z=b;_df#@6Hc6-@}<36k#eE`oCb+O68p|BCw1`7--RnV9VXaRQG!|99Y&}9+cD*RWI zRxOEG{qt!E-`^ag`0}ykwxb=XmFj^4sMk@f6))_3d2&`CEO?i`yR1#qffUM~hW7h?+)ny-X zg!PQ`AyA#Cy{mwxdd1S|84FT}h;_bz<88(^k1hxx+NuEOr=j;LbZvU6-uTNt5(j6< zc|HJaF!#DwfL{UN6)+k;VQD7SvT?CT8=^w(kw@=eT`_0@W{$P1ftOU5eT)DQJI_ZC 
zfM=ERihN!IAZ}ndGNy4h5m0V&%{BBQ!A!Ox+uU-&0%8U0$z#TZt3C?%5ytsY0Hr~} zqLc_I3Ygr8**!zdDU@C?NuN_01QwMFz)8XqR6ZtQyv!Cv+3WLBxsN=~hlHbGqzM3t z1~@sa9dAJ*PjEbXl2W7FKrvWE1C!}t*r*MF%>cpXo_oEPZOxb(a!g%fG^e!yyQzj)4I=K)Ozx)T7C7AW)JNxV7CFv`!p2jIu)YFy?Fp+| zSwN=-gbdx_!~qtB0Y#U(1&JZqtO46<*gl}N@p8|l;6BVeAITRFe3B~-1}kzzIt_@= ztK1#l0{{kBA~2K#;~v4JxFH z@#vURVo@=`;#t60wpM3!*Dl8!^HAJJ8s`IpM=>B>wwMgiC=)T5wsd%~riMPJtpf_H zn1%ogfS_R!pt=ID)Uou(WgnXRXbb(&u)_|}X$OWJ^F4MTR7M9_IN&h>C;HGdQZf%S zYcwS|NC0p^Ul4I#^+EIlP|rLc2%g?0I3`L0vV(;yDuW;is0DB&U>Z0a4Y6_e2z(Vt zRdyBj3{X|q?XnNceVBPZV(k&gB2Z{J!N7F1mMe%^9aAXBOv!sIkPjOgU_!vZ9sa@L z0X7BjO}@m5(s3Vto{s<%R#;We7Qpi!@CQ%}*n|okidBI265!3KTL*YnfWb_FZeg8N z>&>cs*@x#o(mWrOm^m7XbFUI)Q`j+66|kQjwmCU8slmx^9nK&eePA09>%$2Gu2_dl z`auz-&_|u;1JJDO*`!F0%rN8zyb${ItBBst&L!j|V5b39Wmz%L8JtZ_>jS)yxx^)u zXB7iZ=J^2D-J!xzkUuss9RSxLfP0`1qQYS;879c?7eNCx1HJ_Kc~=3+r>3s=gXxFX zlYS`ZEdXi+fQ@iSY!nL1`6>^ukW@pDFkr0V&{xza0}G)QI!e|?@@fFh#qr>DGfI`DA8NjfymDM zeQ#fS1lTJER!9pi}LR-ODh3)U6m`|Y#6^?nAi#!*t; z)sJ;8>ZDc+JI>=!NC%#6c3Sx#^D}_U>OK7oaEN<91JV`-U}G-v^=74lVf+m555ZEB z8*gKnmL>=~_uwA(QqqrjDbcE7wiEUU_q+2Ia$%oN{G9U9Yl>_Q=RRI_7AgI%AnG#u zYx>elYqP?bNs!+kI8n65qAfUgKhvm&_bU0By~$D$#WVgjFP=iPNeLBj)Q44w7}~?{ z_fO=*r!FS(EWt5`)3aAsq(}zhb{wb0*x5MN;QeIN{lpg+Nh%V1p)4^tSWI9h za5A%Byf5ULyaB(NLqPsaOF^R(_eOPV4RYAs1y>6BGRhRqw7$^QzDW#l8pjK%J`yL- zUS_M-`R&_{=7n|3`?5vLx?!x1*rfBABfMP2*~^k}3zJzYPOLQ&O>xS7;(s_w#ggT$%~XCIhZ|cYP*N4^Aquq_JrbI)H>g9n*?EhHV6(;b3jU< zH5~(BWo_B#A7LyfH&$jFwX?G%QcH?j(l56}JH?nnGau$CMby}@f@Qu4=V->vh%ZL+ zv6+U1_pGRqyoY3+V@%O;?TgI^Od_|tb+IDTe@9880r#hpA|fzc9mTMEYJ}jTh*P9XfcOlU@K54kRPGd(m5(E)wd^~58N&PrHxl5)Ayx8~Mw=%>8-hIXZq!wt*M<0ooa9BX&F zHLbx?g?5|Pzt8f_l6IG3x2N!PdyzKC0pL<;L@=zmwPQS8Xm15@UdLQr+gZF z3LFl5F4NUAlWYL10VkMy@aNGkFigMe@stZLXW&Hns+(7%Qq||tLlDQdbPSl1z|v@y zZKM8bm#wS7E3U+VTMLL1z;f)vJSEFjssL5!-|FM=re*j55o4@f7^K@Uh+uF_LN%&WCJD|uXQxwj-h zgPvE^*>m;Vxpw%w^#d+-@_;WppDi25RRH`FR8D-go%R87_H&=zN3^)jNO(?;SZzaOxN6D~waB z%^`8iQp;J(aNr@Z7B4w&t0f5s=Jp%HLmG8sXTlQLZC2XpB(Eyo>uq1=oFoQM7mbiY 
zlQt^Ku&vU&Tt6ml;Su1SYcD~bXB6p|wRSJCmD3>@mXm}Tv27#ia~lcxp;9PM{?_Pv zcPz!<8cTKSSo%yBg0($%&$r|AqQrH*kU@V|&>+6=w9ZS9z&@NtY!dxdOA;QC1qL^R zJffOvucgM-E^9yXZHB~cJw9nl%YmOTIRJeI;ZARM8C9`>=P+!Ql^9W>Tbt=VpQY%?b-2Lx}&IBOA?@jI*Q2medF`z5TIUN2!5eOJqA=P>X? z?Vh*t{5&$`fY)|sQ9N`CtuZ$s-&(znM$aTk)$HNWy5sYrqz$InK*I|VZ9}{@k{B+| zU~78+d^$gGGp*}dyL+>)UF|}`H!wq$F)18dPZ*e4m}hXln<}l-dbVuX|e7(j~dBMeB1dIoI_%o~&-{g)8z5+n6>oa_D5)5zh#YnM6`i6=E$7Ao}6Yp-e z@2}c7D(s&Rx6Y4``C`wE-tUA@H=obz^P@U9K7U-Fa^rcJ>|v(u$iM8N{SF0t9rQ0f z2YKetc#c8UGv@@naNR$zkI&DMhgn%@OXm(e-jNEvOk;v~$Nsctu4CU1k6{07JmaU4 zv?b|_li`ki=kxQSP`aA3J`X(md_LLDT(^94Rr@m=PW!)%tcwds4`L6gc$ltt5n1y2 zeD!WVZ}uw#gy{lvYuZN;)b|Mlrcvn}=!eSl9Ua_;4Qfs=u&ocRMpfv`osKTwnv+i# zyA{kHGGjPCFG{;7cdSpQvwXz65wpMAjk?D>v0TVkslG$1tHU_^8NVb!4isBVyNA58 zU!`>O{HM+NuYI2Xk4k4w2l5pCj6NOzIX<1*3k;CNa^+vmXN*$zjDI`{BY&>aRCm&-E-TA6I zP}>x0t1@6FbkR@^pH*{>sv&E74ZtpIS|9jNe)f}SF)h{2lSeBi|5>v1?v_`s!*2!~d0P5 z@`nS8K1xy5978P$LjgblE7~I&Q9$SmRu!)Ryg8REjyqoNy<{zX%g+Qke7k*<+ztjq z%;ZZLh56cjLx`+X3)BiG&Ve(Lc6vm{joZ4Gqjn|goiN@_&$E=p!1YrrmA81lq$|9- z#Td^f{KeON^tXpE*BYP5qYR8`2F7H3xW$%;#mtH=2R^-xG0i=F>=PK%Y%nH`a*Hhk zx_5HLUp0~=EP5c~JVwa-H`5^_g@RvILB|-k;Z;Cu9ReFt;f^SWs%Br7fp-=i;W6?to;ExxBMkFVuuP` z#XvPKc!EMT%dtIRkLY!|0Iu8<33_CP3?JV){mMb-K1dl2`-ddV>imeU(RS*%<<_sz z)yoOIrw4NB7JP?9^?)4nM1;;QUi>3*=Np{-)>ph8ns%pzZKL8uPE*>u;HTd-vu^Hi z@fXV^MzkK$kYs{7M)mOWIUuvKOtk0_y^J|3$+Uv=lj=zPVc7&B4CHU==)a>@O>0?x z&%Y!Gl%34(JwN0Clt`3>Wnh*vNh?n3Ue*oig=f>g(Sc!6ZHhh`!x_mrVavDVeECrS zPX2j9&8tc*w5IYiY~>`^f~j99Y;1ZIyoIzXp^+LTnWR@Aep)b(6~h<_QoI!23ff-a zmDim+i_|c>D%RH3=Q{!qLEr`3H-X_E;4W!31GhZ zQ}=CPX>$GA7U%9lj7HEQK4pYegg4wbzeN(QSaCjh1g#ikCH`sFAwQHV6S4(xsBTy4 zz^M;kI;{7%=cpw`3?Vyslv&EqypY&CGDF;Ax`N8?4+fU!0KuOAB;)Vv z3G+zq1ukuBCAUoV$P_cB7DxqyQ#fFO#2;64&&mjxy-|NXYto!+(vJD+sa$!XO%LDe z%AI6=IoDS%G1<#JW36$PNSD~aejqc!2Uxw&N+MdC+wf&|yi+rYMy$~i@bCi6Ha$3o<#Cd8DNPn zcp4e7LXr5|YNPAtH+Xbw@@VP3yp3{et??n)TcwdAFwCpD??)7&gKeC-ndQHV4L^g- z5B!@IJRduLHvRhCI+^^5E!b+YC3@I(c|9CNwxGuZudvCs9CJOY324~tRX{e%Lf!lf 
zo8;`>Y&t9M%+!dZFjcyKC$~`NWe)MD(Fuq%~ zby>vgC`@B*7&3S_rkPI!6|uDY>O`BvEwV1$QRZe>i%M@HB1`6h>6yt0A276avs=N1 zgV@ox%+~g0mUCioE2LOe*f%Fof<|$V)8yc-3@OsotpH$^H>EI|dO!%D z_aQuy7H!(nItKmaut*mU_0rtfx0TYah zzt>iP3p?2@SdXO1vDbjL!k35<0T4eXb1&%TGVw?sY0Da8>nhMNa5bT;-7%(H@TKw; zRVns_*E}_iz6xK2bBY1)Ky}$dNqGnlm^z{nZFznd%hkjKcz>I$o9v{xyQWxl&bZvQ9)XA{GTc~;T4}9BXgCO^QS~@-L4wLP zsZ8Sy-0->7D^W#j;uY&X4rQ$B_L8+uBgdSam?ul(d5o2~`D`!JRo2p>H+QhfK@^*O zN7djSH95#5wXq_bv#vhlDVU$ht{hhf7oB3^dQ0dr>?-EOuiqxK<#?YM0NOe$qn#xj zApFG;>a#9QR#n1`=*1LE3_!bjMDu@iqut*oqbf|VPM>s*JNL%3dcuZ~dh%<}aozE5 zQ7rdd#6pCMiq&&Y5q;dEw{BoHbq!LLp$cSY%O+lrk4f0xqe{dE=4Zc(VTr+@r>xR8 z7J1vcMJAlzOKec+7tT}6I}WV^-#T?7n1^Lx^P1iry?WLFj{cjxv)PgCwxRI5c%6m* zfAThzL~WBr0_5@Yom*pLl9iDdLElb~G4AQ=x+PH*$s>_GX^nMiwyTb;@3+aZ5=i-+ zOO1ast60J82yV2CzYT6|>ydRAg7O~qj7y{QbTvO=fnLGm7=b+KFh2G+tZ>?6){FMw zb6NMtT!+qtp4=1BM2~Vq&V2X@*y#EOo6Hz$+(@;6RQtrGo@2VFZ44z7RN-EdcPmO$ zzMaqP)zm~zm2CTABz?I}j@aIzC=q~z;-&3u^x6;4=W06~!c-vX3ed*dO}yuWyslQlejF zj)aXS$Lu-RHHZS6Du|UaWpgm3i#?gezz4_( zCp|L`=dyXj$$Pz`S|nJyKs_NHUl$D5raHE}$gm4D0Ns?JPWM(2y^SX`>@1b9Zr%I6 z-X5cKLu4c+-4k?9g;%;W5;o=OvxQsLV42-q)FMsVWr>l5s&S83^yiEnW86|$YL>2s z%3~8c1zpiUeJk_qx(TsnAcLKlk1v+i5<=adDuSTKqkn-tmWkC}&NPX+*XLbpeN5263p>$LjKxC6lm*A(E^9U9hEfqx<~La6 zc*Kj$DlTvhs#}%(RmM!`uCS_IH#%zi?q)}_kjDs*~G&C zM`U%JD6vkSLd2EYRWi2Hk?WDnHep99cJnK4_s}sgyx=K6-spo;`?@&SP}jTt9yKYw zHTjv!x<4ZG$~ub^h4MYGu*va>_L?cX95D)#-^^~r&-wrd<~Z1+3^3$6`D^7+?%k#Y zRxxfOBu5pv+4jR1o5u$mZDY|4X>LjSos}&E(qu| zK4!G0+iK>8?B~jUKm(BodY|mL+nJ{bxes%lxf@d2qMv@n?H*B43gU^Hp)7S=6>A8~ z8F$0C^qxy?JVjlz#G$y9_UIKa^=at;4bCpYN~foEMD=S?`1UMtZzSwWHj`V}6prt>SHYg%IwOa zUVsS=>03-!hWkNh^3D|n(22QPMq3=OIJ-zZOKlA6QPNCYtr)XtHlvRmgQCb?(Ow&B zgFcg#+V4nZ5>l=2(8*qPAU88HULU%Do=tkDjsSRC(IXc$uQasWmlDa6D3*g*8}k;$ z_zP?@7~Q}>Do{*I^noLm(NB8li$}mI0sq?N=?6d{D*G{T)2K}W1@>2DT;O+`LqdVA23 z2Ai0mG6u`)$~&8gDnws91cB>w1or})oH9X_6Ej3{Pt2nW>(HyJrYjLg&gEf45&^|( zn|OLFWmQB_J&CFN6*hV8XzIa=y5w(V>JjSMB2+7YgK(I zC&8*u5uh)Cz}ahUafjbA=EI=!a+{1vwc2`XtgQQV7OvHFPP%!;V<8h_Vyw#TsI)SY 
zFLrFBE`FEtUSDC8jZs+1s&W}^j&$d8QNb3JEB26A^2UIOEOQyTc#*<}>gv_$;T08p zY!3Ea+1F7%Hekj+HzNMMa;c1e0Ps$XN!I~+nHrt9=EP3wk)RCb_uFJlQXH&|n%cW+ zOPOlXVCF>4Ls|j^sj9YMohO|NB!Q+Do=MpG2Adp^XQ~;IvM!CNKkS^VTPw$y8D$8j)>>Uia}uDX7tSL9uUI`;Bm(`rb49WxPR;!#aTx}}bH!M?T|_oJi6BJfc2 zxr%=F4Qw(IuM-weFs#R+j2UK*lw1p%P_k9nn9HxIDAsdRBGkXPZ(Sk1!6t`Xn3w1z zi18Z%Gdq#N>q;Q5AnLsql?ti7TUJL0#X+K8EoUm9=nXa*EYQX9qoy=!nRLPNEFHjg zT%f=fZO&p!RsEA!H8WMnXM7HtWA!_nCrcf5AC%p9iiC_}FLPa)Otnq}3*~!H^;l6T z03hn9`=hn&5_r5JH~EI($>IeiB^;Ff=o#;~$p?ioK;0oi zd?+PV07fLmH_Wq&-VJN!C7piVCOSQVh?V&!odvwWCac+lvV>Z8kuIhlRz!K!us$Cq zaa=c-CEeBI*%kF2Ruo$y=iU1&&Mv}3+*M~(%OoJ5-}2D9;JENn2lF zlL>S~I%(idY#n6~eG;dz$SnlGY~4IM5cV_8lTy`k}<0EVWc5ks9mCCw^C)3@#&-Yi1=hQ6Ig_LXz#C+==l={8Mi80a}} z`E*k#9aeqE9pB7E!y)=Yk=d}~N9;}A(fH-TPjVVA2eY-FjZwHtzSJSWn00)=O^%_J zc5p;30PZtxm>uiS0aOIsvp`__MzDoDLO`hQy0z!4x9=WbQ8NOlTh#*^yHXcVb0q+d z>WW<})+;AwvqdTy4#La8UbGYQMnDO0ct=M#b=^^oVhVkL7ZQIhZ^y#y4knHNxDzC^I48j~0JGge>=3f`f;}xTmQWtff$% zcD=8#$=3S3^<*rR+I|lQ7Z|XPH>tp4MV#^#>tcP{0|4e5&T8c%#QXT2FK?4|0DeVm zwN9zJpqw2q8if9?60=jY0*tj|#fLJ`2|UmT&o?fyxP1eg+yJsEfp~QiRrZoE2EYRm zP;AEN+q3F9$DYb%W)#ad>HzTH{UztgtRHj2@b-r>W>!G_Dcy*TtFy#2qX`(e8*3|0 zI5E{TO~j#Ut@RF@40^>1cSI=yb`~_`QY}s8G3J2~Z|F8Gt-`<_8g^7}5;X=CseXgA zi?jjrVx=C=DwjLrhmTKPEv-AtKXTWh#j?7Ke3G;RoIG@?P~TBAvOE4Y3EBQuDravW|+kar-N5a@uD~Qn%_F6LAwMI$GMTmunW#ST^)n zWfDs3?74J8Edq-RO1MwYmY3V)lyzyVY&Xz)VTpnBXC^{PwYZ_1igVC9@<~8N+4m9DK3s&ZOmUaL4ytQllHQ)ycH6$+ry@ zt7Ky5p37O*(d~X_3e2-=0wJa(GOviBwO5mU$2sHC`9RG>Qh}M zUd=zf`jJo4PXF<0`svm9)$KV@L3jHJ>_?=zXG+aRa0f%_Hq8Z}Y+>YC)rK}`Ulf(K zdRgZ_9h|sS)&>Bo^0manP%KSR{*_|_rG@3iO7zka;hetPvIpDHWy z z!PQ=H9UMRgPp^bg#6+nM$#uYewvURinIg7E=RpjI?O)h4Z}0_C+|PHQ+NG=76{ip{ILSg)coeov+bz?ZkOg^7Y< z1EUvFDxQ?hOAs2``nc@2UU4s(V!N2izLx4-lsa7zljyic$+vhqRt58H`cl2H536oK z^g{G^(mQfRjyjs{8Nvmp>2f)Ho1zKP(O6s=i@%cX-t%rt1+)YERuYPvDqjA7;0x5v zsQ;Mvf4dKOD*E%^?}Pf0UuYn>P?P94^xe%>SMYpa!qXoui^BY2e^qI!`nrUB>KdYI zBi1IGPS30N=U4S^WUFWJ8AvnoLH+*Q*GRhk{yZHx)|rnNHbYkw@*jQuI+vp3nDWf3 
zJhRe>pE*@otqYaJK+{``ii+VZE>h$2LK{vKKDT{PzG8nb@N+O=QT zY_ejX8HV`z32rNfJb%*+o$s}xGJLc+nyeVggt(gIRYyIDCOeL#&oxndwz@(uUAH}7 zPnD$#Rg8t%02A?Vsp1Z|bzLlo=k}mfkfP=lsh|-vsxx6Po?FlDNdIH#BFD_C-CAO5 zQ^8wnO1rNa<2X(saMzK|l0gCcfr!9Ccc{I;A_NT24yq6fX|a|1vWK-p2e%B?m}Kc8 zhaOaPj`>J^Vw56y_$b-KF=FUguA&&jPGJ{Ra)<3!;rbb9a$pcgt&qk+#!r?!-mxm% zrHp5ND~tN+rgSbrIlyCpOMo2ewTfsC!J0fk4!bAyZdl!x_%x92%|jWEw`Y-iwO~J< zhE_jbnUBr#PCxnU_LINX_v2sub<;a|U;ktLwdO^{jWA?<@eBlAtoQ_Xed_I4)TX3? zib{EoVv#@)PhmXK>29A6N}sXTUq|*wc-d!UrE5A@U2GWyW4ieJEuz(;QqQy)uMaVX zw-oL@R*Y?fmBmu1t1*6-Q57AK{sDfmeMa?E%cHjGY!si*K026*;uYC(?@sW3O^jcFwtzsPz>! z%PALS79p0-Y}uH-82F9wEf$vGG$$Izon#yncA{Y~*2HqSh%Q?Gy?MXi6xS;ZOEoKI zV=QXvJX!+w;vx7C{{U#=JvG=r$vd;dT5=GGrlbXJ;h-)55av`niKjas^E|e%yCsBx zlAy?$2QQVgWB=qz+P3rtr?Iloe2ekcCazcAm>hTCD?!4-d7q&6_F{D_Jl8#Iu(=C?np-=z$N&9|SIJ%z zWp7H9*!QSyF{YAu_O8re@C=UjB?9`vqhecxsUqeX`;?EtJw@eq`d83=BypT|qHCtBUYf zXI-%hV|li@CvOUir~E*-Q+2T|E;5Hv#CA4=dT~#k=Vg_?IG6|f&8An^xgY?TnL=0J zY7v_}Vb7H!EmoDzdh$lWh0Ice{I*|R(v6@r&c3>dLKb(mQctsD{zc⁡~sqblGgo zrh^!5rbv_qh*gTX2%*;Su^Du|qGwmlPux=4S(bp%`*{?YMqzBMCOCr z>M-laauN02B+47HXFhg1YF_66iN9cz!|c-$s1$aqyk8#x!XdPw8@DbJ`_e*71&Q&e zHHgJ<9(3yt(gg=28&Q%;IBAEX-);;$a1TeWnJDsT$MZ-nWc}( z4i~hd9TiOP#FG&_kUQKTu*fOv;Pm7~B;|^n)E@nemX1h;O01Y=lo3~iTN*5Ozhg^> zaK0i)c)}tVwGLz(&TTF=CL-pcRh=`?Sp*QW?#QM(SQQofh_caEGE5ku#S0cWXxTSN zH(1(vR1~9ZOsZ$rBB@+&E<$F)W=Em}a~+9CFk2Y$Rm%@B)UyP6ws?6&WE&S0S=5Cs zt)T?Psq!ASaOfgMYkLub6s3nKsm;!MBb&k1&eLK;l?WC!;uzSj_pTB5%4fxn*|sfm zp_E2LhBeHoPEX6;sLG3Jg`9^URqR;Tar~U2)ux*kYa=1deAt?6#-Ow;u);aPD(a?- zocC=qafOL?G9>>wLG3TtP#J$@Ty>5>g73-Dv zmqEa+Rk$+I1W+l@GZ1Qg!Y1oZIFxRUhvf^rxy#vr3zK_q@zYG{7YOOA*b6s)HC*Dt z&fCimbPY-8qX!H3>uROMKelu5X-nPJZ>nAyShbD4PHlFfpjOH(TUV^&t@-C|^67Eu zsN6F4MVy)_M{4TI%u?p5W^~goJ2Yn^%dXR@PFNL`@)5o+1P@r`zXRjA<{r%k+xVpG z`yka52#6Cu1JI&$@=jBo6vs7@sq#(!0p3kjm#Tx&B_?!6Yja*5NbTZ|C_*MmbZoM^ z-YAjhBVvsR9h|7Z>jjG(S0=u=yA&1$>yuK%=h(KvjCCX)JhBWpq)KV80&(P z`5Sf8LA`vg>TS02_=1h1gt{@mlB@%6os)^@)0Jp=Vhc*CQ-`v}GOQo4$N>2cSRW#a 
z<&o&b)(e3_3)sVV*k#f7hsK(;q8%-b$C8UM`i(m2)wKF{0qYS#AdNkb?^1Pwj&E^e zZ6{-`Yxo^yp`MrZF2JPqNN+v;_AIuDGDt^1u6)hBoS-pI!+jbn!^-P+Z{c=Vx<@5> zu7q9xfs>0AORK6OJtctd#ETpWHx9ue&ncEvXVb5xZ3%e9Qt?!o?JDIiPxRJH%w>gw zed_lRAu|ayRA+HY!o->_5!d5!qFVf5mrb;~IrJ=Cq(^QG)|l~o}p|o^^ZU05K(^*u(AD#Jh(*k9~g-$hmeZ`iM0F7n+-W4 zUr-)rw1|jz@pRlBW#J>J=k=JKMXVnn)T!eWr^%t#&4y*UbJXGJW;i_* zo2#Pa_hV@!JWSPbPAhs(mWZKq4O54AKVXx^q-u9n^ls{|I$?iWPtrXUHcz$LShd`i zj%vG6=#ZP&!w`Rhf1zfEZ+S9~kp)={g-)6#dBS8v@6?v&zJJQPf0 zS9`)HM-|1EbdgI9Ra7A?{q9m7rN<299$05~Z@s0UAS<1GdL7R4`q#ziZJT_~trAe0 zY96vWQ82)#NjY>>;Yc#6i!XTITs6xK%Frhe#Jcy1jv?XTW_lJ-uPUqRGu23N;Z0Ss zVl7l-kh=LD)7)61;>&Z{*s9TXe!(J((JYHDV(i^&Pf0b?RtvEq^AHCobIRKg)F7}@ zrm7Jm4xWCd*7XN0a;fK{G9tlYPWy_$uC&4sFu5I(B?H`|N%J@$7=5V9z5w*OGCN}MZUq5wcA*`gWg7I8h9Pke4AKh2^gKdUr+?liBktT32@xfWCLsnn-5hVdv+knHYa%HG5DTW+Y@8nA z)Q!PVle+eTO%7_>i&ahybVq=6iZtA51`0xr0`ww+yK^)k>xd!m9bSY-Mep;+fj>QZIX;kiQngubPcOO&Sp z7Pg2DXJHm%tzy^R@7v^9%uzL>1l|19$R@o`l&tG(v9Sz^WaO~{y#!k+SQ;(vJ;G7% z`6t-v!_0Yne7_FbyEs}kuHx6(zkCxmY_7T&>1s1RMax47m8Y82+5VVC|9zXxW;=M3 zT4Pv4XT{D;jW`NZr90~mTxjtSf&g=wJ}zj85U9%or}Mr|)_qp>s$BZ%#VlcbkBG#o zsIXC(!P+on@NO=v-V7>YY0uXiw5BJrF2YsjrmJaSw3Lx0^T711VuTME+J4xr;KE7l z7)NDm$F|zNIfeE}7pGvio}=_c)e;jS`zk7s2g(TF9-EWRS-LKV8j7as3UDDH{}M08 zU&w=-s*lc@5K4z}Wks=Jinn}iP#qRBM#8B`NGb3?iMndvsRAh^+b38uV#-Y&P;Bfn zLqWp8dxp{Rvk|c?qXy9^Us)l=?!vycfD$x{H%^mNv^J$kOSb}mRo+z69Af1cKKCJB zF|%rlEX3cEN}EaBbk%%*;Ort?luOiK8g&8OcwHEvazw%Jtd+H+J3!xz5-`E2`2Ri{ zaN!htg7rw6oV_Qk6~07_2!Qyxn0rGvw~5*JN?Z1vM^}M{f$Ifb^}v{(V6fyVs#5fV z*Sx(<-;Hn5z0H7kpt@?Ql{$q7Og*ZIwj!U!at-kSzHgIt+?0np3=ZIe=Y6L&qKjl3 z8?6R|O_+Biwd)<~F43{VCZ?RM!qq2i^11U^-N0BOdbPi~!+A#M8^nW#iD|Q{`Iwcw zbjQb2$MO|G5$Oe+9Hh9Xm)vyDxWl&|iHIpP+}xX*8|^}9I0>au^*n1sg32}NOydV` z_(JhcR56Bl#oY5$#;R^FS?e-ut|f?hvK5}kSc#kOVOhdtEdzSb4K5Kq=yho(BUYl5m zP*JgZt|g+6C%T;`R#Vp?T^XuCcD8Kc_4t^A?Y*i*9AJL-s~DCT40_5c9b=J??I$wf z;@M(@LcegHV%`a875FxY8-87ufz4}rcaG{J0ys*u&t0-#F|xjIlVc%}I%7-`|7KRP zfY}jTuNQwCjxYWH}_q3IvWP&Q(OFGR>Y09_p&Q@F` 
za;juoUysPgZF0o+c14K*9276D!$vP{^Nhy+&IDBL%&8uVCFR_{QdBf&TooGBC*tI! z_GyJx>XGj-&luGGh;4J6XHKji9TV!F|DRLeJhYF0*HsF5NSk;|uN2O|q zGvdKP%YS7wHXF-%#J{y)xZyh!`A7BG0q&^DL7!an9rFkttW)K{;CN6b!*Isg4->n4$S@&D* zm9oy_)YSIwCv0;3qP^PIuzQy)fmMC0p0~+J zKwS=$Hy`U}1rGw_-JqT?5%3-s7FVoBC&yG;jx=tWm?!t)_XV3QO60^U7!N`SWn_zJ zP|a!%ltqfx?)*TI1CENW42o+)sN=EvL>62Qp`Iz-R9S70!|QfkWf2kkiu;!6M{SoI zVf42GM96X7?*wD`4`jh5=P?(Y6U9cOo@7Aydyyxu_F78J3WE%uK)UFsh+Ay+luFdh zpU8qk#HV)Ke<{`Fp5j%+EFh!Ri^^(a_nBJnWQV%AU z(H>qg(V*ccvtx5p;BOrefC0ussnUnPo}+;kb`QV6bHNsjo^6$3$T}O6voZ8uyl^6b zR8#rIN-?`NQe761YymB$bLJaaa6JD|M=;VAb9P)o@C#nqdEE!$6x3pGk@9#(zfbJg_EL?FQ44sDcbmznOru5^#v z)Jr#{TMXgqpH&ajCN|$nG*GJI-P6+C|!k?I^Fe+X*6|nLfJQoD?86VSY)onHN zLiS_kvq1xq2zu>&aJM55wMgE~b>!wqX{jOb6SsRrMJcFH)Ce}kxT>xpEN5I5-_lyP zQhAD4v(!UzD(&$Uywo3|{|}s9gqe0rX{-6FN8#H&!M%~NEBQ#r!z#!^5wbWt7aEj$ zO=^_rYMl3iO^z3bvP~Sw3z%-=)m`cVwvqt~MiI?{ixp4&q{@T~HSWk0PpN6!6E-=D zzXL!{#EK{_UcVlytLk)D)nPV|#8js=s@56A|1`>Io5f`gRQ&-o9A80Xve*Zl z#hK5}HK&fe%~Kfm9?Er^$iPHRVVwiYJ3Cc4bUETmdR_F!JLqJu0!xz(*c4LlbHY(( zS2ooG3`j^_V!9fbA7m!3m=OS-n5$&;@a-077lF1^#xQRM&G=c;U=(F^Z!LN!FLGt1 zcQlnj??FOsr-jsUspe~_WUoAs6B!uKo9>@`k({X_0G3wt$O*|y1u3_wK(Zi;Whd6g zxI~S81Dy;;AK)J)D26Hez~)n{J6!a|&0&;)f925n1`vo`yfw@;Vv|6D^%fBq*c}B{ zI95}vWQv&Swd_3i2^Ukb%DaokdV1`Uu5jgjSoC}$3hEWqg9|wXaGS!jMR`?}tB9Ij z=5(aNCdR4sZbVh(?L|ZtqOWa&!1+3YyMay)5hu#<8N9eB`qqWDt65po8IL2zbg?0c zfMS+~KfM&PDk7*F#MFHYojjKxs=|Sh$kI-r>aQ_izyx6J?c7 zMWhvxe9&VHaj`QM_j(JRJm|$pQCTjn^cGH^CMwv3aQW)u3|8n6o@GoeCQA}nmrb=g z&D|n{kIlipOY0cI#|Dh(=Yq$-W=xs>Hvr!DKIl3i&Y?o*r5L`GY9uIv`TjZ?mE;>M zqoVev+){>`)S2E9a~GxnLCDH2nEMK+0!bjLi8gUN@1T<%^BHnvN?DhJ*Y8%0*+xm* zBqR77tbf?kDaNtUjci!uFpsK!gj>X2xZKvXHu}ge)VaqHK_(V;NJ*Dau`cM>a{Rhv zbg&3CI;_v6|Gfg8jK^!Z#T^W5zA0jckuyuC1q}$B^+5bQ)MF zp0l(0j6eYZ5l7u0%{dN^#~osmk1MwFZAa!MhC@3|Fwzibitd(jD*BIZ*9g)xh17_+ho>Nf5M%cI7xW7)` z2+RP~Z9K$_Qc?+EcvAeqIJ4;8&}NpxuE$14raKTZGv1)HfE(y!6?;&WP>Cke#ZcY! 
z2#*TZ$IT$N>E<$pGppY#Bff1U`Bu#Hs`VCU7w#hN%H6AE5|GbMy5ivm_BQFxa!ep^ z6iszdRykueJdsh6U(nCKzfKOhfZdsxy`1W0sYFj*tV=wYAy6L@Sv>G(E30jZBP(U5 za&LEvr1b_m8AsQJ9U9hT%&iEbm%u4Ba&ZnYn-@(x+Ya?B9eJifgsu4GFp5y>}e8uR1-IGT23T2)6li{pY`~5RUbatWgXgWelC|B0cQ49wDc7J6Yy`BH^| z!KnTHb+UIQm5m`P4saha!|YhQ4xl{Xo&^Ha7o0835fg;y&Xa3A`uM8;7C9q;x|z)( zu`^WhG-n)Oud3KNqrGBaG)ttCZZmli*o$@Cr>Z;3Q4Fp&kQ@_@jgi_> z*`1}uImj($HcLrin(!}<^KAnT2f}@So$Q7zF3LAXb~Tf^YEoo^!yZk3L4+*#oPvX! z#W=et7pw%Au5!J%(8;6Je)5iJD3$$g1}+%DIzFTXi|H}SN34taDh~jdFBq#ClMwG^ zcfNX^tOM{PVyn4R)&*s+SWzeRcNCZ%%1FT2u}89|2y`3|!;Z*xyhev0*ew{O$o) zQYG$tA2^sHjHNRELz*e$Mq|6YSgh&eEp&26-wegj)S*0rbx|uO9aQX%)G=?WN#kA2 zYNE1VJ+t31PcDUW55DS#%e!)D^k(BYO+_FeVUwAd9UG+d+F}DOje16Ljj}hk*mP%7 z;w!l0yDq7bjb3tR*y!X_8iJJ~zH^Of&ry-=f|D7eN}d3UcT@eT!#=OKh`aD&9`L0{ zRBXh{@VLH>3{60^0lBGoyvCk4{cE9T&=7Av`^u;I>UFY5PVtHwng&l%bYx}s z6Isw2cUQQ%btTSZuli@9vqhWgW8;Nfb^c$0POiPkhfuYIaj{?>2F9fJ;KldE?-?7X zQHMEfTdlZQk$`_=^mPkAIVc2~cW#Vq0mix*fPGU@tThETqp5Taoo%ePQ=I_bqD(ix zW7rqs`|5SF>jf;%Iy&$rbd#^9^K%Jt zTcIn;-loV$hAOUl1;zQK1XY|Y^G!Tv`P?Fpk5+P!Xazv)-s%PT00&|?9j-Eq$Ig;M zUMmBN7LCt}(}jd0l)qLdU#~ttjbA){nV+DY{`qP6;;H}I^*K;Mcl!bCho`xE2xT;9 z28PmYnh8FQiIQhk3(}x{k(AbIY3_4XaQf=?IbhiPS`CRCayjP^dHF{A5Un^%SYU}0 z#c=M1_*AQMWNG~ zX%Zb*OYtNNdt|}9(QPVU*xJZ8fanSDZ-;xtifmOhTiv+>PSfRbw9-WrqN6cCBN~5( zv1*O0OeN4Z^jiuDZpc{q|A8$~HKY1@+`o4p&?@rt`}aY9nLk%Ra4rYYZ|JWRv##Lz zJGrYqS`>x(#r~?)RMmBHXXn{O)rPN)Hyy60=gX)1G;^yq_zq^%^F{Ui*G~(&{eEd3 zI93^tB{oBq7xJ5aejZEGaZI{Km99~#$Jdy$wARb$z%;6}#n07)^`GNI1-$h8<>?j~ zixc75%UpHoJ+DZk>$7;{njvH^Yt8@tTOQs#d^euHeSWh2@J(W5$ji&W-o1VQ_S3^R z1uq@Re)I73X?%KqdidtyPxk5gL*^?k81UoMjoH_()}35aB+>KtZ7`o;89Q@ToH!?kEFG9o<1WV`SAE* z*~2&C%dX=&dGr`x{n3`a{eedR`S9r*79DzKBa$9o)F16C6KrP#u^)$564G^}5GS4?nl~;racvkI1t2$@g!cK0SVXyT-e3 TCAs$Wd2js-fauz)YDEbEVqEpL diff --git a/tests/waku_store/test_wakunode_store.nim b/tests/waku_store/test_wakunode_store.nim index 7d1a44ecc..e30854906 100644 --- a/tests/waku_store/test_wakunode_store.nim +++ b/tests/waku_store/test_wakunode_store.nim @@ -386,7 
+386,7 @@ procSuite "WakuNode - Store": let mountArchiveRes = server.mountArchive(archiveA) assert mountArchiveRes.isOk(), mountArchiveRes.error - waitFor server.mountStore((3, 500.millis)) + waitFor server.mountStore((3, 200.millis)) client.mountStoreClient() @@ -413,11 +413,11 @@ procSuite "WakuNode - Store": for count in 0 ..< 3: waitFor successProc() - waitFor sleepAsync(5.millis) + waitFor sleepAsync(1.millis) waitFor failsProc() - waitFor sleepAsync(500.millis) + waitFor sleepAsync(200.millis) for count in 0 ..< 3: waitFor successProc() diff --git a/vendor/nim-dnsdisc b/vendor/nim-dnsdisc index b71d029f4..203abd2b3 160000 --- a/vendor/nim-dnsdisc +++ b/vendor/nim-dnsdisc @@ -1 +1 @@ -Subproject commit b71d029f4da4ec56974d54c04518bada00e1b623 +Subproject commit 203abd2b3e758e0ea3ae325769b20a7e1bcd1010 diff --git a/vendor/nim-faststreams b/vendor/nim-faststreams index c3ac3f639..ce27581a3 160000 --- a/vendor/nim-faststreams +++ b/vendor/nim-faststreams @@ -1 +1 @@ -Subproject commit c3ac3f639ed1d62f59d3077d376a29c63ac9750c +Subproject commit ce27581a3e881f782f482cb66dc5b07a02bd615e diff --git a/vendor/nim-http-utils b/vendor/nim-http-utils index 79cbab146..c53852d9e 160000 --- a/vendor/nim-http-utils +++ b/vendor/nim-http-utils @@ -1 +1 @@ -Subproject commit 79cbab1460f4c0cdde2084589d017c43a3d7b4f1 +Subproject commit c53852d9e24205b6363bba517fa8ee7bde823691 diff --git a/vendor/nim-json-serialization b/vendor/nim-json-serialization index b65fd6a7e..c343b0e24 160000 --- a/vendor/nim-json-serialization +++ b/vendor/nim-json-serialization @@ -1 +1 @@ -Subproject commit b65fd6a7e64c864dabe40e7dfd6c7d07db0014ac +Subproject commit c343b0e243d9e17e2c40f3a8a24340f7c4a71d44 diff --git a/vendor/nim-libp2p b/vendor/nim-libp2p index eb7e6ff89..ca48c3718 160000 --- a/vendor/nim-libp2p +++ b/vendor/nim-libp2p @@ -1 +1 @@ -Subproject commit eb7e6ff89889e41b57515f891ba82986c54809fb +Subproject commit ca48c3718246bb411ff0e354a70cb82d9a28de0d diff --git a/vendor/nim-lsquic 
b/vendor/nim-lsquic index f3fe33462..4fb03ee7b 160000 --- a/vendor/nim-lsquic +++ b/vendor/nim-lsquic @@ -1 +1 @@ -Subproject commit f3fe33462601ea34eb2e8e9c357c92e61f8d121b +Subproject commit 4fb03ee7bfb39aecb3316889fdcb60bec3d0936f diff --git a/vendor/nim-metrics b/vendor/nim-metrics index ecf64c607..11d0cddfb 160000 --- a/vendor/nim-metrics +++ b/vendor/nim-metrics @@ -1 +1 @@ -Subproject commit ecf64c6078d1276d3b7d9b3d931fbdb70004db11 +Subproject commit 11d0cddfb0e711aa2a8c75d1892ae24a64c299fc diff --git a/vendor/nim-presto b/vendor/nim-presto index 92b1c7ff1..d66043dd7 160000 --- a/vendor/nim-presto +++ b/vendor/nim-presto @@ -1 +1 @@ -Subproject commit 92b1c7ff141e6920e1f8a98a14c35c1fa098e3be +Subproject commit d66043dd7ede146442e6c39720c76a20bde5225f diff --git a/vendor/nim-serialization b/vendor/nim-serialization index 6f525d544..b0f2fa329 160000 --- a/vendor/nim-serialization +++ b/vendor/nim-serialization @@ -1 +1 @@ -Subproject commit 6f525d5447d97256750ca7856faead03e562ed20 +Subproject commit b0f2fa32960ea532a184394b0f27be37bd80248b diff --git a/vendor/nim-sqlite3-abi b/vendor/nim-sqlite3-abi index bdf01cf42..89ba51f55 160000 --- a/vendor/nim-sqlite3-abi +++ b/vendor/nim-sqlite3-abi @@ -1 +1 @@ -Subproject commit bdf01cf4236fb40788f0733466cdf6708783cbac +Subproject commit 89ba51f557414d3a3e17ab3df8270e1bdaa3ca2a diff --git a/vendor/nim-stew b/vendor/nim-stew index e57400149..b66168735 160000 --- a/vendor/nim-stew +++ b/vendor/nim-stew @@ -1 +1 @@ -Subproject commit e5740014961438610d336cd81706582dbf2c96f0 +Subproject commit b66168735d6f3841c5239c3169d3fe5fe98b1257 diff --git a/vendor/nim-testutils b/vendor/nim-testutils index 94d68e796..e4d37dc16 160000 --- a/vendor/nim-testutils +++ b/vendor/nim-testutils @@ -1 +1 @@ -Subproject commit 94d68e796c045d5b37cabc6be32d7bfa168f8857 +Subproject commit e4d37dc1652d5c63afb89907efb5a5e812261797 diff --git a/vendor/nim-toml-serialization b/vendor/nim-toml-serialization index fea85b27f..b5b387e6f 160000 --- 
a/vendor/nim-toml-serialization +++ b/vendor/nim-toml-serialization @@ -1 +1 @@ -Subproject commit fea85b27f0badcf617033ca1bc05444b5fd8aa7a +Subproject commit b5b387e6fb2a7cc75d54a269b07cc6218361bd46 diff --git a/vendor/nim-unittest2 b/vendor/nim-unittest2 index 8b51e99b4..26f2ef3ae 160000 --- a/vendor/nim-unittest2 +++ b/vendor/nim-unittest2 @@ -1 +1 @@ -Subproject commit 8b51e99b4a57fcfb31689230e75595f024543024 +Subproject commit 26f2ef3ae0ec72a2a75bfe557e02e88f6a31c189 diff --git a/vendor/nim-websock b/vendor/nim-websock index ebe308a79..35ae76f15 160000 --- a/vendor/nim-websock +++ b/vendor/nim-websock @@ -1 +1 @@ -Subproject commit ebe308a79a7b440a11dfbe74f352be86a3883508 +Subproject commit 35ae76f1559e835c80f9c1a3943bf995d3dd9eb5 diff --git a/vendor/waku-rlnv2-contract b/vendor/waku-rlnv2-contract index 8a338f354..d9906ef40 160000 --- a/vendor/waku-rlnv2-contract +++ b/vendor/waku-rlnv2-contract @@ -1 +1 @@ -Subproject commit 8a338f354481e8a3f3d64a72e38fad4c62e32dcd +Subproject commit d9906ef40f1e113fcf51de4ad27c61aa45375c2d diff --git a/vendor/zerokit b/vendor/zerokit index 70c79fbc9..a4bb3feb5 160000 --- a/vendor/zerokit +++ b/vendor/zerokit @@ -1 +1 @@ -Subproject commit 70c79fbc989d4f87d9352b2f4bddcb60ebe55b19 +Subproject commit a4bb3feb5054e6fd24827adf204493e6e173437b diff --git a/waku/node/delivery_service/send_service/relay_processor.nim b/waku/node/delivery_service/send_service/relay_processor.nim index 94cb63776..974c22f6c 100644 --- a/waku/node/delivery_service/send_service/relay_processor.nim +++ b/waku/node/delivery_service/send_service/relay_processor.nim @@ -70,7 +70,9 @@ method sendImpl*(self: RelaySendProcessor, task: DeliveryTask) {.async.} = if noOfPublishedPeers > 0: info "Message propagated via Relay", - requestId = task.requestId, msgHash = task.msgHash.to0xHex(), noOfPeers = noOfPublishedPeers + requestId = task.requestId, + msgHash = task.msgHash.to0xHex(), + noOfPeers = noOfPublishedPeers task.state = 
DeliveryState.SuccessfullyPropagated task.deliveryTime = Moment.now() else: diff --git a/waku/utils/requests.nim b/waku/utils/requests.nim index 5e5b9d960..d9afd2887 100644 --- a/waku/utils/requests.nim +++ b/waku/utils/requests.nim @@ -7,4 +7,4 @@ import bearssl/rand, stew/byteutils proc generateRequestId*(rng: ref HmacDrbgContext): string = var bytes: array[10, byte] hmacDrbgGenerate(rng[], bytes) - return toHex(bytes) + return byteutils.toHex(bytes) diff --git a/waku/waku_archive/driver/postgres_driver/postgres_driver.nim b/waku/waku_archive/driver/postgres_driver/postgres_driver.nim index 9b0e14c84..4877cb126 100644 --- a/waku/waku_archive/driver/postgres_driver/postgres_driver.nim +++ b/waku/waku_archive/driver/postgres_driver/postgres_driver.nim @@ -297,13 +297,13 @@ method put*( pubsubTopic: PubsubTopic, message: WakuMessage, ): Future[ArchiveDriverResult[void]] {.async.} = - let messageHash = toHex(messageHash) + let messageHash = byteutils.toHex(messageHash) let contentTopic = message.contentTopic - let payload = toHex(message.payload) + let payload = byteutils.toHex(message.payload) let version = $message.version let timestamp = $message.timestamp - let meta = toHex(message.meta) + let meta = byteutils.toHex(message.meta) trace "put PostgresDriver", messageHash, contentTopic, payload, version, timestamp, meta @@ -439,7 +439,7 @@ proc getMessagesArbitraryQuery( var args: seq[string] if cursor.isSome(): - let hashHex = toHex(cursor.get()) + let hashHex = byteutils.toHex(cursor.get()) let timeCursor = ?await s.getTimeCursor(hashHex) @@ -520,7 +520,7 @@ proc getMessageHashesArbitraryQuery( var args: seq[string] if cursor.isSome(): - let hashHex = toHex(cursor.get()) + let hashHex = byteutils.toHex(cursor.get()) let timeCursor = ?await s.getTimeCursor(hashHex) @@ -630,7 +630,7 @@ proc getMessagesPreparedStmt( return ok(rows) - let hashHex = toHex(cursor.get()) + let hashHex = byteutils.toHex(cursor.get()) let timeCursor = ?await s.getTimeCursor(hashHex) @@ 
-723,7 +723,7 @@ proc getMessageHashesPreparedStmt( return ok(rows) - let hashHex = toHex(cursor.get()) + let hashHex = byteutils.toHex(cursor.get()) let timeCursor = ?await s.getTimeCursor(hashHex) diff --git a/waku/waku_archive_legacy/driver/postgres_driver/postgres_driver.nim b/waku/waku_archive_legacy/driver/postgres_driver/postgres_driver.nim index 1a39c1267..a6784e4f8 100644 --- a/waku/waku_archive_legacy/driver/postgres_driver/postgres_driver.nim +++ b/waku/waku_archive_legacy/driver/postgres_driver/postgres_driver.nim @@ -213,13 +213,13 @@ method put*( messageHash: WakuMessageHash, receivedTime: Timestamp, ): Future[ArchiveDriverResult[void]] {.async.} = - let digest = toHex(digest.data) - let messageHash = toHex(messageHash) + let digest = byteutils.toHex(digest.data) + let messageHash = byteutils.toHex(messageHash) let contentTopic = message.contentTopic - let payload = toHex(message.payload) + let payload = byteutils.toHex(message.payload) let version = $message.version let timestamp = $message.timestamp - let meta = toHex(message.meta) + let meta = byteutils.toHex(message.meta) trace "put PostgresDriver", timestamp = timestamp @@ -312,7 +312,7 @@ proc getMessagesArbitraryQuery( args.add(pubsubTopic.get()) if cursor.isSome(): - let hashHex = toHex(cursor.get().hash) + let hashHex = byteutils.toHex(cursor.get().hash) var entree: seq[(PubsubTopic, WakuMessage, seq[byte], Timestamp, WakuMessageHash)] proc entreeCallback(pqResult: ptr PGresult) = @@ -463,7 +463,7 @@ proc getMessagesPreparedStmt( let limit = $maxPageSize if cursor.isSome(): - let hash = toHex(cursor.get().hash) + let hash = byteutils.toHex(cursor.get().hash) var entree: seq[(PubsubTopic, WakuMessage, seq[byte], Timestamp, WakuMessageHash)] @@ -576,7 +576,7 @@ proc getMessagesV2PreparedStmt( var stmtDef = if ascOrder: SelectWithCursorV2AscStmtDef else: SelectWithCursorV2DescStmtDef - let digest = toHex(cursor.get().digest.data) + let digest = byteutils.toHex(cursor.get().digest.data) let 
timestamp = $cursor.get().storeTime ( diff --git a/waku/waku_filter_v2/client.nim b/waku/waku_filter_v2/client.nim index 323cc6da8..ba8cd3d0c 100644 --- a/waku/waku_filter_v2/client.nim +++ b/waku/waku_filter_v2/client.nim @@ -29,7 +29,7 @@ type WakuFilterClient* = ref object of LPProtocol func generateRequestId(rng: ref HmacDrbgContext): string = var bytes: array[10, byte] hmacDrbgGenerate(rng[], bytes) - return toHex(bytes) + return byteutils.toHex(bytes) proc sendSubscribeRequest( wfc: WakuFilterClient, diff --git a/waku/waku_rln_relay/rln_relay.nim b/waku/waku_rln_relay/rln_relay.nim index 8758a7bcd..5c893e2a2 100644 --- a/waku/waku_rln_relay/rln_relay.nim +++ b/waku/waku_rln_relay/rln_relay.nim @@ -346,7 +346,7 @@ proc generateRlnValidator*( let validationRes = wakuRlnRelay.validateMessageAndUpdateLog(message) let - proof = toHex(msgProof.proof) + proof = byteutils.toHex(msgProof.proof) epoch = fromEpoch(msgProof.epoch) root = inHex(msgProof.merkleRoot) shareX = inHex(msgProof.shareX) diff --git a/waku/waku_store_sync/reconciliation.nim b/waku/waku_store_sync/reconciliation.nim index 0cc15d0df..23f513322 100644 --- a/waku/waku_store_sync/reconciliation.nim +++ b/waku/waku_store_sync/reconciliation.nim @@ -79,7 +79,8 @@ proc messageIngress*( let id = SyncID(time: msg.timestamp, hash: msgHash) self.storage.insert(id, pubsubTopic, msg.contentTopic).isOkOr: - error "failed to insert new message", msg_hash = $id.hash.toHex(), error = $error + error "failed to insert new message", + msg_hash = byteutils.toHex(id.hash), error = $error proc messageIngress*( self: SyncReconciliation, @@ -87,7 +88,7 @@ proc messageIngress*( pubsubTopic: PubsubTopic, msg: WakuMessage, ) = - trace "message ingress", msg_hash = msgHash.toHex(), msg = msg + trace "message ingress", msg_hash = byteutils.toHex(msgHash), msg = msg if msg.ephemeral: return @@ -95,7 +96,8 @@ proc messageIngress*( let id = SyncID(time: msg.timestamp, hash: msgHash) self.storage.insert(id, pubsubTopic, 
msg.contentTopic).isOkOr: - error "failed to insert new message", msg_hash = $id.hash.toHex(), error = $error + error "failed to insert new message", + msg_hash = byteutils.toHex(id.hash), error = $error proc messageIngress*( self: SyncReconciliation, @@ -104,7 +106,8 @@ proc messageIngress*( contentTopic: ContentTopic, ) = self.storage.insert(id, pubsubTopic, contentTopic).isOkOr: - error "failed to insert new message", msg_hash = $id.hash.toHex(), error = $error + error "failed to insert new message", + msg_hash = byteutils.toHex(id.hash), error = $error proc preProcessPayload( self: SyncReconciliation, payload: RangesData From a8bdbca98a3d2eeee8aaff05ce037cd1dc32bde9 Mon Sep 17 00:00:00 2001 From: Fabiana Cecin Date: Wed, 11 Feb 2026 10:36:37 -0300 Subject: [PATCH 48/70] Simplify NodeHealthMonitor creation (#3716) Simplify NodeHealthMonitor creation * Force NodeHealthMonitor.new() to set up a WakuNode * Remove all checks for isNil(node) in NodeHealthMonitor * Fix tests to use the new NodeHealthMonitor.new() Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> --- tests/test_waku_keepalive.nim | 3 +- tests/wakunode_rest/test_rest_health.nim | 22 ++---- waku/factory/waku.nim | 20 ++---- .../health_monitor/node_health_monitor.nim | 69 ++++++------------- 4 files changed, 35 insertions(+), 79 deletions(-) diff --git a/tests/test_waku_keepalive.nim b/tests/test_waku_keepalive.nim index c12f20a05..5d8402268 100644 --- a/tests/test_waku_keepalive.nim +++ b/tests/test_waku_keepalive.nim @@ -44,8 +44,7 @@ suite "Waku Keepalive": await node1.connectToNodes(@[node2.switch.peerInfo.toRemotePeerInfo()]) - let healthMonitor = NodeHealthMonitor() - healthMonitor.setNodeToHealthMonitor(node1) + let healthMonitor = NodeHealthMonitor.new(node1) healthMonitor.startKeepalive(2.seconds).isOkOr: assert false, "Failed to start keepalive" diff --git a/tests/wakunode_rest/test_rest_health.nim b/tests/wakunode_rest/test_rest_health.nim index ed8269f55..2a70fee5f 
100644 --- a/tests/wakunode_rest/test_rest_health.nim +++ b/tests/wakunode_rest/test_rest_health.nim @@ -50,33 +50,22 @@ suite "Waku v2 REST API - health": asyncTest "Get node health info - GET /health": # Given let node = testWakuNode() - let healthMonitor = NodeHealthMonitor() await node.start() (await node.mountRelay()).isOkOr: assert false, "Failed to mount relay" - healthMonitor.setOverallHealth(HealthStatus.INITIALIZING) - var restPort = Port(0) let restAddress = parseIpAddress("0.0.0.0") let restServer = WakuRestServerRef.init(restAddress, restPort).tryGet() restPort = restServer.httpServer.address.port # update with bound port for client use + let healthMonitor = NodeHealthMonitor.new(node) + installHealthApiHandler(restServer.router, healthMonitor) restServer.start() let client = newRestHttpClient(initTAddress(restAddress, restPort)) - # When - var response = await client.healthCheck() - - # Then - check: - response.status == 200 - $response.contentType == $MIMETYPE_JSON - response.data == - HealthReport(nodeHealth: HealthStatus.INITIALIZING, protocolsHealth: @[]) - - # now kick in rln (currently the only check for health) + # kick in rln (currently the only check for health) await node.mountRlnRelay( getWakuRlnConfig(manager = manager, index = MembershipIndex(1)) ) @@ -84,10 +73,11 @@ suite "Waku v2 REST API - health": node.mountLightPushClient() await node.mountFilterClient() - healthMonitor.setNodeToHealthMonitor(node) + # We don't have a Waku, so we need to set the overall health to READY here in its behalf healthMonitor.setOverallHealth(HealthStatus.READY) + # When - response = await client.healthCheck() + var response = await client.healthCheck() # Then check: diff --git a/waku/factory/waku.nim b/waku/factory/waku.nim index c452d44c5..3748847f1 100644 --- a/waku/factory/waku.nim +++ b/waku/factory/waku.nim @@ -172,7 +172,13 @@ proc new*( ?wakuConf.validate() wakuConf.logConf() - let healthMonitor = NodeHealthMonitor.new(wakuConf.dnsAddrsNameServers) 
+  let relay = newCircuitRelay(wakuConf.circuitRelayClient)
+
+  let node = (await setupNode(wakuConf, rng, relay)).valueOr:
+    error "Failed setting up node", error = $error
+    return err("Failed setting up node: " & $error)
+
+  let healthMonitor = NodeHealthMonitor.new(node, wakuConf.dnsAddrsNameServers)
 
   let restServer: WakuRestServerRef =
     if wakuConf.restServerConf.isSome():
@@ -186,18 +192,6 @@ proc new*(
     else:
       nil
 
-  var relay = newCircuitRelay(wakuConf.circuitRelayClient)
-
-  let node = (await setupNode(wakuConf, rng, relay)).valueOr:
-    error "Failed setting up node", error = $error
-    return err("Failed setting up node: " & $error)
-
-  healthMonitor.setNodeToHealthMonitor(node)
-  healthMonitor.onlineMonitor.setPeerStoreToOnlineMonitor(node.switch.peerStore)
-  healthMonitor.onlineMonitor.addOnlineStateObserver(
-    node.peerManager.getOnlineStateObserver()
-  )
-
   node.setupAppCallbacks(wakuConf, appCallbacks).isOkOr:
     error "Failed setting up app callbacks", error = error
     return err("Failed setting up app callbacks: " & $error)
diff --git a/waku/node/health_monitor/node_health_monitor.nim b/waku/node/health_monitor/node_health_monitor.nim
index eb5d0ed8c..4b13dfd3d 100644
--- a/waku/node/health_monitor/node_health_monitor.nim
+++ b/waku/node/health_monitor/node_health_monitor.nim
@@ -33,14 +33,8 @@ type
     onlineMonitor*: OnlineMonitor
     keepAliveFut: Future[void]
 
-template checkWakuNodeNotNil(node: WakuNode, p: ProtocolHealth): untyped =
-  if node.isNil():
-    warn "WakuNode is not set, cannot check health", protocol_health_instance = $p
-    return p.notMounted()
-
 proc getRelayHealth(hm: NodeHealthMonitor): ProtocolHealth =
   var p = ProtocolHealth.init("Relay")
-  checkWakuNodeNotNil(hm.node, p)
 
   if hm.node.wakuRelay == nil:
     return p.notMounted()
@@ -55,10 +49,6 @@ proc getRelayHealth(hm: NodeHealthMonitor): ProtocolHealth =
 proc getRlnRelayHealth(hm: NodeHealthMonitor): Future[ProtocolHealth] {.async.} =
   var p = ProtocolHealth.init("Rln Relay")
 
-  if hm.node.isNil():
-    warn "WakuNode is not set, cannot check health", protocol_health_instance = $p
-    return p.notMounted()
-
   if hm.node.wakuRlnRelay.isNil():
     return p.notMounted()
@@ -83,7 +73,6 @@ proc getLightpushHealth(
     hm: NodeHealthMonitor, relayHealth: HealthStatus
 ): ProtocolHealth =
   var p = ProtocolHealth.init("Lightpush")
-  checkWakuNodeNotNil(hm.node, p)
 
   if hm.node.wakuLightPush == nil:
     return p.notMounted()
@@ -97,7 +86,6 @@ proc getLightpushClientHealth(
     hm: NodeHealthMonitor, relayHealth: HealthStatus
 ): ProtocolHealth =
   var p = ProtocolHealth.init("Lightpush Client")
-  checkWakuNodeNotNil(hm.node, p)
 
   if hm.node.wakuLightpushClient == nil:
     return p.notMounted()
@@ -115,7 +103,6 @@ proc getLegacyLightpushHealth(
     hm: NodeHealthMonitor, relayHealth: HealthStatus
 ): ProtocolHealth =
   var p = ProtocolHealth.init("Legacy Lightpush")
-  checkWakuNodeNotNil(hm.node, p)
 
   if hm.node.wakuLegacyLightPush == nil:
     return p.notMounted()
@@ -129,7 +116,6 @@ proc getLegacyLightpushClientHealth(
     hm: NodeHealthMonitor, relayHealth: HealthStatus
 ): ProtocolHealth =
   var p = ProtocolHealth.init("Legacy Lightpush Client")
-  checkWakuNodeNotNil(hm.node, p)
 
   if hm.node.wakuLegacyLightpushClient == nil:
     return p.notMounted()
@@ -142,7 +128,6 @@ proc getFilterHealth(hm: NodeHealthMonitor, relayHealth: HealthStatus): ProtocolHealth =
   var p = ProtocolHealth.init("Filter")
-  checkWakuNodeNotNil(hm.node, p)
 
   if hm.node.wakuFilter == nil:
     return p.notMounted()
@@ -156,7 +141,6 @@ proc getFilterClientHealth(
     hm: NodeHealthMonitor, relayHealth: HealthStatus
 ): ProtocolHealth =
   var p = ProtocolHealth.init("Filter Client")
-  checkWakuNodeNotNil(hm.node, p)
 
   if hm.node.wakuFilterClient == nil:
     return p.notMounted()
@@ -168,7 +152,6 @@ proc getFilterClientHealth(
 
 proc getStoreHealth(hm: NodeHealthMonitor): ProtocolHealth =
   var p = ProtocolHealth.init("Store")
-  checkWakuNodeNotNil(hm.node, p)
 
   if hm.node.wakuStore == nil:
     return p.notMounted()
@@ -177,7 +160,6 @@ proc getStoreHealth(hm: NodeHealthMonitor): ProtocolHealth =
 
 proc getStoreClientHealth(hm: NodeHealthMonitor): ProtocolHealth =
   var p = ProtocolHealth.init("Store Client")
-  checkWakuNodeNotNil(hm.node, p)
 
   if hm.node.wakuStoreClient == nil:
     return p.notMounted()
@@ -191,7 +173,6 @@ proc getStoreClientHealth(hm: NodeHealthMonitor): ProtocolHealth =
 
 proc getLegacyStoreHealth(hm: NodeHealthMonitor): ProtocolHealth =
   var p = ProtocolHealth.init("Legacy Store")
-  checkWakuNodeNotNil(hm.node, p)
 
   if hm.node.wakuLegacyStore == nil:
     return p.notMounted()
@@ -200,7 +181,6 @@ proc getLegacyStoreHealth(hm: NodeHealthMonitor): ProtocolHealth =
 
 proc getLegacyStoreClientHealth(hm: NodeHealthMonitor): ProtocolHealth =
   var p = ProtocolHealth.init("Legacy Store Client")
-  checkWakuNodeNotNil(hm.node, p)
 
   if hm.node.wakuLegacyStoreClient == nil:
     return p.notMounted()
@@ -215,7 +195,6 @@ proc getLegacyStoreClientHealth(hm: NodeHealthMonitor): ProtocolHealth =
 
 proc getPeerExchangeHealth(hm: NodeHealthMonitor): ProtocolHealth =
   var p = ProtocolHealth.init("Peer Exchange")
-  checkWakuNodeNotNil(hm.node, p)
 
   if hm.node.wakuPeerExchange == nil:
     return p.notMounted()
@@ -224,7 +203,6 @@ proc getPeerExchangeHealth(hm: NodeHealthMonitor): ProtocolHealth =
 
 proc getRendezvousHealth(hm: NodeHealthMonitor): ProtocolHealth =
   var p = ProtocolHealth.init("Rendezvous")
-  checkWakuNodeNotNil(hm.node, p)
 
   if hm.node.wakuRendezvous == nil:
     return p.notMounted()
@@ -236,7 +214,6 @@ proc getMixHealth(hm: NodeHealthMonitor): ProtocolHealth =
   var p = ProtocolHealth.init("Mix")
-  checkWakuNodeNotNil(hm.node, p)
 
   if hm.node.wakuMix.isNil():
     return p.notMounted()
@@ -386,29 +363,25 @@ proc getNodeHealthReport*(hm: NodeHealthMonitor): Future[HealthReport] {.async.} =
   var report: HealthReport
   report.nodeHealth = hm.nodeHealth
 
-  if not hm.node.isNil():
-    let relayHealth = hm.getRelayHealth()
-    report.protocolsHealth.add(relayHealth)
-
report.protocolsHealth.add(await hm.getRlnRelayHealth()) - report.protocolsHealth.add(hm.getLightpushHealth(relayHealth.health)) - report.protocolsHealth.add(hm.getLegacyLightpushHealth(relayHealth.health)) - report.protocolsHealth.add(hm.getFilterHealth(relayHealth.health)) - report.protocolsHealth.add(hm.getStoreHealth()) - report.protocolsHealth.add(hm.getLegacyStoreHealth()) - report.protocolsHealth.add(hm.getPeerExchangeHealth()) - report.protocolsHealth.add(hm.getRendezvousHealth()) - report.protocolsHealth.add(hm.getMixHealth()) + let relayHealth = hm.getRelayHealth() + report.protocolsHealth.add(relayHealth) + report.protocolsHealth.add(await hm.getRlnRelayHealth()) + report.protocolsHealth.add(hm.getLightpushHealth(relayHealth.health)) + report.protocolsHealth.add(hm.getLegacyLightpushHealth(relayHealth.health)) + report.protocolsHealth.add(hm.getFilterHealth(relayHealth.health)) + report.protocolsHealth.add(hm.getStoreHealth()) + report.protocolsHealth.add(hm.getLegacyStoreHealth()) + report.protocolsHealth.add(hm.getPeerExchangeHealth()) + report.protocolsHealth.add(hm.getRendezvousHealth()) + report.protocolsHealth.add(hm.getMixHealth()) - report.protocolsHealth.add(hm.getLightpushClientHealth(relayHealth.health)) - report.protocolsHealth.add(hm.getLegacyLightpushClientHealth(relayHealth.health)) - report.protocolsHealth.add(hm.getStoreClientHealth()) - report.protocolsHealth.add(hm.getLegacyStoreClientHealth()) - report.protocolsHealth.add(hm.getFilterClientHealth(relayHealth.health)) + report.protocolsHealth.add(hm.getLightpushClientHealth(relayHealth.health)) + report.protocolsHealth.add(hm.getLegacyLightpushClientHealth(relayHealth.health)) + report.protocolsHealth.add(hm.getStoreClientHealth()) + report.protocolsHealth.add(hm.getLegacyStoreClientHealth()) + report.protocolsHealth.add(hm.getFilterClientHealth(relayHealth.health)) return report -proc setNodeToHealthMonitor*(hm: NodeHealthMonitor, node: WakuNode) = - hm.node = node - proc 
setOverallHealth*(hm: NodeHealthMonitor, health: HealthStatus) = hm.nodeHealth = health @@ -427,10 +400,10 @@ proc stopHealthMonitor*(hm: NodeHealthMonitor) {.async.} = proc new*( T: type NodeHealthMonitor, + node: WakuNode, dnsNameServers = @[parseIpAddress("1.1.1.1"), parseIpAddress("1.0.0.1")], ): T = - T( - nodeHealth: INITIALIZING, - node: nil, - onlineMonitor: OnlineMonitor.init(dnsNameServers), - ) + let om = OnlineMonitor.init(dnsNameServers) + om.setPeerStoreToOnlineMonitor(node.switch.peerStore) + om.addOnlineStateObserver(node.peerManager.getOnlineStateObserver()) + T(nodeHealth: INITIALIZING, node: node, onlineMonitor: om) From dd8dc7429d724dff6b7254bd5206a2811223461a Mon Sep 17 00:00:00 2001 From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> Date: Wed, 11 Feb 2026 16:19:58 +0100 Subject: [PATCH 49/70] canary exits with error if ping fails (#3711) --- apps/wakucanary/wakucanary.nim | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/apps/wakucanary/wakucanary.nim b/apps/wakucanary/wakucanary.nim index 6e02c2a8f..40bf4db45 100644 --- a/apps/wakucanary/wakucanary.nim +++ b/apps/wakucanary/wakucanary.nim @@ -278,6 +278,10 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} = pingSuccess = false error "Ping operation failed or timed out", error = exc.msg + if not pingSuccess: + error "Ping to the node failed", peerId = peer.peerId, conStatus = $conStatus + quit(QuitFailure) + if conStatus in [Connected, CanConnect]: let nodeProtocols = lp2pPeerStore[ProtoBook][peer.peerId] @@ -285,11 +289,6 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} = error "Not all protocols are supported", expected = conf.protocols, supported = nodeProtocols quit(QuitFailure) - - # Check ping result if ping was enabled - if conf.ping and not pingSuccess: - error "Node is reachable and supports protocols but ping failed - connection may be unstable" - quit(QuitFailure) elif conStatus == CannotConnect: error "Could not 
connect", peerId = peer.peerId quit(QuitFailure) From 1fb4d1eab0460b3e033e0ab87bd4ac3b6966f3c9 Mon Sep 17 00:00:00 2001 From: Fabiana Cecin Date: Thu, 12 Feb 2026 14:52:39 -0300 Subject: [PATCH 50/70] feat: implement Waku API Health spec (#3689) * Fix protocol strength metric to consider connected peers only * Remove polling loop; event-driven node connection health updates * Remove 10s WakuRelay topic health polling loop; now event-driven * Change NodeHealthStatus to ConnectionStatus * Change new nodeState (rest API /health) field to connectionStatus * Add getSyncProtocolHealthInfo and getSyncNodeHealthReport * Add ConnectionStatusChangeEvent * Add RequestHealthReport * Refactor sync/async protocol health queries in the health monitor * Add EventRelayTopicHealthChange * Add EventWakuPeer emitted by PeerManager * Add Edge support for topics health requests and events * Rename "RelayTopic" -> "Topic" * Add RequestContentTopicsHealth sync request * Add EventContentTopicHealthChange * Rename RequestTopicsHealth -> RequestShardTopicsHealth * Remove health check gating from checkApiAvailability * Add basic health smoke tests * Other misc improvements, refactors, fixes Co-authored-by: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com> Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> --- .../json_connection_status_change_event.nim | 19 + library/libwaku.nim | 8 + tests/api/test_all.nim | 2 +- tests/api/test_api_health.nim | 296 +++++++++ tests/api/test_api_send.nim | 9 +- tests/node/test_all.nim | 3 +- tests/node/test_wakunode_health_monitor.nim | 301 +++++++++ tests/waku_relay/utils.nim | 18 +- tests/wakunode_rest/test_rest_health.nim | 63 +- waku/api/api.nim | 16 +- waku/api/types.nim | 8 +- waku/common/waku_protocol.nim | 24 + waku/events/events.nim | 4 +- waku/events/health_events.nim | 27 + waku/events/peer_events.nim | 13 + waku/factory/app_callbacks.nim | 3 +- waku/factory/builder.nim | 3 +- waku/factory/waku.nim | 165 
+++-- .../send_service/relay_processor.nim | 4 +- waku/node/health_monitor.nim | 9 +- .../node/health_monitor/connection_status.nim | 15 + waku/node/health_monitor/health_report.nim | 10 + .../health_monitor/node_health_monitor.nim | 598 ++++++++++++++---- waku/node/health_monitor/protocol_health.nim | 10 +- waku/node/peer_manager/peer_manager.nim | 128 +++- waku/node/peer_manager/waku_peer_store.nim | 7 +- waku/node/waku_node.nim | 126 +++- waku/requests/health_request.nim | 21 - waku/requests/health_requests.nim | 39 ++ waku/requests/requests.nim | 4 +- waku/rest_api/endpoint/health/types.nim | 22 +- waku/waku_relay/protocol.nim | 146 +++-- 32 files changed, 1727 insertions(+), 394 deletions(-) create mode 100644 library/events/json_connection_status_change_event.nim create mode 100644 tests/api/test_api_health.nim create mode 100644 tests/node/test_wakunode_health_monitor.nim create mode 100644 waku/common/waku_protocol.nim create mode 100644 waku/events/health_events.nim create mode 100644 waku/events/peer_events.nim create mode 100644 waku/node/health_monitor/connection_status.nim create mode 100644 waku/node/health_monitor/health_report.nim delete mode 100644 waku/requests/health_request.nim create mode 100644 waku/requests/health_requests.nim diff --git a/library/events/json_connection_status_change_event.nim b/library/events/json_connection_status_change_event.nim new file mode 100644 index 000000000..347a84c48 --- /dev/null +++ b/library/events/json_connection_status_change_event.nim @@ -0,0 +1,19 @@ +{.push raises: [].} + +import system, std/json +import ./json_base_event +import ../../waku/api/types + +type JsonConnectionStatusChangeEvent* = ref object of JsonEvent + status*: ConnectionStatus + +proc new*( + T: type JsonConnectionStatusChangeEvent, status: ConnectionStatus +): T = + return JsonConnectionStatusChangeEvent( + eventType: "node_health_change", + status: status + ) + +method `$`*(event: JsonConnectionStatusChangeEvent): string = + $(%*event) 
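The new `JsonConnectionStatusChangeEvent` above is delivered to host applications as a JSON string through the FFI event callback. A minimal consumer-side sketch of dispatching such a payload (Python used here purely for illustration; the `"Connected"` status value is an assumption, since the exact encoding of the `ConnectionStatus` enum depends on how Nim's `%*` operator serializes it):

```python
import json

def dispatch(raw: str) -> str:
    """Route a libwaku event payload by its eventType discriminator."""
    event = json.loads(raw)
    if event.get("eventType") == "node_health_change":
        # Shape mirrors JsonConnectionStatusChangeEvent defined in the diff above
        return "connection status changed: " + str(event["status"])
    return "unhandled event"

# Hypothetical payload; the status encoding is an assumption for illustration.
print(dispatch('{"eventType": "node_health_change", "status": "Connected"}'))
# -> connection status changed: Connected
```

A real host application would register this kind of dispatcher as the `onConnectionStatusChange` callback shown in the `libwaku.nim` hunk that follows.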
diff --git a/library/libwaku.nim b/library/libwaku.nim index c71e823d6..eb3cdff5e 100644 --- a/library/libwaku.nim +++ b/library/libwaku.nim @@ -7,9 +7,11 @@ import ./events/json_message_event, ./events/json_topic_health_change_event, ./events/json_connection_change_event, + ./events/json_connection_status_change_event, ../waku/factory/app_callbacks, waku/factory/waku, waku/node/waku_node, + waku/node/health_monitor/health_status, ./declare_lib ################################################################################ @@ -61,10 +63,16 @@ proc waku_new( callEventCallback(ctx, "onConnectionChange"): $JsonConnectionChangeEvent.new($peerId, peerEvent) + proc onConnectionStatusChange(ctx: ptr FFIContext): ConnectionStatusChangeHandler = + return proc(status: ConnectionStatus) {.async.} = + callEventCallback(ctx, "onConnectionStatusChange"): + $JsonConnectionStatusChangeEvent.new(status) + let appCallbacks = AppCallbacks( relayHandler: onReceivedMessage(ctx), topicHealthChangeHandler: onTopicHealthChange(ctx), connectionChangeHandler: onConnectionChange(ctx), + connectionStatusChangeHandler: onConnectionStatusChange(ctx) ) ffi.sendRequestToFFIThread( diff --git a/tests/api/test_all.nim b/tests/api/test_all.nim index 99c1b3b4c..57f7f37f2 100644 --- a/tests/api/test_all.nim +++ b/tests/api/test_all.nim @@ -1,3 +1,3 @@ {.used.} -import ./test_entry_nodes, ./test_node_conf +import ./test_entry_nodes, ./test_node_conf, ./test_api_send, ./test_api_health diff --git a/tests/api/test_api_health.nim b/tests/api/test_api_health.nim new file mode 100644 index 000000000..b7aab43f9 --- /dev/null +++ b/tests/api/test_api_health.nim @@ -0,0 +1,296 @@ +{.used.} + +import std/[options, sequtils, times] +import chronos, testutils/unittests, stew/byteutils, libp2p/[switch, peerinfo] +import ../testlib/[common, wakucore, wakunode, testasync] + +import + waku, + waku/[waku_node, waku_core, waku_relay/protocol, common/broker/broker_context], + waku/node/health_monitor/[topic_health, 
health_status, protocol_health, health_report], + waku/requests/health_requests, + waku/requests/node_requests, + waku/events/health_events, + waku/common/waku_protocol, + waku/factory/waku_conf + +const TestTimeout = chronos.seconds(10) +const DefaultShard = PubsubTopic("/waku/2/rs/1/0") +const TestContentTopic = ContentTopic("/waku/2/default-content/proto") + +proc dummyHandler( + topic: PubsubTopic, msg: WakuMessage +): Future[void] {.async, gcsafe.} = + discard + +proc waitForConnectionStatus( + brokerCtx: BrokerContext, expected: ConnectionStatus +) {.async.} = + var future = newFuture[void]("waitForConnectionStatus") + + let handler: EventConnectionStatusChangeListenerProc = proc( + e: EventConnectionStatusChange + ) {.async: (raises: []), gcsafe.} = + if not future.finished: + if e.connectionStatus == expected: + future.complete() + + let handle = EventConnectionStatusChange.listen(brokerCtx, handler).valueOr: + raiseAssert error + + try: + if not await future.withTimeout(TestTimeout): + raiseAssert "Timeout waiting for status: " & $expected + finally: + EventConnectionStatusChange.dropListener(brokerCtx, handle) + +proc waitForShardHealthy( + brokerCtx: BrokerContext +): Future[EventShardTopicHealthChange] {.async.} = + var future = newFuture[EventShardTopicHealthChange]("waitForShardHealthy") + + let handler: EventShardTopicHealthChangeListenerProc = proc( + e: EventShardTopicHealthChange + ) {.async: (raises: []), gcsafe.} = + if not future.finished: + if e.health == TopicHealth.MINIMALLY_HEALTHY or + e.health == TopicHealth.SUFFICIENTLY_HEALTHY: + future.complete(e) + + let handle = EventShardTopicHealthChange.listen(brokerCtx, handler).valueOr: + raiseAssert error + + try: + if await future.withTimeout(TestTimeout): + return future.read() + else: + raiseAssert "Timeout waiting for shard health event" + finally: + EventShardTopicHealthChange.dropListener(brokerCtx, handle) + +suite "LM API health checking": + var + serviceNode {.threadvar.}: WakuNode + 
client {.threadvar.}: Waku + servicePeerInfo {.threadvar.}: RemotePeerInfo + + asyncSetup: + lockNewGlobalBrokerContext: + serviceNode = + newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) + (await serviceNode.mountRelay()).isOkOr: + raiseAssert error + serviceNode.mountMetadata(1, @[0'u16]).isOkOr: + raiseAssert error + await serviceNode.mountLibp2pPing() + await serviceNode.start() + + servicePeerInfo = serviceNode.peerInfo.toRemotePeerInfo() + serviceNode.wakuRelay.subscribe(DefaultShard, dummyHandler) + + lockNewGlobalBrokerContext: + let conf = NodeConfig.init( + mode = WakuMode.Core, + networkingConfig = + NetworkingConfig(listenIpv4: "0.0.0.0", p2pTcpPort: 0, discv5UdpPort: 0), + protocolsConfig = ProtocolsConfig.init( + entryNodes = @[], + clusterId = 1'u16, + autoShardingConfig = AutoShardingConfig(numShardsInCluster: 1), + ), + ) + + client = (await createNode(conf)).valueOr: + raiseAssert error + (await startWaku(addr client)).isOkOr: + raiseAssert error + + asyncTeardown: + discard await client.stop() + await serviceNode.stop() + + asyncTest "RequestShardTopicsHealth, check PubsubTopic health": + client.node.wakuRelay.subscribe(DefaultShard, dummyHandler) + await client.node.connectToNodes(@[servicePeerInfo]) + + var isHealthy = false + let start = Moment.now() + while Moment.now() - start < TestTimeout: + let req = RequestShardTopicsHealth.request(client.brokerCtx, @[DefaultShard]).valueOr: + raiseAssert "RequestShardTopicsHealth failed" + + if req.topicHealth.len > 0: + let h = req.topicHealth[0].health + if h == TopicHealth.MINIMALLY_HEALTHY or h == TopicHealth.SUFFICIENTLY_HEALTHY: + isHealthy = true + break + await sleepAsync(chronos.milliseconds(100)) + + check isHealthy == true + + asyncTest "RequestShardTopicsHealth, check disconnected PubsubTopic": + const GhostShard = PubsubTopic("/waku/2/rs/1/666") + client.node.wakuRelay.subscribe(GhostShard, dummyHandler) + + let req = 
RequestShardTopicsHealth.request(client.brokerCtx, @[GhostShard]).valueOr: + raiseAssert "Request failed" + + check req.topicHealth.len > 0 + check req.topicHealth[0].health == TopicHealth.UNHEALTHY + + asyncTest "RequestProtocolHealth, check relay status": + await client.node.connectToNodes(@[servicePeerInfo]) + + var isReady = false + let start = Moment.now() + while Moment.now() - start < TestTimeout: + let relayReq = await RequestProtocolHealth.request( + client.brokerCtx, WakuProtocol.RelayProtocol + ) + if relayReq.isOk() and relayReq.get().healthStatus.health == HealthStatus.READY: + isReady = true + break + await sleepAsync(chronos.milliseconds(100)) + + check isReady == true + + let storeReq = + await RequestProtocolHealth.request(client.brokerCtx, WakuProtocol.StoreProtocol) + if storeReq.isOk(): + check storeReq.get().healthStatus.health != HealthStatus.READY + + asyncTest "RequestProtocolHealth, check unmounted protocol": + let req = + await RequestProtocolHealth.request(client.brokerCtx, WakuProtocol.StoreProtocol) + check req.isOk() + + let status = req.get().healthStatus + check status.health == HealthStatus.NOT_MOUNTED + check status.desc.isNone() + + asyncTest "RequestConnectionStatus, check connectivity state": + let initialReq = RequestConnectionStatus.request(client.brokerCtx).valueOr: + raiseAssert "RequestConnectionStatus failed" + check initialReq.connectionStatus == ConnectionStatus.Disconnected + + await client.node.connectToNodes(@[servicePeerInfo]) + + var isConnected = false + let start = Moment.now() + while Moment.now() - start < TestTimeout: + let req = RequestConnectionStatus.request(client.brokerCtx).valueOr: + raiseAssert "RequestConnectionStatus failed" + + if req.connectionStatus == ConnectionStatus.PartiallyConnected or + req.connectionStatus == ConnectionStatus.Connected: + isConnected = true + break + await sleepAsync(chronos.milliseconds(100)) + + check isConnected == true + + asyncTest "EventConnectionStatusChange, detect 
connect and disconnect": + let connectFuture = + waitForConnectionStatus(client.brokerCtx, ConnectionStatus.PartiallyConnected) + + await client.node.connectToNodes(@[servicePeerInfo]) + await connectFuture + + let disconnectFuture = + waitForConnectionStatus(client.brokerCtx, ConnectionStatus.Disconnected) + await client.node.disconnectNode(servicePeerInfo) + await disconnectFuture + + asyncTest "EventShardTopicHealthChange, detect health improvement": + client.node.wakuRelay.subscribe(DefaultShard, dummyHandler) + + let healthEventFuture = waitForShardHealthy(client.brokerCtx) + + await client.node.connectToNodes(@[servicePeerInfo]) + + let event = await healthEventFuture + check event.topic == DefaultShard + + asyncTest "RequestHealthReport, check aggregate report": + let req = await RequestHealthReport.request(client.brokerCtx) + + check req.isOk() + + let report = req.get().healthReport + check report.nodeHealth == HealthStatus.READY + check report.protocolsHealth.len > 0 + check report.protocolsHealth.anyIt(it.protocol == $WakuProtocol.RelayProtocol) + + asyncTest "RequestContentTopicsHealth, smoke test": + let fictionalTopic = ContentTopic("/waku/2/this-does-not-exist/proto") + + let req = RequestContentTopicsHealth.request(client.brokerCtx, @[fictionalTopic]) + + check req.isOk() + + let res = req.get() + check res.contentTopicHealth.len == 1 + check res.contentTopicHealth[0].topic == fictionalTopic + check res.contentTopicHealth[0].health == TopicHealth.NOT_SUBSCRIBED + + asyncTest "RequestContentTopicsHealth, core mode trivial 1-shard autosharding": + let cTopic = ContentTopic("/waku/2/my-content-topic/proto") + + let shardReq = + RequestRelayShard.request(client.brokerCtx, none(PubsubTopic), cTopic) + + check shardReq.isOk() + let targetShard = $shardReq.get().relayShard + + client.node.wakuRelay.subscribe(targetShard, dummyHandler) + serviceNode.wakuRelay.subscribe(targetShard, dummyHandler) + + await client.node.connectToNodes(@[servicePeerInfo]) + + 
var isHealthy = false + let start = Moment.now() + while Moment.now() - start < TestTimeout: + let req = RequestContentTopicsHealth.request(client.brokerCtx, @[cTopic]).valueOr: + raiseAssert "Request failed" + + if req.contentTopicHealth.len > 0: + let h = req.contentTopicHealth[0].health + if h == TopicHealth.MINIMALLY_HEALTHY or h == TopicHealth.SUFFICIENTLY_HEALTHY: + isHealthy = true + break + + await sleepAsync(chronos.milliseconds(100)) + + check isHealthy == true + + asyncTest "RequestProtocolHealth, edge mode smoke test": + var edgeWaku: Waku + + lockNewGlobalBrokerContext: + let edgeConf = NodeConfig.init( + mode = WakuMode.Edge, + networkingConfig = + NetworkingConfig(listenIpv4: "0.0.0.0", p2pTcpPort: 0, discv5UdpPort: 0), + protocolsConfig = ProtocolsConfig.init( + entryNodes = @[], + clusterId = 1'u16, + messageValidation = + MessageValidation(maxMessageSize: "150 KiB", rlnConfig: none(RlnConfig)), + ), + ) + + edgeWaku = (await createNode(edgeConf)).valueOr: + raiseAssert "Failed to create edge node: " & error + + (await startWaku(addr edgeWaku)).isOkOr: + raiseAssert "Failed to start edge waku: " & error + + let relayReq = await RequestProtocolHealth.request( + edgeWaku.brokerCtx, WakuProtocol.RelayProtocol + ) + check relayReq.isOk() + check relayReq.get().healthStatus.health == HealthStatus.NOT_MOUNTED + + check not edgeWaku.node.wakuFilterClient.isNil() + + discard await edgeWaku.stop() diff --git a/tests/api/test_api_send.nim b/tests/api/test_api_send.nim index e247c65ce..7343fc655 100644 --- a/tests/api/test_api_send.nim +++ b/tests/api/test_api_send.nim @@ -117,6 +117,9 @@ proc validate( check requestId == expectedRequestId proc createApiNodeConf(mode: WakuMode = WakuMode.Core): NodeConfig = + # allocate random ports to avoid port-already-in-use errors + let netConf = NetworkingConfig(listenIpv4: "0.0.0.0", p2pTcpPort: 0, discv5UdpPort: 0) + result = NodeConfig.init( mode = mode, protocolsConfig = ProtocolsConfig.init( @@ -124,6 +127,7 @@ proc 
createApiNodeConf(mode: WakuMode = WakuMode.Core): NodeConfig = clusterId = 1, autoShardingConfig = AutoShardingConfig(numShardsInCluster: 1), ), + networkingConfig = netConf, p2pReliability = true, ) @@ -246,8 +250,9 @@ suite "Waku API - Send": let sendResult = await node.send(envelope) - check sendResult.isErr() # Depending on implementation, it might say "not healthy" - check sendResult.error().contains("not healthy") + # TODO: The API is not enforcing a health check before the send, + # so currently this test cannot successfully fail to send. + check sendResult.isOk() (await node.stop()).isOkOr: raiseAssert "Failed to stop node: " & error diff --git a/tests/node/test_all.nim b/tests/node/test_all.nim index f6e7507b7..fe785dee2 100644 --- a/tests/node/test_all.nim +++ b/tests/node/test_all.nim @@ -7,4 +7,5 @@ import ./test_wakunode_peer_exchange, ./test_wakunode_store, ./test_wakunode_legacy_store, - ./test_wakunode_peer_manager + ./test_wakunode_peer_manager, + ./test_wakunode_health_monitor diff --git a/tests/node/test_wakunode_health_monitor.nim b/tests/node/test_wakunode_health_monitor.nim new file mode 100644 index 000000000..8be9c444d --- /dev/null +++ b/tests/node/test_wakunode_health_monitor.nim @@ -0,0 +1,301 @@ +{.used.} + +import + std/[json, options, sequtils, strutils, tables], testutils/unittests, chronos, results + +import + waku/[ + waku_core, + common/waku_protocol, + node/waku_node, + node/peer_manager, + node/health_monitor/health_status, + node/health_monitor/connection_status, + node/health_monitor/protocol_health, + node/health_monitor/node_health_monitor, + node/kernel_api/relay, + node/kernel_api/store, + node/kernel_api/lightpush, + node/kernel_api/filter, + waku_archive, + ] + +import ../testlib/[wakunode, wakucore], ../waku_archive/archive_utils + +const MockDLow = 4 # Mocked GossipSub DLow value + +const TestConnectivityTimeLimit = 3.seconds + +proc protoHealthMock(kind: WakuProtocol, health: HealthStatus): ProtocolHealth = + var ph = 
ProtocolHealth.init(kind) + if health == HealthStatus.READY: + return ph.ready() + else: + return ph.notReady("mock") + +suite "Health Monitor - health state calculation": + test "Disconnected, zero peers": + let protocols = + @[ + protoHealthMock(RelayProtocol, HealthStatus.NOT_READY), + protoHealthMock(StoreClientProtocol, HealthStatus.NOT_READY), + protoHealthMock(FilterClientProtocol, HealthStatus.NOT_READY), + protoHealthMock(LightpushClientProtocol, HealthStatus.NOT_READY), + ] + let strength = initTable[WakuProtocol, int]() + let state = calculateConnectionState(protocols, strength, some(MockDLow)) + check state == ConnectionStatus.Disconnected + + test "PartiallyConnected, weak relay": + let weakCount = MockDLow - 1 + let protocols = @[protoHealthMock(RelayProtocol, HealthStatus.READY)] + var strength = initTable[WakuProtocol, int]() + strength[RelayProtocol] = weakCount + let state = calculateConnectionState(protocols, strength, some(MockDLow)) + # Partially connected since relay connectivity is weak (> 0, but < dLow) + check state == ConnectionStatus.PartiallyConnected + + test "Connected, robust relay": + let protocols = @[protoHealthMock(RelayProtocol, HealthStatus.READY)] + var strength = initTable[WakuProtocol, int]() + strength[RelayProtocol] = MockDLow + let state = calculateConnectionState(protocols, strength, some(MockDLow)) + # Fully connected since relay connectivity is ideal (>= dLow) + check state == ConnectionStatus.Connected + + test "Connected, robust edge": + let protocols = + @[ + protoHealthMock(RelayProtocol, HealthStatus.NOT_MOUNTED), + protoHealthMock(LightpushClientProtocol, HealthStatus.READY), + protoHealthMock(FilterClientProtocol, HealthStatus.READY), + protoHealthMock(StoreClientProtocol, HealthStatus.READY), + ] + var strength = initTable[WakuProtocol, int]() + strength[LightpushClientProtocol] = HealthyThreshold + strength[FilterClientProtocol] = HealthyThreshold + strength[StoreClientProtocol] = HealthyThreshold + let state = 
calculateConnectionState(protocols, strength, some(MockDLow)) + check state == ConnectionStatus.Connected + + test "Disconnected, edge missing store": + let protocols = + @[ + protoHealthMock(LightpushClientProtocol, HealthStatus.READY), + protoHealthMock(FilterClientProtocol, HealthStatus.READY), + protoHealthMock(StoreClientProtocol, HealthStatus.NOT_READY), + ] + var strength = initTable[WakuProtocol, int]() + strength[LightpushClientProtocol] = HealthyThreshold + strength[FilterClientProtocol] = HealthyThreshold + strength[StoreClientProtocol] = 0 + let state = calculateConnectionState(protocols, strength, some(MockDLow)) + check state == ConnectionStatus.Disconnected + + test "PartiallyConnected, edge meets minimum failover requirement": + let weakCount = max(1, HealthyThreshold - 1) + let protocols = + @[ + protoHealthMock(LightpushClientProtocol, HealthStatus.READY), + protoHealthMock(FilterClientProtocol, HealthStatus.READY), + protoHealthMock(StoreClientProtocol, HealthStatus.READY), + ] + var strength = initTable[WakuProtocol, int]() + strength[LightpushClientProtocol] = weakCount + strength[FilterClientProtocol] = weakCount + strength[StoreClientProtocol] = weakCount + let state = calculateConnectionState(protocols, strength, some(MockDLow)) + check state == ConnectionStatus.PartiallyConnected + + test "Connected, robust relay ignores store server": + let protocols = + @[ + protoHealthMock(RelayProtocol, HealthStatus.READY), + protoHealthMock(StoreProtocol, HealthStatus.READY), + ] + var strength = initTable[WakuProtocol, int]() + strength[RelayProtocol] = MockDLow + strength[StoreProtocol] = 0 + let state = calculateConnectionState(protocols, strength, some(MockDLow)) + check state == ConnectionStatus.Connected + + test "Connected, robust relay ignores store client": + let protocols = + @[ + protoHealthMock(RelayProtocol, HealthStatus.READY), + protoHealthMock(StoreProtocol, HealthStatus.READY), + protoHealthMock(StoreClientProtocol, 
HealthStatus.NOT_READY), + ] + var strength = initTable[WakuProtocol, int]() + strength[RelayProtocol] = MockDLow + strength[StoreProtocol] = 0 + strength[StoreClientProtocol] = 0 + let state = calculateConnectionState(protocols, strength, some(MockDLow)) + check state == ConnectionStatus.Connected + +suite "Health Monitor - events": + asyncTest "Core (relay) health update": + let + nodeAKey = generateSecp256k1Key() + nodeA = newTestWakuNode(nodeAKey, parseIpAddress("127.0.0.1"), Port(0)) + + (await nodeA.mountRelay()).expect("Node A failed to mount Relay") + + await nodeA.start() + + let monitorA = NodeHealthMonitor.new(nodeA) + + var + lastStatus = ConnectionStatus.Disconnected + callbackCount = 0 + healthChangeSignal = newAsyncEvent() + + monitorA.onConnectionStatusChange = proc(status: ConnectionStatus) {.async.} = + lastStatus = status + callbackCount.inc() + healthChangeSignal.fire() + + monitorA.startHealthMonitor().expect("Health monitor failed to start") + + let + nodeBKey = generateSecp256k1Key() + nodeB = newTestWakuNode(nodeBKey, parseIpAddress("127.0.0.1"), Port(0)) + + let driver = newSqliteArchiveDriver() + nodeB.mountArchive(driver).expect("Node B failed to mount archive") + + (await nodeB.mountRelay()).expect("Node B failed to mount relay") + await nodeB.mountStore() + + await nodeB.start() + + await nodeA.connectToNodes(@[nodeB.switch.peerInfo.toRemotePeerInfo()]) + + proc dummyHandler(topic: PubsubTopic, msg: WakuMessage): Future[void] {.async.} = + discard + + nodeA.subscribe((kind: PubsubSub, topic: DefaultPubsubTopic), dummyHandler).expect( + "Node A failed to subscribe" + ) + nodeB.subscribe((kind: PubsubSub, topic: DefaultPubsubTopic), dummyHandler).expect( + "Node B failed to subscribe" + ) + + let connectTimeLimit = Moment.now() + TestConnectivityTimeLimit + var gotConnected = false + + while Moment.now() < connectTimeLimit: + if lastStatus == ConnectionStatus.PartiallyConnected: + gotConnected = true + break + + if await 
healthChangeSignal.wait().withTimeout(connectTimeLimit - Moment.now()): + healthChangeSignal.clear() + + check: + gotConnected == true + callbackCount >= 1 + lastStatus == ConnectionStatus.PartiallyConnected + + healthChangeSignal.clear() + + await nodeB.stop() + await nodeA.disconnectNode(nodeB.switch.peerInfo.toRemotePeerInfo()) + + let disconnectTimeLimit = Moment.now() + TestConnectivityTimeLimit + var gotDisconnected = false + + while Moment.now() < disconnectTimeLimit: + if lastStatus == ConnectionStatus.Disconnected: + gotDisconnected = true + break + + if await healthChangeSignal.wait().withTimeout(disconnectTimeLimit - Moment.now()): + healthChangeSignal.clear() + + check: + gotDisconnected == true + + await monitorA.stopHealthMonitor() + await nodeA.stop() + + asyncTest "Edge (light client) health update": + let + nodeAKey = generateSecp256k1Key() + nodeA = newTestWakuNode(nodeAKey, parseIpAddress("127.0.0.1"), Port(0)) + + nodeA.mountLightpushClient() + await nodeA.mountFilterClient() + nodeA.mountStoreClient() + + await nodeA.start() + + let monitorA = NodeHealthMonitor.new(nodeA) + + var + lastStatus = ConnectionStatus.Disconnected + callbackCount = 0 + healthChangeSignal = newAsyncEvent() + + monitorA.onConnectionStatusChange = proc(status: ConnectionStatus) {.async.} = + lastStatus = status + callbackCount.inc() + healthChangeSignal.fire() + + monitorA.startHealthMonitor().expect("Health monitor failed to start") + + let + nodeBKey = generateSecp256k1Key() + nodeB = newTestWakuNode(nodeBKey, parseIpAddress("127.0.0.1"), Port(0)) + + let driver = newSqliteArchiveDriver() + nodeB.mountArchive(driver).expect("Node B failed to mount archive") + + (await nodeB.mountRelay()).expect("Node B failed to mount relay") + + (await nodeB.mountLightpush()).expect("Node B failed to mount lightpush") + await nodeB.mountFilter() + await nodeB.mountStore() + + await nodeB.start() + + await nodeA.connectToNodes(@[nodeB.switch.peerInfo.toRemotePeerInfo()]) + + let 
connectTimeLimit = Moment.now() + TestConnectivityTimeLimit + var gotConnected = false + + while Moment.now() < connectTimeLimit: + if lastStatus == ConnectionStatus.PartiallyConnected: + gotConnected = true + break + + if await healthChangeSignal.wait().withTimeout(connectTimeLimit - Moment.now()): + healthChangeSignal.clear() + + check: + gotConnected == true + callbackCount >= 1 + lastStatus == ConnectionStatus.PartiallyConnected + + healthChangeSignal.clear() + + await nodeB.stop() + await nodeA.disconnectNode(nodeB.switch.peerInfo.toRemotePeerInfo()) + + let disconnectTimeLimit = Moment.now() + TestConnectivityTimeLimit + var gotDisconnected = false + + while Moment.now() < disconnectTimeLimit: + if lastStatus == ConnectionStatus.Disconnected: + gotDisconnected = true + break + + if await healthChangeSignal.wait().withTimeout(disconnectTimeLimit - Moment.now()): + healthChangeSignal.clear() + + check: + gotDisconnected == true + lastStatus == ConnectionStatus.Disconnected + + await monitorA.stopHealthMonitor() + await nodeA.stop() diff --git a/tests/waku_relay/utils.nim b/tests/waku_relay/utils.nim index d5703d415..4e958a4ea 100644 --- a/tests/waku_relay/utils.nim +++ b/tests/waku_relay/utils.nim @@ -11,15 +11,15 @@ import from std/times import epochTime import - waku/ - [ - waku_relay, - node/waku_node, - node/peer_manager, - waku_core, - waku_node, - waku_rln_relay, - ], + waku/[ + waku_relay, + node/waku_node, + node/peer_manager, + waku_core, + waku_node, + waku_rln_relay, + common/broker/broker_context, + ], ../waku_store/store_utils, ../waku_archive/archive_utils, ../testlib/[wakucore, futures] diff --git a/tests/wakunode_rest/test_rest_health.nim b/tests/wakunode_rest/test_rest_health.nim index 2a70fee5f..37abaf4f5 100644 --- a/tests/wakunode_rest/test_rest_health.nim +++ b/tests/wakunode_rest/test_rest_health.nim @@ -10,6 +10,7 @@ import libp2p/crypto/crypto import waku/[ + common/waku_protocol, waku_node, node/waku_node as waku_node2, # TODO: Remove 
after moving `git_version` to the app code. @@ -78,47 +79,39 @@ suite "Waku v2 REST API - health": # When var response = await client.healthCheck() + let report = response.data # Then check: response.status == 200 $response.contentType == $MIMETYPE_JSON - response.data.nodeHealth == HealthStatus.READY - response.data.protocolsHealth.len() == 15 - response.data.protocolsHealth[0].protocol == "Relay" - response.data.protocolsHealth[0].health == HealthStatus.NOT_READY - response.data.protocolsHealth[0].desc == some("No connected peers") - response.data.protocolsHealth[1].protocol == "Rln Relay" - response.data.protocolsHealth[1].health == HealthStatus.READY - response.data.protocolsHealth[2].protocol == "Lightpush" - response.data.protocolsHealth[2].health == HealthStatus.NOT_MOUNTED - response.data.protocolsHealth[3].protocol == "Legacy Lightpush" - response.data.protocolsHealth[3].health == HealthStatus.NOT_MOUNTED - response.data.protocolsHealth[4].protocol == "Filter" - response.data.protocolsHealth[4].health == HealthStatus.NOT_MOUNTED - response.data.protocolsHealth[5].protocol == "Store" - response.data.protocolsHealth[5].health == HealthStatus.NOT_MOUNTED - response.data.protocolsHealth[6].protocol == "Legacy Store" - response.data.protocolsHealth[6].health == HealthStatus.NOT_MOUNTED - response.data.protocolsHealth[7].protocol == "Peer Exchange" - response.data.protocolsHealth[7].health == HealthStatus.NOT_MOUNTED - response.data.protocolsHealth[8].protocol == "Rendezvous" - response.data.protocolsHealth[8].health == HealthStatus.NOT_MOUNTED - response.data.protocolsHealth[9].protocol == "Mix" - response.data.protocolsHealth[9].health == HealthStatus.NOT_MOUNTED - response.data.protocolsHealth[10].protocol == "Lightpush Client" - response.data.protocolsHealth[10].health == HealthStatus.NOT_READY - response.data.protocolsHealth[10].desc == + report.nodeHealth == HealthStatus.READY + report.protocolsHealth.len() == 15 + + report.getHealth(RelayProtocol).health 
== HealthStatus.NOT_READY + report.getHealth(RelayProtocol).desc == some("No connected peers") + + report.getHealth(RlnRelayProtocol).health == HealthStatus.READY + + report.getHealth(LightpushProtocol).health == HealthStatus.NOT_MOUNTED + report.getHealth(LegacyLightpushProtocol).health == HealthStatus.NOT_MOUNTED + report.getHealth(FilterProtocol).health == HealthStatus.NOT_MOUNTED + report.getHealth(StoreProtocol).health == HealthStatus.NOT_MOUNTED + report.getHealth(LegacyStoreProtocol).health == HealthStatus.NOT_MOUNTED + report.getHealth(PeerExchangeProtocol).health == HealthStatus.NOT_MOUNTED + report.getHealth(RendezvousProtocol).health == HealthStatus.NOT_MOUNTED + report.getHealth(MixProtocol).health == HealthStatus.NOT_MOUNTED + + report.getHealth(LightpushClientProtocol).health == HealthStatus.NOT_READY + report.getHealth(LightpushClientProtocol).desc == some("No Lightpush service peer available yet") - response.data.protocolsHealth[11].protocol == "Legacy Lightpush Client" - response.data.protocolsHealth[11].health == HealthStatus.NOT_MOUNTED - response.data.protocolsHealth[12].protocol == "Store Client" - response.data.protocolsHealth[12].health == HealthStatus.NOT_MOUNTED - response.data.protocolsHealth[13].protocol == "Legacy Store Client" - response.data.protocolsHealth[13].health == HealthStatus.NOT_MOUNTED - response.data.protocolsHealth[14].protocol == "Filter Client" - response.data.protocolsHealth[14].health == HealthStatus.NOT_READY - response.data.protocolsHealth[14].desc == + + report.getHealth(LegacyLightpushClientProtocol).health == HealthStatus.NOT_MOUNTED + report.getHealth(StoreClientProtocol).health == HealthStatus.NOT_MOUNTED + report.getHealth(LegacyStoreClientProtocol).health == HealthStatus.NOT_MOUNTED + + report.getHealth(FilterClientProtocol).health == HealthStatus.NOT_READY + report.getHealth(FilterClientProtocol).desc == some("No Filter service peer available yet") await restServer.stop() diff --git a/waku/api/api.nim 
b/waku/api/api.nim index 41f4fd240..7f13919b3 100644 --- a/waku/api/api.nim +++ b/waku/api/api.nim @@ -1,7 +1,7 @@ -import chronicles, chronos, results +import chronicles, chronos, results, std/strutils import waku/factory/waku -import waku/[requests/health_request, waku_core, waku_node] +import waku/[requests/health_requests, waku_core, waku_node] import waku/node/delivery_service/send_service import waku/node/delivery_service/subscription_service import ./[api_conf, types] @@ -25,16 +25,8 @@ proc checkApiAvailability(w: Waku): Result[void, string] = if w.isNil(): return err("Waku node is not initialized") - # check if health is satisfactory - # If Node is not healthy, return err("Waku node is not healthy") - let healthStatus = RequestNodeHealth.request(w.brokerCtx) - - if healthStatus.isErr(): - warn "Failed to get Waku node health status: ", error = healthStatus.error - # Let's suppose the node is hesalthy enough, go ahead - else: - if healthStatus.get().healthStatus == NodeHealth.Unhealthy: - return err("Waku node is not healthy, has got no connections.") + # TODO: Reconcile request-bouncing health checks here with unit testing. + # (For now, better to just allow all sends and rely on retries.) return ok() diff --git a/waku/api/types.nim b/waku/api/types.nim index a0626e98c..9eae503c8 100644 --- a/waku/api/types.nim +++ b/waku/api/types.nim @@ -14,10 +14,10 @@ type RequestId* = distinct string - NodeHealth* {.pure.} = enum - Healthy - MinimallyHealthy - Unhealthy + ConnectionStatus* {.pure.} = enum + Disconnected + PartiallyConnected + Connected proc new*(T: typedesc[RequestId], rng: ref HmacDrbgContext): T = ## Generate a new RequestId using the provided RNG. 
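The hunk above replaces the three-tier `NodeHealth` enum (`Healthy`/`MinimallyHealthy`/`Unhealthy`) with a connectivity-oriented `ConnectionStatus` enum. As a rough illustration of how the three states can be derived from per-protocol peer counts: the sketch below is an assumption for illustration only, not nwaku's implementation — `HEALTHY_THRESHOLD`, `connection_status`, and the exact decision rules are hypothetical stand-ins for `HealthyThreshold` and `calculateConnectionState`, which is only partially visible in this patch.

```python
from enum import Enum

class ConnectionStatus(Enum):
    # Mirrors the new enum introduced in waku/api/types.nim
    Disconnected = 0
    PartiallyConnected = 1
    Connected = 2

# Hypothetical stand-in for nwaku's HealthyThreshold:
# minimum peers per client service (Relay excluded).
HEALTHY_THRESHOLD = 2

def connection_status(relay_peers, service_peers, d_low=None):
    # relay_peers: connected gossipsub mesh peers.
    # service_peers: per-protocol peer counts for client protocols.
    # d_low: gossipsub mesh lower bound, when relay is mounted.
    total = relay_peers + sum(service_peers.values())
    if total == 0:
        return ConnectionStatus.Disconnected
    # A relay mesh at or above D-low is treated as fully connected.
    if d_low is not None and relay_peers >= d_low:
        return ConnectionStatus.Connected
    # Otherwise require every client service to meet the threshold.
    if service_peers and all(n >= HEALTHY_THRESHOLD for n in service_peers.values()):
        return ConnectionStatus.Connected
    return ConnectionStatus.PartiallyConnected
```

Under these assumed rules, a node connected to a single service peer lands in `PartiallyConnected`, which is consistent with the expectations in the health-monitor tests earlier in this patch.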
diff --git a/waku/common/waku_protocol.nim b/waku/common/waku_protocol.nim new file mode 100644 index 000000000..5063f4c98 --- /dev/null +++ b/waku/common/waku_protocol.nim @@ -0,0 +1,24 @@ +{.push raises: [].} + +type WakuProtocol* {.pure.} = enum + RelayProtocol = "Relay" + RlnRelayProtocol = "Rln Relay" + StoreProtocol = "Store" + LegacyStoreProtocol = "Legacy Store" + FilterProtocol = "Filter" + LightpushProtocol = "Lightpush" + LegacyLightpushProtocol = "Legacy Lightpush" + PeerExchangeProtocol = "Peer Exchange" + RendezvousProtocol = "Rendezvous" + MixProtocol = "Mix" + StoreClientProtocol = "Store Client" + LegacyStoreClientProtocol = "Legacy Store Client" + FilterClientProtocol = "Filter Client" + LightpushClientProtocol = "Lightpush Client" + LegacyLightpushClientProtocol = "Legacy Lightpush Client" + +const + RelayProtocols* = {RelayProtocol} + StoreClientProtocols* = {StoreClientProtocol, LegacyStoreClientProtocol} + LightpushClientProtocols* = {LightpushClientProtocol, LegacyLightpushClientProtocol} + FilterClientProtocols* = {FilterClientProtocol} diff --git a/waku/events/events.nim b/waku/events/events.nim index 2a0af8828..46dd4fdd3 100644 --- a/waku/events/events.nim +++ b/waku/events/events.nim @@ -1,3 +1,3 @@ -import ./[message_events, delivery_events] +import ./[message_events, delivery_events, health_events, peer_events] -export message_events, delivery_events +export message_events, delivery_events, health_events, peer_events diff --git a/waku/events/health_events.nim b/waku/events/health_events.nim new file mode 100644 index 000000000..1e6decedb --- /dev/null +++ b/waku/events/health_events.nim @@ -0,0 +1,27 @@ +import waku/common/broker/event_broker + +import waku/api/types +import waku/node/health_monitor/[protocol_health, topic_health] +import waku/waku_core/topics + +export protocol_health, topic_health + +# Notify health changes to node connectivity +EventBroker: + type EventConnectionStatusChange* = object + connectionStatus*: 
ConnectionStatus + +# Notify health changes to a subscribed topic +# TODO: emit content topic health change events when subscribe/unsubscribe +# from/to content topic is provided in the new API (so we know which +# content topics are of interest to the application) +EventBroker: + type EventContentTopicHealthChange* = object + contentTopic*: ContentTopic + health*: TopicHealth + +# Notify health changes to a shard (pubsub topic) +EventBroker: + type EventShardTopicHealthChange* = object + topic*: PubsubTopic + health*: TopicHealth diff --git a/waku/events/peer_events.nim b/waku/events/peer_events.nim new file mode 100644 index 000000000..49dfa9f9a --- /dev/null +++ b/waku/events/peer_events.nim @@ -0,0 +1,13 @@ +import waku/common/broker/event_broker +import libp2p/switch + +type WakuPeerEventKind* {.pure.} = enum + EventConnected + EventDisconnected + EventIdentified + EventMetadataUpdated + +EventBroker: + type EventWakuPeer* = object + peerId*: PeerId + kind*: WakuPeerEventKind diff --git a/waku/factory/app_callbacks.nim b/waku/factory/app_callbacks.nim index d28b9f2d1..f1d3369be 100644 --- a/waku/factory/app_callbacks.nim +++ b/waku/factory/app_callbacks.nim @@ -1,6 +1,7 @@ -import ../waku_relay, ../node/peer_manager +import ../waku_relay, ../node/peer_manager, ../node/health_monitor/connection_status type AppCallbacks* = ref object relayHandler*: WakuRelayHandler topicHealthChangeHandler*: TopicHealthChangeHandler connectionChangeHandler*: ConnectionChangeHandler + connectionStatusChangeHandler*: ConnectionStatusChangeHandler diff --git a/waku/factory/builder.nim b/waku/factory/builder.nim index f379f92bb..e0b643fc0 100644 --- a/waku/factory/builder.nim +++ b/waku/factory/builder.nim @@ -15,7 +15,8 @@ import ../waku_node, ../node/peer_manager, ../common/rate_limit/setting, - ../common/utils/parse_size_units + ../common/utils/parse_size_units, + ../common/broker/broker_context type WakuNodeBuilder* = object # General diff --git a/waku/factory/waku.nim 
b/waku/factory/waku.nim index 3748847f1..dd253129c 100644 --- a/waku/factory/waku.nim +++ b/waku/factory/waku.nim @@ -17,35 +17,36 @@ import eth/p2p/discoveryv5/enr, presto, metrics, - metrics/chronos_httpserver -import - ../common/logging, - ../waku_core, - ../waku_node, - ../node/peer_manager, - ../node/health_monitor, - ../node/waku_metrics, - ../node/delivery_service/delivery_service, - ../rest_api/message_cache, - ../rest_api/endpoint/server, - ../rest_api/endpoint/builder as rest_server_builder, - ../waku_archive, - ../waku_relay/protocol, - ../discovery/waku_dnsdisc, - ../discovery/waku_discv5, - ../discovery/autonat_service, - ../waku_enr/sharding, - ../waku_rln_relay, - ../waku_store, - ../waku_filter_v2, - ../factory/node_factory, - ../factory/internal_config, - ../factory/app_callbacks, - ../waku_enr/multiaddr, - ./waku_conf, - ../common/broker/broker_context, - ../requests/health_request, - ../api/types + metrics/chronos_httpserver, + waku/[ + waku_core, + waku_node, + waku_archive, + waku_rln_relay, + waku_store, + waku_filter_v2, + waku_relay/protocol, + waku_enr/sharding, + waku_enr/multiaddr, + api/types, + common/logging, + common/broker/broker_context, + node/peer_manager, + node/health_monitor, + node/waku_metrics, + node/delivery_service/delivery_service, + rest_api/message_cache, + rest_api/endpoint/server, + rest_api/endpoint/builder as rest_server_builder, + discovery/waku_dnsdisc, + discovery/waku_discv5, + discovery/autonat_service, + requests/health_requests, + factory/node_factory, + factory/internal_config, + factory/app_callbacks, + ], + ./waku_conf logScope: topics = "wakunode waku" @@ -118,7 +119,10 @@ proc newCircuitRelay(isRelayClient: bool): Relay = return Relay.new() proc setupAppCallbacks( - node: WakuNode, conf: WakuConf, appCallbacks: AppCallbacks + node: WakuNode, + conf: WakuConf, + appCallbacks: AppCallbacks, + healthMonitor: NodeHealthMonitor, ): Result[void, string] = if appCallbacks.isNil(): info "No external callbacks to 
be set" @@ -159,6 +163,13 @@ proc setupAppCallbacks( err("Cannot configure connectionChangeHandler callback with empty peer manager") node.peerManager.onConnectionChange = appCallbacks.connectionChangeHandler + if not appCallbacks.connectionStatusChangeHandler.isNil(): + if healthMonitor.isNil(): + return + err("Cannot configure connectionStatusChangeHandler with empty health monitor") + + healthMonitor.onConnectionStatusChange = appCallbacks.connectionStatusChangeHandler + return ok() proc new*( @@ -192,7 +203,7 @@ proc new*( else: nil - node.setupAppCallbacks(wakuConf, appCallbacks).isOkOr: + node.setupAppCallbacks(wakuConf, appCallbacks, healthMonitor).isOkOr: error "Failed setting up app callbacks", error = error return err("Failed setting up app callbacks: " & $error) @@ -409,60 +420,48 @@ proc startWaku*(waku: ptr Waku): Future[Result[void, string]] {.async: (raises: waku[].healthMonitor.startHealthMonitor().isOkOr: return err("failed to start health monitor: " & $error) - ## Setup RequestNodeHealth provider + ## Setup RequestConnectionStatus provider - RequestNodeHealth.setProvider( + RequestConnectionStatus.setProvider( globalBrokerContext(), - proc(): Result[RequestNodeHealth, string] = - let healthReportFut = waku[].healthMonitor.getNodeHealthReport() - if not healthReportFut.completed(): - return err("Health report not available") + proc(): Result[RequestConnectionStatus, string] = try: - let healthReport = healthReportFut.read() - - # Check if Relay or Lightpush Client is ready (MinimallyHealthy condition) - var relayReady = false - var lightpushClientReady = false - var storeClientReady = false - var filterClientReady = false - - for protocolHealth in healthReport.protocolsHealth: - if protocolHealth.protocol == "Relay" and - protocolHealth.health == HealthStatus.READY: - relayReady = true - elif protocolHealth.protocol == "Lightpush Client" and - protocolHealth.health == HealthStatus.READY: - lightpushClientReady = true - elif protocolHealth.protocol 
== "Store Client" and - protocolHealth.health == HealthStatus.READY: - storeClientReady = true - elif protocolHealth.protocol == "Filter Client" and - protocolHealth.health == HealthStatus.READY: - filterClientReady = true - - # Determine node health based on protocol states - let isMinimallyHealthy = relayReady or lightpushClientReady - let nodeHealth = - if isMinimallyHealthy and storeClientReady and filterClientReady: - NodeHealth.Healthy - elif isMinimallyHealthy: - NodeHealth.MinimallyHealthy - else: - NodeHealth.Unhealthy - - debug "Providing health report", - nodeHealth = $nodeHealth, - relayReady = relayReady, - lightpushClientReady = lightpushClientReady, - storeClientReady = storeClientReady, - filterClientReady = filterClientReady, - details = $(healthReport) - - return ok(RequestNodeHealth(healthStatus: nodeHealth)) - except CatchableError as exc: - err("Failed to read health report: " & exc.msg), + let healthReport = waku[].healthMonitor.getSyncNodeHealthReport() + return + ok(RequestConnectionStatus(connectionStatus: healthReport.connectionStatus)) + except CatchableError: + err("Failed to read health report: " & getCurrentExceptionMsg()), ).isOkOr: - error "Failed to set RequestNodeHealth provider", error = error + error "Failed to set RequestConnectionStatus provider", error = error + + ## Setup RequestProtocolHealth provider + + RequestProtocolHealth.setProvider( + globalBrokerContext(), + proc( + protocol: WakuProtocol + ): Future[Result[RequestProtocolHealth, string]] {.async.} = + try: + let protocolHealthStatus = + await waku[].healthMonitor.getProtocolHealthInfo(protocol) + return ok(RequestProtocolHealth(healthStatus: protocolHealthStatus)) + except CatchableError: + return err("Failed to get protocol health: " & getCurrentExceptionMsg()), + ).isOkOr: + error "Failed to set RequestProtocolHealth provider", error = error + + ## Setup RequestHealthReport provider (The lost child) + + RequestHealthReport.setProvider( + globalBrokerContext(), + 
proc(): Future[Result[RequestHealthReport, string]] {.async.} = + try: + let report = await waku[].healthMonitor.getNodeHealthReport() + return ok(RequestHealthReport(healthReport: report)) + except CatchableError: + return err("Failed to get health report: " & getCurrentExceptionMsg()), + ).isOkOr: + error "Failed to set RequestHealthReport provider", error = error if conf.restServerConf.isSome(): rest_server_builder.startRestServerProtocolSupport( @@ -521,8 +520,8 @@ proc stop*(waku: Waku): Future[Result[void, string]] {.async: (raises: []).} = if not waku.healthMonitor.isNil(): await waku.healthMonitor.stopHealthMonitor() - ## Clear RequestNodeHealth provider - RequestNodeHealth.clearProvider(waku.brokerCtx) + ## Clear RequestConnectionStatus provider + RequestConnectionStatus.clearProvider(waku.brokerCtx) if not waku.restServer.isNil(): await waku.restServer.stop() diff --git a/waku/node/delivery_service/send_service/relay_processor.nim b/waku/node/delivery_service/send_service/relay_processor.nim index 974c22f6c..833d15845 100644 --- a/waku/node/delivery_service/send_service/relay_processor.nim +++ b/waku/node/delivery_service/send_service/relay_processor.nim @@ -1,7 +1,7 @@ import std/options import chronos, chronicles import waku/[waku_core], waku/waku_lightpush/[common, rpc] -import waku/requests/health_request +import waku/requests/health_requests import waku/common/broker/broker_context import waku/api/types import ./[delivery_task, send_processor] @@ -32,7 +32,7 @@ proc new*( ) proc isTopicHealthy(self: RelaySendProcessor, topic: PubsubTopic): bool {.gcsafe.} = - let healthReport = RequestRelayTopicsHealth.request(self.brokerCtx, @[topic]).valueOr: + let healthReport = RequestShardTopicsHealth.request(self.brokerCtx, @[topic]).valueOr: error "isTopicHealthy: failed to get health report", topic = topic, error = error return false diff --git a/waku/node/health_monitor.nim b/waku/node/health_monitor.nim index 854a8bbc0..6e42352d4 100644 --- 
a/waku/node/health_monitor.nim +++ b/waku/node/health_monitor.nim @@ -1,4 +1,9 @@ import - health_monitor/[node_health_monitor, protocol_health, online_monitor, health_status] + health_monitor/[ + node_health_monitor, protocol_health, online_monitor, health_status, + connection_status, health_report, + ] -export node_health_monitor, protocol_health, online_monitor, health_status +export + node_health_monitor, protocol_health, online_monitor, health_status, + connection_status, health_report diff --git a/waku/node/health_monitor/connection_status.nim b/waku/node/health_monitor/connection_status.nim new file mode 100644 index 000000000..77696130a --- /dev/null +++ b/waku/node/health_monitor/connection_status.nim @@ -0,0 +1,15 @@ +import chronos, results, std/strutils, ../../api/types + +export ConnectionStatus + +proc init*( + t: typedesc[ConnectionStatus], strRep: string +): Result[ConnectionStatus, string] = + try: + let status = parseEnum[ConnectionStatus](strRep) + return ok(status) + except ValueError: + return err("Invalid ConnectionStatus string representation: " & strRep) + +type ConnectionStatusChangeHandler* = + proc(status: ConnectionStatus): Future[void] {.gcsafe, raises: [Defect].} diff --git a/waku/node/health_monitor/health_report.nim b/waku/node/health_monitor/health_report.nim new file mode 100644 index 000000000..d6c23cd28 --- /dev/null +++ b/waku/node/health_monitor/health_report.nim @@ -0,0 +1,10 @@ +{.push raises: [].} + +import ./health_status, ./connection_status, ./protocol_health + +type HealthReport* = object + ## Rest API type returned for /health endpoint + ## + nodeHealth*: HealthStatus # legacy "READY" health indicator + connectionStatus*: ConnectionStatus # new "Connected" health indicator + protocolsHealth*: seq[ProtocolHealth] diff --git a/waku/node/health_monitor/node_health_monitor.nim b/waku/node/health_monitor/node_health_monitor.nim index 4b13dfd3d..ba0518e61 100644 --- a/waku/node/health_monitor/node_health_monitor.nim +++ 
b/waku/node/health_monitor/node_health_monitor.nim @@ -1,55 +1,89 @@ {.push raises: [].} import - std/[options, sets, random, sequtils], + std/[options, sets, random, sequtils, json, strutils, tables], chronos, chronicles, - libp2p/protocols/rendezvous - -import - ../waku_node, - ../kernel_api, - ../../waku_rln_relay, - ../../waku_relay, - ../peer_manager, - ./online_monitor, - ./health_status, - ./protocol_health + libp2p/protocols/rendezvous, + libp2p/protocols/pubsub, + libp2p/protocols/pubsub/rpc/messages, + waku/[ + waku_relay, + waku_rln_relay, + api/types, + events/health_events, + events/peer_events, + node/waku_node, + node/peer_manager, + node/kernel_api, + node/health_monitor/online_monitor, + node/health_monitor/health_status, + node/health_monitor/health_report, + node/health_monitor/connection_status, + node/health_monitor/protocol_health, + ] ## This module is aimed to check the state of the "self" Waku Node # randomize initializes sdt/random's random number generator # if not called, the outcome of randomization procedures will be the same in every run -randomize() +random.randomize() -type - HealthReport* = object - nodeHealth*: HealthStatus - protocolsHealth*: seq[ProtocolHealth] +const HealthyThreshold* = 2 + ## minimum peers required for all services for a Connected status, excluding Relay - NodeHealthMonitor* = ref object - nodeHealth: HealthStatus - node: WakuNode - onlineMonitor*: OnlineMonitor - keepAliveFut: Future[void] +type NodeHealthMonitor* = ref object + nodeHealth: HealthStatus + node: WakuNode + onlineMonitor*: OnlineMonitor + keepAliveFut: Future[void] + healthLoopFut: Future[void] + healthUpdateEvent: AsyncEvent + connectionStatus: ConnectionStatus + onConnectionStatusChange*: ConnectionStatusChangeHandler + cachedProtocols: seq[ProtocolHealth] + ## state of each protocol to report. + ## calculated on last event that can change any protocol's state so fetching a report is fast. 
+ strength: Table[WakuProtocol, int] + ## latest known connectivity strength (e.g. connected peer count) metric for each protocol. + ## if it doesn't make sense for the protocol in question, this is set to zero. + relayObserver: PubSubObserver + peerEventListener: EventWakuPeerListener + +func getHealth*(report: HealthReport, kind: WakuProtocol): ProtocolHealth = + for h in report.protocolsHealth: + if h.protocol == $kind: + return h + # Shouldn't happen, but if it does, then assume protocol is not mounted + return ProtocolHealth.init(kind) + +proc countCapablePeers(hm: NodeHealthMonitor, codec: string): int = + if isNil(hm.node.peerManager): + return 0 + + return hm.node.peerManager.getCapablePeersCount(codec) proc getRelayHealth(hm: NodeHealthMonitor): ProtocolHealth = - var p = ProtocolHealth.init("Relay") + var p = ProtocolHealth.init(WakuProtocol.RelayProtocol) - if hm.node.wakuRelay == nil: + if isNil(hm.node.wakuRelay): + hm.strength[WakuProtocol.RelayProtocol] = 0 return p.notMounted() let relayPeers = hm.node.wakuRelay.getConnectedPubSubPeers(pubsubTopic = "").valueOr: + hm.strength[WakuProtocol.RelayProtocol] = 0 return p.notMounted() - if relayPeers.len() == 0: + let count = relayPeers.len + hm.strength[WakuProtocol.RelayProtocol] = count + if count == 0: return p.notReady("No connected peers") return p.ready() proc getRlnRelayHealth(hm: NodeHealthMonitor): Future[ProtocolHealth] {.async.} = - var p = ProtocolHealth.init("Rln Relay") - if hm.node.wakuRlnRelay.isNil(): + var p = ProtocolHealth.init(WakuProtocol.RlnRelayProtocol) + if isNil(hm.node.wakuRlnRelay): return p.notMounted() const FutIsReadyTimout = 5.seconds @@ -72,121 +106,144 @@ proc getRlnRelayHealth(hm: NodeHealthMonitor): Future[ProtocolHealth] {.async.} proc getLightpushHealth( hm: NodeHealthMonitor, relayHealth: HealthStatus ): ProtocolHealth = - var p = ProtocolHealth.init("Lightpush") + var p = ProtocolHealth.init(WakuProtocol.LightpushProtocol) - if hm.node.wakuLightPush == nil: + if 
isNil(hm.node.wakuLightPush): + hm.strength[WakuProtocol.LightpushProtocol] = 0 return p.notMounted() + let peerCount = countCapablePeers(hm, WakuLightPushCodec) + hm.strength[WakuProtocol.LightpushProtocol] = peerCount + if relayHealth == HealthStatus.READY: return p.ready() return p.notReady("Node has no relay peers to fullfill push requests") -proc getLightpushClientHealth( - hm: NodeHealthMonitor, relayHealth: HealthStatus -): ProtocolHealth = - var p = ProtocolHealth.init("Lightpush Client") - - if hm.node.wakuLightpushClient == nil: - return p.notMounted() - - let selfServiceAvailable = - hm.node.wakuLightPush != nil and relayHealth == HealthStatus.READY - let servicePeerAvailable = hm.node.peerManager.selectPeer(WakuLightPushCodec).isSome() - - if selfServiceAvailable or servicePeerAvailable: - return p.ready() - - return p.notReady("No Lightpush service peer available yet") - proc getLegacyLightpushHealth( hm: NodeHealthMonitor, relayHealth: HealthStatus ): ProtocolHealth = - var p = ProtocolHealth.init("Legacy Lightpush") + var p = ProtocolHealth.init(WakuProtocol.LegacyLightpushProtocol) - if hm.node.wakuLegacyLightPush == nil: + if isNil(hm.node.wakuLegacyLightPush): + hm.strength[WakuProtocol.LegacyLightpushProtocol] = 0 return p.notMounted() + let peerCount = countCapablePeers(hm, WakuLegacyLightPushCodec) + hm.strength[WakuProtocol.LegacyLightpushProtocol] = peerCount + if relayHealth == HealthStatus.READY: return p.ready() return p.notReady("Node has no relay peers to fullfill push requests") -proc getLegacyLightpushClientHealth( - hm: NodeHealthMonitor, relayHealth: HealthStatus -): ProtocolHealth = - var p = ProtocolHealth.init("Legacy Lightpush Client") - - if hm.node.wakuLegacyLightpushClient == nil: - return p.notMounted() - - if (hm.node.wakuLegacyLightPush != nil and relayHealth == HealthStatus.READY) or - hm.node.peerManager.selectPeer(WakuLegacyLightPushCodec).isSome(): - return p.ready() - - return p.notReady("No Lightpush service peer 
available yet") - proc getFilterHealth(hm: NodeHealthMonitor, relayHealth: HealthStatus): ProtocolHealth = - var p = ProtocolHealth.init("Filter") + var p = ProtocolHealth.init(WakuProtocol.FilterProtocol) - if hm.node.wakuFilter == nil: + if isNil(hm.node.wakuFilter): + hm.strength[WakuProtocol.FilterProtocol] = 0 return p.notMounted() + let peerCount = countCapablePeers(hm, WakuFilterSubscribeCodec) + hm.strength[WakuProtocol.FilterProtocol] = peerCount + if relayHealth == HealthStatus.READY: return p.ready() return p.notReady("Relay is not ready, filter will not be able to sort out messages") -proc getFilterClientHealth( - hm: NodeHealthMonitor, relayHealth: HealthStatus -): ProtocolHealth = - var p = ProtocolHealth.init("Filter Client") - - if hm.node.wakuFilterClient == nil: - return p.notMounted() - - if hm.node.peerManager.selectPeer(WakuFilterSubscribeCodec).isSome(): - return p.ready() - - return p.notReady("No Filter service peer available yet") - proc getStoreHealth(hm: NodeHealthMonitor): ProtocolHealth = - var p = ProtocolHealth.init("Store") + var p = ProtocolHealth.init(WakuProtocol.StoreProtocol) - if hm.node.wakuStore == nil: + if isNil(hm.node.wakuStore): + hm.strength[WakuProtocol.StoreProtocol] = 0 return p.notMounted() + let peerCount = countCapablePeers(hm, WakuStoreCodec) + hm.strength[WakuProtocol.StoreProtocol] = peerCount return p.ready() -proc getStoreClientHealth(hm: NodeHealthMonitor): ProtocolHealth = - var p = ProtocolHealth.init("Store Client") +proc getLegacyStoreHealth(hm: NodeHealthMonitor): ProtocolHealth = + var p = ProtocolHealth.init(WakuProtocol.LegacyStoreProtocol) - if hm.node.wakuStoreClient == nil: + if isNil(hm.node.wakuLegacyStore): + hm.strength[WakuProtocol.LegacyStoreProtocol] = 0 return p.notMounted() - if hm.node.peerManager.selectPeer(WakuStoreCodec).isSome() or hm.node.wakuStore != nil: + let peerCount = hm.countCapablePeers(WakuLegacyStoreCodec) + hm.strength[WakuProtocol.LegacyStoreProtocol] = peerCount + 
return p.ready() + +proc getLightpushClientHealth(hm: NodeHealthMonitor): ProtocolHealth = + var p = ProtocolHealth.init(WakuProtocol.LightpushClientProtocol) + + if isNil(hm.node.wakuLightpushClient): + hm.strength[WakuProtocol.LightpushClientProtocol] = 0 + return p.notMounted() + + let peerCount = countCapablePeers(hm, WakuLightPushCodec) + hm.strength[WakuProtocol.LightpushClientProtocol] = peerCount + + if peerCount > 0: + return p.ready() + return p.notReady("No Lightpush service peer available yet") + +proc getLegacyLightpushClientHealth(hm: NodeHealthMonitor): ProtocolHealth = + var p = ProtocolHealth.init(WakuProtocol.LegacyLightpushClientProtocol) + + if isNil(hm.node.wakuLegacyLightpushClient): + hm.strength[WakuProtocol.LegacyLightpushClientProtocol] = 0 + return p.notMounted() + + let peerCount = countCapablePeers(hm, WakuLegacyLightPushCodec) + hm.strength[WakuProtocol.LegacyLightpushClientProtocol] = peerCount + + if peerCount > 0: + return p.ready() + return p.notReady("No Lightpush service peer available yet") + +proc getFilterClientHealth(hm: NodeHealthMonitor): ProtocolHealth = + var p = ProtocolHealth.init(WakuProtocol.FilterClientProtocol) + + if isNil(hm.node.wakuFilterClient): + hm.strength[WakuProtocol.FilterClientProtocol] = 0 + return p.notMounted() + + let peerCount = countCapablePeers(hm, WakuFilterSubscribeCodec) + hm.strength[WakuProtocol.FilterClientProtocol] = peerCount + + if peerCount > 0: + return p.ready() + return p.notReady("No Filter service peer available yet") + +proc getStoreClientHealth(hm: NodeHealthMonitor): ProtocolHealth = + var p = ProtocolHealth.init(WakuProtocol.StoreClientProtocol) + + if isNil(hm.node.wakuStoreClient): + hm.strength[WakuProtocol.StoreClientProtocol] = 0 + return p.notMounted() + + let peerCount = countCapablePeers(hm, WakuStoreCodec) + hm.strength[WakuProtocol.StoreClientProtocol] = peerCount + + if peerCount > 0 or not isNil(hm.node.wakuStore): return p.ready() return p.notReady( "No Store 
service peer available yet, neither Store service set up for the node" ) -proc getLegacyStoreHealth(hm: NodeHealthMonitor): ProtocolHealth = - var p = ProtocolHealth.init("Legacy Store") - - if hm.node.wakuLegacyStore == nil: - return p.notMounted() - - return p.ready() - proc getLegacyStoreClientHealth(hm: NodeHealthMonitor): ProtocolHealth = - var p = ProtocolHealth.init("Legacy Store Client") + var p = ProtocolHealth.init(WakuProtocol.LegacyStoreClientProtocol) - if hm.node.wakuLegacyStoreClient == nil: + if isNil(hm.node.wakuLegacyStoreClient): + hm.strength[WakuProtocol.LegacyStoreClientProtocol] = 0 return p.notMounted() - if hm.node.peerManager.selectPeer(WakuLegacyStoreCodec).isSome() or - hm.node.wakuLegacyStore != nil: + let peerCount = countCapablePeers(hm, WakuLegacyStoreCodec) + hm.strength[WakuProtocol.LegacyStoreClientProtocol] = peerCount + + if peerCount > 0 or not isNil(hm.node.wakuLegacyStore): return p.ready() return p.notReady( @@ -194,38 +251,305 @@ proc getLegacyStoreClientHealth(hm: NodeHealthMonitor): ProtocolHealth = ) proc getPeerExchangeHealth(hm: NodeHealthMonitor): ProtocolHealth = - var p = ProtocolHealth.init("Peer Exchange") + var p = ProtocolHealth.init(WakuProtocol.PeerExchangeProtocol) - if hm.node.wakuPeerExchange == nil: + if isNil(hm.node.wakuPeerExchange): + hm.strength[WakuProtocol.PeerExchangeProtocol] = 0 return p.notMounted() + let peerCount = countCapablePeers(hm, WakuPeerExchangeCodec) + hm.strength[WakuProtocol.PeerExchangeProtocol] = peerCount + return p.ready() proc getRendezvousHealth(hm: NodeHealthMonitor): ProtocolHealth = - var p = ProtocolHealth.init("Rendezvous") + var p = ProtocolHealth.init(WakuProtocol.RendezvousProtocol) - if hm.node.wakuRendezvous == nil: + if isNil(hm.node.wakuRendezvous): + hm.strength[WakuProtocol.RendezvousProtocol] = 0 return p.notMounted() - if hm.node.peerManager.switch.peerStore.peers(RendezVousCodec).len() == 0: + let peerCount = countCapablePeers(hm, RendezVousCodec) + 
hm.strength[WakuProtocol.RendezvousProtocol] = peerCount + if peerCount == 0: return p.notReady("No Rendezvous peers are available yet") return p.ready() proc getMixHealth(hm: NodeHealthMonitor): ProtocolHealth = - var p = ProtocolHealth.init("Mix") + var p = ProtocolHealth.init(WakuProtocol.MixProtocol) - if hm.node.wakuMix.isNil(): + if isNil(hm.node.wakuMix): return p.notMounted() return p.ready() +proc getSyncProtocolHealthInfo*( + hm: NodeHealthMonitor, protocol: WakuProtocol +): ProtocolHealth = + ## Get ProtocolHealth for a given protocol that can provide it synchronously + ## + case protocol + of WakuProtocol.RelayProtocol: + return hm.getRelayHealth() + of WakuProtocol.StoreProtocol: + return hm.getStoreHealth() + of WakuProtocol.LegacyStoreProtocol: + return hm.getLegacyStoreHealth() + of WakuProtocol.FilterProtocol: + return hm.getFilterHealth(hm.getRelayHealth().health) + of WakuProtocol.LightpushProtocol: + return hm.getLightpushHealth(hm.getRelayHealth().health) + of WakuProtocol.LegacyLightpushProtocol: + return hm.getLegacyLightpushHealth(hm.getRelayHealth().health) + of WakuProtocol.PeerExchangeProtocol: + return hm.getPeerExchangeHealth() + of WakuProtocol.RendezvousProtocol: + return hm.getRendezvousHealth() + of WakuProtocol.MixProtocol: + return hm.getMixHealth() + of WakuProtocol.StoreClientProtocol: + return hm.getStoreClientHealth() + of WakuProtocol.LegacyStoreClientProtocol: + return hm.getLegacyStoreClientHealth() + of WakuProtocol.FilterClientProtocol: + return hm.getFilterClientHealth() + of WakuProtocol.LightpushClientProtocol: + return hm.getLightpushClientHealth() + of WakuProtocol.LegacyLightpushClientProtocol: + return hm.getLegacyLightpushClientHealth() + of WakuProtocol.RlnRelayProtocol: + # Could waitFor here but we don't want to block the main thread. + # Could also return a cached value from a previous check. 
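As an aside to the sync/async split introduced here: the two entry points can be exercised from caller code roughly as follows. This is an illustrative sketch only, not part of the patch; it assumes the `NodeHealthMonitor`, `WakuProtocol`, and `ProtocolHealth` APIs defined in this file.

```nim
# Illustrative sketch (not part of the patch): querying health via the two
# entry points above. Most protocols answer synchronously; RLN Relay must go
# through the async variant, which awaits getRlnRelayHealth internally.
proc printHealth(hm: NodeHealthMonitor) {.async.} =
  # Sync path: safe to call without awaiting.
  let relay = hm.getSyncProtocolHealthInfo(WakuProtocol.RelayProtocol)
  echo $relay

  # Async path: required for RlnRelayProtocol; falls back to the sync
  # implementation for every other protocol.
  let rln = await hm.getProtocolHealthInfo(WakuProtocol.RlnRelayProtocol)
  echo $rln
```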
+ var p = ProtocolHealth.init(protocol) + return p.notReady("RLN Relay health check is async") + else: + var p = ProtocolHealth.init(protocol) + return p.notMounted() + +proc getProtocolHealthInfo*( + hm: NodeHealthMonitor, protocol: WakuProtocol +): Future[ProtocolHealth] {.async.} = + ## Get ProtocolHealth for a given protocol + ## + case protocol + of WakuProtocol.RlnRelayProtocol: + return await hm.getRlnRelayHealth() + else: + return hm.getSyncProtocolHealthInfo(protocol) + +proc getSyncAllProtocolHealthInfo(hm: NodeHealthMonitor): seq[ProtocolHealth] = + ## Get ProtocolHealth for the subset of protocols that can provide it synchronously + ## + var protocols: seq[ProtocolHealth] = @[] + let relayHealth = hm.getRelayHealth() + protocols.add(relayHealth) + + protocols.add(hm.getLightpushHealth(relayHealth.health)) + protocols.add(hm.getLegacyLightpushHealth(relayHealth.health)) + protocols.add(hm.getFilterHealth(relayHealth.health)) + protocols.add(hm.getStoreHealth()) + protocols.add(hm.getLegacyStoreHealth()) + protocols.add(hm.getPeerExchangeHealth()) + protocols.add(hm.getRendezvousHealth()) + protocols.add(hm.getMixHealth()) + + protocols.add(hm.getLightpushClientHealth()) + protocols.add(hm.getLegacyLightpushClientHealth()) + protocols.add(hm.getStoreClientHealth()) + protocols.add(hm.getLegacyStoreClientHealth()) + protocols.add(hm.getFilterClientHealth()) + return protocols + +proc getAllProtocolHealthInfo( + hm: NodeHealthMonitor +): Future[seq[ProtocolHealth]] {.async.} = + ## Get ProtocolHealth for all protocols + ## + var protocols = hm.getSyncAllProtocolHealthInfo() + + let rlnHealth = await hm.getRlnRelayHealth() + protocols.add(rlnHealth) + + return protocols + +proc calculateConnectionState*( + protocols: seq[ProtocolHealth], + strength: Table[WakuProtocol, int], ## latest connectivity strength (e.g. 
peer count) for a protocol + dLowOpt: Option[int], ## minimum relay peers for Connected status if in Core (Relay) mode +): ConnectionStatus = + var + relayCount = 0 + lightpushCount = 0 + filterCount = 0 + storeClientCount = 0 + + for p in protocols: + let kind = + try: + parseEnum[WakuProtocol](p.protocol) + except ValueError: + continue + + if p.health != HealthStatus.READY: + continue + + let strength = strength.getOrDefault(kind, 0) + + if kind in RelayProtocols: + relayCount = max(relayCount, strength) + elif kind in StoreClientProtocols: + storeClientCount = max(storeClientCount, strength) + elif kind in LightpushClientProtocols: + lightpushCount = max(lightpushCount, strength) + elif kind in FilterClientProtocols: + filterCount = max(filterCount, strength) + + debug "calculateConnectionState", + protocol = kind, + strength = strength, + relayCount = relayCount, + storeClientCount = storeClientCount, + lightpushCount = lightpushCount, + filterCount = filterCount + + # Relay connectivity should be a sufficient check in Core mode. + # "Store peers" are relay peers because incoming messages in + # the relay are input to the store server. + # But if Store server (or client, even) is not mounted as well, this logic assumes + # the user knows what they're doing. + + if dLowOpt.isSome(): + if relayCount >= dLowOpt.get(): + return ConnectionStatus.Connected + + if relayCount > 0: + return ConnectionStatus.PartiallyConnected + + # No relay connectivity. Relay might not be mounted, or may just have zero peers. + # Fall back to Edge check in any case to be sure. 
+ + let canSend = lightpushCount > 0 + let canReceive = filterCount > 0 + let canStore = storeClientCount > 0 + + let meetsMinimum = canSend and canReceive and canStore + + if not meetsMinimum: + return ConnectionStatus.Disconnected + + let isEdgeRobust = + (lightpushCount >= HealthyThreshold) and (filterCount >= HealthyThreshold) and + (storeClientCount >= HealthyThreshold) + + if isEdgeRobust: + return ConnectionStatus.Connected + + return ConnectionStatus.PartiallyConnected + +proc calculateConnectionState*(hm: NodeHealthMonitor): ConnectionStatus = + let dLow = + if isNil(hm.node.wakuRelay): + none(int) + else: + some(hm.node.wakuRelay.parameters.dLow) + return calculateConnectionState(hm.cachedProtocols, hm.strength, dLow) + +proc getNodeHealthReport*(hm: NodeHealthMonitor): Future[HealthReport] {.async.} = + ## Get a HealthReport that includes all protocols + ## + var report: HealthReport + + if hm.nodeHealth == HealthStatus.INITIALIZING or + hm.nodeHealth == HealthStatus.SHUTTING_DOWN: + report.nodeHealth = hm.nodeHealth + report.connectionStatus = ConnectionStatus.Disconnected + return report + + if hm.cachedProtocols.len == 0: + hm.cachedProtocols = await hm.getAllProtocolHealthInfo() + hm.connectionStatus = hm.calculateConnectionState() + + report.nodeHealth = HealthStatus.READY + report.connectionStatus = hm.connectionStatus + report.protocolsHealth = hm.cachedProtocols + return report + +proc getSyncNodeHealthReport*(hm: NodeHealthMonitor): HealthReport = + ## Get a HealthReport that includes the subset of protocols that inform health synchronously + ## + var report: HealthReport + + if hm.nodeHealth == HealthStatus.INITIALIZING or + hm.nodeHealth == HealthStatus.SHUTTING_DOWN: + report.nodeHealth = hm.nodeHealth + report.connectionStatus = ConnectionStatus.Disconnected + return report + + if hm.cachedProtocols.len == 0: + hm.cachedProtocols = hm.getSyncAllProtocolHealthInfo() + hm.connectionStatus = hm.calculateConnectionState() + + report.nodeHealth = 
HealthStatus.READY
+  report.connectionStatus = hm.connectionStatus
+  report.protocolsHealth = hm.cachedProtocols
+  return report
+
+proc onRelayMsg(
+    hm: NodeHealthMonitor, peer: PubSubPeer, msg: var RPCMsg
+) {.gcsafe, raises: [].} =
+  ## Inspect Relay events for health-update relevance in Core (Relay) mode.
+  ##
+  ## In Core (Relay) mode, the connectivity health state is mostly determined
+  ## by the relay protocol state (it is the dominant factor), and a remote
+  ## relay peer can only affect this node's relay health via a subscription
+  ## change or a mesh (GRAFT/PRUNE) change.
+  ##
+
+  if msg.subscriptions.len == 0:
+    if msg.control.isNone():
+      return
+    let ctrl = msg.control.get()
+    if ctrl.graft.len == 0 and ctrl.prune.len == 0:
+      return
+
+  hm.healthUpdateEvent.fire()
+
+proc healthLoop(hm: NodeHealthMonitor) {.async.} =
+  ## Re-evaluate the node's global health state when notified of a potential change,
+  ## and notify the application if the state actually changed since last reported.
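The notify-and-coalesce pattern that `healthLoop` relies on can be sketched in isolation. Illustrative only, not part of the patch; it assumes nothing beyond chronos.

```nim
# Minimal standalone sketch of the AsyncEvent coalescing pattern: many fire()
# calls between wakeups collapse into a single recomputation pass.
import chronos

proc coalescingLoop(ev: AsyncEvent) {.async.} =
  while true:
    await ev.wait()  # sleep until someone fires the event
    ev.clear()       # reset so later fires trigger a fresh wakeup
    # ... recompute derived state exactly once per burst of notifications ...
    await sleepAsync(100.milliseconds)  # safety cooldown between passes
```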
+ info "Health monitor loop start" + while true: + try: + await hm.healthUpdateEvent.wait() + hm.healthUpdateEvent.clear() + + hm.cachedProtocols = await hm.getAllProtocolHealthInfo() + let newConnectionStatus = hm.calculateConnectionState() + + if newConnectionStatus != hm.connectionStatus: + hm.connectionStatus = newConnectionStatus + + EventConnectionStatusChange.emit(hm.node.brokerCtx, newConnectionStatus) + + if not isNil(hm.onConnectionStatusChange): + await hm.onConnectionStatusChange(newConnectionStatus) + except CancelledError: + break + except Exception as e: + error "HealthMonitor: error in update loop", error = e.msg + + # safety cooldown to protect from edge cases + await sleepAsync(100.milliseconds) + + info "Health monitor loop end" + proc selectRandomPeersForKeepalive( node: WakuNode, outPeers: seq[PeerId], numRandomPeers: int ): Future[seq[PeerId]] {.async.} = ## Select peers for random keepalive, prioritizing mesh peers - if node.wakuRelay.isNil(): + if isNil(node.wakuRelay): return selectRandomPeers(outPeers, numRandomPeers) let meshPeers = node.wakuRelay.getPeersInMesh().valueOr: @@ -359,45 +683,55 @@ proc startKeepalive*( hm.keepAliveFut = hm.node.keepAliveLoop(randomPeersKeepalive, allPeersKeepalive) return ok() -proc getNodeHealthReport*(hm: NodeHealthMonitor): Future[HealthReport] {.async.} = - var report: HealthReport - report.nodeHealth = hm.nodeHealth - - let relayHealth = hm.getRelayHealth() - report.protocolsHealth.add(relayHealth) - report.protocolsHealth.add(await hm.getRlnRelayHealth()) - report.protocolsHealth.add(hm.getLightpushHealth(relayHealth.health)) - report.protocolsHealth.add(hm.getLegacyLightpushHealth(relayHealth.health)) - report.protocolsHealth.add(hm.getFilterHealth(relayHealth.health)) - report.protocolsHealth.add(hm.getStoreHealth()) - report.protocolsHealth.add(hm.getLegacyStoreHealth()) - report.protocolsHealth.add(hm.getPeerExchangeHealth()) - report.protocolsHealth.add(hm.getRendezvousHealth()) - 
report.protocolsHealth.add(hm.getMixHealth()) - - report.protocolsHealth.add(hm.getLightpushClientHealth(relayHealth.health)) - report.protocolsHealth.add(hm.getLegacyLightpushClientHealth(relayHealth.health)) - report.protocolsHealth.add(hm.getStoreClientHealth()) - report.protocolsHealth.add(hm.getLegacyStoreClientHealth()) - report.protocolsHealth.add(hm.getFilterClientHealth(relayHealth.health)) - return report - proc setOverallHealth*(hm: NodeHealthMonitor, health: HealthStatus) = hm.nodeHealth = health proc startHealthMonitor*(hm: NodeHealthMonitor): Result[void, string] = hm.onlineMonitor.startOnlineMonitor() + + if isNil(hm.node.peerManager): + return err("startHealthMonitor: no node peerManager to monitor") + + if not isNil(hm.node.wakuRelay): + hm.relayObserver = PubSubObserver( + onRecv: proc(peer: PubSubPeer, msgs: var RPCMsg) {.gcsafe, raises: [].} = + hm.onRelayMsg(peer, msgs) + ) + hm.node.wakuRelay.addObserver(hm.relayObserver) + + hm.peerEventListener = EventWakuPeer.listen( + hm.node.brokerCtx, + proc(evt: EventWakuPeer): Future[void] {.async: (raises: []), gcsafe.} = + ## Recompute health on any peer changing anything (join, leave, identify, metadata update) + hm.healthUpdateEvent.fire(), + ).valueOr: + return err("Failed to subscribe to peer events: " & error) + + hm.healthUpdateEvent = newAsyncEvent() + hm.healthUpdateEvent.fire() + + hm.healthLoopFut = hm.healthLoop() + hm.startKeepalive().isOkOr: return err("startHealthMonitor: failed starting keep alive: " & error) return ok() proc stopHealthMonitor*(hm: NodeHealthMonitor) {.async.} = - if not hm.onlineMonitor.isNil(): + if not isNil(hm.onlineMonitor): await hm.onlineMonitor.stopOnlineMonitor() - if not hm.keepAliveFut.isNil(): + if not isNil(hm.keepAliveFut): await hm.keepAliveFut.cancelAndWait() + if not isNil(hm.healthLoopFut): + await hm.healthLoopFut.cancelAndWait() + + if hm.peerEventListener.id != 0: + EventWakuPeer.dropListener(hm.node.brokerCtx, hm.peerEventListener) + + if not 
isNil(hm.node.wakuRelay) and not isNil(hm.relayObserver): + hm.node.wakuRelay.removeObserver(hm.relayObserver) + proc new*( T: type NodeHealthMonitor, node: WakuNode, @@ -406,4 +740,10 @@ proc new*( let om = OnlineMonitor.init(dnsNameServers) om.setPeerStoreToOnlineMonitor(node.switch.peerStore) om.addOnlineStateObserver(node.peerManager.getOnlineStateObserver()) - T(nodeHealth: INITIALIZING, node: node, onlineMonitor: om) + T( + nodeHealth: INITIALIZING, + node: node, + onlineMonitor: om, + connectionStatus: ConnectionStatus.Disconnected, + strength: initTable[WakuProtocol, int](), + ) diff --git a/waku/node/health_monitor/protocol_health.nim b/waku/node/health_monitor/protocol_health.nim index 7bacea94b..4479888c8 100644 --- a/waku/node/health_monitor/protocol_health.nim +++ b/waku/node/health_monitor/protocol_health.nim @@ -1,5 +1,8 @@ import std/[options, strformat] import ./health_status +import waku/common/waku_protocol + +export waku_protocol type ProtocolHealth* = object protocol*: string @@ -39,8 +42,7 @@ proc shuttingDown*(p: var ProtocolHealth): ProtocolHealth = proc `$`*(p: ProtocolHealth): string = return fmt"protocol: {p.protocol}, health: {p.health}, description: {p.desc}" -proc init*(p: typedesc[ProtocolHealth], protocol: string): ProtocolHealth = - let p = ProtocolHealth( - protocol: protocol, health: HealthStatus.NOT_MOUNTED, desc: none[string]() +proc init*(p: typedesc[ProtocolHealth], protocol: WakuProtocol): ProtocolHealth = + return ProtocolHealth( + protocol: $protocol, health: HealthStatus.NOT_MOUNTED, desc: none[string]() ) - return p diff --git a/waku/node/peer_manager/peer_manager.nim b/waku/node/peer_manager/peer_manager.nim index bdb68905e..834fb19cf 100644 --- a/waku/node/peer_manager/peer_manager.nim +++ b/waku/node/peer_manager/peer_manager.nim @@ -1,27 +1,31 @@ {.push raises: [].} import - std/[options, sets, sequtils, times, strformat, strutils, math, random, tables], + std/ + [ + options, sets, sequtils, times, strformat, 
strutils, math, random, tables, + algorithm, + ], chronos, chronicles, metrics, - libp2p/multistream, - libp2p/muxers/muxer, - libp2p/nameresolving/nameresolver, - libp2p/peerstore - -import - ../../common/nimchronos, - ../../common/enr, - ../../common/callbacks, - ../../common/utils/parse_size_units, - ../../waku_core, - ../../waku_relay, - ../../waku_relay/protocol, - ../../waku_enr/sharding, - ../../waku_enr/capabilities, - ../../waku_metadata, - ../health_monitor/online_monitor, + libp2p/[multistream, muxers/muxer, nameresolving/nameresolver, peerstore], + waku/[ + waku_core, + waku_relay, + waku_metadata, + waku_core/topics/sharding, + waku_relay/protocol, + waku_enr/sharding, + waku_enr/capabilities, + events/peer_events, + common/nimchronos, + common/enr, + common/callbacks, + common/utils/parse_size_units, + common/broker/broker_context, + node/health_monitor/online_monitor, + ], ./peer_store/peer_storage, ./waku_peer_store @@ -84,6 +88,7 @@ type ConnectionChangeHandler* = proc( ): Future[void] {.gcsafe, raises: [Defect].} type PeerManager* = ref object of RootObj + brokerCtx: BrokerContext switch*: Switch wakuMetadata*: WakuMetadata initialBackoffInSec*: int @@ -483,8 +488,9 @@ proc canBeConnected*(pm: PeerManager, peerId: PeerId): bool = proc connectedPeers*( pm: PeerManager, protocol: string = "" ): (seq[PeerId], seq[PeerId]) = - ## Returns the peerIds of physical connections (in and out) - ## If a protocol is specified, only returns peers with at least one stream of that protocol + ## Returns the PeerIds of peers with an active socket connection. + ## If a protocol is specified, it returns peers that currently have one + ## or more active logical streams for that protocol. var inPeers: seq[PeerId] var outPeers: seq[PeerId] @@ -500,6 +506,65 @@ proc connectedPeers*( return (inPeers, outPeers) +proc capablePeers*(pm: PeerManager, protocol: string): (seq[PeerId], seq[PeerId]) = + ## Returns the PeerIds of peers with an active socket connection. 
+  ## Only peers that have identified themselves as supporting the
+  ## given protocol are returned.
+
+  var inPeers: seq[PeerId]
+  var outPeers: seq[PeerId]
+
+  for peerId, muxers in pm.switch.connManager.getConnections():
+    # filter out peers that don't have the capability registered in the peer store
+    if pm.switch.peerStore.hasPeer(peerId, protocol):
+      for peerConn in muxers:
+        if peerConn.connection.transportDir == Direction.In:
+          inPeers.add(peerId)
+        elif peerConn.connection.transportDir == Direction.Out:
+          outPeers.add(peerId)
+
+  return (inPeers, outPeers)
+
+proc getConnectedPeersCount*(pm: PeerManager, protocol: string): int =
+  ## Returns the total number of unique connected peers (inbound + outbound)
+  ## with active streams for a specific protocol.
+  let (inPeers, outPeers) = pm.connectedPeers(protocol)
+  var peers = initHashSet[PeerId](nextPowerOfTwo(inPeers.len + outPeers.len))
+  for p in inPeers:
+    peers.incl(p)
+  for p in outPeers:
+    peers.incl(p)
+  return peers.len
+
+proc getCapablePeersCount*(pm: PeerManager, protocol: string): int =
+  ## Returns the total number of unique connected peers (inbound + outbound)
+  ## that have identified themselves as supporting the given protocol.
+  let (inPeers, outPeers) = pm.capablePeers(protocol)
+  var peers = initHashSet[PeerId](nextPowerOfTwo(inPeers.len + outPeers.len))
+  for p in inPeers:
+    peers.incl(p)
+  for p in outPeers:
+    peers.incl(p)
+  return peers.len
+
+proc getPeersForShard*(pm: PeerManager, protocolId: string, shard: PubsubTopic): int =
+  let (inPeers, outPeers) = pm.connectedPeers(protocolId)
+  let connectedProtocolPeers = inPeers & outPeers
+  if connectedProtocolPeers.len == 0:
+    return 0
+
+  let shardInfo = RelayShard.parse(shard).valueOr:
+    # count raw peers of the given protocol if for some reason we can't get
+    # a shard mapping out of the gossipsub topic string.
+ return connectedProtocolPeers.len + + var shardPeers = 0 + for peerId in connectedProtocolPeers: + if pm.switch.peerStore.hasShard(peerId, shardInfo.clusterId, shardInfo.shardId): + shardPeers.inc() + + return shardPeers + proc disconnectAllPeers*(pm: PeerManager) {.async.} = let (inPeerIds, outPeerIds) = pm.connectedPeers() let connectedPeers = concat(inPeerIds, outPeerIds) @@ -635,7 +700,7 @@ proc getPeerIp(pm: PeerManager, peerId: PeerId): Option[string] = # Event Handling # #~~~~~~~~~~~~~~~~~# -proc onPeerMetadata(pm: PeerManager, peerId: PeerId) {.async.} = +proc refreshPeerMetadata(pm: PeerManager, peerId: PeerId) {.async.} = let res = catch: await pm.switch.dial(peerId, WakuMetadataCodec) @@ -664,6 +729,10 @@ proc onPeerMetadata(pm: PeerManager, peerId: PeerId) {.async.} = let shards = metadata.shards.mapIt(it.uint16) pm.switch.peerStore.setShardInfo(peerId, shards) + # TODO: should only trigger an event if metadata actually changed + # should include the shard subscription delta in the event when + # it is a MetadataUpdated event + EventWakuPeer.emit(pm.brokerCtx, peerId, WakuPeerEventKind.EventMetadataUpdated) return info "disconnecting from peer", peerId = peerId, reason = reason @@ -673,14 +742,14 @@ proc onPeerMetadata(pm: PeerManager, peerId: PeerId) {.async.} = # called when a peer i) first connects to us ii) disconnects all connections from us proc onPeerEvent(pm: PeerManager, peerId: PeerId, event: PeerEvent) {.async.} = if not pm.wakuMetadata.isNil() and event.kind == PeerEventKind.Joined: - await pm.onPeerMetadata(peerId) + await pm.refreshPeerMetadata(peerId) var peerStore = pm.switch.peerStore var direction: PeerDirection var connectedness: Connectedness case event.kind - of Joined: + of PeerEventKind.Joined: direction = if event.initiator: Outbound else: Inbound connectedness = Connected @@ -708,10 +777,12 @@ proc onPeerEvent(pm: PeerManager, peerId: PeerId, event: PeerEvent) {.async.} = asyncSpawn(pm.switch.disconnect(peerId)) 
peerStore.delete(peerId) + EventWakuPeer.emit(pm.brokerCtx, peerId, WakuPeerEventKind.EventConnected) + if not pm.onConnectionChange.isNil(): # we don't want to await for the callback to finish asyncSpawn pm.onConnectionChange(peerId, Joined) - of Left: + of PeerEventKind.Left: direction = UnknownDirection connectedness = CanConnect @@ -723,12 +794,16 @@ proc onPeerEvent(pm: PeerManager, peerId: PeerId, event: PeerEvent) {.async.} = pm.ipTable.del(ip) break + EventWakuPeer.emit(pm.brokerCtx, peerId, WakuPeerEventKind.EventDisconnected) + if not pm.onConnectionChange.isNil(): # we don't want to await for the callback to finish asyncSpawn pm.onConnectionChange(peerId, Left) - of Identified: + of PeerEventKind.Identified: info "event identified", peerId = peerId + EventWakuPeer.emit(pm.brokerCtx, peerId, WakuPeerEventKind.EventIdentified) + peerStore[ConnectionBook][peerId] = connectedness peerStore[DirectionBook][peerId] = direction @@ -1085,8 +1160,11 @@ proc new*( error "Max backoff time can't be over 1 week", maxBackoff = backoff raise newException(Defect, "Max backoff time can't be over 1 week") + let brokerCtx = globalBrokerContext() + let pm = PeerManager( switch: switch, + brokerCtx: brokerCtx, wakuMetadata: wakuMetadata, storage: storage, initialBackoffInSec: initialBackoffInSec, diff --git a/waku/node/peer_manager/waku_peer_store.nim b/waku/node/peer_manager/waku_peer_store.nim index b7f2669e5..a03b5ae2e 100644 --- a/waku/node/peer_manager/waku_peer_store.nim +++ b/waku/node/peer_manager/waku_peer_store.nim @@ -162,7 +162,9 @@ proc connectedness*(peerStore: PeerStore, peerId: PeerId): Connectedness = peerStore[ConnectionBook].book.getOrDefault(peerId, NotConnected) proc hasShard*(peerStore: PeerStore, peerId: PeerID, cluster, shard: uint16): bool = - peerStore[ENRBook].book.getOrDefault(peerId).containsShard(cluster, shard) + return + peerStore[ENRBook].book.getOrDefault(peerId).containsShard(cluster, shard) or + 
peerStore[ShardBook].book.getOrDefault(peerId, @[]).contains(shard) proc hasCapability*(peerStore: PeerStore, peerId: PeerID, cap: Capabilities): bool = peerStore[ENRBook].book.getOrDefault(peerId).supportsCapability(cap) @@ -219,7 +221,8 @@ proc getPeersByShard*( peerStore: PeerStore, cluster, shard: uint16 ): seq[RemotePeerInfo] = return peerStore.peers.filterIt( - it.enr.isSome() and it.enr.get().containsShard(cluster, shard) + (it.enr.isSome() and it.enr.get().containsShard(cluster, shard)) or + it.shards.contains(shard) ) proc getPeersByCapability*( diff --git a/waku/node/waku_node.nim b/waku/node/waku_node.nim index d556811ac..cb3d81c7c 100644 --- a/waku/node/waku_node.nim +++ b/waku/node/waku_node.nim @@ -42,6 +42,7 @@ import waku_store/resume, waku_store_sync, waku_filter_v2, + waku_filter_v2/common as filter_common, waku_filter_v2/client as filter_client, waku_metadata, waku_rendezvous/protocol, @@ -57,12 +58,18 @@ import common/rate_limit/setting, common/callbacks, common/nimchronos, + common/broker/broker_context, + common/broker/request_broker, waku_mix, requests/node_requests, - common/broker/broker_context, + requests/health_requests, + events/health_events, + events/peer_events, ], ./net_config, - ./peer_manager + ./peer_manager, + ./health_monitor/health_status, + ./health_monitor/topic_health declarePublicCounter waku_node_messages, "number of messages received", ["type"] @@ -91,6 +98,9 @@ const clientId* = "Nimbus Waku v2 node" const WakuNodeVersionString* = "version / git commit hash: " & git_version +const EdgeTopicHealthyThreshold = 2 + ## Lightpush server and filter server requirement for a healthy topic in edge mode + # key and crypto modules different type # TODO: Move to application instance (e.g., `WakuNode2`) @@ -135,6 +145,10 @@ type topicSubscriptionQueue*: AsyncEventQueue[SubscriptionEvent] rateLimitSettings*: ProtocolRateLimitSettings wakuMix*: WakuMix + edgeTopicsHealth*: Table[PubsubTopic, TopicHealth] + edgeHealthEvent*: AsyncEvent 
+ edgeHealthLoop: Future[void] + peerEventListener*: EventWakuPeerListener proc deduceRelayShard( node: WakuNode, @@ -469,7 +483,52 @@ proc updateAnnouncedAddrWithPrimaryIpAddr*(node: WakuNode): Result[void, string] return ok() -proc startProvidersAndListeners(node: WakuNode) = +proc calculateEdgeTopicHealth(node: WakuNode, shard: PubsubTopic): TopicHealth = + let filterPeers = + node.peerManager.getPeersForShard(filter_common.WakuFilterSubscribeCodec, shard) + let lightpushPeers = + node.peerManager.getPeersForShard(lightpush_protocol.WakuLightPushCodec, shard) + + if filterPeers >= EdgeTopicHealthyThreshold and + lightpushPeers >= EdgeTopicHealthyThreshold: + return TopicHealth.SUFFICIENTLY_HEALTHY + elif filterPeers > 0 and lightpushPeers > 0: + return TopicHealth.MINIMALLY_HEALTHY + + return TopicHealth.UNHEALTHY + +proc loopEdgeHealth(node: WakuNode) {.async.} = + while node.started: + await node.edgeHealthEvent.wait() + node.edgeHealthEvent.clear() + + try: + for shard in node.edgeTopicsHealth.keys: + if not node.wakuRelay.isNil and node.wakuRelay.isSubscribed(shard): + continue + + let oldHealth = node.edgeTopicsHealth.getOrDefault(shard, TopicHealth.UNHEALTHY) + let newHealth = node.calculateEdgeTopicHealth(shard) + if newHealth != oldHealth: + node.edgeTopicsHealth[shard] = newHealth + EventShardTopicHealthChange.emit(node.brokerCtx, shard, newHealth) + except CancelledError: + break + except CatchableError as e: + warn "Error in edge health check", error = e.msg + + # safety cooldown to protect from edge cases + await sleepAsync(100.milliseconds) + +proc startProvidersAndListeners*(node: WakuNode) = + node.peerEventListener = EventWakuPeer.listen( + node.brokerCtx, + proc(evt: EventWakuPeer) {.async: (raises: []), gcsafe.} = + node.edgeHealthEvent.fire(), + ).valueOr: + error "Failed to listen to peer events", error = error + return + RequestRelayShard.setProvider( node.brokerCtx, proc( @@ -481,8 +540,60 @@ proc startProvidersAndListeners(node: WakuNode) 
= ).isOkOr: error "Can't set provider for RequestRelayShard", error = error -proc stopProvidersAndListeners(node: WakuNode) = + RequestShardTopicsHealth.setProvider( + node.brokerCtx, + proc(topics: seq[PubsubTopic]): Result[RequestShardTopicsHealth, string] = + var response: RequestShardTopicsHealth + + for shard in topics: + var healthStatus = TopicHealth.UNHEALTHY + + if not node.wakuRelay.isNil: + healthStatus = + node.wakuRelay.topicsHealth.getOrDefault(shard, TopicHealth.NOT_SUBSCRIBED) + + if healthStatus == TopicHealth.NOT_SUBSCRIBED: + healthStatus = node.calculateEdgeTopicHealth(shard) + + response.topicHealth.add((shard, healthStatus)) + + return ok(response), + ).isOkOr: + error "Can't set provider for RequestShardTopicsHealth", error = error + + RequestContentTopicsHealth.setProvider( + node.brokerCtx, + proc(topics: seq[ContentTopic]): Result[RequestContentTopicsHealth, string] = + var response: RequestContentTopicsHealth + + for contentTopic in topics: + var topicHealth = TopicHealth.NOT_SUBSCRIBED + + let shardResult = node.deduceRelayShard(contentTopic, none[PubsubTopic]()) + + if shardResult.isOk(): + let shardObj = shardResult.get() + let pubsubTopic = $shardObj + if not isNil(node.wakuRelay): + topicHealth = node.wakuRelay.topicsHealth.getOrDefault( + pubsubTopic, TopicHealth.NOT_SUBSCRIBED + ) + + if topicHealth == TopicHealth.NOT_SUBSCRIBED and + pubsubTopic in node.edgeTopicsHealth: + topicHealth = node.calculateEdgeTopicHealth(pubsubTopic) + + response.contentTopicHealth.add((topic: contentTopic, health: topicHealth)) + + return ok(response), + ).isOkOr: + error "Can't set provider for RequestContentTopicsHealth", error = error + +proc stopProvidersAndListeners*(node: WakuNode) = + EventWakuPeer.dropListener(node.brokerCtx, node.peerEventListener) RequestRelayShard.clearProvider(node.brokerCtx) + RequestContentTopicsHealth.clearProvider(node.brokerCtx) + RequestShardTopicsHealth.clearProvider(node.brokerCtx) proc start*(node: WakuNode) 
{.async.} = ## Starts a created Waku Node and @@ -532,6 +643,9 @@ proc start*(node: WakuNode) {.async.} = ## The switch will update addresses after start using the addressMapper await node.switch.start() + node.edgeHealthEvent = newAsyncEvent() + node.edgeHealthLoop = loopEdgeHealth(node) + node.startProvidersAndListeners() node.started = true @@ -549,6 +663,10 @@ proc stop*(node: WakuNode) {.async.} = node.stopProvidersAndListeners() + if not node.edgeHealthLoop.isNil: + await node.edgeHealthLoop.cancelAndWait() + node.edgeHealthLoop = nil + await node.switch.stop() node.peerManager.stop() diff --git a/waku/requests/health_request.nim b/waku/requests/health_request.nim deleted file mode 100644 index 9f98eba67..000000000 --- a/waku/requests/health_request.nim +++ /dev/null @@ -1,21 +0,0 @@ -import waku/common/broker/[request_broker, multi_request_broker] - -import waku/api/types -import waku/node/health_monitor/[protocol_health, topic_health] -import waku/waku_core/topics - -export protocol_health, topic_health - -RequestBroker(sync): - type RequestNodeHealth* = object - healthStatus*: NodeHealth - -RequestBroker(sync): - type RequestRelayTopicsHealth* = object - topicHealth*: seq[tuple[topic: PubsubTopic, health: TopicHealth]] - - proc signature(topics: seq[PubsubTopic]): Result[RequestRelayTopicsHealth, string] - -MultiRequestBroker: - type RequestProtocolHealth* = object - healthStatus*: ProtocolHealth diff --git a/waku/requests/health_requests.nim b/waku/requests/health_requests.nim new file mode 100644 index 000000000..3554922b3 --- /dev/null +++ b/waku/requests/health_requests.nim @@ -0,0 +1,39 @@ +import waku/common/broker/request_broker + +import waku/api/types +import waku/node/health_monitor/[protocol_health, topic_health, health_report] +import waku/waku_core/topics +import waku/common/waku_protocol + +export protocol_health, topic_health + +# Get the overall node connectivity status +RequestBroker(sync): + type RequestConnectionStatus* = object + 
connectionStatus*: ConnectionStatus + +# Get the health status of a set of content topics +RequestBroker(sync): + type RequestContentTopicsHealth* = object + contentTopicHealth*: seq[tuple[topic: ContentTopic, health: TopicHealth]] + + proc signature(topics: seq[ContentTopic]): Result[RequestContentTopicsHealth, string] + +# Get a consolidated node health report +RequestBroker: + type RequestHealthReport* = object + healthReport*: HealthReport + +# Get the health status of a set of shards (pubsub topics) +RequestBroker(sync): + type RequestShardTopicsHealth* = object + topicHealth*: seq[tuple[topic: PubsubTopic, health: TopicHealth]] + + proc signature(topics: seq[PubsubTopic]): Result[RequestShardTopicsHealth, string] + +# Get the health status of a mounted protocol +RequestBroker: + type RequestProtocolHealth* = object + healthStatus*: ProtocolHealth + + proc signature(protocol: WakuProtocol): Future[Result[RequestProtocolHealth, string]] diff --git a/waku/requests/requests.nim b/waku/requests/requests.nim index 03e10f882..9225c0f3e 100644 --- a/waku/requests/requests.nim +++ b/waku/requests/requests.nim @@ -1,3 +1,3 @@ -import ./[health_request, rln_requests, node_requests] +import ./[health_requests, rln_requests, node_requests] -export health_request, rln_requests, node_requests +export health_requests, rln_requests, node_requests diff --git a/waku/rest_api/endpoint/health/types.nim b/waku/rest_api/endpoint/health/types.nim index 57f8b284c..88fa736a8 100644 --- a/waku/rest_api/endpoint/health/types.nim +++ b/waku/rest_api/endpoint/health/types.nim @@ -2,7 +2,8 @@ import results import chronicles, json_serialization, json_serialization/std/options -import ../../../waku_node, ../serdes +import ../serdes +import waku/[waku_node, api/types] #### Serialization and deserialization @@ -44,6 +45,7 @@ proc writeValue*( ) {.raises: [IOError].} = writer.beginRecord() writer.writeField("nodeHealth", $value.nodeHealth) + writer.writeField("connectionStatus", 
$value.connectionStatus) writer.writeField("protocolsHealth", value.protocolsHealth) writer.endRecord() @@ -52,6 +54,7 @@ proc readValue*( ) {.raises: [SerializationError, IOError].} = var nodeHealth: Option[HealthStatus] + connectionStatus: Option[ConnectionStatus] protocolsHealth: Option[seq[ProtocolHealth]] for fieldName in readObjectFields(reader): @@ -66,6 +69,16 @@ proc readValue*( reader.raiseUnexpectedValue("Invalid `health` value: " & $error) nodeHealth = some(health) + of "connectionStatus": + if connectionStatus.isSome(): + reader.raiseUnexpectedField( + "Multiple `connectionStatus` fields found", "HealthReport" + ) + + let state = ConnectionStatus.init(reader.readValue(string)).valueOr: + reader.raiseUnexpectedValue("Invalid `connectionStatus` value: " & $error) + + connectionStatus = some(state) of "protocolsHealth": if protocolsHealth.isSome(): reader.raiseUnexpectedField( @@ -79,5 +92,8 @@ proc readValue*( if nodeHealth.isNone(): reader.raiseUnexpectedValue("Field `nodeHealth` is missing") - value = - HealthReport(nodeHealth: nodeHealth.get, protocolsHealth: protocolsHealth.get(@[])) + value = HealthReport( + nodeHealth: nodeHealth.get, + connectionStatus: connectionStatus.get, + protocolsHealth: protocolsHealth.get(@[]), + ) diff --git a/waku/waku_relay/protocol.nim b/waku/waku_relay/protocol.nim index 3f343269a..17470af29 100644 --- a/waku/waku_relay/protocol.nim +++ b/waku/waku_relay/protocol.nim @@ -5,7 +5,7 @@ {.push raises: [].} import - std/[strformat, strutils], + std/[strformat, strutils, sets], stew/byteutils, results, sequtils, @@ -21,11 +21,13 @@ import import waku/waku_core, waku/node/health_monitor/topic_health, - waku/requests/health_request, + waku/requests/health_requests, + waku/events/health_events, ./message_id, - waku/common/broker/broker_context + waku/common/broker/broker_context, + waku/events/peer_events -from ../waku_core/codecs import WakuRelayCodec +from waku/waku_core/codecs import WakuRelayCodec export WakuRelayCodec 
type ShardMetrics = object @@ -154,6 +156,8 @@ type pubsubTopic: PubsubTopic, message: WakuMessage ): Future[ValidationResult] {.gcsafe, raises: [Defect].} WakuRelay* = ref object of GossipSub + brokerCtx: BrokerContext + peerEventListener: EventWakuPeerListener # seq of tuples: the first entry in the tuple contains the validators are called for every topic # the second entry contains the error messages to be returned when the validator fails wakuValidators: seq[tuple[handler: WakuValidatorHandler, errorMessage: string]] @@ -165,6 +169,11 @@ type topicsHealth*: Table[string, TopicHealth] onTopicHealthChange*: TopicHealthChangeHandler topicHealthLoopHandle*: Future[void] + topicHealthUpdateEvent: AsyncEvent + topicHealthDirty: HashSet[string] + # list of topics that need their health updated in the update event + topicHealthCheckAll: bool + # true if all topics need to have their health status refreshed in the update event msgMetricsPerShard*: Table[string, ShardMetrics] # predefinition for more detailed results from publishing new message @@ -287,6 +296,21 @@ proc initRelayObservers(w: WakuRelay) = ) proc onRecv(peer: PubSubPeer, msgs: var RPCMsg) = + if msgs.control.isSome(): + let ctrl = msgs.control.get() + var topicsChanged = false + + for graft in ctrl.graft: + w.topicHealthDirty.incl(graft.topicID) + topicsChanged = true + + for prune in ctrl.prune: + w.topicHealthDirty.incl(prune.topicID) + topicsChanged = true + + if topicsChanged: + w.topicHealthUpdateEvent.fire() + for msg in msgs.messages: let (msg_id_short, topic, wakuMessage, msgSize) = decodeRpcMessageInfo(peer, msg).valueOr: continue @@ -325,18 +349,6 @@ proc initRelayObservers(w: WakuRelay) = w.addObserver(administrativeObserver) -proc initRequestProviders(w: WakuRelay) = - RequestRelayTopicsHealth.setProvider( - globalBrokerContext(), - proc(topics: seq[PubsubTopic]): Result[RequestRelayTopicsHealth, string] = - var collectedRes: RequestRelayTopicsHealth - for topic in topics: - let health = 
w.topicsHealth.getOrDefault(topic, TopicHealth.NOT_SUBSCRIBED) - collectedRes.topicHealth.add((topic, health)) - return ok(collectedRes), - ).isOkOr: - error "Cannot set Relay Topics Health request provider", error = error - proc new*( T: type WakuRelay, switch: Switch, maxMessageSize = int(DefaultMaxWakuMessageSize) ): WakuRelayResult[T] = @@ -354,12 +366,25 @@ proc new*( maxMessageSize = maxMessageSize, parameters = GossipsubParameters, ) + w.brokerCtx = globalBrokerContext() procCall GossipSub(w).initPubSub() w.topicsHealth = initTable[string, TopicHealth]() + w.topicHealthUpdateEvent = newAsyncEvent() + w.topicHealthDirty = initHashSet[string]() + w.topicHealthCheckAll = false w.initProtocolHandler() w.initRelayObservers() - w.initRequestProviders() + + w.peerEventListener = EventWakuPeer.listen( + w.brokerCtx, + proc(evt: EventWakuPeer): Future[void] {.async: (raises: []), gcsafe.} = + if evt.kind == WakuPeerEventKind.EventDisconnected: + w.topicHealthCheckAll = true + w.topicHealthUpdateEvent.fire() + , + ).valueOr: + return err("Failed to subscribe to peer events: " & error) except InitializationError: return err("initialization error: " & getCurrentExceptionMsg()) @@ -437,38 +462,58 @@ proc calculateTopicHealth(wakuRelay: WakuRelay, topic: string): TopicHealth = return TopicHealth.MINIMALLY_HEALTHY return TopicHealth.SUFFICIENTLY_HEALTHY -proc updateTopicsHealth(wakuRelay: WakuRelay) {.async.} = - var futs = newSeq[Future[void]]() - for topic in toSeq(wakuRelay.topics.keys): - ## loop over all the topics I'm subscribed to - let - oldHealth = wakuRelay.topicsHealth.getOrDefault(topic) - currentHealth = wakuRelay.calculateTopicHealth(topic) +proc isSubscribed*(w: WakuRelay, topic: PubsubTopic): bool = + GossipSub(w).topics.hasKey(topic) - if oldHealth == currentHealth: - continue +proc subscribedTopics*(w: WakuRelay): seq[PubsubTopic] = + return toSeq(GossipSub(w).topics.keys()) - wakuRelay.topicsHealth[topic] = currentHealth - if not 
wakuRelay.onTopicHealthChange.isNil(): - let fut = wakuRelay.onTopicHealthChange(topic, currentHealth) - if not fut.completed(): # Fast path for successful sync handlers - futs.add(fut) +proc topicsHealthLoop(w: WakuRelay) {.async.} = + while true: + await w.topicHealthUpdateEvent.wait() + w.topicHealthUpdateEvent.clear() + + var topicsToCheck: seq[string] + + if w.topicHealthCheckAll: + topicsToCheck = toSeq(w.topics.keys) + else: + topicsToCheck = toSeq(w.topicHealthDirty) + + w.topicHealthCheckAll = false + w.topicHealthDirty.clear() + + var futs = newSeq[Future[void]]() + + for topic in topicsToCheck: + # guard against topic being unsubscribed since fire() + if not w.isSubscribed(topic): + continue + + let + oldHealth = w.topicsHealth.getOrDefault(topic, TopicHealth.UNHEALTHY) + currentHealth = w.calculateTopicHealth(topic) + + if oldHealth == currentHealth: + continue + + w.topicsHealth[topic] = currentHealth + + EventShardTopicHealthChange.emit(w.brokerCtx, topic, currentHealth) + + if not w.onTopicHealthChange.isNil(): + futs.add(w.onTopicHealthChange(topic, currentHealth)) if futs.len() > 0: - # slow path - we have to wait for the handlers to complete try: - futs = await allFinished(futs) + discard await allFinished(futs) except CancelledError: - # check for errors in futures - for fut in futs: - if fut.failed: - let err = fut.readError() - warn "Error in health change handler", description = err.msg + break + except CatchableError as e: + warn "Error in topic health callback", error = e.msg -proc topicsHealthLoop(wakuRelay: WakuRelay) {.async.} = - while true: - await wakuRelay.updateTopicsHealth() - await sleepAsync(10.seconds) + # safety cooldown to protect from edge cases + await sleepAsync(100.milliseconds) method start*(w: WakuRelay) {.async, base.} = info "start" @@ -478,15 +523,13 @@ method start*(w: WakuRelay) {.async, base.} = method stop*(w: WakuRelay) {.async, base.} = info "stop" await procCall GossipSub(w).stop() + + if w.peerEventListener.id 
!= 0: + EventWakuPeer.dropListener(w.brokerCtx, w.peerEventListener) + if not w.topicHealthLoopHandle.isNil(): await w.topicHealthLoopHandle.cancelAndWait() -proc isSubscribed*(w: WakuRelay, topic: PubsubTopic): bool = - GossipSub(w).topics.hasKey(topic) - -proc subscribedTopics*(w: WakuRelay): seq[PubsubTopic] = - return toSeq(GossipSub(w).topics.keys()) - proc generateOrderedValidator(w: WakuRelay): ValidatorHandler {.gcsafe.} = # rejects messages that are not WakuMessage let wrappedValidator = proc( @@ -584,7 +627,8 @@ proc subscribe*(w: WakuRelay, pubsubTopic: PubsubTopic, handler: WakuRelayHandle procCall GossipSub(w).subscribe(pubsubTopic, topicHandler) w.topicHandlers[pubsubTopic] = topicHandler - asyncSpawn w.updateTopicsHealth() + w.topicHealthDirty.incl(pubsubTopic) + w.topicHealthUpdateEvent.fire() proc unsubscribeAll*(w: WakuRelay, pubsubTopic: PubsubTopic) = ## Unsubscribe all handlers on this pubsub topic @@ -594,6 +638,8 @@ proc unsubscribeAll*(w: WakuRelay, pubsubTopic: PubsubTopic) = procCall GossipSub(w).unsubscribeAll(pubsubTopic) w.topicValidator.del(pubsubTopic) w.topicHandlers.del(pubsubTopic) + w.topicsHealth.del(pubsubTopic) + w.topicHealthDirty.excl(pubsubTopic) proc unsubscribe*(w: WakuRelay, pubsubTopic: PubsubTopic) = if not w.topicValidator.hasKey(pubsubTopic): @@ -619,6 +665,8 @@ proc unsubscribe*(w: WakuRelay, pubsubTopic: PubsubTopic) = w.topicValidator.del(pubsubTopic) w.topicHandlers.del(pubsubTopic) + w.topicsHealth.del(pubsubTopic) + w.topicHealthDirty.excl(pubsubTopic) proc publish*( w: WakuRelay, pubsubTopic: PubsubTopic, wakuMessage: WakuMessage From 84f791100fcc4367ca17a9d999e9aa943359dd51 Mon Sep 17 00:00:00 2001 From: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com> Date: Fri, 13 Feb 2026 11:23:21 +0100 Subject: [PATCH 51/70] fix: peer selection by shard and rendezvous/metadata sharding initialization (#3718) * Fix peer selection for cases where the ENR is not yet advertised but metadata exchange already 
adjusted the supported shards. Fix initialization of the rendezvous protocol with configured shards and autoshards to allow connecting to relay nodes without already having a valid subscribed shard. This solves the issue of autoshard nodes connecting ahead of subscribing. * Extend peer selection, rendezvous and metadata tests * Fix rendezvous test, fix metadata test failing due to wrong setup, added it into all_tests --- tests/all_tests_waku.nim | 1 + tests/test_peer_manager.nim | 230 ++++++++++++++++++++++++ tests/test_waku_metadata.nim | 85 +++++++-- tests/test_waku_rendezvous.nim | 63 +++++++ waku/factory/node_factory.nim | 2 +- waku/node/peer_manager/peer_manager.nim | 14 +- waku/node/waku_node.nim | 28 ++- 7 files changed, 401 insertions(+), 22 deletions(-) diff --git a/tests/all_tests_waku.nim b/tests/all_tests_waku.nim index 3d22cd9c2..4d4225f9f 100644 --- a/tests/all_tests_waku.nim +++ b/tests/all_tests_waku.nim @@ -89,6 +89,7 @@ import ./test_waku_netconfig, ./test_waku_switch, ./test_waku_rendezvous, + ./test_waku_metadata, ./waku_discv5/test_waku_discv5 # Waku Keystore test suite diff --git a/tests/test_peer_manager.nim b/tests/test_peer_manager.nim index 97df39582..c96f21b6e 100644 --- a/tests/test_peer_manager.nim +++ b/tests/test_peer_manager.nim @@ -1207,3 +1207,233 @@ procSuite "Peer Manager": r = node1.peerManager.selectPeer(WakuPeerExchangeCodec) assert r.isSome(), "could not retrieve peer mounting WakuPeerExchangeCodec" + + asyncTest "selectPeer() filters peers by shard using ENR": + ## Given: A peer manager with 3 peers having different shards in their ENRs + let + clusterId = 0.uint16 + shardId0 = 0.uint16 + shardId1 = 1.uint16 + + # Create 3 nodes with different shards + let nodes = + @[ + newTestWakuNode( + generateSecp256k1Key(), + parseIpAddress("0.0.0.0"), + Port(0), + clusterId = clusterId, + subscribeShards = @[shardId0], + ), + newTestWakuNode( + generateSecp256k1Key(), + parseIpAddress("0.0.0.0"), + Port(0), + clusterId = clusterId, + subscribeShards = 
@[shardId1], + ), + newTestWakuNode( + generateSecp256k1Key(), + parseIpAddress("0.0.0.0"), + Port(0), + clusterId = clusterId, + subscribeShards = @[shardId0], + ), + ] + + await allFutures(nodes.mapIt(it.start())) + for node in nodes: + discard await node.mountRelay() + + # Get peer infos with ENRs + let peerInfos = collect: + for node in nodes: + var peerInfo = node.switch.peerInfo.toRemotePeerInfo() + peerInfo.enr = some(node.enr) + peerInfo + + # Add all peers to node 0's peer manager and peerstore + for i in 1 .. 2: + nodes[0].peerManager.addPeer(peerInfos[i]) + nodes[0].peerManager.switch.peerStore[AddressBook][peerInfos[i].peerId] = + peerInfos[i].addrs + nodes[0].peerManager.switch.peerStore[ProtoBook][peerInfos[i].peerId] = + @[WakuRelayCodec] + + ## When: We select a peer for shard 0 + let shard0Topic = some(PubsubTopic("/waku/2/rs/0/0")) + let selectedPeer0 = nodes[0].peerManager.selectPeer(WakuRelayCodec, shard0Topic) + + ## Then: Only peers supporting shard 0 are considered (nodes 2, not node 1) + check: + selectedPeer0.isSome() + selectedPeer0.get().peerId != peerInfos[1].peerId # node1 has shard 1 + selectedPeer0.get().peerId == peerInfos[2].peerId # node2 has shard 0 + + ## When: We select a peer for shard 1 + let shard1Topic = some(PubsubTopic("/waku/2/rs/0/1")) + let selectedPeer1 = nodes[0].peerManager.selectPeer(WakuRelayCodec, shard1Topic) + + ## Then: Only peer with shard 1 is selected + check: + selectedPeer1.isSome() + selectedPeer1.get().peerId == peerInfos[1].peerId # node1 has shard 1 + + await allFutures(nodes.mapIt(it.stop())) + + asyncTest "selectPeer() filters peers by shard using shards field": + ## Given: A peer manager with peers having shards in RemotePeerInfo (no ENR) + let + clusterId = 0.uint16 + shardId0 = 0.uint16 + shardId1 = 1.uint16 + + # Create peer manager + let pm = PeerManager.new( + switch = SwitchBuilder.new().withRng(rng()).withMplex().withNoise().build(), + storage = nil, + ) + + # Create peer infos with shards 
field populated (simulating metadata exchange) + let basePeerId = "16Uiu2HAm7QGEZKujdSbbo1aaQyfDPQ6Bw3ybQnj6fruH5Dxwd7D" + let peers = toSeq(1 .. 3) + .mapIt(parsePeerInfo("/ip4/0.0.0.0/tcp/30300/p2p/" & basePeerId & $it)) + .filterIt(it.isOk()) + .mapIt(it.value) + require: + peers.len == 3 + + # Manually populate the shards field (ENR is not available) + var peerInfos: seq[RemotePeerInfo] = @[] + for i, peer in peers: + var peerInfo = RemotePeerInfo.init(peer.peerId, peer.addrs) + # Peer 0 and 2 have shard 0, peer 1 has shard 1 + peerInfo.shards = + if i == 1: + @[shardId1] + else: + @[shardId0] + # Note: ENR is intentionally left as none + peerInfos.add(peerInfo) + + # Add peers to peerstore + for peerInfo in peerInfos: + pm.switch.peerStore[AddressBook][peerInfo.peerId] = peerInfo.addrs + pm.switch.peerStore[ProtoBook][peerInfo.peerId] = @[WakuRelayCodec] + # simulate metadata exchange by setting shards field in peerstore + pm.switch.peerStore.setShardInfo(peerInfo.peerId, peerInfo.shards) + + ## When: We select a peer for shard 0 + let shard0Topic = some(PubsubTopic("/waku/2/rs/0/0")) + let selectedPeer0 = pm.selectPeer(WakuRelayCodec, shard0Topic) + + ## Then: Peers with shard 0 in shards field are selected + check: + selectedPeer0.isSome() + selectedPeer0.get().peerId in [peerInfos[0].peerId, peerInfos[2].peerId] + + ## When: We select a peer for shard 1 + let shard1Topic = some(PubsubTopic("/waku/2/rs/0/1")) + let selectedPeer1 = pm.selectPeer(WakuRelayCodec, shard1Topic) + + ## Then: Peer with shard 1 in shards field is selected + check: + selectedPeer1.isSome() + selectedPeer1.get().peerId == peerInfos[1].peerId + + asyncTest "selectPeer() handles invalid pubsub topic gracefully": + ## Given: A peer manager with valid peers + let node = newTestWakuNode( + generateSecp256k1Key(), + parseIpAddress("0.0.0.0"), + Port(0), + clusterId = 0, + subscribeShards = @[0'u16], + ) + await node.start() + + # Add a peer + let peer = + 
newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) + await peer.start() + discard await peer.mountRelay() + + var peerInfo = peer.switch.peerInfo.toRemotePeerInfo() + peerInfo.enr = some(peer.enr) + node.peerManager.addPeer(peerInfo) + node.peerManager.switch.peerStore[ProtoBook][peerInfo.peerId] = @[WakuRelayCodec] + + ## When: selectPeer is called with malformed pubsub topic + let invalidTopics = + @[ + some(PubsubTopic("invalid-topic")), + some(PubsubTopic("/waku/2/invalid")), + some(PubsubTopic("/waku/2/rs/abc/0")), # non-numeric cluster + some(PubsubTopic("")), # empty topic + ] + + ## Then: Returns none(RemotePeerInfo) without crashing + for invalidTopic in invalidTopics: + let result = node.peerManager.selectPeer(WakuRelayCodec, invalidTopic) + check: + result.isNone() + + await allFutures(node.stop(), peer.stop()) + + asyncTest "selectPeer() prioritizes ENR over shards field": + ## Given: A peer with both ENR and shards field populated + let + clusterId = 0.uint16 + shardId0 = 0.uint16 + shardId1 = 1.uint16 + + let node = newTestWakuNode( + generateSecp256k1Key(), + parseIpAddress("0.0.0.0"), + Port(0), + clusterId = clusterId, + subscribeShards = @[shardId0], + ) + await node.start() + discard await node.mountRelay() + + # Create peer with ENR containing shard 0 + let peer = newTestWakuNode( + generateSecp256k1Key(), + parseIpAddress("0.0.0.0"), + Port(0), + clusterId = clusterId, + subscribeShards = @[shardId0], + ) + await peer.start() + discard await peer.mountRelay() + + # Create peer info with ENR (shard 0) but set shards field to shard 1 + var peerInfo = peer.switch.peerInfo.toRemotePeerInfo() + peerInfo.enr = some(peer.enr) # ENR has shard 0 + peerInfo.shards = @[shardId1] # shards field has shard 1 + + node.peerManager.addPeer(peerInfo) + node.peerManager.switch.peerStore[ProtoBook][peerInfo.peerId] = @[WakuRelayCodec] + # simulate metadata exchange by setting shards field in peerstore + 
node.peerManager.switch.peerStore.setShardInfo(peerInfo.peerId, peerInfo.shards) + + ## When: We select for shard 0 + let shard0Topic = some(PubsubTopic("/waku/2/rs/0/0")) + let selectedPeer = node.peerManager.selectPeer(WakuRelayCodec, shard0Topic) + + ## Then: Peer is selected because ENR (shard 0) takes precedence + check: + selectedPeer.isSome() + selectedPeer.get().peerId == peerInfo.peerId + + ## When: We select for shard 1 + let shard1Topic = some(PubsubTopic("/waku/2/rs/0/1")) + let selectedPeer1 = node.peerManager.selectPeer(WakuRelayCodec, shard1Topic) + + ## Then: Peer is still selected because shards field is checked as fallback + check: + selectedPeer1.isSome() + selectedPeer1.get().peerId == peerInfo.peerId + + await allFutures(node.stop(), peer.stop()) diff --git a/tests/test_waku_metadata.nim b/tests/test_waku_metadata.nim index b30fd1712..cfceb89b5 100644 --- a/tests/test_waku_metadata.nim +++ b/tests/test_waku_metadata.nim @@ -13,14 +13,15 @@ import eth/keys, eth/p2p/discoveryv5/enr import - waku/ - [ - waku_node, - waku_core/topics, - node/peer_manager, - discovery/waku_discv5, - waku_metadata, - ], + waku/[ + waku_node, + waku_core/topics, + waku_core, + node/peer_manager, + discovery/waku_discv5, + waku_metadata, + waku_relay/protocol, + ], ./testlib/wakucore, ./testlib/wakunode @@ -41,26 +42,86 @@ procSuite "Waku Metadata Protocol": clusterId = clusterId, ) + # Mount metadata protocol on both nodes before starting + discard node1.mountMetadata(clusterId, @[]) + discard node2.mountMetadata(clusterId, @[]) + + # Mount relay so metadata can track subscriptions + discard await node1.mountRelay() + discard await node2.mountRelay() + # Start nodes await allFutures([node1.start(), node2.start()]) - node1.topicSubscriptionQueue.emit((kind: PubsubSub, topic: "/waku/2/rs/10/7")) - node1.topicSubscriptionQueue.emit((kind: PubsubSub, topic: "/waku/2/rs/10/6")) + # Subscribe to topics on node1 - relay will track these and metadata will report them + let 
noOpHandler: WakuRelayHandler = proc( + pubsubTopic: PubsubTopic, message: WakuMessage + ): Future[void] {.async.} = + discard + + node1.wakuRelay.subscribe("/waku/2/rs/10/7", noOpHandler) + node1.wakuRelay.subscribe("/waku/2/rs/10/6", noOpHandler) # Create connection let connOpt = await node2.peerManager.dialPeer( node1.switch.peerInfo.toRemotePeerInfo(), WakuMetadataCodec ) require: - connOpt.isSome + connOpt.isSome() # Request metadata let response1 = await node2.wakuMetadata.request(connOpt.get()) # Check the response or dont even continue require: - response1.isOk + response1.isOk() check: response1.get().clusterId.get() == clusterId response1.get().shards == @[uint32(6), uint32(7)] + + await allFutures([node1.stop(), node2.stop()]) + + asyncTest "Metadata reports configured shards before relay subscription": + ## Given: Node with configured shards but no relay subscriptions yet + let + clusterId = 10.uint16 + configuredShards = @[uint16(0), uint16(1)] + + let node1 = newTestWakuNode( + generateSecp256k1Key(), + parseIpAddress("0.0.0.0"), + Port(0), + clusterId = clusterId, + subscribeShards = configuredShards, + ) + let node2 = newTestWakuNode( + generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0), clusterId = clusterId + ) + + # Mount metadata with configured shards on node1 + discard node1.mountMetadata(clusterId, configuredShards) + # Mount metadata on node2 so it can make requests + discard node2.mountMetadata(clusterId, @[]) + + # Start nodes (relay is NOT mounted yet on node1) + await allFutures([node1.start(), node2.start()]) + + ## When: Node2 requests metadata from Node1 before relay is active + let connOpt = await node2.peerManager.dialPeer( + node1.switch.peerInfo.toRemotePeerInfo(), WakuMetadataCodec + ) + require: + connOpt.isSome + + let response = await node2.wakuMetadata.request(connOpt.get()) + + ## Then: Response contains configured shards even without relay subscriptions + require: + response.isOk() + + check: + 
response.get().clusterId.get() == clusterId + response.get().shards == @[uint32(0), uint32(1)] + + await allFutures([node1.stop(), node2.stop()]) diff --git a/tests/test_waku_rendezvous.nim b/tests/test_waku_rendezvous.nim index d3dd6f920..07113ca4a 100644 --- a/tests/test_waku_rendezvous.nim +++ b/tests/test_waku_rendezvous.nim @@ -10,6 +10,7 @@ import import waku/waku_core/peers, waku/waku_core/codecs, + waku/waku_core, waku/node/waku_node, waku/node/peer_manager/peer_manager, waku/waku_rendezvous/protocol, @@ -81,3 +82,65 @@ procSuite "Waku Rendezvous": records.len == 1 records[0].peerId == peerInfo1.peerId #records[0].mixPubKey == $node1.wakuMix.pubKey + + asyncTest "Rendezvous advertises configured shards before relay is active": + ## Given: A node with configured shards but no relay subscriptions yet + let + clusterId = 10.uint16 + configuredShards = @[RelayShard(clusterId: clusterId, shardId: 0)] + + let node = newTestWakuNode( + generateSecp256k1Key(), + parseIpAddress("0.0.0.0"), + Port(0), + clusterId = clusterId, + subscribeShards = @[0'u16], + ) + + ## When: Node mounts rendezvous with configured shards (before relay) + await node.mountRendezvous(clusterId, configuredShards) + await node.start() + + ## Then: The rendezvous protocol should be mounted successfully + check: + node.wakuRendezvous != nil + + # Verify that the protocol is running without errors + # (shards are used internally by the getShardsGetter closure) + let namespace = computeMixNamespace(clusterId) + check: + namespace.len > 0 + + await node.stop() + + asyncTest "Rendezvous uses configured shards when relay not mounted": + ## Given: A light client node with no relay protocol + let + clusterId = 10.uint16 + configuredShards = + @[ + RelayShard(clusterId: clusterId, shardId: 0), + RelayShard(clusterId: clusterId, shardId: 1), + ] + + let lightClient = newTestWakuNode( + generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0), clusterId = clusterId + ) + + ## When: Node mounts 
rendezvous with configured shards (no relay mounted) + await lightClient.mountRendezvous(clusterId, configuredShards) + await lightClient.start() + + ## Then: Rendezvous should be mounted successfully without relay + check: + lightClient.wakuRendezvous != nil + lightClient.wakuRelay == nil # Verify relay is not mounted + + # Verify the protocol is working (doesn't fail immediately) + # advertiseAll requires peers, so we just check the protocol is initialized + await sleepAsync(100.milliseconds) + + check: + lightClient.wakuRendezvous != nil + + await lightClient.stop() diff --git a/waku/factory/node_factory.nim b/waku/factory/node_factory.nim index 2cdfdb0d2..dc383e89d 100644 --- a/waku/factory/node_factory.nim +++ b/waku/factory/node_factory.nim @@ -337,7 +337,7 @@ proc setupProtocols( node.wakuRelay.addSignedShardsValidator(subscribedProtectedShards, conf.clusterId) if conf.rendezvous: - await node.mountRendezvous(conf.clusterId) + await node.mountRendezvous(conf.clusterId, shards) + await node.mountRendezvousClient(conf.clusterId) # Keepalive mounted on all nodes diff --git a/waku/node/peer_manager/peer_manager.nim b/waku/node/peer_manager/peer_manager.nim index 834fb19cf..0c435468f 100644 --- a/waku/node/peer_manager/peer_manager.nim +++ b/waku/node/peer_manager/peer_manager.nim @@ -227,7 +227,19 @@ proc selectPeer*( protocol = proto, peers, address = cast[uint](pm.switch.peerStore) if shard.isSome(): - peers.keepItIf((it.enr.isSome() and it.enr.get().containsShard(shard.get()))) + # Parse the shard from the pubsub topic to get cluster and shard ID + let shardInfo = RelayShard.parse(shard.get()).valueOr: + trace "Failed to parse shard from pubsub topic", topic = shard.get() + return none(RemotePeerInfo) + + # Filter peers that support the requested shard + # Check both ENR (if present) and the shards field on RemotePeerInfo + peers.keepItIf( + # Check ENR if available + (it.enr.isSome() and it.enr.get().containsShard(shard.get())) or + # Otherwise check the shards 
field directly + (it.shards.len > 0 and it.shards.contains(shardInfo.shardId)) + ) shuffle(peers) diff --git a/waku/node/waku_node.nim b/waku/node/waku_node.nim index cb3d81c7c..53ce0349a 100644 --- a/waku/node/waku_node.nim +++ b/waku/node/waku_node.nim @@ -167,20 +167,28 @@ proc deduceRelayShard( return err("Invalid topic:" & pubsubTopic & " " & $error) return ok(shard) -proc getShardsGetter(node: WakuNode): GetShards = +proc getShardsGetter(node: WakuNode, configuredShards: seq[uint16]): GetShards = return proc(): seq[uint16] {.closure, gcsafe, raises: [].} = # fetch pubsubTopics subscribed to relay and convert them to shards if node.wakuRelay.isNil(): - return @[] + # If relay is not mounted, return configured shards + return configuredShards + let subscribedTopics = node.wakuRelay.subscribedTopics() + + # If relay hasn't subscribed to any topics yet, return configured shards + if subscribedTopics.len == 0: + return configuredShards + let relayShards = topicsToRelayShards(subscribedTopics).valueOr: error "could not convert relay topics to shards", error = $error, topics = subscribedTopics - return @[] + # Fall back to configured shards on error + return configuredShards if relayShards.isSome(): let shards = relayShards.get().shardIds return shards - return @[] + return configuredShards proc getCapabilitiesGetter(node: WakuNode): GetCapabilities = return proc(): seq[Capabilities] {.closure, gcsafe, raises: [].} = @@ -227,7 +235,7 @@ proc new*( rateLimitSettings: rateLimitSettings, ) - peerManager.setShardGetter(node.getShardsGetter()) + peerManager.setShardGetter(node.getShardsGetter(@[])) return node @@ -272,7 +280,7 @@ proc mountMetadata*( if not node.wakuMetadata.isNil(): return err("Waku metadata already mounted, skipping") - let metadata = WakuMetadata.new(clusterId, node.getShardsGetter()) + let metadata = WakuMetadata.new(clusterId, node.getShardsGetter(shards)) node.wakuMetadata = metadata node.peerManager.wakuMetadata = metadata @@ -413,14 +421,18 @@ 
proc mountRendezvousClient*(node: WakuNode, clusterId: uint16) {.async: (raises: if node.started: await node.wakuRendezvousClient.start() -proc mountRendezvous*(node: WakuNode, clusterId: uint16) {.async: (raises: []).} = +proc mountRendezvous*( + node: WakuNode, clusterId: uint16, shards: seq[RelayShard] = @[] +) {.async: (raises: []).} = info "mounting rendezvous discovery protocol" + let configuredShards = shards.mapIt(it.shardId) + node.wakuRendezvous = WakuRendezVous.new( node.switch, node.peerManager, clusterId, - node.getShardsGetter(), + node.getShardsGetter(configuredShards), node.getCapabilitiesGetter(), node.getWakuPeerRecordGetter(), ).valueOr: From eb0c34c553c4a0ba7f1806d14083e6ef1e49dc92 Mon Sep 17 00:00:00 2001 From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> Date: Fri, 13 Feb 2026 12:55:31 +0100 Subject: [PATCH 52/70] Adjust docker file to bsd (#3720) * add libbsd-dev into Dockerfile * add libstdc++ in Dockerfile to avoid runtime error loading shared library libstdc++.so.6: No such file or directory (needed by /usr/bin/wakunode) --- Dockerfile | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/Dockerfile b/Dockerfile index 90fb0a9c9..5b16b9eee 100644 --- a/Dockerfile +++ b/Dockerfile @@ -8,7 +8,7 @@ ARG LOG_LEVEL=TRACE ARG HEAPTRACK_BUILD=0 # Get build tools and required header files -RUN apk add --no-cache bash git build-base openssl-dev linux-headers curl jq +RUN apk add --no-cache bash git build-base openssl-dev linux-headers curl jq libbsd-dev WORKDIR /app COPY . . 
@@ -46,7 +46,7 @@ LABEL version="unknown" EXPOSE 30303 60000 8545 # Referenced in the binary -RUN apk add --no-cache libgcc libpq-dev bind-tools +RUN apk add --no-cache libgcc libpq-dev bind-tools libstdc++ # Copy to separate location to accomodate different MAKE_TARGET values COPY --from=nim-build /app/build/$MAKE_TARGET /usr/local/bin/ From 8f29070dcfc7c621fdcd3941369fd2cdc81b69e7 Mon Sep 17 00:00:00 2001 From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> Date: Mon, 16 Feb 2026 11:49:35 +0100 Subject: [PATCH 53/70] fix avoid IndexDefect if DB error message is short (#3725) --- waku/common/databases/db_postgres/dbconn.nim | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/waku/common/databases/db_postgres/dbconn.nim b/waku/common/databases/db_postgres/dbconn.nim index a6c237ae5..7ccf32099 100644 --- a/waku/common/databases/db_postgres/dbconn.nim +++ b/waku/common/databases/db_postgres/dbconn.nim @@ -48,8 +48,8 @@ proc check(db: DbConn): Result[void, string] = return err("exception in check: " & getCurrentExceptionMsg()) if message.len > 0: - let truncatedErr = message[0 .. 80] - ## libpq sometimes gives extremely long error messages + let truncatedErr = message[0 ..< min(80, message.len)] + error "postgres check issue. 
see truncated db error.", error = truncatedErr return err(truncatedErr) return ok() From b38b5aaea17e36c8f4dcccfa2e0030cb4a6619e7 Mon Sep 17 00:00:00 2001 From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> Date: Tue, 17 Feb 2026 00:18:46 +0100 Subject: [PATCH 54/70] force FINALIZE partition detach after detecting shorter error (#3728) --- .../waku_archive/driver/postgres_driver/postgres_driver.nim | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/waku/waku_archive/driver/postgres_driver/postgres_driver.nim b/waku/waku_archive/driver/postgres_driver/postgres_driver.nim index 4877cb126..2f495ba5d 100644 --- a/waku/waku_archive/driver/postgres_driver/postgres_driver.nim +++ b/waku/waku_archive/driver/postgres_driver/postgres_driver.nim @@ -1347,8 +1347,10 @@ proc removePartition( (await self.performWriteQuery(detachPartitionQuery)).isOkOr: info "detected error when trying to detach partition", error - if ($error).contains("FINALIZE") or - ($error).contains("already pending detach in part"): + if ($error).contains("FINALIZE") or ($error).contains("already pending"): + ## We assume "already pending detach in partitioned table ..." as possible error + debug "enforce detach with FINALIZE because of detected error", error + ## We assume the database is suggesting to use FINALIZE when detaching a partition let detachPartitionFinalizeQuery = "ALTER TABLE messages DETACH PARTITION " & partitionName & " FINALIZE;" From 3603b838b92502db3208fb059c58868532aaf1c0 Mon Sep 17 00:00:00 2001 From: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com> Date: Tue, 17 Feb 2026 10:38:35 +0100 Subject: [PATCH 55/70] feat: liblogosdelivery FFI library of new API (#3714) * Initial version of the liblogosdelivery library (static & dynamic), based on the current state of the API. * nix build support added. 
* logosdelivery_example * Added support for missing logLevel/logFormat in new API create_node * Added full JSON to NodeConfig support * Added ctx and ctx.myLib check to avoid uninitialized calls and crash. Adjusted logosdelivery_example with proper error handling and JSON config format * target aware install phase * Fix base64 decode of payload --- Makefile | 38 +- flake.nix | 7 + liblogosdelivery/BUILD.md | 123 +++ liblogosdelivery/MESSAGE_EVENTS.md | 148 ++++ liblogosdelivery/README.md | 262 +++++++ liblogosdelivery/declare_lib.nim | 24 + .../examples/logosdelivery_example.c | 193 +++++ liblogosdelivery/json_event.nim | 27 + liblogosdelivery/liblogosdelivery.h | 82 ++ liblogosdelivery/liblogosdelivery.nim | 29 + .../logos_delivery_api/messaging_api.nim | 91 +++ .../logos_delivery_api/node_api.nim | 111 +++ liblogosdelivery/nim.cfg | 27 + nix/default.nix | 40 +- nix/submodules.json | 247 ++++++ scripts/generate_nix_submodules.sh | 82 ++ tests/api/test_node_conf.nim | 708 ++++++++++++++++++ waku.nimble | 18 +- waku/api/api.nim | 1 + waku/api/api_conf.nim | 311 ++++++++ .../send_service/send_service.nim | 1 + 21 files changed, 2557 insertions(+), 13 deletions(-) create mode 100644 liblogosdelivery/BUILD.md create mode 100644 liblogosdelivery/MESSAGE_EVENTS.md create mode 100644 liblogosdelivery/README.md create mode 100644 liblogosdelivery/declare_lib.nim create mode 100644 liblogosdelivery/examples/logosdelivery_example.c create mode 100644 liblogosdelivery/json_event.nim create mode 100644 liblogosdelivery/liblogosdelivery.h create mode 100644 liblogosdelivery/liblogosdelivery.nim create mode 100644 liblogosdelivery/logos_delivery_api/messaging_api.nim create mode 100644 liblogosdelivery/logos_delivery_api/node_api.nim create mode 100644 liblogosdelivery/nim.cfg create mode 100644 nix/submodules.json create mode 100755 scripts/generate_nix_submodules.sh diff --git a/Makefile b/Makefile index 6457b3c0f..4fafd6310 100644 --- a/Makefile +++ b/Makefile @@ -434,10 
+434,11 @@ docker-liteprotocoltester-push: ################ ## C Bindings ## ################ -.PHONY: cbindings cwaku_example libwaku +.PHONY: cbindings cwaku_example libwaku liblogosdelivery liblogosdelivery_example STATIC ?= 0 -BUILD_COMMAND ?= libwakuDynamic +LIBWAKU_BUILD_COMMAND ?= libwakuDynamic +LIBLOGOSDELIVERY_BUILD_COMMAND ?= liblogosdeliveryDynamic ifeq ($(detected_OS),Windows) LIB_EXT_DYNAMIC = dll @@ -453,11 +454,40 @@ endif LIB_EXT := $(LIB_EXT_DYNAMIC) ifeq ($(STATIC), 1) LIB_EXT = $(LIB_EXT_STATIC) - BUILD_COMMAND = libwakuStatic + LIBWAKU_BUILD_COMMAND = libwakuStatic + LIBLOGOSDELIVERY_BUILD_COMMAND = liblogosdeliveryStatic endif libwaku: | build deps librln - echo -e $(BUILD_MSG) "build/$@.$(LIB_EXT)" && $(ENV_SCRIPT) nim $(BUILD_COMMAND) $(NIM_PARAMS) waku.nims $@.$(LIB_EXT) + echo -e $(BUILD_MSG) "build/$@.$(LIB_EXT)" && $(ENV_SCRIPT) nim $(LIBWAKU_BUILD_COMMAND) $(NIM_PARAMS) waku.nims $@.$(LIB_EXT) + +liblogosdelivery: | build deps librln + echo -e $(BUILD_MSG) "build/$@.$(LIB_EXT)" && $(ENV_SCRIPT) nim $(LIBLOGOSDELIVERY_BUILD_COMMAND) $(NIM_PARAMS) waku.nims $@.$(LIB_EXT) + +logosdelivery_example: | build liblogosdelivery + @echo -e $(BUILD_MSG) "build/$@" +ifeq ($(detected_OS),Darwin) + gcc -o build/$@ \ + liblogosdelivery/examples/logosdelivery_example.c \ + -I./liblogosdelivery \ + -L./build \ + -llogosdelivery \ + -Wl,-rpath,./build +else ifeq ($(detected_OS),Linux) + gcc -o build/$@ \ + liblogosdelivery/examples/logosdelivery_example.c \ + -I./liblogosdelivery \ + -L./build \ + -llogosdelivery \ + -Wl,-rpath,'$$ORIGIN' +else ifeq ($(detected_OS),Windows) + gcc -o build/$@.exe \ + liblogosdelivery/examples/logosdelivery_example.c \ + -I./liblogosdelivery \ + -L./build \ + -llogosdelivery \ + -lws2_32 +endif ##################### ## Mobile Bindings ## diff --git a/flake.nix b/flake.nix index 88229a826..ee24c8f13 100644 --- a/flake.nix +++ b/flake.nix @@ -71,6 +71,13 @@ zerokitRln = zerokit.packages.${system}.rln; }; + liblogosdelivery = 
pkgs.callPackage ./nix/default.nix { + inherit stableSystems; + src = self; + targets = ["liblogosdelivery"]; + zerokitRln = zerokit.packages.${system}.rln; + }; + default = libwaku; }); diff --git a/liblogosdelivery/BUILD.md b/liblogosdelivery/BUILD.md new file mode 100644 index 000000000..011fbb438 --- /dev/null +++ b/liblogosdelivery/BUILD.md @@ -0,0 +1,123 @@ +# Building liblogosdelivery and Examples + +## Prerequisites + +- Nim 2.x compiler +- Rust toolchain (for RLN dependencies) +- GCC or Clang compiler +- Make + +## Building the Library + +### Dynamic Library + +```bash +make liblogosdelivery +``` + +This creates `build/liblogosdelivery.dylib` (macOS) or `build/liblogosdelivery.so` (Linux). + +### Static Library + +```bash +make liblogosdelivery STATIC=1 +``` + +This creates `build/liblogosdelivery.a`. + +## Building Examples + +### liblogosdelivery Example + +Compile the C example that demonstrates all library features: + +```bash +# Using Make (recommended) +make logosdelivery_example +``` + +## Running Examples + +```bash +./build/logosdelivery_example +``` + +The example will: +1. Create a Logos Messaging node +2. Register event callbacks for message events +3. Start the node +4. Subscribe to a content topic +5. Send a message +6. Show message delivery events (sent, propagated, or error) +7. 
Unsubscribe and cleanup + +## Build Artifacts + +After building, you'll have: + +``` +build/ +├── liblogosdelivery.dylib # Dynamic library (34MB) +├── liblogosdelivery.dylib.dSYM/ # Debug symbols +└── logosdelivery_example # Compiled example (34KB) +``` + +## Library Headers + +The main header file is: +- `liblogosdelivery/liblogosdelivery.h` - C API declarations + +## Troubleshooting + +### Library not found at runtime + +If you get "library not found" errors when running the example: + +**macOS:** +```bash +export DYLD_LIBRARY_PATH=/path/to/build:$DYLD_LIBRARY_PATH +./build/logosdelivery_example +``` + +**Linux:** +```bash +export LD_LIBRARY_PATH=/path/to/build:$LD_LIBRARY_PATH +./build/logosdelivery_example +``` + +## Cross-Compilation + +For cross-compilation, you need to: +1. Build the Nim library for the target platform +2. Use the appropriate cross-compiler +3. Link against the target platform's liblogosdelivery + +Example for Linux from macOS: +```bash +# Build library for Linux (requires Docker or cross-compilation setup) +# Then compile with cross-compiler +``` + +## Integration with Your Project + +### CMake + +```cmake +find_library(LOGOSDELIVERY_LIBRARY NAMES logosdelivery PATHS ${PROJECT_SOURCE_DIR}/build) +include_directories(${PROJECT_SOURCE_DIR}/liblogosdelivery) +target_link_libraries(your_target ${LOGOSDELIVERY_LIBRARY}) +``` + +### Makefile + +```makefile +CFLAGS += -I/path/to/liblogosdelivery +LDFLAGS += -L/path/to/build -llogosdelivery -Wl,-rpath,/path/to/build + +your_program: your_program.c + $(CC) $(CFLAGS) $< -o $@ $(LDFLAGS) +``` + +## API Documentation + +See: +- [liblogosdelivery.h](liblogosdelivery.h) - API function declarations +- [MESSAGE_EVENTS.md](MESSAGE_EVENTS.md) - Message event handling guide diff --git a/liblogosdelivery/MESSAGE_EVENTS.md b/liblogosdelivery/MESSAGE_EVENTS.md new file mode 100644 index 000000000..60740fb62 --- /dev/null +++ b/liblogosdelivery/MESSAGE_EVENTS.md @@ -0,0 +1,148 @@ +# Message Event 
Handling in LMAPI + +## Overview + +The liblogosdelivery library emits three types of message delivery events that clients can listen to by registering an event callback using `logosdelivery_set_event_callback()`. + +## Event Types + +### 1. message_sent +Emitted when a message is successfully accepted by the send service and queued for delivery. + +**JSON Structure:** +```json +{ + "eventType": "message_sent", + "requestId": "unique-request-id", + "messageHash": "0x..." +} +``` + +**Fields:** +- `eventType`: Always "message_sent" +- `requestId`: Request ID returned from the send operation +- `messageHash`: Hash of the message that was sent + +### 2. message_propagated +Emitted when a message has been successfully propagated to neighboring nodes on the network. + +**JSON Structure:** +```json +{ + "eventType": "message_propagated", + "requestId": "unique-request-id", + "messageHash": "0x..." +} +``` + +**Fields:** +- `eventType`: Always "message_propagated" +- `requestId`: Request ID from the send operation +- `messageHash`: Hash of the message that was propagated + +### 3. message_error +Emitted when an error occurs during message sending or propagation. + +**JSON Structure:** +```json +{ + "eventType": "message_error", + "requestId": "unique-request-id", + "messageHash": "0x...", + "error": "error description" +} +``` + +**Fields:** +- `eventType`: Always "message_error" +- `requestId`: Request ID from the send operation +- `messageHash`: Hash of the message that failed +- `error`: Description of what went wrong + +## Usage + +### 1. 
Define an Event Callback + +```c +void event_callback(int ret, const char *msg, size_t len, void *userData) { + if (ret != RET_OK || msg == NULL || len == 0) { + return; + } + + // Parse the JSON message and extract the eventType field + // (use a proper JSON parser here; see the notes below) + char eventType[64] = {0}; + /* ... fill eventType from msg ... */ + + if (strcmp(eventType, "message_sent") == 0) { + // Handle message sent + } else if (strcmp(eventType, "message_propagated") == 0) { + // Handle message propagated + } else if (strcmp(eventType, "message_error") == 0) { + // Handle message error + } +} +``` + +### 2. Register the Callback + +```c +void *ctx = logosdelivery_create_node(config, callback, userData); +logosdelivery_set_event_callback(ctx, event_callback, NULL); +``` + +### 3. Start the Node + +Once the node is started, events will be delivered to your callback: + +```c +logosdelivery_start_node(ctx, callback, userData); +``` + +## Event Flow + +For a typical successful message send: + +1. **send** → Returns request ID +2. **message_sent** → Message accepted and queued +3. **message_propagated** → Message delivered to peers + +For a failed message send: + +1. **send** → Returns request ID +2. **message_sent** → Message accepted and queued +3. **message_error** → Delivery failed with error description + +## Important Notes + +1. **Thread Safety**: The event callback is invoked from the FFI worker thread. Ensure your callback is thread-safe if it accesses shared state. + +2. **Non-Blocking**: Keep the callback fast and non-blocking. Do not perform long-running operations in the callback. + +3. **JSON Parsing**: The example uses a simple string-based parser. For production, use a proper JSON library like: + - [cJSON](https://github.com/DaveGamble/cJSON) + - [json-c](https://github.com/json-c/json-c) + - [Jansson](https://github.com/akheron/jansson) + +4. **Memory Management**: The message buffer is owned by the library. Copy any data you need to retain. + +5. **Event Order**: Events are delivered in the order they occur, but timing depends on network conditions. 
+ +## Example Implementation + +See `examples/logosdelivery_example.c` for a complete working example that: +- Registers an event callback +- Sends a message +- Receives and prints all three event types +- Properly parses the JSON event structure + +## Debugging Events + +To see all events during development: + +```c +void debug_event_callback(int ret, const char *msg, size_t len, void *userData) { + printf("Event received: %.*s\n", (int)len, msg); +} +``` + +This will print the raw JSON for all events, helping you understand the event structure. diff --git a/liblogosdelivery/README.md b/liblogosdelivery/README.md new file mode 100644 index 000000000..f9909dd3d --- /dev/null +++ b/liblogosdelivery/README.md @@ -0,0 +1,262 @@ +# Logos Messaging API (LMAPI) Library + +A C FFI library providing a simplified interface to Logos Messaging functionality. + +## Overview + +This library wraps the high-level API functions from `waku/api/api.nim` and exposes them via a C FFI interface, making them accessible from C, C++, and other languages that support C FFI. + +## API Functions + +### Node Lifecycle + +#### `logosdelivery_create_node` +Creates a new instance of the node from the given configuration JSON. + +```c +void *logosdelivery_create_node( + const char *configJson, + FFICallBack callback, + void *userData +); +``` + +**Parameters:** +- `configJson`: JSON string containing node configuration +- `callback`: Callback function to receive the result +- `userData`: User data passed to the callback + +**Returns:** Pointer to the context needed by other API functions, or NULL on error. + +**Example configuration JSON:** +```json +{ + "mode": "Core", + "clusterId": 1, + "entryNodes": [ + "enrtree://AIRVQ5DDA4FFWLRBCHJWUWOO6X6S4ZTZ5B667LQ6AJU6PEYDLRD5O@sandbox.waku.nodes.status.im" + ], + "networkingConfig": { + "listenIpv4": "0.0.0.0", + "p2pTcpPort": 60000, + "discv5UdpPort": 9000 + } +} +``` + +#### `logosdelivery_start_node` +Starts the node. 
+ +```c +int logosdelivery_start_node( + void *ctx, + FFICallBack callback, + void *userData +); +``` + +#### `logosdelivery_stop_node` +Stops the node. + +```c +int logosdelivery_stop_node( + void *ctx, + FFICallBack callback, + void *userData +); +``` + +#### `logosdelivery_destroy` +Destroys a node instance and frees resources. + +```c +int logosdelivery_destroy( + void *ctx, + FFICallBack callback, + void *userData +); +``` + +### Messaging + +#### `logosdelivery_subscribe` +Subscribe to a content topic to receive messages. + +```c +int logosdelivery_subscribe( + void *ctx, + FFICallBack callback, + void *userData, + const char *contentTopic +); +``` + +**Parameters:** +- `ctx`: Context pointer from `logosdelivery_create_node` +- `callback`: Callback function to receive the result +- `userData`: User data passed to the callback +- `contentTopic`: Content topic string (e.g., "/myapp/1/chat/proto") + +#### `logosdelivery_unsubscribe` +Unsubscribe from a content topic. + +```c +int logosdelivery_unsubscribe( + void *ctx, + FFICallBack callback, + void *userData, + const char *contentTopic +); +``` + +#### `logosdelivery_send` +Send a message. + +```c +int logosdelivery_send( + void *ctx, + FFICallBack callback, + void *userData, + const char *messageJson +); +``` + +**Parameters:** +- `messageJson`: JSON string containing the message + +**Example message JSON:** +```json +{ + "contentTopic": "/myapp/1/chat/proto", + "payload": "SGVsbG8gV29ybGQ=", + "ephemeral": false +} +``` + +Note: The `payload` field should be base64-encoded. + +**Returns:** Request ID in the callback message that can be used to track message delivery. + +### Events + +#### `logosdelivery_set_event_callback` +Sets a callback that will be invoked whenever an event occurs (e.g., message received). + +```c +void logosdelivery_set_event_callback( + void *ctx, + FFICallBack callback, + void *userData +); +``` + +**Important:** The callback should be fast, non-blocking, and thread-safe. 
+ +## Building + +The library follows the same build system as the main Logos Messaging project. + +### Build the library + +```bash +make liblogosdelivery STATIC=1 # Build static library +# or +make liblogosdelivery # Build dynamic library (default) +``` + +## Return Codes + +All functions that return `int` use the following return codes: + +- `RET_OK` (0): Success +- `RET_ERR` (1): Error +- `RET_MISSING_CALLBACK` (2): Missing callback function + +## Callback Function + +All API functions use the following callback signature: + +```c +typedef void (*FFICallBack)( + int callerRet, + const char *msg, + size_t len, + void *userData +); +``` + +**Parameters:** +- `callerRet`: Return code (RET_OK, RET_ERR, etc.) +- `msg`: Response message (may be empty for success) +- `len`: Length of the message +- `userData`: User data passed in the original call + +## Example Usage + +```c +#include "liblogosdelivery.h" +#include <stdio.h> + +void callback(int ret, const char *msg, size_t len, void *userData) { + if (ret == RET_OK) { + printf("Success: %.*s\n", (int)len, msg); + } else { + printf("Error: %.*s\n", (int)len, msg); + } +} + +int main() { + const char *config = "{" + "\"mode\": \"Core\"," + "\"clusterId\": 1" + "}"; + + // Create node + void *ctx = logosdelivery_create_node(config, callback, NULL); + if (ctx == NULL) { + return 1; + } + + // Start node + logosdelivery_start_node(ctx, callback, NULL); + + // Subscribe to a topic + logosdelivery_subscribe(ctx, callback, NULL, "/myapp/1/chat/proto"); + + // Send a message + const char *msg = "{" + "\"contentTopic\": \"/myapp/1/chat/proto\"," + "\"payload\": \"SGVsbG8gV29ybGQ=\"," + "\"ephemeral\": false" + "}"; + logosdelivery_send(ctx, callback, NULL, msg); + + // Clean up + logosdelivery_stop_node(ctx, callback, NULL); + logosdelivery_destroy(ctx, callback, NULL); + + return 0; +} +``` + +## Architecture + +The library is structured as follows: + +- `liblogosdelivery.h`: C header file with function declarations +- 
`liblogosdelivery.nim`: Main library entry point +- `declare_lib.nim`: Library declaration and initialization +- `logos_delivery_api/node_api.nim`: Node lifecycle API implementation +- `logos_delivery_api/messaging_api.nim`: Subscribe/send API implementation + +The library uses the nim-ffi framework for FFI infrastructure, which handles: +- Thread-safe request processing +- Async operation management +- Memory management between C and Nim +- Callback marshaling + +## See Also + +- Main API documentation: `waku/api/api.nim` +- Original libwaku library: `library/libwaku.nim` +- nim-ffi framework: `vendor/nim-ffi/` diff --git a/liblogosdelivery/declare_lib.nim b/liblogosdelivery/declare_lib.nim new file mode 100644 index 000000000..98209c649 --- /dev/null +++ b/liblogosdelivery/declare_lib.nim @@ -0,0 +1,24 @@ +import ffi +import waku/factory/waku + +declareLibrary("logosdelivery") + +template requireInitializedNode*( + ctx: ptr FFIContext[Waku], opName: string, onError: untyped +) = + if isNil(ctx): + let errMsg {.inject.} = opName & " failed: invalid context" + onError + elif isNil(ctx.myLib) or isNil(ctx.myLib[]): + let errMsg {.inject.} = opName & " failed: node is not initialized" + onError + +proc logosdelivery_set_event_callback( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +) {.dynlib, exportc, cdecl.} = + if isNil(ctx): + echo "error: invalid context in logosdelivery_set_event_callback" + return + + ctx[].eventCallback = cast[pointer](callback) + ctx[].eventUserData = userData diff --git a/liblogosdelivery/examples/logosdelivery_example.c b/liblogosdelivery/examples/logosdelivery_example.c new file mode 100644 index 000000000..5437be427 --- /dev/null +++ b/liblogosdelivery/examples/logosdelivery_example.c @@ -0,0 +1,193 @@ +#include "../liblogosdelivery.h" +#include <stdio.h> +#include <stdlib.h> +#include <string.h> +#include <unistd.h> + +static int create_node_ok = -1; + +// Helper function to extract a JSON string field value +// Very basic parser - for production use a proper JSON library +const 
char* extract_json_field(const char *json, const char *field, char *buffer, size_t bufSize) { + char searchStr[256]; + snprintf(searchStr, sizeof(searchStr), "\"%s\":\"", field); + + const char *start = strstr(json, searchStr); + if (!start) { + return NULL; + } + + start += strlen(searchStr); + const char *end = strchr(start, '"'); + if (!end) { + return NULL; + } + + size_t len = end - start; + if (len >= bufSize) { + len = bufSize - 1; + } + + memcpy(buffer, start, len); + buffer[len] = '\0'; + + return buffer; +} + +// Event callback that handles message events +void event_callback(int ret, const char *msg, size_t len, void *userData) { + if (ret != RET_OK || msg == NULL || len == 0) { + return; + } + + // Create null-terminated string for easier parsing + char *eventJson = malloc(len + 1); + if (!eventJson) { + return; + } + memcpy(eventJson, msg, len); + eventJson[len] = '\0'; + + // Extract eventType + char eventType[64]; + if (!extract_json_field(eventJson, "eventType", eventType, sizeof(eventType))) { + free(eventJson); + return; + } + + // Handle different event types + if (strcmp(eventType, "message_sent") == 0) { + char requestId[128]; + char messageHash[128]; + extract_json_field(eventJson, "requestId", requestId, sizeof(requestId)); + extract_json_field(eventJson, "messageHash", messageHash, sizeof(messageHash)); + printf("📤 [EVENT] Message sent - RequestID: %s, Hash: %s\n", requestId, messageHash); + + } else if (strcmp(eventType, "message_error") == 0) { + char requestId[128]; + char messageHash[128]; + char error[256]; + extract_json_field(eventJson, "requestId", requestId, sizeof(requestId)); + extract_json_field(eventJson, "messageHash", messageHash, sizeof(messageHash)); + extract_json_field(eventJson, "error", error, sizeof(error)); + printf("❌ [EVENT] Message error - RequestID: %s, Hash: %s, Error: %s\n", + requestId, messageHash, error); + + } else if (strcmp(eventType, "message_propagated") == 0) { + char requestId[128]; + char 
messageHash[128]; + extract_json_field(eventJson, "requestId", requestId, sizeof(requestId)); + extract_json_field(eventJson, "messageHash", messageHash, sizeof(messageHash)); + printf("✅ [EVENT] Message propagated - RequestID: %s, Hash: %s\n", requestId, messageHash); + + } else { + printf("ℹ️ [EVENT] Unknown event type: %s\n", eventType); + } + + free(eventJson); +} + +// Simple callback that prints results +void simple_callback(int ret, const char *msg, size_t len, void *userData) { + const char *operation = (const char *)userData; + + if (operation != NULL && strcmp(operation, "create_node") == 0) { + create_node_ok = (ret == RET_OK) ? 1 : 0; + } + + if (ret == RET_OK) { + if (len > 0) { + printf("[%s] Success: %.*s\n", operation, (int)len, msg); + } else { + printf("[%s] Success\n", operation); + } + } else { + printf("[%s] Error: %.*s\n", operation, (int)len, msg); + } +} + +int main() { + printf("=== Logos Messaging API (LMAPI) Example ===\n\n"); + + // Configuration JSON for creating a node + const char *config = "{" + "\"logLevel\": \"DEBUG\"," + // "\"mode\": \"Edge\"," + "\"mode\": \"Core\"," + "\"protocolsConfig\": {" + "\"entryNodes\": [\"/dns4/node-01.do-ams3.misc.logos-chat.status.im/tcp/30303/p2p/16Uiu2HAkxoqUTud5LUPQBRmkeL2xP4iKx2kaABYXomQRgmLUgf78\"]," + "\"clusterId\": 42," + "\"autoShardingConfig\": {" + "\"numShardsInCluster\": 8" + "}" + "}," + "\"networkingConfig\": {" + "\"listenIpv4\": \"0.0.0.0\"," + "\"p2pTcpPort\": 60000," + "\"discv5UdpPort\": 9000" + "}" + "}"; + + printf("1. Creating node...\n"); + void *ctx = logosdelivery_create_node(config, simple_callback, (void *)"create_node"); + if (ctx == NULL) { + printf("Failed to create node\n"); + return 1; + } + + // Wait a bit for the callback + sleep(1); + + if (create_node_ok != 1) { + printf("Create node failed, stopping example early.\n"); + logosdelivery_destroy(ctx, simple_callback, (void *)"destroy"); + return 1; + } + + printf("\n2. 
Setting up event callback...\n"); + logosdelivery_set_event_callback(ctx, event_callback, NULL); + printf("Event callback registered for message events\n"); + + printf("\n3. Starting node...\n"); + logosdelivery_start_node(ctx, simple_callback, (void *)"start_node"); + + // Wait for node to start + sleep(2); + + printf("\n4. Subscribing to content topic...\n"); + const char *contentTopic = "/example/1/chat/proto"; + logosdelivery_subscribe(ctx, simple_callback, (void *)"subscribe", contentTopic); + + // Wait for subscription + sleep(1); + + printf("\n5. Sending a message...\n"); + printf("Watch for message events (sent, propagated, or error):\n"); + // Create base64-encoded payload: "Hello, Logos Messaging!" + const char *message = "{" + "\"contentTopic\": \"/example/1/chat/proto\"," + "\"payload\": \"SGVsbG8sIExvZ29zIE1lc3NhZ2luZyE=\"," + "\"ephemeral\": false" + "}"; + logosdelivery_send(ctx, simple_callback, (void *)"send", message); + + // Wait for message events to arrive + printf("Waiting for message delivery events...\n"); + sleep(60); + + printf("\n6. Unsubscribing from content topic...\n"); + logosdelivery_unsubscribe(ctx, simple_callback, (void *)"unsubscribe", contentTopic); + + sleep(1); + + printf("\n7. Stopping node...\n"); + logosdelivery_stop_node(ctx, simple_callback, (void *)"stop_node"); + + sleep(1); + + printf("\n8. 
Destroying context...\n"); + logosdelivery_destroy(ctx, simple_callback, (void *)"destroy"); + + printf("\n=== Example completed ===\n"); + return 0; +} diff --git a/liblogosdelivery/json_event.nim b/liblogosdelivery/json_event.nim new file mode 100644 index 000000000..389e29120 --- /dev/null +++ b/liblogosdelivery/json_event.nim @@ -0,0 +1,27 @@ +import std/[json, macros] + +type JsonEvent*[T] = ref object + eventType*: string + payload*: T + +macro toFlatJson*(event: JsonEvent): JsonNode = + ## Serializes JsonEvent[T] to flat JSON with eventType first, + ## followed by all fields from T's payload + result = quote: + var jsonObj = newJObject() + jsonObj["eventType"] = %`event`.eventType + + # Serialize payload fields into the same object (flattening) + let payloadJson = %`event`.payload + for key, val in payloadJson.pairs: + jsonObj[key] = val + + jsonObj + +proc `$`*[T](event: JsonEvent[T]): string = + $toFlatJson(event) + +proc newJsonEvent*[T](eventType: string, payload: T): JsonEvent[T] = + ## Creates a new JsonEvent with the given eventType and payload. + ## The payload's fields will be flattened into the JSON output. + JsonEvent[T](eventType: eventType, payload: payload) diff --git a/liblogosdelivery/liblogosdelivery.h b/liblogosdelivery/liblogosdelivery.h new file mode 100644 index 000000000..b014d6385 --- /dev/null +++ b/liblogosdelivery/liblogosdelivery.h @@ -0,0 +1,82 @@ + +// Generated manually and inspired by libwaku.h +// Header file for Logos Messaging API (LMAPI) library +#pragma once +#ifndef __liblogosdelivery__ +#define __liblogosdelivery__ + +#include <stddef.h> +#include <stdbool.h> + +// The possible returned values for the functions that return int +#define RET_OK 0 +#define RET_ERR 1 +#define RET_MISSING_CALLBACK 2 + +#ifdef __cplusplus +extern "C" +{ +#endif + + typedef void (*FFICallBack)(int callerRet, const char *msg, size_t len, void *userData); + + // Creates a new instance of the node from the given configuration JSON. 
+ // Returns a pointer to the Context needed by the rest of the API functions. + // Configuration should be in JSON format following the NodeConfig structure. + void *logosdelivery_create_node( + const char *configJson, + FFICallBack callback, + void *userData); + + // Starts the node. + int logosdelivery_start_node(void *ctx, + FFICallBack callback, + void *userData); + + // Stops the node. + int logosdelivery_stop_node(void *ctx, + FFICallBack callback, + void *userData); + + // Destroys an instance of a node created with logosdelivery_create_node + int logosdelivery_destroy(void *ctx, + FFICallBack callback, + void *userData); + + // Subscribe to a content topic. + // contentTopic: string representing the content topic (e.g., "/myapp/1/chat/proto") + int logosdelivery_subscribe(void *ctx, + FFICallBack callback, + void *userData, + const char *contentTopic); + + // Unsubscribe from a content topic. + int logosdelivery_unsubscribe(void *ctx, + FFICallBack callback, + void *userData, + const char *contentTopic); + + // Send a message. + // messageJson: JSON string with the following structure: + // { + // "contentTopic": "/myapp/1/chat/proto", + // "payload": "base64-encoded-payload", + // "ephemeral": false + // } + // Returns a request ID that can be used to track the message delivery. + int logosdelivery_send(void *ctx, + FFICallBack callback, + void *userData, + const char *messageJson); + + // Sets a callback that will be invoked whenever an event occurs. + // It is crucial that the passed callback is fast, non-blocking and potentially thread-safe. 
+ void logosdelivery_set_event_callback(void *ctx, + FFICallBack callback, + void *userData); + +#ifdef __cplusplus +} +#endif + +#endif /* __liblogosdelivery__ */ diff --git a/liblogosdelivery/liblogosdelivery.nim b/liblogosdelivery/liblogosdelivery.nim new file mode 100644 index 000000000..7d068b065 --- /dev/null +++ b/liblogosdelivery/liblogosdelivery.nim @@ -0,0 +1,29 @@ +import std/[atomics, options] +import chronicles, chronos, chronos/threadsync, ffi +import waku/factory/waku, waku/node/waku_node, ./declare_lib + +################################################################################ +## Include different APIs, i.e. all procs with {.ffi.} pragma +include ./logos_delivery_api/node_api, ./logos_delivery_api/messaging_api + +################################################################################ +### Exported procs + +proc logosdelivery_destroy( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +): cint {.dynlib, exportc, cdecl.} = + initializeLibrary() + checkParams(ctx, callback, userData) + + ffi.destroyFFIContext(ctx).isOkOr: + let msg = "liblogosdelivery error: " & $error + callback(RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), userData) + return RET_ERR + + ## always need to invoke the callback although we don't retrieve value to the caller + callback(RET_OK, nil, 0, userData) + + return RET_OK + +# ### End of exported procs +# ################################################################################ diff --git a/liblogosdelivery/logos_delivery_api/messaging_api.nim b/liblogosdelivery/logos_delivery_api/messaging_api.nim new file mode 100644 index 000000000..cb2771034 --- /dev/null +++ b/liblogosdelivery/logos_delivery_api/messaging_api.nim @@ -0,0 +1,91 @@ +import std/[json] +import chronos, results, ffi +import stew/byteutils +import + waku/common/base64, + waku/factory/waku, + waku/waku_core/topics/content_topic, + waku/api/[api, types], + ../declare_lib + +proc logosdelivery_subscribe( + ctx: ptr 
FFIContext[Waku], + callback: FFICallBack, + userData: pointer, + contentTopicStr: cstring, +) {.ffi.} = + requireInitializedNode(ctx, "Subscribe"): + return err(errMsg) + + # ContentTopic is just a string type alias + let contentTopic = ContentTopic($contentTopicStr) + + (await api.subscribe(ctx.myLib[], contentTopic)).isOkOr: + let errMsg = $error + return err("Subscribe failed: " & errMsg) + + return ok("") + +proc logosdelivery_unsubscribe( + ctx: ptr FFIContext[Waku], + callback: FFICallBack, + userData: pointer, + contentTopicStr: cstring, +) {.ffi.} = + requireInitializedNode(ctx, "Unsubscribe"): + return err(errMsg) + + # ContentTopic is just a string type alias + let contentTopic = ContentTopic($contentTopicStr) + + api.unsubscribe(ctx.myLib[], contentTopic).isOkOr: + let errMsg = $error + return err("Unsubscribe failed: " & errMsg) + + return ok("") + +proc logosdelivery_send( + ctx: ptr FFIContext[Waku], + callback: FFICallBack, + userData: pointer, + messageJson: cstring, +) {.ffi.} = + requireInitializedNode(ctx, "Send"): + return err(errMsg) + + ## Parse the message JSON and send the message + var jsonNode: JsonNode + try: + jsonNode = parseJson($messageJson) + except Exception as e: + return err("Failed to parse message JSON: " & e.msg) + + # Extract content topic + if not jsonNode.hasKey("contentTopic"): + return err("Missing contentTopic field") + + # ContentTopic is just a string type alias + let contentTopic = ContentTopic(jsonNode["contentTopic"].getStr()) + + # Extract payload (expect base64 encoded string) + if not jsonNode.hasKey("payload"): + return err("Missing payload field") + + let payloadStr = jsonNode["payload"].getStr() + let payload = base64.decode(Base64String(payloadStr)).valueOr: + return err("invalid payload format: " & error) + + # Extract ephemeral flag + let ephemeral = jsonNode.getOrDefault("ephemeral").getBool(false) + + # Create message envelope + let envelope = MessageEnvelope.init( + contentTopic = contentTopic, payload = 
payload, ephemeral = ephemeral + ) + + # Send the message + let requestId = (await api.send(ctx.myLib[], envelope)).valueOr: + let errMsg = $error + return err("Send failed: " & errMsg) + + return ok($requestId) diff --git a/liblogosdelivery/logos_delivery_api/node_api.nim b/liblogosdelivery/logos_delivery_api/node_api.nim new file mode 100644 index 000000000..6a0041857 --- /dev/null +++ b/liblogosdelivery/logos_delivery_api/node_api.nim @@ -0,0 +1,111 @@ +import std/json +import chronos, results, ffi +import + waku/factory/waku, + waku/node/waku_node, + waku/api/[api, api_conf, types], + waku/events/message_events, + ../declare_lib, + ../json_event + +# Add JSON serialization for RequestId +proc `%`*(id: RequestId): JsonNode = + %($id) + +registerReqFFI(CreateNodeRequest, ctx: ptr FFIContext[Waku]): + proc(configJson: cstring): Future[Result[string, string]] {.async.} = + ## Parse the JSON configuration and create a node + let nodeConfig = + try: + decodeNodeConfigFromJson($configJson) + except SerializationError as e: + return err("Failed to parse config JSON: " & e.msg) + + # Create the node + ctx.myLib[] = (await api.createNode(nodeConfig)).valueOr: + let errMsg = $error + chronicles.error "CreateNodeRequest failed", err = errMsg + return err(errMsg) + + return ok("") + +proc logosdelivery_create_node( + configJson: cstring, callback: FFICallback, userData: pointer +): pointer {.dynlib, exportc, cdecl.} = + initializeLibrary() + + if isNil(callback): + echo "error: missing callback in logosdelivery_create_node" + return nil + + var ctx = ffi.createFFIContext[Waku]().valueOr: + let msg = "Error in createFFIContext: " & $error + callback(RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), userData) + return nil + + ctx.userData = userData + + ffi.sendRequestToFFIThread( + ctx, CreateNodeRequest.ffiNewReq(callback, userData, configJson) + ).isOkOr: + let msg = "error in sendRequestToFFIThread: " & $error + callback(RET_ERR, unsafeAddr msg[0], 
cast[csize_t](len(msg)), userData) + return nil + + return ctx + +proc logosdelivery_start_node( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +) {.ffi.} = + requireInitializedNode(ctx, "START_NODE"): + return err(errMsg) + + # setting up outgoing event listeners + let sentListener = MessageSentEvent.listen( + ctx.myLib[].brokerCtx, + proc(event: MessageSentEvent) {.async: (raises: []).} = + callEventCallback(ctx, "onMessageSent"): + $newJsonEvent("message_sent", event), + ).valueOr: + chronicles.error "MessageSentEvent.listen failed", err = $error + return err("MessageSentEvent.listen failed: " & $error) + + let errorListener = MessageErrorEvent.listen( + ctx.myLib[].brokerCtx, + proc(event: MessageErrorEvent) {.async: (raises: []).} = + callEventCallback(ctx, "onMessageError"): + $newJsonEvent("message_error", event), + ).valueOr: + chronicles.error "MessageErrorEvent.listen failed", err = $error + return err("MessageErrorEvent.listen failed: " & $error) + + let propagatedListener = MessagePropagatedEvent.listen( + ctx.myLib[].brokerCtx, + proc(event: MessagePropagatedEvent) {.async: (raises: []).} = + callEventCallback(ctx, "onMessagePropagated"): + $newJsonEvent("message_propagated", event), + ).valueOr: + chronicles.error "MessagePropagatedEvent.listen failed", err = $error + return err("MessagePropagatedEvent.listen failed: " & $error) + + (await startWaku(addr ctx.myLib[])).isOkOr: + let errMsg = $error + chronicles.error "START_NODE failed", err = errMsg + return err("failed to start: " & errMsg) + return ok("") + +proc logosdelivery_stop_node( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +) {.ffi.} = + requireInitializedNode(ctx, "STOP_NODE"): + return err(errMsg) + + MessageErrorEvent.dropAllListeners(ctx.myLib[].brokerCtx) + MessageSentEvent.dropAllListeners(ctx.myLib[].brokerCtx) + MessagePropagatedEvent.dropAllListeners(ctx.myLib[].brokerCtx) + + (await ctx.myLib[].stop()).isOkOr: + let errMsg = $error + 
chronicles.error "STOP_NODE failed", err = errMsg + return err("failed to stop: " & errMsg) + return ok("") diff --git a/liblogosdelivery/nim.cfg b/liblogosdelivery/nim.cfg new file mode 100644 index 000000000..3fd5adb32 --- /dev/null +++ b/liblogosdelivery/nim.cfg @@ -0,0 +1,27 @@ +# Nim configuration for liblogosdelivery + +# Ensure correct compiler configuration +--gc: + refc +--threads: + on + +# Include paths +--path: + "../vendor/nim-ffi" +--path: + "../" + +# Optimization and debugging +--opt: + speed +--debugger: + native + +# Export symbols for dynamic library +--app: + lib +--noMain + +# Enable FFI macro features when needed for debugging +# --define:ffiDumpMacros diff --git a/nix/default.nix b/nix/default.nix index d77862e8f..d532ec5b5 100644 --- a/nix/default.nix +++ b/nix/default.nix @@ -23,6 +23,10 @@ let tools = pkgs.callPackage ./tools.nix {}; version = tools.findKeyValue "^version = \"([a-f0-9.-]+)\"$" ../waku.nimble; revision = lib.substring 0 8 (src.rev or src.dirtyRev or "00000000"); + copyLibwaku = lib.elem "libwaku" targets; + copyLiblogosdelivery = lib.elem "liblogosdelivery" targets; + copyWakunode2 = lib.elem "wakunode2" targets; + hasKnownInstallTarget = copyLibwaku || copyLiblogosdelivery || copyWakunode2; in stdenv.mkDerivation { pname = "logos-messaging-nim"; @@ -91,11 +95,39 @@ in stdenv.mkDerivation { '' else '' mkdir -p $out/bin $out/include - # Copy library files - cp build/* $out/bin/ 2>/dev/null || true + # Copy artifacts from build directory (created by Make during buildPhase) + # Note: build/ is in the source tree, not result/ (which is a post-build symlink) + if [ -d build ]; then + ${lib.optionalString copyLibwaku '' + cp build/libwaku.{so,dylib,dll,a,lib} $out/bin/ 2>/dev/null || true + ''} - # Copy the header file - cp library/libwaku.h $out/include/ + ${lib.optionalString copyLiblogosdelivery '' + cp build/liblogosdelivery.{so,dylib,dll,a,lib} $out/bin/ 2>/dev/null || true + ''} + + ${lib.optionalString copyWakunode2 '' + 
cp build/wakunode2 $out/bin/ 2>/dev/null || true + ''} + + ${lib.optionalString (!hasKnownInstallTarget) '' + cp build/lib*.{so,dylib,dll,a,lib} $out/bin/ 2>/dev/null || true + ''} + fi + + # Copy header files + ${lib.optionalString copyLibwaku '' + cp library/libwaku.h $out/include/ 2>/dev/null || true + ''} + + ${lib.optionalString copyLiblogosdelivery '' + cp liblogosdelivery/liblogosdelivery.h $out/include/ 2>/dev/null || true + ''} + + ${lib.optionalString (!hasKnownInstallTarget) '' + cp library/libwaku.h $out/include/ 2>/dev/null || true + cp liblogosdelivery/liblogosdelivery.h $out/include/ 2>/dev/null || true + ''} ''; meta = with pkgs.lib; { diff --git a/nix/submodules.json b/nix/submodules.json new file mode 100644 index 000000000..2f94e5f2b --- /dev/null +++ b/nix/submodules.json @@ -0,0 +1,247 @@ +[ + { + "path": "vendor/db_connector", + "url": "https://github.com/nim-lang/db_connector.git", + "rev": "74aef399e5c232f95c9fc5c987cebac846f09d62" + } + , + { + "path": "vendor/dnsclient.nim", + "url": "https://github.com/ba0f3/dnsclient.nim.git", + "rev": "23214235d4784d24aceed99bbfe153379ea557c8" + } + , + { + "path": "vendor/nim-bearssl", + "url": "https://github.com/status-im/nim-bearssl.git", + "rev": "11e798b62b8e6beabe958e048e9e24c7e0f9ee63" + } + , + { + "path": "vendor/nim-chronicles", + "url": "https://github.com/status-im/nim-chronicles.git", + "rev": "54f5b726025e8c7385e3a6529d3aa27454c6e6ff" + } + , + { + "path": "vendor/nim-chronos", + "url": "https://github.com/status-im/nim-chronos.git", + "rev": "85af4db764ecd3573c4704139560df3943216cf1" + } + , + { + "path": "vendor/nim-confutils", + "url": "https://github.com/status-im/nim-confutils.git", + "rev": "e214b3992a31acece6a9aada7d0a1ad37c928f3b" + } + , + { + "path": "vendor/nim-dnsdisc", + "url": "https://github.com/status-im/nim-dnsdisc.git", + "rev": "b71d029f4da4ec56974d54c04518bada00e1b623" + } + , + { + "path": "vendor/nim-eth", + "url": "https://github.com/status-im/nim-eth.git", + "rev": 
"d9135e6c3c5d6d819afdfb566aa8d958756b73a8" + } + , + { + "path": "vendor/nim-faststreams", + "url": "https://github.com/status-im/nim-faststreams.git", + "rev": "c3ac3f639ed1d62f59d3077d376a29c63ac9750c" + } + , + { + "path": "vendor/nim-ffi", + "url": "https://github.com/logos-messaging/nim-ffi", + "rev": "06111de155253b34e47ed2aaed1d61d08d62cc1b" + } + , + { + "path": "vendor/nim-http-utils", + "url": "https://github.com/status-im/nim-http-utils.git", + "rev": "79cbab1460f4c0cdde2084589d017c43a3d7b4f1" + } + , + { + "path": "vendor/nim-json-rpc", + "url": "https://github.com/status-im/nim-json-rpc.git", + "rev": "9665c265035f49f5ff94bbffdeadde68e19d6221" + } + , + { + "path": "vendor/nim-json-serialization", + "url": "https://github.com/status-im/nim-json-serialization.git", + "rev": "b65fd6a7e64c864dabe40e7dfd6c7d07db0014ac" + } + , + { + "path": "vendor/nim-jwt", + "url": "https://github.com/vacp2p/nim-jwt.git", + "rev": "18f8378de52b241f321c1f9ea905456e89b95c6f" + } + , + { + "path": "vendor/nim-libbacktrace", + "url": "https://github.com/status-im/nim-libbacktrace.git", + "rev": "d8bd4ce5c46bb6d2f984f6b3f3d7380897d95ecb" + } + , + { + "path": "vendor/nim-libp2p", + "url": "https://github.com/vacp2p/nim-libp2p.git", + "rev": "eb7e6ff89889e41b57515f891ba82986c54809fb" + } + , + { + "path": "vendor/nim-lsquic", + "url": "https://github.com/vacp2p/nim-lsquic", + "rev": "f3fe33462601ea34eb2e8e9c357c92e61f8d121b" + } + , + { + "path": "vendor/nim-metrics", + "url": "https://github.com/status-im/nim-metrics.git", + "rev": "ecf64c6078d1276d3b7d9b3d931fbdb70004db11" + } + , + { + "path": "vendor/nim-minilru", + "url": "https://github.com/status-im/nim-minilru.git", + "rev": "0c4b2bce959591f0a862e9b541ba43c6d0cf3476" + } + , + { + "path": "vendor/nim-nat-traversal", + "url": "https://github.com/status-im/nim-nat-traversal.git", + "rev": "860e18c37667b5dd005b94c63264560c35d88004" + } + , + { + "path": "vendor/nim-presto", + "url": 
"https://github.com/status-im/nim-presto.git", + "rev": "92b1c7ff141e6920e1f8a98a14c35c1fa098e3be" + } + , + { + "path": "vendor/nim-regex", + "url": "https://github.com/nitely/nim-regex.git", + "rev": "4593305ed1e49731fc75af1dc572dd2559aad19c" + } + , + { + "path": "vendor/nim-results", + "url": "https://github.com/arnetheduck/nim-results.git", + "rev": "df8113dda4c2d74d460a8fa98252b0b771bf1f27" + } + , + { + "path": "vendor/nim-secp256k1", + "url": "https://github.com/status-im/nim-secp256k1.git", + "rev": "9dd3df62124aae79d564da636bb22627c53c7676" + } + , + { + "path": "vendor/nim-serialization", + "url": "https://github.com/status-im/nim-serialization.git", + "rev": "6f525d5447d97256750ca7856faead03e562ed20" + } + , + { + "path": "vendor/nim-sqlite3-abi", + "url": "https://github.com/arnetheduck/nim-sqlite3-abi.git", + "rev": "bdf01cf4236fb40788f0733466cdf6708783cbac" + } + , + { + "path": "vendor/nim-stew", + "url": "https://github.com/status-im/nim-stew.git", + "rev": "e5740014961438610d336cd81706582dbf2c96f0" + } + , + { + "path": "vendor/nim-stint", + "url": "https://github.com/status-im/nim-stint.git", + "rev": "470b7892561b5179ab20bd389a69217d6213fe58" + } + , + { + "path": "vendor/nim-taskpools", + "url": "https://github.com/status-im/nim-taskpools.git", + "rev": "9e8ccc754631ac55ac2fd495e167e74e86293edb" + } + , + { + "path": "vendor/nim-testutils", + "url": "https://github.com/status-im/nim-testutils.git", + "rev": "94d68e796c045d5b37cabc6be32d7bfa168f8857" + } + , + { + "path": "vendor/nim-toml-serialization", + "url": "https://github.com/status-im/nim-toml-serialization.git", + "rev": "fea85b27f0badcf617033ca1bc05444b5fd8aa7a" + } + , + { + "path": "vendor/nim-unicodedb", + "url": "https://github.com/nitely/nim-unicodedb.git", + "rev": "66f2458710dc641dd4640368f9483c8a0ec70561" + } + , + { + "path": "vendor/nim-unittest2", + "url": "https://github.com/status-im/nim-unittest2.git", + "rev": "8b51e99b4a57fcfb31689230e75595f024543024" + } + , + { + 
"path": "vendor/nim-web3", + "url": "https://github.com/status-im/nim-web3.git", + "rev": "81ee8ce479d86acb73be7c4f365328e238d9b4a3" + } + , + { + "path": "vendor/nim-websock", + "url": "https://github.com/status-im/nim-websock.git", + "rev": "ebe308a79a7b440a11dfbe74f352be86a3883508" + } + , + { + "path": "vendor/nim-zlib", + "url": "https://github.com/status-im/nim-zlib.git", + "rev": "daa8723fd32299d4ca621c837430c29a5a11e19a" + } + , + { + "path": "vendor/nimbus-build-system", + "url": "https://github.com/status-im/nimbus-build-system.git", + "rev": "e6c2c9da39c2d368d9cf420ac22692e99715d22c" + } + , + { + "path": "vendor/nimcrypto", + "url": "https://github.com/cheatfate/nimcrypto.git", + "rev": "721fb99ee099b632eb86dfad1f0d96ee87583774" + } + , + { + "path": "vendor/nph", + "url": "https://github.com/arnetheduck/nph.git", + "rev": "c6e03162dc2820d3088660f644818d7040e95791" + } + , + { + "path": "vendor/waku-rlnv2-contract", + "url": "https://github.com/logos-messaging/waku-rlnv2-contract.git", + "rev": "8a338f354481e8a3f3d64a72e38fad4c62e32dcd" + } + , + { + "path": "vendor/zerokit", + "url": "https://github.com/vacp2p/zerokit.git", + "rev": "70c79fbc989d4f87d9352b2f4bddcb60ebe55b19" + } +] diff --git a/scripts/generate_nix_submodules.sh b/scripts/generate_nix_submodules.sh new file mode 100755 index 000000000..51073294c --- /dev/null +++ b/scripts/generate_nix_submodules.sh @@ -0,0 +1,82 @@ +#!/usr/bin/env bash + +# Generates nix/submodules.json from .gitmodules and git ls-tree. +# This allows Nix to fetch all git submodules without requiring +# locally initialized submodules or the '?submodules=1' URI flag. +# +# Usage: ./scripts/generate_nix_submodules.sh +# +# Run this script after: +# - Adding/removing submodules +# - Updating submodule commits (e.g. after 'make update') +# - Any change to .gitmodules +# +# Compatible with macOS bash 3.x (no associative arrays). + +set -euo pipefail + +REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." 
&& pwd)" +OUTPUT="${REPO_ROOT}/nix/submodules.json" + +cd "$REPO_ROOT" + +TMP_URLS=$(mktemp) +TMP_REVS=$(mktemp) +trap 'rm -f "$TMP_URLS" "$TMP_REVS"' EXIT + +# Parse .gitmodules: extract (path, url) pairs +current_path="" +while IFS= read -r line; do + case "$line" in + *"path = "*) + current_path="${line#*path = }" + ;; + *"url = "*) + if [ -n "$current_path" ]; then + url="${line#*url = }" + url="${url%/}" + printf '%s\t%s\n' "$current_path" "$url" >> "$TMP_URLS" + current_path="" + fi + ;; + esac +done < .gitmodules + +# Get pinned commit hashes from git tree +git ls-tree HEAD vendor/ | while IFS= read -r tree_line; do + mode=$(echo "$tree_line" | awk '{print $1}') + type=$(echo "$tree_line" | awk '{print $2}') + hash=$(echo "$tree_line" | awk '{print $3}') + path=$(echo "$tree_line" | awk '{print $4}') + if [ "$type" = "commit" ]; then + path="${path%/}" + printf '%s\t%s\n' "$path" "$hash" >> "$TMP_REVS" + fi +done + +# Generate JSON by joining urls and revs on path +printf '[\n' > "$OUTPUT" +first=true + +sort "$TMP_URLS" | while IFS="$(printf '\t')" read -r path url; do + rev=$(grep "^${path} " "$TMP_REVS" | cut -f2 || true) + + if [ -z "$rev" ]; then + echo "WARNING: No commit hash found for submodule '$path', skipping" >&2 + continue + fi + + if [ "$first" = true ]; then + first=false + else + printf ' ,\n' >> "$OUTPUT" + fi + + printf ' {\n "path": "%s",\n "url": "%s",\n "rev": "%s"\n }\n' \ + "$path" "$url" "$rev" >> "$OUTPUT" +done + +printf ']\n' >> "$OUTPUT" + +count=$(grep -c '"path"' "$OUTPUT" || echo 0) +echo "Generated $OUTPUT with $count submodule entries" diff --git a/tests/api/test_node_conf.nim b/tests/api/test_node_conf.nim index 4dfbd4b51..84bbfead3 100644 --- a/tests/api/test_node_conf.nim +++ b/tests/api/test_node_conf.nim @@ -1,7 +1,9 @@ {.used.} import std/options, results, stint, testutils/unittests +import json_serialization import waku/api/api_conf, waku/factory/waku_conf, waku/factory/networks_config +import waku/common/logging suite 
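The `generate_nix_submodules.sh` script above joins `(path, url)` pairs parsed from `.gitmodules` with the pinned commits reported by `git ls-tree`, warning and skipping any path without a pinned rev. A Python sketch of the same join logic, under the assumption of the same two inputs (the helper names here are illustrative):

```python
def gitmodules_pairs(text: str) -> dict:
    """Extract (submodule path -> url) pairs from .gitmodules-style text,
    stripping a trailing slash from urls as the script does."""
    pairs, path = {}, None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("path = "):
            path = line[len("path = "):]
        elif line.startswith("url = ") and path is not None:
            pairs[path] = line[len("url = "):].rstrip("/")
            path = None
    return pairs

def submodules_entries(gitmodules_text: str, pinned_revs: dict) -> list:
    """Join urls with pinned revisions, skipping any path that has no
    pinned commit, mirroring the script's warn-and-skip behaviour."""
    return [
        {"path": p, "url": u, "rev": pinned_revs[p]}
        for p, u in sorted(gitmodules_pairs(gitmodules_text).items())
        if p in pinned_revs
    ]

GITMODULES = """
[submodule "vendor/nim-results"]
\tpath = vendor/nim-results
\turl = https://github.com/arnetheduck/nim-results.git
[submodule "vendor/orphan"]
\tpath = vendor/orphan
\turl = https://example.invalid/orphan.git
"""
entries = submodules_entries(GITMODULES, {"vendor/nim-results": "df8113d"})
```

Here `vendor/orphan` has no pinned rev and is dropped, just as the script emits a warning and continues.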
"LibWaku Conf - toWakuConf": test "Minimal configuration": @@ -298,3 +300,709 @@ suite "LibWaku Conf - toWakuConf": check: wakuConf.staticNodes.len == 1 wakuConf.staticNodes[0] == entryNodes[1] + +suite "NodeConfig JSON - complete format": + test "Full NodeConfig from complete JSON with field validation": + ## Given + let jsonStr = + """ + { + "mode": "Core", + "protocolsConfig": { + "entryNodes": ["enrtree://TREE@nodes.example.com"], + "staticStoreNodes": ["/ip4/1.2.3.4/tcp/80/p2p/16Uuu2HBmAcHvhLqQKwSSbX6BG5JLWUDRcaLVrehUVqpw7fz1hbYc"], + "clusterId": 10, + "autoShardingConfig": { + "numShardsInCluster": 4 + }, + "messageValidation": { + "maxMessageSize": "100 KiB", + "rlnConfig": null + } + }, + "networkingConfig": { + "listenIpv4": "192.168.1.1", + "p2pTcpPort": 7000, + "discv5UdpPort": 7001 + }, + "ethRpcEndpoints": ["http://localhost:8545"], + "p2pReliability": true, + "logLevel": "WARN", + "logFormat": "TEXT" + } + """ + + ## When + let config = decodeNodeConfigFromJson(jsonStr) + + ## Then — check every field + check: + config.mode == WakuMode.Core + config.ethRpcEndpoints == @["http://localhost:8545"] + config.p2pReliability == true + config.logLevel == LogLevel.WARN + config.logFormat == LogFormat.TEXT + + check: + config.networkingConfig.listenIpv4 == "192.168.1.1" + config.networkingConfig.p2pTcpPort == 7000 + config.networkingConfig.discv5UdpPort == 7001 + + let pc = config.protocolsConfig + check: + pc.entryNodes == @["enrtree://TREE@nodes.example.com"] + pc.staticStoreNodes == + @[ + "/ip4/1.2.3.4/tcp/80/p2p/16Uuu2HBmAcHvhLqQKwSSbX6BG5JLWUDRcaLVrehUVqpw7fz1hbYc" + ] + pc.clusterId == 10 + pc.autoShardingConfig.numShardsInCluster == 4 + pc.messageValidation.maxMessageSize == "100 KiB" + pc.messageValidation.rlnConfig.isNone() + + test "Full NodeConfig with RlnConfig present": + ## Given + let jsonStr = + """ + { + "mode": "Edge", + "protocolsConfig": { + "entryNodes": [], + "clusterId": 1, + "messageValidation": { + "maxMessageSize": "150 KiB", + 
"rlnConfig": { + "contractAddress": "0x1234567890ABCDEF1234567890ABCDEF12345678", + "chainId": 5, + "epochSizeSec": 600 + } + } + }, + "networkingConfig": { + "listenIpv4": "0.0.0.0", + "p2pTcpPort": 60000, + "discv5UdpPort": 9000 + } + } + """ + + ## When + let config = decodeNodeConfigFromJson(jsonStr) + + ## Then + check config.mode == WakuMode.Edge + + let mv = config.protocolsConfig.messageValidation + check: + mv.maxMessageSize == "150 KiB" + mv.rlnConfig.isSome() + let rln = mv.rlnConfig.get() + check: + rln.contractAddress == "0x1234567890ABCDEF1234567890ABCDEF12345678" + rln.chainId == 5'u + rln.epochSizeSec == 600'u64 + + test "Round-trip encode/decode preserves all fields": + ## Given + let original = NodeConfig.init( + mode = Edge, + protocolsConfig = ProtocolsConfig.init( + entryNodes = @["enrtree://TREE@example.com"], + staticStoreNodes = + @[ + "/ip4/1.2.3.4/tcp/80/p2p/16Uuu2HBmAcHvhLqQKwSSbX6BG5JLWUDRcaLVrehUVqpw7fz1hbYc" + ], + clusterId = 42, + autoShardingConfig = AutoShardingConfig(numShardsInCluster: 16), + messageValidation = MessageValidation( + maxMessageSize: "256 KiB", + rlnConfig: some( + RlnConfig( + contractAddress: "0xAABBCCDDEEFF00112233445566778899AABBCCDD", + chainId: 137, + epochSizeSec: 300, + ) + ), + ), + ), + networkingConfig = + NetworkingConfig(listenIpv4: "10.0.0.1", p2pTcpPort: 9090, discv5UdpPort: 9091), + ethRpcEndpoints = @["https://rpc.example.com"], + p2pReliability = true, + logLevel = LogLevel.DEBUG, + logFormat = LogFormat.JSON, + ) + + ## When + let decoded = decodeNodeConfigFromJson(Json.encode(original)) + + ## Then — check field by field + check: + decoded.mode == original.mode + decoded.ethRpcEndpoints == original.ethRpcEndpoints + decoded.p2pReliability == original.p2pReliability + decoded.logLevel == original.logLevel + decoded.logFormat == original.logFormat + decoded.networkingConfig.listenIpv4 == original.networkingConfig.listenIpv4 + decoded.networkingConfig.p2pTcpPort == 
original.networkingConfig.p2pTcpPort + decoded.networkingConfig.discv5UdpPort == original.networkingConfig.discv5UdpPort + decoded.protocolsConfig.entryNodes == original.protocolsConfig.entryNodes + decoded.protocolsConfig.staticStoreNodes == + original.protocolsConfig.staticStoreNodes + decoded.protocolsConfig.clusterId == original.protocolsConfig.clusterId + decoded.protocolsConfig.autoShardingConfig.numShardsInCluster == + original.protocolsConfig.autoShardingConfig.numShardsInCluster + decoded.protocolsConfig.messageValidation.maxMessageSize == + original.protocolsConfig.messageValidation.maxMessageSize + decoded.protocolsConfig.messageValidation.rlnConfig.isSome() + + let decodedRln = decoded.protocolsConfig.messageValidation.rlnConfig.get() + let originalRln = original.protocolsConfig.messageValidation.rlnConfig.get() + check: + decodedRln.contractAddress == originalRln.contractAddress + decodedRln.chainId == originalRln.chainId + decodedRln.epochSizeSec == originalRln.epochSizeSec + +suite "NodeConfig JSON - partial format with defaults": + test "Minimal NodeConfig - empty object uses all defaults": + ## Given + let config = decodeNodeConfigFromJson("{}") + let defaultConfig = NodeConfig.init() + + ## Then — compare field by field against defaults + check: + config.mode == defaultConfig.mode + config.ethRpcEndpoints == defaultConfig.ethRpcEndpoints + config.p2pReliability == defaultConfig.p2pReliability + config.logLevel == defaultConfig.logLevel + config.logFormat == defaultConfig.logFormat + config.networkingConfig.listenIpv4 == defaultConfig.networkingConfig.listenIpv4 + config.networkingConfig.p2pTcpPort == defaultConfig.networkingConfig.p2pTcpPort + config.networkingConfig.discv5UdpPort == + defaultConfig.networkingConfig.discv5UdpPort + config.protocolsConfig.entryNodes == defaultConfig.protocolsConfig.entryNodes + config.protocolsConfig.staticStoreNodes == + defaultConfig.protocolsConfig.staticStoreNodes + config.protocolsConfig.clusterId == 
defaultConfig.protocolsConfig.clusterId + config.protocolsConfig.autoShardingConfig.numShardsInCluster == + defaultConfig.protocolsConfig.autoShardingConfig.numShardsInCluster + config.protocolsConfig.messageValidation.maxMessageSize == + defaultConfig.protocolsConfig.messageValidation.maxMessageSize + config.protocolsConfig.messageValidation.rlnConfig.isSome() == + defaultConfig.protocolsConfig.messageValidation.rlnConfig.isSome() + + test "Minimal NodeConfig keeps network preset defaults": + ## Given + let config = decodeNodeConfigFromJson("{}") + + ## Then + check: + config.protocolsConfig.entryNodes == TheWakuNetworkPreset.entryNodes + config.protocolsConfig.messageValidation.rlnConfig.isSome() + + test "NodeConfig with only mode specified": + ## Given + let config = decodeNodeConfigFromJson("""{"mode": "Edge"}""") + + ## Then + check: + config.mode == WakuMode.Edge + ## Remaining fields get defaults + config.logLevel == LogLevel.INFO + config.logFormat == LogFormat.TEXT + config.p2pReliability == false + config.ethRpcEndpoints == newSeq[string]() + + test "ProtocolsConfig partial - optional fields get defaults": + ## Given — only entryNodes and clusterId provided + let jsonStr = + """ + { + "protocolsConfig": { + "entryNodes": ["enrtree://X@y.com"], + "clusterId": 5 + }, + "networkingConfig": { + "listenIpv4": "0.0.0.0", + "p2pTcpPort": 60000, + "discv5UdpPort": 9000 + } + } + """ + + ## When + let config = decodeNodeConfigFromJson(jsonStr) + + ## Then — required fields are set, optionals get defaults + check: + config.protocolsConfig.entryNodes == @["enrtree://X@y.com"] + config.protocolsConfig.clusterId == 5 + config.protocolsConfig.staticStoreNodes == newSeq[string]() + config.protocolsConfig.autoShardingConfig.numShardsInCluster == + DefaultAutoShardingConfig.numShardsInCluster + config.protocolsConfig.messageValidation.maxMessageSize == + DefaultMessageValidation.maxMessageSize + config.protocolsConfig.messageValidation.rlnConfig.isNone() + + test 
"MessageValidation partial - rlnConfig omitted defaults to none": + ## Given + let jsonStr = + """ + { + "protocolsConfig": { + "entryNodes": [], + "clusterId": 1, + "messageValidation": { + "maxMessageSize": "200 KiB" + } + }, + "networkingConfig": { + "listenIpv4": "0.0.0.0", + "p2pTcpPort": 60000, + "discv5UdpPort": 9000 + } + } + """ + + ## When + let config = decodeNodeConfigFromJson(jsonStr) + + ## Then + check: + config.protocolsConfig.messageValidation.maxMessageSize == "200 KiB" + config.protocolsConfig.messageValidation.rlnConfig.isNone() + + test "logLevel and logFormat omitted use defaults": + ## Given + let jsonStr = + """ + { + "mode": "Core", + "protocolsConfig": { + "entryNodes": [], + "clusterId": 1 + }, + "networkingConfig": { + "listenIpv4": "0.0.0.0", + "p2pTcpPort": 60000, + "discv5UdpPort": 9000 + } + } + """ + + ## When + let config = decodeNodeConfigFromJson(jsonStr) + + ## Then + check: + config.logLevel == LogLevel.INFO + config.logFormat == LogFormat.TEXT + +suite "NodeConfig JSON - unsupported fields raise errors": + test "Unknown field at NodeConfig level raises": + let jsonStr = + """ + { + "mode": "Core", + "unknownTopLevel": true + } + """ + + var raised = false + try: + discard decodeNodeConfigFromJson(jsonStr) + except SerializationError: + raised = true + check raised + + test "Typo in NodeConfig field name raises": + let jsonStr = + """ + { + "modes": "Core" + } + """ + + var raised = false + try: + discard decodeNodeConfigFromJson(jsonStr) + except SerializationError: + raised = true + check raised + + test "Unknown field in ProtocolsConfig raises": + let jsonStr = + """ + { + "protocolsConfig": { + "entryNodes": [], + "clusterId": 1, + "futureField": "something" + }, + "networkingConfig": { + "listenIpv4": "0.0.0.0", + "p2pTcpPort": 60000, + "discv5UdpPort": 9000 + } + } + """ + + var raised = false + try: + discard decodeNodeConfigFromJson(jsonStr) + except SerializationError: + raised = true + check raised + + test "Unknown 
field in NetworkingConfig raises": + let jsonStr = + """ + { + "protocolsConfig": { + "entryNodes": [], + "clusterId": 1 + }, + "networkingConfig": { + "listenIpv4": "0.0.0.0", + "p2pTcpPort": 60000, + "discv5UdpPort": 9000, + "futureNetworkField": "value" + } + } + """ + + var raised = false + try: + discard decodeNodeConfigFromJson(jsonStr) + except SerializationError: + raised = true + check raised + + test "Unknown field in MessageValidation raises": + let jsonStr = + """ + { + "protocolsConfig": { + "entryNodes": [], + "clusterId": 1, + "messageValidation": { + "maxMessageSize": "150 KiB", + "maxMesssageSize": "typo" + } + }, + "networkingConfig": { + "listenIpv4": "0.0.0.0", + "p2pTcpPort": 60000, + "discv5UdpPort": 9000 + } + } + """ + + var raised = false + try: + discard decodeNodeConfigFromJson(jsonStr) + except SerializationError: + raised = true + check raised + + test "Unknown field in RlnConfig raises": + let jsonStr = + """ + { + "protocolsConfig": { + "entryNodes": [], + "clusterId": 1, + "messageValidation": { + "maxMessageSize": "150 KiB", + "rlnConfig": { + "contractAddress": "0xABCDEF1234567890ABCDEF1234567890ABCDEF12", + "chainId": 1, + "epochSizeSec": 600, + "unknownRlnField": true + } + } + }, + "networkingConfig": { + "listenIpv4": "0.0.0.0", + "p2pTcpPort": 60000, + "discv5UdpPort": 9000 + } + } + """ + + var raised = false + try: + discard decodeNodeConfigFromJson(jsonStr) + except SerializationError: + raised = true + check raised + + test "Unknown field in AutoShardingConfig raises": + let jsonStr = + """ + { + "protocolsConfig": { + "entryNodes": [], + "clusterId": 1, + "autoShardingConfig": { + "numShardsInCluster": 8, + "shardPrefix": "extra" + } + }, + "networkingConfig": { + "listenIpv4": "0.0.0.0", + "p2pTcpPort": 60000, + "discv5UdpPort": 9000 + } + } + """ + + var raised = false + try: + discard decodeNodeConfigFromJson(jsonStr) + except SerializationError: + raised = true + check raised + +suite "NodeConfig JSON - missing 
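The test suites above pin down three strict-decoding rules: unknown fields raise a `SerializationError`, required fields must be present, and optional fields fall back to defaults. A compact Python sketch of that contract — `decode_record` is a hypothetical helper illustrating the behaviour the Nim `readValue` procs implement:

```python
def decode_record(obj: dict, required: list, optional: dict) -> dict:
    """Strict record decode in the spirit of the readValue procs:
    unknown keys raise, required keys must be present, and optional
    keys fall back to their defaults."""
    if not isinstance(obj, dict):
        raise ValueError("expected an object")
    unknown = set(obj) - set(required) - set(optional)
    if unknown:
        raise ValueError(f"unexpected field(s): {sorted(unknown)}")
    missing = [k for k in required if k not in obj]
    if missing:
        raise ValueError(f"missing required field(s): {missing}")
    out = dict(optional)
    out.update(obj)
    return out

# Mirrors the "ProtocolsConfig partial" test: entryNodes and clusterId
# are required, the rest default (defaults shown here are placeholders).
pc = decode_record(
    {"entryNodes": ["enrtree://X@y.com"], "clusterId": 5},
    required=["entryNodes", "clusterId"],
    optional={"staticStoreNodes": [], "autoShardingConfig": None,
              "messageValidation": None},
)
```

Rejecting unknown fields (rather than ignoring them) is what makes typos such as `maxMesssageSize` fail loudly at decode time instead of silently producing a default-configured node.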
required fields": + test "Missing 'entryNodes' in ProtocolsConfig": + let jsonStr = + """ + { + "protocolsConfig": { + "clusterId": 1 + }, + "networkingConfig": { + "listenIpv4": "0.0.0.0", + "p2pTcpPort": 60000, + "discv5UdpPort": 9000 + } + } + """ + + var raised = false + try: + discard decodeNodeConfigFromJson(jsonStr) + except SerializationError: + raised = true + check raised + + test "Missing 'clusterId' in ProtocolsConfig": + let jsonStr = + """ + { + "protocolsConfig": { + "entryNodes": [] + }, + "networkingConfig": { + "listenIpv4": "0.0.0.0", + "p2pTcpPort": 60000, + "discv5UdpPort": 9000 + } + } + """ + + var raised = false + try: + discard decodeNodeConfigFromJson(jsonStr) + except SerializationError: + raised = true + check raised + + test "Missing required fields in NetworkingConfig": + let jsonStr = + """ + { + "protocolsConfig": { + "entryNodes": [], + "clusterId": 1 + }, + "networkingConfig": { + "listenIpv4": "0.0.0.0" + } + } + """ + + var raised = false + try: + discard decodeNodeConfigFromJson(jsonStr) + except SerializationError: + raised = true + check raised + + test "Missing 'numShardsInCluster' in AutoShardingConfig": + let jsonStr = + """ + { + "protocolsConfig": { + "entryNodes": [], + "clusterId": 1, + "autoShardingConfig": {} + }, + "networkingConfig": { + "listenIpv4": "0.0.0.0", + "p2pTcpPort": 60000, + "discv5UdpPort": 9000 + } + } + """ + + var raised = false + try: + discard decodeNodeConfigFromJson(jsonStr) + except SerializationError: + raised = true + check raised + + test "Missing required fields in RlnConfig": + let jsonStr = + """ + { + "protocolsConfig": { + "entryNodes": [], + "clusterId": 1, + "messageValidation": { + "maxMessageSize": "150 KiB", + "rlnConfig": { + "contractAddress": "0xABCD" + } + } + }, + "networkingConfig": { + "listenIpv4": "0.0.0.0", + "p2pTcpPort": 60000, + "discv5UdpPort": 9000 + } + } + """ + + var raised = false + try: + discard decodeNodeConfigFromJson(jsonStr) + except SerializationError: + 
raised = true + check raised + + test "Missing 'maxMessageSize' in MessageValidation": + let jsonStr = + """ + { + "protocolsConfig": { + "entryNodes": [], + "clusterId": 1, + "messageValidation": { + "rlnConfig": null + } + }, + "networkingConfig": { + "listenIpv4": "0.0.0.0", + "p2pTcpPort": 60000, + "discv5UdpPort": 9000 + } + } + """ + + var raised = false + try: + discard decodeNodeConfigFromJson(jsonStr) + except SerializationError: + raised = true + check raised + +suite "NodeConfig JSON - invalid values": + test "Invalid enum value for mode": + var raised = false + try: + discard decodeNodeConfigFromJson("""{"mode": "InvalidMode"}""") + except SerializationError: + raised = true + check raised + + test "Invalid enum value for logLevel": + var raised = false + try: + discard decodeNodeConfigFromJson("""{"logLevel": "SUPERVERBOSE"}""") + except SerializationError: + raised = true + check raised + + test "Wrong type for clusterId (string instead of number)": + let jsonStr = + """ + { + "protocolsConfig": { + "entryNodes": [], + "clusterId": "not-a-number" + }, + "networkingConfig": { + "listenIpv4": "0.0.0.0", + "p2pTcpPort": 60000, + "discv5UdpPort": 9000 + } + } + """ + + var raised = false + try: + discard decodeNodeConfigFromJson(jsonStr) + except SerializationError: + raised = true + check raised + + test "Completely invalid JSON syntax": + var raised = false + try: + discard decodeNodeConfigFromJson("""{ not valid json at all }""") + except SerializationError: + raised = true + check raised + +suite "NodeConfig JSON -> WakuConf integration": + test "Decoded config translates to valid WakuConf": + ## Given + let jsonStr = + """ + { + "mode": "Core", + "protocolsConfig": { + "entryNodes": [ + "enrtree://AIRVQ5DDA4FFWLRBCHJWUWOO6X6S4ZTZ5B667LQ6AJU6PEYDLRD5O@sandbox.waku.nodes.status.im" + ], + "staticStoreNodes": [ + "/ip4/127.0.0.1/tcp/60000/p2p/16Uuu2HBmAcHvhLqQKwSSbX6BG5JLWUDRcaLVrehUVqpw7fz1hbYc" + ], + "clusterId": 55, + "autoShardingConfig": { + 
"numShardsInCluster": 6 + }, + "messageValidation": { + "maxMessageSize": "256 KiB", + "rlnConfig": null + } + }, + "networkingConfig": { + "listenIpv4": "0.0.0.0", + "p2pTcpPort": 60000, + "discv5UdpPort": 9000 + }, + "ethRpcEndpoints": ["http://localhost:8545"], + "p2pReliability": true, + "logLevel": "INFO", + "logFormat": "TEXT" + } + """ + + ## When + let nodeConfig = decodeNodeConfigFromJson(jsonStr) + let wakuConfRes = toWakuConf(nodeConfig) + + ## Then + require wakuConfRes.isOk() + let wakuConf = wakuConfRes.get() + require wakuConf.validate().isOk() + check: + wakuConf.clusterId == 55 + wakuConf.shardingConf.numShardsInCluster == 6 + wakuConf.maxMessageSizeBytes == 256'u64 * 1024'u64 + wakuConf.staticNodes.len == 1 + wakuConf.p2pReliability == true diff --git a/waku.nimble b/waku.nimble index 7368ba74b..e4c436e8d 100644 --- a/waku.nimble +++ b/waku.nimble @@ -64,7 +64,7 @@ proc buildBinary(name: string, srcDir = "./", params = "", lang = "c") = exec "nim " & lang & " --out:build/" & name & " --mm:refc " & extra_params & " " & srcDir & name & ".nim" -proc buildLibrary(lib_name: string, srcDir = "./", params = "", `type` = "static") = +proc buildLibrary(lib_name: string, srcDir = "./", params = "", `type` = "static", srcFile = "libwaku.nim", mainPrefix = "libwaku") = if not dirExists "build": mkDir "build" # allow something like "nim nimbus --verbosity:0 --hints:off nimbus.nims" @@ -73,12 +73,12 @@ proc buildLibrary(lib_name: string, srcDir = "./", params = "", `type` = "static extra_params &= " " & paramStr(i) if `type` == "static": exec "nim c" & " --out:build/" & lib_name & - " --threads:on --app:staticlib --opt:size --noMain --mm:refc --header -d:metrics --nimMainPrefix:libwaku --skipParentCfg:on -d:discv5_protocol_id=d5waku " & - extra_params & " " & srcDir & "libwaku.nim" + " --threads:on --app:staticlib --opt:size --noMain --mm:refc --header -d:metrics --nimMainPrefix:" & mainPrefix & " --skipParentCfg:on -d:discv5_protocol_id=d5waku " & + 
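The integration test above expects `"256 KiB"` to translate into `256 * 1024` bytes for `maxMessageSizeBytes` (the real conversion lives in `waku/common/utils/parse_size_units`). A minimal illustrative parser for the binary/decimal unit cases exercised by the tests — not nwaku's actual implementation:

```python
import re

# Binary (KiB/MiB) vs decimal (KB/MB) multipliers.
_UNITS = {"B": 1, "KiB": 1024, "MiB": 1024 ** 2,
          "KB": 1000, "MB": 1000 ** 2}

def parse_msg_size(text: str) -> int:
    """Parse a human-readable size such as '256 KiB' into bytes."""
    m = re.fullmatch(r"\s*(\d+)\s*([A-Za-z]+)\s*", text)
    if not m or m.group(2) not in _UNITS:
        raise ValueError(f"unsupported size: {text!r}")
    return int(m.group(1)) * _UNITS[m.group(2)]
```

With this, `"256 KiB"` yields 262144, matching the `wakuConf.maxMessageSizeBytes == 256'u64 * 1024'u64` assertion.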
extra_params & " " & srcDir & srcFile else: exec "nim c" & " --out:build/" & lib_name & - " --threads:on --app:lib --opt:size --noMain --mm:refc --header -d:metrics --nimMainPrefix:libwaku --skipParentCfg:off -d:discv5_protocol_id=d5waku " & - extra_params & " " & srcDir & "libwaku.nim" + " --threads:on --app:lib --opt:size --noMain --mm:refc --header -d:metrics --nimMainPrefix:" & mainPrefix & " --skipParentCfg:off -d:discv5_protocol_id=d5waku " & + extra_params & " " & srcDir & srcFile proc buildMobileAndroid(srcDir = ".", params = "") = let cpu = getEnv("CPU") @@ -400,3 +400,11 @@ task libWakuIOS, "Build the mobile bindings for iOS": let srcDir = "./library" let extraParams = "-d:chronicles_log_level=ERROR" buildMobileIOS srcDir, extraParams + +task liblogosdeliveryStatic, "Build the liblogosdelivery (Logos Messaging Delivery API) static library": + let lib_name = paramStr(paramCount()) + buildLibrary lib_name, "liblogosdelivery/", chroniclesParams, "static", "liblogosdelivery.nim", "liblogosdelivery" + +task liblogosdeliveryDynamic, "Build the liblogosdelivery (Logos Messaging Delivery API) dynamic library": + let lib_name = paramStr(paramCount()) + buildLibrary lib_name, "liblogosdelivery/", chroniclesParams, "dynamic", "liblogosdelivery.nim", "liblogosdelivery" diff --git a/waku/api/api.nim b/waku/api/api.nim index 7f13919b3..3493513a3 100644 --- a/waku/api/api.nim +++ b/waku/api/api.nim @@ -4,6 +4,7 @@ import waku/factory/waku import waku/[requests/health_requests, waku_core, waku_node] import waku/node/delivery_service/send_service import waku/node/delivery_service/subscription_service +import libp2p/peerid import ./[api_conf, types] logScope: diff --git a/waku/api/api_conf.nim b/waku/api/api_conf.nim index 47aa9e7d8..7cac66426 100644 --- a/waku/api/api_conf.nim +++ b/waku/api/api_conf.nim @@ -1,14 +1,18 @@ import std/[net, options] import results +import json_serialization, json_serialization/std/options as json_options import 
waku/common/utils/parse_size_units, + waku/common/logging, waku/factory/waku_conf, waku/factory/conf_builder/conf_builder, waku/factory/networks_config, ./entry_nodes +export json_serialization, json_options + type AutoShardingConfig* {.requiresInit.} = object numShardsInCluster*: uint16 @@ -87,6 +91,8 @@ type NodeConfig* {.requiresInit.} = object networkingConfig: NetworkingConfig ethRpcEndpoints: seq[string] p2pReliability: bool + logLevel: LogLevel + logFormat: LogFormat proc init*( T: typedesc[NodeConfig], @@ -95,6 +101,8 @@ proc init*( networkingConfig: NetworkingConfig = DefaultNetworkingConfig, ethRpcEndpoints: seq[string] = @[], p2pReliability: bool = false, + logLevel: LogLevel = LogLevel.INFO, + logFormat: LogFormat = LogFormat.TEXT, ): T = return T( mode: mode, @@ -102,11 +110,57 @@ proc init*( networkingConfig: networkingConfig, ethRpcEndpoints: ethRpcEndpoints, p2pReliability: p2pReliability, + logLevel: logLevel, + logFormat: logFormat, ) +# -- Getters for ProtocolsConfig (private fields) - used for testing -- + +proc entryNodes*(c: ProtocolsConfig): seq[string] = + c.entryNodes + +proc staticStoreNodes*(c: ProtocolsConfig): seq[string] = + c.staticStoreNodes + +proc clusterId*(c: ProtocolsConfig): uint16 = + c.clusterId + +proc autoShardingConfig*(c: ProtocolsConfig): AutoShardingConfig = + c.autoShardingConfig + +proc messageValidation*(c: ProtocolsConfig): MessageValidation = + c.messageValidation + +# -- Getters for NodeConfig (private fields) - used for testing -- + +proc mode*(c: NodeConfig): WakuMode = + c.mode + +proc protocolsConfig*(c: NodeConfig): ProtocolsConfig = + c.protocolsConfig + +proc networkingConfig*(c: NodeConfig): NetworkingConfig = + c.networkingConfig + +proc ethRpcEndpoints*(c: NodeConfig): seq[string] = + c.ethRpcEndpoints + +proc p2pReliability*(c: NodeConfig): bool = + c.p2pReliability + +proc logLevel*(c: NodeConfig): LogLevel = + c.logLevel + +proc logFormat*(c: NodeConfig): LogFormat = + c.logFormat + proc 
toWakuConf*(nodeConfig: NodeConfig): Result[WakuConf, string] = var b = WakuConfBuilder.init() + # Apply log configuration + b.withLogLevel(nodeConfig.logLevel) + b.withLogFormat(nodeConfig.logFormat) + # Apply networking configuration let networkingConfig = nodeConfig.networkingConfig let ip = parseIpAddress(networkingConfig.listenIpv4) @@ -214,3 +268,260 @@ proc toWakuConf*(nodeConfig: NodeConfig): Result[WakuConf, string] = return err("Failed to validate configuration: " & error) return ok(wakuConf) + +# ---- JSON serialization (writeValue / readValue) ---- +# ---------- AutoShardingConfig ---------- + +proc writeValue*(w: var JsonWriter, val: AutoShardingConfig) {.raises: [IOError].} = + w.beginRecord() + w.writeField("numShardsInCluster", val.numShardsInCluster) + w.endRecord() + +proc readValue*( + r: var JsonReader, val: var AutoShardingConfig +) {.raises: [SerializationError, IOError].} = + var numShardsInCluster: Option[uint16] + + for fieldName in readObjectFields(r): + case fieldName + of "numShardsInCluster": + numShardsInCluster = some(r.readValue(uint16)) + else: + r.raiseUnexpectedField(fieldName, "AutoShardingConfig") + + if numShardsInCluster.isNone(): + r.raiseUnexpectedValue("Missing required field 'numShardsInCluster'") + + val = AutoShardingConfig(numShardsInCluster: numShardsInCluster.get()) + +# ---------- RlnConfig ---------- + +proc writeValue*(w: var JsonWriter, val: RlnConfig) {.raises: [IOError].} = + w.beginRecord() + w.writeField("contractAddress", val.contractAddress) + w.writeField("chainId", val.chainId) + w.writeField("epochSizeSec", val.epochSizeSec) + w.endRecord() + +proc readValue*( + r: var JsonReader, val: var RlnConfig +) {.raises: [SerializationError, IOError].} = + var + contractAddress: Option[string] + chainId: Option[uint] + epochSizeSec: Option[uint64] + + for fieldName in readObjectFields(r): + case fieldName + of "contractAddress": + contractAddress = some(r.readValue(string)) + of "chainId": + chainId = 
some(r.readValue(uint)) + of "epochSizeSec": + epochSizeSec = some(r.readValue(uint64)) + else: + r.raiseUnexpectedField(fieldName, "RlnConfig") + + if contractAddress.isNone(): + r.raiseUnexpectedValue("Missing required field 'contractAddress'") + if chainId.isNone(): + r.raiseUnexpectedValue("Missing required field 'chainId'") + if epochSizeSec.isNone(): + r.raiseUnexpectedValue("Missing required field 'epochSizeSec'") + + val = RlnConfig( + contractAddress: contractAddress.get(), + chainId: chainId.get(), + epochSizeSec: epochSizeSec.get(), + ) + +# ---------- NetworkingConfig ---------- + +proc writeValue*(w: var JsonWriter, val: NetworkingConfig) {.raises: [IOError].} = + w.beginRecord() + w.writeField("listenIpv4", val.listenIpv4) + w.writeField("p2pTcpPort", val.p2pTcpPort) + w.writeField("discv5UdpPort", val.discv5UdpPort) + w.endRecord() + +proc readValue*( + r: var JsonReader, val: var NetworkingConfig +) {.raises: [SerializationError, IOError].} = + var + listenIpv4: Option[string] + p2pTcpPort: Option[uint16] + discv5UdpPort: Option[uint16] + + for fieldName in readObjectFields(r): + case fieldName + of "listenIpv4": + listenIpv4 = some(r.readValue(string)) + of "p2pTcpPort": + p2pTcpPort = some(r.readValue(uint16)) + of "discv5UdpPort": + discv5UdpPort = some(r.readValue(uint16)) + else: + r.raiseUnexpectedField(fieldName, "NetworkingConfig") + + if listenIpv4.isNone(): + r.raiseUnexpectedValue("Missing required field 'listenIpv4'") + if p2pTcpPort.isNone(): + r.raiseUnexpectedValue("Missing required field 'p2pTcpPort'") + if discv5UdpPort.isNone(): + r.raiseUnexpectedValue("Missing required field 'discv5UdpPort'") + + val = NetworkingConfig( + listenIpv4: listenIpv4.get(), + p2pTcpPort: p2pTcpPort.get(), + discv5UdpPort: discv5UdpPort.get(), + ) + +# ---------- MessageValidation ---------- + +proc writeValue*(w: var JsonWriter, val: MessageValidation) {.raises: [IOError].} = + w.beginRecord() + w.writeField("maxMessageSize", val.maxMessageSize) + 
w.writeField("rlnConfig", val.rlnConfig) + w.endRecord() + +proc readValue*( + r: var JsonReader, val: var MessageValidation +) {.raises: [SerializationError, IOError].} = + var + maxMessageSize: Option[string] + rlnConfig: Option[Option[RlnConfig]] + + for fieldName in readObjectFields(r): + case fieldName + of "maxMessageSize": + maxMessageSize = some(r.readValue(string)) + of "rlnConfig": + rlnConfig = some(r.readValue(Option[RlnConfig])) + else: + r.raiseUnexpectedField(fieldName, "MessageValidation") + + if maxMessageSize.isNone(): + r.raiseUnexpectedValue("Missing required field 'maxMessageSize'") + + val = MessageValidation( + maxMessageSize: maxMessageSize.get(), rlnConfig: rlnConfig.get(none(RlnConfig)) + ) + +# ---------- ProtocolsConfig ---------- + +proc writeValue*(w: var JsonWriter, val: ProtocolsConfig) {.raises: [IOError].} = + w.beginRecord() + w.writeField("entryNodes", val.entryNodes) + w.writeField("staticStoreNodes", val.staticStoreNodes) + w.writeField("clusterId", val.clusterId) + w.writeField("autoShardingConfig", val.autoShardingConfig) + w.writeField("messageValidation", val.messageValidation) + w.endRecord() + +proc readValue*( + r: var JsonReader, val: var ProtocolsConfig +) {.raises: [SerializationError, IOError].} = + var + entryNodes: Option[seq[string]] + staticStoreNodes: Option[seq[string]] + clusterId: Option[uint16] + autoShardingConfig: Option[AutoShardingConfig] + messageValidation: Option[MessageValidation] + + for fieldName in readObjectFields(r): + case fieldName + of "entryNodes": + entryNodes = some(r.readValue(seq[string])) + of "staticStoreNodes": + staticStoreNodes = some(r.readValue(seq[string])) + of "clusterId": + clusterId = some(r.readValue(uint16)) + of "autoShardingConfig": + autoShardingConfig = some(r.readValue(AutoShardingConfig)) + of "messageValidation": + messageValidation = some(r.readValue(MessageValidation)) + else: + r.raiseUnexpectedField(fieldName, "ProtocolsConfig") + + if entryNodes.isNone(): + 
r.raiseUnexpectedValue("Missing required field 'entryNodes'") + if clusterId.isNone(): + r.raiseUnexpectedValue("Missing required field 'clusterId'") + + val = ProtocolsConfig.init( + entryNodes = entryNodes.get(), + staticStoreNodes = staticStoreNodes.get(@[]), + clusterId = clusterId.get(), + autoShardingConfig = autoShardingConfig.get(DefaultAutoShardingConfig), + messageValidation = messageValidation.get(DefaultMessageValidation), + ) + +# ---------- NodeConfig ---------- + +proc writeValue*(w: var JsonWriter, val: NodeConfig) {.raises: [IOError].} = + w.beginRecord() + w.writeField("mode", val.mode) + w.writeField("protocolsConfig", val.protocolsConfig) + w.writeField("networkingConfig", val.networkingConfig) + w.writeField("ethRpcEndpoints", val.ethRpcEndpoints) + w.writeField("p2pReliability", val.p2pReliability) + w.writeField("logLevel", val.logLevel) + w.writeField("logFormat", val.logFormat) + w.endRecord() + +proc readValue*( + r: var JsonReader, val: var NodeConfig +) {.raises: [SerializationError, IOError].} = + var + mode: Option[WakuMode] + protocolsConfig: Option[ProtocolsConfig] + networkingConfig: Option[NetworkingConfig] + ethRpcEndpoints: Option[seq[string]] + p2pReliability: Option[bool] + logLevel: Option[LogLevel] + logFormat: Option[LogFormat] + + for fieldName in readObjectFields(r): + case fieldName + of "mode": + mode = some(r.readValue(WakuMode)) + of "protocolsConfig": + protocolsConfig = some(r.readValue(ProtocolsConfig)) + of "networkingConfig": + networkingConfig = some(r.readValue(NetworkingConfig)) + of "ethRpcEndpoints": + ethRpcEndpoints = some(r.readValue(seq[string])) + of "p2pReliability": + p2pReliability = some(r.readValue(bool)) + of "logLevel": + logLevel = some(r.readValue(LogLevel)) + of "logFormat": + logFormat = some(r.readValue(LogFormat)) + else: + r.raiseUnexpectedField(fieldName, "NodeConfig") + + val = NodeConfig.init( + mode = mode.get(WakuMode.Core), + protocolsConfig = 
protocolsConfig.get(TheWakuNetworkPreset), + networkingConfig = networkingConfig.get(DefaultNetworkingConfig), + ethRpcEndpoints = ethRpcEndpoints.get(@[]), + p2pReliability = p2pReliability.get(false), + logLevel = logLevel.get(LogLevel.INFO), + logFormat = logFormat.get(LogFormat.TEXT), + ) + +# ---------- Decode helper ---------- +# Json.decode returns T via `result`, which conflicts with {.requiresInit.} +# on Nim 2.x. This helper avoids the issue by using readValue into a var. + +proc decodeNodeConfigFromJson*( + jsonStr: string +): NodeConfig {.raises: [SerializationError].} = + var val = NodeConfig.init() # default-initialized + try: + var stream = unsafeMemoryInput(jsonStr) + var reader = JsonReader[DefaultFlavor].init(stream) + reader.readValue(val) + except IOError as err: + raise (ref SerializationError)(msg: err.msg) + return val diff --git a/waku/node/delivery_service/send_service/send_service.nim b/waku/node/delivery_service/send_service/send_service.nim index f6a6ac94c..a41d07786 100644 --- a/waku/node/delivery_service/send_service/send_service.nim +++ b/waku/node/delivery_service/send_service/send_service.nim @@ -91,6 +91,7 @@ proc setupSendProcessorChain( for i in 1 ..< processors.len: currentProcessor.chain(processors[i]) currentProcessor = processors[i] + trace "Send processor chain", index = i, processor = type(processors[i]).name return ok(processors[0]) From 895f3e2d36c6265947a7f3aa3c44cd3fc4aa5fc5 Mon Sep 17 00:00:00 2001 From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> Date: Tue, 17 Feb 2026 19:59:45 +0100 Subject: [PATCH 56/70] update after rename to logos-delivery (#3729) --- .../ISSUE_TEMPLATE/prepare_beta_release.md | 20 +++++++++---------- .../ISSUE_TEMPLATE/prepare_full_release.md | 16 +++++++-------- .github/workflows/ci-daily.yml | 4 ++-- .github/workflows/ci.yml | 8 ++++---- .github/workflows/pre-release.yml | 8 ++++---- README.md | 2 +- 6 files changed, 29 insertions(+), 29 deletions(-) diff --git 
a/.github/ISSUE_TEMPLATE/prepare_beta_release.md b/.github/ISSUE_TEMPLATE/prepare_beta_release.md index 383d9018c..2e4226e67 100644 --- a/.github/ISSUE_TEMPLATE/prepare_beta_release.md +++ b/.github/ISSUE_TEMPLATE/prepare_beta_release.md @@ -10,7 +10,7 @@ assignees: '' ### Items to complete @@ -22,11 +22,11 @@ All items below are to be completed by the owner of the given release. - [ ] Generate and edit release notes in CHANGELOG.md. - [ ] **Waku test and fleets validation** - - [ ] Ensure all the unit tests (specifically logos-messaging-js tests) are green against the release candidate. + - [ ] Ensure all the unit tests (specifically logos-delivery-js tests) are green against the release candidate. - [ ] Deploy the release candidate to `waku.test` only through [deploy-waku-test job](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-test/) and wait for it to finish (Jenkins access required; ask the infra team if you don't have it). - After completion, disable [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to master. - Verify the deployed version at https://fleets.waku.org/. - - Confirm the container image exists on [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab). + - Confirm the container image exists on [Harbor](https://harbor.status.im/harbor/projects/9/repositories/logos-delivery/artifacts-tab). - [ ] Analyze Kibana logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test`. - Most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")`. - [ ] Enable again the `waku.test` fleet to resume auto-deployment of the latest `master` commit. @@ -34,10 +34,10 @@ All items below are to be completed by the owner of the given release. - [ ] **Proceed with release** - [ ] Assign a final release tag (`v0.X.0-beta`) to the same commit that contains the validated release-candidate tag (e.g. 
`v0.X.0-beta-rc.N`) and submit a PR from the release branch to `master`. - - [ ] Update [nwaku-compose](https://github.com/logos-messaging/nwaku-compose) and [waku-simulator](https://github.com/logos-messaging/waku-simulator) according to the new release. - - [ ] Bump nwaku dependency in [waku-rust-bindings](https://github.com/logos-messaging/waku-rust-bindings) and make sure all examples and tests work. - - [ ] Bump nwaku dependency in [waku-go-bindings](https://github.com/logos-messaging/waku-go-bindings) and make sure all tests work. - - [ ] Create GitHub release (https://github.com/logos-messaging/nwaku/releases). + - [ ] Update [logos-delivery-compose](https://github.com/logos-messaging/logos-delivery-compose) and [logos-delivery-simulator](https://github.com/logos-messaging/waku-simulator) according to the new release. + - [ ] Bump logos-delivery dependency in [logos-delivery-rust-bindings](https://github.com/logos-messaging/logos-delivery-rust-bindings) and make sure all examples and tests work. + - [ ] Bump logos-delivery dependency in [logos-delivery-go-bindings](https://github.com/logos-messaging/logos-delivery-go-bindings) and make sure all tests work. + - [ ] Create GitHub release (https://github.com/logos-messaging/logos-delivery/releases). - [ ] Submit a PR to merge the release branch back to `master`. Make sure you use the option "Merge pull request (Create a merge commit)" to perform the merge. Ping repo admin if this option is not available. - [ ] **Promote release to fleets** @@ -47,10 +47,10 @@ All items below are to be completed by the owner of the given release. 
### Links -- [Release process](https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md) -- [Release notes](https://github.com/logos-messaging/nwaku/blob/master/CHANGELOG.md) +- [Release process](https://github.com/logos-messaging/logos-delivery/blob/master/docs/contributors/release-process.md) +- [Release notes](https://github.com/logos-messaging/logos-delivery/blob/master/CHANGELOG.md) - [Fleet ownership](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64) - [Infra-nim-waku](https://github.com/status-im/infra-nim-waku) - [Jenkins](https://ci.infra.status.im/job/nim-waku/) - [Fleets](https://fleets.waku.org/) -- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab) +- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/logos-delivery/artifacts-tab) diff --git a/.github/ISSUE_TEMPLATE/prepare_full_release.md b/.github/ISSUE_TEMPLATE/prepare_full_release.md index d7458a8e3..d919f9ed0 100644 --- a/.github/ISSUE_TEMPLATE/prepare_full_release.md +++ b/.github/ISSUE_TEMPLATE/prepare_full_release.md @@ -10,7 +10,7 @@ assignees: '' ### Items to complete @@ -24,7 +24,7 @@ All items below are to be completed by the owner of the given release. - [ ] **Validation of release candidate** - [ ] **Automated testing** - - [ ] Ensure all the unit tests (specifically logos-messaging-js tests) are green against the release candidate. + - [ ] Ensure all the unit tests (specifically logos-delivery-js tests) are green against the release candidate. - [ ] Ask Vac-QA and Vac-DST to perform the available tests against the release candidate. - [ ] Vac-DST (an additional report is needed; see [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f)) @@ -55,10 +55,10 @@ All items below are to be completed by the owner of the given release. 
- [ ] **Proceed with release** - [ ] Assign a final release tag (`v0.X.0`) to the same commit that contains the validated release-candidate tag (e.g. `v0.X.0`). - - [ ] Update [nwaku-compose](https://github.com/logos-messaging/nwaku-compose) and [waku-simulator](https://github.com/logos-messaging/waku-simulator) according to the new release. - - [ ] Bump nwaku dependency in [waku-rust-bindings](https://github.com/logos-messaging/waku-rust-bindings) and make sure all examples and tests work. - - [ ] Bump nwaku dependency in [waku-go-bindings](https://github.com/logos-messaging/waku-go-bindings) and make sure all tests work. - - [ ] Create GitHub release (https://github.com/logos-messaging/nwaku/releases). + - [ ] Update [logos-delivery-compose](https://github.com/logos-messaging/logos-delivery-compose) and [logos-delivery-simulator](https://github.com/logos-messaging/logos-delivery-simulator) according to the new release. + - [ ] Bump logos-delivery dependency in [logos-delivery-rust-bindings](https://github.com/logos-messaging/logos-delivery-rust-bindings) and make sure all examples and tests work. + - [ ] Bump logos-delivery dependency in [logos-delivery-go-bindings](https://github.com/logos-messaging/logos-delivery-go-bindings) and make sure all tests work. + - [ ] Create GitHub release (https://github.com/logos-messaging/logos-delivery/releases). - [ ] Submit a PR to merge the release branch back to `master`. Make sure you use the option "Merge pull request (Create a merge commit)" to perform the merge. Ping repo admin if this option is not available. - [ ] **Promote release to fleets** @@ -67,8 +67,8 @@ All items below are to be completed by the owner of the given release. 
### Links -- [Release process](https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md) -- [Release notes](https://github.com/logos-messaging/nwaku/blob/master/CHANGELOG.md) +- [Release process](https://github.com/logos-messaging/logos-delivery/blob/master/docs/contributors/release-process.md) +- [Release notes](https://github.com/logos-messaging/logos-delivery/blob/master/CHANGELOG.md) - [Fleet ownership](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64) - [Infra-nim-waku](https://github.com/status-im/infra-nim-waku) - [Jenkins](https://ci.infra.status.im/job/nim-waku/) diff --git a/.github/workflows/ci-daily.yml b/.github/workflows/ci-daily.yml index 236bd5216..b442014a6 100644 --- a/.github/workflows/ci-daily.yml +++ b/.github/workflows/ci-daily.yml @@ -1,4 +1,4 @@ -name: Daily logos-messaging-nim CI +name: Daily logos-delivery CI on: schedule: @@ -72,7 +72,7 @@ jobs: {\"name\": \"Status\", \"value\": \"$STATUS\", \"inline\": true} ], \"url\": \"$RUN_URL\", - \"footer\": {\"text\": \"Daily logos-messaging-nim CI\"} + \"footer\": {\"text\": \"Daily logos-delivery CI\"} }] }" \ "$DISCORD_WEBHOOK_URL" diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 3de6eb4f5..3c84f5c6f 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -138,12 +138,12 @@ jobs: build-docker-image: needs: changes if: ${{ needs.changes.outputs.v2 == 'true' || needs.changes.outputs.common == 'true' || needs.changes.outputs.docker == 'true' }} - uses: logos-messaging/logos-messaging-nim/.github/workflows/container-image.yml@10dc3d3eb4b6a3d4313f7b2cc4a85a925e9ce039 + uses: logos-messaging/logos-delivery/.github/workflows/container-image.yml@10dc3d3eb4b6a3d4313f7b2cc4a85a925e9ce039 secrets: inherit nwaku-nwaku-interop-tests: needs: build-docker-image - uses: logos-messaging/logos-messaging-interop-tests/.github/workflows/nim_waku_PR.yml@SMOKE_TEST_STABLE + uses: 
logos-messaging/logos-delivery-interop-tests/.github/workflows/nim_waku_PR.yml@SMOKE_TEST_STABLE with: node_nwaku: ${{ needs.build-docker-image.outputs.image }} @@ -151,14 +151,14 @@ jobs: js-waku-node: needs: build-docker-image - uses: logos-messaging/logos-messaging-js/.github/workflows/test-node.yml@master + uses: logos-messaging/logos-delivery-js/.github/workflows/test-node.yml@master with: nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }} test_type: node js-waku-node-optional: needs: build-docker-image - uses: logos-messaging/logos-messaging-js/.github/workflows/test-node.yml@master + uses: logos-messaging/logos-delivery-js/.github/workflows/test-node.yml@master with: nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }} test_type: node-optional diff --git a/.github/workflows/pre-release.yml b/.github/workflows/pre-release.yml index faded198b..e145e28ae 100644 --- a/.github/workflows/pre-release.yml +++ b/.github/workflows/pre-release.yml @@ -91,14 +91,14 @@ jobs: build-docker-image: needs: tag-name - uses: logos-messaging/nwaku/.github/workflows/container-image.yml@master + uses: logos-messaging/logos-delivery/.github/workflows/container-image.yml@master with: image_tag: ${{ needs.tag-name.outputs.tag }} secrets: inherit js-waku-node: needs: build-docker-image - uses: logos-messaging/logos-messaging-js/.github/workflows/test-node.yml@master + uses: logos-messaging/logos-delivery-js/.github/workflows/test-node.yml@master with: nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }} test_type: node @@ -106,7 +106,7 @@ jobs: js-waku-node-optional: needs: build-docker-image - uses: logos-messaging/logos-messaging-js/.github/workflows/test-node.yml@master + uses: logos-messaging/logos-delivery-js/.github/workflows/test-node.yml@master with: nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }} test_type: node-optional @@ -150,7 +150,7 @@ jobs: -u $(id -u) \ docker.io/wakuorg/sv4git:latest \ release-notes 
${RELEASE_NOTES_TAG} --previous $(git tag -l --sort -creatordate | grep -e "^v[0-9]*\.[0-9]*\.[0-9]*$") |\ - sed -E 's@#([0-9]+)@[#\1](https://github.com/logos-messaging/nwaku/issues/\1)@g' > release_notes.md + sed -E 's@#([0-9]+)@[#\1](https://github.com/logos-messaging/logos-delivery/issues/\1)@g' > release_notes.md sed -i "s/^## .*/Generated at $(date)/" release_notes.md diff --git a/README.md b/README.md index c64479738..8833ae131 100644 --- a/README.md +++ b/README.md @@ -2,7 +2,7 @@ ## Introduction -The logos-messaging-nim, a.k.a. lmn or nwaku, repository implements a set of libp2p protocols aimed to bring +This repository implements a set of libp2p protocols aimed to bring private communications. - Nim implementation of [these specs](https://github.com/vacp2p/rfc-index/tree/main/waku). From f208cb79ed726eba5706d88e6183dfe751118124 Mon Sep 17 00:00:00 2001 From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> Date: Tue, 17 Feb 2026 20:00:51 +0100 Subject: [PATCH 57/70] adjust Dockerfile.lightpushWithMix.compile (#3724) --- Dockerfile.lightpushWithMix.compile | 7 +++---- .../Dockerfile.liteprotocoltester.compile | 5 +++-- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/Dockerfile.lightpushWithMix.compile b/Dockerfile.lightpushWithMix.compile index 8006ec50b..82e076b41 100644 --- a/Dockerfile.lightpushWithMix.compile +++ b/Dockerfile.lightpushWithMix.compile @@ -7,7 +7,7 @@ ARG NIM_COMMIT ARG LOG_LEVEL=TRACE # Get build tools and required header files -RUN apk add --no-cache bash git build-base openssl-dev linux-headers curl jq +RUN apk add --no-cache bash git build-base openssl-dev linux-headers curl jq libbsd-dev WORKDIR /app COPY . . 
@@ -24,7 +24,6 @@ RUN make -j$(nproc) deps QUICK_AND_DIRTY_COMPILER=1 ${NIM_COMMIT} # Build the final node binary RUN make -j$(nproc) ${NIM_COMMIT} $MAKE_TARGET LOG_LEVEL=${LOG_LEVEL} NIMFLAGS="${NIMFLAGS}" - # REFERENCE IMAGE as BASE for specialized PRODUCTION IMAGES---------------------------------------- FROM alpine:3.18 AS base_lpt @@ -44,8 +43,8 @@ RUN apk add --no-cache libgcc libpq-dev \ wget \ iproute2 \ python3 \ - jq - + jq \ + libstdc++ COPY --from=nim-build /app/build/lightpush_publisher_mix /usr/bin/ RUN chmod +x /usr/bin/lightpush_publisher_mix diff --git a/apps/liteprotocoltester/Dockerfile.liteprotocoltester.compile b/apps/liteprotocoltester/Dockerfile.liteprotocoltester.compile index 9e2432051..dd7018cc0 100644 --- a/apps/liteprotocoltester/Dockerfile.liteprotocoltester.compile +++ b/apps/liteprotocoltester/Dockerfile.liteprotocoltester.compile @@ -7,7 +7,7 @@ ARG NIM_COMMIT ARG LOG_LEVEL=TRACE # Get build tools and required header files -RUN apk add --no-cache bash git build-base openssl-dev linux-headers curl jq +RUN apk add --no-cache bash git build-base openssl-dev linux-headers curl jq libbsd-dev WORKDIR /app COPY . . 
@@ -43,7 +43,8 @@ EXPOSE 30303 60000 8545 RUN apk add --no-cache libgcc libpq-dev \ wget \ iproute2 \ - python3 + python3 \ + libstdc++ COPY --from=nim-build /app/build/liteprotocoltester /usr/bin/ RUN chmod +x /usr/bin/liteprotocoltester From 2f3f56898f3251254523703cc3a3dec905eedbee Mon Sep 17 00:00:00 2001 From: Darshan <35736874+darshankabariya@users.noreply.github.com> Date: Wed, 18 Feb 2026 11:51:15 +0530 Subject: [PATCH 58/70] fix: update release process (#3727) --- .../ISSUE_TEMPLATE/prepare_beta_release.md | 27 ++++++---- .../ISSUE_TEMPLATE/prepare_full_release.md | 53 +++++++++++-------- docs/contributors/release-process.md | 33 +++++++----- 3 files changed, 67 insertions(+), 46 deletions(-) diff --git a/.github/ISSUE_TEMPLATE/prepare_beta_release.md b/.github/ISSUE_TEMPLATE/prepare_beta_release.md index 2e4226e67..3c4e76854 100644 --- a/.github/ISSUE_TEMPLATE/prepare_beta_release.md +++ b/.github/ISSUE_TEMPLATE/prepare_beta_release.md @@ -21,15 +21,21 @@ All items below are to be completed by the owner of the given release. - [ ] Assign release candidate tag to the release branch HEAD (e.g. `v0.X.0-beta-rc.0`, `v0.X.0-beta-rc.1`, ... `v0.X.0-beta-rc.N`). - [ ] Generate and edit release notes in CHANGELOG.md. -- [ ] **Waku test and fleets validation** - - [ ] Ensure all the unit tests (specifically logos-delivery-js tests) are green against the release candidate. - - [ ] Deploy the release candidate to `waku.test` only through [deploy-waku-test job](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-test/) and wait for it to finish (Jenkins access required; ask the infra team if you don't have it). - - After completion, disable [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to master. - - Verify the deployed version at https://fleets.waku.org/. - - Confirm the container image exists on [Harbor](https://harbor.status.im/harbor/projects/9/repositories/logos-delivery/artifacts-tab). 
- - [ ] Analyze Kibana logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test`. - - Most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")`. - - [ ] Enable again the `waku.test` fleet to resume auto-deployment of the latest `master` commit. +- [ ] **Validation of release candidate** + - [ ] **Automated testing** + - [ ] Ensure all the unit tests (specifically logos-messaging-js tests) are green against the release candidate. + - [ ] **Waku fleet testing** + - [ ] Deploy the release candidate to `waku.test` through [deploy-waku-test job](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-test/) and wait for it to finish (Jenkins access required; ask the infra team if you don't have it). + - After completion, disable the fleet deployment job so that daily CI does not override your release candidate. + - Verify at https://fleets.waku.org/ that the fleet is locked to the release candidate image. + - Confirm the container image exists on [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab). + - [ ] Search [Kibana logs](https://kibana.infra.status.im/app/discover) from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test`. + - Set time range to "Last 30 days" (or since last release). + - Most relevant search queries: `(fleet: "waku.test" AND message: "SIGSEGV")`, `(fleet: "waku.test" AND message: "exception")`, `(fleet: "waku.test" AND message: "error")`. + - Document any crashes or errors found. + - [ ] If `waku.test` validation is successful, deploy to `waku.sandbox` using the [deploy-waku-sandbox job](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-sandbox/). + - [ ] Search [Kibana logs](https://kibana.infra.status.im/app/discover) for `waku.sandbox`: `(fleet: "waku.sandbox" AND message: "SIGSEGV")`, `(fleet: "waku.sandbox" AND message: "exception")`, `(fleet: "waku.sandbox" AND message: "error")`.
If no crashes or errors appear in `waku.test`, it is unlikely that any will appear in `waku.sandbox`. + - [ ] Enable the `waku.test` fleet again to resume auto-deployment of the latest `master` commit. - [ ] **Proceed with release** @@ -53,4 +59,5 @@ All items below are to be completed by the owner of the given release. - [Infra-nim-waku](https://github.com/status-im/infra-nim-waku) - [Jenkins](https://ci.infra.status.im/job/nim-waku/) - [Fleets](https://fleets.waku.org/) -- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/logos-delivery/artifacts-tab) +- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab) +- [Kibana](https://kibana.infra.status.im/app/) diff --git a/.github/ISSUE_TEMPLATE/prepare_full_release.md b/.github/ISSUE_TEMPLATE/prepare_full_release.md index d919f9ed0..4df808bd4 100644 --- a/.github/ISSUE_TEMPLATE/prepare_full_release.md +++ b/.github/ISSUE_TEMPLATE/prepare_full_release.md @@ -24,33 +24,39 @@ All items below are to be completed by the owner of the given release. - [ ] **Validation of release candidate** - [ ] **Automated testing** - - [ ] Ensure all the unit tests (specifically logos-delivery-js tests) are green against the release candidate. - - [ ] Ask Vac-QA and Vac-DST to perform the available tests against the release candidate. - - [ ] Vac-DST (an additional report is needed; see [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f)) + - [ ] Ensure all the unit tests (specifically logos-messaging-js tests) are green against the release candidate. - [ ] **Waku fleet testing** - - [ ] Deploy the release candidate to `waku.test` and `waku.sandbox` fleets. - - Start the [deployment job](https://ci.infra.status.im/job/nim-waku/) for both fleets and wait for it to finish (Jenkins access required; ask the infra team if you don't have it).
- - After completion, disable [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to `master`. - - Verify the deployed version at https://fleets.waku.org/. + - [ ] Deploy the release candidate to the `waku.test` fleet. + - Start the [deployment job](https://ci.infra.status.im/job/nim-waku/) and wait for it to finish (Jenkins access required; ask the infra team if you don't have it). + - After completion, disable the fleet deployment job so that daily CI does not override your release candidate. + - Verify at https://fleets.waku.org/ that the fleet is locked to the release candidate image. - Confirm the container image exists on [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab). - - [ ] Search _Kibana_ logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test` and `waku.sandbox`. - - Most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")` OR `(fleet: "waku.sandbox" AND message: "SIGSEGV")`. - - [ ] Enable again the `waku.test` fleet to resume auto-deployment of the latest `master` commit. + - [ ] Search [Kibana logs](https://kibana.infra.status.im/app/discover) from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test`. + - Set time range to "Last 30 days" (or since last release). + - Most relevant search queries: `(fleet: "waku.test" AND message: "SIGSEGV")`, `(fleet: "waku.test" AND message: "exception")`, `(fleet: "waku.test" AND message: "error")`. + - Document any crashes or errors found. + - [ ] If `waku.test` validation is successful, deploy to `waku.sandbox` using the same [deployment job](https://ci.infra.status.im/job/nim-waku/). + - [ ] Search [Kibana logs](https://kibana.infra.status.im/app/discover) for `waku.sandbox`: `(fleet: "waku.sandbox" AND message: "SIGSEGV")`, `(fleet: "waku.sandbox" AND message: "exception")`, `(fleet: "waku.sandbox" AND message: "error")`.
Most likely, if there are no crashes or errors in `waku.test`, there will be none in `waku.sandbox`. + - [ ] Enable the `waku.test` fleet again to resume auto-deployment of the latest `master` commit. -- [ ] **Status fleet testing** - - [ ] Deploy release candidate to `status.staging` - - [ ] Perform [sanity check](https://www.notion.so/How-to-test-Nwaku-on-Status-12c6e4b9bf06420ca868bd199129b425) and log results as comments in this issue. - - [ ] Connect 2 instances to `status.staging` fleet, one in relay mode, the other one in light client. - - 1:1 Chats with each other - - Send and receive messages in a community - - Close one instance, send messages with second instance, reopen first instance and confirm messages sent while offline are retrieved from store - - [ ] Perform checks based on _end user impact_ - - [ ] Inform other (Waku and Status) CCs to point their instances to `status.staging` for a few days. Ping Status colleagues on their Discord server or in the [Status community](https://status.app/c/G3kAAMSQtb05kog3aGbr3kiaxN4tF5xy4BAGEkkLwILk2z3GcoYlm5hSJXGn7J3laft-tnTwDWmYJ18dP_3bgX96dqr_8E3qKAvxDf3NrrCMUBp4R9EYkQez9XSM4486mXoC3mIln2zc-TNdvjdfL9eHVZ-mGgs=#zQ3shZeEJqTC1xhGUjxuS4rtHSrhJ8vUYp64v6qWkLpvdy9L9) (this is not a blocking point.) - - [ ] Ask Status-QA to perform sanity checks (as described above) and checks based on _end user impact_; specify the version being tested - - [ ] Ask Status-QA or infra to run the automated Status e2e tests against `status.staging` - - [ ] Get other CCs' sign-off: they should comment on this PR, e.g., "Used the app for a week, no problem." If problems are reported, resolve them and create a new RC. - - [ ] **Get Status-QA sign-off**, ensuring that the `status.test` update will not disturb ongoing activities. + - [ ] **QA and DST testing** + - [ ] Ask Vac-QA and Vac-DST to run their available tests against the release candidate; share all release candidates with both teams.
+ - [ ] Vac-DST: An additional report is needed ([see this example](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f)). Inform the DST team of the expectations for this RC, for example whether we expect higher or lower bandwidth consumption. + + - [ ] **Status fleet testing** + - [ ] Deploy release candidate to `status.staging` + - [ ] Perform [sanity check](https://www.notion.so/How-to-test-Nwaku-on-Status-12c6e4b9bf06420ca868bd199129b425) and log results as comments in this issue. + - [ ] Connect 2 instances to `status.staging` fleet, one in relay mode and the other in light client mode. + - 1:1 Chats with each other + - Send and receive messages in a community + - Close one instance, send messages with second instance, reopen first instance and confirm messages sent while offline are retrieved from store + - [ ] Perform checks based on _end user impact_ + - [ ] Inform other (Waku and Status) CCs to point their instances to `status.staging` for a few days. Ping Status colleagues on their Discord server or in the [Status community](https://status.app/c/G3kAAMSQtb05kog3aGbr3kiaxN4tF5xy4BAGEkkLwILk2z3GcoYlm5hSJXGn7J3laft-tnTwDWmYJ18dP_3bgX96dqr_8E3qKAvxDf3NrrCMUBp4R9EYkQez9XSM4486mXoC3mIln2zc-TNdvjdfL9eHVZ-mGgs=#zQ3shZeEJqTC1xhGUjxuS4rtHSrhJ8vUYp64v6qWkLpvdy9L9) (this is not a blocking point.) + - [ ] Ask Status-QA to perform sanity checks (as described above) and checks based on _end user impact_; specify the version being tested + - [ ] Ask Status-QA or infra to run the automated Status e2e tests against `status.staging` + - [ ] Get other CCs' sign-off: they should comment on this PR, e.g., "Used the app for a week, no problem." If problems are reported, resolve them and create a new RC. + - [ ] **Get Status-QA sign-off**, ensuring that the `status.test` update will not disturb ongoing activities. - [ ] **Proceed with release** @@ -74,3 +80,4 @@ All items below are to be completed by the owner of the given release.
- [Jenkins](https://ci.infra.status.im/job/nim-waku/) - [Fleets](https://fleets.waku.org/) - [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab) +- [Kibana](https://kibana.infra.status.im/app/) diff --git a/docs/contributors/release-process.md b/docs/contributors/release-process.md index bde63aa6f..8aa9282cd 100644 --- a/docs/contributors/release-process.md +++ b/docs/contributors/release-process.md @@ -20,7 +20,7 @@ For more context, see https://trunkbaseddevelopment.com/branch-for-release/ - **Full release**: follow the entire [Release process](#release-process--step-by-step). -- **Beta release**: skip just `6a` and `6c` steps from [Release process](#release-process--step-by-step). +- **Beta release**: skip just `6c` and `6d` steps from [Release process](#release-process--step-by-step). - Choose the appropriate release process based on the release type: - [Full Release](../../.github/ISSUE_TEMPLATE/prepare_full_release.md) @@ -70,20 +70,26 @@ For more context, see https://trunkbaseddevelopment.com/branch-for-release/ 6a. **Automated testing** - Ensure all the unit tests (specifically js-waku tests) are green against the release candidate. - - Ask Vac-QA and Vac-DST to run their available tests against the release candidate; share all release candidates with both teams. - - > We need an additional report like [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f) specifically from the DST team. 6b. **Waku fleet testing** - - Start job on `waku.sandbox` and `waku.test` [Deployment job](https://ci.infra.status.im/job/nim-waku/), wait for completion of the job. If it fails, then debug it. - - After completion, disable [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to `master`. - - Verify at https://fleets.waku.org/ that the fleet is locked to the release candidate version. 
+ - Start the `waku.test` [deployment job](https://ci.infra.status.im/job/nim-waku/) and wait for it to complete. If it fails, debug it. + - After completion, disable the fleet so that daily CI does not override your release candidate. + - Verify at https://fleets.waku.org/ that the fleet is locked to the release candidate image. - Check if the image is created at [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab). - - Search _Kibana_ logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test` and `waku.sandbox`. - - Most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")` OR `(fleet: "waku.sandbox" AND message: "SIGSEGV")`. + - Search [Kibana logs](https://kibana.infra.status.im/app/discover) from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test`. + - Set time range to "Last 30 days" (or since last release). + - Most relevant search query: `(fleet: "waku.test" AND message: "SIGSEGV")`, `(fleet: "waku.test" AND message: "exception")`, `(fleet: "waku.test" AND message: "error")`. + - Document any crashes or errors found. + - If `waku.test` validation is successful, deploy to `waku.sandbox` using the same [deployment job](https://ci.infra.status.im/job/nim-waku/). + - Search [Kibana logs](https://kibana.infra.status.im/app/discover) for `waku.sandbox`: `(fleet: "waku.sandbox" AND message: "SIGSEGV")`, `(fleet: "waku.sandbox" AND message: "exception")`, `(fleet: "waku.sandbox" AND message: "error")`. Most likely, if there are no crashes or errors in `waku.test`, there will be none in `waku.sandbox`. - Enable the `waku.test` fleet again to resume auto-deployment of the latest `master` commit. - 6c. **Status fleet testing** + 6c. **QA and DST testing** + - Ask Vac-QA and Vac-DST to run their available tests against the release candidate; share all release candidates with both teams.
+ + > We need an additional report like [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f) specifically from the DST team. Inform the DST team of the expectations for this RC, for example whether we expect higher or lower bandwidth consumption. + + 6d. **Status fleet testing** - Deploy release candidate to `status.staging` - Perform [sanity check](https://www.notion.so/How-to-test-Nwaku-on-Status-12c6e4b9bf06420ca868bd199129b425) and log results as comments in this issue. - Connect 2 instances to `status.staging` fleet, one in relay mode, the other one in light client. @@ -120,10 +126,10 @@ We also need to merge the release branch back into master as a final step. 2. Deploy the release image to [Dockerhub](https://hub.docker.com/r/wakuorg/nwaku) by triggering [the manual Jenkins deployment job](https://ci.infra.status.im/job/nim-waku/job/docker-manual/). > Ensure the following build parameters are set: > - `MAKE_TARGET`: `wakunode2` - > - `IMAGE_TAG`: the release tag (e.g. `v0.36.0`) + > - `IMAGE_TAG`: the release tag (e.g. `v0.38.0`) > - `IMAGE_NAME`: `wakuorg/nwaku` > - `NIMFLAGS`: `--colors:off -d:disableMarchNative -d:chronicles_colors:none -d:postgres` - > - `GIT_REF` the release tag (e.g. `v0.36.0`) + > - `GIT_REF` the release tag (e.g. `v0.38.0`) ### Performing a patch release @@ -154,4 +160,5 @@ We also need to merge the release branch back into master as a final step.
- [Infra-nim-waku](https://github.com/status-im/infra-nim-waku) - [Jenkins](https://ci.infra.status.im/job/nim-waku/) - [Fleets](https://fleets.waku.org/) -- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab) \ No newline at end of file +- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab) +- [Kibana](https://kibana.infra.status.im/app/) \ No newline at end of file From 335600ebcb95a34f85a1e3c22d6cc736abc7559f Mon Sep 17 00:00:00 2001 From: Prem Chaitanya Prathi Date: Thu, 19 Feb 2026 10:26:17 +0530 Subject: [PATCH 59/70] feat: waku kademlia integration and mix updates (#3722) * feat: integrate mix protocol with extended kademlia discovery Co-authored-by: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> --- apps/chat2mix/chat2mix.nim | 45 ++- apps/chat2mix/config_chat2mix.nim | 13 +- nix/default.nix | 13 +- simulations/mixnet/config.toml | 13 +- simulations/mixnet/config1.toml | 11 +- simulations/mixnet/config2.toml | 11 +- simulations/mixnet/config3.toml | 11 +- simulations/mixnet/config4.toml | 11 +- simulations/mixnet/run_chat_mix.sh | 2 +- simulations/mixnet/run_mix_node.sh | 2 +- simulations/mixnet/run_mix_node1.sh | 2 +- simulations/mixnet/run_mix_node2.sh | 2 +- simulations/mixnet/run_mix_node3.sh | 2 +- simulations/mixnet/run_mix_node4.sh | 2 +- tools/confutils/cli_args.nim | 17 ++ vendor/nim-libp2p | 2 +- waku.nimble | 2 +- waku/discovery/waku_kademlia.nim | 280 ++++++++++++++++++ waku/factory/conf_builder/conf_builder.nim | 6 +- .../kademlia_discovery_conf_builder.nim | 40 +++ .../conf_builder/waku_conf_builder.nim | 9 +- waku/factory/node_factory.nim | 42 ++- waku/factory/waku.nim | 3 + waku/factory/waku_conf.nim | 6 + waku/node/peer_manager/waku_peer_store.nim | 5 +- waku/node/waku_node.nim | 14 +- waku/waku_core/peers.nim | 1 + waku/waku_mix/protocol.nim | 151 ++-------- 28 files changed, 543 insertions(+), 175 deletions(-) create mode 100644 
waku/discovery/waku_kademlia.nim create mode 100644 waku/factory/conf_builder/kademlia_discovery_conf_builder.nim diff --git a/apps/chat2mix/chat2mix.nim b/apps/chat2mix/chat2mix.nim index 45fd1fa2d..558454307 100644 --- a/apps/chat2mix/chat2mix.nim +++ b/apps/chat2mix/chat2mix.nim @@ -30,6 +30,7 @@ import protobuf/minprotobuf, # message serialisation/deserialisation from and to protobufs nameresolving/dnsresolver, protocols/mix/curve25519, + protocols/mix/mix_protocol, ] # define DNS resolution import waku/[ @@ -38,6 +39,7 @@ import waku_lightpush/rpc, waku_enr, discovery/waku_dnsdisc, + discovery/waku_kademlia, waku_node, node/waku_metrics, node/peer_manager, @@ -453,14 +455,48 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} = (await node.mountMix(conf.clusterId, mixPrivKey, conf.mixnodes)).isOkOr: error "failed to mount waku mix protocol: ", error = $error quit(QuitFailure) - await node.mountRendezvousClient(conf.clusterId) + + # Setup extended kademlia discovery if bootstrap nodes are provided + if conf.kadBootstrapNodes.len > 0: + var kadBootstrapPeers: seq[(PeerId, seq[MultiAddress])] + for nodeStr in conf.kadBootstrapNodes: + let (peerId, ma) = parseFullAddress(nodeStr).valueOr: + error "Failed to parse kademlia bootstrap node", node = nodeStr, error = error + continue + kadBootstrapPeers.add((peerId, @[ma])) + + if kadBootstrapPeers.len > 0: + node.wakuKademlia = WakuKademlia.new( + node.switch, + ExtendedKademliaDiscoveryParams( + bootstrapNodes: kadBootstrapPeers, + mixPubKey: some(mixPubKey), + advertiseMix: false, + ), + node.peerManager, + getMixNodePoolSize = proc(): int {.gcsafe, raises: [].} = + if node.wakuMix.isNil(): + 0 + else: + node.getMixNodePoolSize(), + isNodeStarted = proc(): bool {.gcsafe, raises: [].} = + node.started, + ).valueOr: + error "failed to setup kademlia discovery", error = error + quit(QuitFailure) + + #await node.mountRendezvousClient(conf.clusterId) await node.start() node.peerManager.start() + if not 
node.wakuKademlia.isNil(): + (await node.wakuKademlia.start(minMixPeers = MinMixNodePoolSize)).isOkOr: + error "failed to start kademlia discovery", error = error + quit(QuitFailure) await node.mountLibp2pPing() - await node.mountPeerExchangeClient() + #await node.mountPeerExchangeClient() let pubsubTopic = conf.getPubsubTopic(node, conf.contentTopic) echo "pubsub topic is: " & pubsubTopic let nick = await readNick(transp) @@ -601,11 +637,6 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} = node, pubsubTopic, conf.contentTopic, servicePeerInfo, false ) echo "waiting for mix nodes to be discovered..." - while true: - if node.getMixNodePoolSize() >= MinMixNodePoolSize: - break - discard await node.fetchPeerExchangePeers() - await sleepAsync(1000) while node.getMixNodePoolSize() < MinMixNodePoolSize: info "waiting for mix nodes to be discovered", diff --git a/apps/chat2mix/config_chat2mix.nim b/apps/chat2mix/config_chat2mix.nim index ddb7136cb..46cd481d7 100644 --- a/apps/chat2mix/config_chat2mix.nim +++ b/apps/chat2mix/config_chat2mix.nim @@ -203,13 +203,13 @@ type fleet* {. desc: "Select the fleet to connect to. This sets the DNS discovery URL to the selected fleet.", - defaultValue: Fleet.test, + defaultValue: Fleet.none, name: "fleet" .}: Fleet contentTopic* {. desc: "Content topic for chat messages.", - defaultValue: "/toy-chat-mix/2/huilong/proto", + defaultValue: "/toy-chat/2/baixa-chiado/proto", name: "content-topic" .}: string @@ -228,7 +228,14 @@ type desc: "WebSocket Secure Support.", defaultValue: false, name: "websocket-secure-support" - .}: bool ## rln-relay configuration + .}: bool + + ## Kademlia Discovery config + kadBootstrapNodes* {. + desc: + "Peer multiaddr for kademlia discovery bootstrap node (must include /p2p/). 
Argument may be repeated.", + name: "kad-bootstrap-node" + .}: seq[string] proc parseCmdArg*(T: type MixNodePubInfo, p: string): T = let elements = p.split(":") diff --git a/nix/default.nix b/nix/default.nix index d532ec5b5..ca91d0e2f 100644 --- a/nix/default.nix +++ b/nix/default.nix @@ -50,8 +50,8 @@ in stdenv.mkDerivation { ]; # Environment variables required for Android builds - ANDROID_SDK_ROOT="${pkgs.androidPkgs.sdk}"; - ANDROID_NDK_HOME="${pkgs.androidPkgs.ndk}"; + ANDROID_SDK_ROOT = "${pkgs.androidPkgs.sdk}"; + ANDROID_NDK_HOME = "${pkgs.androidPkgs.ndk}"; NIMFLAGS = "-d:disableMarchNative -d:git_revision_override=${revision}"; XDG_CACHE_HOME = "/tmp"; @@ -65,6 +65,15 @@ in stdenv.mkDerivation { configurePhase = '' patchShebangs . vendor/nimbus-build-system > /dev/null + + # build_nim.sh guards "rm -rf dist/checksums" with NIX_BUILD_TOP != "/build", + # but on macOS the nix sandbox uses /private/tmp/... so the check fails and + # dist/checksums (provided via preBuild) gets deleted. Fix the check to skip + # the removal whenever NIX_BUILD_TOP is set (i.e. any nix build). 
+ substituteInPlace vendor/nimbus-build-system/scripts/build_nim.sh \ + --replace 'if [[ "''${NIX_BUILD_TOP}" != "/build" ]]; then' \ + 'if [[ -z "''${NIX_BUILD_TOP}" ]]; then' + make nimbus-build-system-paths make nimbus-build-system-nimble-dir ''; diff --git a/simulations/mixnet/config.toml b/simulations/mixnet/config.toml index 3719d8177..5cd1aa936 100644 --- a/simulations/mixnet/config.toml +++ b/simulations/mixnet/config.toml @@ -1,16 +1,17 @@ -log-level = "INFO" +log-level = "TRACE" relay = true mix = true filter = true -store = false +store = true lightpush = true max-connections = 150 -peer-exchange = true +peer-exchange = false metrics-logging = false cluster-id = 2 -discv5-discovery = true +discv5-discovery = false discv5-udp-port = 9000 discv5-enr-auto-update = true +enable-kad-discovery = true rest = true rest-admin = true ports-shift = 1 @@ -19,7 +20,9 @@ shard = [0] agent-string = "nwaku-mix" nodekey = "f98e3fba96c32e8d1967d460f1b79457380e1a895f7971cecc8528abe733781a" mixkey = "a87db88246ec0eedda347b9b643864bee3d6933eb15ba41e6d58cb678d813258" -rendezvous = true +rendezvous = false listen-address = "127.0.0.1" nat = "extip:127.0.0.1" +ext-multiaddr = ["/ip4/127.0.0.1/tcp/60001"] +ext-multiaddr-only = true ip-colocation-limit=0 diff --git a/simulations/mixnet/config1.toml b/simulations/mixnet/config1.toml index e06a527c1..73cccb8c6 100644 --- a/simulations/mixnet/config1.toml +++ b/simulations/mixnet/config1.toml @@ -1,17 +1,18 @@ -log-level = "INFO" +log-level = "TRACE" relay = true mix = true filter = true store = false lightpush = true max-connections = 150 -peer-exchange = true +peer-exchange = false metrics-logging = false cluster-id = 2 -discv5-discovery = true +discv5-discovery = false discv5-udp-port = 9001 discv5-enr-auto-update = true discv5-bootstrap-node = 
["enr:-LG4QBaAbcA921hmu3IrreLqGZ4y3VWCjBCgNN9mpX9vqkkbSrM3HJHZTXnb5iVXgc5pPtDhWLxkB6F3yY25hSwMezkEgmlkgnY0gmlwhH8AAAGKbXVsdGlhZGRyc4oACATAqEQ-BuphgnJzhQACAQAAiXNlY3AyNTZrMaEDpEW1UlUGHRJg6g_zGuCddKWmIUBGZCQX13xGfh9J6KiDdGNwguphg3VkcIIjKYV3YWt1Mg0"] +kad-bootstrap-node = ["/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o"] rest = true rest-admin = true ports-shift = 2 @@ -20,8 +21,10 @@ shard = [0] agent-string = "nwaku-mix" nodekey = "09e9d134331953357bd38bbfce8edb377f4b6308b4f3bfbe85c610497053d684" mixkey = "c86029e02c05a7e25182974b519d0d52fcbafeca6fe191fbb64857fb05be1a53" -rendezvous = true +rendezvous = false listen-address = "127.0.0.1" nat = "extip:127.0.0.1" +ext-multiaddr = ["/ip4/127.0.0.1/tcp/60002"] +ext-multiaddr-only = true ip-colocation-limit=0 #staticnode = ["/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o", "/ip4/127.0.0.1/tcp/60003/p2p/16Uiu2HAmTEDHwAziWUSz6ZE23h5vxG2o4Nn7GazhMor4bVuMXTrA","/ip4/127.0.0.1/tcp/60004/p2p/16Uiu2HAmPwRKZajXtfb1Qsv45VVfRZgK3ENdfmnqzSrVm3BczF6f","/ip4/127.0.0.1/tcp/60005/p2p/16Uiu2HAmRhxmCHBYdXt1RibXrjAUNJbduAhzaTHwFCZT4qWnqZAu"] diff --git a/simulations/mixnet/config2.toml b/simulations/mixnet/config2.toml index 93822603b..c40e41103 100644 --- a/simulations/mixnet/config2.toml +++ b/simulations/mixnet/config2.toml @@ -1,17 +1,18 @@ -log-level = "INFO" +log-level = "TRACE" relay = true mix = true filter = true store = false lightpush = true max-connections = 150 -peer-exchange = true +peer-exchange = false metrics-logging = false cluster-id = 2 -discv5-discovery = true +discv5-discovery = false discv5-udp-port = 9002 discv5-enr-auto-update = true discv5-bootstrap-node = ["enr:-LG4QBaAbcA921hmu3IrreLqGZ4y3VWCjBCgNN9mpX9vqkkbSrM3HJHZTXnb5iVXgc5pPtDhWLxkB6F3yY25hSwMezkEgmlkgnY0gmlwhH8AAAGKbXVsdGlhZGRyc4oACATAqEQ-BuphgnJzhQACAQAAiXNlY3AyNTZrMaEDpEW1UlUGHRJg6g_zGuCddKWmIUBGZCQX13xGfh9J6KiDdGNwguphg3VkcIIjKYV3YWt1Mg0"] +kad-bootstrap-node = 
["/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o"] rest = false rest-admin = false ports-shift = 3 @@ -20,8 +21,10 @@ shard = [0] agent-string = "nwaku-mix" nodekey = "ed54db994682e857d77cd6fb81be697382dc43aa5cd78e16b0ec8098549f860e" mixkey = "b858ac16bbb551c4b2973313b1c8c8f7ea469fca03f1608d200bbf58d388ec7f" -rendezvous = true +rendezvous = false listen-address = "127.0.0.1" nat = "extip:127.0.0.1" +ext-multiaddr = ["/ip4/127.0.0.1/tcp/60003"] +ext-multiaddr-only = true ip-colocation-limit=0 #staticnode = ["/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o", "/ip4/127.0.0.1/tcp/60002/p2p/16Uiu2HAmLtKaFaSWDohToWhWUZFLtqzYZGPFuXwKrojFVF6az5UF","/ip4/127.0.0.1/tcp/60004/p2p/16Uiu2HAmPwRKZajXtfb1Qsv45VVfRZgK3ENdfmnqzSrVm3BczF6f","/ip4/127.0.0.1/tcp/60005/p2p/16Uiu2HAmRhxmCHBYdXt1RibXrjAUNJbduAhzaTHwFCZT4qWnqZAu"] diff --git a/simulations/mixnet/config3.toml b/simulations/mixnet/config3.toml index 6f339dfff..80c19b34b 100644 --- a/simulations/mixnet/config3.toml +++ b/simulations/mixnet/config3.toml @@ -1,17 +1,18 @@ -log-level = "INFO" +log-level = "TRACE" relay = true mix = true filter = true store = false lightpush = true max-connections = 150 -peer-exchange = true +peer-exchange = false metrics-logging = false cluster-id = 2 -discv5-discovery = true +discv5-discovery = false discv5-udp-port = 9003 discv5-enr-auto-update = true discv5-bootstrap-node = ["enr:-LG4QBaAbcA921hmu3IrreLqGZ4y3VWCjBCgNN9mpX9vqkkbSrM3HJHZTXnb5iVXgc5pPtDhWLxkB6F3yY25hSwMezkEgmlkgnY0gmlwhH8AAAGKbXVsdGlhZGRyc4oACATAqEQ-BuphgnJzhQACAQAAiXNlY3AyNTZrMaEDpEW1UlUGHRJg6g_zGuCddKWmIUBGZCQX13xGfh9J6KiDdGNwguphg3VkcIIjKYV3YWt1Mg0"] +kad-bootstrap-node = ["/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o"] rest = false rest-admin = false ports-shift = 4 @@ -20,8 +21,10 @@ shard = [0] agent-string = "nwaku-mix" nodekey = "42f96f29f2d6670938b0864aced65a332dcf5774103b4c44ec4d0ea4ef3c47d6" mixkey = 
"d8bd379bb394b0f22dd236d63af9f1a9bc45266beffc3fbbe19e8b6575f2535b" -rendezvous = true +rendezvous = false listen-address = "127.0.0.1" nat = "extip:127.0.0.1" +ext-multiaddr = ["/ip4/127.0.0.1/tcp/60004"] +ext-multiaddr-only = true ip-colocation-limit=0 #staticnode = ["/ip4/127.0.0.1/tcp/60002/p2p/16Uiu2HAmLtKaFaSWDohToWhWUZFLtqzYZGPFuXwKrojFVF6az5UF", "/ip4/127.0.0.1/tcp/60003/p2p/16Uiu2HAmTEDHwAziWUSz6ZE23h5vxG2o4Nn7GazhMor4bVuMXTrA","/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o","/ip4/127.0.0.1/tcp/60005/p2p/16Uiu2HAmRhxmCHBYdXt1RibXrjAUNJbduAhzaTHwFCZT4qWnqZAu"] diff --git a/simulations/mixnet/config4.toml b/simulations/mixnet/config4.toml index 23115ac03..ed5b2dad0 100644 --- a/simulations/mixnet/config4.toml +++ b/simulations/mixnet/config4.toml @@ -1,17 +1,18 @@ -log-level = "INFO" +log-level = "TRACE" relay = true mix = true filter = true store = false lightpush = true max-connections = 150 -peer-exchange = true +peer-exchange = false metrics-logging = false cluster-id = 2 -discv5-discovery = true +discv5-discovery = false discv5-udp-port = 9004 discv5-enr-auto-update = true discv5-bootstrap-node = ["enr:-LG4QBaAbcA921hmu3IrreLqGZ4y3VWCjBCgNN9mpX9vqkkbSrM3HJHZTXnb5iVXgc5pPtDhWLxkB6F3yY25hSwMezkEgmlkgnY0gmlwhH8AAAGKbXVsdGlhZGRyc4oACATAqEQ-BuphgnJzhQACAQAAiXNlY3AyNTZrMaEDpEW1UlUGHRJg6g_zGuCddKWmIUBGZCQX13xGfh9J6KiDdGNwguphg3VkcIIjKYV3YWt1Mg0"] +kad-bootstrap-node = ["/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o"] rest = false rest-admin = false ports-shift = 5 @@ -20,8 +21,10 @@ shard = [0] agent-string = "nwaku-mix" nodekey = "3ce887b3c34b7a92dd2868af33941ed1dbec4893b054572cd5078da09dd923d4" mixkey = "780fff09e51e98df574e266bf3266ec6a3a1ddfcf7da826a349a29c137009d49" -rendezvous = true +rendezvous = false listen-address = "127.0.0.1" nat = "extip:127.0.0.1" +ext-multiaddr = ["/ip4/127.0.0.1/tcp/60005"] +ext-multiaddr-only = true ip-colocation-limit=0 #staticnode = 
["/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o", "/ip4/127.0.0.1/tcp/60003/p2p/16Uiu2HAmTEDHwAziWUSz6ZE23h5vxG2o4Nn7GazhMor4bVuMXTrA","/ip4/127.0.0.1/tcp/60004/p2p/16Uiu2HAmPwRKZajXtfb1Qsv45VVfRZgK3ENdfmnqzSrVm3BczF6f","/ip4/127.0.0.1/tcp/60002/p2p/16Uiu2HAmLtKaFaSWDohToWhWUZFLtqzYZGPFuXwKrojFVF6az5UF"] diff --git a/simulations/mixnet/run_chat_mix.sh b/simulations/mixnet/run_chat_mix.sh index 3dd6f5932..f711c055e 100755 --- a/simulations/mixnet/run_chat_mix.sh +++ b/simulations/mixnet/run_chat_mix.sh @@ -1,2 +1,2 @@ -../../build/chat2mix --cluster-id=2 --num-shards-in-network=1 --shard=0 --servicenode="/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o" --log-level=TRACE +../../build/chat2mix --cluster-id=2 --num-shards-in-network=1 --shard=0 --servicenode="/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o" --log-level=TRACE --kad-bootstrap-node="/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o" #--mixnode="/ip4/127.0.0.1/tcp/60002/p2p/16Uiu2HAmLtKaFaSWDohToWhWUZFLtqzYZGPFuXwKrojFVF6az5UF:9231e86da6432502900a84f867004ce78632ab52cd8e30b1ec322cd795710c2a" --mixnode="/ip4/127.0.0.1/tcp/60003/p2p/16Uiu2HAmTEDHwAziWUSz6ZE23h5vxG2o4Nn7GazhMor4bVuMXTrA:275cd6889e1f29ca48e5b9edb800d1a94f49f13d393a0ecf1a07af753506de6c" --mixnode="/ip4/127.0.0.1/tcp/60004/p2p/16Uiu2HAmPwRKZajXtfb1Qsv45VVfRZgK3ENdfmnqzSrVm3BczF6f:e0ed594a8d506681be075e8e23723478388fb182477f7a469309a25e7076fc18" --mixnode="/ip4/127.0.0.1/tcp/60005/p2p/16Uiu2HAmRhxmCHBYdXt1RibXrjAUNJbduAhzaTHwFCZT4qWnqZAu:8fd7a1a7c19b403d231452a9b1ea40eb1cc76f455d918ef8980e7685f9eeeb1f" diff --git a/simulations/mixnet/run_mix_node.sh b/simulations/mixnet/run_mix_node.sh index 1d005796e..2b293540c 100755 --- a/simulations/mixnet/run_mix_node.sh +++ b/simulations/mixnet/run_mix_node.sh @@ -1 +1 @@ -../../build/wakunode2 --config-file="config.toml" +../../build/wakunode2 
--config-file="config.toml" 2>&1 | tee mix_node.log diff --git a/simulations/mixnet/run_mix_node1.sh b/simulations/mixnet/run_mix_node1.sh index 024eb3f99..617312122 100755 --- a/simulations/mixnet/run_mix_node1.sh +++ b/simulations/mixnet/run_mix_node1.sh @@ -1 +1 @@ -../../build/wakunode2 --config-file="config1.toml" +../../build/wakunode2 --config-file="config1.toml" 2>&1 | tee mix_node1.log diff --git a/simulations/mixnet/run_mix_node2.sh b/simulations/mixnet/run_mix_node2.sh index e55a9bac8..5fc2ef498 100755 --- a/simulations/mixnet/run_mix_node2.sh +++ b/simulations/mixnet/run_mix_node2.sh @@ -1 +1 @@ -../../build/wakunode2 --config-file="config2.toml" +../../build/wakunode2 --config-file="config2.toml" 2>&1 | tee mix_node2.log diff --git a/simulations/mixnet/run_mix_node3.sh b/simulations/mixnet/run_mix_node3.sh index dca8119a3..d77d04c02 100755 --- a/simulations/mixnet/run_mix_node3.sh +++ b/simulations/mixnet/run_mix_node3.sh @@ -1 +1 @@ -../../build/wakunode2 --config-file="config3.toml" +../../build/wakunode2 --config-file="config3.toml" 2>&1 | tee mix_node3.log diff --git a/simulations/mixnet/run_mix_node4.sh b/simulations/mixnet/run_mix_node4.sh index 9cf25158b..3a2b0299d 100755 --- a/simulations/mixnet/run_mix_node4.sh +++ b/simulations/mixnet/run_mix_node4.sh @@ -1 +1 @@ -../../build/wakunode2 --config-file="config4.toml" +../../build/wakunode2 --config-file="config4.toml" 2>&1 | tee mix_node4.log diff --git a/tools/confutils/cli_args.nim b/tools/confutils/cli_args.nim index 6811e335f..5e4adacb2 100644 --- a/tools/confutils/cli_args.nim +++ b/tools/confutils/cli_args.nim @@ -621,6 +621,20 @@ with the drawback of consuming some more bandwidth.""", name: "mixnode" .}: seq[MixNodePubInfo] + # Kademlia Discovery config + enableKadDiscovery* {. + desc: + "Enable extended kademlia discovery. Can be enabled without bootstrap nodes for the first node in the network.", + defaultValue: false, + name: "enable-kad-discovery" + .}: bool + + kadBootstrapNodes* {. 
+ desc: + "Peer multiaddr for kademlia discovery bootstrap node (must include /p2p/). Argument may be repeated.", + name: "kad-bootstrap-node" + .}: seq[string] + ## websocket config websocketSupport* {. desc: "Enable websocket: true|false", @@ -1057,4 +1071,7 @@ proc toWakuConf*(n: WakuNodeConf): ConfResult[WakuConf] = b.rateLimitConf.withRateLimits(n.rateLimits) + b.kademliaDiscoveryConf.withEnabled(n.enableKadDiscovery) + b.kademliaDiscoveryConf.withBootstrapNodes(n.kadBootstrapNodes) + return b.build() diff --git a/vendor/nim-libp2p b/vendor/nim-libp2p index ca48c3718..ff8d51857 160000 --- a/vendor/nim-libp2p +++ b/vendor/nim-libp2p @@ -1 +1 @@ -Subproject commit ca48c3718246bb411ff0e354a70cb82d9a28de0d +Subproject commit ff8d51857b4b79a68468e7bcc27b2026cca02996 diff --git a/waku.nimble b/waku.nimble index e4c436e8d..d879bc0e1 100644 --- a/waku.nimble +++ b/waku.nimble @@ -24,7 +24,7 @@ requires "nim >= 2.2.4", "stew", "stint", "metrics", - "libp2p >= 1.14.3", + "libp2p >= 1.15.0", "web3", "presto", "regex", diff --git a/waku/discovery/waku_kademlia.nim b/waku/discovery/waku_kademlia.nim new file mode 100644 index 000000000..94b63a321 --- /dev/null +++ b/waku/discovery/waku_kademlia.nim @@ -0,0 +1,280 @@ +{.push raises: [].} + +import std/[options, sequtils] +import + chronos, + chronicles, + results, + stew/byteutils, + libp2p/[peerid, multiaddress, switch], + libp2p/extended_peer_record, + libp2p/crypto/curve25519, + libp2p/protocols/[kademlia, kad_disco], + libp2p/protocols/kademlia_discovery/types as kad_types, + libp2p/protocols/mix/mix_protocol + +import waku/waku_core, waku/node/peer_manager + +logScope: + topics = "waku extended kademlia discovery" + +const + DefaultExtendedKademliaDiscoveryInterval* = chronos.seconds(5) + ExtendedKademliaDiscoveryStartupDelay* = chronos.seconds(5) + +type + MixNodePoolSizeProvider* = proc(): int {.gcsafe, raises: [].} + NodeStartedProvider* = proc(): bool {.gcsafe, raises: [].} + + ExtendedKademliaDiscoveryParams* = 
object + bootstrapNodes*: seq[(PeerId, seq[MultiAddress])] + mixPubKey*: Option[Curve25519Key] + advertiseMix*: bool = false + + WakuKademlia* = ref object + protocol*: KademliaDiscovery + peerManager: PeerManager + discoveryLoop: Future[void] + running*: bool + getMixNodePoolSize: MixNodePoolSizeProvider + isNodeStarted: NodeStartedProvider + +proc new*( + T: type WakuKademlia, + switch: Switch, + params: ExtendedKademliaDiscoveryParams, + peerManager: PeerManager, + getMixNodePoolSize: MixNodePoolSizeProvider = nil, + isNodeStarted: NodeStartedProvider = nil, +): Result[T, string] = + if params.bootstrapNodes.len == 0: + info "creating kademlia discovery as seed node (no bootstrap nodes)" + + let kademlia = KademliaDiscovery.new( + switch, + bootstrapNodes = params.bootstrapNodes, + config = KadDHTConfig.new( + validator = kad_types.ExtEntryValidator(), selector = kad_types.ExtEntrySelector() + ), + codec = ExtendedKademliaDiscoveryCodec, + ) + + try: + switch.mount(kademlia) + except CatchableError: + return err("failed to mount kademlia discovery: " & getCurrentExceptionMsg()) + + # Register services BEFORE starting kademlia so they are included in the + # initial self-signed peer record published to the DHT + if params.advertiseMix: + if params.mixPubKey.isSome(): + let alreadyAdvertising = kademlia.startAdvertising( + ServiceInfo(id: MixProtocolID, data: @(params.mixPubKey.get())) + ) + if alreadyAdvertising: + warn "mix service was already being advertised" + debug "extended kademlia advertising mix service", + keyHex = byteutils.toHex(params.mixPubKey.get()), + bootstrapNodes = params.bootstrapNodes.len + else: + warn "mix advertising enabled but no key provided" + + info "kademlia discovery created", + bootstrapNodes = params.bootstrapNodes.len, advertiseMix = params.advertiseMix + + return ok( + WakuKademlia( + protocol: kademlia, + peerManager: peerManager, + running: false, + getMixNodePoolSize: getMixNodePoolSize, + isNodeStarted: isNodeStarted, + ) + 
) + +proc extractMixPubKey(service: ServiceInfo): Option[Curve25519Key] = + if service.id != MixProtocolID: + trace "service is not mix protocol", + serviceId = service.id, mixProtocolId = MixProtocolID + return none(Curve25519Key) + + if service.data.len != Curve25519KeySize: + warn "invalid mix pub key length from kademlia record", + expected = Curve25519KeySize, + actual = service.data.len, + dataHex = byteutils.toHex(service.data) + return none(Curve25519Key) + + debug "found mix protocol service", + dataLen = service.data.len, expectedLen = Curve25519KeySize + + let key = intoCurve25519Key(service.data) + debug "successfully extracted mix pub key", keyHex = byteutils.toHex(key) + return some(key) + +proc remotePeerInfoFrom(record: ExtendedPeerRecord): Option[RemotePeerInfo] = + debug "processing kademlia record", + peerId = record.peerId, + numAddresses = record.addresses.len, + numServices = record.services.len, + serviceIds = record.services.mapIt(it.id) + + if record.addresses.len == 0: + trace "kademlia record missing addresses", peerId = record.peerId + return none(RemotePeerInfo) + + let addrs = record.addresses.mapIt(it.address) + if addrs.len == 0: + trace "kademlia record produced no dialable addresses", peerId = record.peerId + return none(RemotePeerInfo) + + let protocols = record.services.mapIt(it.id) + + var mixPubKey = none(Curve25519Key) + for service in record.services: + debug "checking service", + peerId = record.peerId, serviceId = service.id, dataLen = service.data.len + mixPubKey = extractMixPubKey(service) + if mixPubKey.isSome(): + debug "extracted mix public key from service", peerId = record.peerId + break + + if record.services.len > 0 and mixPubKey.isNone(): + debug "record has services but no valid mix key", + peerId = record.peerId, services = record.services.mapIt(it.id) + return none(RemotePeerInfo) + return some( + RemotePeerInfo.init( + record.peerId, + addrs = addrs, + protocols = protocols, + origin = PeerOrigin.Kademlia, + 
mixPubKey = mixPubKey, + ) + ) + +proc lookupMixPeers*( + wk: WakuKademlia +): Future[Result[int, string]] {.async: (raises: []).} = + ## Lookup mix peers via kademlia and add them to the peer store. + ## Returns the number of mix peers found and added. + if wk.protocol.isNil(): + return err("cannot lookup mix peers: kademlia not mounted") + + let mixService = ServiceInfo(id: MixProtocolID, data: @[]) + var records: seq[ExtendedPeerRecord] + try: + records = await wk.protocol.lookup(mixService) + except CatchableError: + return err("mix peer lookup failed: " & getCurrentExceptionMsg()) + + debug "mix peer lookup returned records", numRecords = records.len + + var added = 0 + for record in records: + let peerOpt = remotePeerInfoFrom(record) + if peerOpt.isNone(): + continue + + let peerInfo = peerOpt.get() + if peerInfo.mixPubKey.isNone(): + continue + + wk.peerManager.addPeer(peerInfo, PeerOrigin.Kademlia) + info "mix peer added via kademlia lookup", + peerId = $peerInfo.peerId, mixPubKey = byteutils.toHex(peerInfo.mixPubKey.get()) + added.inc() + + info "mix peer lookup complete", found = added + return ok(added) + +proc runDiscoveryLoop( + wk: WakuKademlia, interval: Duration, minMixPeers: int +) {.async: (raises: []).} = + info "extended kademlia discovery loop started", interval = interval + + try: + while true: + # Wait for node to be started + if not wk.isNodeStarted.isNil() and not wk.isNodeStarted(): + await sleepAsync(ExtendedKademliaDiscoveryStartupDelay) + continue + + var records: seq[ExtendedPeerRecord] + try: + records = await wk.protocol.randomRecords() + except CatchableError as e: + warn "extended kademlia discovery failed", error = e.msg + await sleepAsync(interval) + continue + + debug "received random records from kademlia", numRecords = records.len + + var added = 0 + for record in records: + let peerOpt = remotePeerInfoFrom(record) + if peerOpt.isNone(): + continue + + let peerInfo = peerOpt.get() + wk.peerManager.addPeer(peerInfo, 
PeerOrigin.Kademlia) + debug "peer added via extended kademlia discovery", + peerId = $peerInfo.peerId, + addresses = peerInfo.addrs.mapIt($it), + protocols = peerInfo.protocols, + hasMixPubKey = peerInfo.mixPubKey.isSome() + added.inc() + + if added > 0: + info "added peers from extended kademlia discovery", count = added + + # Targeted mix peer lookup when pool is low + if minMixPeers > 0 and not wk.getMixNodePoolSize.isNil() and + wk.getMixNodePoolSize() < minMixPeers: + debug "mix node pool below threshold, performing targeted lookup", + currentPoolSize = wk.getMixNodePoolSize(), threshold = minMixPeers + let found = (await wk.lookupMixPeers()).valueOr: + warn "targeted mix peer lookup failed", error = error + 0 + if found > 0: + info "found mix peers via targeted kademlia lookup", count = found + + await sleepAsync(interval) + except CancelledError as e: + debug "extended kademlia discovery loop cancelled", error = e.msg + except CatchableError as e: + error "extended kademlia discovery loop failed", error = e.msg + +proc start*( + wk: WakuKademlia, + interval: Duration = DefaultExtendedKademliaDiscoveryInterval, + minMixPeers: int = 0, +): Future[Result[void, string]] {.async: (raises: []).} = + if wk.running: + return err("already running") + + try: + await wk.protocol.start() + except CatchableError as e: + return err("failed to start kademlia discovery: " & e.msg) + + wk.discoveryLoop = wk.runDiscoveryLoop(interval, minMixPeers) + wk.running = true + + info "kademlia discovery started" + return ok() + +proc stop*(wk: WakuKademlia) {.async: (raises: []).} = + if not wk.running: + return + + info "Stopping kademlia discovery" + + wk.running = false + + if not wk.discoveryLoop.isNil(): + await wk.discoveryLoop.cancelAndWait() + wk.discoveryLoop = nil + + if not wk.protocol.isNil(): + await wk.protocol.stop() + info "Successfully stopped kademlia discovery" diff --git a/waku/factory/conf_builder/conf_builder.nim b/waku/factory/conf_builder/conf_builder.nim index
37cea76fe..b8d0316c3 100644 --- a/waku/factory/conf_builder/conf_builder.nim +++ b/waku/factory/conf_builder/conf_builder.nim @@ -10,10 +10,12 @@ import ./metrics_server_conf_builder, ./rate_limit_conf_builder, ./rln_relay_conf_builder, - ./mix_conf_builder + ./mix_conf_builder, + ./kademlia_discovery_conf_builder export waku_conf_builder, filter_service_conf_builder, store_sync_conf_builder, store_service_conf_builder, rest_server_conf_builder, dns_discovery_conf_builder, discv5_conf_builder, web_socket_conf_builder, metrics_server_conf_builder, - rate_limit_conf_builder, rln_relay_conf_builder, mix_conf_builder + rate_limit_conf_builder, rln_relay_conf_builder, mix_conf_builder, + kademlia_discovery_conf_builder diff --git a/waku/factory/conf_builder/kademlia_discovery_conf_builder.nim b/waku/factory/conf_builder/kademlia_discovery_conf_builder.nim new file mode 100644 index 000000000..916d71be1 --- /dev/null +++ b/waku/factory/conf_builder/kademlia_discovery_conf_builder.nim @@ -0,0 +1,40 @@ +import chronicles, std/options, results +import libp2p/[peerid, multiaddress, peerinfo] +import waku/factory/waku_conf + +logScope: + topics = "waku conf builder kademlia discovery" + +####################################### +## Kademlia Discovery Config Builder ## +####################################### +type KademliaDiscoveryConfBuilder* = object + enabled*: bool + bootstrapNodes*: seq[string] + +proc init*(T: type KademliaDiscoveryConfBuilder): KademliaDiscoveryConfBuilder = + KademliaDiscoveryConfBuilder() + +proc withEnabled*(b: var KademliaDiscoveryConfBuilder, enabled: bool) = + b.enabled = enabled + +proc withBootstrapNodes*( + b: var KademliaDiscoveryConfBuilder, bootstrapNodes: seq[string] +) = + b.bootstrapNodes = bootstrapNodes + +proc build*( + b: KademliaDiscoveryConfBuilder +): Result[Option[KademliaDiscoveryConf], string] = + # Kademlia is enabled if explicitly enabled OR if bootstrap nodes are provided + let enabled = b.enabled or b.bootstrapNodes.len > 0 
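The `build()` comment above encodes the enable policy for Kademlia discovery: it turns on either via the explicit flag or implicitly when bootstrap nodes are supplied. A minimal Python sketch of that policy (illustrative only, not the nwaku API):

```python
# Sketch of the enable policy in KademliaDiscoveryConfBuilder.build():
# discovery is active when it is explicitly enabled OR when at least one
# bootstrap node is configured (an implicit opt-in).
def kademlia_enabled(explicit, bootstrap_nodes):
    return explicit or len(bootstrap_nodes) > 0

# Supplying only bootstrap nodes still activates discovery:
assert kademlia_enabled(False, ["/ip4/1.2.3.4/tcp/60000/p2p/16Uiu2HAm..."])
# Explicitly enabled with no bootstrap nodes: runs as a seed node.
assert kademlia_enabled(True, [])
# Neither flag nor bootstrap nodes: discovery stays off.
assert not kademlia_enabled(False, [])
```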
+ if not enabled: + return ok(none(KademliaDiscoveryConf)) + + var parsedNodes: seq[(PeerId, seq[MultiAddress])] + for nodeStr in b.bootstrapNodes: + let (peerId, ma) = parseFullAddress(nodeStr).valueOr: + return err("Failed to parse kademlia bootstrap node: " & error) + parsedNodes.add((peerId, @[ma])) + + return ok(some(KademliaDiscoveryConf(bootstrapNodes: parsedNodes))) diff --git a/waku/factory/conf_builder/waku_conf_builder.nim b/waku/factory/conf_builder/waku_conf_builder.nim index b952e711e..e51f02dbd 100644 --- a/waku/factory/conf_builder/waku_conf_builder.nim +++ b/waku/factory/conf_builder/waku_conf_builder.nim @@ -25,7 +25,8 @@ import ./metrics_server_conf_builder, ./rate_limit_conf_builder, ./rln_relay_conf_builder, - ./mix_conf_builder + ./mix_conf_builder, + ./kademlia_discovery_conf_builder logScope: topics = "waku conf builder" @@ -80,6 +81,7 @@ type WakuConfBuilder* = object mixConf*: MixConfBuilder webSocketConf*: WebSocketConfBuilder rateLimitConf*: RateLimitConfBuilder + kademliaDiscoveryConf*: KademliaDiscoveryConfBuilder # End conf builders relay: Option[bool] lightPush: Option[bool] @@ -140,6 +142,7 @@ proc init*(T: type WakuConfBuilder): WakuConfBuilder = storeServiceConf: StoreServiceConfBuilder.init(), webSocketConf: WebSocketConfBuilder.init(), rateLimitConf: RateLimitConfBuilder.init(), + kademliaDiscoveryConf: KademliaDiscoveryConfBuilder.init(), ) proc withNetworkConf*(b: var WakuConfBuilder, networkConf: NetworkConf) = @@ -506,6 +509,9 @@ proc build*( let rateLimit = builder.rateLimitConf.build().valueOr: return err("Rate limits Conf building failed: " & $error) + let kademliaDiscoveryConf = builder.kademliaDiscoveryConf.build().valueOr: + return err("Kademlia Discovery Conf building failed: " & $error) + # End - Build sub-configs let logLevel = @@ -628,6 +634,7 @@ proc build*( restServerConf: restServerConf, dnsDiscoveryConf: dnsDiscoveryConf, mixConf: mixConf, + kademliaDiscoveryConf: kademliaDiscoveryConf, # end confs nodeKey: 
nodeKey, clusterId: clusterId, diff --git a/waku/factory/node_factory.nim b/waku/factory/node_factory.nim index dc383e89d..2f82440f6 100644 --- a/waku/factory/node_factory.nim +++ b/waku/factory/node_factory.nim @@ -6,7 +6,8 @@ import libp2p/protocols/pubsub/gossipsub, libp2p/protocols/connectivity/relay/relay, libp2p/nameresolving/dnsresolver, - libp2p/crypto/crypto + libp2p/crypto/crypto, + libp2p/crypto/curve25519 import ./internal_config, @@ -32,6 +33,7 @@ import ../waku_store_legacy/common as legacy_common, ../waku_filter_v2, ../waku_peer_exchange, + ../discovery/waku_kademlia, ../node/peer_manager, ../node/peer_manager/peer_store/waku_peer_storage, ../node/peer_manager/peer_store/migrations as peer_store_sqlite_migrations, @@ -165,13 +167,36 @@ proc setupProtocols( #mount mix if conf.mixConf.isSome(): - ( - await node.mountMix( - conf.clusterId, conf.mixConf.get().mixKey, conf.mixConf.get().mixnodes - ) - ).isOkOr: + let mixConf = conf.mixConf.get() + (await node.mountMix(conf.clusterId, mixConf.mixKey, mixConf.mixnodes)).isOkOr: return err("failed to mount waku mix protocol: " & $error) + # Setup extended kademlia discovery + if conf.kademliaDiscoveryConf.isSome(): + let mixPubKey = + if conf.mixConf.isSome(): + some(conf.mixConf.get().mixPubKey) + else: + none(Curve25519Key) + + node.wakuKademlia = WakuKademlia.new( + node.switch, + ExtendedKademliaDiscoveryParams( + bootstrapNodes: conf.kademliaDiscoveryConf.get().bootstrapNodes, + mixPubKey: mixPubKey, + advertiseMix: conf.mixConf.isSome(), + ), + node.peerManager, + getMixNodePoolSize = proc(): int {.gcsafe, raises: [].} = + if node.wakuMix.isNil(): + 0 + else: + node.getMixNodePoolSize(), + isNodeStarted = proc(): bool {.gcsafe, raises: [].} = + node.started, + ).valueOr: + return err("failed to setup kademlia discovery: " & error) + if conf.storeServiceConf.isSome(): let storeServiceConf = conf.storeServiceConf.get() if storeServiceConf.supportV2: @@ -477,6 +502,11 @@ proc startNode*( if conf.relay: 
node.peerManager.start() + if not node.wakuKademlia.isNil(): + let minMixPeers = if conf.mixConf.isSome(): 4 else: 0 + (await node.wakuKademlia.start(minMixPeers = minMixPeers)).isOkOr: + return err("failed to start kademlia discovery: " & error) + return ok() proc setupNode*( diff --git a/waku/factory/waku.nim b/waku/factory/waku.nim index dd253129c..9803a53a9 100644 --- a/waku/factory/waku.nim +++ b/waku/factory/waku.nim @@ -203,6 +203,9 @@ proc new*( else: nil + # Set the extMultiAddrsOnly flag so the node knows not to replace explicit addresses + node.extMultiAddrsOnly = wakuConf.endpointConf.extMultiAddrsOnly + node.setupAppCallbacks(wakuConf, appCallbacks, healthMonitor).isOkOr: error "Failed setting up app callbacks", error = error return err("Failed setting up app callbacks: " & $error) diff --git a/waku/factory/waku_conf.nim b/waku/factory/waku_conf.nim index 899008221..01574d067 100644 --- a/waku/factory/waku_conf.nim +++ b/waku/factory/waku_conf.nim @@ -4,6 +4,7 @@ import libp2p/crypto/crypto, libp2p/multiaddress, libp2p/crypto/curve25519, + libp2p/peerid, secp256k1, results @@ -51,6 +52,10 @@ type MixConf* = ref object mixPubKey*: Curve25519Key mixnodes*: seq[MixNodePubInfo] +type KademliaDiscoveryConf* = object + bootstrapNodes*: seq[(PeerId, seq[MultiAddress])] + ## Bootstrap nodes for extended kademlia discovery. 
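`KademliaDiscoveryConf.bootstrapNodes` above stores `(PeerId, seq[MultiAddress])` pairs that the conf builder produces with `parseFullAddress`. The Python sketch below mimics only the split of a full multiaddress string into that pair shape; the toy parser is an assumption for illustration (the real nwaku routine also validates the peer ID and transport components):

```python
# Toy stand-in for parseFullAddress: split "<addr>/p2p/<peer-id>" into the
# (peer_id, [addr]) pair shape stored in KademliaDiscoveryConf.bootstrapNodes.
def parse_full_address(node_str):
    marker = "/p2p/"
    idx = node_str.rfind(marker)
    peer_id = node_str[idx + len(marker):] if idx != -1 else ""
    if idx == -1 or not peer_id:
        # Mirrors the builder's error path: a node string without a peer ID
        # component cannot be used as a bootstrap entry.
        raise ValueError("failed to parse kademlia bootstrap node: " + node_str)
    return peer_id, [node_str[:idx]]

peer_id, addrs = parse_full_address("/ip4/127.0.0.1/tcp/60000/p2p/16Uiu2HAmPeerIdExample")
assert peer_id == "16Uiu2HAmPeerIdExample"
assert addrs == ["/ip4/127.0.0.1/tcp/60000"]
```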
+ type StoreServiceConf* {.requiresInit.} = object dbMigration*: bool dbURl*: string @@ -109,6 +114,7 @@ type WakuConf* {.requiresInit.} = ref object metricsServerConf*: Option[MetricsServerConf] webSocketConf*: Option[WebSocketConf] mixConf*: Option[MixConf] + kademliaDiscoveryConf*: Option[KademliaDiscoveryConf] portsShift*: uint16 dnsAddrsNameServers*: seq[IpAddress] diff --git a/waku/node/peer_manager/waku_peer_store.nim b/waku/node/peer_manager/waku_peer_store.nim index a03b5ae2e..93ac9ad2e 100644 --- a/waku/node/peer_manager/waku_peer_store.nim +++ b/waku/node/peer_manager/waku_peer_store.nim @@ -43,9 +43,6 @@ type # Keeps track of peer shards ShardBook* = ref object of PeerBook[seq[uint16]] - # Keeps track of Mix protocol public keys of peers - MixPubKeyBook* = ref object of PeerBook[Curve25519Key] - proc getPeer*(peerStore: PeerStore, peerId: PeerId): RemotePeerInfo = let addresses = if peerStore[LastSeenBook][peerId].isSome(): @@ -85,7 +82,7 @@ proc delete*(peerStore: PeerStore, peerId: PeerId) = proc peers*(peerStore: PeerStore): seq[RemotePeerInfo] = let allKeys = concat( - toSeq(peerStore[LastSeenBook].book.keys()), + toSeq(peerStore[LastSeenOutboundBook].book.keys()), toSeq(peerStore[AddressBook].book.keys()), toSeq(peerStore[ProtoBook].book.keys()), toSeq(peerStore[KeyBook].book.keys()), diff --git a/waku/node/waku_node.nim b/waku/node/waku_node.nim index 53ce0349a..254387c32 100644 --- a/waku/node/waku_node.nim +++ b/waku/node/waku_node.nim @@ -66,6 +66,7 @@ import events/health_events, events/peer_events, ], + waku/discovery/waku_kademlia, ./net_config, ./peer_manager, ./health_monitor/health_status, @@ -141,6 +142,7 @@ type wakuRendezvous*: WakuRendezVous wakuRendezvousClient*: rendezvous_client.WakuRendezVousClient announcedAddresses*: seq[MultiAddress] + extMultiAddrsOnly*: bool # When true, skip automatic IP address replacement started*: bool # Indicates that node has started listening topicSubscriptionQueue*: AsyncEventQueue[SubscriptionEvent] 
rateLimitSettings*: ProtocolRateLimitSettings @@ -149,6 +151,8 @@ type edgeHealthEvent*: AsyncEvent edgeHealthLoop: Future[void] peerEventListener*: EventWakuPeerListener + kademliaDiscoveryLoop*: Future[void] + wakuKademlia*: WakuKademlia proc deduceRelayShard( node: WakuNode, @@ -303,7 +307,7 @@ proc mountAutoSharding*( return ok() proc getMixNodePoolSize*(node: WakuNode): int = - return node.wakuMix.getNodePoolSize() + return node.wakuMix.poolSize() proc mountMix*( node: WakuNode, @@ -455,6 +459,11 @@ proc isBindIpWithZeroPort(inputMultiAdd: MultiAddress): bool = return false proc updateAnnouncedAddrWithPrimaryIpAddr*(node: WakuNode): Result[void, string] = + # Skip automatic IP replacement if extMultiAddrsOnly is set + # This respects the user's explicitly configured announced addresses + if node.extMultiAddrsOnly: + return ok() + let peerInfo = node.switch.peerInfo var announcedStr = "" var listenStr = "" @@ -705,6 +714,9 @@ proc stop*(node: WakuNode) {.async.} = not node.wakuPeerExchangeClient.pxLoopHandle.isNil(): await node.wakuPeerExchangeClient.pxLoopHandle.cancelAndWait() + if not node.wakuKademlia.isNil(): + await node.wakuKademlia.stop() + if not node.wakuRendezvous.isNil(): await node.wakuRendezvous.stopWait() diff --git a/waku/waku_core/peers.nim b/waku/waku_core/peers.nim index 48c994403..51a8e1157 100644 --- a/waku/waku_core/peers.nim +++ b/waku/waku_core/peers.nim @@ -38,6 +38,7 @@ type Static PeerExchange Dns + Kademlia PeerDirection* = enum UnknownDirection diff --git a/waku/waku_mix/protocol.nim b/waku/waku_mix/protocol.nim index 366d5da91..2c972bef6 100644 --- a/waku/waku_mix/protocol.nim +++ b/waku/waku_mix/protocol.nim @@ -1,22 +1,23 @@ {.push raises: [].} -import chronicles, std/[options, tables, sequtils], chronos, results, metrics, strutils +import chronicles, std/options, chronos, results, metrics import libp2p/crypto/curve25519, + libp2p/crypto/crypto, libp2p/protocols/mix, libp2p/protocols/mix/mix_node, 
libp2p/protocols/mix/mix_protocol, libp2p/protocols/mix/mix_metrics, - libp2p/[multiaddress, multicodec, peerid], + libp2p/protocols/mix/delay_strategy, + libp2p/[multiaddress, peerid], eth/common/keys import - ../node/peer_manager, - ../waku_core, - ../waku_enr, - ../node/peer_manager/waku_peer_store, - ../common/nimchronos + waku/node/peer_manager, + waku/waku_core, + waku/waku_enr, + waku/node/peer_manager/waku_peer_store logScope: topics = "waku mix" @@ -27,7 +28,6 @@ type WakuMix* = ref object of MixProtocol peerManager*: PeerManager clusterId: uint16 - nodePoolLoopHandle: Future[void] pubKey*: Curve25519Key WakuMixResult*[T] = Result[T, string] @@ -36,106 +36,10 @@ type multiAddr*: string pubKey*: Curve25519Key -proc filterMixNodes(cluster: Option[uint16], peer: RemotePeerInfo): bool = - # Note that origin based(discv5) filtering is not done intentionally - # so that more mix nodes can be discovered. - if peer.mixPubKey.isNone(): - trace "remote peer has no mix Pub Key", peer = $peer - return false - - if cluster.isSome() and peer.enr.isSome() and - peer.enr.get().isClusterMismatched(cluster.get()): - trace "peer has mismatching cluster", peer = $peer - return false - - return true - -proc appendPeerIdToMultiaddr*(multiaddr: MultiAddress, peerId: PeerId): MultiAddress = - if multiaddr.contains(multiCodec("p2p")).get(): - return multiaddr - - var maddrStr = multiaddr.toString().valueOr: - error "Failed to convert multiaddress to string.", err = error - return multiaddr - maddrStr.add("/p2p/" & $peerId) - var cleanAddr = MultiAddress.init(maddrStr).valueOr: - error "Failed to convert string to multiaddress.", err = error - return multiaddr - return cleanAddr - -func getIPv4Multiaddr*(maddrs: seq[MultiAddress]): Option[MultiAddress] = - for multiaddr in maddrs: - trace "checking multiaddr", addr = $multiaddr - if multiaddr.contains(multiCodec("ip4")).get(): - trace "found ipv4 multiaddr", addr = $multiaddr - return some(multiaddr) - trace "no ipv4 multiaddr 
found" - return none(MultiAddress) - -proc populateMixNodePool*(mix: WakuMix) = - # populate only peers that i) are reachable ii) share cluster iii) support mix - let remotePeers = mix.peerManager.switch.peerStore.peers().filterIt( - filterMixNodes(some(mix.clusterId), it) - ) - var mixNodes = initTable[PeerId, MixPubInfo]() - - for i in 0 ..< min(remotePeers.len, 100): - let ipv4addr = getIPv4Multiaddr(remotePeers[i].addrs).valueOr: - trace "peer has no ipv4 address", peer = $remotePeers[i] - continue - let maddrWithPeerId = appendPeerIdToMultiaddr(ipv4addr, remotePeers[i].peerId) - trace "remote peer info", info = remotePeers[i] - - if remotePeers[i].mixPubKey.isNone(): - trace "peer has no mix Pub Key", remotePeerId = $remotePeers[i] - continue - - let peerMixPubKey = remotePeers[i].mixPubKey.get() - var peerPubKey: crypto.PublicKey - if not remotePeers[i].peerId.extractPublicKey(peerPubKey): - warn "Failed to extract public key from peerId, skipping node", - remotePeerId = remotePeers[i].peerId - continue - - if peerPubKey.scheme != PKScheme.Secp256k1: - warn "Peer public key is not Secp256k1, skipping node", - remotePeerId = remotePeers[i].peerId, scheme = peerPubKey.scheme - continue - - let mixNodePubInfo = MixPubInfo.init( - remotePeers[i].peerId, - ipv4addr, - intoCurve25519Key(peerMixPubKey), - peerPubKey.skkey, - ) - trace "adding mix node to pool", - remotePeerId = remotePeers[i].peerId, multiAddr = $ipv4addr - mixNodes[remotePeers[i].peerId] = mixNodePubInfo - - # set the mix node pool - mix.setNodePool(mixNodes) - mix_pool_size.set(len(mixNodes)) - trace "mix node pool updated", poolSize = mix.getNodePoolSize() - -# Once mix protocol starts to use info from PeerStore, then this can be removed. 
-proc startMixNodePoolMgr*(mix: WakuMix) {.async.} = - info "starting mix node pool manager" - # try more aggressively to populate the pool at startup - var attempts = 50 - # TODO: make initial pool size configurable - while mix.getNodePoolSize() < 100 and attempts > 0: - attempts -= 1 - mix.populateMixNodePool() - await sleepAsync(1.seconds) - - # TODO: make interval configurable - heartbeat "Updating mix node pool", 5.seconds: - mix.populateMixNodePool() - proc processBootNodes( - bootnodes: seq[MixNodePubInfo], peermgr: PeerManager -): Table[PeerId, MixPubInfo] = - var mixNodes = initTable[PeerId, MixPubInfo]() + bootnodes: seq[MixNodePubInfo], peermgr: PeerManager, mix: WakuMix +) = + var count = 0 for node in bootnodes: let pInfo = parsePeerInfo(node.multiAddr).valueOr: error "Failed to get peer id from multiaddress: ", @@ -156,14 +60,15 @@ proc processBootNodes( error "Failed to parse multiaddress", multiAddr = node.multiAddr, error = error continue - mixNodes[peerId] = MixPubInfo.init(peerId, multiAddr, node.pubKey, peerPubKey.skkey) + let mixPubInfo = MixPubInfo.init(peerId, multiAddr, node.pubKey, peerPubKey.skkey) + mix.nodePool.add(mixPubInfo) + count.inc() peermgr.addPeer( RemotePeerInfo.init(peerId, @[multiAddr], mixPubKey = some(node.pubKey)) ) - mix_pool_size.set(len(mixNodes)) - info "using mix bootstrap nodes ", bootNodes = mixNodes - return mixNodes + mix_pool_size.set(count) + info "using mix bootstrap nodes ", count = count proc new*( T: type WakuMix, @@ -183,22 +88,28 @@ proc new*( ) if bootnodes.len < minMixPoolSize: warn "publishing with mix won't work until atleast 3 mix nodes in node pool" - let initTable = processBootNodes(bootnodes, peermgr) - if len(initTable) < minMixPoolSize: - warn "publishing with mix won't work until atleast 3 mix nodes in node pool" var m = WakuMix(peerManager: peermgr, clusterId: clusterId, pubKey: mixPubKey) - procCall MixProtocol(m).init(localMixNodeInfo, initTable, peermgr.switch) + procCall 
MixProtocol(m).init( + localMixNodeInfo, + peermgr.switch, + delayStrategy = + ExponentialDelayStrategy.new(meanDelayMs = 50, rng = crypto.newRng()), + ) + + processBootNodes(bootnodes, peermgr, m) + + if m.nodePool.len < minMixPoolSize: + warn "publishing with mix won't work until at least 3 mix nodes in node pool" return ok(m) +proc poolSize*(mix: WakuMix): int = + mix.nodePool.len + method start*(mix: WakuMix) = info "starting waku mix protocol" - mix.nodePoolLoopHandle = mix.startMixNodePoolMgr() method stop*(mix: WakuMix) {.async.} = - if mix.nodePoolLoopHandle.isNil(): - return - await mix.nodePoolLoopHandle.cancelAndWait() - mix.nodePoolLoopHandle = nil + discard # Mix Protocol From b23e722cb416fa104097b4c6e38a8b9bf5aa7e50 Mon Sep 17 00:00:00 2001 From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> Date: Thu, 19 Feb 2026 18:11:33 +0100 Subject: [PATCH 60/70] bump nim-metrics to latest master (#3730) --- vendor/nim-metrics | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/vendor/nim-metrics b/vendor/nim-metrics index 11d0cddfb..9b9afee96 160000 --- a/vendor/nim-metrics +++ b/vendor/nim-metrics @@ -1 +1 @@ -Subproject commit 11d0cddfb0e711aa2a8c75d1892ae24a64c299fc +Subproject commit 9b9afee96357ad82dabf4563cf292f89b50423df From a7872d59d1991c7f726c7845d8ac536c6bdf7fb4 Mon Sep 17 00:00:00 2001 From: Ivan Folgueira Bande Date: Fri, 20 Feb 2026 00:10:33 +0100 Subject: [PATCH 61/70] add POSTGRES support in nix --- nix/default.nix | 1 + 1 file changed, 1 insertion(+) diff --git a/nix/default.nix b/nix/default.nix index ca91d0e2f..7df58df60 100644 --- a/nix/default.nix +++ b/nix/default.nix @@ -61,6 +61,7 @@ in stdenv.mkDerivation { "QUICK_AND_DIRTY_NIMBLE=${if quickAndDirty then "1" else "0"}" "USE_SYSTEM_NIM=${if useSystemNim then "1" else "0"}" "LIBRLN_FILE=${zerokitRln}/lib/librln.${if abidir != null then "so" else "a"}" + "POSTGRES=1" ]; configurePhase = '' From c7e0cc0eaaa7c5cbfe8cf20a39fb4ca5d0ba4681 Mon Sep 17 00:00:00 2001
From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> Date: Sat, 21 Feb 2026 16:24:26 +0100 Subject: [PATCH 62/70] bump nim-metrics to proper tagged commit v0.2.1 (#3734) --- vendor/nim-metrics | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/vendor/nim-metrics b/vendor/nim-metrics index 9b9afee96..a1296caf3 160000 --- a/vendor/nim-metrics +++ b/vendor/nim-metrics @@ -1 +1 @@ -Subproject commit 9b9afee96357ad82dabf4563cf292f89b50423df +Subproject commit a1296caf3ebb5f30f51a5feae7749a30df2824c2 From ba85873f03a1da6ab04287949849815fd97b7bfd Mon Sep 17 00:00:00 2001 From: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com> Date: Thu, 26 Feb 2026 17:55:31 +0100 Subject: [PATCH 63/70] health status event support for liblogosdelivery (#3737) --- liblogosdelivery/logos_delivery_api/node_api.nim | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/liblogosdelivery/logos_delivery_api/node_api.nim b/liblogosdelivery/logos_delivery_api/node_api.nim index 6a0041857..5c2d7885f 100644 --- a/liblogosdelivery/logos_delivery_api/node_api.nim +++ b/liblogosdelivery/logos_delivery_api/node_api.nim @@ -4,7 +4,7 @@ import waku/factory/waku, waku/node/waku_node, waku/api/[api, api_conf, types], - waku/events/message_events, + waku/events/[message_events, health_events], ../declare_lib, ../json_event @@ -88,6 +88,15 @@ proc logosdelivery_start_node( chronicles.error "MessagePropagatedEvent.listen failed", err = $error return err("MessagePropagatedEvent.listen failed: " & $error) + let ConnectionStatusChangeListener = EventConnectionStatusChange.listen( + ctx.myLib[].brokerCtx, + proc(event: EventConnectionStatusChange) {.async: (raises: []).} = + callEventCallback(ctx, "onConnectionStatusChange"): + $newJsonEvent("connection_status_change", event), + ).valueOr: + chronicles.error "ConnectionStatusChange.listen failed", err = $error + return err("ConnectionStatusChange.listen failed: " & $error) + (await 
startWaku(addr ctx.myLib[])).isOkOr: let errMsg = $error chronicles.error "START_NODE failed", err = errMsg @@ -103,6 +112,7 @@ proc logosdelivery_stop_node( MessageErrorEvent.dropAllListeners(ctx.myLib[].brokerCtx) MessageSentEvent.dropAllListeners(ctx.myLib[].brokerCtx) MessagePropagatedEvent.dropAllListeners(ctx.myLib[].brokerCtx) + EventConnectionStatusChange.dropAllListeners(ctx.myLib[].brokerCtx) (await ctx.myLib[].stop()).isOkOr: let errMsg = $error From 51ec09c39df1ce8b442381300d91ebdd3bfe9e15 Mon Sep 17 00:00:00 2001 From: Fabiana Cecin Date: Mon, 2 Mar 2026 14:52:36 -0300 Subject: [PATCH 64/70] Implement stateful SubscriptionService for Core mode (#3732) * SubscriptionManager tracks shard and content topic interest * RecvService emits MessageReceivedEvent on subscribed content topics * Route MAPI through old Kernel API relay unique-handler infra to avoid code duplication * Encode current gen-zero network policy: on Core node boot, subscribe to all pubsub topics (all shards) * Add test_api_subscriptions.nim (basic relay/core testing only) * Removed any MAPI Edge sub/unsub/receive support code that was there (will add in next PR) * Hook MessageSeenEvent to Kernel API bus * Fix MAPI vs Kernel API unique relay handler support * RecvService delegating topic subs to SubscriptionManager * RecvService emits MessageReceivedEvent (fully filtered) * Rename old SubscriptionManager to LegacySubscriptionManager --- tests/api/test_all.nim | 7 +- tests/api/test_api_subscription.nim | 399 ++++++++++++++++++ tests/waku_store/test_wakunode_store.nim | 6 + waku/api/api.nim | 15 +- waku/events/message_events.nim | 10 +- waku/factory/waku.nim | 7 +- .../delivery_service/delivery_service.nim | 18 +- .../recv_service/recv_service.nim | 100 ++--- .../send_service/send_service.nim | 14 +- .../delivery_service/subscription_manager.nim | 164 +++++++ .../delivery_service/subscription_service.nim | 64 --- waku/node/kernel_api/relay.nim | 100 +++-- waku/node/waku_node.nim | 2 + 
.../subscription/subscription_manager.nim | 16 +- 14 files changed, 735 insertions(+), 187 deletions(-) create mode 100644 tests/api/test_api_subscription.nim create mode 100644 waku/node/delivery_service/subscription_manager.nim delete mode 100644 waku/node/delivery_service/subscription_service.nim diff --git a/tests/api/test_all.nim b/tests/api/test_all.nim index 57f7f37f2..4617c8cdb 100644 --- a/tests/api/test_all.nim +++ b/tests/api/test_all.nim @@ -1,3 +1,8 @@ {.used.} -import ./test_entry_nodes, ./test_node_conf, ./test_api_send, ./test_api_health +import + ./test_entry_nodes, + ./test_node_conf, + ./test_api_send, + ./test_api_subscription, + ./test_api_health diff --git a/tests/api/test_api_subscription.nim b/tests/api/test_api_subscription.nim new file mode 100644 index 000000000..8983c2934 --- /dev/null +++ b/tests/api/test_api_subscription.nim @@ -0,0 +1,399 @@ +{.used.} + +import std/[strutils, net, options, sets] +import chronos, testutils/unittests, stew/byteutils +import libp2p/[peerid, peerinfo, multiaddress, crypto/crypto] +import ../testlib/[common, wakucore, wakunode, testasync] + +import + waku, + waku/[ + waku_node, + waku_core, + common/broker/broker_context, + events/message_events, + waku_relay/protocol, + ] +import waku/api/api_conf, waku/factory/waku_conf + +# TODO: Edge testing (after MAPI edge support is completed) + +const TestTimeout = chronos.seconds(10) +const NegativeTestTimeout = chronos.seconds(2) + +type ReceiveEventListenerManager = ref object + brokerCtx: BrokerContext + receivedListener: MessageReceivedEventListener + receivedEvent: AsyncEvent + receivedMessages: seq[WakuMessage] + targetCount: int + +proc newReceiveEventListenerManager( + brokerCtx: BrokerContext, expectedCount: int = 1 +): ReceiveEventListenerManager = + let manager = ReceiveEventListenerManager( + brokerCtx: brokerCtx, receivedMessages: @[], targetCount: expectedCount + ) + manager.receivedEvent = newAsyncEvent() + + manager.receivedListener = 
MessageReceivedEvent
+    .listen(
+      brokerCtx,
+      proc(event: MessageReceivedEvent) {.async: (raises: []).} =
+        manager.receivedMessages.add(event.message)
+
+        if manager.receivedMessages.len >= manager.targetCount:
+          manager.receivedEvent.fire()
+      ,
+    )
+    .expect("Failed to listen to MessageReceivedEvent")
+
+  return manager
+
+proc teardown(manager: ReceiveEventListenerManager) =
+  MessageReceivedEvent.dropListener(manager.brokerCtx, manager.receivedListener)
+
+proc waitForEvents(
+    manager: ReceiveEventListenerManager, timeout: Duration
+): Future[bool] {.async.} =
+  return await manager.receivedEvent.wait().withTimeout(timeout)
+
+type TestNetwork = ref object
+  publisher: WakuNode
+  subscriber: Waku
+  publisherPeerInfo: RemotePeerInfo
+
+proc createApiNodeConf(
+    mode: WakuMode = WakuMode.Core, numShards: uint16 = 1
+): NodeConfig =
+  let netConf = NetworkingConfig(listenIpv4: "0.0.0.0", p2pTcpPort: 0, discv5UdpPort: 0)
+  result = NodeConfig.init(
+    mode = mode,
+    protocolsConfig = ProtocolsConfig.init(
+      entryNodes = @[],
+      clusterId = 1,
+      autoShardingConfig = AutoShardingConfig(numShardsInCluster: numShards),
+    ),
+    networkingConfig = netConf,
+    p2pReliability = true,
+  )
+
+proc setupSubscriberNode(conf: NodeConfig): Future[Waku] {.async.} =
+  var node: Waku
+  lockNewGlobalBrokerContext:
+    node = (await createNode(conf)).expect("Failed to create subscriber node")
+    (await startWaku(addr node)).expect("Failed to start subscriber node")
+  return node
+
+proc setupNetwork(
+    numShards: uint16 = 1, mode: WakuMode = WakuMode.Core
+): Future[TestNetwork] {.async.} =
+  var net = TestNetwork()
+
+  lockNewGlobalBrokerContext:
+    net.publisher =
+      newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0))
+    net.publisher.mountMetadata(1, @[0'u16]).expect("Failed to mount metadata")
+    (await net.publisher.mountRelay()).expect("Failed to mount relay")
+    await net.publisher.mountLibp2pPing()
+    await net.publisher.start()
+
+  net.publisherPeerInfo = net.publisher.peerInfo.toRemotePeerInfo()
+
+  proc dummyHandler(topic: PubsubTopic, msg: WakuMessage) {.async, gcsafe.} =
+    discard
+
+  # Subscribe the publisher to all shards so a GossipSub mesh forms with the subscriber.
+  # Core/Relay nodes currently auto-subscribe to all network shards on boot; if that
+  # ever changes, this loop still gives the publisher shard interest for any shard the
+  # subscriber may use, which waitForMesh relies on.
+  for i in 0 ..< numShards.int:
+    let shard = PubsubTopic("/waku/2/rs/1/" & $i)
+    net.publisher.subscribe((kind: PubsubSub, topic: shard), dummyHandler).expect(
+      "Failed to sub publisher"
+    )
+
+  net.subscriber = await setupSubscriberNode(createApiNodeConf(mode, numShards))
+
+  await net.subscriber.node.connectToNodes(@[net.publisherPeerInfo])
+
+  return net
+
+proc teardown(net: TestNetwork) {.async.} =
+  if not isNil(net.subscriber):
+    (await net.subscriber.stop()).expect("Failed to stop subscriber node")
+    net.subscriber = nil
+
+  if not isNil(net.publisher):
+    await net.publisher.stop()
+    net.publisher = nil
+
+proc getRelayShard(node: WakuNode, contentTopic: ContentTopic): PubsubTopic =
+  let autoSharding = node.wakuAutoSharding.get()
+  let shardObj = autoSharding.getShard(contentTopic).expect("Failed to get shard")
+  return PubsubTopic($shardObj)
+
+proc waitForMesh(node: WakuNode, shard: PubsubTopic) {.async.} =
+  for _ in 0 ..< 50:
+    if node.wakuRelay.getNumPeersInMesh(shard).valueOr(0) > 0:
+      return
+    await sleepAsync(100.milliseconds)
+  raise newException(ValueError, "GossipSub Mesh failed to stabilize on " & shard)
+
+proc publishToMesh(
+    net: TestNetwork, contentTopic: ContentTopic, payload: seq[byte]
+): Future[Result[int, string]] {.async.} =
+  let shard = net.subscriber.node.getRelayShard(contentTopic)
+
+  await waitForMesh(net.publisher, shard)
+
+  let msg = WakuMessage(
+    payload: payload, contentTopic: contentTopic, version: 0, timestamp: now()
+  )
+  return await net.publisher.publish(some(shard), msg)
+
+suite "Messaging API, SubscriptionManager":
+  asyncTest "Subscription API, relay node auto subscribe and receive message":
+    let net = await setupNetwork(1)
+    defer:
+      await net.teardown()
+
+    let testTopic = ContentTopic("/waku/2/test-content/proto")
+    (await net.subscriber.subscribe(testTopic)).expect(
+      "subscriberNode failed to subscribe"
+    )
+
+    let eventManager = newReceiveEventListenerManager(net.subscriber.brokerCtx, 1)
+    defer:
+      eventManager.teardown()
+
+    discard (await net.publishToMesh(testTopic, "Hello, world!".toBytes())).expect(
+      "Publish failed"
+    )
+
+    require await eventManager.waitForEvents(TestTimeout)
+    require eventManager.receivedMessages.len == 1
+    check eventManager.receivedMessages[0].contentTopic == testTopic
+
+  asyncTest "Subscription API, relay node ignores unsubscribed content topics on same shard":
+    let net = await setupNetwork(1)
+    defer:
+      await net.teardown()
+
+    let subbedTopic = ContentTopic("/waku/2/subbed-topic/proto")
+    let ignoredTopic = ContentTopic("/waku/2/ignored-topic/proto")
+    (await net.subscriber.subscribe(subbedTopic)).expect("failed to subscribe")
+
+    let eventManager = newReceiveEventListenerManager(net.subscriber.brokerCtx, 1)
+    defer:
+      eventManager.teardown()
+
+    discard (await net.publishToMesh(ignoredTopic, "Ghost Msg".toBytes())).expect(
+      "Publish failed"
+    )
+
+    check not await eventManager.waitForEvents(NegativeTestTimeout)
+    check eventManager.receivedMessages.len == 0
+
+  asyncTest "Subscription API, relay node unsubscribe stops message receipt":
+    let net = await setupNetwork(1)
+    defer:
+      await net.teardown()
+
+    let testTopic = ContentTopic("/waku/2/unsub-test/proto")
+
+    (await net.subscriber.subscribe(testTopic)).expect("failed to subscribe")
+    net.subscriber.unsubscribe(testTopic).expect("failed to unsubscribe")
+
+    let eventManager = newReceiveEventListenerManager(net.subscriber.brokerCtx, 1)
+    defer:
+      eventManager.teardown()
+
+    discard (await net.publishToMesh(testTopic, "Should be dropped".toBytes())).expect(
+      "Publish failed"
+    )
+
+    check not await eventManager.waitForEvents(NegativeTestTimeout)
+    check eventManager.receivedMessages.len == 0
+
+  asyncTest "Subscription API, overlapping topics on same shard maintain correct isolation":
+    let net = await setupNetwork(1)
+    defer:
+      await net.teardown()
+
+    let topicA = ContentTopic("/waku/2/topic-a/proto")
+    let topicB = ContentTopic("/waku/2/topic-b/proto")
+    (await net.subscriber.subscribe(topicA)).expect("failed to sub A")
+    (await net.subscriber.subscribe(topicB)).expect("failed to sub B")
+
+    let eventManager = newReceiveEventListenerManager(net.subscriber.brokerCtx, 1)
+    defer:
+      eventManager.teardown()
+
+    net.subscriber.unsubscribe(topicA).expect("failed to unsub A")
+
+    discard (await net.publishToMesh(topicA, "Dropped Message".toBytes())).expect(
+      "Publish A failed"
+    )
+    discard
+      (await net.publishToMesh(topicB, "Kept Msg".toBytes())).expect("Publish B failed")
+
+    require await eventManager.waitForEvents(TestTimeout)
+    require eventManager.receivedMessages.len == 1
+    check eventManager.receivedMessages[0].contentTopic == topicB
+
+  asyncTest "Subscription API, redundant subs tolerated and subs are removed":
+    let net = await setupNetwork(1)
+    defer:
+      await net.teardown()
+
+    let glitchTopic = ContentTopic("/waku/2/glitch/proto")
+
+    (await net.subscriber.subscribe(glitchTopic)).expect("failed to sub")
+    (await net.subscriber.subscribe(glitchTopic)).expect("failed to double sub")
+    net.subscriber.unsubscribe(glitchTopic).expect("failed to unsub")
+
+    let eventManager = newReceiveEventListenerManager(net.subscriber.brokerCtx, 1)
+    defer:
+      eventManager.teardown()
+
+    discard (await net.publishToMesh(glitchTopic, "Ghost Msg".toBytes())).expect(
+      "Publish failed"
+    )
+
+    check not await eventManager.waitForEvents(NegativeTestTimeout)
+    check eventManager.receivedMessages.len == 0
+
+  asyncTest "Subscription API, resubscribe to an unsubscribed topic":
+    let net = await setupNetwork(1)
+    defer:
+      await net.teardown()
+
+    let testTopic = ContentTopic("/waku/2/resub-test/proto")
+
+    # Subscribe
+    (await net.subscriber.subscribe(testTopic)).expect("Initial sub failed")
+
+    var eventManager = newReceiveEventListenerManager(net.subscriber.brokerCtx, 1)
+    discard
+      (await net.publishToMesh(testTopic, "Msg 1".toBytes())).expect("Pub 1 failed")
+
+    require await eventManager.waitForEvents(TestTimeout)
+    eventManager.teardown()
+
+    # Unsubscribe and verify teardown
+    net.subscriber.unsubscribe(testTopic).expect("Unsub failed")
+    eventManager = newReceiveEventListenerManager(net.subscriber.brokerCtx, 1)
+
+    discard
+      (await net.publishToMesh(testTopic, "Ghost".toBytes())).expect("Ghost pub failed")
+
+    check not await eventManager.waitForEvents(NegativeTestTimeout)
+    eventManager.teardown()
+
+    # Resubscribe
+    (await net.subscriber.subscribe(testTopic)).expect("Resub failed")
+    eventManager = newReceiveEventListenerManager(net.subscriber.brokerCtx, 1)
+
+    discard
+      (await net.publishToMesh(testTopic, "Msg 2".toBytes())).expect("Pub 2 failed")
+
+    require await eventManager.waitForEvents(TestTimeout)
+    check eventManager.receivedMessages[0].payload == "Msg 2".toBytes()
+
+  asyncTest "Subscription API, two content topics in different shards":
+    let net = await setupNetwork(8)
+    defer:
+      await net.teardown()
+
+    var topicA = ContentTopic("/appA/2/shard-test-a/proto")
+    var topicB = ContentTopic("/appB/2/shard-test-b/proto")
+
+    # generate two content topics that land in two different shards
+    var i = 0
+    while net.subscriber.node.getRelayShard(topicA) ==
+        net.subscriber.node.getRelayShard(topicB):
+      topicB = ContentTopic("/appB" & $i & "/2/shard-test-b/proto")
+      inc i
+
+    (await net.subscriber.subscribe(topicA)).expect("failed to sub A")
+    (await net.subscriber.subscribe(topicB)).expect("failed to sub B")
+
+    let eventManager = newReceiveEventListenerManager(net.subscriber.brokerCtx, 2)
+    defer:
+      eventManager.teardown()
+
+    discard (await net.publishToMesh(topicA, "Msg on Shard A".toBytes())).expect(
+      "Publish A failed"
+    )
+    discard (await net.publishToMesh(topicB, "Msg on Shard B".toBytes())).expect(
+      "Publish B failed"
+    )
+
+    require await eventManager.waitForEvents(TestTimeout)
+    require eventManager.receivedMessages.len == 2
+
+  asyncTest "Subscription API, many content topics in many shards":
+    let net = await setupNetwork(8)
+    defer:
+      await net.teardown()
+
+    var allTopics: seq[ContentTopic]
+    for i in 0 ..< 100:
+      allTopics.add(ContentTopic("/stress-app-" & $i & "/2/state-test/proto"))
+
+    var activeSubs: seq[ContentTopic]
+
+    proc verifyNetworkState(expected: seq[ContentTopic]) {.async.} =
+      let eventManager =
+        newReceiveEventListenerManager(net.subscriber.brokerCtx, expected.len)
+
+      for topic in allTopics:
+        discard (await net.publishToMesh(topic, "Stress Payload".toBytes())).expect(
+          "publish failed"
+        )
+
+      require await eventManager.waitForEvents(TestTimeout)
+
+      # give any unexpected messages a chance to arrive before we assert
+      await sleepAsync(1.seconds)
+      eventManager.teardown()
+
+      # weak check (but catches most bugs)
+      require eventManager.receivedMessages.len == expected.len
+
+      # strict expected receipt test
+      var receivedTopics = initHashSet[ContentTopic]()
+      for msg in eventManager.receivedMessages:
+        receivedTopics.incl(msg.contentTopic)
+      var expectedTopics = initHashSet[ContentTopic]()
+      for t in expected:
+        expectedTopics.incl(t)
+
+      check receivedTopics == expectedTopics
+
+    # subscribe to all content topics we generated
+    for t in allTopics:
+      (await net.subscriber.subscribe(t)).expect("sub failed")
+      activeSubs.add(t)
+
+    await verifyNetworkState(activeSubs)
+
+    # unsubscribe from some content topics
+    for i in 0 ..< 50:
+      let t = allTopics[i]
+      net.subscriber.unsubscribe(t).expect("unsub failed")
+
+      let idx = activeSubs.find(t)
+      if idx >= 0:
+        activeSubs.del(idx)
+
+    await verifyNetworkState(activeSubs)
+
+    # re-subscribe to some content topics
+    for i in 0 ..< 25:
+      let t = allTopics[i]
+      (await net.subscriber.subscribe(t)).expect("resub failed")
+      activeSubs.add(t)
+
+    await verifyNetworkState(activeSubs)
diff --git a/tests/waku_store/test_wakunode_store.nim b/tests/waku_store/test_wakunode_store.nim
index e30854906..9239435af 100644
--- a/tests/waku_store/test_wakunode_store.nim
+++ b/tests/waku_store/test_wakunode_store.nim
@@ -374,6 +374,12 @@ procSuite "WakuNode - Store":
     waitFor allFutures(client.stop(), server.stop())
 
   test "Store protocol queries overrun request rate limitation":
+    when defined(macosx):
+      # On macOS CI this test gets a 200 (OK) instead of the expected 429 error,
+      # meaning the runner is too slow to trip the request rate limit.
+      skip()
+      return
+
     ## Setup
     let serverKey = generateSecp256k1Key()
diff --git a/waku/api/api.nim b/waku/api/api.nim
index 3493513a3..ba6f83b78 100644
--- a/waku/api/api.nim
+++ b/waku/api/api.nim
@@ -3,7 +3,7 @@ import chronicles, chronos, results, std/strutils
 import waku/factory/waku
 import waku/[requests/health_requests, waku_core, waku_node]
 import waku/node/delivery_service/send_service
-import waku/node/delivery_service/subscription_service
+import waku/node/delivery_service/subscription_manager
 import libp2p/peerid
 
 import ./[api_conf, types]
@@ -36,18 +36,27 @@ proc subscribe*(
 ): Future[Result[void, string]] {.async.} =
   ?checkApiAvailability(w)
 
-  return w.deliveryService.subscriptionService.subscribe(contentTopic)
+  return w.deliveryService.subscriptionManager.subscribe(contentTopic)
 
 proc unsubscribe*(w: Waku, contentTopic: ContentTopic): Result[void, string] =
   ?checkApiAvailability(w)
 
-  return w.deliveryService.subscriptionService.unsubscribe(contentTopic)
+  return w.deliveryService.subscriptionManager.unsubscribe(contentTopic)
 
 proc send*(
     w: Waku, envelope: MessageEnvelope
 ): Future[Result[RequestId, string]] {.async.} =
   ?checkApiAvailability(w)
 
+  let isSubbed = w.deliveryService.subscriptionManager
+    .isSubscribed(envelope.contentTopic)
+    .valueOr(false)
+
+  if not isSubbed:
+    info "Auto-subscribing to topic on send", contentTopic = envelope.contentTopic
+    w.deliveryService.subscriptionManager.subscribe(envelope.contentTopic).isOkOr:
+      warn "Failed to auto-subscribe", error = error
+      return err("Failed to auto-subscribe before sending: " & error)
+
   let requestId = RequestId.new(w.rng)
 
   let deliveryTask = DeliveryTask.new(requestId, envelope, w.brokerCtx).valueOr:
diff --git a/waku/events/message_events.nim b/waku/events/message_events.nim
index cf3dac9b7..677a4a433 100644
--- a/waku/events/message_events.nim
+++ b/waku/events/message_events.nim
@@ -1,6 +1,4 @@
-import waku/common/broker/event_broker
-import waku/api/types
-import waku/waku_core/message
+import waku/[api/types, waku_core/message, waku_core/topics, common/broker/event_broker]
 
 export types
 
@@ -28,3 +26,9 @@ EventBroker:
   type MessageReceivedEvent* = object
     messageHash*: string
     message*: WakuMessage
+
+EventBroker:
+  # Internal event emitted when a message arrives from the network via any protocol
+  type MessageSeenEvent* = object
+    topic*: PubsubTopic
+    message*: WakuMessage
diff --git a/waku/factory/waku.nim b/waku/factory/waku.nim
index 9803a53a9..ff0ab0568 100644
--- a/waku/factory/waku.nim
+++ b/waku/factory/waku.nim
@@ -35,6 +35,7 @@ import
   node/health_monitor,
   node/waku_metrics,
   node/delivery_service/delivery_service,
+  node/delivery_service/subscription_manager,
   rest_api/message_cache,
   rest_api/endpoint/server,
   rest_api/endpoint/builder as rest_server_builder,
@@ -453,7 +454,7 @@ proc startWaku*(waku: ptr Waku): Future[Result[void, string]] {.async: (raises:
   ).isOkOr:
     error "Failed to set RequestProtocolHealth provider", error = error
 
-  ## Setup RequestHealthReport provider (The lost child)
+  ## Setup RequestHealthReport provider
   RequestHealthReport.setProvider(
     globalBrokerContext(),
@@ -514,6 +515,10 @@ proc stop*(waku: Waku): Future[Result[void, string]] {.async: (raises: []).} =
   if not waku.wakuDiscv5.isNil():
     await waku.wakuDiscv5.stop()
 
+  if not waku.deliveryService.isNil():
+    await waku.deliveryService.stopDeliveryService()
+    waku.deliveryService = nil
+
   if not waku.node.isNil():
     await waku.node.stop()
diff --git a/waku/node/delivery_service/delivery_service.nim b/waku/node/delivery_service/delivery_service.nim
index 8106cba9f..258c01e95 100644
--- a/waku/node/delivery_service/delivery_service.nim
+++ b/waku/node/delivery_service/delivery_service.nim
@@ -5,7 +5,7 @@ import chronos
 import
   ./recv_service,
   ./send_service,
-  ./subscription_service,
+  ./subscription_manager,
   waku/[
     waku_core,
     waku_node,
@@ -18,29 +18,31 @@ import
 type DeliveryService* = ref object
   sendService*: SendService
   recvService: RecvService
-  subscriptionService*: SubscriptionService
+  subscriptionManager*: SubscriptionManager
 
 proc new*(
     T: type DeliveryService, useP2PReliability: bool, w: WakuNode
 ): Result[T, string] =
   ## storeClient is needed to give store visibility to DeliveryService
   ## wakuRelay and wakuLightpushClient are needed to give SendService a mechanism to re-publish
-  let subscriptionService = SubscriptionService.new(w)
-  let sendService = ?SendService.new(useP2PReliability, w, subscriptionService)
-  let recvService = RecvService.new(w, subscriptionService)
+  let subscriptionManager = SubscriptionManager.new(w)
+  let sendService = ?SendService.new(useP2PReliability, w, subscriptionManager)
+  let recvService = RecvService.new(w, subscriptionManager)
 
   return ok(
     DeliveryService(
       sendService: sendService,
       recvService: recvService,
-      subscriptionService: subscriptionService,
+      subscriptionManager: subscriptionManager,
     )
   )
 
 proc startDeliveryService*(self: DeliveryService) =
-  self.sendService.startSendService()
+  self.subscriptionManager.startSubscriptionManager()
   self.recvService.startRecvService()
+  self.sendService.startSendService()
 
 proc stopDeliveryService*(self: DeliveryService) {.async.} =
-  self.sendService.stopSendService()
+  await self.sendService.stopSendService()
   await self.recvService.stopRecvService()
+  await self.subscriptionManager.stopSubscriptionManager()
diff --git a/waku/node/delivery_service/recv_service/recv_service.nim b/waku/node/delivery_service/recv_service/recv_service.nim
index 12780033a..0eba2c450 100644
--- a/waku/node/delivery_service/recv_service/recv_service.nim
+++ b/waku/node/delivery_service/recv_service/recv_service.nim
@@ -2,9 +2,9 @@
 ## receive and is backed by store-v3 requests to get an additional degree of certainty
 ##
 
-import std/[tables, sequtils, options]
+import std/[tables, sequtils, options, sets]
 import chronos, chronicles, libp2p/utility
-import ../[subscription_service]
+import ../[subscription_manager]
 import
   waku/[
     waku_core,
@@ -13,6 +13,7 @@ import
     waku_filter_v2/client,
     waku_core/topics,
     events/delivery_events,
+    events/message_events,
     waku_node,
     common/broker/broker_context,
   ]
@@ -35,14 +36,9 @@ type RecvMessage = object
 
 type RecvService* = ref object of RootObj
   brokerCtx: BrokerContext
-  topicsInterest: Table[PubsubTopic, seq[ContentTopic]]
-    ## Tracks message verification requests and when was the last time a
-    ## pubsub topic was verified for missing messages
-    ## The key contains pubsub-topics
   node: WakuNode
-  onSubscribeListener: OnFilterSubscribeEventListener
-  onUnsubscribeListener: OnFilterUnsubscribeEventListener
-  subscriptionService: SubscriptionService
+  seenMsgListener: MessageSeenEventListener
+  subscriptionManager: SubscriptionManager
 
   recentReceivedMsgs: seq[RecvMessage]
 
@@ -95,20 +91,20 @@ proc msgChecker(self: RecvService) {.async.} =
   self.endTimeToCheck = getNowInNanosecondTime()
 
   var msgHashesInStore = newSeq[WakuMessageHash](0)
 
-  for pubsubTopic, cTopics in self.topicsInterest.pairs:
+  for sub in self.subscriptionManager.getActiveSubscriptions():
     let storeResp: StoreQueryResponse = (
       await self.node.wakuStoreClient.queryToAny(
         StoreQueryRequest(
           includeData: false,
-          pubsubTopic: some(PubsubTopic(pubsubTopic)),
-          contentTopics: cTopics,
+          pubsubTopic: some(PubsubTopic(sub.pubsubTopic)),
+          contentTopics: sub.contentTopics,
           startTime: some(self.startTimeToCheck - DelayExtra.nanos),
           endTime: some(self.endTimeToCheck + DelayExtra.nanos),
         )
       )
     ).valueOr:
       error "msgChecker failed to get remote msgHashes",
-        pubsubTopic, cTopics, error = $error
+        pubsubTopic = sub.pubsubTopic, cTopics = sub.contentTopics, error = $error
       continue
 
     msgHashesInStore.add(storeResp.messages.mapIt(it.messageHash))
@@ -133,31 +129,20 @@ proc msgChecker(self: RecvService) {.async.} =
   ## update next check times
   self.startTimeToCheck = self.endTimeToCheck
 
-proc onSubscribe(
-    self: RecvService, pubsubTopic: string, contentTopics: seq[string]
-) {.gcsafe, raises: [].} =
-  info "onSubscribe", pubsubTopic, contentTopics
-  self.topicsInterest.withValue(pubsubTopic, contentTopicsOfInterest):
-    contentTopicsOfInterest[].add(contentTopics)
-  do:
-    self.topicsInterest[pubsubTopic] = contentTopics
+proc processIncomingMessageOfInterest(
+    self: RecvService, pubsubTopic: string, message: WakuMessage
+) =
+  ## Resolve an incoming network message that was already filtered by topic.
+  ## Deduplicate (by hash), store (saves in recently-seen messages) and emit
+  ## the MAPI MessageReceivedEvent for every unique incoming message.
-proc onUnsubscribe(
-    self: RecvService, pubsubTopic: string, contentTopics: seq[string]
-) {.gcsafe, raises: [].} =
-  info "onUnsubscribe", pubsubTopic, contentTopics
+  let msgHash = computeMessageHash(pubsubTopic, message)
+  if not self.recentReceivedMsgs.anyIt(it.msgHash == msgHash):
+    let rxMsg = RecvMessage(msgHash: msgHash, rxTime: message.timestamp)
+    self.recentReceivedMsgs.add(rxMsg)
+    MessageReceivedEvent.emit(self.brokerCtx, msgHash.to0xHex(), message)
 
-  self.topicsInterest.withValue(pubsubTopic, contentTopicsOfInterest):
-    let remainingCTopics =
-      contentTopicsOfInterest[].filterIt(not contentTopics.contains(it))
-    contentTopicsOfInterest[] = remainingCTopics
-
-    if remainingCTopics.len == 0:
-      self.topicsInterest.del(pubsubTopic)
-  do:
-    error "onUnsubscribe unsubscribing from wrong topic", pubsubTopic, contentTopics
-
-proc new*(T: typedesc[RecvService], node: WakuNode, s: SubscriptionService): T =
+proc new*(T: typedesc[RecvService], node: WakuNode, s: SubscriptionManager): T =
   ## The storeClient will help to acquire any possible missed messages
 
   let now = getNowInNanosecondTime()
@@ -165,22 +150,13 @@ proc new*(T: typedesc[RecvService], node: WakuNode, s: SubscriptionService): T =
     node: node,
    startTimeToCheck: now,
     brokerCtx: node.brokerCtx,
-    subscriptionService: s,
-    topicsInterest: initTable[PubsubTopic, seq[ContentTopic]](),
+    subscriptionManager: s,
     recentReceivedMsgs: @[],
   )
 
-  if not node.wakuFilterClient.isNil():
-    let filterPushHandler = proc(
-        pubsubTopic: PubsubTopic, message: WakuMessage
-    ) {.async, closure.} =
-      ## Captures all the messages received through filter
-
-      let msgHash = computeMessageHash(pubSubTopic, message)
-      let rxMsg = RecvMessage(msgHash: msgHash, rxTime: message.timestamp)
-      recvService.recentReceivedMsgs.add(rxMsg)
-
-    node.wakuFilterClient.registerPushHandler(filterPushHandler)
+  # TODO: For MAPI Edge support, either call node.wakuFilterClient.registerPushHandler
+  # so that the RecvService listens to incoming filter messages,
+  # or have the filter client emit MessageSeenEvent.
 
   return recvService
 
@@ -194,26 +170,26 @@ proc startRecvService*(self: RecvService) =
   self.msgCheckerHandler = self.msgChecker()
   self.msgPrunerHandler = self.loopPruneOldMessages()
 
-  self.onSubscribeListener = OnFilterSubscribeEvent.listen(
+  self.seenMsgListener = MessageSeenEvent.listen(
     self.brokerCtx,
-    proc(subsEv: OnFilterSubscribeEvent) {.async: (raises: []).} =
-      self.onSubscribe(subsEv.pubsubTopic, subsEv.contentTopics),
-  ).valueOr:
-    error "Failed to set OnFilterSubscribeEvent listener", error = error
-    quit(QuitFailure)
+    proc(event: MessageSeenEvent) {.async: (raises: []).} =
+      if not self.subscriptionManager.isSubscribed(
+        event.topic, event.message.contentTopic
+      ):
+        trace "skipping message as I am not subscribed",
+          shard = event.topic, contenttopic = event.message.contentTopic
+        return
 
-  self.onUnsubscribeListener = OnFilterUnsubscribeEvent.listen(
-    self.brokerCtx,
-    proc(subsEv: OnFilterUnsubscribeEvent) {.async: (raises: []).} =
-      self.onUnsubscribe(subsEv.pubsubTopic, subsEv.contentTopics),
+      self.processIncomingMessageOfInterest(event.topic, event.message),
   ).valueOr:
-    error "Failed to set OnFilterUnsubscribeEvent listener", error = error
+    error "Failed to set MessageSeenEvent listener", error = error
     quit(QuitFailure)
 
 proc stopRecvService*(self: RecvService) {.async.} =
-  OnFilterSubscribeEvent.dropListener(self.brokerCtx, self.onSubscribeListener)
-  OnFilterUnsubscribeEvent.dropListener(self.brokerCtx, self.onUnsubscribeListener)
+  MessageSeenEvent.dropListener(self.brokerCtx, self.seenMsgListener)
 
   if not self.msgCheckerHandler.isNil():
     await self.msgCheckerHandler.cancelAndWait()
+    self.msgCheckerHandler = nil
 
   if not self.msgPrunerHandler.isNil():
     await self.msgPrunerHandler.cancelAndWait()
+    self.msgPrunerHandler = nil
diff --git a/waku/node/delivery_service/send_service/send_service.nim b/waku/node/delivery_service/send_service/send_service.nim
index a41d07786..a3c44bc0c 100644
--- a/waku/node/delivery_service/send_service/send_service.nim
+++ b/waku/node/delivery_service/send_service/send_service.nim
@@ -5,7 +5,7 @@ import std/[sequtils, tables, options]
 import chronos, chronicles, libp2p/utility
 import
   ./[send_processor, relay_processor, lightpush_processor, delivery_task],
-  ../[subscription_service],
+  ../[subscription_manager],
   waku/[
     waku_core,
     node/waku_node,
@@ -58,7 +58,7 @@ type SendService* = ref object of RootObj
   node: WakuNode
   checkStoreForMessages: bool
-  subscriptionService: SubscriptionService
+  subscriptionManager: SubscriptionManager
 
 proc setupSendProcessorChain(
     peerManager: PeerManager,
@@ -99,7 +99,7 @@ proc new*(
     T: typedesc[SendService],
     preferP2PReliability: bool,
     w: WakuNode,
-    s: SubscriptionService,
+    s: SubscriptionManager,
 ): Result[T, string] =
   if w.wakuRelay.isNil() and w.wakuLightpushClient.isNil():
     return err(
@@ -120,7 +120,7 @@ proc new*(
     sendProcessor: sendProcessorChain,
     node: w,
     checkStoreForMessages: checkStoreForMessages,
-    subscriptionService: s,
+    subscriptionManager: s,
   )
 
   return ok(sendService)
@@ -250,9 +250,9 @@ proc serviceLoop(self: SendService) {.async.} =
 proc startSendService*(self: SendService) =
   self.serviceLoopHandle = self.serviceLoop()
 
-proc stopSendService*(self: SendService) =
+proc stopSendService*(self: SendService) {.async.} =
   if not self.serviceLoopHandle.isNil():
-    discard self.serviceLoopHandle.cancelAndWait()
+    await self.serviceLoopHandle.cancelAndWait()
 
 proc send*(self: SendService, task: DeliveryTask) {.async.} =
   assert(not task.isNil(), "task for send must not be nil")
@@ -260,7 +260,7 @@ proc send*(self: SendService, task: DeliveryTask) {.async.} =
   info "SendService.send: processing delivery task",
     requestId = task.requestId, msgHash = task.msgHash.to0xHex()
 
-  self.subscriptionService.subscribe(task.msg.contentTopic).isOkOr:
+  self.subscriptionManager.subscribe(task.msg.contentTopic).isOkOr:
     error "SendService.send: failed to subscribe to content topic",
       contentTopic = task.msg.contentTopic, error = error
diff --git a/waku/node/delivery_service/subscription_manager.nim b/waku/node/delivery_service/subscription_manager.nim
new file mode 100644
index 000000000..22df47413
--- /dev/null
+++ b/waku/node/delivery_service/subscription_manager.nim
@@ -0,0 +1,164 @@
+import std/[sets, tables, options, strutils], chronos, chronicles, results
+import
+  waku/[
+    waku_core,
+    waku_core/topics,
+    waku_core/topics/sharding,
+    waku_node,
+    waku_relay,
+    common/broker/broker_context,
+    events/delivery_events,
+  ]
+
+type SubscriptionManager* = ref object of RootObj
+  node: WakuNode
+  contentTopicSubs: Table[PubsubTopic, HashSet[ContentTopic]]
+    ## Map of Shard to ContentTopic needed because e.g. WakuRelay is PubsubTopic only.
+    ## A present key with an empty HashSet value means pubsubtopic already subscribed
+    ## (via subscribePubsubTopics()) but there's no specific content topic interest yet.
+
+proc new*(T: typedesc[SubscriptionManager], node: WakuNode): T =
+  SubscriptionManager(
+    node: node, contentTopicSubs: initTable[PubsubTopic, HashSet[ContentTopic]]()
+  )
+
+proc addContentTopicInterest(
+    self: SubscriptionManager, shard: PubsubTopic, topic: ContentTopic
+): Result[void, string] =
+  if not self.contentTopicSubs.hasKey(shard):
+    self.contentTopicSubs[shard] = initHashSet[ContentTopic]()
+
+  self.contentTopicSubs.withValue(shard, cTopics):
+    if not cTopics[].contains(topic):
+      cTopics[].incl(topic)
+
+      # TODO: Call a "subscribe(shard, topic)" on filter client here,
+      # so the filter client can know that subscriptions changed.
+
+  return ok()
+
+proc removeContentTopicInterest(
+    self: SubscriptionManager, shard: PubsubTopic, topic: ContentTopic
+): Result[void, string] =
+  self.contentTopicSubs.withValue(shard, cTopics):
+    if cTopics[].contains(topic):
+      cTopics[].excl(topic)
+
+      if cTopics[].len == 0 and isNil(self.node.wakuRelay):
+        self.contentTopicSubs.del(shard) # We're done with cTopics here
+
+      # TODO: Call a "unsubscribe(shard, topic)" on filter client here,
+      # so the filter client can know that subscriptions changed.
+
+  return ok()
+
+proc subscribePubsubTopics(
+    self: SubscriptionManager, shards: seq[PubsubTopic]
+): Result[void, string] =
+  if isNil(self.node.wakuRelay):
+    return err("subscribePubsubTopics requires a Relay")
+
+  var errors: seq[string] = @[]
+
+  for shard in shards:
+    if not self.contentTopicSubs.hasKey(shard):
+      self.node.subscribe((kind: PubsubSub, topic: shard), nil).isOkOr:
+        errors.add("shard " & shard & ": " & error)
+        continue
+
+      self.contentTopicSubs[shard] = initHashSet[ContentTopic]()
+
+  if errors.len > 0:
+    return err("subscribePubsubTopics errors: " & errors.join("; "))
+
+  return ok()
+
+proc startSubscriptionManager*(self: SubscriptionManager) =
+  if isNil(self.node.wakuRelay):
+    return
+
+  if self.node.wakuAutoSharding.isSome():
+    # Subscribe relay to all shards in autosharding.
+    let autoSharding = self.node.wakuAutoSharding.get()
+    let clusterId = autoSharding.clusterId
+    let numShards = autoSharding.shardCountGenZero
+
+    if numShards > 0:
+      var clusterPubsubTopics = newSeqOfCap[PubsubTopic](numShards)
+
+      for i in 0 ..< numShards:
+        let shardObj = RelayShard(clusterId: clusterId, shardId: uint16(i))
+        clusterPubsubTopics.add(PubsubTopic($shardObj))
+
+      self.subscribePubsubTopics(clusterPubsubTopics).isOkOr:
+        error "Failed to auto-subscribe Relay to cluster shards: ", error = error
+  else:
+    info "SubscriptionManager has no AutoSharding configured; skipping auto-subscribe."
+
+proc stopSubscriptionManager*(self: SubscriptionManager) {.async.} =
+  discard
+
+proc getActiveSubscriptions*(
+    self: SubscriptionManager
+): seq[tuple[pubsubTopic: string, contentTopics: seq[ContentTopic]]] =
+  var activeSubs: seq[tuple[pubsubTopic: string, contentTopics: seq[ContentTopic]]] =
+    @[]
+
+  for pubsub, cTopicSet in self.contentTopicSubs.pairs:
+    if cTopicSet.len > 0:
+      var cTopicSeq = newSeqOfCap[ContentTopic](cTopicSet.len)
+      for t in cTopicSet:
+        cTopicSeq.add(t)
+      activeSubs.add((pubsub, cTopicSeq))
+
+  return activeSubs
+
+proc getShardForContentTopic(
+    self: SubscriptionManager, topic: ContentTopic
+): Result[PubsubTopic, string] =
+  if self.node.wakuAutoSharding.isSome():
+    let shardObj = ?self.node.wakuAutoSharding.get().getShard(topic)
+    return ok($shardObj)
+
+  return err("SubscriptionManager requires AutoSharding")
+
+proc isSubscribed*(
+    self: SubscriptionManager, topic: ContentTopic
+): Result[bool, string] =
+  let shard = ?self.getShardForContentTopic(topic)
+  return ok(
+    self.contentTopicSubs.hasKey(shard) and self.contentTopicSubs[shard].contains(topic)
+  )
+
+proc isSubscribed*(
+    self: SubscriptionManager, shard: PubsubTopic, contentTopic: ContentTopic
+): bool {.raises: [].} =
+  self.contentTopicSubs.withValue(shard, cTopics):
+    return cTopics[].contains(contentTopic)
+  return false
+
+proc subscribe*(self: SubscriptionManager, topic: ContentTopic): Result[void, string] =
+  if isNil(self.node.wakuRelay) and isNil(self.node.wakuFilterClient):
+    return err("SubscriptionManager requires either Relay or Filter Client.")
+
+  let shard = ?self.getShardForContentTopic(topic)
+
+  if not isNil(self.node.wakuRelay) and not self.contentTopicSubs.hasKey(shard):
+    ?self.subscribePubsubTopics(@[shard])
+
+  ?self.addContentTopicInterest(shard, topic)
+
+  return ok()
+
+proc unsubscribe*(
+    self: SubscriptionManager, topic: ContentTopic
+): Result[void, string] =
+  if isNil(self.node.wakuRelay) and isNil(self.node.wakuFilterClient):
+    return err("SubscriptionManager requires either Relay or Filter Client.")
+
+  let shard = ?self.getShardForContentTopic(topic)
+
+  if self.isSubscribed(shard, topic):
+    ?self.removeContentTopicInterest(shard, topic)
+
+  return ok()
diff --git a/waku/node/delivery_service/subscription_service.nim b/waku/node/delivery_service/subscription_service.nim
deleted file mode 100644
index 78763161b..000000000
--- a/waku/node/delivery_service/subscription_service.nim
+++ /dev/null
@@ -1,64 +0,0 @@
-import chronos, chronicles
-import
-  waku/[
-    waku_core,
-    waku_core/topics,
-    events/message_events,
-    waku_node,
-    common/broker/broker_context,
-  ]
-
-type SubscriptionService* = ref object of RootObj
-  brokerCtx: BrokerContext
-  node: WakuNode
-
-proc new*(T: typedesc[SubscriptionService], node: WakuNode): T =
-  ## The storeClient will help to acquire any possible missed messages
-
-  return SubscriptionService(brokerCtx: node.brokerCtx, node: node)
-
-proc isSubscribed*(
-    self: SubscriptionService, topic: ContentTopic
-): Result[bool, string] =
-  var isSubscribed = false
-  if self.node.wakuRelay.isNil() == false:
-    return self.node.isSubscribed((kind: ContentSub, topic: topic))
-
-  # TODO: Add support for edge mode with Filter subscription management
-  return ok(isSubscribed)
-
-#TODO: later PR may consider to refactor or place this function elsewhere
-# The only important part is that it emits MessageReceivedEvent
-proc getReceiveHandler(self: SubscriptionService): WakuRelayHandler =
-  return proc(topic: PubsubTopic, msg: WakuMessage): Future[void] {.async, gcsafe.} =
-    let msgHash = computeMessageHash(topic, msg).to0xHex()
-    info "API received message",
-      pubsubTopic = topic, contentTopic = msg.contentTopic, msgHash = msgHash
-
-    MessageReceivedEvent.emit(self.brokerCtx, msgHash, msg)
-
-proc subscribe*(self: SubscriptionService, topic: ContentTopic): Result[void, string] =
-  let isSubscribed = self.isSubscribed(topic).valueOr:
-    error "Failed to check subscription status: ", error = error
-    return err("Failed to check subscription status: " & error)
-
-  if isSubscribed == false:
-    if self.node.wakuRelay.isNil() == false:
-      self.node.subscribe((kind: ContentSub, topic: topic), self.getReceiveHandler()).isOkOr:
-        error "Failed to subscribe: ", error = error
-        return err("Failed to subscribe: " & error)
-
-    # TODO: Add support for edge mode with Filter subscription management
-
-  return ok()
-
-proc unsubscribe*(
-    self: SubscriptionService, topic: ContentTopic
-): Result[void, string] =
-  if self.node.wakuRelay.isNil() == false:
-    self.node.unsubscribe((kind: ContentSub, topic: topic)).isOkOr:
-      error "Failed to unsubscribe: ", error = error
-      return err("Failed to unsubscribe: " & error)
-
-  # TODO: Add support for edge mode with Filter subscription management
-  return ok()
diff --git a/waku/node/kernel_api/relay.nim b/waku/node/kernel_api/relay.nim
index a0a128449..ec4d05ddd 100644
--- a/waku/node/kernel_api/relay.nim
+++ b/waku/node/kernel_api/relay.nim
@@ -19,16 +19,20 @@ import
 import libp2p/utility
 
 import
-  ../waku_node,
-  ../../waku_relay,
-  ../../waku_core,
-  ../../waku_core/topics/sharding,
-  ../../waku_filter_v2,
-  ../../waku_archive_legacy,
-  ../../waku_archive,
-  ../../waku_store_sync,
-  ../peer_manager,
-  ../../waku_rln_relay
+  waku/[
+    waku_relay,
+    waku_core,
+    waku_core/topics/sharding,
+    waku_filter_v2,
+    waku_archive_legacy,
+    waku_archive,
+    waku_store_sync,
+    waku_rln_relay,
+    node/waku_node,
+    node/peer_manager,
+    common/broker/broker_context,
+    events/message_events,
+  ]
 
 export waku_relay.WakuRelayHandler
 
@@ -44,14 +48,25 @@ logScope:
 ## Waku relay
 
 proc registerRelayHandler(
-    node: WakuNode, topic: PubsubTopic, appHandler: WakuRelayHandler
-) =
+    node: WakuNode, topic: PubsubTopic, appHandler: WakuRelayHandler = nil
+): bool =
   ## Registers the only handler for the given topic.
   ## Notice that this handler internally calls other handlers, such as filter,
   ## archive, etc, plus the handler provided by the application.
+  ## Returns `true` if a mesh subscription was created or `false` if the relay
+  ## was already subscribed to the topic.
 
-  if node.wakuRelay.isSubscribed(topic):
-    return
+  let alreadySubscribed = node.wakuRelay.isSubscribed(topic)
+
+  if not appHandler.isNil():
+    if not alreadySubscribed or not node.legacyAppHandlers.hasKey(topic):
+      node.legacyAppHandlers[topic] = appHandler
+    else:
+      debug "Legacy appHandler already exists for active PubsubTopic, ignoring new handler",
+        topic = topic
+
+  if alreadySubscribed:
+    return false
 
   proc traceHandler(topic: PubsubTopic, msg: WakuMessage) {.async, gcsafe.} =
     let msgSizeKB = msg.payload.len / 1000
@@ -82,6 +97,9 @@ proc registerRelayHandler(
 
       node.wakuStoreReconciliation.messageIngress(topic, msg)
 
+  proc internalHandler(topic: PubsubTopic, msg: WakuMessage) {.async, gcsafe.} =
+    MessageSeenEvent.emit(node.brokerCtx, topic, msg)
+
   let uniqueTopicHandler = proc(
       topic: PubsubTopic, msg: WakuMessage
   ): Future[void] {.async, gcsafe.} =
@@ -89,7 +107,15 @@ proc registerRelayHandler(
     await filterHandler(topic, msg)
     await archiveHandler(topic, msg)
     await syncHandler(topic, msg)
-    await appHandler(topic, msg)
+    await internalHandler(topic, msg)
+
+    # Call the legacy (kernel API) app handler if it exists.
+    # Normally, hasKey is false and the MessageSeenEvent bus (new API) is used instead.
+    # But we need to support legacy behavior (kernel API use), hence this.
+    # NOTE: We can delete `legacyAppHandlers` if instead we refactor WakuRelay to support multiple
+    # PubsubTopic handlers, since that's actually supported by libp2p PubSub (bigger refactor...)
+    if node.legacyAppHandlers.hasKey(topic) and not node.legacyAppHandlers[topic].isNil():
+      await node.legacyAppHandlers[topic](topic, msg)
 
   node.wakuRelay.subscribe(topic, uniqueTopicHandler)
 
@@ -115,8 +141,11 @@ proc subscribe*(
 ): Result[void, string] =
   ## Subscribes to a PubSub or Content topic. Triggers handler when receiving messages on
   ## this topic. WakuRelayHandler is a method that takes a topic and a Waku message.
+  ## If `handler` is nil, the API call will subscribe to the topic in the relay mesh
+  ## but no app handler will be registered at this time (it can be registered later with
+  ## another call to this proc for the same gossipsub topic).
 
-  if node.wakuRelay.isNil():
+  if isNil(node.wakuRelay):
    error "Invalid API call to `subscribe`. WakuRelay not mounted."
     return err("Invalid API call to `subscribe`. WakuRelay not mounted.")
 
@@ -124,13 +153,15 @@
     error "Failed to decode subscription event", error = error
     return err("Failed to decode subscription event: " & error)
 
-  if node.wakuRelay.isSubscribed(pubsubTopic):
-    warn "No-effect API call to subscribe. Already subscribed to topic", pubsubTopic
-    return ok()
-
-  info "subscribe", pubsubTopic, contentTopicOp
-  node.registerRelayHandler(pubsubTopic, handler)
-  node.topicSubscriptionQueue.emit((kind: PubsubSub, topic: pubsubTopic))
+  if node.registerRelayHandler(pubsubTopic, handler):
+    info "subscribe", pubsubTopic, contentTopicOp
+    node.topicSubscriptionQueue.emit((kind: PubsubSub, topic: pubsubTopic))
+  else:
+    if isNil(handler):
+      warn "No-effect API call to subscribe. Already subscribed to topic", pubsubTopic
+    else:
+      info "subscribe (was already subscribed in the mesh; appHandler set)",
+        pubsubTopic = pubsubTopic
 
   return ok()
 
@@ -138,8 +169,10 @@ proc unsubscribe*(
     node: WakuNode, subscription: SubscriptionEvent
 ): Result[void, string] =
   ## Unsubscribes from a specific PubSub or Content topic.
+  ## This will both unsubscribe from the relay mesh and remove the app handler, if any.
+  ## NOTE: This works because using MAPI and Kernel API at the same time is unsupported.
 
-  if node.wakuRelay.isNil():
+  if isNil(node.wakuRelay):
     error "Invalid API call to `unsubscribe`. WakuRelay not mounted."
     return err("Invalid API call to `unsubscribe`. 
WakuRelay not mounted.") @@ -147,13 +180,20 @@ proc unsubscribe*( error "Failed to decode unsubscribe event", error = error return err("Failed to decode unsubscribe event: " & error) - if not node.wakuRelay.isSubscribed(pubsubTopic): - warn "No-effect API call to `unsubscribe`. Was not subscribed", pubsubTopic - return ok() + let hadHandler = node.legacyAppHandlers.hasKey(pubsubTopic) + if hadHandler: + node.legacyAppHandlers.del(pubsubTopic) - info "unsubscribe", pubsubTopic, contentTopicOp - node.wakuRelay.unsubscribe(pubsubTopic) - node.topicSubscriptionQueue.emit((kind: PubsubUnsub, topic: pubsubTopic)) + if node.wakuRelay.isSubscribed(pubsubTopic): + info "unsubscribe", pubsubTopic, contentTopicOp + node.wakuRelay.unsubscribe(pubsubTopic) + node.topicSubscriptionQueue.emit((kind: PubsubUnsub, topic: pubsubTopic)) + else: + if not hadHandler: + warn "No-effect API call to `unsubscribe`. Was not subscribed", pubsubTopic + else: + info "unsubscribe (was not subscribed in the mesh; appHandler removed)", + pubsubTopic = pubsubTopic return ok() diff --git a/waku/node/waku_node.nim b/waku/node/waku_node.nim index 254387c32..0cef4cc5d 100644 --- a/waku/node/waku_node.nim +++ b/waku/node/waku_node.nim @@ -146,6 +146,8 @@ type started*: bool # Indicates that node has started listening topicSubscriptionQueue*: AsyncEventQueue[SubscriptionEvent] rateLimitSettings*: ProtocolRateLimitSettings + legacyAppHandlers*: Table[PubsubTopic, WakuRelayHandler] + ## Kernel API Relay appHandlers (if any) wakuMix*: WakuMix edgeTopicsHealth*: Table[PubsubTopic, TopicHealth] edgeHealthEvent*: AsyncEvent diff --git a/waku/waku_core/subscription/subscription_manager.nim b/waku/waku_core/subscription/subscription_manager.nim index 1b950b3b4..ccade763b 100644 --- a/waku/waku_core/subscription/subscription_manager.nim +++ b/waku/waku_core/subscription/subscription_manager.nim @@ -5,19 +5,19 @@ import std/tables, results, chronicles, chronos import ./push_handler, ../topics, ../message ## 
Subscription manager -type SubscriptionManager* = object +type LegacySubscriptionManager* = object subscriptions: TableRef[(string, ContentTopic), FilterPushHandler] -proc init*(T: type SubscriptionManager): T = - SubscriptionManager( +proc init*(T: type LegacySubscriptionManager): T = + LegacySubscriptionManager( subscriptions: newTable[(string, ContentTopic), FilterPushHandler]() ) -proc clear*(m: var SubscriptionManager) = +proc clear*(m: var LegacySubscriptionManager) = m.subscriptions.clear() proc registerSubscription*( - m: SubscriptionManager, + m: LegacySubscriptionManager, pubsubTopic: PubsubTopic, contentTopic: ContentTopic, handler: FilterPushHandler, @@ -29,12 +29,12 @@ proc registerSubscription*( error "failed to register filter subscription", error = getCurrentExceptionMsg() proc removeSubscription*( - m: SubscriptionManager, pubsubTopic: PubsubTopic, contentTopic: ContentTopic + m: LegacySubscriptionManager, pubsubTopic: PubsubTopic, contentTopic: ContentTopic ) = m.subscriptions.del((pubsubTopic, contentTopic)) proc notifySubscriptionHandler*( - m: SubscriptionManager, + m: LegacySubscriptionManager, pubsubTopic: PubsubTopic, contentTopic: ContentTopic, message: WakuMessage, @@ -48,5 +48,5 @@ proc notifySubscriptionHandler*( except CatchableError: discard -proc getSubscriptionsCount*(m: SubscriptionManager): int = +proc getSubscriptionsCount*(m: LegacySubscriptionManager): int = m.subscriptions.len() From db19da9254994b8363f0daa6a8869bfb9544e8a6 Mon Sep 17 00:00:00 2001 From: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com> Date: Mon, 2 Mar 2026 18:56:39 +0100 Subject: [PATCH 65/70] move destroy api to node_api, add some security checks and fix a possible resource leak (#3736) --- liblogosdelivery/declare_lib.nim | 9 ++++++++ liblogosdelivery/liblogosdelivery.nim | 22 ------------------- .../logos_delivery_api/node_api.nim | 19 ++++++++++++++++ 3 files changed, 28 insertions(+), 22 deletions(-) diff --git 
a/liblogosdelivery/declare_lib.nim b/liblogosdelivery/declare_lib.nim index 98209c649..5087a0dee 100644 --- a/liblogosdelivery/declare_lib.nim +++ b/liblogosdelivery/declare_lib.nim @@ -1,8 +1,12 @@ import ffi +import std/locks import waku/factory/waku declareLibrary("logosdelivery") +var eventCallbackLock: Lock +initLock(eventCallbackLock) + template requireInitializedNode*( ctx: ptr FFIContext[Waku], opName: string, onError: untyped ) = @@ -20,5 +24,10 @@ proc logosdelivery_set_event_callback( echo "error: invalid context in logosdelivery_set_event_callback" return + # prevent race conditions that might happen due to incorrect usage. + eventCallbackLock.acquire() + defer: + eventCallbackLock.release() + ctx[].eventCallback = cast[pointer](callback) ctx[].eventUserData = userData diff --git a/liblogosdelivery/liblogosdelivery.nim b/liblogosdelivery/liblogosdelivery.nim index 7d068b065..b6a4c0bda 100644 --- a/liblogosdelivery/liblogosdelivery.nim +++ b/liblogosdelivery/liblogosdelivery.nim @@ -5,25 +5,3 @@ import waku/factory/waku, waku/node/waku_node, ./declare_lib ################################################################################ ## Include different APIs, i.e. 
all procs with {.ffi.} pragma include ./logos_delivery_api/node_api, ./logos_delivery_api/messaging_api - -################################################################################ -### Exported procs - -proc logosdelivery_destroy( - ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer -): cint {.dynlib, exportc, cdecl.} = - initializeLibrary() - checkParams(ctx, callback, userData) - - ffi.destroyFFIContext(ctx).isOkOr: - let msg = "liblogosdelivery error: " & $error - callback(RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), userData) - return RET_ERR - - ## always need to invoke the callback although we don't retrieve value to the caller - callback(RET_OK, nil, 0, userData) - - return RET_OK - -# ### End of exported procs -# ################################################################################ diff --git a/liblogosdelivery/logos_delivery_api/node_api.nim b/liblogosdelivery/logos_delivery_api/node_api.nim index 5c2d7885f..2d6c8d0de 100644 --- a/liblogosdelivery/logos_delivery_api/node_api.nim +++ b/liblogosdelivery/logos_delivery_api/node_api.nim @@ -29,6 +29,22 @@ registerReqFFI(CreateNodeRequest, ctx: ptr FFIContext[Waku]): return ok("") +proc logosdelivery_destroy( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +): cint {.dynlib, exportc, cdecl.} = + initializeLibrary() + checkParams(ctx, callback, userData) + + ffi.destroyFFIContext(ctx).isOkOr: + let msg = "liblogosdelivery error: " & $error + callback(RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), userData) + return RET_ERR + + ## always invoke the callback, even though no value is returned to the caller + callback(RET_OK, nil, 0, userData) + + return RET_OK + proc logosdelivery_create_node( configJson: cstring, callback: FFICallback, userData: pointer ): pointer {.dynlib, exportc, cdecl.} = @@ -50,6 +66,9 @@ proc logosdelivery_create_node( ).isOkOr: let msg = "error in sendRequestToFFIThread: " & $error callback(RET_ERR, unsafeAddr 
msg[0], cast[csize_t](len(msg)), userData) + # free allocated resources, since the context won't be returned to the caller + ffi.destroyFFIContext(ctx).isOkOr: + chronicles.error "Error in destroyFFIContext after sendRequestToFFIThread during creation", err = $error return nil return ctx From 7e36e268676bbc0fe6d81664732de870d8352c7d Mon Sep 17 00:00:00 2001 From: Fabiana Cecin Date: Tue, 3 Mar 2026 08:11:16 -0300 Subject: [PATCH 66/70] Fix NodeHealthMonitor logspam (#3743) --- waku/node/health_monitor/node_health_monitor.nim | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-) diff --git a/waku/node/health_monitor/node_health_monitor.nim b/waku/node/health_monitor/node_health_monitor.nim index ba0518e61..ddba47ccb 100644 --- a/waku/node/health_monitor/node_health_monitor.nim +++ b/waku/node/health_monitor/node_health_monitor.nim @@ -405,13 +405,8 @@ proc calculateConnectionState*( elif kind in FilterClientProtocols: filterCount = max(filterCount, strength) - debug "calculateConnectionState", - protocol = kind, - strength = strength, - relayCount = relayCount, - storeClientCount = storeClientCount, - lightpushCount = lightpushCount, - filterCount = filterCount + debug "calculateConnectionState", + relayCount, storeClientCount, lightpushCount, filterCount # Relay connectivity should be a sufficient check in Core mode. 
# "Store peers" are relay peers because incoming messages in @@ -528,6 +523,9 @@ proc healthLoop(hm: NodeHealthMonitor) {.async.} = let newConnectionStatus = hm.calculateConnectionState() if newConnectionStatus != hm.connectionStatus: + debug "connectionStatus change", + oldstatus = hm.connectionStatus, newstatus = newConnectionStatus + hm.connectionStatus = newConnectionStatus EventConnectionStatusChange.emit(hm.node.brokerCtx, newConnectionStatus) From 09618a2656195f3c2d529e97d0447ec13d4f118a Mon Sep 17 00:00:00 2001 From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com> Date: Tue, 3 Mar 2026 13:32:45 +0100 Subject: [PATCH 67/70] Add debug API in liblogosdelivery (#3742) --- .../examples/logosdelivery_example.c | 24 +++++++-- liblogosdelivery/liblogosdelivery.h | 16 ++++++ liblogosdelivery/liblogosdelivery.nim | 6 ++- .../logos_delivery_api/debug_api.nim | 46 +++++++++++++++++ tests/wakunode2/test_app.nim | 2 +- waku/factory/waku.nim | 10 ++-- waku/factory/waku_state_info.nim | 50 +++++++++++++++++++ 7 files changed, 142 insertions(+), 12 deletions(-) create mode 100644 liblogosdelivery/logos_delivery_api/debug_api.nim create mode 100644 waku/factory/waku_state_info.nim diff --git a/liblogosdelivery/examples/logosdelivery_example.c b/liblogosdelivery/examples/logosdelivery_example.c index 5437be427..c0929e650 100644 --- a/liblogosdelivery/examples/logosdelivery_example.c +++ b/liblogosdelivery/examples/logosdelivery_example.c @@ -161,7 +161,23 @@ int main() { // Wait for subscription sleep(1); - printf("\n5. Sending a message...\n"); + printf("\n5. 
Retrieving all possible node info ids...\n"); + logosdelivery_get_available_node_info_ids(ctx, simple_callback, (void *)"get_available_node_info_ids"); + + printf("\nRetrieving node info for a specific invalid ID...\n"); + logosdelivery_get_node_info(ctx, simple_callback, (void *)"get_node_info", "WrongNodeInfoId"); + + printf("\nRetrieving node info for several valid IDs...\n"); + logosdelivery_get_node_info(ctx, simple_callback, (void *)"get_node_info", "Version"); + // logosdelivery_get_node_info(ctx, simple_callback, (void *)"get_node_info", "Metrics"); + logosdelivery_get_node_info(ctx, simple_callback, (void *)"get_node_info", "MyMultiaddresses"); + logosdelivery_get_node_info(ctx, simple_callback, (void *)"get_node_info", "MyENR"); + logosdelivery_get_node_info(ctx, simple_callback, (void *)"get_node_info", "MyPeerId"); + + printf("\nRetrieving available configs...\n"); + logosdelivery_get_available_configs(ctx, simple_callback, (void *)"get_available_configs"); + + printf("\n6. Sending a message...\n"); printf("Watch for message events (sent, propagated, or error):\n"); // Create base64-encoded payload: "Hello, Logos Messaging!" const char *message = "{" @@ -175,17 +191,17 @@ int main() { printf("Waiting for message delivery events...\n"); sleep(60); - printf("\n6. Unsubscribing from content topic...\n"); + printf("\n7. Unsubscribing from content topic...\n"); logosdelivery_unsubscribe(ctx, simple_callback, (void *)"unsubscribe", contentTopic); sleep(1); - printf("\n7. Stopping node...\n"); + printf("\n8. Stopping node...\n"); logosdelivery_stop_node(ctx, simple_callback, (void *)"stop_node"); sleep(1); - printf("\n8. Destroying context...\n"); + printf("\n9. 
Destroying context...\n"); logosdelivery_destroy(ctx, simple_callback, (void *)"destroy"); printf("\n=== Example completed ===\n"); diff --git a/liblogosdelivery/liblogosdelivery.h b/liblogosdelivery/liblogosdelivery.h index b014d6385..0d318b691 100644 --- a/liblogosdelivery/liblogosdelivery.h +++ b/liblogosdelivery/liblogosdelivery.h @@ -75,6 +75,22 @@ extern "C" FFICallBack callback, void *userData); + // Retrieves the list of available node info IDs. + int logosdelivery_get_available_node_info_ids(void *ctx, + FFICallBack callback, + void *userData); + + // Given a node info ID, retrieves the corresponding info. + int logosdelivery_get_node_info(void *ctx, + FFICallBack callback, + void *userData, + const char *nodeInfoId); + + // Retrieves the list of available configurations. + int logosdelivery_get_available_configs(void *ctx, + FFICallBack callback, + void *userData); + #ifdef __cplusplus } #endif diff --git a/liblogosdelivery/liblogosdelivery.nim b/liblogosdelivery/liblogosdelivery.nim index b6a4c0bda..fc907498a 100644 --- a/liblogosdelivery/liblogosdelivery.nim +++ b/liblogosdelivery/liblogosdelivery.nim @@ -4,4 +4,8 @@ import waku/factory/waku, waku/node/waku_node, ./declare_lib ################################################################################ ## Include different APIs, i.e. 
all procs with {.ffi.} pragma -include ./logos_delivery_api/node_api, ./logos_delivery_api/messaging_api + +include + ./logos_delivery_api/node_api, + ./logos_delivery_api/messaging_api, + ./logos_delivery_api/debug_api diff --git a/liblogosdelivery/logos_delivery_api/debug_api.nim b/liblogosdelivery/logos_delivery_api/debug_api.nim new file mode 100644 index 000000000..bee8ab537 --- /dev/null +++ b/liblogosdelivery/logos_delivery_api/debug_api.nim @@ -0,0 +1,46 @@ +import std/[json, strutils] +import waku/factory/waku_state_info + +proc logosdelivery_get_available_node_info_ids( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +) {.ffi.} = + ## Returns the list of all available node info item ids that + ## can be queried with `get_node_info_item`. + requireInitializedNode(ctx, "GetNodeInfoIds"): + return err(errMsg) + + return ok($ctx.myLib[].stateInfo.getAllPossibleInfoItemIds()) + +proc logosdelivery_get_node_info( + ctx: ptr FFIContext[Waku], + callback: FFICallBack, + userData: pointer, + nodeInfoId: cstring, +) {.ffi.} = + ## Returns the content of the node info item with the given id if it exists. + requireInitializedNode(ctx, "GetNodeInfoItem"): + return err(errMsg) + + let infoItemIdEnum = + try: + parseEnum[NodeInfoId]($nodeInfoId) + except ValueError: + return err("Invalid node info id: " & $nodeInfoId) + + return ok(ctx.myLib[].stateInfo.getNodeInfoItem(infoItemIdEnum)) + +proc logosdelivery_get_available_configs( + ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer +) {.ffi.} = + ## Returns information about the accepted config items. + ## For analogy with a CLI app, this is the info when typing --help for a command. + requireInitializedNode(ctx, "GetAvailableConfigs"): + return err(errMsg) + + ## TODO: we are now returning a simple default value for NodeConfig. + ## The NodeConfig struct is too complex and we need to have a flattened simpler config. 
+ ## The expected returned value for this is a list of possible config items with their + ## description, accepted values, default value, etc. + + let defaultConfig = NodeConfig.init() + return ok($(%*defaultConfig)) diff --git a/tests/wakunode2/test_app.nim b/tests/wakunode2/test_app.nim index e94a3b21d..6ec6043fe 100644 --- a/tests/wakunode2/test_app.nim +++ b/tests/wakunode2/test_app.nim @@ -21,7 +21,7 @@ suite "Wakunode2 - Waku": raiseAssert error ## When - let version = waku.version + let version = waku.stateInfo.getNodeInfoItem(NodeInfoId.Version) ## Then check: diff --git a/waku/factory/waku.nim b/waku/factory/waku.nim index ff0ab0568..dbee8d093 100644 --- a/waku/factory/waku.nim +++ b/waku/factory/waku.nim @@ -47,7 +47,8 @@ import factory/internal_config, factory/app_callbacks, ], - ./waku_conf + ./waku_conf, + ./waku_state_info logScope: topics = "wakunode waku" @@ -56,7 +57,7 @@ logScope: const git_version* {.strdefine.} = "n/a" type Waku* = ref object - version: string + stateInfo*: WakuStateInfo conf*: WakuConf rng*: ref HmacDrbgContext @@ -79,9 +80,6 @@ type Waku* = ref object brokerCtx*: BrokerContext -func version*(waku: Waku): string = - waku.version - proc setupSwitchServices( waku: Waku, conf: WakuConf, circuitRelay: Relay, rng: ref HmacDrbgContext ) = @@ -216,7 +214,7 @@ proc new*( return err("could not create delivery service: " & $error) var waku = Waku( - version: git_version, + stateInfo: WakuStateInfo.init(node), conf: wakuConf, rng: rng, key: wakuConf.nodeKey, diff --git a/waku/factory/waku_state_info.nim b/waku/factory/waku_state_info.nim new file mode 100644 index 000000000..5dc72a693 --- /dev/null +++ b/waku/factory/waku_state_info.nim @@ -0,0 +1,50 @@ +## This module is aimed to collect and provide information about the state of the node, +## such as its version, metrics values, etc. 
+## It was originally designed to be used by the debug API, which acts as a consumer of +## this information, but any other module can populate the information it needs to be +## accessible through the debug API. + +import std/[tables, sequtils, strutils] +import metrics, eth/p2p/discoveryv5/enr, libp2p/peerid +import waku/waku_node + +type + NodeInfoId* {.pure.} = enum + Version + Metrics + MyMultiaddresses + MyENR + MyPeerId + + WakuStateInfo* {.requiresInit.} = object + node: WakuNode + +proc getAllPossibleInfoItemIds*(self: WakuStateInfo): seq[NodeInfoId] = + ## Returns all possible options that can be queried to learn about the node's information. + var ret = newSeq[NodeInfoId](0) + for item in NodeInfoId: + ret.add(item) + return ret + +proc getMetrics(): string = + {.gcsafe.}: + return defaultRegistry.toText() ## defaultRegistry is {.global.} in metrics module + +proc getNodeInfoItem*(self: WakuStateInfo, infoItemId: NodeInfoId): string = + ## Returns the content of the info item with the given id if it exists. + case infoItemId + of NodeInfoId.Version: + return git_version + of NodeInfoId.Metrics: + return getMetrics() + of NodeInfoId.MyMultiaddresses: + return self.node.info().listenAddresses.join(",") + of NodeInfoId.MyENR: + return self.node.enr.toURI() + of NodeInfoId.MyPeerId: + return $PeerId(self.node.peerId()) + else: + return "unknown info item id" + +proc init*(T: typedesc[WakuStateInfo], node: WakuNode): T = + return WakuStateInfo(node: node) From 1f9c4cb8cc4cac2347d19bac6eecb169775ca366 Mon Sep 17 00:00:00 2001 From: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com> Date: Tue, 3 Mar 2026 19:17:54 +0100 Subject: [PATCH 68/70] Chore: adapt cli args for delivery api (#3744) * LogosDeliveryAPI: NodeConfig -> WakuNodeConf + mode selector and logos.dev preset * Adjust tests for the logos.dev preset * change default agentString from nwaku to logos-delivery * Add p2pReliability switch to presets and make it default to true. 
* Borrow entryNode idea from NodeConfig to WakuNodeConf as an easy shortcut among the different bootstrap node lists, which all need different formats * Fix rateLimit assignment for builder * Remove Core mode default as we already have a default, user must override * Removed obsolete API createNode with NodeConfig - tests are refactored for WakuNodeConf usage * Fix failing test due to the twn preset (clusterId 1) default overriding maxMessageSize. Fix readme. --- examples/api_example/api_example.nim | 27 +- liblogosdelivery/README.md | 20 +- .../examples/logosdelivery_example.c | 34 +- liblogosdelivery/liblogosdelivery.h | 4 +- .../logos_delivery_api/node_api.nim | 38 +- tests/api/test_api_health.nim | 44 +- tests/api/test_api_send.nim | 45 +- tests/api/test_api_subscription.nim | 37 +- tests/api/test_entry_nodes.nim | 2 +- tests/api/test_node_conf.nim | 1206 ++++------------- tests/test_waku.nim | 71 +- tools/confutils/cli_args.nim | 80 +- {waku/api => tools/confutils}/entry_nodes.nim | 0 waku/api.nim | 3 +- waku/api/api.nim | 10 +- waku/api/api_conf.nim | 15 +- .../filter_service_conf_builder.nim | 6 + .../conf_builder/rate_limit_conf_builder.nim | 6 + .../conf_builder/waku_conf_builder.nim | 41 +- waku/factory/networks_config.nim | 40 + waku/rest_api/endpoint/builder.nim | 4 +- 21 files changed, 641 insertions(+), 1092 deletions(-) rename {waku/api => tools/confutils}/entry_nodes.nim (100%) diff --git a/examples/api_example/api_example.nim b/examples/api_example/api_example.nim index 37dd5d34b..4a7cde5db 100644 --- a/examples/api_example/api_example.nim +++ b/examples/api_example/api_example.nim @@ -59,19 +59,24 @@ when isMainModule: echo "Starting Waku node..." 
- let config = - if (args.ethRpcEndpoint == ""): - # Create a basic configuration for the Waku node - # No RLN as we don't have an ETH RPC Endpoint - NodeConfig.init( - protocolsConfig = ProtocolsConfig.init(entryNodes = @[], clusterId = 42) - ) - else: - # Connect to TWN, use ETH RPC Endpoint for RLN - NodeConfig.init(mode = WakuMode.Core, ethRpcEndpoints = @[args.ethRpcEndpoint]) + # Use WakuNodeConf (the CLI configuration type) for node setup + var conf = defaultWakuNodeConf().valueOr: + echo "Failed to create default config: ", error + quit(QuitFailure) + + if args.ethRpcEndpoint == "": + # Create a basic configuration for the Waku node + # No RLN as we don't have an ETH RPC Endpoint + conf.mode = Core + conf.preset = "logos.dev" + else: + # Connect to TWN, use ETH RPC Endpoint for RLN + conf.mode = Core + conf.preset = "twn" + conf.ethClientUrls = @[EthRpcUrl(args.ethRpcEndpoint)] # Create the node using the library API's createNode function - let node = (waitFor createNode(config)).valueOr: + let node = (waitFor createNode(conf)).valueOr: echo "Failed to create node: ", error quit(QuitFailure) diff --git a/liblogosdelivery/README.md b/liblogosdelivery/README.md index f9909dd3d..e8352c611 100644 --- a/liblogosdelivery/README.md +++ b/liblogosdelivery/README.md @@ -32,18 +32,17 @@ void *logosdelivery_create_node( ```json { "mode": "Core", - "clusterId": 1, - "entryNodes": [ - "enrtree://AIRVQ5DDA4FFWLRBCHJWUWOO6X6S4ZTZ5B667LQ6AJU6PEYDLRD5O@sandbox.waku.nodes.status.im" - ], - "networkingConfig": { - "listenIpv4": "0.0.0.0", - "p2pTcpPort": 60000, - "discv5UdpPort": 9000 - } + "preset": "logos.dev", + "listenAddress": "0.0.0.0", + "tcpPort": 60000, + "discv5UdpPort": 9000 } ``` +Configuration uses flat field names matching `WakuNodeConf` in `tools/confutils/cli_args.nim`. +Use `"preset"` to select a network preset (e.g., `"twn"`, `"logos.dev"`) which auto-configures +entry nodes, cluster ID, sharding, and other network-specific settings. 
+ #### `logosdelivery_start_node` Starts the node. @@ -207,8 +206,9 @@ void callback(int ret, const char *msg, size_t len, void *userData) { int main() { const char *config = "{" + "\"logLevel\": \"INFO\"," "\"mode\": \"Core\"," - "\"clusterId\": 1" + "\"preset\": \"logos.dev\"" "}"; // Create node diff --git a/liblogosdelivery/examples/logosdelivery_example.c b/liblogosdelivery/examples/logosdelivery_example.c index c0929e650..61333f84d 100644 --- a/liblogosdelivery/examples/logosdelivery_example.c +++ b/liblogosdelivery/examples/logosdelivery_example.c @@ -61,7 +61,7 @@ void event_callback(int ret, const char *msg, size_t len, void *userData) { char messageHash[128]; extract_json_field(eventJson, "requestId", requestId, sizeof(requestId)); extract_json_field(eventJson, "messageHash", messageHash, sizeof(messageHash)); - printf("📤 [EVENT] Message sent - RequestID: %s, Hash: %s\n", requestId, messageHash); + printf("[EVENT] Message sent - RequestID: %s, Hash: %s\n", requestId, messageHash); } else if (strcmp(eventType, "message_error") == 0) { char requestId[128]; @@ -70,7 +70,7 @@ void event_callback(int ret, const char *msg, size_t len, void *userData) { extract_json_field(eventJson, "requestId", requestId, sizeof(requestId)); extract_json_field(eventJson, "messageHash", messageHash, sizeof(messageHash)); extract_json_field(eventJson, "error", error, sizeof(error)); - printf("❌ [EVENT] Message error - RequestID: %s, Hash: %s, Error: %s\n", + printf("[EVENT] Message error - RequestID: %s, Hash: %s, Error: %s\n", requestId, messageHash, error); } else if (strcmp(eventType, "message_propagated") == 0) { @@ -78,10 +78,15 @@ void event_callback(int ret, const char *msg, size_t len, void *userData) { char messageHash[128]; extract_json_field(eventJson, "requestId", requestId, sizeof(requestId)); extract_json_field(eventJson, "messageHash", messageHash, sizeof(messageHash)); - printf("✅ [EVENT] Message propagated - RequestID: %s, Hash: %s\n", requestId, messageHash); + 
printf("[EVENT] Message propagated - RequestID: %s, Hash: %s\n", requestId, messageHash); + + } else if (strcmp(eventType, "connection_status_change") == 0) { + char connectionStatus[256]; + extract_json_field(eventJson, "connectionStatus", connectionStatus, sizeof(connectionStatus)); + printf("[EVENT] Connection status change - Status: %s\n", connectionStatus); } else { - printf("ℹ️ [EVENT] Unknown event type: %s\n", eventType); + printf("[EVENT] Unknown event type: %s\n", eventType); } free(eventJson); @@ -109,23 +114,12 @@ void simple_callback(int ret, const char *msg, size_t len, void *userData) { int main() { printf("=== Logos Messaging API (LMAPI) Example ===\n\n"); - // Configuration JSON for creating a node + // Configuration JSON using WakuNodeConf field names (flat structure). + // Field names match Nim identifiers from WakuNodeConf in tools/confutils/cli_args.nim. const char *config = "{" - "\"logLevel\": \"DEBUG\"," - // "\"mode\": \"Edge\"," + "\"logLevel\": \"INFO\"," "\"mode\": \"Core\"," - "\"protocolsConfig\": {" - "\"entryNodes\": [\"/dns4/node-01.do-ams3.misc.logos-chat.status.im/tcp/30303/p2p/16Uiu2HAkxoqUTud5LUPQBRmkeL2xP4iKx2kaABYXomQRgmLUgf78\"]," - "\"clusterId\": 42," - "\"autoShardingConfig\": {" - "\"numShardsInCluster\": 8" - "}" - "}," - "\"networkingConfig\": {" - "\"listenIpv4\": \"0.0.0.0\"," - "\"p2pTcpPort\": 60000," - "\"discv5UdpPort\": 9000" - "}" + "\"preset\": \"logos.dev\"" "}"; printf("1. Creating node...\n"); @@ -152,7 +146,7 @@ int main() { logosdelivery_start_node(ctx, simple_callback, (void *)"start_node"); // Wait for node to start - sleep(2); + sleep(10); printf("\n4. 
Subscribing to content topic...\n"); const char *contentTopic = "/example/1/chat/proto"; diff --git a/liblogosdelivery/liblogosdelivery.h b/liblogosdelivery/liblogosdelivery.h index 0d318b691..5092db9f2 100644 --- a/liblogosdelivery/liblogosdelivery.h +++ b/liblogosdelivery/liblogosdelivery.h @@ -22,7 +22,9 @@ extern "C" // Creates a new instance of the node from the given configuration JSON. // Returns a pointer to the Context needed by the rest of the API functions. - // Configuration should be in JSON format following the NodeConfig structure. + // Configuration should be in JSON format using WakuNodeConf field names. + // Field names match Nim identifiers from WakuNodeConf (camelCase). + // Example: {"mode": "Core", "clusterId": 42, "relay": true} void *logosdelivery_create_node( const char *configJson, FFICallBack callback, diff --git a/liblogosdelivery/logos_delivery_api/node_api.nim b/liblogosdelivery/logos_delivery_api/node_api.nim index 2d6c8d0de..1835f75b5 100644 --- a/liblogosdelivery/logos_delivery_api/node_api.nim +++ b/liblogosdelivery/logos_delivery_api/node_api.nim @@ -1,10 +1,11 @@ -import std/json -import chronos, results, ffi +import std/[json, strutils] +import chronos, chronicles, results, confutils, confutils/std/net, ffi import waku/factory/waku, waku/node/waku_node, - waku/api/[api, api_conf, types], + waku/api/[api, types], waku/events/[message_events, health_events], + tools/confutils/cli_args, ../declare_lib, ../json_event @@ -14,15 +15,32 @@ proc `%`*(id: RequestId): JsonNode = registerReqFFI(CreateNodeRequest, ctx: ptr FFIContext[Waku]): proc(configJson: cstring): Future[Result[string, string]] {.async.} = - ## Parse the JSON configuration and create a node - let nodeConfig = - try: - decodeNodeConfigFromJson($configJson) - except SerializationError as e: - return err("Failed to parse config JSON: " & e.msg) + ## Parse the JSON configuration using fieldPairs approach (WakuNodeConf) + var conf = defaultWakuNodeConf().valueOr: + return 
err("Failed creating default conf: " & error) + + var jsonNode: JsonNode + try: + jsonNode = parseJson($configJson) + except Exception: + return err( + "Failed to parse config JSON: " & getCurrentExceptionMsg() & + " configJson string: " & $configJson + ) + + for confField, confValue in fieldPairs(conf): + if jsonNode.contains(confField): + let formattedString = ($jsonNode[confField]).strip(chars = {'\"'}) + try: + confValue = parseCmdArg(typeof(confValue), formattedString) + except Exception: + return err( + "Failed to parse field '" & confField & "': " & + getCurrentExceptionMsg() & ". Value: " & formattedString + ) # Create the node - ctx.myLib[] = (await api.createNode(nodeConfig)).valueOr: + ctx.myLib[] = (await api.createNode(conf)).valueOr: let errMsg = $error chronicles.error "CreateNodeRequest failed", err = errMsg return err(errMsg) diff --git a/tests/api/test_api_health.nim b/tests/api/test_api_health.nim index b7aab43f9..f3dd340af 100644 --- a/tests/api/test_api_health.nim +++ b/tests/api/test_api_health.nim @@ -13,9 +13,10 @@ import waku/events/health_events, waku/common/waku_protocol, waku/factory/waku_conf +import tools/confutils/cli_args const TestTimeout = chronos.seconds(10) -const DefaultShard = PubsubTopic("/waku/2/rs/1/0") +const DefaultShard = PubsubTopic("/waku/2/rs/3/0") const TestContentTopic = ContentTopic("/waku/2/default-content/proto") proc dummyHandler( @@ -80,7 +81,7 @@ suite "LM API health checking": newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) (await serviceNode.mountRelay()).isOkOr: raiseAssert error - serviceNode.mountMetadata(1, @[0'u16]).isOkOr: + serviceNode.mountMetadata(3, @[0'u16]).isOkOr: raiseAssert error await serviceNode.mountLibp2pPing() await serviceNode.start() @@ -89,16 +90,15 @@ suite "LM API health checking": serviceNode.wakuRelay.subscribe(DefaultShard, dummyHandler) lockNewGlobalBrokerContext: - let conf = NodeConfig.init( - mode = WakuMode.Core, - networkingConfig = - 
NetworkingConfig(listenIpv4: "0.0.0.0", p2pTcpPort: 0, discv5UdpPort: 0), - protocolsConfig = ProtocolsConfig.init( - entryNodes = @[], - clusterId = 1'u16, - autoShardingConfig = AutoShardingConfig(numShardsInCluster: 1), - ), - ) + var conf = defaultWakuNodeConf().valueOr: + raiseAssert error + conf.mode = Core + conf.listenAddress = parseIpAddress("0.0.0.0") + conf.tcpPort = Port(0) + conf.discv5UdpPort = Port(0) + conf.clusterId = 3'u16 + conf.numShardsInNetwork = 1 + conf.rest = false client = (await createNode(conf)).valueOr: raiseAssert error @@ -267,17 +267,15 @@ suite "LM API health checking": var edgeWaku: Waku lockNewGlobalBrokerContext: - let edgeConf = NodeConfig.init( - mode = WakuMode.Edge, - networkingConfig = - NetworkingConfig(listenIpv4: "0.0.0.0", p2pTcpPort: 0, discv5UdpPort: 0), - protocolsConfig = ProtocolsConfig.init( - entryNodes = @[], - clusterId = 1'u16, - messageValidation = - MessageValidation(maxMessageSize: "150 KiB", rlnConfig: none(RlnConfig)), - ), - ) + var edgeConf = defaultWakuNodeConf().valueOr: + raiseAssert error + edgeConf.mode = Edge + edgeConf.listenAddress = parseIpAddress("0.0.0.0") + edgeConf.tcpPort = Port(0) + edgeConf.discv5UdpPort = Port(0) + edgeConf.clusterId = 3'u16 + edgeConf.maxMessageSize = "150 KiB" + edgeConf.rest = false edgeWaku = (await createNode(edgeConf)).valueOr: raiseAssert "Failed to create edge node: " & error diff --git a/tests/api/test_api_send.nim b/tests/api/test_api_send.nim index 7343fc655..30a176119 100644 --- a/tests/api/test_api_send.nim +++ b/tests/api/test_api_send.nim @@ -6,7 +6,8 @@ import ../testlib/[common, wakucore, wakunode, testasync] import ../waku_archive/archive_utils import waku, waku/[waku_node, waku_core, waku_relay/protocol, common/broker/broker_context] -import waku/api/api_conf, waku/factory/waku_conf +import waku/factory/waku_conf +import tools/confutils/cli_args type SendEventOutcome {.pure.} = enum Sent @@ -116,20 +117,18 @@ proc validate( for requestId in 
manager.errorRequestIds: check requestId == expectedRequestId -proc createApiNodeConf(mode: WakuMode = WakuMode.Core): NodeConfig = - # allocate random ports to avoid port-already-in-use errors - let netConf = NetworkingConfig(listenIpv4: "0.0.0.0", p2pTcpPort: 0, discv5UdpPort: 0) - - result = NodeConfig.init( - mode = mode, - protocolsConfig = ProtocolsConfig.init( - entryNodes = @[], - clusterId = 1, - autoShardingConfig = AutoShardingConfig(numShardsInCluster: 1), - ), - networkingConfig = netConf, - p2pReliability = true, - ) +proc createApiNodeConf(mode: cli_args.WakuMode = cli_args.WakuMode.Core): WakuNodeConf = + var conf = defaultWakuNodeConf().valueOr: + raiseAssert error + conf.mode = mode + conf.listenAddress = parseIpAddress("0.0.0.0") + conf.tcpPort = Port(0) + conf.discv5UdpPort = Port(0) + conf.clusterId = 3'u16 + conf.numShardsInNetwork = 1 + conf.reliabilityEnabled = true + conf.rest = false + result = conf suite "Waku API - Send": var @@ -153,7 +152,7 @@ suite "Waku API - Send": lockNewGlobalBrokerContext: relayNode1 = newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) - relayNode1.mountMetadata(1, @[0'u16]).isOkOr: + relayNode1.mountMetadata(3, @[0'u16]).isOkOr: raiseAssert "Failed to mount metadata: " & error (await relayNode1.mountRelay()).isOkOr: raiseAssert "Failed to mount relay" @@ -163,7 +162,7 @@ suite "Waku API - Send": lockNewGlobalBrokerContext: relayNode2 = newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) - relayNode2.mountMetadata(1, @[0'u16]).isOkOr: + relayNode2.mountMetadata(3, @[0'u16]).isOkOr: raiseAssert "Failed to mount metadata: " & error (await relayNode2.mountRelay()).isOkOr: raiseAssert "Failed to mount relay" @@ -173,7 +172,7 @@ suite "Waku API - Send": lockNewGlobalBrokerContext: lightpushNode = newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) - lightpushNode.mountMetadata(1, @[0'u16]).isOkOr: + lightpushNode.mountMetadata(3, 
@[0'u16]).isOkOr: raiseAssert "Failed to mount metadata: " & error (await lightpushNode.mountRelay()).isOkOr: raiseAssert "Failed to mount relay" @@ -185,7 +184,7 @@ suite "Waku API - Send": lockNewGlobalBrokerContext: storeNode = newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) - storeNode.mountMetadata(1, @[0'u16]).isOkOr: + storeNode.mountMetadata(3, @[0'u16]).isOkOr: raiseAssert "Failed to mount metadata: " & error (await storeNode.mountRelay()).isOkOr: raiseAssert "Failed to mount relay" @@ -210,7 +209,7 @@ suite "Waku API - Send": storeNodePeerId = storeNode.peerInfo.peerId # Subscribe all relay nodes to the default shard topic - const testPubsubTopic = PubsubTopic("/waku/2/rs/1/0") + const testPubsubTopic = PubsubTopic("/waku/2/rs/3/0") proc dummyHandler( topic: PubsubTopic, msg: WakuMessage ): Future[void] {.async, gcsafe.} = @@ -387,7 +386,7 @@ suite "Waku API - Send": lockNewGlobalBrokerContext: fakeLightpushNode = newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) - fakeLightpushNode.mountMetadata(1, @[0'u16]).isOkOr: + fakeLightpushNode.mountMetadata(3, @[0'u16]).isOkOr: raiseAssert "Failed to mount metadata: " & error (await fakeLightpushNode.mountRelay()).isOkOr: raiseAssert "Failed to mount relay" @@ -402,13 +401,13 @@ suite "Waku API - Send": discard fakeLightpushNode.subscribe( - (kind: PubsubSub, topic: PubsubTopic("/waku/2/rs/1/0")), dummyHandler + (kind: PubsubSub, topic: PubsubTopic("/waku/2/rs/3/0")), dummyHandler ).isOkOr: raiseAssert "Failed to subscribe fakeLightpushNode: " & error var node: Waku lockNewGlobalBrokerContext: - node = (await createNode(createApiNodeConf(WakuMode.Edge))).valueOr: + node = (await createNode(createApiNodeConf(cli_args.WakuMode.Edge))).valueOr: raiseAssert error (await startWaku(addr node)).isOkOr: raiseAssert "Failed to start Waku node: " & error diff --git a/tests/api/test_api_subscription.nim b/tests/api/test_api_subscription.nim index 8983c2934..6639e3dea 
100644 --- a/tests/api/test_api_subscription.nim +++ b/tests/api/test_api_subscription.nim @@ -14,7 +14,8 @@ import events/message_events, waku_relay/protocol, ] -import waku/api/api_conf, waku/factory/waku_conf +import waku/factory/waku_conf +import tools/confutils/cli_args # TODO: Edge testing (after MAPI edge support is completed) @@ -64,21 +65,21 @@ type TestNetwork = ref object publisherPeerInfo: RemotePeerInfo proc createApiNodeConf( - mode: WakuMode = WakuMode.Core, numShards: uint16 = 1 -): NodeConfig = - let netConf = NetworkingConfig(listenIpv4: "0.0.0.0", p2pTcpPort: 0, discv5UdpPort: 0) - result = NodeConfig.init( - mode = mode, - protocolsConfig = ProtocolsConfig.init( - entryNodes = @[], - clusterId = 1, - autoShardingConfig = AutoShardingConfig(numShardsInCluster: numShards), - ), - networkingConfig = netConf, - p2pReliability = true, - ) + mode: cli_args.WakuMode = cli_args.WakuMode.Core, numShards: uint16 = 1 +): WakuNodeConf = + var conf = defaultWakuNodeConf().valueOr: + raiseAssert error + conf.mode = mode + conf.listenAddress = parseIpAddress("0.0.0.0") + conf.tcpPort = Port(0) + conf.discv5UdpPort = Port(0) + conf.clusterId = 3'u16 + conf.numShardsInNetwork = numShards + conf.reliabilityEnabled = true + conf.rest = false + result = conf -proc setupSubscriberNode(conf: NodeConfig): Future[Waku] {.async.} = +proc setupSubscriberNode(conf: WakuNodeConf): Future[Waku] {.async.} = var node: Waku lockNewGlobalBrokerContext: node = (await createNode(conf)).expect("Failed to create subscriber node") @@ -86,14 +87,14 @@ proc setupSubscriberNode(conf: NodeConfig): Future[Waku] {.async.} = return node proc setupNetwork( - numShards: uint16 = 1, mode: WakuMode = WakuMode.Core + numShards: uint16 = 1, mode: cli_args.WakuMode = cli_args.WakuMode.Core ): Future[TestNetwork] {.async.} = var net = TestNetwork() lockNewGlobalBrokerContext: net.publisher = newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0)) - 
net.publisher.mountMetadata(1, @[0'u16]).expect("Failed to mount metadata") + net.publisher.mountMetadata(3, @[0'u16]).expect("Failed to mount metadata") (await net.publisher.mountRelay()).expect("Failed to mount relay") await net.publisher.mountLibp2pPing() await net.publisher.start() @@ -108,7 +109,7 @@ proc setupNetwork( # that changes, this will be needed to cause the publisher to have shard interest # for any shards the subscriber may want to use, which is required for waitForMesh to work. for i in 0 ..< numShards.int: - let shard = PubsubTopic("/waku/2/rs/1/" & $i) + let shard = PubsubTopic("/waku/2/rs/3/" & $i) net.publisher.subscribe((kind: PubsubSub, topic: shard), dummyHandler).expect( "Failed to sub publisher" ) diff --git a/tests/api/test_entry_nodes.nim b/tests/api/test_entry_nodes.nim index 136a49b2b..38dc38ba4 100644 --- a/tests/api/test_entry_nodes.nim +++ b/tests/api/test_entry_nodes.nim @@ -2,7 +2,7 @@ import std/options, results, testutils/unittests -import waku/api/entry_nodes +import tools/confutils/entry_nodes # Since classifyEntryNode is internal, we test it indirectly through processEntryNodes behavior # The enum is exported so we can test against it diff --git a/tests/api/test_node_conf.nim b/tests/api/test_node_conf.nim index 84bbfead3..d0b3d433c 100644 --- a/tests/api/test_node_conf.nim +++ b/tests/api/test_node_conf.nim @@ -1,36 +1,64 @@ {.used.} -import std/options, results, stint, testutils/unittests +import std/[options, json, strutils], results, stint, testutils/unittests import json_serialization -import waku/api/api_conf, waku/factory/waku_conf, waku/factory/networks_config +import confutils, confutils/std/net +import tools/confutils/cli_args +import waku/factory/waku_conf, waku/factory/networks_config import waku/common/logging -suite "LibWaku Conf - toWakuConf": - test "Minimal configuration": +# Helper: parse JSON into WakuNodeConf using fieldPairs (same as liblogosdelivery) +proc parseWakuNodeConfFromJson(jsonStr: string): 
Result[WakuNodeConf, string] = + var conf = defaultWakuNodeConf().valueOr: + return err(error) + var jsonNode: JsonNode + try: + jsonNode = parseJson(jsonStr) + except Exception: + return err("JSON parse error: " & getCurrentExceptionMsg()) + for confField, confValue in fieldPairs(conf): + if jsonNode.contains(confField): + let formattedString = ($jsonNode[confField]).strip(chars = {'\"'}) + try: + confValue = parseCmdArg(typeof(confValue), formattedString) + except Exception: + return err( + "Field '" & confField & "' parse error: " & getCurrentExceptionMsg() & + ". Value: " & formattedString + ) + return ok(conf) + +suite "WakuNodeConf - mode-driven toWakuConf": + test "Core mode enables service protocols": ## Given - let nodeConfig = NodeConfig.init(ethRpcEndpoints = @["http://someaddress"]) + var conf = defaultWakuNodeConf().valueOr: + raiseAssert error + conf.mode = Core + conf.clusterId = 1 ## When - let wakuConfRes = toWakuConf(nodeConfig) + let wakuConfRes = conf.toWakuConf() ## Then - let wakuConf = wakuConfRes.valueOr: - raiseAssert error - wakuConf.validate().isOkOr: - raiseAssert error + require wakuConfRes.isOk() + let wakuConf = wakuConfRes.get() + require wakuConf.validate().isOk() check: + wakuConf.relay == true + wakuConf.lightPush == true + wakuConf.peerExchangeService == true + wakuConf.rendezvous == true wakuConf.clusterId == 1 - wakuConf.shardingConf.numShardsInCluster == 8 - wakuConf.staticNodes.len == 0 - test "Edge mode configuration": + test "Edge mode disables service protocols": ## Given - let protocolsConfig = ProtocolsConfig.init(entryNodes = @[], clusterId = 1) - - let nodeConfig = NodeConfig.init(mode = Edge, protocolsConfig = protocolsConfig) + var conf = defaultWakuNodeConf().valueOr: + raiseAssert error + conf.mode = Edge + conf.clusterId = 1 ## When - let wakuConfRes = toWakuConf(nodeConfig) + let wakuConfRes = conf.toWakuConf() ## Then require wakuConfRes.isOk() @@ -42,16 +70,175 @@ suite "LibWaku Conf - toWakuConf": 
wakuConf.filterServiceConf.isSome() == false wakuConf.storeServiceConf.isSome() == false wakuConf.peerExchangeService == true - wakuConf.clusterId == 1 - test "Core mode configuration": + test "noMode uses explicit CLI flags as-is": ## Given - let protocolsConfig = ProtocolsConfig.init(entryNodes = @[], clusterId = 1) - - let nodeConfig = NodeConfig.init(mode = Core, protocolsConfig = protocolsConfig) + var conf = defaultWakuNodeConf().valueOr: + raiseAssert error + conf.mode = WakuMode.noMode + conf.relay = true + conf.lightpush = false + conf.clusterId = 5 ## When - let wakuConfRes = toWakuConf(nodeConfig) + let wakuConfRes = conf.toWakuConf() + + ## Then + require wakuConfRes.isOk() + let wakuConf = wakuConfRes.get() + require wakuConf.validate().isOk() + check: + wakuConf.relay == true + wakuConf.lightPush == false + wakuConf.clusterId == 5 + + test "Core mode overrides individual protocol flags": + ## Given - user sets relay=false but mode=Core should override + var conf = defaultWakuNodeConf().valueOr: + raiseAssert error + conf.mode = Core + conf.relay = false # will be overridden by Core mode + + ## When + let wakuConfRes = conf.toWakuConf() + + ## Then + require wakuConfRes.isOk() + let wakuConf = wakuConfRes.get() + require wakuConf.validate().isOk() + check: + wakuConf.relay == true # mode overrides + +suite "WakuNodeConf - JSON parsing with fieldPairs": + test "Empty JSON produces valid default conf": + ## Given / When + let confRes = parseWakuNodeConfFromJson("{}") + + ## Then + require confRes.isOk() + let conf = confRes.get() + check: + conf.mode == WakuMode.noMode + conf.clusterId == 0 + conf.logLevel == logging.LogLevel.INFO + + test "JSON with mode and clusterId": + ## Given / When + let confRes = parseWakuNodeConfFromJson("""{"mode": "Core", "clusterId": 42}""") + + ## Then + require confRes.isOk() + let conf = confRes.get() + check: + conf.mode == Core + conf.clusterId == 42 + + test "JSON with Edge mode": + ## Given / When + let confRes = 
parseWakuNodeConfFromJson("""{"mode": "Edge"}""") + + ## Then + require confRes.isOk() + let conf = confRes.get() + check: + conf.mode == Edge + + test "JSON with logLevel": + ## Given / When + let confRes = parseWakuNodeConfFromJson("""{"logLevel": "DEBUG"}""") + + ## Then + require confRes.isOk() + let conf = confRes.get() + check: + conf.logLevel == logging.LogLevel.DEBUG + + test "JSON with sharding config": + ## Given / When + let confRes = + parseWakuNodeConfFromJson("""{"clusterId": 99, "numShardsInNetwork": 16}""") + + ## Then + require confRes.isOk() + let conf = confRes.get() + check: + conf.clusterId == 99 + conf.numShardsInNetwork == 16 + + test "JSON with unknown fields is silently ignored": + ## Given / When + let confRes = + parseWakuNodeConfFromJson("""{"unknownField": true, "clusterId": 5}""") + + ## Then - unknown fields are just ignored (not in fieldPairs) + require confRes.isOk() + let conf = confRes.get() + check: + conf.clusterId == 5 + + test "Invalid JSON syntax returns error": + ## Given / When + let confRes = parseWakuNodeConfFromJson("{ not valid json }") + + ## Then + check confRes.isErr() + +suite "WakuNodeConf - preset integration": + test "TWN preset applies TheWakuNetworkConf": + ## Given + var conf = defaultWakuNodeConf().valueOr: + raiseAssert error + conf.preset = "twn" + + ## When + let wakuConfRes = conf.toWakuConf() + + ## Then + require wakuConfRes.isOk() + let wakuConf = wakuConfRes.get() + require wakuConf.validate().isOk() + check: + wakuConf.clusterId == 1 + + test "LogosDev preset applies LogosDevConf": + ## Given + var conf = defaultWakuNodeConf().valueOr: + raiseAssert error + conf.preset = "logosdev" + + ## When + let wakuConfRes = conf.toWakuConf() + + ## Then + require wakuConfRes.isOk() + let wakuConf = wakuConfRes.get() + require wakuConf.validate().isOk() + check: + wakuConf.clusterId == 2 + + test "Invalid preset returns error": + ## Given + var conf = defaultWakuNodeConf().valueOr: + raiseAssert error + 
conf.preset = "nonexistent" + + ## When + let wakuConfRes = conf.toWakuConf() + + ## Then + check wakuConfRes.isErr() + +suite "WakuNodeConf JSON -> WakuConf integration": + test "Core mode JSON config produces valid WakuConf": + ## Given + let confRes = parseWakuNodeConfFromJson( + """{"mode": "Core", "clusterId": 55, "numShardsInNetwork": 6}""" + ) + require confRes.isOk() + let conf = confRes.get() + + ## When + let wakuConfRes = conf.toWakuConf() ## Then require wakuConfRes.isOk() @@ -61,93 +248,72 @@ suite "LibWaku Conf - toWakuConf": wakuConf.relay == true wakuConf.lightPush == true wakuConf.peerExchangeService == true - wakuConf.clusterId == 1 + wakuConf.clusterId == 55 + wakuConf.shardingConf.numShardsInCluster == 6 - test "Auto-sharding configuration": + test "Edge mode JSON config produces valid WakuConf": ## Given - let nodeConfig = NodeConfig.init( - mode = Core, - protocolsConfig = ProtocolsConfig.init( - entryNodes = @[], - staticStoreNodes = @[], - clusterId = 42, - autoShardingConfig = AutoShardingConfig(numShardsInCluster: 16), - ), - ) + let confRes = parseWakuNodeConfFromJson("""{"mode": "Edge", "clusterId": 1}""") + require confRes.isOk() + let conf = confRes.get() ## When - let wakuConfRes = toWakuConf(nodeConfig) + let wakuConfRes = conf.toWakuConf() ## Then require wakuConfRes.isOk() let wakuConf = wakuConfRes.get() require wakuConf.validate().isOk() check: - wakuConf.clusterId == 42 - wakuConf.shardingConf.numShardsInCluster == 16 + wakuConf.relay == false + wakuConf.lightPush == false + wakuConf.peerExchangeService == true - test "Bootstrap nodes configuration": + test "JSON with preset produces valid WakuConf": ## Given - let entryNodes = - @[ - "enr:-QESuEC1p_s3xJzAC_XlOuuNrhVUETmfhbm1wxRGis0f7DlqGSw2FM-p2Vn7gmfkTTnAe8Ys2cgGBN8ufJnvzKQFZqFMBgmlkgnY0iXNlY3AyNTZrMaEDS8-D878DrdbNwcuY-3p1qdDp5MOoCurhdsNPJTXZ3c5g3RjcIJ2X4N1ZHCCd2g", - 
"enr:-QEkuECnZ3IbVAgkOzv-QLnKC4dRKAPRY80m1-R7G8jZ7yfT3ipEfBrhKN7ARcQgQ-vg-h40AQzyvAkPYlHPaFKk6u9MBgmlkgnY0iXNlY3AyNTZrMaEDk49D8JjMSns4p1XVNBvJquOUzT4PENSJknkROspfAFGg3RjcIJ2X4N1ZHCCd2g", - ] - let libConf = NodeConfig.init( - mode = Core, - protocolsConfig = ProtocolsConfig.init( - entryNodes = entryNodes, staticStoreNodes = @[], clusterId = 1 - ), - ) + let confRes = + parseWakuNodeConfFromJson("""{"mode": "Core", "preset": "logosdev"}""") + require confRes.isOk() + let conf = confRes.get() ## When - let wakuConfRes = toWakuConf(libConf) - - ## Then - require wakuConfRes.isOk() - let wakuConf = wakuConfRes.get() - require wakuConf.validate().isOk() - require wakuConf.discv5Conf.isSome() - check: - wakuConf.discv5Conf.get().bootstrapNodes == entryNodes - - test "Static store nodes configuration": - ## Given - let staticStoreNodes = - @[ - "/ip4/127.0.0.1/tcp/60000/p2p/16Uuu2HBmAcHvhLqQKwSSbX6BG5JLWUDRcaLVrehUVqpw7fz1hbYc", - "/ip4/192.168.1.1/tcp/60001/p2p/16Uuu2HBmAcHvhLqQKwSSbX6BG5JLWUDRcaLVrehUVqpw7fz1hbYd", - ] - let nodeConf = NodeConfig.init( - protocolsConfig = ProtocolsConfig.init( - entryNodes = @[], staticStoreNodes = staticStoreNodes, clusterId = 1 - ) - ) - - ## When - let wakuConfRes = toWakuConf(nodeConf) + let wakuConfRes = conf.toWakuConf() ## Then require wakuConfRes.isOk() let wakuConf = wakuConfRes.get() require wakuConf.validate().isOk() check: - wakuConf.staticNodes == staticStoreNodes + wakuConf.clusterId == 2 + wakuConf.relay == true - test "Message validation with max message size": + test "JSON with static nodes": ## Given - let nodeConfig = NodeConfig.init( - protocolsConfig = ProtocolsConfig.init( - entryNodes = @[], - staticStoreNodes = @[], - clusterId = 1, - messageValidation = - MessageValidation(maxMessageSize: "100KiB", rlnConfig: none(RlnConfig)), - ) + let confRes = parseWakuNodeConfFromJson( + """{"mode": "Core", "clusterId": 42, "staticnodes": 
["/ip4/127.0.0.1/tcp/60000/p2p/16Uuu2HBmAcHvhLqQKwSSbX6BG5JLWUDRcaLVrehUVqpw7fz1hbYc"]}""" ) + require confRes.isOk() + let conf = confRes.get() ## When - let wakuConfRes = toWakuConf(nodeConfig) + let wakuConfRes = conf.toWakuConf() + + ## Then + require wakuConfRes.isOk() + let wakuConf = wakuConfRes.get() + require wakuConf.validate().isOk() + check: + wakuConf.staticNodes.len == 1 + + test "JSON with max message size": + ## Given + let confRes = + parseWakuNodeConfFromJson("""{"clusterId": 42, "maxMessageSize": "100KiB"}""") + require confRes.isOk() + let conf = confRes.get() + + ## When + let wakuConfRes = conf.toWakuConf() ## Then require wakuConfRes.isOk() @@ -156,853 +322,49 @@ suite "LibWaku Conf - toWakuConf": check: wakuConf.maxMessageSizeBytes == 100'u64 * 1024'u64 - test "Message validation with RLN config": - ## Given - let nodeConfig = NodeConfig.init( - protocolsConfig = ProtocolsConfig.init( - entryNodes = @[], - clusterId = 1, - messageValidation = MessageValidation( - maxMessageSize: "150 KiB", - rlnConfig: some( - RlnConfig( - contractAddress: "0x1234567890123456789012345678901234567890", - chainId: 1'u, - epochSizeSec: 600'u64, - ) - ), - ), - ), - ethRpcEndpoints = @["http://127.0.0.1:1111"], - ) +# ---- Deprecated NodeConfig tests (kept for backward compatibility) ---- - ## When - let wakuConf = toWakuConf(nodeConfig).valueOr: - raiseAssert error +{.push warning[Deprecated]: off.} - wakuConf.validate().isOkOr: - raiseAssert error +import waku/api/api_conf - check: - wakuConf.maxMessageSizeBytes == 150'u64 * 1024'u64 - - require wakuConf.rlnRelayConf.isSome() - let rlnConf = wakuConf.rlnRelayConf.get() - check: - rlnConf.dynamic == true - rlnConf.ethContractAddress == "0x1234567890123456789012345678901234567890" - rlnConf.chainId == 1'u256 - rlnConf.epochSizeSec == 600'u64 - - test "Full Core mode configuration with all fields": - ## Given - let nodeConfig = NodeConfig.init( - mode = Core, - protocolsConfig = ProtocolsConfig.init( - entryNodes 
= - @[ - "enr:-QESuEC1p_s3xJzAC_XlOuuNrhVUETmfhbm1wxRGis0f7DlqGSw2FM-p2Vn7gmfkTTnAe8Ys2cgGBN8ufJnvzKQFZqFMBgmlkgnY0iXNlY3AyNTZrMaEDS8-D878DrdbNwcuY-3p1qdDp5MOoCurhdsNPJTXZ3c5g3RjcIJ2X4N1ZHCCd2g" - ], - staticStoreNodes = - @[ - "/ip4/127.0.0.1/tcp/60000/p2p/16Uuu2HBmAcHvhLqQKwSSbX6BG5JLWUDRcaLVrehUVqpw7fz1hbYc" - ], - clusterId = 99, - autoShardingConfig = AutoShardingConfig(numShardsInCluster: 12), - messageValidation = MessageValidation( - maxMessageSize: "512KiB", - rlnConfig: some( - RlnConfig( - contractAddress: "0xAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA", - chainId: 5'u, # Goerli - epochSizeSec: 300'u64, - ) - ), - ), - ), - ethRpcEndpoints = @["https://127.0.0.1:8333"], - ) - - ## When - let wakuConfRes = toWakuConf(nodeConfig) - - ## Then +suite "NodeConfig (deprecated) - toWakuConf": + test "Minimal configuration": + let nodeConfig = NodeConfig.init(ethRpcEndpoints = @["http://someaddress"]) + let wakuConfRes = api_conf.toWakuConf(nodeConfig) let wakuConf = wakuConfRes.valueOr: raiseAssert error wakuConf.validate().isOkOr: raiseAssert error + check: + wakuConf.clusterId == 1 + wakuConf.shardingConf.numShardsInCluster == 8 + wakuConf.staticNodes.len == 0 - # Check basic settings + test "Edge mode configuration": + let protocolsConfig = ProtocolsConfig.init(entryNodes = @[], clusterId = 1) + let nodeConfig = + NodeConfig.init(mode = api_conf.WakuMode.Edge, protocolsConfig = protocolsConfig) + let wakuConfRes = api_conf.toWakuConf(nodeConfig) + require wakuConfRes.isOk() + let wakuConf = wakuConfRes.get() + require wakuConf.validate().isOk() + check: + wakuConf.relay == false + wakuConf.lightPush == false + wakuConf.peerExchangeService == true + + test "Core mode configuration": + let protocolsConfig = ProtocolsConfig.init(entryNodes = @[], clusterId = 1) + let nodeConfig = + NodeConfig.init(mode = api_conf.WakuMode.Core, protocolsConfig = protocolsConfig) + let wakuConfRes = api_conf.toWakuConf(nodeConfig) + require wakuConfRes.isOk() + let wakuConf = 
wakuConfRes.get() + require wakuConf.validate().isOk() check: wakuConf.relay == true wakuConf.lightPush == true wakuConf.peerExchangeService == true - wakuConf.rendezvous == true - wakuConf.clusterId == 99 - # Check sharding - check: - wakuConf.shardingConf.numShardsInCluster == 12 - - # Check bootstrap nodes - require wakuConf.discv5Conf.isSome() - check: - wakuConf.discv5Conf.get().bootstrapNodes.len == 1 - - # Check static nodes - check: - wakuConf.staticNodes.len == 1 - wakuConf.staticNodes[0] == - "/ip4/127.0.0.1/tcp/60000/p2p/16Uuu2HBmAcHvhLqQKwSSbX6BG5JLWUDRcaLVrehUVqpw7fz1hbYc" - - # Check message validation - check: - wakuConf.maxMessageSizeBytes == 512'u64 * 1024'u64 - - # Check RLN config - require wakuConf.rlnRelayConf.isSome() - let rlnConf = wakuConf.rlnRelayConf.get() - check: - rlnConf.dynamic == true - rlnConf.ethContractAddress == "0xAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA" - rlnConf.chainId == 5'u256 - rlnConf.epochSizeSec == 300'u64 - - test "NodeConfig with mixed entry nodes (integration test)": - ## Given - let entryNodes = - @[ - "enrtree://AIRVQ5DDA4FFWLRBCHJWUWOO6X6S4ZTZ5B667LQ6AJU6PEYDLRD5O@sandbox.waku.nodes.status.im", - "/ip4/127.0.0.1/tcp/60000/p2p/16Uuu2HBmAcHvhLqQKwSSbX6BG5JLWUDRcaLVrehUVqpw7fz1hbYc", - ] - - let nodeConfig = NodeConfig.init( - mode = Core, - protocolsConfig = ProtocolsConfig.init( - entryNodes = entryNodes, staticStoreNodes = @[], clusterId = 1 - ), - ) - - ## When - let wakuConfRes = toWakuConf(nodeConfig) - - ## Then - require wakuConfRes.isOk() - let wakuConf = wakuConfRes.get() - require wakuConf.validate().isOk() - - # Check that ENRTree went to DNS discovery - require wakuConf.dnsDiscoveryConf.isSome() - check: - wakuConf.dnsDiscoveryConf.get().enrTreeUrl == entryNodes[0] - - # Check that multiaddr went to static nodes - check: - wakuConf.staticNodes.len == 1 - wakuConf.staticNodes[0] == entryNodes[1] - -suite "NodeConfig JSON - complete format": - test "Full NodeConfig from complete JSON with field 
validation": - ## Given - let jsonStr = - """ - { - "mode": "Core", - "protocolsConfig": { - "entryNodes": ["enrtree://TREE@nodes.example.com"], - "staticStoreNodes": ["/ip4/1.2.3.4/tcp/80/p2p/16Uuu2HBmAcHvhLqQKwSSbX6BG5JLWUDRcaLVrehUVqpw7fz1hbYc"], - "clusterId": 10, - "autoShardingConfig": { - "numShardsInCluster": 4 - }, - "messageValidation": { - "maxMessageSize": "100 KiB", - "rlnConfig": null - } - }, - "networkingConfig": { - "listenIpv4": "192.168.1.1", - "p2pTcpPort": 7000, - "discv5UdpPort": 7001 - }, - "ethRpcEndpoints": ["http://localhost:8545"], - "p2pReliability": true, - "logLevel": "WARN", - "logFormat": "TEXT" - } - """ - - ## When - let config = decodeNodeConfigFromJson(jsonStr) - - ## Then — check every field - check: - config.mode == WakuMode.Core - config.ethRpcEndpoints == @["http://localhost:8545"] - config.p2pReliability == true - config.logLevel == LogLevel.WARN - config.logFormat == LogFormat.TEXT - - check: - config.networkingConfig.listenIpv4 == "192.168.1.1" - config.networkingConfig.p2pTcpPort == 7000 - config.networkingConfig.discv5UdpPort == 7001 - - let pc = config.protocolsConfig - check: - pc.entryNodes == @["enrtree://TREE@nodes.example.com"] - pc.staticStoreNodes == - @[ - "/ip4/1.2.3.4/tcp/80/p2p/16Uuu2HBmAcHvhLqQKwSSbX6BG5JLWUDRcaLVrehUVqpw7fz1hbYc" - ] - pc.clusterId == 10 - pc.autoShardingConfig.numShardsInCluster == 4 - pc.messageValidation.maxMessageSize == "100 KiB" - pc.messageValidation.rlnConfig.isNone() - - test "Full NodeConfig with RlnConfig present": - ## Given - let jsonStr = - """ - { - "mode": "Edge", - "protocolsConfig": { - "entryNodes": [], - "clusterId": 1, - "messageValidation": { - "maxMessageSize": "150 KiB", - "rlnConfig": { - "contractAddress": "0x1234567890ABCDEF1234567890ABCDEF12345678", - "chainId": 5, - "epochSizeSec": 600 - } - } - }, - "networkingConfig": { - "listenIpv4": "0.0.0.0", - "p2pTcpPort": 60000, - "discv5UdpPort": 9000 - } - } - """ - - ## When - let config = 
decodeNodeConfigFromJson(jsonStr) - - ## Then - check config.mode == WakuMode.Edge - - let mv = config.protocolsConfig.messageValidation - check: - mv.maxMessageSize == "150 KiB" - mv.rlnConfig.isSome() - let rln = mv.rlnConfig.get() - check: - rln.contractAddress == "0x1234567890ABCDEF1234567890ABCDEF12345678" - rln.chainId == 5'u - rln.epochSizeSec == 600'u64 - - test "Round-trip encode/decode preserves all fields": - ## Given - let original = NodeConfig.init( - mode = Edge, - protocolsConfig = ProtocolsConfig.init( - entryNodes = @["enrtree://TREE@example.com"], - staticStoreNodes = - @[ - "/ip4/1.2.3.4/tcp/80/p2p/16Uuu2HBmAcHvhLqQKwSSbX6BG5JLWUDRcaLVrehUVqpw7fz1hbYc" - ], - clusterId = 42, - autoShardingConfig = AutoShardingConfig(numShardsInCluster: 16), - messageValidation = MessageValidation( - maxMessageSize: "256 KiB", - rlnConfig: some( - RlnConfig( - contractAddress: "0xAABBCCDDEEFF00112233445566778899AABBCCDD", - chainId: 137, - epochSizeSec: 300, - ) - ), - ), - ), - networkingConfig = - NetworkingConfig(listenIpv4: "10.0.0.1", p2pTcpPort: 9090, discv5UdpPort: 9091), - ethRpcEndpoints = @["https://rpc.example.com"], - p2pReliability = true, - logLevel = LogLevel.DEBUG, - logFormat = LogFormat.JSON, - ) - - ## When - let decoded = decodeNodeConfigFromJson(Json.encode(original)) - - ## Then — check field by field - check: - decoded.mode == original.mode - decoded.ethRpcEndpoints == original.ethRpcEndpoints - decoded.p2pReliability == original.p2pReliability - decoded.logLevel == original.logLevel - decoded.logFormat == original.logFormat - decoded.networkingConfig.listenIpv4 == original.networkingConfig.listenIpv4 - decoded.networkingConfig.p2pTcpPort == original.networkingConfig.p2pTcpPort - decoded.networkingConfig.discv5UdpPort == original.networkingConfig.discv5UdpPort - decoded.protocolsConfig.entryNodes == original.protocolsConfig.entryNodes - decoded.protocolsConfig.staticStoreNodes == - original.protocolsConfig.staticStoreNodes - 
decoded.protocolsConfig.clusterId == original.protocolsConfig.clusterId - decoded.protocolsConfig.autoShardingConfig.numShardsInCluster == - original.protocolsConfig.autoShardingConfig.numShardsInCluster - decoded.protocolsConfig.messageValidation.maxMessageSize == - original.protocolsConfig.messageValidation.maxMessageSize - decoded.protocolsConfig.messageValidation.rlnConfig.isSome() - - let decodedRln = decoded.protocolsConfig.messageValidation.rlnConfig.get() - let originalRln = original.protocolsConfig.messageValidation.rlnConfig.get() - check: - decodedRln.contractAddress == originalRln.contractAddress - decodedRln.chainId == originalRln.chainId - decodedRln.epochSizeSec == originalRln.epochSizeSec - -suite "NodeConfig JSON - partial format with defaults": - test "Minimal NodeConfig - empty object uses all defaults": - ## Given - let config = decodeNodeConfigFromJson("{}") - let defaultConfig = NodeConfig.init() - - ## Then — compare field by field against defaults - check: - config.mode == defaultConfig.mode - config.ethRpcEndpoints == defaultConfig.ethRpcEndpoints - config.p2pReliability == defaultConfig.p2pReliability - config.logLevel == defaultConfig.logLevel - config.logFormat == defaultConfig.logFormat - config.networkingConfig.listenIpv4 == defaultConfig.networkingConfig.listenIpv4 - config.networkingConfig.p2pTcpPort == defaultConfig.networkingConfig.p2pTcpPort - config.networkingConfig.discv5UdpPort == - defaultConfig.networkingConfig.discv5UdpPort - config.protocolsConfig.entryNodes == defaultConfig.protocolsConfig.entryNodes - config.protocolsConfig.staticStoreNodes == - defaultConfig.protocolsConfig.staticStoreNodes - config.protocolsConfig.clusterId == defaultConfig.protocolsConfig.clusterId - config.protocolsConfig.autoShardingConfig.numShardsInCluster == - defaultConfig.protocolsConfig.autoShardingConfig.numShardsInCluster - config.protocolsConfig.messageValidation.maxMessageSize == - 
defaultConfig.protocolsConfig.messageValidation.maxMessageSize - config.protocolsConfig.messageValidation.rlnConfig.isSome() == - defaultConfig.protocolsConfig.messageValidation.rlnConfig.isSome() - - test "Minimal NodeConfig keeps network preset defaults": - ## Given - let config = decodeNodeConfigFromJson("{}") - - ## Then - check: - config.protocolsConfig.entryNodes == TheWakuNetworkPreset.entryNodes - config.protocolsConfig.messageValidation.rlnConfig.isSome() - - test "NodeConfig with only mode specified": - ## Given - let config = decodeNodeConfigFromJson("""{"mode": "Edge"}""") - - ## Then - check: - config.mode == WakuMode.Edge - ## Remaining fields get defaults - config.logLevel == LogLevel.INFO - config.logFormat == LogFormat.TEXT - config.p2pReliability == false - config.ethRpcEndpoints == newSeq[string]() - - test "ProtocolsConfig partial - optional fields get defaults": - ## Given — only entryNodes and clusterId provided - let jsonStr = - """ - { - "protocolsConfig": { - "entryNodes": ["enrtree://X@y.com"], - "clusterId": 5 - }, - "networkingConfig": { - "listenIpv4": "0.0.0.0", - "p2pTcpPort": 60000, - "discv5UdpPort": 9000 - } - } - """ - - ## When - let config = decodeNodeConfigFromJson(jsonStr) - - ## Then — required fields are set, optionals get defaults - check: - config.protocolsConfig.entryNodes == @["enrtree://X@y.com"] - config.protocolsConfig.clusterId == 5 - config.protocolsConfig.staticStoreNodes == newSeq[string]() - config.protocolsConfig.autoShardingConfig.numShardsInCluster == - DefaultAutoShardingConfig.numShardsInCluster - config.protocolsConfig.messageValidation.maxMessageSize == - DefaultMessageValidation.maxMessageSize - config.protocolsConfig.messageValidation.rlnConfig.isNone() - - test "MessageValidation partial - rlnConfig omitted defaults to none": - ## Given - let jsonStr = - """ - { - "protocolsConfig": { - "entryNodes": [], - "clusterId": 1, - "messageValidation": { - "maxMessageSize": "200 KiB" - } - }, - 
"networkingConfig": { - "listenIpv4": "0.0.0.0", - "p2pTcpPort": 60000, - "discv5UdpPort": 9000 - } - } - """ - - ## When - let config = decodeNodeConfigFromJson(jsonStr) - - ## Then - check: - config.protocolsConfig.messageValidation.maxMessageSize == "200 KiB" - config.protocolsConfig.messageValidation.rlnConfig.isNone() - - test "logLevel and logFormat omitted use defaults": - ## Given - let jsonStr = - """ - { - "mode": "Core", - "protocolsConfig": { - "entryNodes": [], - "clusterId": 1 - }, - "networkingConfig": { - "listenIpv4": "0.0.0.0", - "p2pTcpPort": 60000, - "discv5UdpPort": 9000 - } - } - """ - - ## When - let config = decodeNodeConfigFromJson(jsonStr) - - ## Then - check: - config.logLevel == LogLevel.INFO - config.logFormat == LogFormat.TEXT - -suite "NodeConfig JSON - unsupported fields raise errors": - test "Unknown field at NodeConfig level raises": - let jsonStr = - """ - { - "mode": "Core", - "unknownTopLevel": true - } - """ - - var raised = false - try: - discard decodeNodeConfigFromJson(jsonStr) - except SerializationError: - raised = true - check raised - - test "Typo in NodeConfig field name raises": - let jsonStr = - """ - { - "modes": "Core" - } - """ - - var raised = false - try: - discard decodeNodeConfigFromJson(jsonStr) - except SerializationError: - raised = true - check raised - - test "Unknown field in ProtocolsConfig raises": - let jsonStr = - """ - { - "protocolsConfig": { - "entryNodes": [], - "clusterId": 1, - "futureField": "something" - }, - "networkingConfig": { - "listenIpv4": "0.0.0.0", - "p2pTcpPort": 60000, - "discv5UdpPort": 9000 - } - } - """ - - var raised = false - try: - discard decodeNodeConfigFromJson(jsonStr) - except SerializationError: - raised = true - check raised - - test "Unknown field in NetworkingConfig raises": - let jsonStr = - """ - { - "protocolsConfig": { - "entryNodes": [], - "clusterId": 1 - }, - "networkingConfig": { - "listenIpv4": "0.0.0.0", - "p2pTcpPort": 60000, - "discv5UdpPort": 9000, - 
"futureNetworkField": "value" - } - } - """ - - var raised = false - try: - discard decodeNodeConfigFromJson(jsonStr) - except SerializationError: - raised = true - check raised - - test "Unknown field in MessageValidation raises": - let jsonStr = - """ - { - "protocolsConfig": { - "entryNodes": [], - "clusterId": 1, - "messageValidation": { - "maxMessageSize": "150 KiB", - "maxMesssageSize": "typo" - } - }, - "networkingConfig": { - "listenIpv4": "0.0.0.0", - "p2pTcpPort": 60000, - "discv5UdpPort": 9000 - } - } - """ - - var raised = false - try: - discard decodeNodeConfigFromJson(jsonStr) - except SerializationError: - raised = true - check raised - - test "Unknown field in RlnConfig raises": - let jsonStr = - """ - { - "protocolsConfig": { - "entryNodes": [], - "clusterId": 1, - "messageValidation": { - "maxMessageSize": "150 KiB", - "rlnConfig": { - "contractAddress": "0xABCDEF1234567890ABCDEF1234567890ABCDEF12", - "chainId": 1, - "epochSizeSec": 600, - "unknownRlnField": true - } - } - }, - "networkingConfig": { - "listenIpv4": "0.0.0.0", - "p2pTcpPort": 60000, - "discv5UdpPort": 9000 - } - } - """ - - var raised = false - try: - discard decodeNodeConfigFromJson(jsonStr) - except SerializationError: - raised = true - check raised - - test "Unknown field in AutoShardingConfig raises": - let jsonStr = - """ - { - "protocolsConfig": { - "entryNodes": [], - "clusterId": 1, - "autoShardingConfig": { - "numShardsInCluster": 8, - "shardPrefix": "extra" - } - }, - "networkingConfig": { - "listenIpv4": "0.0.0.0", - "p2pTcpPort": 60000, - "discv5UdpPort": 9000 - } - } - """ - - var raised = false - try: - discard decodeNodeConfigFromJson(jsonStr) - except SerializationError: - raised = true - check raised - -suite "NodeConfig JSON - missing required fields": - test "Missing 'entryNodes' in ProtocolsConfig": - let jsonStr = - """ - { - "protocolsConfig": { - "clusterId": 1 - }, - "networkingConfig": { - "listenIpv4": "0.0.0.0", - "p2pTcpPort": 60000, - "discv5UdpPort": 
9000 - } - } - """ - - var raised = false - try: - discard decodeNodeConfigFromJson(jsonStr) - except SerializationError: - raised = true - check raised - - test "Missing 'clusterId' in ProtocolsConfig": - let jsonStr = - """ - { - "protocolsConfig": { - "entryNodes": [] - }, - "networkingConfig": { - "listenIpv4": "0.0.0.0", - "p2pTcpPort": 60000, - "discv5UdpPort": 9000 - } - } - """ - - var raised = false - try: - discard decodeNodeConfigFromJson(jsonStr) - except SerializationError: - raised = true - check raised - - test "Missing required fields in NetworkingConfig": - let jsonStr = - """ - { - "protocolsConfig": { - "entryNodes": [], - "clusterId": 1 - }, - "networkingConfig": { - "listenIpv4": "0.0.0.0" - } - } - """ - - var raised = false - try: - discard decodeNodeConfigFromJson(jsonStr) - except SerializationError: - raised = true - check raised - - test "Missing 'numShardsInCluster' in AutoShardingConfig": - let jsonStr = - """ - { - "protocolsConfig": { - "entryNodes": [], - "clusterId": 1, - "autoShardingConfig": {} - }, - "networkingConfig": { - "listenIpv4": "0.0.0.0", - "p2pTcpPort": 60000, - "discv5UdpPort": 9000 - } - } - """ - - var raised = false - try: - discard decodeNodeConfigFromJson(jsonStr) - except SerializationError: - raised = true - check raised - - test "Missing required fields in RlnConfig": - let jsonStr = - """ - { - "protocolsConfig": { - "entryNodes": [], - "clusterId": 1, - "messageValidation": { - "maxMessageSize": "150 KiB", - "rlnConfig": { - "contractAddress": "0xABCD" - } - } - }, - "networkingConfig": { - "listenIpv4": "0.0.0.0", - "p2pTcpPort": 60000, - "discv5UdpPort": 9000 - } - } - """ - - var raised = false - try: - discard decodeNodeConfigFromJson(jsonStr) - except SerializationError: - raised = true - check raised - - test "Missing 'maxMessageSize' in MessageValidation": - let jsonStr = - """ - { - "protocolsConfig": { - "entryNodes": [], - "clusterId": 1, - "messageValidation": { - "rlnConfig": null - } - }, - 
"networkingConfig": { - "listenIpv4": "0.0.0.0", - "p2pTcpPort": 60000, - "discv5UdpPort": 9000 - } - } - """ - - var raised = false - try: - discard decodeNodeConfigFromJson(jsonStr) - except SerializationError: - raised = true - check raised - -suite "NodeConfig JSON - invalid values": - test "Invalid enum value for mode": - var raised = false - try: - discard decodeNodeConfigFromJson("""{"mode": "InvalidMode"}""") - except SerializationError: - raised = true - check raised - - test "Invalid enum value for logLevel": - var raised = false - try: - discard decodeNodeConfigFromJson("""{"logLevel": "SUPERVERBOSE"}""") - except SerializationError: - raised = true - check raised - - test "Wrong type for clusterId (string instead of number)": - let jsonStr = - """ - { - "protocolsConfig": { - "entryNodes": [], - "clusterId": "not-a-number" - }, - "networkingConfig": { - "listenIpv4": "0.0.0.0", - "p2pTcpPort": 60000, - "discv5UdpPort": 9000 - } - } - """ - - var raised = false - try: - discard decodeNodeConfigFromJson(jsonStr) - except SerializationError: - raised = true - check raised - - test "Completely invalid JSON syntax": - var raised = false - try: - discard decodeNodeConfigFromJson("""{ not valid json at all }""") - except SerializationError: - raised = true - check raised - -suite "NodeConfig JSON -> WakuConf integration": - test "Decoded config translates to valid WakuConf": - ## Given - let jsonStr = - """ - { - "mode": "Core", - "protocolsConfig": { - "entryNodes": [ - "enrtree://AIRVQ5DDA4FFWLRBCHJWUWOO6X6S4ZTZ5B667LQ6AJU6PEYDLRD5O@sandbox.waku.nodes.status.im" - ], - "staticStoreNodes": [ - "/ip4/127.0.0.1/tcp/60000/p2p/16Uuu2HBmAcHvhLqQKwSSbX6BG5JLWUDRcaLVrehUVqpw7fz1hbYc" - ], - "clusterId": 55, - "autoShardingConfig": { - "numShardsInCluster": 6 - }, - "messageValidation": { - "maxMessageSize": "256 KiB", - "rlnConfig": null - } - }, - "networkingConfig": { - "listenIpv4": "0.0.0.0", - "p2pTcpPort": 60000, - "discv5UdpPort": 9000 - }, - 
"ethRpcEndpoints": ["http://localhost:8545"], - "p2pReliability": true, - "logLevel": "INFO", - "logFormat": "TEXT" - } - """ - - ## When - let nodeConfig = decodeNodeConfigFromJson(jsonStr) - let wakuConfRes = toWakuConf(nodeConfig) - - ## Then - require wakuConfRes.isOk() - let wakuConf = wakuConfRes.get() - require wakuConf.validate().isOk() - check: - wakuConf.clusterId == 55 - wakuConf.shardingConf.numShardsInCluster == 6 - wakuConf.maxMessageSizeBytes == 256'u64 * 1024'u64 - wakuConf.staticNodes.len == 1 - wakuConf.p2pReliability == true +{.pop.} diff --git a/tests/test_waku.nim b/tests/test_waku.nim index b8e2b26b1..dabd65af7 100644 --- a/tests/test_waku.nim +++ b/tests/test_waku.nim @@ -3,49 +3,49 @@ import chronos, testutils/unittests, std/options import waku +import tools/confutils/cli_args suite "Waku API - Create node": asyncTest "Create node with minimal configuration": ## Given - let nodeConfig = NodeConfig.init( - protocolsConfig = ProtocolsConfig.init(entryNodes = @[], clusterId = 1) - ) + var nodeConf = defaultWakuNodeConf().valueOr: + raiseAssert error + nodeConf.mode = Core + nodeConf.clusterId = 3'u16 + nodeConf.rest = false # This is the actual minimal config but as the node auto-start, it is not suitable for tests - # NodeConfig.init(ethRpcEndpoints = @["http://someaddress"]) ## When - let node = (await createNode(nodeConfig)).valueOr: + let node = (await createNode(nodeConf)).valueOr: raiseAssert error ## Then check: not node.isNil() - node.conf.clusterId == 1 + node.conf.clusterId == 3 node.conf.relay == true asyncTest "Create node with full configuration": ## Given - let nodeConfig = NodeConfig.init( - mode = Core, - protocolsConfig = ProtocolsConfig.init( - entryNodes = - @[ - "enr:-QESuEC1p_s3xJzAC_XlOuuNrhVUETmfhbm1wxRGis0f7DlqGSw2FM-p2Vn7gmfkTTnAe8Ys2cgGBN8ufJnvzKQFZqFMBgmlkgnY0iXNlY3AyNTZrMaEDS8-D878DrdbNwcuY-3p1qdDp5MOoCurhdsNPJTXZ3c5g3RjcIJ2X4N1ZHCCd2g" - ], - staticStoreNodes = - @[ - 
"/ip4/127.0.0.1/tcp/60000/p2p/16Uuu2HBmAcHvhLqQKwSSbX6BG5JLWUDRcaLVrehUVqpw7fz1hbYc" - ], - clusterId = 99, - autoShardingConfig = AutoShardingConfig(numShardsInCluster: 16), - messageValidation = - MessageValidation(maxMessageSize: "1024 KiB", rlnConfig: none(RlnConfig)), - ), - ) + var nodeConf = defaultWakuNodeConf().valueOr: + raiseAssert error + nodeConf.mode = Core + nodeConf.clusterId = 99'u16 + nodeConf.rest = false + nodeConf.numShardsInNetwork = 16 + nodeConf.maxMessageSize = "1024 KiB" + nodeConf.entryNodes = + @[ + "enr:-QESuEC1p_s3xJzAC_XlOuuNrhVUETmfhbm1wxRGis0f7DlqGSw2FM-p2Vn7gmfkTTnAe8Ys2cgGBN8ufJnvzKQFZqFMBgmlkgnY0iXNlY3AyNTZrMaEDS8-D878DrdbNwcuY-3p1qdDp5MOoCurhdsNPJTXZ3c5g3RjcIJ2X4N1ZHCCd2g" + ] + nodeConf.staticnodes = + @[ + "/ip4/127.0.0.1/tcp/60000/p2p/16Uuu2HBmAcHvhLqQKwSSbX6BG5JLWUDRcaLVrehUVqpw7fz1hbYc" + ] ## When - let node = (await createNode(nodeConfig)).valueOr: + let node = (await createNode(nodeConf)).valueOr: raiseAssert error ## Then @@ -62,20 +62,19 @@ suite "Waku API - Create node": asyncTest "Create node with mixed entry nodes (enrtree, multiaddr)": ## Given - let nodeConfig = NodeConfig.init( - mode = Core, - protocolsConfig = ProtocolsConfig.init( - entryNodes = - @[ - "enrtree://AIRVQ5DDA4FFWLRBCHJWUWOO6X6S4ZTZ5B667LQ6AJU6PEYDLRD5O@sandbox.waku.nodes.status.im", - "/ip4/127.0.0.1/tcp/60000/p2p/16Uuu2HBmAcHvhLqQKwSSbX6BG5JLWUDRcaLVrehUVqpw7fz1hbYc", - ], - clusterId = 42, - ), - ) + var nodeConf = defaultWakuNodeConf().valueOr: + raiseAssert error + nodeConf.mode = Core + nodeConf.clusterId = 42'u16 + nodeConf.rest = false + nodeConf.entryNodes = + @[ + "enrtree://AIRVQ5DDA4FFWLRBCHJWUWOO6X6S4ZTZ5B667LQ6AJU6PEYDLRD5O@sandbox.waku.nodes.status.im", + "/ip4/127.0.0.1/tcp/60000/p2p/16Uuu2HBmAcHvhLqQKwSSbX6BG5JLWUDRcaLVrehUVqpw7fz1hbYc", + ] ## When - let node = (await createNode(nodeConfig)).valueOr: + let node = (await createNode(nodeConf)).valueOr: raiseAssert error ## Then diff --git a/tools/confutils/cli_args.nim 
b/tools/confutils/cli_args.nim index 5e4adacb2..d8bd9b7f5 100644 --- a/tools/confutils/cli_args.nim +++ b/tools/confutils/cli_args.nim @@ -30,7 +30,8 @@ import waku_core/message/default_values, waku_mix, ], - ../../tools/rln_keystore_generator/rln_keystore_generator + ../../tools/rln_keystore_generator/rln_keystore_generator, + ./entry_nodes import ./envvar as confEnvvarDefs, ./envvar_net as confEnvvarNet @@ -52,6 +53,11 @@ type StartUpCommand* = enum noCommand # default, runs waku generateRlnKeystore # generates a new RLN keystore +type WakuMode* {.pure.} = enum + noMode # default - use explicit CLI flags as-is + Core # full service node + Edge # client-only node + type WakuNodeConf* = object configFile* {. desc: "Loads configuration from a TOML file (cmd-line parameters take precedence)", @@ -150,9 +156,16 @@ type WakuNodeConf* = object .}: seq[ProtectedShard] ## General node config + mode* {. + desc: + "Node operation mode. 'Core' enables relay+service protocols. 'Edge' enables client-only protocols. Default: explicit CLI flags used.", + defaultValue: WakuMode.noMode, + name: "mode" + .}: WakuMode + preset* {. desc: - "Network preset to use. 'twn' is The RLN-protected Waku Network (cluster 1). Overrides other values.", + "Network preset to use. 'twn' is The RLN-protected Waku Network (cluster 1). 'logos.dev' is the Logos Dev Network (cluster 2). Overrides other values.", defaultValue: "", name: "preset" .}: string @@ -165,7 +178,7 @@ type WakuNodeConf* = object .}: uint16 agentString* {. - defaultValue: "nwaku-" & cli_args.git_version, + defaultValue: "logos-delivery-" & cli_args.git_version, desc: "Node agent string which is used as identifier in network", name: "agent-string" .}: string @@ -293,6 +306,14 @@ hence would have reachability issues.""", name: "rln-relay-dynamic" .}: bool + entryNodes* {. + desc: + "Entry node address (enrtree:, enr:, or multiaddr). 
" & + "Automatically classified and distributed to DNS discovery, discv5 bootstrap, " & + "and static nodes. Argument may be repeated.", + name: "entry-node" + .}: seq[string] + staticnodes* {. desc: "Peer multiaddr to directly connect with. Argument may be repeated.", name: "staticnode" @@ -453,7 +474,7 @@ hence would have reachability issues.""", desc: """Adds an extra effort in the delivery/reception of messages by leveraging store-v3 requests. with the drawback of consuming some more bandwidth.""", - defaultValue: false, + defaultValue: true, name: "reliability" .}: bool @@ -907,12 +928,19 @@ proc toNetworkConf( "TWN - The Waku Network configuration will not be applied when `--cluster-id=1` is passed in future releases. Use `--preset=twn` instead." ) lcPreset = "twn" + if clusterId.isSome() and clusterId.get() == 2: + warn( + "Logos.dev - Logos.dev configuration will not be applied when `--cluster-id=2` is passed in future releases. Use `--preset=logos.dev` instead." + ) + lcPreset = "logos.dev" case lcPreset of "": ok(none(NetworkConf)) of "twn": ok(some(NetworkConf.TheWakuNetworkConf())) + of "logos.dev", "logosdev": + ok(some(NetworkConf.LogosDevConf())) else: err("Invalid --preset value passed: " & lcPreset) @@ -982,6 +1010,26 @@ proc toWakuConf*(n: WakuNodeConf): ConfResult[WakuConf] = b.withRelayShardedPeerManagement(n.relayShardedPeerManagement) b.withStaticNodes(n.staticNodes) + # Process entry nodes - supports enrtree:, enr:, and multiaddress formats + if n.entryNodes.len > 0: + let (enrTreeUrls, bootstrapEnrs, staticNodesFromEntry) = processEntryNodes( + n.entryNodes + ).valueOr: + return err("Failed to process entry nodes: " & error) + + # Set ENRTree URLs for DNS discovery + if enrTreeUrls.len > 0: + for url in enrTreeUrls: + b.dnsDiscoveryConf.withEnrTreeUrl(url) + + # Set ENR records as bootstrap nodes for discv5 + if bootstrapEnrs.len > 0: + b.discv5Conf.withBootstrapNodes(bootstrapEnrs) + + # Add static nodes (multiaddrs and those extracted from 
ENR entries) + if staticNodesFromEntry.len > 0: + b.withStaticNodes(staticNodesFromEntry) + if n.numShardsInNetwork != 0: b.withNumShardsInCluster(n.numShardsInNetwork) b.withShardingConf(AutoSharding) @@ -1069,9 +1117,31 @@ proc toWakuConf*(n: WakuNodeConf): ConfResult[WakuConf] = b.webSocketConf.withKeyPath(n.websocketSecureKeyPath) b.webSocketConf.withCertPath(n.websocketSecureCertPath) - b.rateLimitConf.withRateLimits(n.rateLimits) + if n.rateLimits.len > 0: + b.rateLimitConf.withRateLimits(n.rateLimits) b.kademliaDiscoveryConf.withEnabled(n.enableKadDiscovery) b.kademliaDiscoveryConf.withBootstrapNodes(n.kadBootstrapNodes) + # Mode-driven configuration overrides + case n.mode + of WakuMode.Core: + b.withRelay(true) + b.filterServiceConf.withEnabled(true) + b.withLightPush(true) + b.discv5Conf.withEnabled(true) + b.withPeerExchange(true) + b.withRendezvous(true) + b.rateLimitConf.withRateLimitsIfNotAssigned( + @["filter:100/1s", "lightpush:5/1s", "px:5/1s"] + ) + of WakuMode.Edge: + b.withPeerExchange(true) + b.withRelay(false) + b.filterServiceConf.withEnabled(false) + b.withLightPush(false) + b.storeServiceConf.withEnabled(false) + of WakuMode.noMode: + discard # use explicit CLI flags as-is + return b.build() diff --git a/waku/api/entry_nodes.nim b/tools/confutils/entry_nodes.nim similarity index 100% rename from waku/api/entry_nodes.nim rename to tools/confutils/entry_nodes.nim diff --git a/waku/api.nim b/waku/api.nim index 110a8f431..a977a062a 100644 --- a/waku/api.nim +++ b/waku/api.nim @@ -1,4 +1,5 @@ -import ./api/[api, api_conf, entry_nodes] +import ./api/[api, api_conf] import ./events/message_events +import tools/confutils/entry_nodes export api, api_conf, entry_nodes, message_events diff --git a/waku/api/api.nim b/waku/api/api.nim index ba6f83b78..1eee982fd 100644 --- a/waku/api/api.nim +++ b/waku/api/api.nim @@ -1,18 +1,20 @@ -import chronicles, chronos, results, std/strutils +import chronicles, chronos, results import waku/factory/waku import 
waku/[requests/health_requests, waku_core, waku_node] import waku/node/delivery_service/send_service import waku/node/delivery_service/subscription_manager import libp2p/peerid +import ../../tools/confutils/cli_args import ./[api_conf, types] +export cli_args + logScope: topics = "api" -# TODO: Specs says it should return a `WakuNode`. As `send` and other APIs are defined, we can align. -proc createNode*(config: NodeConfig): Future[Result[Waku, string]] {.async.} = - let wakuConf = toWakuConf(config).valueOr: +proc createNode*(conf: WakuNodeConf): Future[Result[Waku, string]] {.async.} = + let wakuConf = conf.toWakuConf().valueOr: return err("Failed to handle the configuration: " & error) ## We are not defining app callbacks at node creation diff --git a/waku/api/api_conf.nim b/waku/api/api_conf.nim index 7cac66426..70bb02af3 100644 --- a/waku/api/api_conf.nim +++ b/waku/api/api_conf.nim @@ -9,7 +9,7 @@ import waku/factory/waku_conf, waku/factory/conf_builder/conf_builder, waku/factory/networks_config, - ./entry_nodes + tools/confutils/entry_nodes export json_serialization, json_options @@ -85,7 +85,9 @@ type WakuMode* {.pure.} = enum Edge Core -type NodeConfig* {.requiresInit.} = object +type NodeConfig* {. + requiresInit, deprecated: "Use WakuNodeConf from tools/confutils/cli_args instead" +.} = object mode: WakuMode protocolsConfig: ProtocolsConfig networkingConfig: NetworkingConfig @@ -154,7 +156,9 @@ proc logLevel*(c: NodeConfig): LogLevel = proc logFormat*(c: NodeConfig): LogFormat = c.logFormat -proc toWakuConf*(nodeConfig: NodeConfig): Result[WakuConf, string] = +proc toWakuConf*( + nodeConfig: NodeConfig +): Result[WakuConf, string] {.deprecated: "Use WakuNodeConf.toWakuConf instead".} = var b = WakuConfBuilder.init() # Apply log configuration @@ -516,7 +520,10 @@ proc readValue*( proc decodeNodeConfigFromJson*( jsonStr: string -): NodeConfig {.raises: [SerializationError].} = +): NodeConfig {. 
+ raises: [SerializationError], + deprecated: "Use WakuNodeConf with fieldPairs-based JSON parsing instead" +.} = var val = NodeConfig.init() # default-initialized try: var stream = unsafeMemoryInput(jsonStr) diff --git a/waku/factory/conf_builder/filter_service_conf_builder.nim b/waku/factory/conf_builder/filter_service_conf_builder.nim index a3f056b01..0a6617430 100644 --- a/waku/factory/conf_builder/filter_service_conf_builder.nim +++ b/waku/factory/conf_builder/filter_service_conf_builder.nim @@ -22,6 +22,12 @@ proc withEnabled*(b: var FilterServiceConfBuilder, enabled: bool) = proc withMaxPeersToServe*(b: var FilterServiceConfBuilder, maxPeersToServe: uint32) = b.maxPeersToServe = some(maxPeersToServe) +proc withMaxPeersToServeIfNotAssigned*( + b: var FilterServiceConfBuilder, maxPeersToServe: uint32 +) = + if b.maxPeersToServe.isNone(): + b.maxPeersToServe = some(maxPeersToServe) + proc withSubscriptionTimeout*( b: var FilterServiceConfBuilder, subscriptionTimeout: uint16 ) = diff --git a/waku/factory/conf_builder/rate_limit_conf_builder.nim b/waku/factory/conf_builder/rate_limit_conf_builder.nim index 0d466a132..b2edbef03 100644 --- a/waku/factory/conf_builder/rate_limit_conf_builder.nim +++ b/waku/factory/conf_builder/rate_limit_conf_builder.nim @@ -14,6 +14,12 @@ proc init*(T: type RateLimitConfBuilder): RateLimitConfBuilder = proc withRateLimits*(b: var RateLimitConfBuilder, rateLimits: seq[string]) = b.strValue = some(rateLimits) +proc withRateLimitsIfNotAssigned*( + b: var RateLimitConfBuilder, rateLimits: seq[string] +) = + if b.strValue.isNone() or b.strValue.get().len == 0: + b.strValue = some(rateLimits) + proc build*(b: RateLimitConfBuilder): Result[ProtocolRateLimitSettings, string] = if b.strValue.isSome() and b.objValue.isSome(): return err("Rate limits conf must only be set once on the builder") diff --git a/waku/factory/conf_builder/waku_conf_builder.nim b/waku/factory/conf_builder/waku_conf_builder.nim index e51f02dbd..2c427918d 100644 --- 
a/waku/factory/conf_builder/waku_conf_builder.nim +++ b/waku/factory/conf_builder/waku_conf_builder.nim @@ -12,7 +12,8 @@ import ../networks_config, ../../common/logging, ../../common/utils/parse_size_units, - ../../waku_enr/capabilities + ../../waku_enr/capabilities, + tools/confutils/entry_nodes import ./filter_service_conf_builder, @@ -393,6 +394,42 @@ proc applyNetworkConf(builder: var WakuConfBuilder) = discarded = builder.discv5Conf.bootstrapNodes builder.discv5Conf.withBootstrapNodes(networkConf.discv5BootstrapNodes) + if networkConf.enableKadDiscovery: + if not builder.kademliaDiscoveryConf.enabled: + builder.kademliaDiscoveryConf.withEnabled(networkConf.enableKadDiscovery) + + if builder.kademliaDiscoveryConf.bootstrapNodes.len == 0 and + networkConf.kadBootstrapNodes.len > 0: + builder.kademliaDiscoveryConf.withBootstrapNodes(networkConf.kadBootstrapNodes) + + if networkConf.mix: + if builder.mix.isNone: + builder.mix = some(networkConf.mix) + + if builder.p2pReliability.isNone: + builder.withP2pReliability(networkConf.p2pReliability) + + # Process entry nodes from network config - classify and distribute + if networkConf.entryNodes.len > 0: + let processed = processEntryNodes(networkConf.entryNodes) + if processed.isOk(): + let (enrTreeUrls, bootstrapEnrs, staticNodesFromEntry) = processed.get() + + # Set ENRTree URLs for DNS discovery + if enrTreeUrls.len > 0: + for url in enrTreeUrls: + builder.dnsDiscoveryConf.withEnrTreeUrl(url) + + # Set ENR records as bootstrap nodes for discv5 + if bootstrapEnrs.len > 0: + builder.discv5Conf.withBootstrapNodes(bootstrapEnrs) + + # Add static nodes (multiaddrs and those extracted from ENR entries) + if staticNodesFromEntry.len > 0: + builder.withStaticNodes(staticNodesFromEntry) + else: + warn "Failed to process entry nodes from network conf", error = processed.error() + proc build*( builder: var WakuConfBuilder, rng: ref HmacDrbgContext = crypto.newRng() ): Result[WakuConf, string] = @@ -606,7 +643,7 @@ proc 
build*( provided = maxConnections, recommended = DefaultMaxConnections # TODO: Do the git version thing here - let agentString = builder.agentString.get("nwaku") + let agentString = builder.agentString.get("logos-delivery") # TODO: use `DefaultColocationLimit`. the user of this value should # probably be defining a config object diff --git a/waku/factory/networks_config.nim b/waku/factory/networks_config.nim index c7193aa9c..94856fb21 100644 --- a/waku/factory/networks_config.nim +++ b/waku/factory/networks_config.nim @@ -29,6 +29,11 @@ type NetworkConf* = object shardingConf*: ShardingConf discv5Discovery*: bool discv5BootstrapNodes*: seq[string] + enableKadDiscovery*: bool + kadBootstrapNodes*: seq[string] + entryNodes*: seq[string] + mix*: bool + p2pReliability*: bool # cluster-id=1 (aka The Waku Network) # Cluster configuration corresponding to The Waku Network. Note that it @@ -45,6 +50,11 @@ proc TheWakuNetworkConf*(T: type NetworkConf): NetworkConf = rlnEpochSizeSec: 600, rlnRelayUserMessageLimit: 100, shardingConf: ShardingConf(kind: AutoSharding, numShardsInCluster: 8), + enableKadDiscovery: false, + kadBootstrapNodes: @[], + entryNodes: @[], + mix: false, + p2pReliability: false, discv5Discovery: true, discv5BootstrapNodes: @[ @@ -54,6 +64,36 @@ proc TheWakuNetworkConf*(T: type NetworkConf): NetworkConf = ], ) +# cluster-id=2 (Logos Dev Network) +# Cluster configuration for the Logos Dev Network. 
+proc LogosDevConf*(T: type NetworkConf): NetworkConf = + const ZeroChainId = 0'u256 + return NetworkConf( + maxMessageSize: "150KiB", + clusterId: 2, + rlnRelay: false, + rlnRelayEthContractAddress: "", + rlnRelayDynamic: false, + rlnRelayChainId: ZeroChainId, + rlnEpochSizeSec: 0, + rlnRelayUserMessageLimit: 0, + shardingConf: ShardingConf(kind: AutoSharding, numShardsInCluster: 8), + enableKadDiscovery: true, + mix: true, + p2pReliability: true, + discv5Discovery: true, + discv5BootstrapNodes: @[], + entryNodes: + @[ + "/dns4/delivery-01.do-ams3.logos.dev.status.im/tcp/30303/p2p/16Uiu2HAmTUbnxLGT9JvV6mu9oPyDjqHK4Phs1VDJNUgESgNSkuby", + "/dns4/delivery-02.do-ams3.logos.dev.status.im/tcp/30303/p2p/16Uiu2HAmMK7PYygBtKUQ8EHp7EfaD3bCEsJrkFooK8RQ2PVpJprH", + "/dns4/delivery-01.gc-us-central1-a.logos.dev.status.im/tcp/30303/p2p/16Uiu2HAm4S1JYkuzDKLKQvwgAhZKs9otxXqt8SCGtB4hoJP1S397", + "/dns4/delivery-02.gc-us-central1-a.logos.dev.status.im/tcp/30303/p2p/16Uiu2HAm8Y9kgBNtjxvCnf1X6gnZJW5EGE4UwwCL3CCm55TwqBiH", + "/dns4/delivery-01.ac-cn-hongkong-c.logos.dev.status.im/tcp/30303/p2p/16Uiu2HAm8YokiNun9BkeA1ZRmhLbtNUvcwRr64F69tYj9fkGyuEP", + "/dns4/delivery-02.ac-cn-hongkong-c.logos.dev.status.im/tcp/30303/p2p/16Uiu2HAkvwhGHKNry6LACrB8TmEFoCJKEX29XR5dDUzk3UT3UNSE", + ], + ) + proc validateShards*( shardingConf: ShardingConf, shards: seq[uint16] ): Result[void, string] = diff --git a/waku/rest_api/endpoint/builder.nim b/waku/rest_api/endpoint/builder.nim index bbd8de422..41ab7e06b 100644 --- a/waku/rest_api/endpoint/builder.nim +++ b/waku/rest_api/endpoint/builder.nim @@ -28,7 +28,6 @@ import # It will always be called from main thread anyway. 
# Ref: https://nim-lang.org/docs/manual.html#threads-gc-safety var restServerNotInstalledTab {.threadvar.}: TableRef[string, string] -restServerNotInstalledTab = newTable[string, string]() export WakuRestServerRef @@ -42,6 +41,9 @@ type RestServerConf* = object proc startRestServerEssentials*( nodeHealthMonitor: NodeHealthMonitor, conf: RestServerConf, portsShift: uint16 ): Result[WakuRestServerRef, string] = + if restServerNotInstalledTab.isNil: + restServerNotInstalledTab = newTable[string, string]() + let requestErrorHandler: RestRequestErrorHandler = proc( error: RestRequestError, request: HttpRequestRef ): Future[HttpResponseRef] {.async: (raises: [CancelledError]).} = From 4a6ad732356c99d00c5f0143df8a470b84b6281d Mon Sep 17 00:00:00 2001 From: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com> Date: Tue, 3 Mar 2026 23:48:00 +0100 Subject: [PATCH 69/70] Chore: adapt debugapi to wakonodeconf (#3745) * logosdelivery_get_available_configs collects and format WakuNodeConf options * simplify debug config output --- .../logos_delivery_api/debug_api.nim | 22 ++- tools/confutils/cli_args.nim | 4 +- tools/confutils/config_option_meta.nim | 143 ++++++++++++++++++ 3 files changed, 161 insertions(+), 8 deletions(-) create mode 100644 tools/confutils/config_option_meta.nim diff --git a/liblogosdelivery/logos_delivery_api/debug_api.nim b/liblogosdelivery/logos_delivery_api/debug_api.nim index bee8ab537..623b3b08f 100644 --- a/liblogosdelivery/logos_delivery_api/debug_api.nim +++ b/liblogosdelivery/logos_delivery_api/debug_api.nim @@ -1,5 +1,6 @@ import std/[json, strutils] import waku/factory/waku_state_info +import tools/confutils/[cli_args, config_option_meta] proc logosdelivery_get_available_node_info_ids( ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer @@ -33,14 +34,21 @@ proc logosdelivery_get_available_configs( ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer ) {.ffi.} = ## Returns information about the accepted 
config items.
-  ## For analogy with a CLI app, this is the info when typing --help for a command.
   requireInitializedNode(ctx, "GetAvailableConfigs"):
     return err(errMsg)
 
-  ## TODO: we are now returning a simple default value for NodeConfig.
-  ## The NodeConfig struct is too complex and we need to have a flattened simpler config.
-  ## The expected returned value for this is a list of possible config items with their
-  ## description, accepted values, default value, etc.
+  let optionMetas: seq[ConfigOptionMeta] = extractConfigOptionMeta(WakuNodeConf)
+  var configOptionDetails = newJArray()
 
-  let defaultConfig = NodeConfig.init()
-  return ok($(%*defaultConfig))
+  # Format each option as "fieldName": "Type(default)" plus its CLI
+  # description, mirroring the information `--help` prints for the CLI.
+
+  for meta in optionMetas:
+    configOptionDetails.add(
+      %*{meta.fieldName: meta.typeName & "(" & meta.defaultValue & ")", "desc": meta.desc}
+    )
+
+  var jsonNode = newJObject()
+  jsonNode["configOptions"] = configOptionDetails
+  let asString = pretty(jsonNode)
+  return ok(asString)
diff --git a/tools/confutils/cli_args.nim b/tools/confutils/cli_args.nim
index d8bd9b7f5..4a6e8c618 100644
--- a/tools/confutils/cli_args.nim
+++ b/tools/confutils/cli_args.nim
@@ -480,7 +480,9 @@ with the drawback of consuming some more bandwidth.""",
   ## REST HTTP config
   rest* {.
-    desc: "Enable Waku REST HTTP server: true|false", defaultValue: true, name: "rest"
+    desc: "Enable Waku REST HTTP server: true|false",
+    defaultValue: false,
+    name: "rest"
   .}: bool
 
   restAddress* {.
diff --git a/tools/confutils/config_option_meta.nim b/tools/confutils/config_option_meta.nim new file mode 100644 index 000000000..1880fdef5 --- /dev/null +++ b/tools/confutils/config_option_meta.nim @@ -0,0 +1,143 @@ +import std/[macros] + +type ConfigOptionMeta* = object + fieldName*: string + typeName*: string + cliName*: string + desc*: string + defaultValue*: string + command*: string + +proc getPragmaValue(pragmaNode: NimNode, pragmaName: string): string {.compileTime.} = + if pragmaNode.kind != nnkPragma: + return "" + + for item in pragmaNode: + if item.kind == nnkExprColonExpr and item[0].eqIdent(pragmaName): + return item[1].repr + + return "" + +proc getFieldName(fieldNode: NimNode): string {.compileTime.} = + case fieldNode.kind + of nnkPragmaExpr: + if fieldNode.len >= 1: + return getFieldName(fieldNode[0]) + of nnkPostfix: + if fieldNode.len >= 2: + return getFieldName(fieldNode[1]) + of nnkIdent, nnkSym: + return fieldNode.strVal + else: + discard + + return fieldNode.repr + +proc getFieldAndPragma( + fieldDef: NimNode +): tuple[fieldName, typeName: string, pragmaNode: NimNode] {.compileTime.} = + if fieldDef.kind != nnkIdentDefs: + return ("", "", newNimNode(nnkEmpty)) + + let declaredField = fieldDef[0] + var typeNode = fieldDef[1] + var pragmaNode = newNimNode(nnkEmpty) + + if declaredField.kind == nnkPragmaExpr: + pragmaNode = declaredField[1] + elif typeNode.kind == nnkPragmaExpr: + pragmaNode = typeNode[1] + typeNode = typeNode[0] + + return (getFieldName(declaredField), typeNode.repr, pragmaNode) + +proc makeMetaNode( + fieldName, typeName, cliName, desc, defaultValue, command: string +): NimNode {.compileTime.} = + result = newTree( + nnkObjConstr, + ident("ConfigOptionMeta"), + newTree(nnkExprColonExpr, ident("fieldName"), newLit(fieldName)), + newTree(nnkExprColonExpr, ident("typeName"), newLit(typeName)), + newTree(nnkExprColonExpr, ident("cliName"), newLit(cliName)), + newTree(nnkExprColonExpr, ident("desc"), newLit(desc)), + 
newTree(nnkExprColonExpr, ident("defaultValue"), newLit(defaultValue)), + newTree(nnkExprColonExpr, ident("command"), newLit(command)), + ) + +macro extractConfigOptionMeta*(T: typedesc): untyped = + proc findFirstRecList(n: NimNode): NimNode {.compileTime.} = + if n.kind == nnkRecList: + return n + for child in n: + let found = findFirstRecList(child) + if not found.isNil: + return found + return nil + + proc collectRecList( + recList: NimNode, metas: var seq[NimNode], commandCtx: string + ) {.compileTime.} = + for child in recList: + case child.kind + of nnkIdentDefs: + let (fieldName, typeName, pragmaNode) = getFieldAndPragma(child) + if fieldName.len == 0: + continue + let cliName = block: + let n = getPragmaValue(pragmaNode, "name") + if n.len > 0: n else: fieldName + let desc = getPragmaValue(pragmaNode, "desc") + let defaultValue = getPragmaValue(pragmaNode, "defaultValue") + metas.add( + makeMetaNode(fieldName, typeName, cliName, desc, defaultValue, commandCtx) + ) + of nnkRecCase: + let discriminator = child[0] + if discriminator.kind == nnkIdentDefs: + let (fieldName, typeName, pragmaNode) = getFieldAndPragma(discriminator) + if fieldName.len > 0: + let cliName = block: + let n = getPragmaValue(pragmaNode, "name") + if n.len > 0: n else: fieldName + let desc = getPragmaValue(pragmaNode, "desc") + let defaultValue = getPragmaValue(pragmaNode, "defaultValue") + metas.add( + makeMetaNode(fieldName, typeName, cliName, desc, defaultValue, commandCtx) + ) + + for i in 1 ..< child.len: + let branch = child[i] + case branch.kind + of nnkOfBranch: + let branchCtx = branch[0].repr + for j in 1 ..< branch.len: + if branch[j].kind == nnkRecList: + collectRecList(branch[j], metas, branchCtx) + of nnkElse: + for j in 0 ..< branch.len: + if branch[j].kind == nnkRecList: + collectRecList(branch[j], metas, commandCtx) + else: + discard + else: + discard + + let typeInst = getTypeInst(T) + var targetType = T + if typeInst.kind == nnkBracketExpr and typeInst.len >= 2: + 
targetType = typeInst[1] + + let typeImpl = getImpl(targetType) + let recList = findFirstRecList(typeImpl) + if recList.isNil: + return newTree(nnkPrefix, ident("@"), newNimNode(nnkBracket)) + + var metas: seq[NimNode] = @[] + collectRecList(recList, metas, "") + + let bracket = newNimNode(nnkBracket) + for node in metas: + bracket.add(node) + + result = newTree(nnkPrefix, ident("@"), bracket) From 0ad55159b39ed74ba6b88b5a9d72db766b4b0f08 Mon Sep 17 00:00:00 2001 From: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com> Date: Wed, 4 Mar 2026 09:48:48 +0100 Subject: [PATCH 70/70] liblogosdelivery supports MessageReceivedEvent propagation over FFI. Adjusted example. (#3747) --- Makefile | 3 + liblogosdelivery/examples/json_utils.c | 96 +++++++++++++++++++ liblogosdelivery/examples/json_utils.h | 21 ++++ .../examples/logosdelivery_example.c | 84 ++++++++++------ .../logos_delivery_api/node_api.nim | 17 +++- 5 files changed, 188 insertions(+), 33 deletions(-) create mode 100644 liblogosdelivery/examples/json_utils.c create mode 100644 liblogosdelivery/examples/json_utils.h diff --git a/Makefile b/Makefile index 4fafd6310..8f98e90bd 100644 --- a/Makefile +++ b/Makefile @@ -469,6 +469,7 @@ logosdelivery_example: | build liblogosdelivery ifeq ($(detected_OS),Darwin) gcc -o build/$@ \ liblogosdelivery/examples/logosdelivery_example.c \ + liblogosdelivery/examples/json_utils.c \ -I./liblogosdelivery \ -L./build \ -llogosdelivery \ @@ -476,6 +477,7 @@ ifeq ($(detected_OS),Darwin) else ifeq ($(detected_OS),Linux) gcc -o build/$@ \ liblogosdelivery/examples/logosdelivery_example.c \ + liblogosdelivery/examples/json_utils.c \ -I./liblogosdelivery \ -L./build \ -llogosdelivery \ @@ -483,6 +485,7 @@ else ifeq ($(detected_OS),Linux) else ifeq ($(detected_OS),Windows) gcc -o build/$@.exe \ liblogosdelivery/examples/logosdelivery_example.c \ + liblogosdelivery/examples/json_utils.c \ -I./liblogosdelivery \ -L./build \ -llogosdelivery \ diff --git 
a/liblogosdelivery/examples/json_utils.c b/liblogosdelivery/examples/json_utils.c new file mode 100644 index 000000000..8b33bb648 --- /dev/null +++ b/liblogosdelivery/examples/json_utils.c @@ -0,0 +1,96 @@ +#include "json_utils.h" +#include <stdio.h> +#include <string.h> + +const char* extract_json_field(const char *json, const char *field, char *buffer, size_t bufSize) { + char searchStr[256]; + snprintf(searchStr, sizeof(searchStr), "\"%s\":\"", field); + + const char *start = strstr(json, searchStr); + if (!start) { + return NULL; + } + + start += strlen(searchStr); + const char *end = strchr(start, '"'); + if (!end) { + return NULL; + } + + size_t len = end - start; + if (len >= bufSize) { + len = bufSize - 1; + } + + memcpy(buffer, start, len); + buffer[len] = '\0'; + + return buffer; +} + +const char* extract_json_object(const char *json, const char *field, size_t *outLen) { + char searchStr[256]; + snprintf(searchStr, sizeof(searchStr), "\"%s\":{", field); + + const char *start = strstr(json, searchStr); + if (!start) { + return NULL; + } + + // Advance to the opening brace + start = strchr(start, '{'); + if (!start) { + return NULL; + } + + // Find the matching closing brace (handles nested braces) + int depth = 0; + const char *p = start; + while (*p) { + if (*p == '{') depth++; + else if (*p == '}') { + depth--; + if (depth == 0) { + *outLen = (size_t)(p - start + 1); + return start; + } + } + p++; + } + return NULL; +} + +int decode_json_byte_array(const char *json, const char *field, char *buffer, size_t bufSize) { + char searchStr[256]; + snprintf(searchStr, sizeof(searchStr), "\"%s\":[", field); + + const char *start = strstr(json, searchStr); + if (!start) { + return -1; + } + + // Advance to the opening bracket + start = strchr(start, '['); + if (!start) { + return -1; + } + start++; // skip '[' + + size_t pos = 0; + const char *p = start; + while (*p && *p != ']' && pos < bufSize - 1) { + // Skip whitespace and commas + while (*p == ' ' || *p == ',' || *p == '\n' || *p
== '\r' || *p == '\t') p++; + if (*p == ']') break; + + // Parse integer + int val = 0; + while (*p >= '0' && *p <= '9') { + val = val * 10 + (*p - '0'); + p++; + } + buffer[pos++] = (char)val; + } + buffer[pos] = '\0'; + return (int)pos; +} diff --git a/liblogosdelivery/examples/json_utils.h b/liblogosdelivery/examples/json_utils.h new file mode 100644 index 000000000..4039ca4f6 --- /dev/null +++ b/liblogosdelivery/examples/json_utils.h @@ -0,0 +1,21 @@ +#ifndef JSON_UTILS_H +#define JSON_UTILS_H + +#include <stddef.h> + +// Extract a JSON string field value into buffer. +// Returns pointer to buffer on success, NULL on failure. +// Very basic parser - for production use a proper JSON library. +const char* extract_json_field(const char *json, const char *field, char *buffer, size_t bufSize); + +// Extract a nested JSON object as a raw string. +// Returns a pointer into `json` at the start of the object, and sets `outLen`. +// Handles nested braces. +const char* extract_json_object(const char *json, const char *field, size_t *outLen); + +// Decode a JSON array of integers (byte values) into a buffer. +// Parses e.g. [72,101,108,108,111] into "Hello". +// Returns number of bytes decoded, or -1 on error. 
+int decode_json_byte_array(const char *json, const char *field, char *buffer, size_t bufSize); + +#endif // JSON_UTILS_H diff --git a/liblogosdelivery/examples/logosdelivery_example.c b/liblogosdelivery/examples/logosdelivery_example.c index 61333f84d..729f7f0dc 100644 --- a/liblogosdelivery/examples/logosdelivery_example.c +++ b/liblogosdelivery/examples/logosdelivery_example.c @@ -1,4 +1,5 @@ #include "../liblogosdelivery.h" +#include "json_utils.h" #include <stdio.h> #include <string.h> #include <unistd.h> @@ -6,33 +7,10 @@ static int create_node_ok = -1; -// Helper function to extract a JSON string field value -// Very basic parser - for production use a proper JSON library -const char* extract_json_field(const char *json, const char *field, char *buffer, size_t bufSize) { - char searchStr[256]; - snprintf(searchStr, sizeof(searchStr), "\"%s\":\"", field); - - const char *start = strstr(json, searchStr); - if (!start) { - return NULL; - } - - start += strlen(searchStr); - const char *end = strchr(start, '"'); - if (!end) { - return NULL; - } - - size_t len = end - start; - if (len >= bufSize) { - len = bufSize - 1; - } - - memcpy(buffer, start, len); - buffer[len] = '\0'; - - return buffer; -} +// Flags set by event callback, polled by main thread +static volatile int got_message_sent = 0; +static volatile int got_message_error = 0; +static volatile int got_message_received = 0; // Event callback that handles message events void event_callback(int ret, const char *msg, size_t len, void *userData) { @@ -62,6 +40,7 @@ void event_callback(int ret, const char *msg, size_t len, void *userData) { extract_json_field(eventJson, "requestId", requestId, sizeof(requestId)); extract_json_field(eventJson, "messageHash", messageHash, sizeof(messageHash)); printf("[EVENT] Message sent - RequestID: %s, Hash: %s\n", requestId, messageHash); + got_message_sent = 1; } else if (strcmp(eventType, "message_error") == 0) { char requestId[128]; @@ -72,6 +51,7 @@ void event_callback(int ret, const char *msg, size_t 
len, void *userData) { extract_json_field(eventJson, "error", error, sizeof(error)); printf("[EVENT] Message error - RequestID: %s, Hash: %s, Error: %s\n", requestId, messageHash, error); + got_message_error = 1; } else if (strcmp(eventType, "message_propagated") == 0) { char requestId[128]; @@ -85,6 +65,41 @@ void event_callback(int ret, const char *msg, size_t len, void *userData) { extract_json_field(eventJson, "connectionStatus", connectionStatus, sizeof(connectionStatus)); printf("[EVENT] Connection status change - Status: %s\n", connectionStatus); + } else if (strcmp(eventType, "message_received") == 0) { + char messageHash[128]; + extract_json_field(eventJson, "messageHash", messageHash, sizeof(messageHash)); + + // Extract the nested "message" object + size_t msgObjLen = 0; + const char *msgObj = extract_json_object(eventJson, "message", &msgObjLen); + if (msgObj) { + // Make a null-terminated copy of the message object + char *msgJson = malloc(msgObjLen + 1); + if (msgJson) { + memcpy(msgJson, msgObj, msgObjLen); + msgJson[msgObjLen] = '\0'; + + char contentTopic[256]; + extract_json_field(msgJson, "contentTopic", contentTopic, sizeof(contentTopic)); + + // Decode payload from JSON byte array to string + char payload[4096]; + int payloadLen = decode_json_byte_array(msgJson, "payload", payload, sizeof(payload)); + + printf("[EVENT] Message received - Hash: %s, ContentTopic: %s\n", messageHash, contentTopic); + if (payloadLen > 0) { + printf(" Payload (%d bytes): %.*s\n", payloadLen, payloadLen, payload); + } else { + printf(" Payload: (empty or could not decode)\n"); + } + + free(msgJson); + } + } else { + printf("[EVENT] Message received - Hash: %s (could not parse message)\n", messageHash); + } + got_message_received = 1; + } else { printf("[EVENT] Unknown event type: %s\n", eventType); } @@ -146,7 +161,7 @@ int main() { logosdelivery_start_node(ctx, simple_callback, (void *)"start_node"); // Wait for node to start - sleep(10); + sleep(5); printf("\n4. 
Subscribing to content topic...\n"); const char *contentTopic = "/example/1/chat/proto"; @@ -181,9 +196,18 @@ "}"; logosdelivery_send(ctx, simple_callback, (void *)"send", message); - // Wait for message events to arrive + // Poll for terminal message events (sent, error, or received) with timeout printf("Waiting for message delivery events...\n"); - sleep(60); + int timeout_sec = 60; + int elapsed = 0; + while (!(got_message_sent || got_message_error || got_message_received) + && elapsed < timeout_sec * 10) { + usleep(100000); // 100ms per tick, 10 ticks per second + elapsed++; + } + if (elapsed >= timeout_sec * 10) { + printf("Timed out waiting for message events after %d seconds\n", timeout_sec); + } printf("\n7. Unsubscribing from content topic...\n"); logosdelivery_unsubscribe(ctx, simple_callback, (void *)"unsubscribe", contentTopic); diff --git a/liblogosdelivery/logos_delivery_api/node_api.nim b/liblogosdelivery/logos_delivery_api/node_api.nim index 1835f75b5..cd644abd7 100644 --- a/liblogosdelivery/logos_delivery_api/node_api.nim +++ b/liblogosdelivery/logos_delivery_api/node_api.nim @@ -35,8 +35,8 @@ registerReqFFI(CreateNodeRequest, ctx: ptr FFIContext[Waku]): confValue = parseCmdArg(typeof(confValue), formattedString) except Exception: return err( - "Failed to parse field '" & confField & "': " & - getCurrentExceptionMsg() & ". Value: " & formattedString + "Failed to parse field '" & confField & "': " & getCurrentExceptionMsg() & + ". 
Value: " & formattedString ) # Create the node @@ -86,7 +86,8 @@ proc logosdelivery_create_node( callback(RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), userData) # free allocated resources as they won't be available ffi.destroyFFIContext(ctx).isOkOr: - chronicles.error "Error in destroyFFIContext after sendRequestToFFIThread during creation", err = $error + chronicles.error "Error in destroyFFIContext after sendRequestToFFIThread during creation", + err = $error return nil return ctx @@ -125,6 +126,15 @@ proc logosdelivery_start_node( chronicles.error "MessagePropagatedEvent.listen failed", err = $error return err("MessagePropagatedEvent.listen failed: " & $error) + let receivedListener = MessageReceivedEvent.listen( + ctx.myLib[].brokerCtx, + proc(event: MessageReceivedEvent) {.async: (raises: []).} = + callEventCallback(ctx, "onMessageReceived"): + $newJsonEvent("message_received", event), + ).valueOr: + chronicles.error "MessageReceivedEvent.listen failed", err = $error + return err("MessageReceivedEvent.listen failed: " & $error) + let ConnectionStatusChangeListener = EventConnectionStatusChange.listen( ctx.myLib[].brokerCtx, proc(event: EventConnectionStatusChange) {.async: (raises: []).} = @@ -149,6 +159,7 @@ proc logosdelivery_stop_node( MessageErrorEvent.dropAllListeners(ctx.myLib[].brokerCtx) MessageSentEvent.dropAllListeners(ctx.myLib[].brokerCtx) MessagePropagatedEvent.dropAllListeners(ctx.myLib[].brokerCtx) + MessageReceivedEvent.dropAllListeners(ctx.myLib[].brokerCtx) EventConnectionStatusChange.dropAllListeners(ctx.myLib[].brokerCtx) (await ctx.myLib[].stop()).isOkOr:

z)ZyKyrC5lDRRdS_+3m&_`1}Lxs>GRW`a)GH#eJi(`{swcVe|1jwpQ_xc*UULp{%Atj+H{`zzFhRwFP#+teCn1nDjA+%krq&wXW}A12E}qubyJkV2r^#_ysbY&dDpnwIO<9JeWctp{1wwI- z(NrK7v{M0UPHD>vb=P`vgxmLbE0}`}Ow+?tIKbGS(e`EAf4INIxaHmenDp_UgOhD` zuE(wu(>wn2pzsA0hEpa+*$ag?pU=ZcQ=5^HW%~75sQRPjqlb3a17r3nAV%I!p1l=VJGa0e^ zbIZuPnYhz?jt{No1l zE>XiO&iKxy6Re6~LK}DGQ8HshDqpEf){fB*00pJQuF2VyN^Y*T!ioF-Gdw+PSBNae@-j-@`K@WshfTtW~wEoBuCrfPcBxzMd5 zPY4&Y{ne>_u+rV^Wx+dY*z64!-5ABD#tDd5MSp^14$J>+3?aH^g}lg$DuiCyDSG8RJ$ov)xK5ats3pmslmL#kZtSW${wC2 zk>x((l`5v^yJ{T!vMlxc`x!GaWt^&W+f4hq_(R<47vCFwU7>C(z=tMjVb-0T&I^;` zBlmHuij-YXHaw$F2l3d=0ej_$fhZo1|66Fp=f=yMfD?{g^~j5^8&SK*w6(F%ZWTY- z6MG~5f0XnbgU|WCTQp2Wu3M;K@whj7h8$>PjJC`s@tK8~!iV2qSTveFj$Pn=a^?qoZ?*GZkC&J-fntWzRn|QPtI{Z4{YXC?^uUT zZ$(BwxeSI@kZQPQ-H%~-+}PMN(F9r*wY5lHTc4?OWd}9Oa6i0A4YIr`|0Q7 zBhmWT6yg&6z#@tV~i&Gs(FbPu`MT)w$B_z(BsGta4u#evB?NB6m0U zU9J8Qm5r<<$34y5$@XE}BijlM4GxK&i`6nXJf1>68!vSzq@paBR)t}rGo`SDbh07*_ls>a% z-aWfg$~bEW+aj2U?!(RHW;)hPo@o_4TbDcWPyG^58!1=&#J32nRgUxD4n+&q9kQT+ z9r@f{X7aKU&&75J{>NQ$QbhVb&yf?n%x6|+nx(k;XvG6Sp=Zpe%_HjO92~6AOmv#D z6{6yn3U1lRd+tFtS5dBuPhzTSy4rNd$0dyMq9=&u^9;w4X(Se&P0H)3$&9zz4x+Hy z#B`D8eJsaxkhj67c45R_l19Ze0=5Uyz=~L`w%+K3kN#X9qjvHnfVul*||Qfw?j-r{oVlJJAPtTxRczmqeebL zWV$>1+p&cn#JgJ8=pS4jG2BCv+sw+Wb9X0AF4x;){6|b1YtEcGv5r|Wue_K>nBMEQ zvnX}(a-UhTPoBA9N{n%MF&|@xD@r5v0tqqOw||m-GSxXQ=(k4>P?Q;}9eDn?Qv((*zu4R`I|+wtHB>cI-3y}FcHf5+ z-y~`;-+dgRDpu+K+daQaK^Xw>>mDap9^K~7pidY&ln6!LP5F~-&D!}{!; zxXkxAyzThE+_lDF@KT1O{!%2Y zW?&!Q?nQ<34W{U*{^tg(2xFRy$bOuIm8lNnv+9T_XuFCvN@d0F@1c?g&J^fPjEP z%UEHFaOvmS84+4|sr(9PyGp2H>Spj4iCR~to-2 z*Ykakg{liXc;*Y~g4&qKQ5 z{%zu9z6K!_(dm8pyLS1z+~+HdWenkL*w6d(`{bSVOZ@$@B$e9lJ>h2$j^>KK*ykX5 z;sWILnxy8Zu=xInnUDIXh4ug*=)U>OkN<$8+eW~aE6OlBG9N>uD55+FG}*t}*}Por z4a{|;*OqvZM5*j-grX5qHGpCR(T^LMlPSuKIktqA!^e$I(1ac-vLl?)Ee@LGs1it$ zzIB$nqXr_hS*U#)4VPt!CiAPB3E?;bj8MoGO9}jv@MRT9b=2lHz~@-IvANyrESoI` z0(JSDu0{G_Ksb7Ux|+9x0#$`-XraiON)^MTfMp4kh*D4B)!#m`Vg|ay3Z%E8paZ*x z5h0l{E?l`_-W|VzG|2~=vO^lq)+U&0p+Rmq!#bFfPB=VZ`2VqW&%u>+?V`YA+n8u# 
z+fF9t#L13r8xz~MZQHgdwr%_7eNTPoo?GYswQ6^-)w@<#ckkY-p6B@i9Lk?(7jPM( zZ0G0cqn%pqpqvi4<>(%UUgf)=&sNE*yl$^OqPyvUZ%OtTrVW0`77;<~7~l)?5AxXC zfS*FZDz=dkPP$wOf&}Q(9vb@n)jpO~z*u%)pJ7^5V{UH7@{@Ntv}--m@ppT7`hrj& z+WNZk*8cXtr$ zd{ue9cUbF5gYo@{Cg}Fce!ti|%Knq3>!eT1Thpk3sqjOF)`HMiSKLH5#z6t6#T zo30Af6Y%i6MW=xkE)(0`l7@6bqb6Ho){?njex`-;I?nuLbp2MI>RF^lSW~gxpOe&? zerGrALUoI<5s+2rqyyGwM}LsuUiMG%{(bs!Hf;40L+z z?YfM!LE7mr9kR;i9&ju#9$Mq~a+B9Z|9rX_-;{X8kWT4oHKIAa@*3PB$L^=TM4sTFmn2Eh#>;mRii4>*@2Cgz&pVqL-^S2$d zhXSb|^>$_gWcQYft$_HH$%Vx{t>&x|X(Y;%nTm0E7^B zy}Fgl#j*GmLHrCca5U#u2Xf34E-{s z06}JSM0xf(MC{4VQovK#+SUH9`|;X0i;(Jg>e=&>JP~2|%_879V$Bg>3l`jiaKQmf z|3e}ifU_m5K?@Mkj2aF_Jj9jL&An}d{1U6Ah82~XCpt1qt|&`2Y8RR#8|(wIOqY@l zU#x@SG*NB@S&66V6S`oiS$Di={B?){xu87g@GMVK=n$@fX`98IoU*K`uYZMdHp5Ly zyi-}%_`7&$QtziU{bPNW7q6o+Zv9zr+Bjyz($#{<86jDby@h3QW_E38P)9pO` z=FHDsAg8VQtiL%2Ld2z%`!zwV7RS+(Q=Cz zB$~OS-%`U)`n2jTxr)sxD3vs^VZ+dU5;yj;HQZ4|4u-I8$zmB}5+61@U4<=`3+YcL=vc1pA5RX!S@k_vV3Xqu%;uvIF5WBH zy-RZwUldO_`gUP_=G&=YCCMg*P3ftziz%#1FAuc;IknPnu>D((##ZTHF3p;*^}~p& zQ$jP#?RQ$QS3?D_j_N`_(S~{O8enY>-6EZiW^VK}q5qT=<4q+3N7JK;7lVe)!PUjI zpaJN5<6Nq*u<}leDx{w&S)Y3Q@NNTVy3%T`V;-kDqLEEL&KbJM;VBcX5kV#0hhHWIs=F{ho>+Pq#lQ%aA zxJ~V}L&t2{E80k=4g9EujA_@qgmoy5#ygx}JanrTf?Flfc}sYP8ZUfU^U-OC_ml`N8cWZ2MyeqEInU)Yj?0Gg2d zkz@Pz(#iJNEXd?jw9IQMi7C6|UvR>^wG%)8Om1>Ww&I!gQjAj=ms|ApZOA){tPblj zDzseGJj-zB+&%6X9#J<5&@KWBJ~YPg+4E(8%$j(_N*U;Sz(KlxcxkT!m*Nh`ao-$`{?mEGGl2RGTi?kVjdP(Md6o@#Owu_I z?CAb1L#!fDgIokwQKRu8C{JU;r4yS930q=^E}Vm3dR}z(wSf3iK+KXj7+!8R)*@1K z(Ju>h4byy-NHxOATm~HHpa@}4G;H7GYQkBe#Z-=`wPyvJJ3VZttg=%6Kq12c4HFs3 zldTM5#dZWU$6U%IJ+e;644LWagF}RRiE?%I=L7q%aZOtK9HYw~Ci=73B$+R8 z{u68Rbho6E<(ur5J^p6cjG0WGW<4`+ZZUPlMwWWK`~cjvw6WsnDbosZog4J2 z`RlEn1cF$c;SQtU&zSBcN(fK4VJKt!}z9>-w({@V0}46>-E*^5-&ROY2FJKq==naz3+DcxR>jUPr}u? 
zY+|cqjVdr+vtTL%>=Qeq3Nwl-_+bHX;zb(j6Nc|>u)y_tGJyBnhKFTyHn-7>{lar> zNZYC3j0h}3-ek;~nM~c47nY24sw?*ORno6E$&VF&l%sz>_s1FlCr2>AwP=dkw#3Qm z=0_B*#>|}Bnj~@~uqGLML63vefV~$OAwb0ebCq?opNO{G0!&>t^~k3gh1M*b*0nz~ zVBEIdt$cq+MHzc)%7ap)!?u=**gI9MYe#(4xph>e1^p3GZ!o2h10~DWNAovy0!#Pk zXyk7CxC8ZhSZ0fwn$SBo)=(as$=#HejZ4lmm)_0HiS{X_e_1{LyPCuVWcc2lYG>u; zkD$VUmFJueoTsX)BDwee>7(r}^cAFHJ_g$adX!c=w*50w63i0`t2xDoPl&WDg5nqF zdic8My^yYb<}o*yLgbNr1p%$Ke7Zp@px+W>o9kq#oqr)*-D@*0PeKd&aBCds3;?x7u9RUP;A!(*UNX3qjPw34!-W*Lo<0*fGB` zN#zX!$q~GAa!tPSIBSS?&(p4wl$zz<_#BMD%`}qaK>5Vh34!@Y7ujaY_iGrZfnNQU zrp!6U6F8~FLH}pKg(JK#sQ3)*3>iRyY8HTNK{s3G=!C$1csxdP1PqPEF#}Siytt8c z+%WJ_4}{Z>O@?VuoBL~9psYw}I-g*{OapCR zbIMF z*VnUJRW%iR^uxK6fY)b=>R=(n9-43+iGvB3Jryc0UDF}bjyV61PE%{-+(s!2Vy5iP68API1#&0%FvtxOkr{swnJN(bZyUoQR=F=_A_!8&dMD|ID>Uw#PT01fN|&`kn;SDp419T z^Kk_kvwSH@G0;JMOayB!E2oaDw4xBtg(o@Z?D+*+8SswH!$*C@uo=oh68bPL%;9LP zzWt$m_r+giKZv#mVAMk?Uq4QNxKGWpefEm$LfWqq#q1$jLgq7Z0oled3Zi;&BJ$2T*elQH^G+GNA{wBeQuEWIPPoL=7I|nNjm*#hB{R zwL?_HCZfO}78)RvKyC01G;-rIzvu60ztGrs9QO8yf!kj#Uw%pi>`)4|2&ymNp^8y$ zn}{*$EA*(C<3}@wC5(*l z7$&X#U4oET!lO+@siNUCfykChbuo%W)S$nulz$K6fbI9V~t!9mJ)FYZR)$QYW*C_4{AeIaFjFPN`2rUC0v6;DpsVw4QXtPcc|la=Mt$-+JXp?LWO5JIFPQ%+57;_9*X)3)>Eb{a!7PywE&G{YlC zUTlEwn0e-^@a@%-5d`Z?uwA+*xB#dS@cI;cK1uB@Rut9p}AmZ4l%1G!XhngxEqSA(7}oE!j${sY$a+T}{8Wyatr z)t^3E-*2&{Q`xB2hpN*;Axv(L!i-|tYYuVEvJmg$@6dGeilv>^>YY{73=P#Qj@=z80#MEsI-Su$l7Uw5ixPMo}n1rge+1BT;^;82V+1C_f zgwuO!xGz{K3Iy+YeEheJt#ZDh=2gxo^?nq^+7<9-4%pUqIc?^?)$@z+Iry)3tgL`^ z1L0!>{?&TQcDRYb>7!*s(t{5@sklD_^;f(wmXQ8)R^@hN6O`{YGF9^@LZTs zv#3|uS1bH457q3BFqek${1^JGVJ~+(?VZtof;9T=9_=d3A07$TYP9}-^|+*Kb-quB zGxw~W((b{v3>ake)?h$XR?VN`6{NhM9Zj3;D8#j!8juac!?k7f@^VW`@RLiL2}Cwl znfz+uU$2fCpTmdgu8z!+J;Uw_8t2q@#57r}>eWhmN$bKXZ1iWibNoKYdI_*WEh{tk znwgT+{b>lZM`E!bw_88JRF+x5;Sp0lBp-a>`Qw$w6;&Gn*@+MM@>SGUI)mI#_OWju|C@6|tCVw;5pwKN?57RS8Q~S(8EGZ8E|tM~$vL zCBoVl&L`|r6s5_RzWHk#c=hOHwX}HS`T*><2o+MPRTYgpiE2AOJSK^|o?^kA>k{x+ znPRBY^yMtge}t!FT8;(MS33=3YaeuTbQ}k&W$7>^-NY)g%9D6;Pa>)V(`ogJem!d2 
zmL1=m1gNayBI1QS>R=yZgy%2adG8{`l9+qP43Fs?GwkH}tr~F+>~iv*sP0?eM>H#k zZq>}S|K5=`CStEb07QZpL0=qmr+dw{N9(f!5895L!@HfkGHhtN$P*U zWi`&lLnY!NLGz|6^n6yA{+4yQL%G-0BUd{$G_jK}$04mGY%$oJ8;9np`791m4W6AC z^+Rl4+dS5BPL-tMT3eN5boIyC!1*;k{VnWF@o7E=^32P=-75k$zJ}D*qW6ewv1R-< zr1v?i34Ci5`;Wzsx=GzBfQlxdUuAX>eWq~MzjzjXPwNbty}C{;i=nhG2iwbbv_RCP zTQAl(NJ5_h{M@PhcvowxY~NgHYN#=9txdm}nUm1kkJZGRBn|csC#l+W2}G5V zM&gKeVFaec6P=POfu?}l`4Jt*(Hr?sSeuJ=;0XLj9Ys?PnKEpIAOVGI9f{!HH z>FWw=)Djk4l#>1ETGVQWpn<3YilyC0FT}4n?)&)%sn)Xqm-s%whWtIWH!n7bG+ars z7JfKVqADD#9JH$uW#wj5yZjg=GEX0S>@0=kFX@I=LCle`Dx2UR$es-6eO|JGp+z&U zay1WQO+P4hRmWE8VfXd*r=o>y6GmeNxW~^m_t;`D06Sz`{#p&jkkJX6$L9e{_lL&^D zx-pDMSlCIgO-vxm;TAVpN*J*MhPypSrGHm(sTW-3n;kwu*>jm>_k@na zv1#fyI7Y;seUz9s_uJK0Pp4-!4#WgBI_MVwAgeCC7`p+efm0L%!+4IP&^;J4#-p|v zLI)ScGzlmcH&4_P^Md?syb(Ofb>zIrDLRU0?uBK34ucaJm9Sw_{{Xx4oR`&*g-&lh zQuTXS<#su17N_3aIc!zcEwbwM?NNUCiJ`k##O!r&+v4oAI2KVz@-322jdpq-ibS$uYT6H_se=`# zt_1~kharNu%Bc~aQa!~kEE89@MJ{T)(U*V%b=lXE+5_&&OvNLPCJ)RQ|0QM*^>&xu znH8f>K>Jha*%W{bcjRXE8n1uaa=09WP%P8V6g&!^*5u+AyFX%$msb>Wq8|2;Hk*$* zZ#%>-8Gf~C(IUbiyV2U$0{zNmH`~OpH)+R-G*94Egczaz^+E-As60@c2HlRHX)P(} zY367i7I#zXLdB9|9^zU6yzFdsx0aMZN{x3tPOe1#%4l_%UgK6vMMwFgj`Iv%qfWO7 zLoEH=&p}1@oXcx|!p547ks|AnQ${P7C4<#_gWU;zbs~f5eSbB6Me*7TaIJ0^=wiH{ zveJ{NQ@(k)=BCizN{xYfUc+W&_idbfR z_{LhRo*RBl@ZfQV&JJ(q(K%E%=ToD7z-i#9QEJ~IqKbq7CBMFB;S+?o%BiQI^|!HE zX-RbVk+k)8tA^{#1UBbyQ3}24qRVFG+bI1z)QGJH0FTUgE1fs4!bvNR{O+e)) z^qFgAS81~q%}b1{1}uwqlPoyT&ruLX8eEbo2$+q0MJ<7H+8ZaKYYV4YeGUEIEngZ{ zok9`mp0%8FOdMb^20$V?4fe1U0dH`Uv(ZCO~3ry7#dzA@>Oa&D7Abcr36 zM1deDj7-gihF9uVnc&`1gAa(nlC%K(@B!Nq@jb26TQ}tlMbnBeCLSbk_8tw9*23mn(*oVY?s;j4^@M6D3m~>@S%YFWm$CPtg~0rSC_I7~SA5 zpC@O-2k_m@z+>BS9O$NhtfPa&K_=V%CT9IYjv(_Ltmt#i4@_^u>e4_C$W=0#26)A* zf#If^$zF$;#TSAh81bbQtb*$pepHBiXr}JiL?Mk>k%7|3)SF4nO{gMZ*eYT_xf=69cA%iUrjlF_DJ+Tc1d1zz!W4@Jt*E}yLgyX-ep9V#d%kz{8A#Fi zwjHds=7OO!G>UB?gvSLBwCw;a20yg@D_y@1)|G}LHuzoBPBk&8ZV+erVWh+RIVADp zpr2rnfbYH9fVJX-_Vl? 
zE>rjV;UjtCGQ{WIupHLSo&IE%L+zejUrAGT*Oyzz!^M%_%jfm*m5#Udv-az~S?)2r z<#Ay77vqPCHUB%<$-#Iuzt?*}q7#yO#V+6tjPn3DsR43wePySqGJF z&qJu4d8gM0On`>z}@q!Per|I9jfwC$+&KhqgQN zp|HY4DWqI=*qyoU)ud_TT2WT%UKU;)F@r58ru*od0!_X|K)HZ1{#jnQ|4%G;Q1qlS z-?$n>Ng8mjAJ$>hbHg+_3g|JqW$qYC>S8_h^w5I!6uFvxYkwK$>8>0|^V3}mFJ(Zu8{(5BCUNJMrtHCmw zJhI_Cep$05VwW}j>`V~K_7+5A#x$W;+g#SB1exlbsyFs!;D(Pd$oQ_60_?c9!|I%Ac53NiHZR$lZp;a ztx%KcGE${0G&SxcecPW`vl?mcQFYjx(R9OZnA323#SSX@>kGGQJlTAct9zYum>mj% z2L$9FYH0`o2lVye1;MQVS~=j*y^KJe9D_ymHE~G3+QCpq3!?>_*KoGks4d zoc_9$EJR`awMzB~po>R7EzYr=oUGuX*NEaCTi!Nn+3DPJ0pSMIG5x^kNMF zbYcB0Pd6CtVlfQGSe0XQ03%&}fzvphBea>yTsQ!%wbw_E&sC_`V{jd$+*HX~2xE}V ztQtjHtQirDdK|Q{@HZaa!7N!=>6`8SJYR(}K8=-fNH$^>wk&lrlQ+QNpuYZDz35`q zV!Fcvz}W+GrsL<@pNsoT?9tWtv`9vsFA~PrDE`|WUUb%cn0}8i$Rtu_(MNwUW)L#% zdqqqCPn}Hq7U1r7tpB z6KF;PlPY14Bx7)KL-%ch(+!Y}(cELgCXO873b9@B@voow6D|>Uv*ViL%Vp|5I>sNt z)9wa-qghSXF{|hy3~ur19<*{rMafOTH5>)H%Ik6Y$0; zz@Xjw30$(%y54PLtnG{#l^+&-#WG*OT_j6EAx#eyLn&?8N|tfr`CuFv z^CEFjK))@Rg4Ez=jDd5rxFz>@)r>F7pa>+;@oWEBf%Cy6=eFw&SZJ?5!;bcEH<>kh zeobac^gUO0`m;Z+GQeYLJfnK4ww>z-|4uzr6n|^6FIzKVOZ5>o%BW%#$;eWRKxkHb zGaP-EN>$8@z4q5U3z10`@;i7kQXWJ}BY=aobOukEmaXe_Qff?dMpgDjCn$&<6||u# zH<#Xo>7gKY?1c+at^0lDYJlmU_}~1b@jABZ^yr|&Z!~sbkrcti zPYP%QvZ2=+SerXcw@NWU2X@a{Ps$n0w5#HN0PdHP{Qc4{lY0VdV%BdhV>E7h=D#4_ ztPDEbQnoZ#7m{)rIJ4dN1KA7KSqZ)uESf2UlyZ)TaS*f##HRHSQY-dS(90Hr>Cz;~ z2HEK1Oyf@SZu+)G8oI`!rH-IjwLhY@o?MW*OfDAHCYYCHo6>NeYbKEIl!G1x8ZIgk zXRFB<6#ans23`gtFa;)R)YI5Giwg9`rjH6NM|ooo(Cy6oe|7FSpUnxaWm?sfL~nqEFUtL>^!lqgiVj$>5F^_BX$S5$ zJ}(^Qd=X?%Zz7V3Ka@Xms=YTPvWtAF>$u4{*YC<_mfb_#q-IBS<6WF?2gS2tH-%d= zL!)*f+~zx#vaskXXjGsBbI5FI!J=$%^Ueh#-%nBENoB#>*Y&7m3=ixG@8?O*Ov%wrPA+R2t!&owb|Fha>C|&SZb=}Bb24X_M4;i>D)%c7&!>_C;4DH?~bX* zpCLBQP8R_dzxz8`$x%JUcL8pv_R{nd3`yKNmml%EHuBJP@z(%#-PDCNcgxUA_P&8Z zW_{;jEO&R<yVQ@%W z%wsd?mdA*3Wh3pcb!@_hOv*r!z(%!8aHO56Hkj0{Hk#SqC=uCc@CU{%i;J9#(mGj3XhvZrW%ZVs@W-~_z!L3Lf=ftp2Lo}3Z0iGsr4Qed#h*LE3t=*l-$Y1*nq0=d zKBji(H9*p_J0!9h!)b76O>N!FWUq3Pr+;WolsSB9#TEL?WZ6{?$sKome~i0`KubEi 
zkt13VX}m-Ds+xT5GHsPR4XuM}(^Wl{OLUx~aTA~U8|epL?H!`Msvg_-K28U|1_3vh7ky#&roc84ku41=3$U1%{jLFY(~sV){ghW}L7VEi zm)ZMzy6b`e27vj5@6g!hL`^0yNL7-8eQ;weMlLsK@>964d3|w6L4eerN+Qrt;L_n` zJ=!wxPkOm~_qfFPxLekk!?B835aruCblU${lzl;Xr$Lr}J6%B-gKz8MjPA06l)5=Q ze7A&sIYzjDL0F}U4b3*;j(J3XEC#sbsFf`62k|ua zlBnLEMNZ-<(6)PYTsMZlwcm4v6tdjssG8hJ+23Q`hp!t9rtw{hHy(9~+6lH|OfKi; zJJ(A2oRynRiLA%-M7yt&v(9r*2kfS*@P-YHp9J%W($tN7XSi8XP8CO;943P+OpgT$ z`SVaqmWif%HASa7+7wtL+H~R%Dm-9MqNYbX= zO}ecBxW8!lIk_Yi_r*}P-(e44>sI97R_HhwfZ-ijb4xJpML$8GDm?T(J+pfOpRLR# zQQoJu@MZn=GrMUwfdC5~_NU*nea!DPQtS97{8eWm>^SzG(M*#K!40UQuj>&%IdZ#{ zw8y`~x?eFt(^{b3=;0vMe)81R{l;a1>yhT1len@HiGg|>@{Nwdh^%$Lb=lf@*B<n%huric?)N=W%}is6EW*tBd^eQq=~9n=n_8;`KNEq|}Gug?x1 z6MV<@j!$pVkBe^Xaa`GDJaJ5U{5ax`y2BrKi{A58Q25XqveRJp1e!2BrJLWhji2eI zNuajNcy(gMGsZvoPsfg7yUqX4`0veGdm%0V?j-ob`-Ap*-A~>If7q<74P!VIZ(irX zFOa7icrHmTg21OjrhJHRXWMZK6TJ?OrOX6;djV*nv1Va}9)B3ye^~dY+v27Kj2yaS68c^e`31A{d4-_ZCZyvI*Re4*rxu_-ReN< z04d+|yt^t*wWv#H8^HdSp15p;0bBY1L}k~;2vMd9g(Or zI3}yf?%(!%cXih#QNdgBfA``HTK{*nS9pwIApPbe}}$apZ#b2e;;=Ke{cGK zzv;WDP3m1YVZi-e@^qosb3O-HJXNs(-PdCYp2z$cX)CxVa@jF zJ#PSux;OINYYj5o?4-|T^m_ReE_<+ed$fYEk7i9AvUplT~ zak9zwm8kf#hE4Hxy;DrT#TO?KQQn9W+Ko3)^ZB|%0IlEAIo_;>a58=pf|*786fE;0 z&)ZWwSkKu+h`gNn3TtzPCp@LS#IVmBBCPjvHJi-&CB@4yx3F{u;t|-vwMMOdD6-c% z__HcRSjOTyMta3~%HL+$D=`O4~n+FlX3He~| zrF+t*yb(~{*bf6H-cn-^iKYc(&%mK|YDYGkj5^xZ9}XC`>QMS0*t_yewW`6xie*T( zn(T$G3ANbOgscKj)5xpIfPq{Q^HwAWDEt}tXj0c|#-Kjk87^Lo%v|~=3j0MZwbqIg zOw?fOS|bMwS-r_h$BXNj-PyyZ2Y9@z8$xT#t6fQKbswRj)Y80eC>& zB=gA}-AUqFyCx&uh9@RT1BvdoXEVJ-GG0LS2}5?Gy9Cn(cv_5|nM49l%CqRtLX%f9 zYKIC|<#{RY%F?|J)&q2d;#~C!mp2jj!VrV2ViIA!33vVi>h}BI7n028aV-hGjB4oi z`ynsC{e=SQ39Ct3;EOW~b`r1%FdyMK(&ZH*H``^@r4Q>;K_>i+b@s^;6Wfc^IyK>t zDPtX7(|5DS7E+4Eg;6pwfuIt$&FvgKZf((9;s zxY9J#fT4hWe-T<{S-%&w^B(V?DLG>fW5eZx1>4f^=kl4KdvfZY3J^+304!8JTXjecfd`R}xZdg6w5ix)&8DYIJh(TUqTgP#99 z!RwM{Zo@V>dvM35H@x+z;W&zssq=J`(0%)?dZ} zjxl7%>+?%0IEb>l`STyIBc<8ZtZlsxNsS05m#l!Wn2Jks4I|HEdg)Op?06o{m^yr 
z`|p;^b?cw@54eg8@CRKEzm7mR270MMglHFAZ11Syxv*Yr^8n$83)1BlVa3g8NPquX zB5=So!7kPwd$)su?Y{TN|I7}tC#{HWHCb>?EgL}w<8_Qd*72-~WG(tvJWsI68D4a@ zu^>_R<-k^i+5K5D%Y(gW>WPf54@LpNC>q$bVuUYC56x31=OU{fefUX4XGZay=a?@D zLBtVAxXD)$Yhli2JVH5S5~3)7ZVB}}G>!*oj_Bq4-(>Uc7!BeJ^)TyZP34wn+VJGF z_oIExRJzk1zM1{<57C~yC<}Pl)E^DKrNh*VVl&jk8NlAuU$`095nm^f-0mv)b6Dz3 zYL_qWfyR1Ylg4$;5vCqy*uDa~R>$>Az-Zx8<^H96TQac2I7?|4`YM{O3dvt(ti3Mh zQz?ha@fy2%Llcr?Nel8IZwN{u(rw^9RENi@VTI}EF^L$!sJFRe``t5&-?VtpGM^Aeg|6CnN+bf0Q-<5O-j4G{%e1)oHtXNTgGWzWj%WDKl`j)n;xd3y4 z2K{DUSQ9Dn((h!nW1_>UfuI!A7vIaBz%jf!L{-VuUYhxtmvDdMTj+FwwFVju*&TyrPH8Cd2EIw9a82~9v8%lBjBJ@lg*i!pp zOt4VU!qi?R%?eIbF|1-s8y>Zd?c)xuArgE-s`mTg`~A{1N`PpaoI8l)2P8+z+}c>g zk#ifhz&_9dxLB5wuTmtDa4FTd3hFpohnx2V1CF``@m>riwelN6L0$ln9RBzGj@O_u ztZo(zIgnO^U@^?L7OUtvejWL-I8fHxIg_LDaYbvuA|VYbjDa!?z_#K|9TtheK^2FC zbUBNzE&*=lUJ!(aM6R-2QBzHMbl1neSR&4Zs-`T2I;Ht;p(2UrD=jhp#)EHLbbSZH zdm3_m+m*-no~;F$>B`m+qcI@|41PETJr}mK$TZnjV|M>AJb)!H;t_Zt*KcIG#NJQC z|G}-#Qh-*VzKoPnkhRv@b#8M3-}97=6Q05Z5#dOYRe%~@_-xzP_X=FTy-gR}!h+tz z6P9KVKNLBor4dHM%r(B2Gemw_9<0X4mei5WDt6haGR9Sj1!O-zRI@X=o!76~|5*@- zI3(~g&W)6dMct4!k1F?pp(!M?Z zv7n&@+rw$`;|_|(5H)5J3cX@`(-q2YjFJI~C)O}&^y(1`9}oeq#^W3wej+p05=te3 zn!2*)UZqQp69Mhm=c~OBHRhWuX{P~if%P;JMngS?s<@tFFnDpZbX&US#kFJLhM6pP zaj96p4)P@0OxQ6?%RqgvHYcMlgA2Tf(stEb3KDRF{{`tiqOi@aLNYRo{TIvx?o~`> zxBR!1Rg@j^MT7HQuld`G-Gj_#N6AYiWAU_ca8p9P?kDzd%J5t@>GI!s# z*&fnffAviXx`rv6qvurs(&jNn#uPGcbkkGba(t-(LLkOK7m#wjNW9G`3mkASK}LBq zie0O^DJk}rG&s{(pg!*A4EWL4+BFEYo4O%kyJ{WDVx0v`qhOp5N@*>YZ4lUbJ}$jJ z#R=?y5|jqr;t)#Ul@BIRY&mpJD7N^D*I>Zn(u|KiQhEFD-@@KOrwEj}OfDP&Xn-IT zki_9=z;*f&c%7(;l}p8KSTo@_FJ41pg$jFZh$9c|`Klm}t}n5=pLW7PjWx&@kNF;S z(yCj>dLO@;5AH4yT^k4P39F|2&5Oer9wN94u!=x)hkfGw0B^`;D&y+_b-(NYabLl= z&2oei)SBTiH~qs3oUVq<6l;~}lP_+Wt&me>0(mP(e&&&L{ip=x#6m72rVgCcqdAXSWcW~k zG0T1x8goGhh5wKNEzqP^)XcyZs0;?@K9~=R9`&*PA*lQg5KD&lsbFkp%~vq^0MDd& zR)rYmo^<7b){G%`uCM4Usi?1gAmlTa!w2lez)Tg%j6_Qgsv(L1YOwEk_;?vp&NFw; zD2^(lN6F6xl@sRh2bBwZRKkIl*s>HQ_g)bjJVG#AGgGuy6Hv_~if*s82y#4xl4Y_L 
z+NzT|-*lAc65{k2jf+}t%O;3R5~FR;Q5-)dOUHw5(6n_L4l={TL)-Nk4v~`-elp@5 zWgufoh><8=L6Ni~dnIK(!$rXnq1yr)i{g&lJA7$0-`lP zYYId95qMR0ky5H%R=QxrEN=>WS-*+#JVp|IxYLeaCrfLztzl0j4mE2`No6DvWyJ(N z;!MI#P!SBCM;Mu88GI*40XTsAvEm+$8m#64NG!I#1IRDpk;VWXkUH?6+E^CPmy}FV z0w(m_LAqiP5PnpmiX|X@6;!xN9GZH_gkreUlKdMug?LiXsYBMt-CkidsDvaD_X4C) z;hME*EmfV#6c)3UBS3XlR6dJ|VVkuyB0=v^V{Bux?}wE3$z@QL1|U&B-C${Z4j7wm z?ivx%Fg<_aRqG8YBiPPaHqsanjzz_$-ik+KsNwd#zKa?8AbK)QizT za=wvqEw$P}nTWH~ULhAuBtoBdd@<5=zkOr}yu@8v1{!E25-4>6rcrL{Ot$y2_To{- z7gJyxTTOAS7$V}%mL?6Ld{_h9YQQy?60j;Wp94P6m@}TeQ&lHqT7^J>K1$D01^U1@ zGoJobD60WSGXN33gp?1Ot`LLr23HZyF{Yh(dgV(-eU4);#e(j0MJB=^4SwD~)MM8U z7L4pehPDpI&@zUksks*5q`Q@GvCLgO>SI)mGUC(gp@Po>Th>Wm2wrG=IJaj~iu6C3 zab!l0J>KsG@Y;Qdt}q89W1naQOY)^T|Gxr^i-Im}Lb`{Pj&Y_?@{&gwz)CN4rc7Ra zeGq$$Sk%ZZ+9-6ou@S?KrEdM#ut230NW^5<6sj7KBaSjEUkIl5{&NVm$i*>sP@~S8 zCb4yt5IyH(q``rlih|XN)4m!p&1`FunB^2#K4i^=urddt{#`7=A_A6REm(gSS(5TC z+3?tvEX0$17fXR^B^maG8B$Jp<@e1Pl1P`1Wd>NS%ZgWl7RU&Zs5>A7F~r3#6MP{k zoqM$J5b-woMU45aY=4KS<}!)(7^>j1`$$Ow1!;I(pyi$<%I!Lo3a~DZyl04o338PGzfWl$pal%i5izl6Wl_jDI!Bfe5 zTw@;FBNahKGv6e!r(q#}yEA)l+&LpXhST%xcw=0su)vXNfU?C|KZMOVxG6S+%Bq3C z`LqJlQFl%9@ngp*cj?8We%Vca)O}EMSQ`2rMRcVMvdcbRI2mxVS(}|8O#meudCXt> zJ=U*yYIJCK6-@gf#^4R@P?2In>!wHuj8UvGuhFm@CFV+c*%&d^zv?bR)>S&bufUq* zKKv2ODp+XjbfBO)jA24)HaKgZ8Wa*NM=&jj9=vNp(^HY!?`t*U?xyQxTM2x!m+QI_ zs0juhe;+dioGKv#r6oJ2K*le^sD4COR>qB&q{Kxwo^&cM(~Z~>h_E;>A+8jBpI*eFo%t=!iEJ$ zLHZZ+Di(@Br|W}pCTLV{)>DqRm}DS|OJ&JSA#($;P%q0IxA`~Zr}UpRCaiKmf%EzV zC7tI4r5FMPn+|zr_G4l8ydX%+fTasPa4m{k8!dRMG$f%%{KS0z0mp6s3mor)wWzbL z+U={m>MXEp(C4hsjR^;ha}f=N4hbG3WgtKFpvor{wjgJUbou_-sEF$o4F}ycdM!&d z#3}lD39R>j!SNdu82`u7QZV^%aQyqp0?OL!OxmSIxUJe>gF>U2wl4+RVs&T!e5>h# z9&T~I25*I5bG(r1A{cKHFTDVaIsB7}Y4_NI@RWgv4CL}W#b@oTtx+H*FU`7lFknxH zvC|`MUBd+|W^B8z3QW9L-jgTRzbFh$oa(4Nkk3XTghd~CVo=zh!pjxz&mEdNOVa9` z(qFv!G9eXNp%(Eh^;&ofq(FNTzJ4}pidY#^Gk{=uUiItu-5T-_IDQa*swE%hHZi0= z60|I4ibD;a0tbXnq{gVQARron)oGMJD zeS71X&rbz`=6s+~Xe8haXnT#i%3)LPDm~)o<9Zi89+ZiHO 
zA2`|}IuZw1j?V}uH=a0{M7cOINRHu3R8~4HhU!s(ggpk|Lj3(h_9>wSC4-P+3kl~J z;yPzF8y&TU<3NB4z|GkH5yy*$2{AR8w$4R;NgyCKsu)MtO1D#k4d-Wk#yC&|Hmlj8z@3Th>w#0phpgd_}F zH9?xQZl)dI?;&tSID`bE>%yc|AxX#ZeZOo2!I@cqh5jGL?lC&@u3HdzY}>YN+qRu_ ztcuyOla6iMw#|ys9dw*@I`-swXXc%md+(Z=Z}q9_ziQRlwf8x{v-cGmhDhYs%yETt z{njN){NS+Ph8BNd$s6C{3NnI-YiM8v%5QRt`oQ{Vt7|FBgY`}*ZFrE%Kx%N{QG7l* zboG*hY!lH=o?Xz=Tc~9w-Hn^^*lE&kJa*GbkiA!@r=up5VHNa5f^5J8!yVF|>S zy0%dOVl9mH)(?^%V0UB}N^VkikjCJ$9C+F~w2X2i)IK60zA8gG%jIX+ui8n z+-6gv)GoB*RUKzIHCs9knk!uz>|YhjiaoAmcBXPfcO-8Tyk`0WY_ZAfMC_w$QY0pJ zVLiJ@6^^kDvSdfd6q$6mS|kj$sWH3*LG|bP#4dx1DbZ3A`#s52gvu7rg;U)NK6rQ- z@!oMZu%{Y`bzD{SjZ>gcGzO>2BLGVeO#4QtW>O9>KT0Od2$ZHeW3oyB&RAVoISDX| zV8rH4avAoloUSf-N8D0$go2GXw=~Vg$8^wYKytMrP5hL7Rfm3fmLE zQ^Ced-2V!^W=%@=)>|6|J1a<(Uz3$~z`3g@A!TIjD|xxQzz|^!8U}zA4OI6;Hf&O+ zX=Y77vjo?3s>X?{nnOKILBiRm^wTSmaZdaanSYEl7Z@Y~K=y;+2TE$M7pz$dLh1JO0PvAD5ckqd^xQ(ZaGwf7(u|2=kGk`{oH4iQ9TC8*8qGn4vbw~@y19FTK!w9N)6?M(GK_upi zU~M=2fBwK`gOx<%=xQ@i=HfQWT~p3&T9xNw)(D(3jt%RPaz9e(dShDC266wBQl+hr z@XtV{izFj?t!Gq^9U*XH?g&mO57v$l2tnvI4*2xS>9r0_Mg5bo(?>k(fRd~l5jHGW zBVh(;z@)}2CF9|+4F5Gtp=3+(uo*)91m)iUNFn3g@TP$37d+iw__FbiiNzqoRNrKcy=+?HHD1< z&D&4O8YJab-;<=su}=jS1R0y;)%1`DX)ZXCidO3I#e=4=IXg0HqL~aD9VJ!+E1SvA zzaC&oEYX(L3hM)Sp$MN#7zAGzvKO7^TpLox99sIJ%dP~Ou&Db60iE*oy{4&n2Mzyh z+zE!OMk~4;Ao^pF^GdQdWG+tr0O3{h{Z!0O!3gra`zxr#a_Z=UXo4#Or5jfC(BE5- zP`%uso)nlP`wegbI)y}@c^7z8AFba9G0exo=5zRO5L{+4(km%lQcnAAxK?FbSDdBOpAX7c{CO<~toFxDzkeVr8-iji!FQF}3{ z275Pf{`m6%JWcGy@-ULdzP%uOl?z%XI39N-HbaipM3(SQMi*Zvc)9ILj_Rc+Fj`@U ztCmti|kDn4vq0}suAOx8$Y!Ck%{`!Km%b0c*t{1EGp5Cf|2ro zhN3|vW;8NiqsKwWSxzeko%xvRE4mInlNiZ3saXeY!6(-_ z6~YY?3Pb%GJ-)0B{~mCiM+APn4vsRUQUBe8&bgF`a)gRd-@Qm5OJ~f1iy=wsLJp1| zHSHYfx=5(}?t-dR!m807O9KhAD7hdeV5N3taT)!G!FMYajE7~mcrIFy))|KHdwiT@ z(d%22dew#zW$3*qpnt_o&!_ULJZDI>e=;h}*BFrLhjE6We@(~S3l6O@n}S^`fbQcO zkB2do*OHV6t3r|(8`5)GfYdz|i_aA$JGohmp9GUa?wyFHsYX*YFr*5+7plSz_svpFqt!ae;Vh0YgrFqG zjP34O$1HjD!~lByT~LQnqq2Ahd>uI>j^hcJLaAI-t_8JNWM8e09Vh$OL>rnFq*(-8 
z<+KSsXrT71<Zp)3+*tmak0qSfm^2Ni|lsp?{@D1$AGH`=6c`soTYo9?b|*-EZtH}0M)DPhl_ zr(fK;R%$L2s1*dmd-a-QqVO8T&|x9)`@37wy3I;50!%$pdmgZxiP9=e6iS)I@J@Y% z(J~>$c4fvm2{IzGH_o+Bt?K;VR&j)Zq*$}Kt`ZnfMIw;Hl$U&jT>2Uelg-ICu5cMB}3P9^o(>cQgA2x;d z@0CZZ`yrvmzlF1Mtlfg@Ta+XlaC&lA=Y+b>fuyuWw|LQ=kJJIx(GJrI;)KxKI{#a5$zqS#VhH}XZZQmWr30qU>BDQ)(%H( z%eTPPt0OMnC##0t1)X}^)}o%w-#0XdQyDuMn$SF?71waYv_(TUUOO6tQ`bK`Nebql zG8VHOt242`8V9_BB1({*_&4co&Iz>hRj{sQc9T+u8yOD?wrFq=9^@7>u+~-C!A#?N z4!2i40vrS+Yjn>FJ5nA_Fq!WqPd-7du}w>t)%k4koRp=vn+JN68t69^98I+?3So3} za`6%6iYYneI5_8gLlon&BV40-61iD=BegxNq1TllRamNRkQ8dX_Ng_g3~X*Bw3C0P zDS3w}#_wN)B%l)i)J7CM}#khU6eWrm$-Rn5?rcM+gqR>igvLa^p4Jw;JvY}-?NSyY*5ev-H;|K-Y zL`?ietDJuLy@1{OI2!C^#2{r6Yn*&? z&xW%zAWWyCAb$xi%}|_w*TPH$S`P7MmtIZ`0xB0t zD#;8LeeqeBKQftKzY4`j477-mcu9jP?n^`#SLk6i8kN~4*9!X%1bDdt+4`cF%