Merge branch 'master' into release/v0.37

Darshan K, 2025-12-10 17:30:47 +05:30, committed by GitHub
commit df659f2e3c
73 changed files with 3795 additions and 674 deletions


@ -0,0 +1,56 @@
---
name: Prepare Beta Release
about: Execute tasks for the creation and publishing of a new beta release
title: 'Prepare beta release 0.0.0'
labels: beta-release
assignees: ''
---
<!--
Add appropriate release number to title!
For detailed info on the release process refer to https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md
-->
### Items to complete
All items below are to be completed by the owner of the given release.
- [ ] Create a release branch with major and minor version only (e.g. `release/v0.X`) if it doesn't exist.
- [ ] Assign release candidate tag to the release branch HEAD (e.g. `v0.X.0-beta-rc.0`, `v0.X.0-beta-rc.1`, ... `v0.X.0-beta-rc.N`).
- [ ] Generate and edit release notes in CHANGELOG.md.
- [ ] **Waku test and fleets validation**
  - [ ] Ensure all unit tests (specifically the js-waku tests) are green against the release candidate.
- [ ] Deploy the release candidate to `waku.test` only through [deploy-waku-test job](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-test/) and wait for it to finish (Jenkins access required; ask the infra team if you don't have it).
- After completion, disable [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to master.
- Verify the deployed version at https://fleets.waku.org/.
- Confirm the container image exists on [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab).
- [ ] Analyze Kibana logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test`.
- Most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")`.
  - [ ] Re-enable the `waku.test` fleet to resume auto-deployment of the latest `master` commit.
- [ ] **Proceed with release**
- [ ] Assign a final release tag (`v0.X.0-beta`) to the same commit that contains the validated release-candidate tag (e.g. `v0.X.0-beta-rc.N`) and submit a PR from the release branch to `master`.
- [ ] Update [nwaku-compose](https://github.com/logos-messaging/nwaku-compose) and [waku-simulator](https://github.com/logos-messaging/waku-simulator) according to the new release.
- [ ] Bump nwaku dependency in [waku-rust-bindings](https://github.com/logos-messaging/waku-rust-bindings) and make sure all examples and tests work.
- [ ] Bump nwaku dependency in [waku-go-bindings](https://github.com/logos-messaging/waku-go-bindings) and make sure all tests work.
- [ ] Create GitHub release (https://github.com/logos-messaging/nwaku/releases).
- [ ] Submit a PR to merge the release branch back to `master`. Make sure you use the option "Merge pull request (Create a merge commit)" to perform the merge. Ping repo admin if this option is not available.
- [ ] **Promote release to fleets**
- [ ] Ask the PM lead to announce the release.
- [ ] Update infra config with any deprecated arguments or changed options.
- [ ] Update waku.sandbox with [this deployment job](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-sandbox/).
### Links
- [Release process](https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md)
- [Release notes](https://github.com/logos-messaging/nwaku/blob/master/CHANGELOG.md)
- [Fleet ownership](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64)
- [Infra-nim-waku](https://github.com/status-im/infra-nim-waku)
- [Jenkins](https://ci.infra.status.im/job/nim-waku/)
- [Fleets](https://fleets.waku.org/)
- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab)


@ -0,0 +1,76 @@
---
name: Prepare Full Release
about: Execute tasks for the creation and publishing of a new full release
title: 'Prepare full release 0.0.0'
labels: full-release
assignees: ''
---
<!--
Add appropriate release number to title!
For detailed info on the release process refer to https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md
-->
### Items to complete
All items below are to be completed by the owner of the given release.
- [ ] Create a release branch with major and minor version only (e.g. `release/v0.X`) if it doesn't exist.
- [ ] Assign release candidate tag to the release branch HEAD (e.g. `v0.X.0-rc.0`, `v0.X.0-rc.1`, ... `v0.X.0-rc.N`).
- [ ] Generate and edit release notes in CHANGELOG.md.
- [ ] **Validation of release candidate**
- [ ] **Automated testing**
    - [ ] Ensure all unit tests (specifically the js-waku tests) are green against the release candidate.
- [ ] Ask Vac-QA and Vac-DST to perform the available tests against the release candidate.
- [ ] Vac-DST (an additional report is needed; see [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f))
- [ ] **Waku fleet testing**
- [ ] Deploy the release candidate to `waku.test` and `waku.sandbox` fleets.
- Start the [deployment job](https://ci.infra.status.im/job/nim-waku/) for both fleets and wait for it to finish (Jenkins access required; ask the infra team if you don't have it).
- After completion, disable [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to `master`.
- Verify the deployed version at https://fleets.waku.org/.
- Confirm the container image exists on [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab).
- [ ] Search _Kibana_ logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test` and `waku.sandbox`.
- Most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")` OR `(fleet: "waku.sandbox" AND message: "SIGSEGV")`.
    - [ ] Re-enable the `waku.test` fleet to resume auto-deployment of the latest `master` commit.
- [ ] **Status fleet testing**
- [ ] Deploy release candidate to `status.staging`
- [ ] Perform [sanity check](https://www.notion.so/How-to-test-Nwaku-on-Status-12c6e4b9bf06420ca868bd199129b425) and log results as comments in this issue.
    - [ ] Connect 2 instances to the `status.staging` fleet, one in relay mode and the other as a light client.
- 1:1 Chats with each other
- Send and receive messages in a community
      - Close one instance, send messages from the second instance, then reopen the first instance and confirm that messages sent while it was offline are retrieved from the store
- [ ] Perform checks based on _end user impact_
- [ ] Inform other (Waku and Status) CCs to point their instances to `status.staging` for a few days. Ping Status colleagues on their Discord server or in the [Status community](https://status.app/c/G3kAAMSQtb05kog3aGbr3kiaxN4tF5xy4BAGEkkLwILk2z3GcoYlm5hSJXGn7J3laft-tnTwDWmYJ18dP_3bgX96dqr_8E3qKAvxDf3NrrCMUBp4R9EYkQez9XSM4486mXoC3mIln2zc-TNdvjdfL9eHVZ-mGgs=#zQ3shZeEJqTC1xhGUjxuS4rtHSrhJ8vUYp64v6qWkLpvdy9L9) (this is not a blocking point.)
- [ ] Ask Status-QA to perform sanity checks (as described above) and checks based on _end user impact_; specify the version being tested
- [ ] Ask Status-QA or infra to run the automated Status e2e tests against `status.staging`
- [ ] Get other CCs' sign-off: they should comment on this PR, e.g., "Used the app for a week, no problem." If problems are reported, resolve them and create a new RC.
- [ ] **Get Status-QA sign-off**, ensuring that the `status.test` update will not disturb ongoing activities.
- [ ] **Proceed with release**
  - [ ] Assign a final release tag (`v0.X.0`) to the same commit that contains the validated release-candidate tag (e.g. `v0.X.0-rc.N`).
- [ ] Update [nwaku-compose](https://github.com/logos-messaging/nwaku-compose) and [waku-simulator](https://github.com/logos-messaging/waku-simulator) according to the new release.
- [ ] Bump nwaku dependency in [waku-rust-bindings](https://github.com/logos-messaging/waku-rust-bindings) and make sure all examples and tests work.
- [ ] Bump nwaku dependency in [waku-go-bindings](https://github.com/logos-messaging/waku-go-bindings) and make sure all tests work.
- [ ] Create GitHub release (https://github.com/logos-messaging/nwaku/releases).
- [ ] Submit a PR to merge the release branch back to `master`. Make sure you use the option "Merge pull request (Create a merge commit)" to perform the merge. Ping repo admin if this option is not available.
- [ ] **Promote release to fleets**
- [ ] Ask the PM lead to announce the release.
- [ ] Update infra config with any deprecated arguments or changed options.
### Links
- [Release process](https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md)
- [Release notes](https://github.com/logos-messaging/nwaku/blob/master/CHANGELOG.md)
- [Fleet ownership](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64)
- [Infra-nim-waku](https://github.com/status-im/infra-nim-waku)
- [Jenkins](https://ci.infra.status.im/job/nim-waku/)
- [Fleets](https://fleets.waku.org/)
- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab)


@ -1,72 +0,0 @@
---
name: Prepare release
about: Execute tasks for the creation and publishing of a new release
title: 'Prepare release 0.0.0'
labels: release
assignees: ''
---
<!--
Add appropriate release number to title!
For detailed info on the release process refer to https://github.com/waku-org/nwaku/blob/master/docs/contributors/release-process.md
-->
### Items to complete
All items below are to be completed by the owner of the given release.
- [ ] Create release branch
- [ ] Assign release candidate tag to the release branch HEAD. e.g. v0.30.0-rc.0
- [ ] Generate and edit release notes in CHANGELOG.md
- [ ] Review possible update of [config-options](https://github.com/waku-org/docs.waku.org/blob/develop/docs/guides/nwaku/config-options.md)
- [ ] _End user impact_: Summarize impact of changes on Status end users (can be a comment in this issue).
- [ ] **Validate release candidate**
- [ ] Bump nwaku dependency in [waku-rust-bindings](https://github.com/waku-org/waku-rust-bindings) and make sure all examples and tests work
- [ ] Automated testing
    - [ ] Ensure js-waku tests are green against the release candidate
- [ ] Ask Vac-QA and Vac-DST to perform available tests against release candidate
- [ ] Vac-QA
- [ ] Vac-DST (we need additional report. see [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f))
- [ ] **On Waku fleets**
- [ ] Lock `waku.test` fleet to release candidate version
- [ ] Continuously stress `waku.test` fleet for a week (e.g. from `wakudev`)
- [ ] Search _Kibana_ logs from the previous month (since last release was deployed), for possible crashes or errors in `waku.test` and `waku.sandbox`.
- Most relevant logs are `(fleet: "waku.test" OR fleet: "waku.sandbox") AND message: "SIGSEGV"`
- [ ] Run release candidate with `waku-simulator`, ensure that nodes connected to each other
- [ ] Unlock `waku.test` to resume auto-deployment of latest `master` commit
- [ ] **On Status fleet**
- [ ] Deploy release candidate to `status.staging`
- [ ] Perform [sanity check](https://www.notion.so/How-to-test-Nwaku-on-Status-12c6e4b9bf06420ca868bd199129b425) and log results as comments in this issue.
    - [ ] Connect 2 instances to the `status.staging` fleet, one in relay mode and the other as a light client.
- [ ] 1:1 Chats with each other
- [ ] Send and receive messages in a community
      - [ ] Close one instance, send messages from the second instance, then reopen the first instance and confirm that messages sent while it was offline are retrieved from the store
    - [ ] Perform checks based on _end user impact_
    - [ ] Inform other (Waku and Status) CCs to point their instances to `status.staging` for a few days. Ping Status colleagues from their Discord server or the [Status community](https://status.app/c/G3kAAMSQtb05kog3aGbr3kiaxN4tF5xy4BAGEkkLwILk2z3GcoYlm5hSJXGn7J3laft-tnTwDWmYJ18dP_3bgX96dqr_8E3qKAvxDf3NrrCMUBp4R9EYkQez9XSM4486mXoC3mIln2zc-TNdvjdfL9eHVZ-mGgs=#zQ3shZeEJqTC1xhGUjxuS4rtHSrhJ8vUYp64v6qWkLpvdy9L9) (not a blocking point).
    - [ ] Ask Status-QA to perform sanity checks (as described above) and checks based on _end user impact_; specify the version being tested
- [ ] Ask Status-QA or infra to run the automated Status e2e tests against `status.staging`
    - [ ] Get other CCs' sign-off: they comment on this PR (e.g. "used the app for a week, no problem"); any reported problem is resolved and a new RC created
    - [ ] **Get Status-QA sign-off**, ensuring that the `status.test` update will not disturb ongoing activities.
- [ ] **Proceed with release**
- [ ] Assign a release tag to the same commit that contains the validated release-candidate tag
- [ ] Create GitHub release
- [ ] Deploy the release to DockerHub
- [ ] Announce the release
- [ ] **Promote release to fleets**.
- [ ] Update infra config with any deprecated arguments or changed options
- [ ] [Deploy final release to `waku.sandbox` fleet](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-sandbox)
- [ ] [Deploy final release to `status.staging` fleet](https://ci.infra.status.im/job/nim-waku/job/deploy-shards-staging/)
- [ ] [Deploy final release to `status.prod` fleet](https://ci.infra.status.im/job/nim-waku/job/deploy-shards-test/)
- [ ] **Post release**
  - [ ] Submit a PR from the release branch to master. Be sure to merge the PR with the "create a merge commit" option.
- [ ] Update waku-org/nwaku-compose with the new release version.
- [ ] Update version in js-waku repo. [update only this](https://github.com/waku-org/js-waku/blob/7c0ce7b2eca31cab837da0251e1e4255151be2f7/.github/workflows/ci.yml#L135) by submitting a PR.


@ -78,7 +78,7 @@ jobs:
- name: Build binaries
run: make V=1 QUICK_AND_DIRTY_COMPILER=1 all tools
build-windows:
needs: changes
if: ${{ needs.changes.outputs.v2 == 'true' || needs.changes.outputs.common == 'true' }}
@ -121,7 +121,7 @@ jobs:
sudo docker run --rm -d -e POSTGRES_PASSWORD=test123 -p 5432:5432 postgres:15.4-alpine3.18
postgres_enabled=1
fi
export MAKEFLAGS="-j1"
export NIMFLAGS="--colors:off -d:chronicles_colors:none"
export USE_LIBBACKTRACE=0
@ -132,12 +132,12 @@ jobs:
build-docker-image:
needs: changes
if: ${{ needs.changes.outputs.v2 == 'true' || needs.changes.outputs.common == 'true' || needs.changes.outputs.docker == 'true' }}
uses: waku-org/nwaku/.github/workflows/container-image.yml@master
uses: logos-messaging/nwaku/.github/workflows/container-image.yml@master
secrets: inherit
nwaku-nwaku-interop-tests:
needs: build-docker-image
uses: waku-org/waku-interop-tests/.github/workflows/nim_waku_PR.yml@SMOKE_TEST_0.0.1
uses: logos-messaging/logos-messaging-interop-tests/.github/workflows/nim_waku_PR.yml@SMOKE_TEST_0.0.1
with:
node_nwaku: ${{ needs.build-docker-image.outputs.image }}
@ -145,14 +145,14 @@ jobs:
js-waku-node:
needs: build-docker-image
uses: waku-org/js-waku/.github/workflows/test-node.yml@master
uses: logos-messaging/js-waku/.github/workflows/test-node.yml@master
with:
nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
test_type: node
js-waku-node-optional:
needs: build-docker-image
uses: waku-org/js-waku/.github/workflows/test-node.yml@master
uses: logos-messaging/js-waku/.github/workflows/test-node.yml@master
with:
nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
test_type: node-optional


@ -47,7 +47,7 @@ jobs:
- name: prep variables
id: vars
run: |
ARCH=${{matrix.arch}}
ARCH=${{matrix.arch}}
echo "arch=${ARCH}" >> $GITHUB_OUTPUT
@ -91,14 +91,14 @@ jobs:
build-docker-image:
needs: tag-name
uses: waku-org/nwaku/.github/workflows/container-image.yml@master
uses: logos-messaging/nwaku/.github/workflows/container-image.yml@master
with:
image_tag: ${{ needs.tag-name.outputs.tag }}
secrets: inherit
js-waku-node:
needs: build-docker-image
uses: waku-org/js-waku/.github/workflows/test-node.yml@master
uses: logos-messaging/js-waku/.github/workflows/test-node.yml@master
with:
nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
test_type: node
@ -106,7 +106,7 @@ jobs:
js-waku-node-optional:
needs: build-docker-image
uses: waku-org/js-waku/.github/workflows/test-node.yml@master
uses: logos-messaging/js-waku/.github/workflows/test-node.yml@master
with:
nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
test_type: node-optional
@ -150,7 +150,7 @@ jobs:
-u $(id -u) \
docker.io/wakuorg/sv4git:latest \
release-notes ${RELEASE_NOTES_TAG} --previous $(git tag -l --sort -creatordate | grep -e "^v[0-9]*\.[0-9]*\.[0-9]*$") |\
sed -E 's@#([0-9]+)@[#\1](https://github.com/waku-org/nwaku/issues/\1)@g' > release_notes.md
sed -E 's@#([0-9]+)@[#\1](https://github.com/logos-messaging/nwaku/issues/\1)@g' > release_notes.md
sed -i "s/^## .*/Generated at $(date)/" release_notes.md

.gitmodules (vendored)

@ -181,6 +181,6 @@
branch = master
[submodule "vendor/waku-rlnv2-contract"]
path = vendor/waku-rlnv2-contract
url = https://github.com/waku-org/waku-rlnv2-contract.git
url = https://github.com/logos-messaging/waku-rlnv2-contract.git
ignore = untracked
branch = master

AGENTS.md (new file)

@ -0,0 +1,509 @@
# AGENTS.md - AI Coding Context
This file provides essential context for LLMs assisting with Logos Messaging development.
## Project Identity
Logos Messaging is designed as a shared public network for generalized messaging, not application-specific infrastructure.
This project is a Nim implementation of a libp2p protocol suite for private, censorship-resistant P2P messaging. It targets resource-restricted devices and privacy-preserving communication.
Logos Messaging was formerly known as Waku. Waku-related terminology remains within the codebase for historical reasons.
### Design Philosophy
Key architectural decisions:
**Resource-restricted first**: Protocols differentiate between full nodes (relay) and light clients (filter, lightpush, store). Light clients can participate without maintaining full message history or relay capabilities. This explains the client/server split in protocol implementations.
**Privacy through unlinkability**: RLN (Rate Limiting Nullifier) provides DoS protection while preserving sender anonymity. Messages are routed through pubsub topics with automatic sharding across 8 shards. Code prioritizes metadata privacy alongside content encryption.
**Scalability via sharding**: The network uses automatic content-topic-based sharding to distribute traffic. This is why you'll see sharding logic throughout the codebase and why pubsub topic selection is protocol-level, not application-level.
See [documentation](https://docs.waku.org/learn/) for architectural details.
### Core Protocols
- Relay: Pub/sub message routing using GossipSub
- Store: Historical message retrieval and persistence
- Filter: Lightweight message filtering for resource-restricted clients
- Lightpush: Lightweight message publishing for clients
- Peer Exchange: Peer discovery mechanism
- RLN Relay: Rate limiting nullifier for spam protection
- Metadata: Cluster and shard metadata exchange between peers
- Mix: Mixnet protocol for enhanced privacy through onion routing
- Rendezvous: Alternative peer discovery mechanism
### Key Terminology
- ENR (Ethereum Node Record): Node identity and capability advertisement
- Multiaddr: libp2p addressing format (e.g., `/ip4/127.0.0.1/tcp/60000/p2p/16Uiu2...`)
- PubsubTopic: Gossipsub topic for message routing (e.g., `/waku/2/default-waku/proto`)
- ContentTopic: Application-level message categorization (e.g., `/my-app/1/chat/proto`)
- Sharding: Partitioning network traffic across topics (static or auto-sharding); see the sketch below
- RLN (Rate Limiting Nullifier): Zero-knowledge proof system for spam prevention
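To make auto-sharding concrete, here is a minimal, illustrative sketch of deriving a shard index from a content topic. The hash used below is a stand-in assumption; the actual derivation is specified in the WAKU2-RELAY-SHARDING RFC.
```nim
import std/hashes

# Illustrative only: map a content topic onto one of `shardCount` shards.
# The real algorithm is defined by WAKU2-RELAY-SHARDING, not std/hashes.
func toShard(contentTopic: string, shardCount = 8): int =
  abs(hash(contentTopic)) mod shardCount

echo toShard("/my-app/1/chat/proto") # an index in 0..7
```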
### Specifications
All specs are at [rfc.vac.dev/waku](https://rfc.vac.dev/waku). RFCs use `WAKU2-XXX` format (not legacy `WAKU-XXX`).
## Architecture
### Protocol Module Pattern
Each protocol typically follows this structure:
```
waku_<protocol>/
├── protocol.nim # Main protocol type and handler logic
├── client.nim # Client-side API
├── rpc.nim # RPC message types
├── rpc_codec.nim # Protobuf encoding/decoding
├── common.nim # Shared types and constants
└── protocol_metrics.nim # Prometheus metrics
```
### WakuNode Architecture
- WakuNode (`waku/node/waku_node.nim`) is the central orchestrator
- Protocols are "mounted" onto the node's switch (libp2p component)
- PeerManager handles peer selection and connection management
- Switch provides libp2p transport, security, and multiplexing
Example protocol type definition:
```nim
type WakuFilter* = ref object of LPProtocol
subscriptions*: FilterSubscriptions
peerManager: PeerManager
messageCache: TimedCache[string]
```
## Development Essentials
### Build Requirements
- Nim 2.x (check `waku.nimble` for minimum version)
- Rust toolchain (required for RLN dependencies)
- Build system: Make with nimbus-build-system
### Build System
The project uses Makefile with nimbus-build-system (Status's Nim build framework):
```bash
# Initial build (updates submodules)
make wakunode2
# After git pull, update submodules
make update
# Build with custom flags
make wakunode2 NIMFLAGS="-d:chronicles_log_level=DEBUG"
```
Note: The build system uses `--mm:refc` memory management (automatically enforced). Only relevant if compiling outside the standard build system.
### Common Make Targets
```bash
make wakunode2 # Build main node binary
make test # Run all tests
make testcommon # Run common tests only
make libwakuStatic # Build static C library
make chat2 # Build chat example
make install-nph # Install git hook for auto-formatting
```
### Testing
```bash
# Run all tests
make test
# Run specific test file
make test tests/test_waku_enr.nim
# Run specific test case from file
make test tests/test_waku_enr.nim "check capabilities support"
# Build and run test separately (for development iteration)
make test tests/test_waku_enr.nim
```
Test structure uses `testutils/unittests`:
```nim
import testutils/unittests
suite "Waku ENR - Capabilities":
test "check capabilities support":
## Given
let bitfield: CapabilitiesBitfield = 0b0000_1101u8
## Then
check:
bitfield.supportsCapability(Capabilities.Relay)
not bitfield.supportsCapability(Capabilities.Store)
```
### Code Formatting
Mandatory: All code must be formatted with `nph` (vendored in `vendor/nph`)
```bash
# Format specific file
make nph/waku/waku_core.nim
# Install git pre-commit hook (auto-formats on commit)
make install-nph
```
The nph formatter handles all formatting details automatically, especially with the pre-commit hook installed. Focus on semantic correctness.
### Logging
Uses `chronicles` library with compile-time configuration:
```nim
import chronicles
logScope:
topics = "waku lightpush"
info "handling request", peerId = peerId, topic = pubsubTopic
error "request failed", error = msg
```
Compile with log level:
```bash
nim c -d:chronicles_log_level=TRACE myfile.nim
```
## Code Conventions
Common pitfalls:
- Always handle Result types explicitly
- Avoid global mutable state: Pass state through parameters
- Keep functions focused: Under 50 lines when possible
- Prefer compile-time checks (`static assert`) over runtime checks
### Naming
- Files/Directories: `snake_case` (e.g., `waku_lightpush`, `peer_manager`)
- Procedures: `camelCase` (e.g., `handleRequest`, `pushMessage`)
- Types: `PascalCase` (e.g., `WakuFilter`, `PubsubTopic`)
- Constants: `PascalCase` (e.g., `MaxContentTopicsPerRequest`)
- Constructors: `func init(T: type Xxx, params): T`
  - For ref types: `func new(T: type Xxx, params): ref T`
- Exceptions: `XxxError` for CatchableError, `XxxDefect` for Defect
- `ref object` types: `XxxRef` suffix (see the sketch below)
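A small sketch pulling these conventions together (the names are hypothetical, not actual nwaku types):
```nim
type
  StoreError* = object of CatchableError # exception type: XxxError
  PeerStoreRef* = ref object             # ref object type: XxxRef suffix
    maxPeers: int                        # no `*`: module-private field

const MaxStoredPeers* = 1000             # constant: PascalCase

func new*(T: type PeerStoreRef, maxPeers = MaxStoredPeers): T =
  PeerStoreRef(maxPeers: maxPeers)       # `new` constructor for a ref type
```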
### Imports Organization
Group imports: stdlib, external libs, internal modules:
```nim
import
std/[options, sequtils], # stdlib
results, chronicles, chronos, # external
libp2p/peerid
import
../node/peer_manager, # internal (separate import block)
../waku_core,
./common
```
### Async Programming
Uses chronos, not stdlib `asyncdispatch`:
```nim
proc handleRequest(
wl: WakuLightPush, peerId: PeerId
): Future[WakuLightPushResult] {.async.} =
let res = await wl.pushHandler(peerId, pubsubTopic, message)
return res
```
### Error Handling
The project uses both Result types and exceptions:
Result types from nim-results are used for protocol and API-level errors:
```nim
proc subscribe(
wf: WakuFilter, peerId: PeerID
): Future[FilterSubscribeResult] {.async.} =
if contentTopics.len > MaxContentTopicsPerRequest:
return err(FilterSubscribeError.badRequest("exceeds maximum"))
# Handle Result with isOkOr
(await wf.subscriptions.addSubscription(peerId, criteria)).isOkOr:
return err(FilterSubscribeError.serviceUnavailable(error))
ok()
```
Exceptions still used for:
- chronos async failures (CancelledError, etc.)
- Database/system errors
- Library interop
Most files start with `{.push raises: [].}` to disable exception tracking, then use try/catch blocks where needed.
### Pragma Usage
```nim
{.push raises: [].} # Disable default exception tracking (at file top)
proc myProc(): Result[T, E] {.async.} = # Async proc
```
### Protocol Inheritance
Protocols inherit from libp2p's `LPProtocol`:
```nim
type WakuLightPush* = ref object of LPProtocol
rng*: ref rand.HmacDrbgContext
peerManager*: PeerManager
pushHandler*: PushMessageHandler
```
### Type Visibility
- Public exports use `*` suffix: `type WakuFilter* = ...`
- Fields without `*` are module-private
## Style Guide Essentials
This section summarizes key Nim style guidelines relevant to this project. Full guide: https://status-im.github.io/nim-style-guide/
### Language Features
#### Import and Export
- Use explicit import paths with std/ prefix for stdlib
- Group imports: stdlib, external, internal (separate blocks)
- Export modules whose types appear in public API
- Avoid include
#### Macros and Templates
- Avoid macros and templates - prefer simple constructs
- Avoid generating public API with macros
- Put logic in templates, use macros only for glue code
#### Object Construction
- Prefer `Type(field: value)` syntax
- Use the `Type.init(params)` convention for constructors
- Default zero-initialization should be valid state
- Avoid using the `result` variable for construction (see the sketch below)
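A minimal sketch of the preferred construction styles (the `Config` type is hypothetical):
```nim
type Config = object
  host: string
  port: int

# Prefer direct construction with named fields...
let direct = Config(host: "127.0.0.1", port: 8645)

# ...or the `Type.init(params)` convention; note the expression-based
# body that never touches the `result` variable.
func init(T: type Config, host: string, port: int): T =
  Config(host: host, port: port)

let fromInit = Config.init("127.0.0.1", 8645)
```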
#### `ref object` Types
- Avoid `ref object` unless needed for:
  - Resource handles requiring reference semantics
  - Shared ownership
  - Reference-based data structures (trees, lists)
  - Stable pointer for FFI
- Use explicit `ref MyType` where possible
- Name ref object types with a `Ref` suffix: `XxxRef`
#### Memory Management
- Prefer stack-based and statically sized types in core code
- Use heap allocation in glue layers
- Avoid alloca
- For FFI: use `create`/`dealloc` or `createShared`/`deallocShared` (see the sketch below)
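A sketch of typed heap allocation for a glue/FFI layer, assuming a hypothetical `Frame` type:
```nim
type Frame = object
  len: int
  data: array[256, byte]

let frame = create(Frame) # typed allocation, returns `ptr Frame`
frame.len = 1
frame.data[0] = 0x01
# ... hand `frame` across the FFI boundary ...
dealloc(frame)
```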
#### Variable Usage
- Use the most restrictive of `const`, `let`, `var` (prefer `const` over `let` over `var`)
- Prefer expressions for initialization over `var` then assignment
- Avoid the `result` variable; use explicit return or expression-based returns (see the sketch below)
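For example, prefer an expression over declare-then-assign:
```nim
let useTls = true

# Expression-based initialization: no `var` plus later assignment,
# and no `result` variable.
let scheme =
  if useTls: "wss"
  else: "ws"
```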
#### Functions
- Prefer `func` over `proc`
- Avoid public (`*`) symbols not part of the intended API
- Prefer `openArray` over `seq` for function parameters (see the sketch below)
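A sketch of a side-effect-free `func` taking `openArray`, which accepts arrays, seqs and slices alike:
```nim
func total(sizes: openArray[int]): int =
  var acc = 0
  for s in sizes:
    acc += s
  acc # expression-based return

discard total([1, 2, 3]) # works with an array...
discard total(@[4, 5])   # ...and with a seq
```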
#### Methods (runtime polymorphism)
- Avoid the `method` keyword for dynamic dispatch
- Prefer a manual vtable with proc closures for polymorphism (see the sketch below)
- Methods lack support for generics
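A sketch of the manual-vtable pattern (the `Transport` type and its loopback implementation are hypothetical):
```nim
type Transport = object
  send: proc(data: seq[byte]) {.raises: [], gcsafe.}
  close: proc() {.raises: [], gcsafe.}

proc loopbackTransport(): Transport =
  # Closures capture `buffered`, giving runtime polymorphism
  # without `method` dispatch.
  var buffered: seq[seq[byte]]
  let sendImpl = proc(data: seq[byte]) {.raises: [], gcsafe.} =
    buffered.add(data)
  let closeImpl = proc() {.raises: [], gcsafe.} =
    buffered.setLen(0)
  Transport(send: sendImpl, close: closeImpl)
```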
#### Miscellaneous
- Annotate callback proc types with {.raises: [], gcsafe.}
- Avoid explicit {.inline.} pragma
- Avoid converters
- Avoid finalizers
### Type Guidelines
#### Binary Data
- Use byte for binary data
- Use seq[byte] for dynamic arrays
- Convert string to seq[byte] early if stdlib returns binary as string
#### Integers
- Prefer signed (int, int64) for counting, lengths, indexing
- Use unsigned with explicit size (uint8, uint64) for binary data, bit ops
- Avoid Natural
- Check ranges before converting to int
- Avoid casting pointers to int
- Avoid range types
#### Strings
- Use string for text
- Use seq[byte] for binary data instead of string
### Error Handling
#### Philosophy
- Prefer Result, Opt for explicit error handling
- Use Exceptions only for legacy code compatibility
#### Result Types
- Use Result[T, E] for operations that can fail
- Use cstring for simple error messages: Result[T, cstring]
- Use enum for errors needing differentiation: Result[T, SomeErrorEnum]
- Use Opt[T] for simple optional values
- Annotate all modules: {.push raises: [].} at top
#### Exceptions (when unavoidable)
- Inherit from CatchableError, name `XxxError`
- Use Defect for panics/logic errors, name `XxxDefect`
- Annotate functions explicitly: `{.raises: [SpecificError].}`
- Catch specific error types, avoid catching CatchableError
- Use expression-based try blocks
- Isolate legacy exception code with try/except and convert it to Result (see the sketch below)
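A sketch of isolating exception-raising stdlib code behind a Result, using an expression-based try block (the `readConfig` helper is hypothetical):
```nim
import results

proc readConfig(path: string): Result[string, cstring] =
  let contents =
    try:
      readFile(path) # may raise IOError
    except IOError:
      return err("cannot read config file")
  ok(contents)
```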
#### Common Defect Sources
- Overflow in signed arithmetic
- Array/seq indexing with []
- Implicit range type conversions
#### Status Codes
- Avoid status code pattern
- Use Result instead
### Library Usage
#### Standard Library
- Use judiciously, prefer focused packages
- Prefer these replacements:
  - async: chronos
  - bitops: stew/bitops2
  - endians: stew/endians2
  - exceptions: results
  - io: stew/io2
#### Results Library
- Use cstring errors for diagnostics without differentiation
- Use enum errors when the caller needs to act on specific errors
- Use complex types when additional error context is needed
- Use the `isOkOr` pattern for chaining (see the sketch below)
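A sketch of chaining with `valueOr` (the sibling of `isOkOr` for when the unwrapped value is needed); `parsePort` and `startServer` are hypothetical:
```nim
import results

func parsePort(s: string): Result[int, cstring] =
  # hypothetical parser, for illustration only
  if s.len == 0:
    return err("empty port")
  ok(8645)

proc startServer(): Result[void, cstring] =
  # `valueOr` unwraps the value or runs the block with `error` bound;
  # `isOkOr` is the equivalent when no value is needed.
  let port = parsePort("8645").valueOr:
    return err(error)
  discard port
  ok()
```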
#### Wrappers (C/FFI)
- Prefer native Nim when available
- For C libraries: use {.compile.} to build from source
- Create xxx_abi.nim for raw ABI wrapper
- Avoid C++ libraries
#### Miscellaneous
- Print hex output in lowercase, accept both cases
### Common Pitfalls
- Defects lack tracking by {.raises.}
- nil ref causes runtime crashes
- result variable disables branch checking
- Exception hierarchy unclear between Nim versions
- Range types have compiler bugs
- Finalizers infect all instances of type
## Common Workflows
### Adding a New Protocol
1. Create directory: `waku/waku_myprotocol/`
2. Define core files:
- `rpc.nim` - Message types
- `rpc_codec.nim` - Protobuf encoding
- `protocol.nim` - Protocol handler
- `client.nim` - Client API
- `common.nim` - Shared types
3. Define protocol type in `protocol.nim`:
```nim
type WakuMyProtocol* = ref object of LPProtocol
peerManager: PeerManager
# ... fields
```
4. Implement request handler
5. Mount in WakuNode (`waku/node/waku_node.nim`)
6. Add tests in `tests/waku_myprotocol/`
7. Export module via `waku/waku_myprotocol.nim`
### Adding a REST API Endpoint
1. Define handler in `waku/rest_api/endpoint/myprotocol/`
2. Implement endpoint following pattern:
```nim
proc installMyProtocolApiHandlers*(
router: var RestRouter, node: WakuNode
) =
router.api(MethodGet, "/waku/v2/myprotocol/endpoint") do () -> RestApiResponse:
# Implementation
return RestApiResponse.jsonResponse(data, status = Http200)
```
3. Register in `waku/rest_api/handlers.nim`
### Adding Database Migration
For message_store (SQLite):
1. Create `migrations/message_store/NNNNN_description.up.sql`
2. Create corresponding `.down.sql` for rollback
3. Increment version number sequentially
4. Test migration locally before committing
For PostgreSQL: add in `migrations/message_store_postgres/`
### Running Single Test During Development
```bash
# Build test binary
make test tests/waku_filter_v2/test_waku_client.nim
# Binary location
./build/tests/waku_filter_v2/test_waku_client.nim.bin
# Or combine
make test tests/waku_filter_v2/test_waku_client.nim "specific test name"
```
### Debugging with Chronicles
Set log level and filter topics:
```bash
nim c -r \
-d:chronicles_log_level=TRACE \
-d:chronicles_disabled_topics="eth,dnsdisc" \
tests/mytest.nim
```
## Key Constraints
### Vendor Directory
- Never edit files directly in vendor - it is auto-generated from git submodules
- Always run `make update` after pulling changes
- Managed by `nimbus-build-system`
### Chronicles Performance
- Log levels are configured at compile time for performance
- Runtime filtering is available but should be used sparingly: `-d:chronicles_runtime_filtering=on`
- Default sinks are optimized for production
### Memory Management
- Uses `refc` (reference counting with cycle collection)
- Automatically enforced by the build system (hardcoded in `waku.nimble`)
- Do not override unless absolutely necessary, as it breaks compatibility
### RLN Dependencies
- RLN code requires a Rust toolchain, which explains Rust imports in some modules
- Pre-built `librln` libraries are checked into the repository
## Quick Reference
Language: Nim 2.x | License: MIT or Apache 2.0
### Important Files
- `Makefile` - Primary build interface
- `waku.nimble` - Package definition and build tasks (called via nimbus-build-system)
- `vendor/nimbus-build-system/` - Status's build framework
- `waku/node/waku_node.nim` - Core node implementation
- `apps/wakunode2/wakunode2.nim` - Main CLI application
- `waku/factory/waku_conf.nim` - Configuration types
- `library/libwaku.nim` - C bindings entry point
### Testing Entry Points
- `tests/all_tests_waku.nim` - All Waku protocol tests
- `tests/all_tests_wakunode2.nim` - Node application tests
- `tests/all_tests_common.nim` - Common utilities tests
### Key Dependencies
- `chronos` - Async framework
- `nim-results` - Result type for error handling
- `chronicles` - Logging
- `libp2p` - P2P networking
- `confutils` - CLI argument parsing
- `presto` - REST server
- `nimcrypto` - Cryptographic primitives
Note: For specific version requirements, check `waku.nimble`.


@ -1,5 +1,5 @@
# BUILD NIM APP ----------------------------------------------------------------
FROM rust:1.81.0-alpine3.19 AS nim-build
FROM rustlang/rust:nightly-alpine3.19 AS nim-build
ARG NIMFLAGS
ARG MAKE_TARGET=lightpushwithmix


@ -43,6 +43,9 @@ ifeq ($(detected_OS),Windows)
LIBS = -lws2_32 -lbcrypt -liphlpapi -luserenv -lntdll -lminiupnpc -lnatpmp -lpq
NIM_PARAMS += $(foreach lib,$(LIBS),--passL:"$(lib)")
export PATH := /c/msys64/usr/bin:/c/msys64/mingw64/bin:/c/msys64/usr/lib:/c/msys64/mingw64/lib:$(PATH)
endif
##########
@ -143,6 +146,9 @@ ifeq ($(USE_LIBBACKTRACE), 0)
NIM_PARAMS := $(NIM_PARAMS) -d:disable_libbacktrace
endif
# enable experimental exit is dest feature in libp2p mix
NIM_PARAMS := $(NIM_PARAMS) -d:libp2p_mix_experimental_exit_is_dest
libbacktrace:
+ $(MAKE) -C vendor/nim-libbacktrace --no-print-directory BUILD_CXX_LIB=0
@ -421,13 +427,13 @@ docker-liteprotocoltester-push:
STATIC ?= 0
libwaku: | build deps librln
rm -f build/libwaku*
ifeq ($(STATIC), 1)
echo -e $(BUILD_MSG) "build/$@.a" && $(ENV_SCRIPT) nim libwakuStatic $(NIM_PARAMS) waku.nims
else ifeq ($(detected_OS),Windows)
make -f scripts/libwaku_windows_setup.mk windows-setup
echo -e $(BUILD_MSG) "build/$@.dll" && $(ENV_SCRIPT) nim libwakuDynamic $(NIM_PARAMS) waku.nims
else
echo -e $(BUILD_MSG) "build/$@.so" && $(ENV_SCRIPT) nim libwakuDynamic $(NIM_PARAMS) waku.nims


@ -82,6 +82,8 @@ type
PrivateKey* = crypto.PrivateKey
Topic* = waku_core.PubsubTopic
const MinMixNodePoolSize = 4
#####################
## chat2 protobufs ##
#####################
@ -124,7 +126,7 @@ proc encode*(message: Chat2Message): ProtoBuffer =
return serialised
proc toString*(message: Chat2Message): string =
proc `$`*(message: Chat2Message): string =
# Get message date and timestamp in local time
let time = message.timestamp.fromUnix().local().format("'<'MMM' 'dd,' 'HH:mm'>'")
@ -331,13 +333,14 @@ proc maintainSubscription(
const maxFailedServiceNodeSwitches = 10
var noFailedSubscribes = 0
var noFailedServiceNodeSwitches = 0
const RetryWaitMs = 2.seconds # Quick retry interval
const SubscriptionMaintenanceMs = 30.seconds # Subscription maintenance interval
# Use chronos.Duration explicitly to avoid mismatch with std/times.Duration
let RetryWait = chronos.seconds(2) # Quick retry interval
let SubscriptionMaintenance = chronos.seconds(30) # Subscription maintenance interval
while true:
info "maintaining subscription at", peer = constructMultiaddrStr(actualFilterPeer)
# First use filter-ping to check if we have an active subscription
let pingErr = (await wakuNode.wakuFilterClient.ping(actualFilterPeer)).errorOr:
await sleepAsync(SubscriptionMaintenanceMs)
await sleepAsync(SubscriptionMaintenance)
info "subscription is live."
continue
@ -350,7 +353,7 @@ proc maintainSubscription(
some(filterPubsubTopic), filterContentTopic, actualFilterPeer
)
).errorOr:
await sleepAsync(SubscriptionMaintenanceMs)
await sleepAsync(SubscriptionMaintenance)
if noFailedSubscribes > 0:
noFailedSubscribes -= 1
notice "subscribe request successful."
@ -365,7 +368,7 @@ proc maintainSubscription(
# wakunode.peerManager.peerStore.delete(actualFilterPeer)
if noFailedSubscribes < maxFailedSubscribes:
await sleepAsync(RetryWaitMs) # Wait a bit before retrying
await sleepAsync(RetryWait) # Wait a bit before retrying
elif not preventPeerSwitch:
# try again with new peer without delay
let actualFilterPeer = selectRandomServicePeer(
@ -380,7 +383,7 @@ proc maintainSubscription(
noFailedSubscribes = 0
else:
await sleepAsync(SubscriptionMaintenanceMs)
await sleepAsync(SubscriptionMaintenance)
{.pop.}
# @TODO confutils.nim(775, 17) Error: can raise an unlisted exception: ref IOError
@ -450,6 +453,8 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
(await node.mountMix(conf.clusterId, mixPrivKey, conf.mixnodes)).isOkOr:
error "failed to mount waku mix protocol: ", error = $error
quit(QuitFailure)
await node.mountRendezvousClient(conf.clusterId)
await node.start()
node.peerManager.start()
@ -587,9 +592,9 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
error "Couldn't find any service peer"
quit(QuitFailure)
#await mountLegacyLightPush(node)
node.peerManager.addServicePeer(servicePeerInfo, WakuLightpushCodec)
node.peerManager.addServicePeer(servicePeerInfo, WakuPeerExchangeCodec)
#node.peerManager.addServicePeer(servicePeerInfo, WakuRendezVousCodec)
# Start maintaining subscription
asyncSpawn maintainSubscription(
@ -597,12 +602,12 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
)
echo "waiting for mix nodes to be discovered..."
while true:
if node.getMixNodePoolSize() >= 3:
if node.getMixNodePoolSize() >= MinMixNodePoolSize:
break
discard await node.fetchPeerExchangePeers()
await sleepAsync(1000)
while node.getMixNodePoolSize() < 3:
while node.getMixNodePoolSize() < MinMixNodePoolSize:
info "waiting for mix nodes to be discovered",
currentpoolSize = node.getMixNodePoolSize()
await sleepAsync(1000)


@ -143,16 +143,18 @@ proc areProtocolsSupported(
proc pingNode(
node: WakuNode, peerInfo: RemotePeerInfo
): Future[void] {.async, gcsafe.} =
): Future[bool] {.async, gcsafe.} =
try:
let conn = await node.switch.dial(peerInfo.peerId, peerInfo.addrs, PingCodec)
let pingDelay = await node.libp2pPing.ping(conn)
info "Peer response time (ms)", peerId = peerInfo.peerId, ping = pingDelay.millis
return true
except CatchableError:
var msg = getCurrentExceptionMsg()
if msg == "Future operation cancelled!":
msg = "timedout"
error "Failed to ping the peer", peer = peerInfo, err = msg
return false
proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
let conf: WakuCanaryConf = WakuCanaryConf.load()
@ -268,8 +270,13 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
let lp2pPeerStore = node.switch.peerStore
let conStatus = node.peerManager.switch.peerStore[ConnectionBook][peer.peerId]
var pingSuccess = true
if conf.ping:
discard await pingFut
try:
pingSuccess = await pingFut
except CatchableError as exc:
pingSuccess = false
error "Ping operation failed or timed out", error = exc.msg
if conStatus in [Connected, CanConnect]:
let nodeProtocols = lp2pPeerStore[ProtoBook][peer.peerId]
@ -278,6 +285,11 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
error "Not all protocols are supported",
expected = conf.protocols, supported = nodeProtocols
quit(QuitFailure)
# Check ping result if ping was enabled
if conf.ping and not pingSuccess:
error "Node is reachable and supports protocols but ping failed - connection may be unstable"
quit(QuitFailure)
elif conStatus == CannotConnect:
error "Could not connect", peerId = peer.peerId
quit(QuitFailure)


@ -38,6 +38,9 @@ A particular OpenAPI spec can be easily imported into [Postman](https://www.post
curl http://localhost:8645/debug/v1/info -s | jq
```
### Store API
The `page_size` flag in the Store API has a default value of 20 and a max value of 100.
### Node configuration
Find details [here](https://github.com/waku-org/nwaku/tree/master/docs/operators/how-to/configure-rest-api.md)


@ -6,44 +6,52 @@ For more context, see https://trunkbaseddevelopment.com/branch-for-release/
## How to do releases
### Before release
### Prerequisites
- All issues under the corresponding release [milestone](https://github.com/waku-org/nwaku/milestones) have been closed or, after consultation, deferred to the next release.
- All submodules are up to date.
> Updating submodules requires a PR (and very often several "fixes" to maintain compatibility with the changes in submodules). That PR process must be done and merged a couple of days before the release.
Ensure all items in this list are ticked:
- [ ] All issues under the corresponding release [milestone](https://github.com/waku-org/nwaku/milestones) have been closed or, after consultation, deferred to the next release.
- [ ] All submodules are up to date.
> **IMPORTANT:** Updating submodules requires a PR (and very often several "fixes" to maintain compatibility with the changes in submodules). That PR process must be done and merged a couple of days before the release.
> In case the submodules update has a low effort and/or risk for the release, follow the ["Update submodules"](./git-submodules.md) instructions.
> If the effort or risk is too high, consider postponing the submodules upgrade for the subsequent release or delaying the current release until the submodules updates are included in the release candidate.
- [ ] The [js-waku CI tests](https://github.com/waku-org/js-waku/actions/workflows/ci.yml) pass against the release candidate (i.e. nwaku latest `master`).
> **NOTE:** This serves as a basic regression test against typical clients of nwaku.
> The specific job that needs to pass is named `node_with_nwaku_master`.
### Performing the release
> If the effort or risk is too high, consider postponing the submodules upgrade for the subsequent release or delaying the current release until the submodules updates are included in the release candidate.
### Release types
- **Full release**: follow the entire [Release process](#release-process--step-by-step).
- **Beta release**: skip only steps `6a` and `6c` of the [Release process](#release-process--step-by-step).
- Choose the appropriate release process based on the release type:
- [Full Release](../../.github/ISSUE_TEMPLATE/prepare_full_release.md)
- [Beta Release](../../.github/ISSUE_TEMPLATE/prepare_beta_release.md)
### Release process ( step by step )
1. Check out a release branch from master
```
git checkout -b release/v0.1.0
git checkout -b release/v0.X.0
```
1. Update `CHANGELOG.md` and ensure it is up to date. Use the helper Make target to get PR based release-notes/changelog update.
2. Update `CHANGELOG.md` and ensure it is up to date. Use the helper Make target to get PR based release-notes/changelog update.
```
make release-notes
```
1. Create a release-candidate tag with the same name as release and `-rc.N` suffix a few days before the official release and push it
3. Create a release-candidate tag with the same name as the release plus an `-rc.N` suffix a few days before the official release, and push it
```
git tag -as v0.1.0-rc.0 -m "Initial release."
git push origin v0.1.0-rc.0
git tag -as v0.X.0-rc.0 -m "Initial release."
git push origin v0.X.0-rc.0
```
This will trigger a [workflow](../../.github/workflows/pre-release.yml) which will build RC artifacts and create and publish a Github release
This will trigger a [workflow](../../.github/workflows/pre-release.yml) which will build RC artifacts and create and publish a GitHub release
1. Open a PR from the release branch for others to review the included changes and the release-notes
4. Open a PR from the release branch for others to review the included changes and the release-notes
1. In case additional changes are needed, create a new RC tag
5. In case additional changes are needed, create a new RC tag
Make sure the new tag is associated with the CHANGELOG update.
@ -52,25 +60,57 @@ Ensure all items in this list are ticked:
# Make changes, rebase and create new tag
# Squash to one commit and make a nice commit message
git rebase -i origin/master
git tag -as v0.1.0-rc.1 -m "Initial release."
git push origin v0.1.0-rc.1
git tag -as v0.X.0-rc.1 -m "Initial release."
git push origin v0.X.0-rc.1
```
1. Validate the release. For the release validation process, please refer to the following [guide](https://www.notion.so/Release-Process-61234f335b904cd0943a5033ed8f42b4#47af557e7f9744c68fdbe5240bf93ca9)
Similarly use v0.X.0-rc.2, v0.X.0-rc.3 etc. for additional RC tags.
1. Once the release-candidate has been validated, create a final release tag and push it.
We also need to merge release branch back to master as a final step.
6. **Validation of release candidate**
6a. **Automated testing**
- Ensure all the unit tests (specifically js-waku tests) are green against the release candidate.
- Ask Vac-QA and Vac-DST to run their available tests against the release candidate; share all release candidates with both teams.
> We need an additional report like [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f) specifically from the DST team.
6b. **Waku fleet testing**
- Start the [deployment job](https://ci.infra.status.im/job/nim-waku/) for `waku.sandbox` and `waku.test` and wait for it to complete. If it fails, debug it.
- After completion, disable [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to `master`.
- Verify at https://fleets.waku.org/ that the fleet is locked to the release candidate version.
- Check if the image is created at [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab).
- Search _Kibana_ logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test` and `waku.sandbox`.
- Most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")` OR `(fleet: "waku.sandbox" AND message: "SIGSEGV")`.
- Enable the `waku.test` fleet again to resume auto-deployment of the latest `master` commit.
6c. **Status fleet testing**
- Deploy release candidate to `status.staging`
- Perform [sanity check](https://www.notion.so/How-to-test-Nwaku-on-Status-12c6e4b9bf06420ca868bd199129b425) and log results as comments in this issue.
- Connect 2 instances to the `status.staging` fleet, one in relay mode and the other as a light client.
- 1:1 Chats with each other
- Send and receive messages in a community
- Close one instance, send messages from the second instance, then reopen the first instance and confirm that messages sent while it was offline are retrieved from the store
- Perform checks based on _end-user impact_.
- Inform other (Waku and Status) CCs to point their instances to `status.staging` for a few days. Ping Status colleagues from their Discord server or [Status community](https://status.app) (not a blocking point).
- Ask Status-QA to perform sanity checks (as described above) and checks based on _end user impact_; specify the version being tested.
- Ask Status-QA or infra to run the automated Status e2e tests against `status.staging`.
- Get other CCs' sign-off: they should comment on this PR, e.g., "Used the app for a week, no problem." If problems are reported, resolve them and create a new RC.
- **Get Status-QA sign-off**, ensuring that the `status.test` update will not disturb ongoing activities.
7. Once the release-candidate has been validated, create a final release tag and push it.
We also need to merge the release branch back into master as a final step.
```
git checkout release/v0.1.0
git tag -as v0.1.0 -m "Initial release."
git push origin v0.1.0
git checkout release/v0.X.0
git tag -as v0.X.0 -m "Final release." # use v0.X.0-beta as the tag for a beta release
git push origin v0.X.0
git switch master
git pull
git merge release/v0.1.0
git merge release/v0.X.0
```
8. Update `waku-rust-bindings`, `waku-simulator` and `nwaku-compose` to use the new release.
1. Create a [Github release](https://github.com/waku-org/nwaku/releases) from the release tag.
9. Create a [GitHub release](https://github.com/waku-org/nwaku/releases) from the release tag.
* Add binaries produced by the ["Upload Release Asset"](https://github.com/waku-org/nwaku/actions/workflows/release-assets.yml) workflow. Where possible, test the binaries before uploading to the release.
@ -80,22 +120,10 @@ We also need to merge release branch back to master as a final step.
2. Deploy the release image to [Dockerhub](https://hub.docker.com/r/wakuorg/nwaku) by triggering [the manual Jenkins deployment job](https://ci.infra.status.im/job/nim-waku/job/docker-manual/).
> Ensure the following build parameters are set:
> - `MAKE_TARGET`: `wakunode2`
> - `IMAGE_TAG`: the release tag (e.g. `v0.16.0`)
> - `IMAGE_TAG`: the release tag (e.g. `v0.36.0`)
> - `IMAGE_NAME`: `wakuorg/nwaku`
> - `NIMFLAGS`: `--colors:off -d:disableMarchNative -d:chronicles_colors:none -d:postgres`
> - `GIT_REF` the release tag (e.g. `v0.16.0`)
3. Update the default nwaku image in [nwaku-compose](https://github.com/waku-org/nwaku-compose/blob/master/docker-compose.yml)
4. Deploy the release to appropriate fleets:
- Inform clients
> **NOTE:** known clients are currently using some version of js-waku, go-waku, nwaku or waku-rs.
> Clients are reachable via the corresponding channels on the Vac Discord server.
> It should be enough to inform clients on the `#nwaku` and `#announce` channels on Discord.
> Informal conversations with specific repo maintainers are often part of this process.
- Check if nwaku configuration parameters changed. If so [update fleet configuration](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64) in [infra-nim-waku](https://github.com/status-im/infra-nim-waku)
- Deploy release to the `waku.sandbox` fleet from [Jenkins](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-sandbox/).
- Ensure that nodes successfully start up and monitor health using [Grafana](https://grafana.infra.status.im/d/qrp_ZCTGz/nim-waku-v2?orgId=1) and [Kibana](https://kibana.infra.status.im/goto/a7728e70-eb26-11ec-81d1-210eb3022c76).
- If necessary, revert by deploying the previous release. Download logs and open a bug report issue.
5. Submit a PR to merge the release branch back to `master`. Make sure you use the option `Merge pull request (Create a merge commit)` to perform such merge.
> - `GIT_REF` the release tag (e.g. `v0.36.0`)
### Performing a patch release
@ -116,4 +144,14 @@ We also need to merge release branch back to master as a final step.
4. Once the release-candidate has been validated and the changelog PR has been merged, cherry-pick the changelog update from master to the release branch. Create a final release tag and push it.
5. Create a [Github release](https://github.com/waku-org/nwaku/releases) from the release tag and follow the same post-release process as usual.
5. Create a [GitHub release](https://github.com/waku-org/nwaku/releases) from the release tag and follow the same post-release process as usual.
### Links
- [Release process](https://github.com/waku-org/nwaku/blob/master/docs/contributors/release-process.md)
- [Release notes](https://github.com/waku-org/nwaku/blob/master/CHANGELOG.md)
- [Fleet ownership](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64)
- [Infra-nim-waku](https://github.com/status-im/infra-nim-waku)
- [Jenkins](https://ci.infra.status.im/job/nim-waku/)
- [Fleets](https://fleets.waku.org/)
- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab)


@ -1,4 +1,3 @@
# Configure a REST API node
A subset of the node configuration can be used to modify the behaviour of the HTTP REST API.
@ -21,3 +20,5 @@ Example:
```shell
wakunode2 --rest=true
```
The `page_size` flag in the Store API has a default value of 20 and a max value of 100.


@ -51,7 +51,6 @@ proc splitPeerIdAndAddr(maddr: string): (string, string) =
proc setupAndPublish(rng: ref HmacDrbgContext, conf: LightPushMixConf) {.async.} =
# use notice to filter all waku messaging
setupLog(logging.LogLevel.DEBUG, logging.LogFormat.TEXT)
notice "starting publisher", wakuPort = conf.port
let
@ -114,17 +113,8 @@ proc setupAndPublish(rng: ref HmacDrbgContext, conf: LightPushMixConf) {.async.}
let dPeerId = PeerId.init(destPeerId).valueOr:
error "Failed to initialize PeerId", error = error
return
var conn: Connection
if not conf.mixDisabled:
conn = node.wakuMix.toConnection(
MixDestination.init(dPeerId, pxPeerInfo.addrs[0]), # destination lightpush peer
WakuLightPushCodec, # protocol codec which will be used over the mix connection
MixParameters(expectReply: Opt.some(true), numSurbs: Opt.some(byte(1))),
# mix parameters indicating we expect a single reply
).valueOr:
error "failed to create mix connection", error = error
return
await node.mountRendezvousClient(clusterId)
await node.start()
node.peerManager.start()
node.startPeerExchangeLoop()
@ -145,20 +135,26 @@ proc setupAndPublish(rng: ref HmacDrbgContext, conf: LightPushMixConf) {.async.}
var i = 0
while i < conf.numMsgs:
var conn: Connection
if conf.mixDisabled:
let connOpt = await node.peerManager.dialPeer(dPeerId, WakuLightPushCodec)
if connOpt.isNone():
error "failed to dial peer with WakuLightPushCodec", target_peer_id = dPeerId
return
conn = connOpt.get()
else:
conn = node.wakuMix.toConnection(
MixDestination.exitNode(dPeerId), # destination lightpush peer
WakuLightPushCodec, # protocol codec which will be used over the mix connection
MixParameters(expectReply: Opt.some(true), numSurbs: Opt.some(byte(1))),
# mix parameters indicating we expect a single reply
).valueOr:
error "failed to create mix connection", error = error
return
i = i + 1
let text =
"""Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam venenatis magna ut tortor faucibus, in vestibulum nibh commodo. Aenean eget vestibulum augue. Nullam suscipit urna non nunc efficitur, at iaculis nisl consequat. Mauris quis ultrices elit. Suspendisse lobortis odio vitae laoreet facilisis. Cras ornare sem felis, at vulputate magna aliquam ac. Duis quis est ultricies, euismod nulla ac, interdum dui. Maecenas sit amet est vitae enim commodo gravida. Proin vitae elit nulla. Donec tempor dolor lectus, in faucibus velit elementum quis. Donec non mauris eu nibh faucibus cursus ut egestas dolor. Aliquam venenatis ligula id velit pulvinar malesuada. Vestibulum scelerisque, justo non porta gravida, nulla justo tempor purus, at sollicitudin erat erat vel libero.
Fusce nec eros eu metus tristique aliquet. Sed ut magna sagittis, vulputate diam sit amet, aliquam magna. Aenean sollicitudin velit lacus, eu ultrices magna semper at. Integer vitae felis ligula. In a eros nec risus condimentum tincidunt fermentum sit amet ex. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Nullam vitae justo maximus, fringilla tellus nec, rutrum purus. Etiam efficitur nisi dapibus euismod vestibulum. Phasellus at felis elementum, tristique nulla ac, consectetur neque.
Maecenas hendrerit nibh eget velit rutrum, in ornare mauris molestie. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia curae; Praesent dignissim efficitur eros, sit amet rutrum justo mattis a. Fusce mollis neque at erat placerat bibendum. Ut fringilla fringilla orci, ut fringilla metus fermentum vel. In hac habitasse platea dictumst. Donec hendrerit porttitor odio. Suspendisse ornare sollicitudin mauris, sodales pulvinar velit finibus vel. Fusce id pulvinar neque. Suspendisse eget tincidunt sapien, ac accumsan turpis.
Curabitur cursus tincidunt leo at aliquet. Nunc dapibus quam id venenatis varius. Aenean eget augue vel velit dapibus aliquam. Nulla facilisi. Curabitur cursus, turpis vel congue volutpat, tellus eros cursus lacus, eu fringilla turpis orci non ipsum. In hac habitasse platea dictumst. Nulla aliquam nisl a nunc placerat, eget dignissim felis pulvinar. Fusce sed porta mauris. Donec sodales arcu in nisl sodales, quis posuere massa ultricies. Nam feugiat massa eget felis ultricies finibus. Nunc magna nulla, interdum a elit vel, egestas efficitur urna. Ut posuere tincidunt odio in maximus. Sed at dignissim est.
Morbi accumsan elementum ligula ut fringilla. Praesent in ex metus. Phasellus urna est, tempus sit amet elementum vitae, sollicitudin vel ipsum. Fusce hendrerit eleifend dignissim. Maecenas tempor dapibus dui quis laoreet. Cras tincidunt sed ipsum sed pellentesque. Proin ut tellus nec ipsum varius interdum. Curabitur id velit ligula. Etiam sapien nulla, cursus sodales orci eu, porta lobortis nunc. Nunc at dapibus velit. Nulla et nunc vehicula, condimentum erat quis, elementum dolor. Quisque eu metus fermentum, vestibulum tellus at, sollicitudin odio. Ut vel neque justo.
Praesent porta porta velit, vel porttitor sem. Donec sagittis at nulla venenatis iaculis. Nullam vel eleifend felis. Nullam a pellentesque lectus. Aliquam tincidunt semper dui sed bibendum. Donec hendrerit, urna et cursus dictum, neque neque convallis magna, id condimentum sem urna quis massa. Fusce non quam vulputate, fermentum mauris at, malesuada ipsum. Mauris id pellentesque libero. Donec vel erat ullamcorper, dapibus quam id, imperdiet urna. Praesent sed ligula ut est pellentesque pharetra quis et diam. Ut placerat lorem eget mi fermentum aliquet.
Fusce nec eros eu metus tristique aliquet.
This is message #""" &
$i & """ sent from a publisher using mix. End of transmission."""
let message = WakuMessage(
@ -168,25 +164,34 @@ proc setupAndPublish(rng: ref HmacDrbgContext, conf: LightPushMixConf) {.async.}
timestamp: getNowInNanosecondTime(),
) # current timestamp
let res = await node.wakuLightpushClient.publishWithConn(
LightpushPubsubTopic, message, conn, dPeerId
)
let res =
await node.wakuLightpushClient.publish(some(LightpushPubsubTopic), message, conn)
if res.isOk():
lp_mix_success.inc()
notice "published message",
text = text,
timestamp = message.timestamp,
psTopic = LightpushPubsubTopic,
contentTopic = LightpushContentTopic
else:
error "failed to publish message", error = $res.error
let startTime = getNowInNanosecondTime()
(
await node.wakuLightpushClient.publishWithConn(
LightpushPubsubTopic, message, conn, dPeerId
)
).isOkOr:
error "failed to publish message via mix", error = error.desc
lp_mix_failed.inc(labelValues = ["publish_error"])
return
let latency = float64(getNowInNanosecondTime() - startTime) / 1_000_000.0
lp_mix_latency.observe(latency)
lp_mix_success.inc()
notice "published message",
text = text,
timestamp = message.timestamp,
latency = latency,
psTopic = LightpushPubsubTopic,
contentTopic = LightpushContentTopic
if conf.mixDisabled:
await conn.close()
await sleepAsync(conf.msgIntervalMilliseconds)
info "###########Sent all messages via mix"
info "Sent all messages via mix"
quit(0)
when isMainModule:


@ -6,3 +6,6 @@ declarePublicCounter lp_mix_success, "number of lightpush messages sent via mix"
declarePublicCounter lp_mix_failed,
"number of lightpush messages failed via mix", labels = ["error"]
declarePublicHistogram lp_mix_latency,
"lightpush publish latency via mix in milliseconds"

flake.lock

@ -22,24 +22,46 @@
"zerokit": "zerokit"
}
},
"zerokit": {
"rust-overlay": {
"inputs": {
"nixpkgs": [
"zerokit",
"nixpkgs"
]
},
"locked": {
"lastModified": 1743756626,
"narHash": "sha256-SvhfEl0bJcRsCd79jYvZbxQecGV2aT+TXjJ57WVv7Aw=",
"lastModified": 1748399823,
"narHash": "sha256-kahD8D5hOXOsGbNdoLLnqCL887cjHkx98Izc37nDjlA=",
"owner": "oxalica",
"repo": "rust-overlay",
"rev": "d68a69dc71bc19beb3479800392112c2f6218159",
"type": "github"
},
"original": {
"owner": "oxalica",
"repo": "rust-overlay",
"type": "github"
}
},
"zerokit": {
"inputs": {
"nixpkgs": [
"nixpkgs"
],
"rust-overlay": "rust-overlay"
},
"locked": {
"lastModified": 1749115386,
"narHash": "sha256-UexIE2D7zr6aRajwnKongXwCZCeRZDXOL0kfjhqUFSU=",
"owner": "vacp2p",
"repo": "zerokit",
"rev": "c60e0c33fc6350a4b1c20e6b6727c44317129582",
"rev": "dc0b31752c91e7b4fefc441cfa6a8210ad7dba7b",
"type": "github"
},
"original": {
"owner": "vacp2p",
"repo": "zerokit",
"rev": "c60e0c33fc6350a4b1c20e6b6727c44317129582",
"rev": "dc0b31752c91e7b4fefc441cfa6a8210ad7dba7b",
"type": "github"
}
}


@ -9,7 +9,7 @@
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs?rev=f44bd8ca21e026135061a0a57dcf3d0775b67a49";
zerokit = {
url = "github:vacp2p/zerokit?rev=c60e0c33fc6350a4b1c20e6b6727c44317129582";
url = "github:vacp2p/zerokit?rev=dc0b31752c91e7b4fefc441cfa6a8210ad7dba7b";
inputs.nixpkgs.follows = "nixpkgs";
};
};
@ -49,11 +49,18 @@
libwaku-android-arm64 = pkgs.callPackage ./nix/default.nix {
inherit stableSystems;
src = self;
targets = ["libwaku-android-arm64"];
androidArch = "aarch64-linux-android";
targets = ["libwaku-android-arm64"];
abidir = "arm64-v8a";
zerokitPkg = zerokit.packages.${system}.zerokit-android-arm64;
zerokitRln = zerokit.packages.${system}.rln-android-arm64;
};
wakucanary = pkgs.callPackage ./nix/default.nix {
inherit stableSystems;
src = self;
targets = ["wakucanary"];
zerokitRln = zerokit.packages.${system}.rln;
};
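# Illustrative note (an assumption, not part of this diff): with the wakucanary
# output above defined, the canary binary should be buildable with the standard
# flakes CLI, e.g. `nix build .#wakucanary`.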
default = libwaku-android-arm64;
});
@ -61,4 +68,4 @@
default = pkgsFor.${system}.callPackage ./nix/shell.nix {};
});
};
}
}


@ -1,12 +0,0 @@
{ pkgs ? import <nixpkgs> { } }:
let
tools = pkgs.callPackage ./tools.nix {};
sourceFile = ../vendor/nimbus-build-system/vendor/Nim/koch.nim;
in pkgs.fetchFromGitHub {
owner = "nim-lang";
repo = "atlas";
rev = tools.findKeyValue "^ +AtlasStableCommit = \"([a-f0-9]+)\"$" sourceFile;
# WARNING: Requires manual updates when Nim compiler version changes.
hash = "sha256-G1TZdgbRPSgxXZ3VsBP2+XFCLHXVb3an65MuQx67o/k=";
}


@ -6,7 +6,7 @@ let
in pkgs.fetchFromGitHub {
owner = "nim-lang";
repo = "checksums";
rev = tools.findKeyValue "^ +ChecksumsStableCommit = \"([a-f0-9]+)\"$" sourceFile;
rev = tools.findKeyValue "^ +ChecksumsStableCommit = \"([a-f0-9]+)\".*$" sourceFile;
# WARNING: Requires manual updates when Nim compiler version changes.
hash = "sha256-Bm5iJoT2kAvcTexiLMFBa9oU5gf7d4rWjo3OiN7obWQ=";
}


@ -9,9 +9,8 @@
stableSystems ? [
"x86_64-linux" "aarch64-linux"
],
androidArch,
abidir,
zerokitPkg,
abidir ? null,
zerokitRln,
}:
assert pkgs.lib.assertMsg ((src.submodules or true) == true)
@ -51,7 +50,7 @@ in stdenv.mkDerivation rec {
cmake
which
lsb-release
zerokitPkg
zerokitRln
nim-unwrapped-2_0
fakeGit
fakeCargo
@ -84,27 +83,24 @@ in stdenv.mkDerivation rec {
pushd vendor/nimbus-build-system/vendor/Nim
mkdir dist
cp -r ${callPackage ./nimble.nix {}} dist/nimble
chmod 777 -R dist/nimble
mkdir -p dist/nimble/dist
cp -r ${callPackage ./checksums.nix {}} dist/checksums # need both
cp -r ${callPackage ./checksums.nix {}} dist/nimble/dist/checksums
cp -r ${callPackage ./atlas.nix {}} dist/atlas
chmod 777 -R dist/atlas
mkdir dist/atlas/dist
cp -r ${callPackage ./sat.nix {}} dist/nimble/dist/sat
cp -r ${callPackage ./sat.nix {}} dist/atlas/dist/sat
cp -r ${callPackage ./checksums.nix {}} dist/checksums
cp -r ${callPackage ./csources.nix {}} csources_v2
chmod 777 -R dist/nimble csources_v2
popd
mkdir -p vendor/zerokit/target/${androidArch}/release
cp ${zerokitPkg}/librln.so vendor/zerokit/target/${androidArch}/release/
cp -r ${zerokitRln}/target vendor/zerokit/
find vendor/zerokit/target
# FIXME
cp vendor/zerokit/target/*/release/librln.a librln_v${zerokitRln.version}.a
'';
installPhase = ''
installPhase = if abidir != null then ''
mkdir -p $out/jni
cp -r ./build/android/${abidir}/* $out/jni/
echo '${androidManifest}' > $out/jni/AndroidManifest.xml
cd $out && zip -r libwaku.aar *
'' else ''
mkdir -p $out/bin
cp -r build/* $out/bin
'';
meta = with pkgs.lib; {


@ -6,7 +6,7 @@ let
in pkgs.fetchFromGitHub {
owner = "nim-lang";
repo = "nimble";
rev = tools.findKeyValue "^ +NimbleStableCommit = \"([a-f0-9]+)\".+" sourceFile;
rev = tools.findKeyValue "^ +NimbleStableCommit = \"([a-f0-9]+)\".*$" sourceFile;
# WARNING: Requires manual updates when Nim compiler version changes.
hash = "sha256-MVHf19UbOWk8Zba2scj06PxdYYOJA6OXrVyDQ9Ku6Us=";
}
}


@ -6,7 +6,8 @@ let
in pkgs.fetchFromGitHub {
owner = "nim-lang";
repo = "sat";
rev = tools.findKeyValue "^ +SatStableCommit = \"([a-f0-9]+)\"$" sourceFile;
rev = tools.findKeyValue "^ +SatStableCommit = \"([a-f0-9]+)\".*$" sourceFile;
# WARNING: Requires manual updates when Nim compiler version changes.
# WARNING: Requires manual updates when Nim compiler version changes.
hash = "sha256-JFrrSV+mehG0gP7NiQ8hYthL0cjh44HNbXfuxQNhq7c=";
}
}


@ -0,0 +1,53 @@
# ---------------------------------------------------------
# Windows Setup Makefile
# ---------------------------------------------------------
# Extend PATH (Make preserves environment variables)
export PATH := /c/msys64/usr/bin:/c/msys64/mingw64/bin:/c/msys64/usr/lib:/c/msys64/mingw64/lib:$(PATH)
# Tools required
DEPS = gcc g++ make cmake cargo upx rustc python
# Default target
.PHONY: windows-setup
windows-setup: check-deps update-submodules create-tmp libunwind miniupnpc libnatpmp
@echo "Windows setup completed successfully!"
.PHONY: check-deps
check-deps:
@echo "Checking libwaku build dependencies..."
@for dep in $(DEPS); do \
if ! which $$dep >/dev/null 2>&1; then \
echo "✗ Missing dependency: $$dep"; \
exit 1; \
else \
echo "✓ Found: $$dep"; \
fi; \
done
.PHONY: update-submodules
update-submodules:
@echo "Updating libwaku git submodules..."
git submodule update --init --recursive
.PHONY: create-tmp
create-tmp:
@echo "Creating tmp directory..."
mkdir -p tmp
.PHONY: libunwind
libunwind:
@echo "Building libunwind..."
cd vendor/nim-libbacktrace && make all V=1
.PHONY: miniupnpc
miniupnpc:
@echo "Building miniupnpc..."
cd vendor/nim-nat-traversal/vendor/miniupnp/miniupnpc && \
make -f Makefile.mingw CC=gcc CXX=g++ libminiupnpc.a V=1
.PHONY: libnatpmp
libnatpmp:
@echo "Building libnatpmp..."
cd vendor/nim-nat-traversal/vendor/libnatpmp-upstream && \
make CC="gcc -fPIC -D_WIN32_WINNT=0x0600 -DNATPMP_STATICLIB" libnatpmp.a V=1


@ -1,6 +1,6 @@
log-level = "INFO"
relay = true
#mix = true
mix = true
filter = true
store = false
lightpush = true
@ -18,7 +18,7 @@ num-shards-in-network = 1
shard = [0]
agent-string = "nwaku-mix"
nodekey = "f98e3fba96c32e8d1967d460f1b79457380e1a895f7971cecc8528abe733781a"
#mixkey = "a87db88246ec0eedda347b9b643864bee3d6933eb15ba41e6d58cb678d813258"
mixkey = "a87db88246ec0eedda347b9b643864bee3d6933eb15ba41e6d58cb678d813258"
rendezvous = true
listen-address = "127.0.0.1"
nat = "extip:127.0.0.1"


@ -1 +1,2 @@
../../build/chat2mix --cluster-id=2 --num-shards-in-network=1 --shard=0 --servicenode="/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o" --log-level=TRACE --mixnode="/ip4/127.0.0.1/tcp/60002/p2p/16Uiu2HAmLtKaFaSWDohToWhWUZFLtqzYZGPFuXwKrojFVF6az5UF:9231e86da6432502900a84f867004ce78632ab52cd8e30b1ec322cd795710c2a" --mixnode="/ip4/127.0.0.1/tcp/60003/p2p/16Uiu2HAmTEDHwAziWUSz6ZE23h5vxG2o4Nn7GazhMor4bVuMXTrA:275cd6889e1f29ca48e5b9edb800d1a94f49f13d393a0ecf1a07af753506de6c" --mixnode="/ip4/127.0.0.1/tcp/60004/p2p/16Uiu2HAmPwRKZajXtfb1Qsv45VVfRZgK3ENdfmnqzSrVm3BczF6f:e0ed594a8d506681be075e8e23723478388fb182477f7a469309a25e7076fc18" --mixnode="/ip4/127.0.0.1/tcp/60005/p2p/16Uiu2HAmRhxmCHBYdXt1RibXrjAUNJbduAhzaTHwFCZT4qWnqZAu:8fd7a1a7c19b403d231452a9b1ea40eb1cc76f455d918ef8980e7685f9eeeb1f"
../../build/chat2mix --cluster-id=2 --num-shards-in-network=1 --shard=0 --servicenode="/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o" --log-level=TRACE
#--mixnode="/ip4/127.0.0.1/tcp/60002/p2p/16Uiu2HAmLtKaFaSWDohToWhWUZFLtqzYZGPFuXwKrojFVF6az5UF:9231e86da6432502900a84f867004ce78632ab52cd8e30b1ec322cd795710c2a" --mixnode="/ip4/127.0.0.1/tcp/60003/p2p/16Uiu2HAmTEDHwAziWUSz6ZE23h5vxG2o4Nn7GazhMor4bVuMXTrA:275cd6889e1f29ca48e5b9edb800d1a94f49f13d393a0ecf1a07af753506de6c" --mixnode="/ip4/127.0.0.1/tcp/60004/p2p/16Uiu2HAmPwRKZajXtfb1Qsv45VVfRZgK3ENdfmnqzSrVm3BczF6f:e0ed594a8d506681be075e8e23723478388fb182477f7a469309a25e7076fc18" --mixnode="/ip4/127.0.0.1/tcp/60005/p2p/16Uiu2HAmRhxmCHBYdXt1RibXrjAUNJbduAhzaTHwFCZT4qWnqZAu:8fd7a1a7c19b403d231452a9b1ea40eb1cc76f455d918ef8980e7685f9eeeb1f"


@ -1 +1,2 @@
../../build/chat2mix --cluster-id=2 --num-shards-in-network=1 --shard=0 --servicenode="/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o" --log-level=TRACE --mixnode="/ip4/127.0.0.1/tcp/60002/p2p/16Uiu2HAmLtKaFaSWDohToWhWUZFLtqzYZGPFuXwKrojFVF6az5UF:9231e86da6432502900a84f867004ce78632ab52cd8e30b1ec322cd795710c2a" --mixnode="/ip4/127.0.0.1/tcp/60003/p2p/16Uiu2HAmTEDHwAziWUSz6ZE23h5vxG2o4Nn7GazhMor4bVuMXTrA:275cd6889e1f29ca48e5b9edb800d1a94f49f13d393a0ecf1a07af753506de6c" --mixnode="/ip4/127.0.0.1/tcp/60004/p2p/16Uiu2HAmPwRKZajXtfb1Qsv45VVfRZgK3ENdfmnqzSrVm3BczF6f:e0ed594a8d506681be075e8e23723478388fb182477f7a469309a25e7076fc18" --mixnode="/ip4/127.0.0.1/tcp/60005/p2p/16Uiu2HAmRhxmCHBYdXt1RibXrjAUNJbduAhzaTHwFCZT4qWnqZAu:8fd7a1a7c19b403d231452a9b1ea40eb1cc76f455d918ef8980e7685f9eeeb1f"
../../build/chat2mix --cluster-id=2 --num-shards-in-network=1 --shard=0 --servicenode="/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o" --log-level=TRACE
#--mixnode="/ip4/127.0.0.1/tcp/60002/p2p/16Uiu2HAmLtKaFaSWDohToWhWUZFLtqzYZGPFuXwKrojFVF6az5UF:9231e86da6432502900a84f867004ce78632ab52cd8e30b1ec322cd795710c2a" --mixnode="/ip4/127.0.0.1/tcp/60003/p2p/16Uiu2HAmTEDHwAziWUSz6ZE23h5vxG2o4Nn7GazhMor4bVuMXTrA:275cd6889e1f29ca48e5b9edb800d1a94f49f13d393a0ecf1a07af753506de6c" --mixnode="/ip4/127.0.0.1/tcp/60004/p2p/16Uiu2HAmPwRKZajXtfb1Qsv45VVfRZgK3ENdfmnqzSrVm3BczF6f:e0ed594a8d506681be075e8e23723478388fb182477f7a469309a25e7076fc18" --mixnode="/ip4/127.0.0.1/tcp/60005/p2p/16Uiu2HAmRhxmCHBYdXt1RibXrjAUNJbduAhzaTHwFCZT4qWnqZAu:8fd7a1a7c19b403d231452a9b1ea40eb1cc76f455d918ef8980e7685f9eeeb1f"


@ -9,4 +9,7 @@ import
./test_tokenbucket,
./test_requestratelimiter,
./test_ratelimit_setting,
./test_timed_map
./test_timed_map,
./test_event_broker,
./test_request_broker,
./test_multi_request_broker


@ -0,0 +1,125 @@
import chronos
import std/sequtils
import testutils/unittests
import waku/common/broker/event_broker
EventBroker:
type SampleEvent = object
value*: int
label*: string
EventBroker:
type BinaryEvent = object
flag*: bool
EventBroker:
type RefEvent = ref object
payload*: seq[int]
template waitForListeners() =
waitFor sleepAsync(1.milliseconds)
suite "EventBroker":
test "delivers events to all listeners":
var seen: seq[(int, string)] = @[]
discard SampleEvent.listen(
proc(evt: SampleEvent): Future[void] {.async: (raises: []).} =
seen.add((evt.value, evt.label))
)
discard SampleEvent.listen(
proc(evt: SampleEvent): Future[void] {.async: (raises: []).} =
seen.add((evt.value * 2, evt.label & "!"))
)
let evt = SampleEvent(value: 5, label: "hi")
SampleEvent.emit(evt)
waitForListeners()
check seen.len == 2
check seen.anyIt(it == (5, "hi"))
check seen.anyIt(it == (10, "hi!"))
SampleEvent.dropAllListeners()
test "forget removes a single listener":
var counter = 0
let handleA = SampleEvent.listen(
proc(evt: SampleEvent): Future[void] {.async: (raises: []).} =
inc counter
)
let handleB = SampleEvent.listen(
proc(evt: SampleEvent): Future[void] {.async: (raises: []).} =
inc(counter, 2)
)
SampleEvent.dropListener(handleA.get())
let eventVal = SampleEvent(value: 1, label: "one")
SampleEvent.emit(eventVal)
waitForListeners()
check counter == 2
SampleEvent.dropAllListeners()
test "forgetAll clears every listener":
var triggered = false
let handle1 = SampleEvent.listen(
proc(evt: SampleEvent): Future[void] {.async: (raises: []).} =
triggered = true
)
let handle2 = SampleEvent.listen(
proc(evt: SampleEvent): Future[void] {.async: (raises: []).} =
discard
)
SampleEvent.dropAllListeners()
SampleEvent.emit(42, "noop")
SampleEvent.emit(label = "noop", value = 42)
waitForListeners()
check not triggered
let freshHandle = SampleEvent.listen(
proc(evt: SampleEvent): Future[void] {.async: (raises: []).} =
discard
)
check freshHandle.get().id > 0'u64
SampleEvent.dropListener(freshHandle.get())
test "broker helpers operate via typedesc":
var toggles: seq[bool] = @[]
let handle = BinaryEvent.listen(
proc(evt: BinaryEvent): Future[void] {.async: (raises: []).} =
toggles.add(evt.flag)
)
BinaryEvent(flag: true).emit()
waitForListeners()
let binaryEvent = BinaryEvent(flag: false)
BinaryEvent.emit(binaryEvent)
waitForListeners()
check toggles == @[true, false]
BinaryEvent.dropAllListeners()
test "ref typed event":
var counter: int = 0
let handle = RefEvent.listen(
proc(evt: RefEvent): Future[void] {.async: (raises: []).} =
for n in evt.payload:
counter += n
)
RefEvent(payload: @[1, 2, 3]).emit()
waitForListeners()
RefEvent.emit(payload = @[4, 5, 6])
waitForListeners()
check counter == 21 # 1+2+3 + 4+5+6
RefEvent.dropAllListeners()


@ -0,0 +1,234 @@
{.used.}
import testutils/unittests
import chronos
import std/sequtils
import std/strutils
import waku/common/broker/multi_request_broker
MultiRequestBroker:
type NoArgResponse = object
label*: string
proc signatureFetch*(): Future[Result[NoArgResponse, string]] {.async.}
MultiRequestBroker:
type ArgResponse = object
id*: string
proc signatureFetch*(
suffix: string, numsuffix: int
): Future[Result[ArgResponse, string]] {.async.}
MultiRequestBroker:
type DualResponse = ref object
note*: string
suffix*: string
proc signatureBase*(): Future[Result[DualResponse, string]] {.async.}
proc signatureWithInput*(
suffix: string
): Future[Result[DualResponse, string]] {.async.}
suite "MultiRequestBroker":
test "aggregates zero-argument providers":
discard NoArgResponse.setProvider(
proc(): Future[Result[NoArgResponse, string]] {.async.} =
ok(NoArgResponse(label: "one"))
)
discard NoArgResponse.setProvider(
proc(): Future[Result[NoArgResponse, string]] {.async.} =
discard catch:
await sleepAsync(1.milliseconds)
ok(NoArgResponse(label: "two"))
)
let responses = waitFor NoArgResponse.request()
check responses.get().len == 2
check responses.get().anyIt(it.label == "one")
check responses.get().anyIt(it.label == "two")
NoArgResponse.clearProviders()
test "aggregates argument providers":
discard ArgResponse.setProvider(
proc(suffix: string, num: int): Future[Result[ArgResponse, string]] {.async.} =
ok(ArgResponse(id: suffix & "-a-" & $num))
)
discard ArgResponse.setProvider(
proc(suffix: string, num: int): Future[Result[ArgResponse, string]] {.async.} =
ok(ArgResponse(id: suffix & "-b-" & $num))
)
let keyed = waitFor ArgResponse.request("topic", 1)
check keyed.get().len == 2
check keyed.get().anyIt(it.id == "topic-a-1")
check keyed.get().anyIt(it.id == "topic-b-1")
ArgResponse.clearProviders()
test "clearProviders resets both provider lists":
discard DualResponse.setProvider(
proc(): Future[Result[DualResponse, string]] {.async.} =
ok(DualResponse(note: "base", suffix: ""))
)
discard DualResponse.setProvider(
proc(suffix: string): Future[Result[DualResponse, string]] {.async.} =
ok(DualResponse(note: "base" & suffix, suffix: suffix))
)
let noArgs = waitFor DualResponse.request()
check noArgs.get().len == 1
let param = waitFor DualResponse.request("-extra")
check param.get().len == 1
check param.get()[0].suffix == "-extra"
DualResponse.clearProviders()
let emptyNoArgs = waitFor DualResponse.request()
check emptyNoArgs.get().len == 0
let emptyWithArgs = waitFor DualResponse.request("-extra")
check emptyWithArgs.get().len == 0
test "request returns empty seq when no providers registered":
let empty = waitFor NoArgResponse.request()
check empty.get().len == 0
test "failed providers will fail the request":
NoArgResponse.clearProviders()
discard NoArgResponse.setProvider(
proc(): Future[Result[NoArgResponse, string]] {.async.} =
err("boom")
)
discard NoArgResponse.setProvider(
proc(): Future[Result[NoArgResponse, string]] {.async.} =
ok(NoArgResponse(label: "survivor"))
)
let filtered = waitFor NoArgResponse.request()
check filtered.isErr()
NoArgResponse.clearProviders()
test "deduplicates identical zero-argument providers":
NoArgResponse.clearProviders()
var invocations = 0
let sharedHandler = proc(): Future[Result[NoArgResponse, string]] {.async.} =
inc invocations
ok(NoArgResponse(label: "dup"))
let first = NoArgResponse.setProvider(sharedHandler)
let second = NoArgResponse.setProvider(sharedHandler)
check first.get().id == second.get().id
check first.get().kind == second.get().kind
let dupResponses = waitFor NoArgResponse.request()
check dupResponses.get().len == 1
check invocations == 1
NoArgResponse.clearProviders()
test "removeProvider deletes registered handlers":
var removedCalled = false
var keptCalled = false
let removable = NoArgResponse.setProvider(
proc(): Future[Result[NoArgResponse, string]] {.async.} =
removedCalled = true
ok(NoArgResponse(label: "removed"))
)
discard NoArgResponse.setProvider(
proc(): Future[Result[NoArgResponse, string]] {.async.} =
keptCalled = true
ok(NoArgResponse(label: "kept"))
)
NoArgResponse.removeProvider(removable.get())
let afterRemoval = (waitFor NoArgResponse.request()).valueOr:
assert false, "request failed"
@[]
check afterRemoval.len == 1
check afterRemoval[0].label == "kept"
check not removedCalled
check keptCalled
NoArgResponse.clearProviders()
test "removeProvider works for argument signatures":
var invoked: seq[string] = @[]
discard ArgResponse.setProvider(
proc(suffix: string, num: int): Future[Result[ArgResponse, string]] {.async.} =
invoked.add("first" & suffix)
ok(ArgResponse(id: suffix & "-one-" & $num))
)
let handle = ArgResponse.setProvider(
proc(suffix: string, num: int): Future[Result[ArgResponse, string]] {.async.} =
invoked.add("second" & suffix)
ok(ArgResponse(id: suffix & "-two-" & $num))
)
ArgResponse.removeProvider(handle.get())
let single = (waitFor ArgResponse.request("topic", 1)).valueOr:
assert false, "request failed"
@[]
check single.len == 1
check single[0].id == "topic-one-1"
check invoked == @["firsttopic"]
ArgResponse.clearProviders()
test "catches exception from providers and report error":
let firstHandler = NoArgResponse.setProvider(
proc(): Future[Result[NoArgResponse, string]] {.async.} =
raise newException(ValueError, "first handler raised")
ok(NoArgResponse(label: "any"))
)
discard NoArgResponse.setProvider(
proc(): Future[Result[NoArgResponse, string]] {.async.} =
ok(NoArgResponse(label: "just ok"))
)
let afterException = waitFor NoArgResponse.request()
check afterException.isErr()
check afterException.error().contains("first handler raised")
NoArgResponse.clearProviders()
test "ref providers returning nil fail request":
DualResponse.clearProviders()
discard DualResponse.setProvider(
proc(): Future[Result[DualResponse, string]] {.async.} =
let nilResponse: DualResponse = nil
ok(nilResponse)
)
let zeroArg = waitFor DualResponse.request()
check zeroArg.isErr()
DualResponse.clearProviders()
discard DualResponse.setProvider(
proc(suffix: string): Future[Result[DualResponse, string]] {.async.} =
let nilResponse: DualResponse = nil
ok(nilResponse)
)
let withInput = waitFor DualResponse.request("-extra")
check withInput.isErr()
DualResponse.clearProviders()


@ -0,0 +1,198 @@
{.used.}
import testutils/unittests
import chronos
import std/strutils
import waku/common/broker/request_broker
RequestBroker:
type SimpleResponse = object
value*: string
proc signatureFetch*(): Future[Result[SimpleResponse, string]] {.async.}
RequestBroker:
type KeyedResponse = object
key*: string
payload*: string
proc signatureFetchWithKey*(
key: string, subKey: int
): Future[Result[KeyedResponse, string]] {.async.}
RequestBroker:
type DualResponse = object
note*: string
count*: int
proc signatureNoInput*(): Future[Result[DualResponse, string]] {.async.}
proc signatureWithInput*(
suffix: string
): Future[Result[DualResponse, string]] {.async.}
RequestBroker:
type ImplicitResponse = ref object
note*: string
suite "RequestBroker macro":
test "serves zero-argument providers":
check SimpleResponse
.setProvider(
proc(): Future[Result[SimpleResponse, string]] {.async.} =
ok(SimpleResponse(value: "hi"))
)
.isOk()
let res = waitFor SimpleResponse.request()
check res.isOk()
check res.value.value == "hi"
SimpleResponse.clearProvider()
test "zero-argument request errors when unset":
let res = waitFor SimpleResponse.request()
check res.isErr
check res.error.contains("no zero-arg provider")
test "serves input-based providers":
var seen: seq[string] = @[]
check KeyedResponse
.setProvider(
proc(key: string, subKey: int): Future[Result[KeyedResponse, string]] {.async.} =
seen.add(key)
ok(KeyedResponse(key: key, payload: key & "-payload+" & $subKey))
)
.isOk()
let res = waitFor KeyedResponse.request("topic", 1)
check res.isOk()
check res.value.key == "topic"
check res.value.payload == "topic-payload+1"
check seen == @["topic"]
KeyedResponse.clearProvider()
test "catches provider exception":
check KeyedResponse
.setProvider(
proc(key: string, subKey: int): Future[Result[KeyedResponse, string]] {.async.} =
raise newException(ValueError, "simulated failure")
ok(KeyedResponse(key: key, payload: ""))
)
.isOk()
let res = waitFor KeyedResponse.request("neglected", 11)
check res.isErr()
check res.error.contains("simulated failure")
KeyedResponse.clearProvider()
test "input request errors when unset":
let res = waitFor KeyedResponse.request("foo", 2)
check res.isErr
check res.error.contains("input signature")
test "supports both provider types simultaneously":
check DualResponse
.setProvider(
proc(): Future[Result[DualResponse, string]] {.async.} =
ok(DualResponse(note: "base", count: 1))
)
.isOk()
check DualResponse
.setProvider(
proc(suffix: string): Future[Result[DualResponse, string]] {.async.} =
ok(DualResponse(note: "base" & suffix, count: suffix.len))
)
.isOk()
let noInput = waitFor DualResponse.request()
check noInput.isOk
check noInput.value.note == "base"
let withInput = waitFor DualResponse.request("-extra")
check withInput.isOk
check withInput.value.note == "base-extra"
check withInput.value.count == 6
DualResponse.clearProvider()
test "clearProvider resets both entries":
check DualResponse
.setProvider(
proc(): Future[Result[DualResponse, string]] {.async.} =
ok(DualResponse(note: "temp", count: 0))
)
.isOk()
DualResponse.clearProvider()
let res = waitFor DualResponse.request()
check res.isErr
test "implicit zero-argument provider works by default":
check ImplicitResponse
.setProvider(
proc(): Future[Result[ImplicitResponse, string]] {.async.} =
ok(ImplicitResponse(note: "auto"))
)
.isOk()
let res = waitFor ImplicitResponse.request()
check res.isOk
ImplicitResponse.clearProvider()
check res.value.note == "auto"
test "implicit zero-argument request errors when unset":
let res = waitFor ImplicitResponse.request()
check res.isErr
check res.error.contains("no zero-arg provider")
test "no provider override":
check DualResponse
.setProvider(
proc(): Future[Result[DualResponse, string]] {.async.} =
ok(DualResponse(note: "base", count: 1))
)
.isOk()
check DualResponse
.setProvider(
proc(suffix: string): Future[Result[DualResponse, string]] {.async.} =
ok(DualResponse(note: "base" & suffix, count: suffix.len))
)
.isOk()
let overrideProc = proc(): Future[Result[DualResponse, string]] {.async.} =
ok(DualResponse(note: "something else", count: 1))
check DualResponse.setProvider(overrideProc).isErr()
let noInput = waitFor DualResponse.request()
check noInput.isOk
check noInput.value.note == "base"
let stillResponse = waitFor DualResponse.request(" still works")
check stillResponse.isOk()
check stillResponse.value.note.contains("base still works")
DualResponse.clearProvider()
let noResponse = waitFor DualResponse.request()
check noResponse.isErr()
check noResponse.error.contains("no zero-arg provider")
let noResponseArg = waitFor DualResponse.request("Should not work")
check noResponseArg.isErr()
check noResponseArg.error.contains("no provider")
check DualResponse.setProvider(overrideProc).isOk()
let nowSuccWithOverride = waitFor DualResponse.request()
check nowSuccWithOverride.isOk
check nowSuccWithOverride.value.note == "something else"
check nowSuccWithOverride.value.count == 1
DualResponse.clearProvider()


@ -135,8 +135,8 @@ suite "RLN Proofs as a Lightpush Service":
server = newTestWakuNode(serverKey, parseIpAddress("0.0.0.0"), Port(0))
client = newTestWakuNode(clientKey, parseIpAddress("0.0.0.0"), Port(0))
anvilProc = runAnvil()
manager = waitFor setupOnchainGroupManager()
anvilProc = runAnvil(stateFile = some(DEFAULT_ANVIL_STATE_PATH))
manager = waitFor setupOnchainGroupManager(deployContracts = false)
# mount rln-relay
let wakuRlnConfig = getWakuRlnConfig(manager = manager, index = MembershipIndex(1))


@ -135,8 +135,8 @@ suite "RLN Proofs as a Lightpush Service":
server = newTestWakuNode(serverKey, parseIpAddress("0.0.0.0"), Port(0))
client = newTestWakuNode(clientKey, parseIpAddress("0.0.0.0"), Port(0))
anvilProc = runAnvil()
manager = waitFor setupOnchainGroupManager()
anvilProc = runAnvil(stateFile = some(DEFAULT_ANVIL_STATE_PATH))
manager = waitFor setupOnchainGroupManager(deployContracts = false)
# mount rln-relay
let wakuRlnConfig = getWakuRlnConfig(manager = manager, index = MembershipIndex(1))


@ -1,12 +1,20 @@
{.used.}
import std/options, chronos, testutils/unittests, libp2p/builders
import
std/options,
chronos,
testutils/unittests,
libp2p/builders,
libp2p/protocols/rendezvous
import
waku/waku_core/peers,
waku/waku_core/codecs,
waku/node/waku_node,
waku/node/peer_manager/peer_manager,
waku/waku_rendezvous/protocol,
waku/waku_rendezvous/common,
waku/waku_rendezvous/waku_peer_record,
./testlib/[wakucore, wakunode]
procSuite "Waku Rendezvous":
@ -50,18 +58,26 @@ procSuite "Waku Rendezvous":
node2.peerManager.addPeer(peerInfo3)
node3.peerManager.addPeer(peerInfo2)
let namespace = "test/name/space"
let res = await node1.wakuRendezvous.batchAdvertise(
namespace, 60.seconds, @[peerInfo2.peerId]
)
let res = await node1.wakuRendezvous.advertiseAll()
assert res.isOk(), $res.error
# Rendezvous Request API requires dialing first
let connOpt =
await node3.peerManager.dialPeer(peerInfo2.peerId, WakuRendezVousCodec)
require:
connOpt.isSome
let response =
await node3.wakuRendezvous.batchRequest(namespace, 1, @[peerInfo2.peerId])
assert response.isOk(), $response.error
let records = response.get()
var records: seq[WakuPeerRecord]
try:
records = await rendezvous.request[WakuPeerRecord](
node3.wakuRendezvous,
Opt.some(computeMixNamespace(clusterId)),
Opt.some(1),
Opt.some(@[peerInfo2.peerId]),
)
except CatchableError as e:
assert false, "Request failed with exception: " & e.msg
check:
records.len == 1
records[0].peerId == peerInfo1.peerId
#records[0].mixPubKey == $node1.wakuMix.pubKey


@ -426,7 +426,6 @@ suite "Waku Discovery v5":
confBuilder.withNodeKey(libp2p_keys.PrivateKey.random(Secp256k1, myRng[])[])
confBuilder.discv5Conf.withEnabled(true)
confBuilder.discv5Conf.withUdpPort(9000.Port)
let conf = confBuilder.build().valueOr:
raiseAssert error
@ -468,6 +467,9 @@ suite "Waku Discovery v5":
# leave some time for discv5 to act
await sleepAsync(chronos.seconds(10))
# Connect peers via peer manager to ensure identify happens
discard await waku0.node.peerManager.connectPeer(waku1.node.switch.peerInfo)
var r = waku0.node.peerManager.selectPeer(WakuPeerExchangeCodec)
assert r.isSome(), "could not retrieve peer mounting WakuPeerExchangeCodec"
@ -480,7 +482,7 @@ suite "Waku Discovery v5":
r = waku2.node.peerManager.selectPeer(WakuPeerExchangeCodec)
assert r.isSome(), "could not retrieve peer mounting WakuPeerExchangeCodec"
r = waku2.node.peerManager.selectPeer(RendezVousCodec)
r = waku2.node.peerManager.selectPeer(WakuRendezVousCodec)
assert r.isSome(), "could not retrieve peer mounting RendezVousCodec"
asyncTest "Discv5 bootstrap nodes should be added to the peer store":


@ -37,7 +37,7 @@ suite "Rate limited push service":
handlerFuture = newFuture[(string, WakuMessage)]()
let requestRes =
await client.publish(some(DefaultPubsubTopic), message, peer = serverPeerId)
await client.publish(some(DefaultPubsubTopic), message, serverPeerId)
check await handlerFuture.withTimeout(50.millis)
@ -66,7 +66,7 @@ suite "Rate limited push service":
var endTime = Moment.now()
var elapsed: Duration = (endTime - startTime)
await sleepAsync(tokenPeriod - elapsed + firstWaitExtend)
firstWaitEXtend = 100.millis
firstWaitExtend = 100.millis
## Cleanup
await allFutures(clientSwitch.stop(), serverSwitch.stop())
@ -99,7 +99,7 @@ suite "Rate limited push service":
let message = fakeWakuMessage()
handlerFuture = newFuture[(string, WakuMessage)]()
let requestRes =
await client.publish(some(DefaultPubsubTopic), message, peer = serverPeerId)
await client.publish(some(DefaultPubsubTopic), message, serverPeerId)
discard await handlerFuture.withTimeout(10.millis)
check:
@ -114,7 +114,7 @@ suite "Rate limited push service":
let message = fakeWakuMessage()
handlerFuture = newFuture[(string, WakuMessage)]()
let requestRes =
await client.publish(some(DefaultPubsubTopic), message, peer = serverPeerId)
await client.publish(some(DefaultPubsubTopic), message, serverPeerId)
discard await handlerFuture.withTimeout(10.millis)
check:


@ -0,0 +1,29 @@
{.used.}
{.push raises: [].}
import std/[options, os], results, testutils/unittests, chronos, web3
import
waku/[
waku_rln_relay,
waku_rln_relay/conversion_utils,
waku_rln_relay/group_manager/on_chain/group_manager,
],
./utils_onchain
suite "Token and RLN Contract Deployment":
test "anvil should dump state to file on exit":
# git will ignore this file; if the contract has been updated and the state file needs to be regenerated, this file can be renamed to replace the one in the repo (tests/waku_rln_relay/anvil_state/state-deployed-contracts-mint-and-approved.json)
let testStateFile = some("tests/waku_rln_relay/anvil_state/anvil_state.ignore.json")
let anvilProc = runAnvil(stateFile = testStateFile, dumpStateOnExit = true)
let manager = waitFor setupOnchainGroupManager(deployContracts = true)
stopAnvil(anvilProc)
check:
fileExists(testStateFile.get())
# The test should still pass even if the compression fails
compressGzipFile(testStateFile.get(), testStateFile.get() & ".gz").isOkOr:
error "Failed to compress state file", error = error


@ -33,8 +33,8 @@ suite "Onchain group manager":
var manager {.threadVar.}: OnchainGroupManager
setup:
anvilProc = runAnvil()
manager = waitFor setupOnchainGroupManager()
anvilProc = runAnvil(stateFile = some(DEFAULT_ANVIL_STATE_PATH))
manager = waitFor setupOnchainGroupManager(deployContracts = false)
teardown:
stopAnvil(anvilProc)


@ -27,8 +27,8 @@ suite "Waku rln relay":
var manager {.threadVar.}: OnchainGroupManager
setup:
anvilProc = runAnvil()
manager = waitFor setupOnchainGroupManager()
anvilProc = runAnvil(stateFile = some(DEFAULT_ANVIL_STATE_PATH))
manager = waitFor setupOnchainGroupManager(deployContracts = false)
teardown:
stopAnvil(anvilProc)


@ -30,8 +30,8 @@ procSuite "WakuNode - RLN relay":
var manager {.threadVar.}: OnchainGroupManager
setup:
anvilProc = runAnvil()
manager = waitFor setupOnchainGroupManager()
anvilProc = runAnvil(stateFile = some(DEFAULT_ANVIL_STATE_PATH))
manager = waitFor setupOnchainGroupManager(deployContracts = false)
teardown:
stopAnvil(anvilProc)


@ -3,7 +3,7 @@
{.push raises: [].}
import
std/[options, os, osproc, deques, streams, strutils, tempfiles, strformat],
std/[options, os, osproc, streams, strutils, strformat],
results,
stew/byteutils,
testutils/unittests,
@ -14,7 +14,6 @@ import
web3/conversions,
web3/eth_api_types,
json_rpc/rpcclient,
json,
libp2p/crypto/crypto,
eth/keys,
results
@ -24,25 +23,19 @@ import
waku_rln_relay,
waku_rln_relay/protocol_types,
waku_rln_relay/constants,
waku_rln_relay/contract,
waku_rln_relay/rln,
],
../testlib/common,
./utils
../testlib/common
const CHAIN_ID* = 1234'u256
template skip0xPrefix(hexStr: string): int =
## Returns the index of the first meaningful char in `hexStr` by skipping
## "0x" prefix
if hexStr.len > 1 and hexStr[0] == '0' and hexStr[1] in {'x', 'X'}: 2 else: 0
func strip0xPrefix(s: string): string =
let prefixLen = skip0xPrefix(s)
if prefixLen != 0:
s[prefixLen .. ^1]
else:
s
# Path to the file which Anvil loads at startup to initialize the chain with pre-deployed contracts and an account funded with tokens and approved for spending
const DEFAULT_ANVIL_STATE_PATH* =
"tests/waku_rln_relay/anvil_state/state-deployed-contracts-mint-and-approved.json.gz"
# The contract address of the TestStableToken used for the RLN Membership registration fee
const TOKEN_ADDRESS* = "0x5FbDB2315678afecb367f032d93F642f64180aa3"
# The contract address used to interact with the WakuRLNV2 contract via the proxy
const WAKU_RLNV2_PROXY_ADDRESS* = "0x5fc8d32690cc91d4c39d9d3abcbd16989f875707"
proc generateCredentials*(): IdentityCredential =
let credRes = membershipKeyGen()
@ -82,6 +75,10 @@ proc getForgePath(): string =
forgePath = joinPath(forgePath, ".foundry/bin/forge")
return $forgePath
template execForge(cmd: string): tuple[output: string, exitCode: int] =
# unset env vars that affect e.g. "forge script" before running forge
execCmdEx("unset ETH_FROM ETH_PASSWORD && " & cmd)
contract(ERC20Token):
proc allowance(owner: Address, spender: Address): UInt256 {.view.}
proc balanceOf(account: Address): UInt256 {.view.}
@ -102,7 +99,7 @@ proc sendMintCall(
recipientAddress: Address,
amountTokens: UInt256,
recipientBalanceBeforeExpectedTokens: Option[UInt256] = none(UInt256),
): Future[TxHash] {.async.} =
): Future[void] {.async.} =
let doBalanceAssert = recipientBalanceBeforeExpectedTokens.isSome()
if doBalanceAssert:
@ -138,7 +135,7 @@ proc sendMintCall(
tx.data = Opt.some(byteutils.hexToSeqByte(mintCallData))
trace "Sending mint call"
let txHash = await web3.send(tx)
discard await web3.send(tx)
let balanceOfSelector = "0x70a08231"
let balanceCallData = balanceOfSelector & paddedAddress
@ -153,8 +150,6 @@ proc sendMintCall(
assert balanceAfterMint == balanceAfterExpectedTokens,
fmt"Balance is {balanceAfterMint} after transfer but expected {balanceAfterExpectedTokens}"
return txHash
# Check how many tokens a spender (the RLN contract) is allowed to spend on behalf of the owner (account which wishes to register a membership)
proc checkTokenAllowance(
web3: Web3, tokenAddress: Address, owner: Address, spender: Address
@ -225,11 +220,14 @@ proc deployTestToken*(
# Deploy TestToken contract
let forgeCmdTestToken =
fmt"""cd {submodulePath} && {forgePath} script test/TestToken.sol --broadcast -vvv --rpc-url http://localhost:8540 --tc TestTokenFactory --private-key {pk} && rm -rf broadcast/*/*/run-1*.json && rm -rf cache/*/*/run-1*.json"""
let (outputDeployTestToken, exitCodeDeployTestToken) = execCmdEx(forgeCmdTestToken)
let (outputDeployTestToken, exitCodeDeployTestToken) = execForge(forgeCmdTestToken)
trace "Executed forge command to deploy TestToken contract",
output = outputDeployTestToken
if exitCodeDeployTestToken != 0:
return error("Forge command to deploy TestToken contract failed")
error "Forge command to deploy TestToken contract failed",
error = outputDeployTestToken
return
err("Forge command to deploy TestToken contract failed: " & outputDeployTestToken)
# Parse the command output to find contract address
let testTokenAddress = getContractAddressFromDeployScriptOutput(outputDeployTestToken).valueOr:
@ -351,7 +349,7 @@ proc executeForgeContractDeployScripts*(
let forgeCmdPriceCalculator =
fmt"""cd {submodulePath} && {forgePath} script script/Deploy.s.sol --broadcast -vvvv --rpc-url http://localhost:8540 --tc DeployPriceCalculator --private-key {privateKey} && rm -rf broadcast/*/*/run-1*.json && rm -rf cache/*/*/run-1*.json"""
let (outputDeployPriceCalculator, exitCodeDeployPriceCalculator) =
execCmdEx(forgeCmdPriceCalculator)
execForge(forgeCmdPriceCalculator)
trace "Executed forge command to deploy LinearPriceCalculator contract",
output = outputDeployPriceCalculator
if exitCodeDeployPriceCalculator != 0:
@ -368,7 +366,7 @@ proc executeForgeContractDeployScripts*(
let forgeCmdWakuRln =
fmt"""cd {submodulePath} && {forgePath} script script/Deploy.s.sol --broadcast -vvvv --rpc-url http://localhost:8540 --tc DeployWakuRlnV2 --private-key {privateKey} && rm -rf broadcast/*/*/run-1*.json && rm -rf cache/*/*/run-1*.json"""
let (outputDeployWakuRln, exitCodeDeployWakuRln) = execCmdEx(forgeCmdWakuRln)
let (outputDeployWakuRln, exitCodeDeployWakuRln) = execForge(forgeCmdWakuRln)
trace "Executed forge command to deploy WakuRlnV2 contract",
output = outputDeployWakuRln
if exitCodeDeployWakuRln != 0:
@ -388,7 +386,7 @@ proc executeForgeContractDeployScripts*(
# Deploy Proxy contract
let forgeCmdProxy =
fmt"""cd {submodulePath} && {forgePath} script script/Deploy.s.sol --broadcast -vvvv --rpc-url http://localhost:8540 --tc DeployProxy --private-key {privateKey} && rm -rf broadcast/*/*/run-1*.json && rm -rf cache/*/*/run-1*.json"""
let (outputDeployProxy, exitCodeDeployProxy) = execCmdEx(forgeCmdProxy)
let (outputDeployProxy, exitCodeDeployProxy) = execForge(forgeCmdProxy)
trace "Executed forge command to deploy proxy contract", output = outputDeployProxy
if exitCodeDeployProxy != 0:
error "Forge command to deploy Proxy failed", error = outputDeployProxy
@ -480,20 +478,64 @@ proc getAnvilPath*(): string =
anvilPath = joinPath(anvilPath, ".foundry/bin/anvil")
return $anvilPath
proc decompressGzipFile*(
compressedPath: string, targetPath: string
): Result[void, string] =
## Decompress a gzipped file using the gunzip command-line utility
let cmd = fmt"gunzip -c {compressedPath} > {targetPath}"
try:
let (output, exitCode) = execCmdEx(cmd)
if exitCode != 0:
return err(
"Failed to decompress '" & compressedPath & "' to '" & targetPath & "': " &
output
)
except OSError as e:
return err("Failed to execute gunzip command: " & e.msg)
except IOError as e:
return err("Failed to execute gunzip command: " & e.msg)
ok()
proc compressGzipFile*(sourcePath: string, targetPath: string): Result[void, string] =
## Compress a file with gzip using the gzip command-line utility
let cmd = fmt"gzip -c {sourcePath} > {targetPath}"
try:
let (output, exitCode) = execCmdEx(cmd)
if exitCode != 0:
return err(
"Failed to compress '" & sourcePath & "' to '" & targetPath & "': " & output
)
except OSError as e:
return err("Failed to execute gzip command: " & e.msg)
except IOError as e:
return err("Failed to execute gzip command: " & e.msg)
ok()
# Runs Anvil daemon
proc runAnvil*(port: int = 8540, chainId: string = "1234"): Process =
proc runAnvil*(
port: int = 8540,
chainId: string = "1234",
stateFile: Option[string] = none(string),
dumpStateOnExit: bool = false,
): Process =
# Passed options are
# --port Port to listen on.
# --gas-limit Sets the block gas limit in WEI.
# --balance The default account balance, specified in ether.
# --chain-id Chain ID of the network.
# --load-state Initialize the chain from a previously saved state snapshot (read-only)
# --dump-state Dump the state on exit to the given file (write-only)
# See anvil documentation https://book.getfoundry.sh/reference/anvil/ for more details
try:
let anvilPath = getAnvilPath()
info "Anvil path", anvilPath
let runAnvil = startProcess(
anvilPath,
args = [
var args =
@[
"--port",
$port,
"--gas-limit",
@ -502,9 +544,54 @@ proc runAnvil*(port: int = 8540, chainId: string = "1234"): Process =
"1000000000",
"--chain-id",
$chainId,
],
options = {poUsePath},
)
]
# Add state file argument if provided
if stateFile.isSome():
var statePath = stateFile.get()
info "State file parameter provided",
statePath = statePath,
dumpStateOnExit = dumpStateOnExit,
absolutePath = absolutePath(statePath)
# Check if the file is gzip compressed and handle decompression
if statePath.endsWith(".gz"):
let decompressedPath = statePath[0 .. ^4] # Remove .gz extension
debug "Gzip compressed state file detected",
compressedPath = statePath, decompressedPath = decompressedPath
if not fileExists(decompressedPath):
decompressGzipFile(statePath, decompressedPath).isOkOr:
error "Failed to decompress state file", error = error
return nil
statePath = decompressedPath
if dumpStateOnExit:
# Ensure the directory exists
let stateDir = parentDir(statePath)
if not dirExists(stateDir):
createDir(stateDir)
# Fresh deployment: start clean and dump state on exit
args.add("--dump-state")
args.add(statePath)
debug "Anvil configured to dump state on exit", path = statePath
else:
# Using cache: only load state, don't overwrite it (preserves clean cached state)
if fileExists(statePath):
args.add("--load-state")
args.add(statePath)
debug "Anvil configured to load state file (read-only)", path = statePath
else:
warn "State file does not exist, anvil will start fresh",
path = statePath, absolutePath = absolutePath(statePath)
else:
info "No state file provided, anvil will start fresh without state persistence"
info "Starting anvil with arguments", args = args.join(" ")
let runAnvil =
startProcess(anvilPath, args = args, options = {poUsePath, poStdErrToStdOut})
let anvilPID = runAnvil.processID
# We read stdout from Anvil to see when daemon is ready
@ -516,7 +603,13 @@ proc runAnvil*(port: int = 8540, chainId: string = "1234"): Process =
anvilStartLog.add(cmdline)
if cmdline.contains("Listening on 127.0.0.1:" & $port):
break
else:
error "Anvil daemon exited (closed output)",
pid = anvilPID, startLog = anvilStartLog
return
except Exception, CatchableError:
warn "Anvil daemon stdout reading error; assuming it started OK",
pid = anvilPID, startLog = anvilStartLog, err = getCurrentExceptionMsg()
break
info "Anvil daemon is running and ready", pid = anvilPID, startLog = anvilStartLog
return runAnvil
@ -536,7 +629,14 @@ proc stopAnvil*(runAnvil: Process) {.used.} =
# Send termination signals
when not defined(windows):
discard execCmdEx(fmt"kill -TERM {anvilPID}")
discard execCmdEx(fmt"kill -9 {anvilPID}")
# Wait for graceful shutdown to allow state dumping
sleep(200)
# Only force kill if process is still running
let checkResult = execCmdEx(fmt"kill -0 {anvilPID} 2>/dev/null")
if checkResult.exitCode == 0:
info "Anvil process still running after TERM signal, sending KILL",
anvilPID = anvilPID
discard execCmdEx(fmt"kill -9 {anvilPID}")
else:
discard execCmdEx(fmt"taskkill /F /PID {anvilPID}")
@ -547,52 +647,100 @@ proc stopAnvil*(runAnvil: Process) {.used.} =
info "Error stopping Anvil daemon", anvilPID = anvilPID, error = e.msg
proc setupOnchainGroupManager*(
ethClientUrl: string = EthClient, amountEth: UInt256 = 10.u256
ethClientUrl: string = EthClient,
amountEth: UInt256 = 10.u256,
deployContracts: bool = true,
): Future[OnchainGroupManager] {.async.} =
## Setup an onchain group manager for testing
## If deployContracts is false, it will assume that the Anvil testnet already has the required contracts deployed, which significantly speeds up test runs.
## To run Anvil with a cached state file containing pre-deployed contracts, see the runAnvil documentation.
##
## To generate/update the cached state file:
## 1. Call runAnvil with stateFile and dumpStateOnExit=true
## 2. Run setupOnchainGroupManager with deployContracts=true to deploy contracts
## 3. The state will be saved to the specified file when anvil exits
## 4. Commit this file to git
##
## To use cached state:
## 1. Call runAnvil with stateFile and dumpStateOnExit=false
## 2. Anvil loads state in read-only mode (won't overwrite the cached file)
## 3. Call setupOnchainGroupManager with deployContracts=false
## 4. Tests run fast using pre-deployed contracts
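##
## Illustrative sketch of both workflows, using only procs and constants
## defined in this module (the calls mirror the test suites in this change):
## ```nim
## # 1) Regenerate a state file (fresh deployment, dumped on exit):
## let dumpProc = runAnvil(
##   stateFile = some("tests/waku_rln_relay/anvil_state/anvil_state.ignore.json"),
##   dumpStateOnExit = true,
## )
## discard waitFor setupOnchainGroupManager(deployContracts = true)
## stopAnvil(dumpProc)
##
## # 2) Reuse the cached state shipped in the repo (fast path):
## let anvilProc = runAnvil(stateFile = some(DEFAULT_ANVIL_STATE_PATH))
## let manager = waitFor setupOnchainGroupManager(deployContracts = false)
## stopAnvil(anvilProc)
## ```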
let rlnInstanceRes = createRlnInstance()
check:
rlnInstanceRes.isOk()
let rlnInstance = rlnInstanceRes.get()
# connect to the eth client
let web3 = await newWeb3(ethClientUrl)
let accounts = await web3.provider.eth_accounts()
web3.defaultAccount = accounts[1]
let (privateKey, acc) = createEthAccount(web3)
var privateKey: keys.PrivateKey
var acc: Address
var testTokenAddress: Address
var contractAddress: Address
# we just need to fund the default account
# the send procedure returns a tx hash that we don't use, hence discard
discard await sendEthTransfer(
web3, web3.defaultAccount, acc, ethToWei(1000.u256), some(0.u256)
)
if not deployContracts:
info "Using contract addresses from constants"
let testTokenAddress = (await deployTestToken(privateKey, acc, web3)).valueOr:
assert false, "Failed to deploy test token contract: " & $error
return
testTokenAddress = Address(hexToByteArray[20](TOKEN_ADDRESS))
contractAddress = Address(hexToByteArray[20](WAKU_RLNV2_PROXY_ADDRESS))
# mint the token from the generated account
discard await sendMintCall(
web3, web3.defaultAccount, testTokenAddress, acc, ethToWei(1000.u256), some(0.u256)
)
(privateKey, acc) = createEthAccount(web3)
let contractAddress = (await executeForgeContractDeployScripts(privateKey, acc, web3)).valueOr:
assert false, "Failed to deploy RLN contract: " & $error
return
# Fund the test account
discard await sendEthTransfer(web3, web3.defaultAccount, acc, ethToWei(1000.u256))
# If the generated account wishes to register a membership, it needs to approve the contract to spend its tokens
let tokenApprovalResult = await approveTokenAllowanceAndVerify(
web3,
acc,
privateKey,
testTokenAddress,
contractAddress,
ethToWei(200.u256),
some(0.u256),
)
# Mint tokens to the test account
await sendMintCall(
web3, web3.defaultAccount, testTokenAddress, acc, ethToWei(1000.u256)
)
assert tokenApprovalResult.isOk, tokenApprovalResult.error()
# Approve the contract to spend tokens
let tokenApprovalResult = await approveTokenAllowanceAndVerify(
web3, acc, privateKey, testTokenAddress, contractAddress, ethToWei(200.u256)
)
assert tokenApprovalResult.isOk(), tokenApprovalResult.error
else:
info "Performing Token and RLN contracts deployment"
(privateKey, acc) = createEthAccount(web3)
# fund the default account
discard await sendEthTransfer(
web3, web3.defaultAccount, acc, ethToWei(1000.u256), some(0.u256)
)
testTokenAddress = (await deployTestToken(privateKey, acc, web3)).valueOr:
assert false, "Failed to deploy test token contract: " & $error
return
# mint the token from the generated account
await sendMintCall(
web3,
web3.defaultAccount,
testTokenAddress,
acc,
ethToWei(1000.u256),
some(0.u256),
)
contractAddress = (await executeForgeContractDeployScripts(privateKey, acc, web3)).valueOr:
assert false, "Failed to deploy RLN contract: " & $error
return
# If the generated account wishes to register a membership, it needs to approve the contract to spend its tokens
let tokenApprovalResult = await approveTokenAllowanceAndVerify(
web3,
acc,
privateKey,
testTokenAddress,
contractAddress,
ethToWei(200.u256),
some(0.u256),
)
assert tokenApprovalResult.isOk(), tokenApprovalResult.error
let manager = OnchainGroupManager(
ethClientUrls: @[ethClientUrl],


@ -65,7 +65,7 @@ suite "Waku v2 Rest API - Admin":
): Future[void] {.async, gcsafe.} =
await sleepAsync(0.milliseconds)
let shard = RelayShard(clusterId: clusterId, shardId: 0)
let shard = RelayShard(clusterId: clusterId, shardId: 5)
node1.subscribe((kind: PubsubSub, topic: $shard), simpleHandler).isOkOr:
assert false, "Failed to subscribe to topic: " & $error
node2.subscribe((kind: PubsubSub, topic: $shard), simpleHandler).isOkOr:
@ -212,6 +212,18 @@ suite "Waku v2 Rest API - Admin":
let conn2 = await node1.peerManager.connectPeer(peerInfo2)
let conn3 = await node1.peerManager.connectPeer(peerInfo3)
var count = 0
while count < 20:
## Wait ~1s at most for the peer store to update shard info
let getRes = await client.getPeers()
if getRes.data.allIt(it.shards == @[5.uint16]):
break
count.inc()
await sleepAsync(50.milliseconds)
assert count < 20, "Timeout waiting for shards to be updated in peer store"
# Check successful connections
check:
conn2 == true


@ -41,8 +41,8 @@ suite "Waku v2 REST API - health":
var manager {.threadVar.}: OnchainGroupManager
setup:
anvilProc = runAnvil()
manager = waitFor setupOnchainGroupManager()
anvilProc = runAnvil(stateFile = some(DEFAULT_ANVIL_STATE_PATH))
manager = waitFor setupOnchainGroupManager(deployContracts = false)
teardown:
stopAnvil(anvilProc)

vendor/nim-libp2p

@ -1 +1 @@
Subproject commit 0309685cd27d4bf763c8b3be86a76c33bcfe67ea
Subproject commit e82080f7b1aa61c6d35fa5311b873f41eff4bb52

@ -1 +1 @@
Subproject commit 900d4f95e0e618bdeb4c241f7a4b6347df6bb950
Subproject commit 8a338f354481e8a3f3d64a72e38fad4c62e32dcd


@ -24,7 +24,7 @@ requires "nim >= 2.2.4",
"stew",
"stint",
"metrics",
"libp2p >= 1.14.2",
"libp2p >= 1.14.3",
"web3",
"presto",
"regex",


@ -0,0 +1,308 @@
## EventBroker
## -------------------
## EventBroker implements a reactive decoupling pattern that
## allows event-driven development without the
## need for direct dependencies between emitters and listeners.
## It is worth considering in scenarios with one or many emitters and many listeners.
##
## Generates a standalone, type-safe event broker for the declared object type.
## The macro exports the value type itself plus a broker companion that manages
## listeners via thread-local storage.
##
## Usage:
## Declare your desired event type inside an `EventBroker` macro and add any number of fields:
## ```nim
## EventBroker:
## type TypeName = object
## field1*: FieldType
## field2*: AnotherFieldType
## ```
##
## After this, you can register async listeners anywhere in your code with
## `TypeName.listen(...)`, which returns a handle to the registered listener.
## Listeners are async procs or lambdas that take a single argument of the event type.
## Any number of listeners can be registered in different modules.
##
## Events can be emitted from anywhere with no direct dependency on the listeners by
## calling `TypeName.emit(...)` with an instance of the event type.
## This will asynchronously notify all registered listeners with the emitted event.
##
## Whenever you no longer need a listener (or the object instance that listens to the event goes out of scope),
## you can remove it from the broker with the handle returned by `listen`.
## This is done by calling `TypeName.dropListener(handle)`.
## Alternatively, you can remove all registered listeners through `TypeName.dropAllListeners()`.
##
##
## Example:
## ```nim
## EventBroker:
## type GreetingEvent = object
## text*: string
##
## let handle = GreetingEvent.listen(
## proc(evt: GreetingEvent): Future[void] {.async.} =
## echo evt.text
## )
## GreetingEvent.emit(text = "hi")
## GreetingEvent.dropListener(handle)
## ```
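##
## The generated API also accepts these equivalent emit forms and a bulk
## removal helper (illustrative; they correspond to the procs generated below):
## ```nim
## GreetingEvent(text: "hi").emit()              # emit on an instance
## GreetingEvent.emit(GreetingEvent(text: "hi")) # typedesc + instance
## GreetingEvent.dropAllListeners()              # drop every registered listener
## ```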
import std/[macros, tables]
import chronos, chronicles, results
import ./helper/broker_utils
export chronicles, results, chronos
macro EventBroker*(body: untyped): untyped =
when defined(eventBrokerDebug):
echo body.treeRepr
var typeIdent: NimNode = nil
var objectDef: NimNode = nil
var fieldNames: seq[NimNode] = @[]
var fieldTypes: seq[NimNode] = @[]
var isRefObject = false
for stmt in body:
if stmt.kind == nnkTypeSection:
for def in stmt:
if def.kind != nnkTypeDef:
continue
let rhs = def[2]
var objectType: NimNode
case rhs.kind
of nnkObjectTy:
objectType = rhs
of nnkRefTy:
isRefObject = true
if rhs.len != 1 or rhs[0].kind != nnkObjectTy:
error("EventBroker ref object must wrap a concrete object definition", rhs)
objectType = rhs[0]
else:
continue
if not typeIdent.isNil():
error("Only one object type may be declared inside EventBroker", def)
typeIdent = baseTypeIdent(def[0])
let recList = objectType[2]
if recList.kind != nnkRecList:
error("EventBroker object must declare a standard field list", objectType)
var exportedRecList = newTree(nnkRecList)
for field in recList:
case field.kind
of nnkIdentDefs:
ensureFieldDef(field)
let fieldTypeNode = field[field.len - 2]
for i in 0 ..< field.len - 2:
let baseFieldIdent = baseTypeIdent(field[i])
fieldNames.add(copyNimTree(baseFieldIdent))
fieldTypes.add(copyNimTree(fieldTypeNode))
var cloned = copyNimTree(field)
for i in 0 ..< cloned.len - 2:
cloned[i] = exportIdentNode(cloned[i])
exportedRecList.add(cloned)
of nnkEmpty:
discard
else:
error(
"EventBroker object definition only supports simple field declarations",
field,
)
let exportedObjectType = newTree(
nnkObjectTy,
copyNimTree(objectType[0]),
copyNimTree(objectType[1]),
exportedRecList,
)
if isRefObject:
objectDef = newTree(nnkRefTy, exportedObjectType)
else:
objectDef = exportedObjectType
if typeIdent.isNil():
error("EventBroker body must declare exactly one object type", body)
let exportedTypeIdent = postfix(copyNimTree(typeIdent), "*")
let sanitized = sanitizeIdentName(typeIdent)
let typeNameLit = newLit($typeIdent)
let isRefObjectLit = newLit(isRefObject)
let handlerProcIdent = ident(sanitized & "ListenerProc")
let listenerHandleIdent = ident(sanitized & "Listener")
let brokerTypeIdent = ident(sanitized & "Broker")
let exportedHandlerProcIdent = postfix(copyNimTree(handlerProcIdent), "*")
let exportedListenerHandleIdent = postfix(copyNimTree(listenerHandleIdent), "*")
let exportedBrokerTypeIdent = postfix(copyNimTree(brokerTypeIdent), "*")
let accessProcIdent = ident("access" & sanitized & "Broker")
let globalVarIdent = ident("g" & sanitized & "Broker")
let listenImplIdent = ident("register" & sanitized & "Listener")
let dropListenerImplIdent = ident("drop" & sanitized & "Listener")
let dropAllListenersImplIdent = ident("dropAll" & sanitized & "Listeners")
let emitImplIdent = ident("emit" & sanitized & "Value")
let listenerTaskIdent = ident("notify" & sanitized & "Listener")
result = newStmtList()
result.add(
quote do:
type
`exportedTypeIdent` = `objectDef`
`exportedListenerHandleIdent` = object
id*: uint64
`exportedHandlerProcIdent` =
proc(event: `typeIdent`): Future[void] {.async: (raises: []), gcsafe.}
`exportedBrokerTypeIdent` = ref object
listeners: Table[uint64, `handlerProcIdent`]
nextId: uint64
)
result.add(
quote do:
var `globalVarIdent` {.threadvar.}: `brokerTypeIdent`
)
result.add(
quote do:
proc `accessProcIdent`(): `brokerTypeIdent` =
if `globalVarIdent`.isNil():
new(`globalVarIdent`)
`globalVarIdent`.listeners = initTable[uint64, `handlerProcIdent`]()
`globalVarIdent`
)
result.add(
quote do:
proc `listenImplIdent`(
handler: `handlerProcIdent`
): Result[`listenerHandleIdent`, string] =
if handler.isNil():
return err("Must provide a non-nil event handler")
var broker = `accessProcIdent`()
if broker.nextId == 0'u64:
broker.nextId = 1'u64
if broker.nextId == high(uint64):
error "Cannot add more listeners: ID space exhausted", nextId = $broker.nextId
return err("Cannot add more listeners, listener ID space exhausted")
let newId = broker.nextId
inc broker.nextId
broker.listeners[newId] = handler
return ok(`listenerHandleIdent`(id: newId))
)
result.add(
quote do:
proc `dropListenerImplIdent`(handle: `listenerHandleIdent`) =
if handle.id == 0'u64:
return
var broker = `accessProcIdent`()
if broker.listeners.len == 0:
return
broker.listeners.del(handle.id)
)
result.add(
quote do:
proc `dropAllListenersImplIdent`() =
var broker = `accessProcIdent`()
if broker.listeners.len > 0:
broker.listeners.clear()
)
result.add(
quote do:
proc listen*(
_: typedesc[`typeIdent`], handler: `handlerProcIdent`
): Result[`listenerHandleIdent`, string] =
return `listenImplIdent`(handler)
)
result.add(
quote do:
proc dropListener*(_: typedesc[`typeIdent`], handle: `listenerHandleIdent`) =
`dropListenerImplIdent`(handle)
proc dropAllListeners*(_: typedesc[`typeIdent`]) =
`dropAllListenersImplIdent`()
)
result.add(
quote do:
proc `listenerTaskIdent`(
callback: `handlerProcIdent`, event: `typeIdent`
) {.async: (raises: []), gcsafe.} =
if callback.isNil():
return
try:
await callback(event)
except Exception:
error "Failed to execute event listener", error = getCurrentExceptionMsg()
proc `emitImplIdent`(
event: `typeIdent`
): Future[void] {.async: (raises: []), gcsafe.} =
when `isRefObjectLit`:
if event.isNil():
error "Cannot emit uninitialized event object", eventType = `typeNameLit`
return
let broker = `accessProcIdent`()
if broker.listeners.len == 0:
# nothing to do as nobody is listening
return
var callbacks: seq[`handlerProcIdent`] = @[]
for cb in broker.listeners.values:
callbacks.add(cb)
for cb in callbacks:
asyncSpawn `listenerTaskIdent`(cb, event)
proc emit*(event: `typeIdent`) =
asyncSpawn `emitImplIdent`(event)
proc emit*(_: typedesc[`typeIdent`], event: `typeIdent`) =
asyncSpawn `emitImplIdent`(event)
)
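  # Build a field-wise constructor overload `emit(_: typedesc[T], field1, ...)`
  # so callers can emit without constructing the event object by hand,
  # e.g. `T.emit(field = value)`.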
var emitCtorParams = newTree(nnkFormalParams, newEmptyNode())
let typedescParamType =
newTree(nnkBracketExpr, ident("typedesc"), copyNimTree(typeIdent))
emitCtorParams.add(
newTree(nnkIdentDefs, ident("_"), typedescParamType, newEmptyNode())
)
for i in 0 ..< fieldNames.len:
emitCtorParams.add(
newTree(
nnkIdentDefs,
copyNimTree(fieldNames[i]),
copyNimTree(fieldTypes[i]),
newEmptyNode(),
)
)
var emitCtorExpr = newTree(nnkObjConstr, copyNimTree(typeIdent))
for i in 0 ..< fieldNames.len:
emitCtorExpr.add(
newTree(nnkExprColonExpr, copyNimTree(fieldNames[i]), copyNimTree(fieldNames[i]))
)
let emitCtorCall = newCall(copyNimTree(emitImplIdent), emitCtorExpr)
let emitCtorBody = quote:
asyncSpawn `emitCtorCall`
let typedescEmitProc = newTree(
nnkProcDef,
postfix(ident("emit"), "*"),
newEmptyNode(),
newEmptyNode(),
emitCtorParams,
newEmptyNode(),
newEmptyNode(),
emitCtorBody,
)
result.add(typedescEmitProc)
when defined(eventBrokerDebug):
echo result.repr

View File

@ -0,0 +1,43 @@
import std/macros
proc sanitizeIdentName*(node: NimNode): string =
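  ## Map any character that is not valid in an identifier to '_' so the name can
  ## safely be embedded in generated symbol names (e.g. a quoted `my-type` ident
  ## yields "my_type").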
var raw = $node
var sanitizedName = newStringOfCap(raw.len)
for ch in raw:
case ch
of 'A' .. 'Z', 'a' .. 'z', '0' .. '9', '_':
sanitizedName.add(ch)
else:
sanitizedName.add('_')
sanitizedName
proc ensureFieldDef*(node: NimNode) =
if node.kind != nnkIdentDefs or node.len < 3:
error("Expected field definition of the form `name: Type`", node)
let typeSlot = node.len - 2
if node[typeSlot].kind == nnkEmpty:
error("Field `" & $node[0] & "` must declare a type", node)
proc exportIdentNode*(node: NimNode): NimNode =
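  ## Return the identifier with an export marker (`name*`), leaving nodes that
  ## are already postfixed untouched.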
case node.kind
of nnkIdent:
postfix(copyNimTree(node), "*")
of nnkPostfix:
node
else:
error("Unsupported identifier form in field definition", node)
proc baseTypeIdent*(defName: NimNode): NimNode =
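  ## Strip export markers, pragma expressions and backtick quoting to obtain the
  ## bare identifier of a type or field name.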
case defName.kind
of nnkIdent:
defName
of nnkAccQuoted:
if defName.len != 1:
error("Unsupported quoted identifier", defName)
defName[0]
of nnkPostfix:
baseTypeIdent(defName[1])
of nnkPragmaExpr:
baseTypeIdent(defName[0])
else:
error("Unsupported type name in broker definition", defName)

View File

@ -0,0 +1,583 @@
## MultiRequestBroker
## --------------------
## MultiRequestBroker implements a proactive decoupling pattern that
## allows defining request-response style interactions between modules without
## the need for direct dependencies between them.
## It is worth considering for use cases where you need to collect data from multiple providers.
##
## Provides a declarative way to define an immutable value type together with a
## thread-local broker that can register multiple asynchronous providers, dispatch
## typed requests, and clear handlers. Unlike `RequestBroker`,
## every call to `request` fans out to every registered provider and returns the
## collected responses.
## A request succeeds only if all providers succeed; otherwise it fails with an error.
##
## Usage:
##
## Declare the collectable request data type inside a `MultiRequestBroker` macro and add any number of fields:
## ```nim
## MultiRequestBroker:
## type TypeName = object
## field1*: Type1
## field2*: Type2
##
##   ## Define the request and provider signature, which is enforced at compile time.
## proc signature*(): Future[Result[TypeName, string]] {.async: (raises: []).}
##
##   ## It is also possible to define a signature with arbitrary input arguments.
## proc signature*(arg1: ArgType, arg2: AnotherArgType): Future[Result[TypeName, string]] {.async: (raises: []).}
##
## ```
##
## You register a request processor (provider) anywhere in the code, without needing to know who may issue requests.
## Matching the defined signatures, register provider functions with `TypeName.setProvider(...)`.
## Providers are async procs or lambdas that return a `Future[Result[TypeName, string]]`.
## Note that MultiRequestBroker's `setProvider` returns a handle that can be used to remove the provider later (or an error).
## Requests can be made from anywhere, with no direct dependency on the provider(s), by
## calling `TypeName.request()` with arguments respecting the signature(s).
## This asynchronously calls the registered providers and returns the collected data as a `Future[Result[seq[TypeName], string]]`.
##
## When you no longer want to process requests (or the object instance backing a provider goes out of scope),
## you can remove the provider from the broker with `TypeName.removeProvider(handle)`.
## Alternatively, you can remove all registered providers with `TypeName.clearProviders()`.
##
## Example:
## ```nim
## MultiRequestBroker:
## type Greeting = object
## text*: string
##
##   ## Define the request and provider signature, which is enforced at compile time.
## proc signature*(): Future[Result[Greeting, string]] {.async: (raises: []).}
##
##   ## It is also possible to define a signature with arbitrary input arguments.
## proc signature*(lang: string): Future[Result[Greeting, string]] {.async: (raises: []).}
##
## ...
## let handle = Greeting.setProvider(
## proc(): Future[Result[Greeting, string]] {.async: (raises: []).} =
## ok(Greeting(text: "hello"))
## )
##
## let anotherHandle = Greeting.setProvider(
## proc(): Future[Result[Greeting, string]] {.async: (raises: []).} =
## ok(Greeting(text: "szia"))
## )
##
## let responses = (await Greeting.request()).valueOr(@[Greeting(text: "default")])
##
## echo responses.len
## Greeting.clearProviders()
## ```
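##
## A minimal sketch (hypothetical, building on the `Greeting` example above) of
## removing a single provider via the handle returned by `setProvider`:
## ```nim
## let handleRes = Greeting.setProvider(
##   proc(): Future[Result[Greeting, string]] {.async: (raises: []).} =
##     ok(Greeting(text: "hola"))
## )
## if handleRes.isOk():
##   Greeting.removeProvider(handleRes.get())
## ```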
## If no `signature` proc is declared, a zero-argument form is generated
## automatically, so the caller only needs to provide the type definition.
import std/[macros, strutils, tables, sugar]
import chronos
import results
import ./helper/broker_utils
export results, chronos
proc isReturnTypeValid(returnType, typeIdent: NimNode): bool =
## Accept Future[Result[TypeIdent, string]] as the contract.
if returnType.kind != nnkBracketExpr or returnType.len != 2:
return false
if returnType[0].kind != nnkIdent or not returnType[0].eqIdent("Future"):
return false
let inner = returnType[1]
if inner.kind != nnkBracketExpr or inner.len != 3:
return false
if inner[0].kind != nnkIdent or not inner[0].eqIdent("Result"):
return false
if inner[1].kind != nnkIdent or not inner[1].eqIdent($typeIdent):
return false
inner[2].kind == nnkIdent and inner[2].eqIdent("string")
proc cloneParams(params: seq[NimNode]): seq[NimNode] =
## Deep copy parameter definitions so they can be reused in generated nodes.
result = @[]
for param in params:
result.add(copyNimTree(param))
proc collectParamNames(params: seq[NimNode]): seq[NimNode] =
## Extract identifiers declared in parameter definitions.
result = @[]
for param in params:
assert param.kind == nnkIdentDefs
for i in 0 ..< param.len - 2:
let nameNode = param[i]
if nameNode.kind == nnkEmpty:
continue
result.add(ident($nameNode))
proc makeProcType(returnType: NimNode, params: seq[NimNode]): NimNode =
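  ## Build a proc type node `proc(params): returnType {.async.}` used for the
  ## generated provider types.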
var formal = newTree(nnkFormalParams)
formal.add(returnType)
for param in params:
formal.add(param)
let pragmas = quote:
{.async.}
newTree(nnkProcTy, formal, pragmas)
macro MultiRequestBroker*(body: untyped): untyped =
when defined(requestBrokerDebug):
echo body.treeRepr
var typeIdent: NimNode = nil
var objectDef: NimNode = nil
var isRefObject = false
for stmt in body:
if stmt.kind == nnkTypeSection:
for def in stmt:
if def.kind != nnkTypeDef:
continue
let rhs = def[2]
var objectType: NimNode
case rhs.kind
of nnkObjectTy:
objectType = rhs
of nnkRefTy:
isRefObject = true
if rhs.len != 1 or rhs[0].kind != nnkObjectTy:
error(
"MultiRequestBroker ref object must wrap a concrete object definition",
rhs,
)
objectType = rhs[0]
else:
continue
if not typeIdent.isNil():
error("Only one object type may be declared inside MultiRequestBroker", def)
typeIdent = baseTypeIdent(def[0])
let recList = objectType[2]
if recList.kind != nnkRecList:
error(
"MultiRequestBroker object must declare a standard field list", objectType
)
var exportedRecList = newTree(nnkRecList)
for field in recList:
case field.kind
of nnkIdentDefs:
ensureFieldDef(field)
var cloned = copyNimTree(field)
for i in 0 ..< cloned.len - 2:
cloned[i] = exportIdentNode(cloned[i])
exportedRecList.add(cloned)
of nnkEmpty:
discard
else:
error(
"MultiRequestBroker object definition only supports simple field declarations",
field,
)
let exportedObjectType = newTree(
nnkObjectTy,
copyNimTree(objectType[0]),
copyNimTree(objectType[1]),
exportedRecList,
)
if isRefObject:
objectDef = newTree(nnkRefTy, exportedObjectType)
else:
objectDef = exportedObjectType
if typeIdent.isNil():
error("MultiRequestBroker body must declare exactly one object type", body)
when defined(requestBrokerDebug):
echo "MultiRequestBroker generating type: ", $typeIdent
let exportedTypeIdent = postfix(copyNimTree(typeIdent), "*")
let sanitized = sanitizeIdentName(typeIdent)
let typeNameLit = newLit($typeIdent)
let isRefObjectLit = newLit(isRefObject)
let tableSym = bindSym"Table"
let initTableSym = bindSym"initTable"
let uint64Ident = ident("uint64")
let providerKindIdent = ident(sanitized & "ProviderKind")
let providerHandleIdent = ident(sanitized & "ProviderHandle")
let exportedProviderHandleIdent = postfix(copyNimTree(providerHandleIdent), "*")
let zeroKindIdent = ident("pk" & sanitized & "NoArgs")
let argKindIdent = ident("pk" & sanitized & "WithArgs")
var zeroArgSig: NimNode = nil
var zeroArgProviderName: NimNode = nil
var zeroArgFieldName: NimNode = nil
var argSig: NimNode = nil
var argParams: seq[NimNode] = @[]
var argProviderName: NimNode = nil
var argFieldName: NimNode = nil
for stmt in body:
case stmt.kind
of nnkProcDef:
let procName = stmt[0]
let procNameIdent =
case procName.kind
of nnkIdent:
procName
of nnkPostfix:
procName[1]
else:
procName
let procNameStr = $procNameIdent
if not procNameStr.startsWith("signature"):
error("Signature proc names must start with `signature`", procName)
let params = stmt.params
if params.len == 0:
error("Signature must declare a return type", stmt)
let returnType = params[0]
if not isReturnTypeValid(returnType, typeIdent):
error(
"Signature must return Future[Result[`" & $typeIdent & "`, string]]", stmt
)
let paramCount = params.len - 1
if paramCount == 0:
if zeroArgSig != nil:
error("Only one zero-argument signature is allowed", stmt)
zeroArgSig = stmt
zeroArgProviderName = ident(sanitizeIdentName(typeIdent) & "ProviderNoArgs")
zeroArgFieldName = ident("providerNoArgs")
elif paramCount >= 1:
if argSig != nil:
error("Only one argument-based signature is allowed", stmt)
argSig = stmt
argParams = @[]
for idx in 1 ..< params.len:
let paramDef = params[idx]
if paramDef.kind != nnkIdentDefs:
error(
"Signature parameter must be a standard identifier declaration", paramDef
)
let paramTypeNode = paramDef[paramDef.len - 2]
if paramTypeNode.kind == nnkEmpty:
error("Signature parameter must declare a type", paramDef)
var hasName = false
for i in 0 ..< paramDef.len - 2:
if paramDef[i].kind != nnkEmpty:
hasName = true
if not hasName:
error("Signature parameter must declare a name", paramDef)
argParams.add(copyNimTree(paramDef))
argProviderName = ident(sanitizeIdentName(typeIdent) & "ProviderWithArgs")
argFieldName = ident("providerWithArgs")
of nnkTypeSection, nnkEmpty:
discard
else:
error("Unsupported statement inside MultiRequestBroker definition", stmt)
if zeroArgSig.isNil() and argSig.isNil():
zeroArgSig = newEmptyNode()
zeroArgProviderName = ident(sanitizeIdentName(typeIdent) & "ProviderNoArgs")
zeroArgFieldName = ident("providerNoArgs")
var typeSection = newTree(nnkTypeSection)
typeSection.add(newTree(nnkTypeDef, exportedTypeIdent, newEmptyNode(), objectDef))
var kindEnum = newTree(nnkEnumTy, newEmptyNode())
if not zeroArgSig.isNil():
kindEnum.add(zeroKindIdent)
if not argSig.isNil():
kindEnum.add(argKindIdent)
typeSection.add(newTree(nnkTypeDef, providerKindIdent, newEmptyNode(), kindEnum))
var handleRecList = newTree(nnkRecList)
handleRecList.add(newTree(nnkIdentDefs, ident("id"), uint64Ident, newEmptyNode()))
handleRecList.add(
newTree(nnkIdentDefs, ident("kind"), providerKindIdent, newEmptyNode())
)
typeSection.add(
newTree(
nnkTypeDef,
exportedProviderHandleIdent,
newEmptyNode(),
newTree(nnkObjectTy, newEmptyNode(), newEmptyNode(), handleRecList),
)
)
let returnType = quote:
Future[Result[`typeIdent`, string]]
if not zeroArgSig.isNil():
let procType = makeProcType(returnType, @[])
typeSection.add(newTree(nnkTypeDef, zeroArgProviderName, newEmptyNode(), procType))
if not argSig.isNil():
let procType = makeProcType(returnType, cloneParams(argParams))
typeSection.add(newTree(nnkTypeDef, argProviderName, newEmptyNode(), procType))
var brokerRecList = newTree(nnkRecList)
if not zeroArgSig.isNil():
brokerRecList.add(
newTree(
nnkIdentDefs,
zeroArgFieldName,
newTree(nnkBracketExpr, tableSym, uint64Ident, zeroArgProviderName),
newEmptyNode(),
)
)
if not argSig.isNil():
brokerRecList.add(
newTree(
nnkIdentDefs,
argFieldName,
newTree(nnkBracketExpr, tableSym, uint64Ident, argProviderName),
newEmptyNode(),
)
)
brokerRecList.add(newTree(nnkIdentDefs, ident("nextId"), uint64Ident, newEmptyNode()))
let brokerTypeIdent = ident(sanitizeIdentName(typeIdent) & "Broker")
let brokerTypeDef = newTree(
nnkTypeDef,
brokerTypeIdent,
newEmptyNode(),
newTree(
nnkRefTy, newTree(nnkObjectTy, newEmptyNode(), newEmptyNode(), brokerRecList)
),
)
typeSection.add(brokerTypeDef)
result = newStmtList()
result.add(typeSection)
let globalVarIdent = ident("g" & sanitizeIdentName(typeIdent) & "Broker")
let accessProcIdent = ident("access" & sanitizeIdentName(typeIdent) & "Broker")
var initStatements = newStmtList()
if not zeroArgSig.isNil():
initStatements.add(
quote do:
`globalVarIdent`.`zeroArgFieldName` =
`initTableSym`[`uint64Ident`, `zeroArgProviderName`]()
)
if not argSig.isNil():
initStatements.add(
quote do:
`globalVarIdent`.`argFieldName` =
`initTableSym`[`uint64Ident`, `argProviderName`]()
)
result.add(
quote do:
var `globalVarIdent` {.threadvar.}: `brokerTypeIdent`
proc `accessProcIdent`(): `brokerTypeIdent` =
if `globalVarIdent`.isNil():
new(`globalVarIdent`)
`globalVarIdent`.nextId = 1'u64
`initStatements`
return `globalVarIdent`
)
var clearBody = newStmtList()
if not zeroArgSig.isNil():
result.add(
quote do:
proc setProvider*(
_: typedesc[`typeIdent`], handler: `zeroArgProviderName`
): Result[`providerHandleIdent`, string] =
if handler.isNil():
return err("Provider handler must be provided")
let broker = `accessProcIdent`()
if broker.nextId == 0'u64:
broker.nextId = 1'u64
for existingId, existing in broker.`zeroArgFieldName`.pairs:
if existing == handler:
return ok(`providerHandleIdent`(id: existingId, kind: `zeroKindIdent`))
let newId = broker.nextId
inc broker.nextId
broker.`zeroArgFieldName`[newId] = handler
return ok(`providerHandleIdent`(id: newId, kind: `zeroKindIdent`))
)
clearBody.add(
quote do:
let broker = `accessProcIdent`()
if not broker.isNil() and broker.`zeroArgFieldName`.len > 0:
broker.`zeroArgFieldName`.clear()
)
result.add(
quote do:
proc request*(
_: typedesc[`typeIdent`]
): Future[Result[seq[`typeIdent`], string]] {.async: (raises: []), gcsafe.} =
var aggregated: seq[`typeIdent`] = @[]
let providers = `accessProcIdent`().`zeroArgFieldName`
if providers.len == 0:
return ok(aggregated)
var providersFut = collect(newSeq):
for provider in providers.values:
if provider.isNil():
continue
provider()
let catchable = catch:
await allFinished(providersFut)
catchable.isOkOr:
return err("Some provider(s) failed:" & error.msg)
for fut in catchable.get():
if fut.failed():
return err("Some provider(s) failed:" & fut.error.msg)
elif fut.finished():
let providerResult = fut.value()
if providerResult.isOk:
let providerValue = providerResult.get()
when `isRefObjectLit`:
if providerValue.isNil():
return err(
"MultiRequestBroker(" & `typeNameLit` &
"): provider returned nil result"
)
aggregated.add(providerValue)
else:
return err("Some provider(s) failed:" & providerResult.error)
return ok(aggregated)
)
if not argSig.isNil():
result.add(
quote do:
proc setProvider*(
_: typedesc[`typeIdent`], handler: `argProviderName`
): Result[`providerHandleIdent`, string] =
if handler.isNil():
return err("Provider handler must be provided")
let broker = `accessProcIdent`()
if broker.nextId == 0'u64:
broker.nextId = 1'u64
for existingId, existing in broker.`argFieldName`.pairs:
if existing == handler:
return ok(`providerHandleIdent`(id: existingId, kind: `argKindIdent`))
let newId = broker.nextId
inc broker.nextId
broker.`argFieldName`[newId] = handler
return ok(`providerHandleIdent`(id: newId, kind: `argKindIdent`))
)
clearBody.add(
quote do:
let broker = `accessProcIdent`()
if not broker.isNil() and broker.`argFieldName`.len > 0:
broker.`argFieldName`.clear()
)
let requestParamDefs = cloneParams(argParams)
let argNameIdents = collectParamNames(requestParamDefs)
let providerSym = genSym(nskLet, "providerVal")
var providerCall = newCall(providerSym)
for argName in argNameIdents:
providerCall.add(argName)
var formalParams = newTree(nnkFormalParams)
formalParams.add(
quote do:
Future[Result[seq[`typeIdent`], string]]
)
formalParams.add(
newTree(
nnkIdentDefs,
ident("_"),
newTree(nnkBracketExpr, ident("typedesc"), copyNimTree(typeIdent)),
newEmptyNode(),
)
)
for paramDef in requestParamDefs:
formalParams.add(paramDef)
let requestPragmas = quote:
{.async: (raises: []), gcsafe.}
let requestBody = quote:
var aggregated: seq[`typeIdent`] = @[]
let providers = `accessProcIdent`().`argFieldName`
if providers.len == 0:
return ok(aggregated)
var providersFut = collect(newSeq):
for provider in providers.values:
if provider.isNil():
continue
let `providerSym` = provider
`providerCall`
let catchable = catch:
await allFinished(providersFut)
catchable.isOkOr:
return err("Some provider(s) failed:" & error.msg)
for fut in catchable.get():
if fut.failed():
return err("Some provider(s) failed:" & fut.error.msg)
elif fut.finished():
let providerResult = fut.value()
if providerResult.isOk:
let providerValue = providerResult.get()
when `isRefObjectLit`:
if providerValue.isNil():
return err(
"MultiRequestBroker(" & `typeNameLit` &
"): provider returned nil result"
)
aggregated.add(providerValue)
else:
return err("Some provider(s) failed:" & providerResult.error)
return ok(aggregated)
result.add(
newTree(
nnkProcDef,
postfix(ident("request"), "*"),
newEmptyNode(),
newEmptyNode(),
formalParams,
requestPragmas,
newEmptyNode(),
requestBody,
)
)
result.add(
quote do:
proc clearProviders*(_: typedesc[`typeIdent`]) =
`clearBody`
let broker = `accessProcIdent`()
if not broker.isNil():
broker.nextId = 1'u64
)
let removeHandleSym = genSym(nskParam, "handle")
let removeBrokerSym = genSym(nskLet, "broker")
var removeBody = newStmtList()
removeBody.add(
quote do:
if `removeHandleSym`.id == 0'u64:
return
let `removeBrokerSym` = `accessProcIdent`()
if `removeBrokerSym`.isNil():
return
)
if not zeroArgSig.isNil():
removeBody.add(
quote do:
if `removeHandleSym`.kind == `zeroKindIdent`:
`removeBrokerSym`.`zeroArgFieldName`.del(`removeHandleSym`.id)
return
)
if not argSig.isNil():
removeBody.add(
quote do:
if `removeHandleSym`.kind == `argKindIdent`:
`removeBrokerSym`.`argFieldName`.del(`removeHandleSym`.id)
return
)
removeBody.add(
quote do:
discard
)
result.add(
quote do:
proc removeProvider*(
_: typedesc[`typeIdent`], `removeHandleSym`: `providerHandleIdent`
) =
`removeBody`
)
when defined(requestBrokerDebug):
echo result.repr

View File

@ -0,0 +1,438 @@
## RequestBroker
## --------------------
## RequestBroker implements a proactive decoupling pattern that
## allows defining request-response style interactions between modules without
## the need for direct dependencies between them.
## It is worth considering in a single-provider, many-requester scenario.
##
## Provides a declarative way to define an immutable value type together with a
## thread-local broker that can register an asynchronous provider, dispatch typed
## requests, and clear the provider.
##
## Usage:
## Declare your desired request type inside a `RequestBroker` macro and add any number of fields.
## Define the provider signature, which is enforced at compile time.
##
## ```nim
## RequestBroker:
## type TypeName = object
## field1*: FieldType
## field2*: AnotherFieldType
##
## proc signature*(): Future[Result[TypeName, string]]
## ## Also possible to define signature with arbitrary input arguments.
## proc signature*(arg1: ArgType, arg2: AnotherArgType): Future[Result[TypeName, string]]
##
## ```
## The 'TypeName' object defines the requestable data (it can also be seen as a request for an action with a return value).
## The 'signature' proc defines the provider(s) signature, which is enforced at compile time.
## One signature may take no arguments and another any number of arguments, where the input arguments are
## not part of the request type but alternative inputs for processing the request.
##
## After this, you can register a provider anywhere in your code with
## `TypeName.setProvider(...)`, which returns an error if a provider is already set.
## Providers are async procs or lambdas that match a declared signature and return a Future[Result[TypeName, string]].
## Only one provider can be registered at a time per signature type (zero-arg and/or multi-arg).
##
## Requests can be made from anywhere with no direct dependency on the provider by
## calling `TypeName.request()` - with arguments respecting the signature(s).
## This will asynchronously call the registered provider and return a Future[Result[TypeName, string]].
##
## When you no longer want to process requests (or the object instance backing the provider goes out of scope),
## you can remove it from the broker with `TypeName.clearProvider()`.
##
##
## Example:
## ```nim
## RequestBroker:
## type Greeting = object
## text*: string
##
##   ## Define the request and provider signature, which is enforced at compile time.
## proc signature*(): Future[Result[Greeting, string]]
##
##   ## It is also possible to define a signature with arbitrary input arguments.
## proc signature*(lang: string): Future[Result[Greeting, string]]
##
## ...
## Greeting.setProvider(
## proc(): Future[Result[Greeting, string]] {.async.} =
## ok(Greeting(text: "hello"))
## )
## let res = await Greeting.request()
## ```
## If no `signature` proc is declared, a zero-argument form is generated
## automatically, so the caller only needs to provide the type definition.
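##
## A minimal sketch (hypothetical, extending the `Greeting` example above) of the
## argument-based signature in use:
## ```nim
## discard Greeting.setProvider(
##   proc(lang: string): Future[Result[Greeting, string]] {.async.} =
##     if lang == "hu":
##       return ok(Greeting(text: "szia"))
##     return ok(Greeting(text: "hello"))
## )
## let res = await Greeting.request("hu")
## ```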
import std/[macros, strutils]
import chronos
import results
import ./helper/broker_utils
export results, chronos
proc errorFuture[T](message: string): Future[Result[T, string]] {.inline.} =
## Build a future that is already completed with an error result.
let fut = newFuture[Result[T, string]]("request_broker.errorFuture")
fut.complete(err(Result[T, string], message))
fut
proc isReturnTypeValid(returnType, typeIdent: NimNode): bool =
## Accept Future[Result[TypeIdent, string]] as the contract.
if returnType.kind != nnkBracketExpr or returnType.len != 2:
return false
if returnType[0].kind != nnkIdent or not returnType[0].eqIdent("Future"):
return false
let inner = returnType[1]
if inner.kind != nnkBracketExpr or inner.len != 3:
return false
if inner[0].kind != nnkIdent or not inner[0].eqIdent("Result"):
return false
if inner[1].kind != nnkIdent or not inner[1].eqIdent($typeIdent):
return false
inner[2].kind == nnkIdent and inner[2].eqIdent("string")
proc cloneParams(params: seq[NimNode]): seq[NimNode] =
## Deep copy parameter definitions so they can be inserted in multiple places.
result = @[]
for param in params:
result.add(copyNimTree(param))
proc collectParamNames(params: seq[NimNode]): seq[NimNode] =
## Extract all identifier symbols declared across IdentDefs nodes.
result = @[]
for param in params:
assert param.kind == nnkIdentDefs
for i in 0 ..< param.len - 2:
let nameNode = param[i]
if nameNode.kind == nnkEmpty:
continue
result.add(ident($nameNode))
proc makeProcType(returnType: NimNode, params: seq[NimNode]): NimNode =
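  ## Build a proc type node `proc(params): returnType {.async.}` used for the
  ## generated provider types.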
var formal = newTree(nnkFormalParams)
formal.add(returnType)
for param in params:
formal.add(param)
let pragmas = newTree(nnkPragma, ident("async"))
newTree(nnkProcTy, formal, pragmas)
macro RequestBroker*(body: untyped): untyped =
when defined(requestBrokerDebug):
echo body.treeRepr
var typeIdent: NimNode = nil
var objectDef: NimNode = nil
var isRefObject = false
for stmt in body:
if stmt.kind == nnkTypeSection:
for def in stmt:
if def.kind != nnkTypeDef:
continue
let rhs = def[2]
var objectType: NimNode
case rhs.kind
of nnkObjectTy:
objectType = rhs
of nnkRefTy:
isRefObject = true
if rhs.len != 1 or rhs[0].kind != nnkObjectTy:
error(
"RequestBroker ref object must wrap a concrete object definition", rhs
)
objectType = rhs[0]
else:
continue
if not typeIdent.isNil():
error("Only one object type may be declared inside RequestBroker", def)
typeIdent = baseTypeIdent(def[0])
let recList = objectType[2]
if recList.kind != nnkRecList:
error("RequestBroker object must declare a standard field list", objectType)
var exportedRecList = newTree(nnkRecList)
for field in recList:
case field.kind
of nnkIdentDefs:
ensureFieldDef(field)
var cloned = copyNimTree(field)
for i in 0 ..< cloned.len - 2:
cloned[i] = exportIdentNode(cloned[i])
exportedRecList.add(cloned)
of nnkEmpty:
discard
else:
error(
"RequestBroker object definition only supports simple field declarations",
field,
)
let exportedObjectType = newTree(
nnkObjectTy,
copyNimTree(objectType[0]),
copyNimTree(objectType[1]),
exportedRecList,
)
if isRefObject:
objectDef = newTree(nnkRefTy, exportedObjectType)
else:
objectDef = exportedObjectType
if typeIdent.isNil():
error("RequestBroker body must declare exactly one object type", body)
when defined(requestBrokerDebug):
echo "RequestBroker generating type: ", $typeIdent
let exportedTypeIdent = postfix(copyNimTree(typeIdent), "*")
let typeDisplayName = sanitizeIdentName(typeIdent)
let typeNameLit = newLit(typeDisplayName)
let isRefObjectLit = newLit(isRefObject)
var zeroArgSig: NimNode = nil
var zeroArgProviderName: NimNode = nil
var zeroArgFieldName: NimNode = nil
var argSig: NimNode = nil
var argParams: seq[NimNode] = @[]
var argProviderName: NimNode = nil
var argFieldName: NimNode = nil
for stmt in body:
case stmt.kind
of nnkProcDef:
let procName = stmt[0]
let procNameIdent =
case procName.kind
of nnkIdent:
procName
of nnkPostfix:
procName[1]
else:
procName
let procNameStr = $procNameIdent
if not procNameStr.startsWith("signature"):
error("Signature proc names must start with `signature`", procName)
let params = stmt.params
if params.len == 0:
error("Signature must declare a return type", stmt)
let returnType = params[0]
if not isReturnTypeValid(returnType, typeIdent):
error(
"Signature must return Future[Result[`" & $typeIdent & "`, string]]", stmt
)
let paramCount = params.len - 1
if paramCount == 0:
if zeroArgSig != nil:
error("Only one zero-argument signature is allowed", stmt)
zeroArgSig = stmt
zeroArgProviderName = ident(sanitizeIdentName(typeIdent) & "ProviderNoArgs")
zeroArgFieldName = ident("providerNoArgs")
elif paramCount >= 1:
if argSig != nil:
error("Only one argument-based signature is allowed", stmt)
argSig = stmt
argParams = @[]
for idx in 1 ..< params.len:
let paramDef = params[idx]
if paramDef.kind != nnkIdentDefs:
error(
"Signature parameter must be a standard identifier declaration", paramDef
)
let paramTypeNode = paramDef[paramDef.len - 2]
if paramTypeNode.kind == nnkEmpty:
error("Signature parameter must declare a type", paramDef)
var hasName = false
for i in 0 ..< paramDef.len - 2:
if paramDef[i].kind != nnkEmpty:
hasName = true
if not hasName:
error("Signature parameter must declare a name", paramDef)
argParams.add(copyNimTree(paramDef))
argProviderName = ident(sanitizeIdentName(typeIdent) & "ProviderWithArgs")
argFieldName = ident("providerWithArgs")
of nnkTypeSection, nnkEmpty:
discard
else:
error("Unsupported statement inside RequestBroker definition", stmt)
if zeroArgSig.isNil() and argSig.isNil():
zeroArgSig = newEmptyNode()
zeroArgProviderName = ident(sanitizeIdentName(typeIdent) & "ProviderNoArgs")
zeroArgFieldName = ident("providerNoArgs")
var typeSection = newTree(nnkTypeSection)
typeSection.add(newTree(nnkTypeDef, exportedTypeIdent, newEmptyNode(), objectDef))
let returnType = quote:
Future[Result[`typeIdent`, string]]
if not zeroArgSig.isNil():
let procType = makeProcType(returnType, @[])
typeSection.add(newTree(nnkTypeDef, zeroArgProviderName, newEmptyNode(), procType))
if not argSig.isNil():
let procType = makeProcType(returnType, cloneParams(argParams))
typeSection.add(newTree(nnkTypeDef, argProviderName, newEmptyNode(), procType))
var brokerRecList = newTree(nnkRecList)
if not zeroArgSig.isNil():
brokerRecList.add(
newTree(nnkIdentDefs, zeroArgFieldName, zeroArgProviderName, newEmptyNode())
)
if not argSig.isNil():
brokerRecList.add(
newTree(nnkIdentDefs, argFieldName, argProviderName, newEmptyNode())
)
let brokerTypeIdent = ident(sanitizeIdentName(typeIdent) & "Broker")
let brokerTypeDef = newTree(
nnkTypeDef,
brokerTypeIdent,
newEmptyNode(),
newTree(nnkObjectTy, newEmptyNode(), newEmptyNode(), brokerRecList),
)
typeSection.add(brokerTypeDef)
result = newStmtList()
result.add(typeSection)
let globalVarIdent = ident("g" & sanitizeIdentName(typeIdent) & "Broker")
let accessProcIdent = ident("access" & sanitizeIdentName(typeIdent) & "Broker")
result.add(
quote do:
var `globalVarIdent` {.threadvar.}: `brokerTypeIdent`
proc `accessProcIdent`(): var `brokerTypeIdent` =
`globalVarIdent`
)
var clearBody = newStmtList()
if not zeroArgSig.isNil():
result.add(
quote do:
proc setProvider*(
_: typedesc[`typeIdent`], handler: `zeroArgProviderName`
): Result[void, string] =
if not `accessProcIdent`().`zeroArgFieldName`.isNil():
return err("Zero-arg provider already set")
`accessProcIdent`().`zeroArgFieldName` = handler
return ok()
)
clearBody.add(
quote do:
`accessProcIdent`().`zeroArgFieldName` = nil
)
result.add(
quote do:
proc request*(
_: typedesc[`typeIdent`]
): Future[Result[`typeIdent`, string]] {.async: (raises: []).} =
let provider = `accessProcIdent`().`zeroArgFieldName`
if provider.isNil():
return err(
"RequestBroker(" & `typeNameLit` & "): no zero-arg provider registered"
)
let catchedRes = catch:
await provider()
if catchedRes.isErr():
return err("Request failed:" & catchedRes.error.msg)
let providerRes = catchedRes.get()
when `isRefObjectLit`:
if providerRes.isOk():
let resultValue = providerRes.get()
if resultValue.isNil():
return err(
"RequestBroker(" & `typeNameLit` & "): provider returned nil result"
)
return providerRes
)
if not argSig.isNil():
result.add(
quote do:
proc setProvider*(
_: typedesc[`typeIdent`], handler: `argProviderName`
): Result[void, string] =
if not `accessProcIdent`().`argFieldName`.isNil():
return err("Provider already set")
`accessProcIdent`().`argFieldName` = handler
return ok()
)
clearBody.add(
quote do:
`accessProcIdent`().`argFieldName` = nil
)
let requestParamDefs = cloneParams(argParams)
let argNameIdents = collectParamNames(requestParamDefs)
let providerSym = genSym(nskLet, "provider")
var formalParams = newTree(nnkFormalParams)
formalParams.add(
quote do:
Future[Result[`typeIdent`, string]]
)
formalParams.add(
newTree(
nnkIdentDefs,
ident("_"),
newTree(nnkBracketExpr, ident("typedesc"), copyNimTree(typeIdent)),
newEmptyNode(),
)
)
for paramDef in requestParamDefs:
formalParams.add(paramDef)
let requestPragmas = quote:
{.async: (raises: []), gcsafe.}
var providerCall = newCall(providerSym)
for argName in argNameIdents:
providerCall.add(argName)
var requestBody = newStmtList()
requestBody.add(
quote do:
let `providerSym` = `accessProcIdent`().`argFieldName`
)
requestBody.add(
quote do:
if `providerSym`.isNil():
return err(
"RequestBroker(" & `typeNameLit` &
"): no provider registered for input signature"
)
)
requestBody.add(
quote do:
let catchedRes = catch:
await `providerCall`
if catchedRes.isErr():
return err("Request failed:" & catchedRes.error.msg)
let providerRes = catchedRes.get()
when `isRefObjectLit`:
if providerRes.isOk():
let resultValue = providerRes.get()
if resultValue.isNil():
return err(
"RequestBroker(" & `typeNameLit` & "): provider returned nil result"
)
return providerRes
)
result.add(
newTree(
nnkProcDef,
postfix(ident("request"), "*"),
newEmptyNode(),
newEmptyNode(),
formalParams,
requestPragmas,
newEmptyNode(),
requestBody,
)
)
result.add(
quote do:
proc clearProvider*(_: typedesc[`typeIdent`]) =
`clearBody`
)
when defined(requestBrokerDebug):
echo result.repr

View File

@ -1,5 +1,7 @@
import ../waku_enr/capabilities
import waku/waku_enr/capabilities, waku/waku_rendezvous/waku_peer_record
type GetShards* = proc(): seq[uint16] {.closure, gcsafe, raises: [].}
type GetCapabilities* = proc(): seq[Capabilities] {.closure, gcsafe, raises: [].}
type GetWakuPeerRecord* = proc(): WakuPeerRecord {.closure, gcsafe, raises: [].}

View File

@ -163,6 +163,15 @@ proc setupProtocols(
error "Unrecoverable error occurred", error = msg
quit(QuitFailure)
#mount mix
if conf.mixConf.isSome():
(
await node.mountMix(
conf.clusterId, conf.mixConf.get().mixKey, conf.mixConf.get().mixnodes
)
).isOkOr:
return err("failed to mount waku mix protocol: " & $error)
if conf.storeServiceConf.isSome():
let storeServiceConf = conf.storeServiceConf.get()
if storeServiceConf.supportV2:
@ -327,9 +336,9 @@ proc setupProtocols(
protectedShard = shardKey.shard, publicKey = shardKey.key
node.wakuRelay.addSignedShardsValidator(subscribedProtectedShards, conf.clusterId)
# Only relay nodes should be rendezvous points.
if conf.rendezvous:
await node.mountRendezvous(conf.clusterId)
if conf.rendezvous:
await node.mountRendezvous(conf.clusterId)
await node.mountRendezvousClient(conf.clusterId)
# Keepalive mounted on all nodes
try:
@ -414,14 +423,6 @@ proc setupProtocols(
if conf.peerExchangeDiscovery:
await node.mountPeerExchangeClient()
#mount mix
if conf.mixConf.isSome():
(
await node.mountMix(
conf.clusterId, conf.mixConf.get().mixKey, conf.mixConf.get().mixnodes
)
).isOkOr:
return err("failed to mount waku mix protocol: " & $error)
return ok()
## Start node

View File

@ -154,7 +154,8 @@ proc logConf*(conf: WakuConf) =
store = conf.storeServiceConf.isSome(),
filter = conf.filterServiceConf.isSome(),
lightPush = conf.lightPush,
peerExchange = conf.peerExchangeService
peerExchange = conf.peerExchangeService,
rendezvous = conf.rendezvous
info "Configuration. Network", cluster = conf.clusterId

View File

@ -199,7 +199,7 @@ proc lightpushPublishHandler(
if mixify: #indicates we want to use mix to send the message
#TODO: How to handle multiple addresses?
let conn = node.wakuMix.toConnection(
MixDestination.init(peer.peerId, peer.addrs[0]),
MixDestination.exitNode(peer.peerId),
WakuLightPushCodec,
MixParameters(expectReply: Opt.some(true), numSurbs: Opt.some(byte(1))),
# indicating we only want a single path to be used for reply hence numSurbs = 1
@ -210,9 +210,7 @@ proc lightpushPublishHandler(
"Waku lightpush with mix not available",
)
return await node.wakuLightpushClient.publishWithConn(
pubsubTopic, message, conn, peer.peerId
)
return await node.wakuLightpushClient.publish(some(pubsubTopic), message, conn)
else:
return await node.wakuLightpushClient.publish(some(pubsubTopic), message, peer)

View File

@ -658,6 +658,11 @@ proc onPeerMetadata(pm: PeerManager, peerId: PeerId) {.async.} =
$clusterId
break guardClauses
# Store the shard information from metadata in the peer store
if pm.switch.peerStore.peerExists(peerId):
let shards = metadata.shards.mapIt(it.uint16)
pm.switch.peerStore.setShardInfo(peerId, shards)
return
info "disconnecting from peer", peerId = peerId, reason = reason

View File

@ -6,7 +6,8 @@ import
chronicles,
eth/p2p/discoveryv5/enr,
libp2p/builders,
libp2p/peerstore
libp2p/peerstore,
libp2p/crypto/curve25519
import
../../waku_core,
@ -39,6 +40,12 @@ type
# Keeps track of the ENR (Ethereum Node Record) of a peer
ENRBook* = ref object of PeerBook[enr.Record]
# Keeps track of peer shards
ShardBook* = ref object of PeerBook[seq[uint16]]
# Keeps track of Mix protocol public keys of peers
MixPubKeyBook* = ref object of PeerBook[Curve25519Key]
proc getPeer*(peerStore: PeerStore, peerId: PeerId): RemotePeerInfo =
let addresses =
if peerStore[LastSeenBook][peerId].isSome():
@ -55,6 +62,7 @@ proc getPeer*(peerStore: PeerStore, peerId: PeerId): RemotePeerInfo =
else:
none(enr.Record),
protocols: peerStore[ProtoBook][peerId],
shards: peerStore[ShardBook][peerId],
agent: peerStore[AgentBook][peerId],
protoVersion: peerStore[ProtoVersionBook][peerId],
publicKey: peerStore[KeyBook][peerId],
@ -64,6 +72,11 @@ proc getPeer*(peerStore: PeerStore, peerId: PeerId): RemotePeerInfo =
direction: peerStore[DirectionBook][peerId],
lastFailedConn: peerStore[LastFailedConnBook][peerId],
numberFailedConn: peerStore[NumberFailedConnBook][peerId],
mixPubKey:
if peerStore[MixPubKeyBook][peerId] != default(Curve25519Key):
some(peerStore[MixPubKeyBook][peerId])
else:
none(Curve25519Key),
)
proc delete*(peerStore: PeerStore, peerId: PeerId) =
@ -76,12 +89,20 @@ proc peers*(peerStore: PeerStore): seq[RemotePeerInfo] =
toSeq(peerStore[AddressBook].book.keys()),
toSeq(peerStore[ProtoBook].book.keys()),
toSeq(peerStore[KeyBook].book.keys()),
toSeq(peerStore[ShardBook].book.keys()),
)
.toHashSet()
return allKeys.mapIt(peerStore.getPeer(it))
proc addPeer*(peerStore: PeerStore, peer: RemotePeerInfo, origin = UnknownOrigin) =
## Storing MixPubKey even if peer is already present as this info might be new
## or updated.
if peer.mixPubKey.isSome():
trace "adding mix pub key to peer store",
peer_id = $peer.peerId, mix_pub_key = $peer.mixPubKey.get()
peerStore[MixPubKeyBook].book[peer.peerId] = peer.mixPubKey.get()
## Notice that the origin parameter is used to manually override the given peer origin.
## At the time of writing, this is used in waku_discv5 or waku_node (peer exchange.)
if peerStore[AddressBook][peer.peerId] == peer.addrs and
@ -108,6 +129,7 @@ proc addPeer*(peerStore: PeerStore, peer: RemotePeerInfo, origin = UnknownOrigin
peerStore[ProtoBook][peer.peerId] = protos
## We don't care whether the item was already present in the table or not. Hence, we always discard the hasKeyOrPut's bool returned value
discard peerStore[AgentBook].book.hasKeyOrPut(peer.peerId, peer.agent)
discard peerStore[ProtoVersionBook].book.hasKeyOrPut(peer.peerId, peer.protoVersion)
discard peerStore[KeyBook].book.hasKeyOrPut(peer.peerId, peer.publicKey)
@ -127,6 +149,9 @@ proc addPeer*(peerStore: PeerStore, peer: RemotePeerInfo, origin = UnknownOrigin
if peer.enr.isSome():
peerStore[ENRBook][peer.peerId] = peer.enr.get()
proc setShardInfo*(peerStore: PeerStore, peerId: PeerID, shards: seq[uint16]) =
peerStore[ShardBook][peerId] = shards
proc peers*(peerStore: PeerStore, proto: string): seq[RemotePeerInfo] =
peerStore.peers().filterIt(it.protocols.contains(proto))

View File

@ -22,6 +22,7 @@ import
libp2p/transports/tcptransport,
libp2p/transports/wstransport,
libp2p/utility,
libp2p/utils/offsettedseq,
libp2p/protocols/mix,
libp2p/protocols/mix/mix_protocol
@ -43,6 +44,8 @@ import
../waku_filter_v2/client as filter_client,
../waku_metadata,
../waku_rendezvous/protocol,
../waku_rendezvous/client as rendezvous_client,
../waku_rendezvous/waku_peer_record,
../waku_lightpush_legacy/client as legacy_ligntpuhs_client,
../waku_lightpush_legacy as legacy_lightpush_protocol,
../waku_lightpush/client as ligntpuhs_client,
@ -121,6 +124,7 @@ type
libp2pPing*: Ping
rng*: ref rand.HmacDrbgContext
wakuRendezvous*: WakuRendezVous
wakuRendezvousClient*: rendezvous_client.WakuRendezVousClient
announcedAddresses*: seq[MultiAddress]
started*: bool # Indicates that node has started listening
topicSubscriptionQueue*: AsyncEventQueue[SubscriptionEvent]
@ -148,6 +152,17 @@ proc getCapabilitiesGetter(node: WakuNode): GetCapabilities =
return @[]
return node.enr.getCapabilities()
proc getWakuPeerRecordGetter(node: WakuNode): GetWakuPeerRecord =
return proc(): WakuPeerRecord {.closure, gcsafe, raises: [].} =
var mixKey: string
if not node.wakuMix.isNil():
mixKey = node.wakuMix.pubKey.to0xHex()
return WakuPeerRecord.init(
peerId = node.switch.peerInfo.peerId,
addresses = node.announcedAddresses,
mixKey = mixKey,
)
proc new*(
T: type WakuNode,
netConfig: NetConfig,
@ -257,12 +272,12 @@ proc mountMix*(
return err("Failed to convert multiaddress to string.")
info "local addr", localaddr = localaddrStr
let nodeAddr = localaddrStr & "/p2p/" & $node.peerId
node.wakuMix = WakuMix.new(
nodeAddr, node.peerManager, clusterId, mixPrivKey, mixnodes
localaddrStr, node.peerManager, clusterId, mixPrivKey, mixnodes
).valueOr:
error "Waku Mix protocol initialization failed", err = error
return
#TODO: should we do the below only for exit node? Also, what if multiple protocols use mix?
node.wakuMix.registerDestReadBehavior(WakuLightPushCodec, readLp(int(-1)))
let catchRes = catch:
node.switch.mount(node.wakuMix)
@ -346,6 +361,18 @@ proc selectRandomPeers*(peers: seq[PeerId], numRandomPeers: int): seq[PeerId] =
shuffle(randomPeers)
return randomPeers[0 ..< min(len(randomPeers), numRandomPeers)]
proc mountRendezvousClient*(node: WakuNode, clusterId: uint16) {.async: (raises: []).} =
info "mounting rendezvous client"
node.wakuRendezvousClient = rendezvous_client.WakuRendezVousClient.new(
node.switch, node.peerManager, clusterId
).valueOr:
error "initializing waku rendezvous client failed", error = error
return
if node.started:
await node.wakuRendezvousClient.start()
proc mountRendezvous*(node: WakuNode, clusterId: uint16) {.async: (raises: []).} =
info "mounting rendezvous discovery protocol"
@ -355,6 +382,7 @@ proc mountRendezvous*(node: WakuNode, clusterId: uint16) {.async: (raises: []).}
clusterId,
node.getShardsGetter(),
node.getCapabilitiesGetter(),
node.getWakuPeerRecordGetter(),
).valueOr:
error "initializing waku rendezvous failed", error = error
return
@ -362,6 +390,11 @@ proc mountRendezvous*(node: WakuNode, clusterId: uint16) {.async: (raises: []).}
if node.started:
await node.wakuRendezvous.start()
try:
node.switch.mount(node.wakuRendezvous, protocolMatcher(WakuRendezVousCodec))
except LPError:
error "failed to mount wakuRendezvous", error = getCurrentExceptionMsg()
proc isBindIpWithZeroPort(inputMultiAdd: MultiAddress): bool =
let inputStr = $inputMultiAdd
if inputStr.contains("0.0.0.0/tcp/0") or inputStr.contains("127.0.0.1/tcp/0"):
@ -438,6 +471,9 @@ proc start*(node: WakuNode) {.async.} =
if not node.wakuRendezvous.isNil():
await node.wakuRendezvous.start()
if not node.wakuRendezvousClient.isNil():
await node.wakuRendezvousClient.start()
if not node.wakuStoreReconciliation.isNil():
node.wakuStoreReconciliation.start()
@ -496,6 +532,9 @@ proc stop*(node: WakuNode) {.async.} =
if not node.wakuRendezvous.isNil():
await node.wakuRendezvous.stopWait()
if not node.wakuRendezvousClient.isNil():
await node.wakuRendezvousClient.stopWait()
node.started = false
proc isReady*(node: WakuNode): Future[bool] {.async: (raises: [Exception]).} =

View File

@ -57,7 +57,7 @@ proc getStoreMessagesV3*(
# Optional cursor fields
cursor: string = "", # base64-encoded hash
ascending: string = "",
pageSize: string = "",
pageSize: string = "20", # default value is 20
): RestResponse[StoreQueryResponseHex] {.
rest, endpoint: "/store/v3/messages", meth: HttpMethod.MethodGet
.}

View File

@ -129,6 +129,14 @@ proc createStoreQuery(
except CatchableError:
return err("page size parsing error: " & getCurrentExceptionMsg())
# Enforce a default page_size of 20
if parsedPagedSize.isNone():
parsedPagedSize = some(20.uint64)
# Cap page_size at a maximum of 100
if parsedPagedSize.get() > 100:
parsedPagedSize = some(100.uint64)
return ok(
StoreQueryRequest(
includeData: parsedIncludeData,

View File

@ -5,6 +5,7 @@ import
stew/[byteutils, arrayops],
results,
chronos,
metrics,
db_connector/[postgres, db_common],
chronicles
import
@ -16,6 +17,9 @@ import
./postgres_healthcheck,
./partitions_manager
declarePublicGauge postgres_payload_size_bytes,
"Payload size in bytes of correctly stored messages"
type PostgresDriver* = ref object of ArchiveDriver
## Establish a separate pools for read/write operations
writeConnPool: PgAsyncPool
@ -333,7 +337,7 @@ method put*(
return err("could not put msg in messages table: " & $error)
## Now add the row to messages_lookup
return await s.writeConnPool.runStmt(
let ret = await s.writeConnPool.runStmt(
InsertRowInMessagesLookupStmtName,
InsertRowInMessagesLookupStmtDefinition,
@[messageHash, timestamp],
@ -341,6 +345,10 @@ method put*(
@[int32(0), int32(0)],
)
if ret.isOk():
postgres_payload_size_bytes.set(message.payload.len)
return ret
method getAllMessages*(
s: PostgresDriver
): Future[ArchiveDriverResult[seq[ArchiveRow]]] {.async.} =

View File

@ -10,3 +10,4 @@ const
WakuMetadataCodec* = "/vac/waku/metadata/1.0.0"
WakuPeerExchangeCodec* = "/vac/waku/peer-exchange/2.0.0-alpha1"
WakuLegacyStoreCodec* = "/vac/waku/store/2.0.0-beta4"
WakuRendezVousCodec* = "/vac/waku/rendezvous/1.0.0"

View File

@ -9,6 +9,7 @@ import
eth/p2p/discoveryv5/enr,
eth/net/utils,
libp2p/crypto/crypto,
libp2p/crypto/curve25519,
libp2p/crypto/secp,
libp2p/errors,
libp2p/multiaddress,
@ -48,6 +49,8 @@ type RemotePeerInfo* = ref object
addrs*: seq[MultiAddress]
enr*: Option[enr.Record]
protocols*: seq[string]
shards*: seq[uint16]
mixPubKey*: Option[Curve25519Key]
agent*: string
protoVersion*: string
@ -73,6 +76,7 @@ proc init*(
addrs: seq[MultiAddress] = @[],
enr: Option[enr.Record] = none(enr.Record),
protocols: seq[string] = @[],
shards: seq[uint16] = @[],
publicKey: crypto.PublicKey = crypto.PublicKey(),
agent: string = "",
protoVersion: string = "",
@ -82,12 +86,14 @@ proc init*(
direction: PeerDirection = UnknownDirection,
lastFailedConn: Moment = Moment.init(0, Second),
numberFailedConn: int = 0,
mixPubKey: Option[Curve25519Key] = none(Curve25519Key),
): T =
RemotePeerInfo(
peerId: peerId,
addrs: addrs,
enr: enr,
protocols: protocols,
shards: shards,
publicKey: publicKey,
agent: agent,
protoVersion: protoVersion,
@ -97,6 +103,7 @@ proc init*(
direction: direction,
lastFailedConn: lastFailedConn,
numberFailedConn: numberFailedConn,
mixPubKey: mixPubKey,
)
proc init*(
@ -105,9 +112,12 @@ proc init*(
addrs: seq[MultiAddress] = @[],
enr: Option[enr.Record] = none(enr.Record),
protocols: seq[string] = @[],
shards: seq[uint16] = @[],
): T {.raises: [Defect, ResultError[cstring], LPError].} =
let peerId = PeerID.init(peerId).tryGet()
RemotePeerInfo(peerId: peerId, addrs: addrs, enr: enr, protocols: protocols)
RemotePeerInfo(
peerId: peerId, addrs: addrs, enr: enr, protocols: protocols, shards: shards
)
## Parse
@ -326,6 +336,7 @@ converter toRemotePeerInfo*(peerInfo: PeerInfo): RemotePeerInfo =
addrs: peerInfo.listenAddrs,
enr: none(enr.Record),
protocols: peerInfo.protocols,
shards: @[],
agent: peerInfo.agentVersion,
protoVersion: peerInfo.protoVersion,
publicKey: peerInfo.publicKey,
@ -361,6 +372,9 @@ proc getAgent*(peer: RemotePeerInfo): string =
return peer.agent
proc getShards*(peer: RemotePeerInfo): seq[uint16] =
if peer.shards.len > 0:
return peer.shards
if peer.enr.isNone():
return @[]

View File

@ -17,8 +17,8 @@ logScope:
topics = "waku lightpush client"
type WakuLightPushClient* = ref object
peerManager*: PeerManager
rng*: ref rand.HmacDrbgContext
peerManager*: PeerManager
publishObservers: seq[PublishObserver]
proc new*(
@ -29,43 +29,47 @@ proc new*(
proc addPublishObserver*(wl: WakuLightPushClient, obs: PublishObserver) =
wl.publishObservers.add(obs)
proc sendPushRequest(
wl: WakuLightPushClient,
req: LightPushRequest,
peer: PeerId | RemotePeerInfo,
conn: Option[Connection] = none(Connection),
proc ensureTimestampSet(message: var WakuMessage) =
if message.timestamp == 0:
message.timestamp = getNowInNanosecondTime()
## Short log string for peer identifiers (overloads for convenience)
func shortPeerId(peer: PeerId): string =
shortLog(peer)
func shortPeerId(peer: RemotePeerInfo): string =
shortLog(peer.peerId)
proc sendPushRequestToConn(
wl: WakuLightPushClient, request: LightPushRequest, conn: Connection
): Future[WakuLightPushResult] {.async.} =
let connection = conn.valueOr:
(await wl.peerManager.dialPeer(peer, WakuLightPushCodec)).valueOr:
waku_lightpush_v3_errors.inc(labelValues = [dialFailure])
return lighpushErrorResult(
LightPushErrorCode.NO_PEERS_TO_RELAY,
dialFailure & ": " & $peer & " is not accessible",
)
defer:
await connection.closeWithEOF()
await connection.writeLP(req.encode().buffer)
try:
await conn.writeLp(request.encode().buffer)
except LPStreamRemoteClosedError:
error "Failed to write request to peer", error = getCurrentExceptionMsg()
return lightpushResultInternalError(
"Failed to write request to peer: " & getCurrentExceptionMsg()
)
var buffer: seq[byte]
try:
buffer = await connection.readLp(DefaultMaxRpcSize.int)
buffer = await conn.readLp(DefaultMaxRpcSize.int)
except LPStreamRemoteClosedError:
error "Failed to read response from peer", error = getCurrentExceptionMsg()
return lightpushResultInternalError(
"Failed to read response from peer: " & getCurrentExceptionMsg()
)
let response = LightpushResponse.decode(buffer).valueOr:
error "failed to decode response"
error "failed to decode response", error = $error
waku_lightpush_v3_errors.inc(labelValues = [decodeRpcFailure])
return lightpushResultInternalError(decodeRpcFailure)
if response.requestId != req.requestId and
response.statusCode != LightPushErrorCode.TOO_MANY_REQUESTS:
let requestIdMismatch = response.requestId != request.requestId
let tooManyRequests = response.statusCode == LightPushErrorCode.TOO_MANY_REQUESTS
if requestIdMismatch and (not tooManyRequests):
# response with TOO_MANY_REQUESTS error code has no requestId by design
error "response failure, requestId mismatch",
requestId = req.requestId, responseRequestId = response.requestId
requestId = request.requestId, responseRequestId = response.requestId
return lightpushResultInternalError("response failure, requestId mismatch")
return toPushResult(response)
@ -74,88 +78,49 @@ proc publish*(
wl: WakuLightPushClient,
pubSubTopic: Option[PubsubTopic] = none(PubsubTopic),
wakuMessage: WakuMessage,
peer: PeerId | RemotePeerInfo,
dest: Connection | PeerId | RemotePeerInfo,
): Future[WakuLightPushResult] {.async, gcsafe.} =
let conn =
when dest is Connection:
dest
else:
(await wl.peerManager.dialPeer(dest, WakuLightPushCodec)).valueOr:
waku_lightpush_v3_errors.inc(labelValues = [dialFailure])
return lighpushErrorResult(
LightPushErrorCode.NO_PEERS_TO_RELAY,
"Peer is not accessible: " & dialFailure & " - " & $dest,
)
defer:
await conn.closeWithEOF()
var message = wakuMessage
if message.timestamp == 0:
message.timestamp = getNowInNanosecondTime()
ensureTimestampSet(message)
when peer is PeerId:
info "publish",
peerId = shortLog(peer),
msg_hash = computeMessageHash(pubsubTopic.get(""), message).to0xHex
else:
info "publish",
peerId = shortLog(peer.peerId),
msg_hash = computeMessageHash(pubsubTopic.get(""), message).to0xHex
let msgHash = computeMessageHash(pubSubTopic.get(""), message).to0xHex()
info "publish",
myPeerId = wl.peerManager.switch.peerInfo.peerId,
peerId = shortPeerId(conn.peerId),
msgHash = msgHash,
sentTime = getNowInNanosecondTime()
let pushRequest = LightpushRequest(
requestId: generateRequestId(wl.rng), pubSubTopic: pubSubTopic, message: message
let request = LightpushRequest(
requestId: generateRequestId(wl.rng), pubsubTopic: pubSubTopic, message: message
)
let publishedCount = ?await wl.sendPushRequest(pushRequest, peer)
let relayPeerCount = ?await wl.sendPushRequestToConn(request, conn)
for obs in wl.publishObservers:
obs.onMessagePublished(pubSubTopic.get(""), message)
return lightpushSuccessResult(publishedCount)
return lightpushSuccessResult(relayPeerCount)
proc publishToAny*(
wl: WakuLightPushClient, pubSubTopic: PubsubTopic, wakuMessage: WakuMessage
wl: WakuLightPushClient, pubsubTopic: PubsubTopic, wakuMessage: WakuMessage
): Future[WakuLightPushResult] {.async, gcsafe.} =
## This proc is similar to the publish one but in this case
## we don't specify a particular peer and instead we get it from peer manager
var message = wakuMessage
if message.timestamp == 0:
message.timestamp = getNowInNanosecondTime()
# Like publish, but selects a peer automatically from the peer manager
let peer = wl.peerManager.selectPeer(WakuLightPushCodec).valueOr:
  # TODO: check whether this matches the situation - should we distinguish client-side missing peers from server-side ones?
return lighpushErrorResult(
LightPushErrorCode.NO_PEERS_TO_RELAY, "no suitable remote peers"
)
info "publishToAny",
my_peer_id = wl.peerManager.switch.peerInfo.peerId,
peer_id = peer.peerId,
msg_hash = computeMessageHash(pubsubTopic, message).to0xHex,
sentTime = getNowInNanosecondTime()
let pushRequest = LightpushRequest(
requestId: generateRequestId(wl.rng),
pubSubTopic: some(pubSubTopic),
message: message,
)
let publishedCount = ?await wl.sendPushRequest(pushRequest, peer)
for obs in wl.publishObservers:
obs.onMessagePublished(pubSubTopic, message)
return lightpushSuccessResult(publishedCount)
proc publishWithConn*(
wl: WakuLightPushClient,
pubSubTopic: PubsubTopic,
message: WakuMessage,
conn: Connection,
destPeer: PeerId,
): Future[WakuLightPushResult] {.async, gcsafe.} =
info "publishWithConn",
my_peer_id = wl.peerManager.switch.peerInfo.peerId,
peer_id = destPeer,
msg_hash = computeMessageHash(pubsubTopic, message).to0xHex,
sentTime = getNowInNanosecondTime()
let pushRequest = LightpushRequest(
requestId: generateRequestId(wl.rng),
pubSubTopic: some(pubSubTopic),
message: message,
)
#TODO: figure out how to not pass destPeer as this is just a hack
let publishedCount =
?await wl.sendPushRequest(pushRequest, destPeer, conn = some(conn))
for obs in wl.publishObservers:
obs.onMessagePublished(pubSubTopic, message)
return lightpushSuccessResult(publishedCount)
return await wl.publish(some(pubsubTopic), wakuMessage, peer)

View File

@ -35,7 +35,15 @@ func isSuccess*(response: LightPushResponse): bool =
func toPushResult*(response: LightPushResponse): WakuLightPushResult =
if isSuccess(response):
return ok(response.relayPeerCount.get(0))
let relayPeerCount = response.relayPeerCount.get(0)
return (
if (relayPeerCount == 0):
# Consider publishing to zero peers an error even if the service node
# sent us a "successful" response with zero peers
err((LightPushErrorCode.NO_PEERS_TO_RELAY, response.statusDesc))
else:
ok(relayPeerCount)
)
else:
return err((response.statusCode, response.statusDesc))
@ -51,11 +59,6 @@ func lightpushResultBadRequest*(msg: string): WakuLightPushResult =
func lightpushResultServiceUnavailable*(msg: string): WakuLightPushResult =
return err((LightPushErrorCode.SERVICE_NOT_AVAILABLE, some(msg)))
- func lighpushErrorResult*(
-   statusCode: LightpushStatusCode, desc: Option[string]
- ): WakuLightPushResult =
-   return err((statusCode, desc))
func lighpushErrorResult*(
statusCode: LightpushStatusCode, desc: string
): WakuLightPushResult =


@@ -78,9 +78,9 @@ proc handleRequest(
proc handleRequest*(
wl: WakuLightPush, peerId: PeerId, buffer: seq[byte]
): Future[LightPushResponse] {.async.} =
- let pushRequest = LightPushRequest.decode(buffer).valueOr:
+ let request = LightPushRequest.decode(buffer).valueOr:
    let desc = decodeRpcFailure & ": " & $error
-   error "failed to push message", error = desc
+   error "failed to decode Lightpush request", error = desc
let errorCode = LightPushErrorCode.BAD_REQUEST
waku_lightpush_v3_errors.inc(labelValues = [$errorCode])
return LightPushResponse(
@@ -89,16 +89,16 @@ proc handleRequest*(
statusDesc: some(desc),
)
- let relayPeerCount = (await handleRequest(wl, peerId, pushRequest)).valueOr:
+ let relayPeerCount = (await wl.handleRequest(peerId, request)).valueOr:
let desc = error.desc
waku_lightpush_v3_errors.inc(labelValues = [$error.code])
error "failed to push message", error = desc
return LightPushResponse(
- requestId: pushRequest.requestId, statusCode: error.code, statusDesc: desc
+ requestId: request.requestId, statusCode: error.code, statusDesc: desc
)
return LightPushResponse(
- requestId: pushRequest.requestId,
+ requestId: request.requestId,
statusCode: LightPushSuccessCode.SUCCESS,
statusDesc: none[string](),
relayPeerCount: some(relayPeerCount),
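A stand-in sketch of the three outcomes this handler produces (simplified types, not the actual protobuf messages): a decode failure maps to BAD_REQUEST, a handler error propagates its own code, and success carries the relay peer count.

import std/options
import results

type RespSketch = object
  statusCode: uint32
  statusDesc: Option[string]
  relayPeerCount: Option[uint32]

proc mapOutcome(
    decoded: Result[string, string], # stand-in for LightPushRequest.decode
    pushed: Result[uint32, (uint32, Option[string])], # stand-in for the inner handler
): RespSketch =
  if decoded.isErr():
    return RespSketch(statusCode: 400, statusDesc: some("bad request: " & decoded.error))
  let count = pushed.valueOr:
    return RespSketch(statusCode: error[0], statusDesc: error[1])
  RespSketch(statusCode: 200, relayPeerCount: some(count))

assert mapOutcome(err("garbled bytes"), ok(0'u32)).statusCode == 400
assert mapOutcome(ok("req"), ok(2'u32)).relayPeerCount == some(2'u32)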
@@ -123,7 +123,7 @@ proc initProtocolHandler(wl: WakuLightPush) =
)
try:
- rpc = await handleRequest(wl, conn.peerId, buffer)
+ rpc = await wl.handleRequest(conn.peerId, buffer)
except CatchableError:
error "lightpush failed handleRequest", error = getCurrentExceptionMsg()
do:


@@ -6,6 +6,8 @@ import
libp2p/crypto/curve25519,
libp2p/protocols/mix,
libp2p/protocols/mix/mix_node,
+ libp2p/protocols/mix/mix_protocol,
+ libp2p/protocols/mix/mix_metrics,
libp2p/[multiaddress, multicodec, peerid],
eth/common/keys
@@ -19,7 +21,7 @@ import
logScope:
topics = "waku mix"
- const mixMixPoolSize = 3
+ const minMixPoolSize = 4
type
WakuMix* = ref object of MixProtocol
@@ -34,22 +36,18 @@ type
multiAddr*: string
pubKey*: Curve25519Key
- proc mixPoolFilter*(cluster: Option[uint16], peer: RemotePeerInfo): bool =
+ proc filterMixNodes(cluster: Option[uint16], peer: RemotePeerInfo): bool =
# Note that origin-based (discv5) filtering is intentionally not done
# so that more mix nodes can be discovered.
- if peer.enr.isNone():
-   trace "peer has no ENR", peer = $peer
+ if peer.mixPubKey.isNone():
+   trace "remote peer has no mix Pub Key", peer = $peer
return false
- if cluster.isSome() and peer.enr.get().isClusterMismatched(cluster.get()):
+ if cluster.isSome() and peer.enr.isSome() and
+     peer.enr.get().isClusterMismatched(cluster.get()):
trace "peer has mismatching cluster", peer = $peer
return false
- # Filter if mix is enabled
- if not peer.enr.get().supportsCapability(Capabilities.Mix):
-   trace "peer doesn't support mix", peer = $peer
-   return false
return true
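A toy, runnable version of this filter with stand-in fields (FakePeer and enrCluster are illustrative, not nwaku types): a peer is kept only if it advertises a mix public key, and the cluster check applies only when an ENR is present.

import std/options

type FakePeer = object # stand-in for RemotePeerInfo, illustration only
  mixPubKey: Option[string]
  enrCluster: Option[uint16] # stands in for peer.enr + isClusterMismatched

proc keepForMixPool(cluster: Option[uint16], p: FakePeer): bool =
  if p.mixPubKey.isNone():
    return false # no mix key advertised, cannot be a mix node
  if cluster.isSome() and p.enrCluster.isSome() and p.enrCluster.get() != cluster.get():
    return false # ENR present and cluster mismatched
  true # no ENR (or matching cluster): keep, so more mix nodes can be discovered

assert keepForMixPool(some(1'u16), FakePeer(mixPubKey: some("key"))) # no ENR: kept
assert not keepForMixPool(some(1'u16), FakePeer(mixPubKey: some("key"), enrCluster: some(2'u16)))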
proc appendPeerIdToMultiaddr*(multiaddr: MultiAddress, peerId: PeerId): MultiAddress =
@@ -74,34 +72,52 @@ func getIPv4Multiaddr*(maddrs: seq[MultiAddress]): Option[MultiAddress] =
trace "no ipv4 multiaddr found"
return none(MultiAddress)
- #[ Not deleting as these can be reused once discovery is sorted
- proc populateMixNodePool*(mix: WakuMix) =
+ proc populateMixNodePool*(mix: WakuMix) =
# populate only peers that i) are reachable ii) share cluster iii) support mix
let remotePeers = mix.peerManager.switch.peerStore.peers().filterIt(
- mixPoolFilter(some(mix.clusterId), it)
+ filterMixNodes(some(mix.clusterId), it)
)
var mixNodes = initTable[PeerId, MixPubInfo]()
for i in 0 ..< min(remotePeers.len, 100):
- let remotePeerENR = remotePeers[i].enr.get()
let ipv4addr = getIPv4Multiaddr(remotePeers[i].addrs).valueOr:
  trace "peer has no ipv4 address", peer = $remotePeers[i]
  continue
- let maddrWithPeerId =
-   toString(appendPeerIdToMultiaddr(ipv4addr, remotePeers[i].peerId))
- trace "remote peer ENR",
-   peerId = remotePeers[i].peerId, enr = remotePeerENR, maddr = maddrWithPeerId
+ let maddrWithPeerId = appendPeerIdToMultiaddr(ipv4addr, remotePeers[i].peerId)
+ trace "remote peer info", info = remotePeers[i]
- let peerMixPubKey = mixKey(remotePeerENR).get()
- let mixNodePubInfo =
-   createMixPubInfo(maddrWithPeerId.value, intoCurve25519Key(peerMixPubKey))
+ if remotePeers[i].mixPubKey.isNone():
+   trace "peer has no mix Pub Key", remotePeerId = $remotePeers[i]
+   continue
+ let peerMixPubKey = remotePeers[i].mixPubKey.get()
+ var peerPubKey: crypto.PublicKey
+ if not remotePeers[i].peerId.extractPublicKey(peerPubKey):
+   warn "Failed to extract public key from peerId, skipping node",
+     remotePeerId = remotePeers[i].peerId
+   continue
+ if peerPubKey.scheme != PKScheme.Secp256k1:
+   warn "Peer public key is not Secp256k1, skipping node",
+     remotePeerId = remotePeers[i].peerId, scheme = peerPubKey.scheme
+   continue
+ let mixNodePubInfo = MixPubInfo.init(
+   remotePeers[i].peerId,
+   ipv4addr,
+   intoCurve25519Key(peerMixPubKey),
+   peerPubKey.skkey,
+ )
+ trace "adding mix node to pool",
+   remotePeerId = remotePeers[i].peerId, multiAddr = $ipv4addr
mixNodes[remotePeers[i].peerId] = mixNodePubInfo
- mix_pool_size.set(len(mixNodes))
# set the mix node pool
mix.setNodePool(mixNodes)
+ mix_pool_size.set(len(mixNodes))
+ trace "mix node pool updated", poolSize = mix.getNodePoolSize()
# Once mix protocol starts to use info from PeerStore, then this can be removed.
proc startMixNodePoolMgr*(mix: WakuMix) {.async.} =
info "starting mix node pool manager"
# try more aggressively to populate the pool at startup
@@ -115,9 +131,10 @@ proc startMixNodePoolMgr*(mix: WakuMix) {.async.} =
# TODO: make interval configurable
heartbeat "Updating mix node pool", 5.seconds:
mix.populateMixNodePool()
- ]#
- proc toMixNodeTable(bootnodes: seq[MixNodePubInfo]): Table[PeerId, MixPubInfo] =
+ proc processBootNodes(
+     bootnodes: seq[MixNodePubInfo], peermgr: PeerManager
+ ): Table[PeerId, MixPubInfo] =
var mixNodes = initTable[PeerId, MixPubInfo]()
for node in bootnodes:
let pInfo = parsePeerInfo(node.multiAddr).valueOr:
@@ -140,6 +157,11 @@ proc toMixNodeTable(bootnodes: seq[MixNodePubInfo]): Table[PeerId, MixPubInfo] =
continue
mixNodes[peerId] = MixPubInfo.init(peerId, multiAddr, node.pubKey, peerPubKey.skkey)
+ peermgr.addPeer(
+   RemotePeerInfo.init(peerId, @[multiAddr], mixPubKey = some(node.pubKey))
+ )
+ mix_pool_size.set(len(mixNodes))
info "using mix bootstrap nodes ", bootNodes = mixNodes
return mixNodes
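# processBootNodes both builds the bootstrap mix-node table and registers each
# bootnode (with its mix public key) in the PeerManager, so later dialing and
# peer selection can resolve these peers without a separate discovery pass.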
@@ -152,25 +174,26 @@
bootnodes: seq[MixNodePubInfo],
): WakuMixResult[T] =
let mixPubKey = public(mixPrivKey)
info "mixPrivKey", mixPrivKey = mixPrivKey, mixPubKey = mixPubKey
info "mixPubKey", mixPubKey = mixPubKey
let nodeMultiAddr = MultiAddress.init(nodeAddr).valueOr:
return err("failed to parse mix node address: " & $nodeAddr & ", error: " & error)
let localMixNodeInfo = initMixNodeInfo(
peermgr.switch.peerInfo.peerId, nodeMultiAddr, mixPubKey, mixPrivKey,
peermgr.switch.peerInfo.publicKey.skkey, peermgr.switch.peerInfo.privateKey.skkey,
)
- if bootnodes.len < mixMixPoolSize:
-   warn "publishing with mix won't work as there are less than 3 mix nodes in node pool"
- let initTable = toMixNodeTable(bootnodes)
- if len(initTable) < mixMixPoolSize:
-   warn "publishing with mix won't work as there are less than 3 mix nodes in node pool"
+ if bootnodes.len < minMixPoolSize:
+   warn "publishing with mix won't work until there are at least 4 mix nodes in the node pool"
+ let initTable = processBootNodes(bootnodes, peermgr)
+ if len(initTable) < minMixPoolSize:
+   warn "publishing with mix won't work until there are at least 4 mix nodes in the node pool"
var m = WakuMix(peerManager: peermgr, clusterId: clusterId, pubKey: mixPubKey)
procCall MixProtocol(m).init(localMixNodeInfo, initTable, peermgr.switch)
return ok(m)
method start*(mix: WakuMix) =
info "starting waku mix protocol"
- #mix.nodePoolLoopHandle = mix.startMixNodePoolMgr() This can be re-enabled once discovery is addressed
+ mix.nodePoolLoopHandle = mix.startMixNodePoolMgr()
method stop*(mix: WakuMix) {.async.} =
if mix.nodePoolLoopHandle.isNil():


@@ -0,0 +1,142 @@
{.push raises: [].}
import
std/[options, sequtils, tables],
results,
chronos,
chronicles,
libp2p/protocols/rendezvous,
libp2p/crypto/curve25519,
libp2p/switch,
libp2p/utils/semaphore
import metrics except collect
import
waku/node/peer_manager,
waku/waku_core/peers,
waku/waku_core/codecs,
./common,
./waku_peer_record
logScope:
topics = "waku rendezvous client"
declarePublicCounter rendezvousPeerFoundTotal,
"total number of peers found via rendezvous"
type WakuRendezVousClient* = ref object
switch: Switch
peerManager: PeerManager
clusterId: uint16
requestInterval: timer.Duration
periodicRequestFut: Future[void]
# Internal rendezvous instance for making requests
rdv: GenericRendezVous[WakuPeerRecord]
const MaxSimultaneousAdvertisements = 5
const RendezVousLookupInterval = 10.seconds
proc requestAll*(
self: WakuRendezVousClient
): Future[Result[void, string]] {.async: (raises: []).} =
trace "waku rendezvous client requests started"
let namespace = computeMixNamespace(self.clusterId)
# Get a random WakuRDV peer
let rpi = self.peerManager.selectPeer(WakuRendezVousCodec).valueOr:
return err("could not get a peer supporting WakuRendezVousCodec")
var records: seq[WakuPeerRecord]
try:
# Use the libp2p rendezvous request method
records = await self.rdv.request(
Opt.some(namespace), Opt.some(PeersRequestedCount), Opt.some(@[rpi.peerId])
)
except CatchableError as e:
return err("rendezvous request failed: " & e.msg)
trace "waku rendezvous client request got peers", count = records.len
for record in records:
if not self.switch.peerStore.peerExists(record.peerId):
rendezvousPeerFoundTotal.inc()
if record.mixKey.len == 0 or record.peerId == self.switch.peerInfo.peerId:
continue
trace "adding peer from rendezvous",
peerId = record.peerId, addresses = $record.addresses, mixKey = record.mixKey
let rInfo = RemotePeerInfo.init(
record.peerId,
record.addresses,
mixPubKey = some(intoCurve25519Key(fromHex(record.mixKey))),
)
self.peerManager.addPeer(rInfo)
trace "waku rendezvous client request finished"
return ok()
proc periodicRequests(self: WakuRendezVousClient) {.async.} =
info "waku rendezvous periodic requests started", interval = self.requestInterval
# infinite loop
while true:
await sleepAsync(self.requestInterval)
(await self.requestAll()).isOkOr:
error "waku rendezvous requests failed", error = error
# Exponential backoff
#[ TODO: Reevaluate for mix; maybe be aggressive at the start until a sizeable pool is built and then back off.
self.requestInterval += self.requestInterval
if self.requestInterval >= 1.days:
break ]#
proc new*(
T: type WakuRendezVousClient,
switch: Switch,
peerManager: PeerManager,
clusterId: uint16,
): Result[T, string] {.raises: [].} =
# Create a minimal GenericRendezVous instance for client-side requests
# We don't need the full server functionality, just the request method
let rng = newRng()
let rdv = GenericRendezVous[WakuPeerRecord](
switch: switch,
rng: rng,
sema: newAsyncSemaphore(MaxSimultaneousAdvertisements),
minDuration: rendezvous.MinimumAcceptedDuration,
maxDuration: rendezvous.MaximumDuration,
minTTL: rendezvous.MinimumAcceptedDuration.seconds.uint64,
maxTTL: rendezvous.MaximumDuration.seconds.uint64,
peers: @[], # Will be populated from selectPeer calls
cookiesSaved: initTable[PeerId, Table[string, seq[byte]]](),
peerRecordValidator: checkWakuPeerRecord,
)
# Set codec separately as it's inherited from LPProtocol
rdv.codec = WakuRendezVousCodec
let client = T(
switch: switch,
peerManager: peerManager,
clusterId: clusterId,
requestInterval: RendezVousLookupInterval,
rdv: rdv,
)
info "waku rendezvous client initialized", clusterId = clusterId
return ok(client)
proc start*(self: WakuRendezVousClient) {.async: (raises: []).} =
self.periodicRequestFut = self.periodicRequests()
info "waku rendezvous client started"
proc stopWait*(self: WakuRendezVousClient) {.async: (raises: []).} =
if not self.periodicRequestFut.isNil():
await self.periodicRequestFut.cancelAndWait()
info "waku rendezvous client stopped"


@@ -11,6 +11,14 @@ const DefaultRequestsInterval* = 1.minutes
const MaxRegistrationInterval* = 5.minutes
const PeersRequestedCount* = 12
proc computeMixNamespace*(clusterId: uint16): string =
var namespace = "rs/"
namespace &= $clusterId
namespace &= "/mix"
return namespace
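For example, mix peers in cluster 16 register and look each other up under:

assert computeMixNamespace(16) == "rs/16/mix"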
proc computeNamespace*(clusterId: uint16, shard: uint16): string =
var namespace = "rs/"


@@ -1,70 +1,91 @@
{.push raises: [].}
import
- std/[sugar, options],
+ std/[sugar, options, sequtils, tables],
results,
chronos,
chronicles,
- metrics,
stew/byteutils,
libp2p/protocols/rendezvous,
libp2p/protocols/rendezvous/protobuf,
libp2p/discovery/discoverymngr,
libp2p/utils/semaphore,
libp2p/utils/offsettedseq,
libp2p/crypto/curve25519,
libp2p/switch,
libp2p/utility
+ import metrics except collect
import
../node/peer_manager,
../common/callbacks,
../waku_enr/capabilities,
../waku_core/peers,
- ../waku_core/topics,
- ../waku_core/topics/pubsub_topic,
- ./common
+ ../waku_core/codecs,
+ ./common,
+ ./waku_peer_record
logScope:
topics = "waku rendezvous"
declarePublicCounter rendezvousPeerFoundTotal,
"total number of peers found via rendezvous"
- type WakuRendezVous* = ref object
-   rendezvous: Rendezvous
+ type WakuRendezVous* = ref object of GenericRendezVous[WakuPeerRecord]
peerManager: PeerManager
clusterId: uint16
getShards: GetShards
getCapabilities: GetCapabilities
+ getPeerRecord: GetWakuPeerRecord
registrationInterval: timer.Duration
periodicRegistrationFut: Future[void]
requestInterval: timer.Duration
periodicRequestFut: Future[void]
const MaximumNamespaceLen = 255
- proc batchAdvertise*(
+ method discover*(
+     self: WakuRendezVous, conn: Connection, d: Discover
+ ) {.async: (raises: [CancelledError, LPStreamError]).} =
+   # Override discover method to avoid collect macro generic instantiation issues
+   trace "Received Discover", peerId = conn.peerId, ns = d.ns
+   await procCall GenericRendezVous[WakuPeerRecord](self).discover(conn, d)
+ proc advertise*(
self: WakuRendezVous,
namespace: string,
- ttl: Duration = DefaultRegistrationTTL,
peers: seq[PeerId],
+ ttl: timer.Duration = self.minDuration,
): Future[Result[void, string]] {.async: (raises: []).} =
## Register with all rendezvous peers under a namespace
trace "advertising via waku rendezvous",
namespace = namespace, ttl = ttl, peers = $peers, peerRecord = $self.getPeerRecord()
let se = SignedPayload[WakuPeerRecord].init(
self.switch.peerInfo.privateKey, self.getPeerRecord()
).valueOr:
return
err("rendezvous advertisement failed: Failed to sign Waku Peer Record: " & $error)
let sprBuff = se.encode().valueOr:
return err("rendezvous advertisement failed: Wrong Signed Peer Record: " & $error)
# rendezvous.advertise expects already opened connections
# must dial first
var futs = collect(newSeq):
for peerId in peers:
- self.peerManager.dialPeer(peerId, RendezVousCodec)
+ self.peerManager.dialPeer(peerId, self.codec)
let dialCatch = catch:
await allFinished(futs)
- futs = dialCatch.valueOr:
-   return err("batchAdvertise: " & error.msg)
+ if dialCatch.isErr():
+   return err("advertise: " & dialCatch.error.msg)
+ futs = dialCatch.get()
let conns = collect(newSeq):
for fut in futs:
let catchable = catch:
fut.read()
- catchable.isOkOr:
-   warn "a rendezvous dial failed", cause = error.msg
+ if catchable.isErr():
+   warn "a rendezvous dial failed", cause = catchable.error.msg
continue
let connOpt = catchable.get()
@@ -74,149 +95,34 @@ proc batchAdvertise*(
conn
- let advertCatch = catch:
-   await self.rendezvous.advertise(namespace, Opt.some(ttl))
- for conn in conns:
-   await conn.close()
- advertCatch.isOkOr:
-   return err("batchAdvertise: " & error.msg)
+ if conns.len == 0:
+   return err("could not establish any connections to rendezvous peers")
+ try:
+   await self.advertise(namespace, ttl, peers, sprBuff)
+ except Exception as e:
+   return err("rendezvous advertisement failed: " & e.msg)
+ finally:
+   for conn in conns:
+     await conn.close()
return ok()
- proc batchRequest*(
-   self: WakuRendezVous,
-   namespace: string,
-   count: int = DiscoverLimit,
-   peers: seq[PeerId],
- ): Future[Result[seq[PeerRecord], string]] {.async: (raises: []).} =
-   ## Request all records from all rendezvous peers matching a namespace
-   # rendezvous.request expects already opened connections
-   # must dial first
-   var futs = collect(newSeq):
-     for peerId in peers:
-       self.peerManager.dialPeer(peerId, RendezVousCodec)
-   let dialCatch = catch:
-     await allFinished(futs)
-   futs = dialCatch.valueOr:
-     return err("batchRequest: " & error.msg)
-   let conns = collect(newSeq):
-     for fut in futs:
-       let catchable = catch:
-         fut.read()
-       catchable.isOkOr:
-         warn "a rendezvous dial failed", cause = error.msg
-         continue
-       let connOpt = catchable.get()
-       let conn = connOpt.valueOr:
-         continue
-       conn
-   let reqCatch = catch:
-     await self.rendezvous.request(Opt.some(namespace), Opt.some(count), Opt.some(peers))
-   for conn in conns:
-     await conn.close()
-   reqCatch.isOkOr:
-     return err("batchRequest: " & error.msg)
-   return ok(reqCatch.get())
- proc advertiseAll(
+ proc advertiseAll*(
  self: WakuRendezVous
): Future[Result[void, string]] {.async: (raises: []).} =
- info "waku rendezvous advertisements started"
+ trace "waku rendezvous advertisements started"
- let shards = self.getShards()
- let futs = collect(newSeq):
-   for shardId in shards:
-     # Get a random RDV peer for that shard
-     let pubsub =
-       toPubsubTopic(RelayShard(clusterId: self.clusterId, shardId: shardId))
-     let rpi = self.peerManager.selectPeer(RendezVousCodec, some(pubsub)).valueOr:
-       continue
-     let namespace = computeNamespace(self.clusterId, shardId)
-     # Advertise yourself on that peer
-     self.batchAdvertise(namespace, DefaultRegistrationTTL, @[rpi.peerId])
- if futs.len < 1:
+ let rpi = self.peerManager.selectPeer(self.codec).valueOr:
    return err("could not get a peer supporting RendezVousCodec")
- let catchable = catch:
-   await allFinished(futs)
+ let namespace = computeMixNamespace(self.clusterId)
- catchable.isOkOr:
-   return err(error.msg)
+ # Advertise yourself on that peer
+ let res = await self.advertise(namespace, @[rpi.peerId])
- for fut in catchable.get():
-   if fut.failed():
-     warn "a rendezvous advertisement failed", cause = fut.error.msg
- trace "waku rendezvous advertisements finished"
+ info "waku rendezvous advertisements finished"
- return ok()
- proc initialRequestAll*(
-   self: WakuRendezVous
- ): Future[Result[void, string]] {.async: (raises: []).} =
-   info "waku rendezvous initial requests started"
-   let shards = self.getShards()
-   let futs = collect(newSeq):
-     for shardId in shards:
-       let namespace = computeNamespace(self.clusterId, shardId)
-       # Get a random RDV peer for that shard
-       let rpi = self.peerManager.selectPeer(
-         RendezVousCodec,
-         some(toPubsubTopic(RelayShard(clusterId: self.clusterId, shardId: shardId))),
-       ).valueOr:
-         continue
-       # Ask for peer records for that shard
-       self.batchRequest(namespace, PeersRequestedCount, @[rpi.peerId])
-   if futs.len < 1:
-     return err("could not get a peer supporting RendezVousCodec")
-   let catchable = catch:
-     await allFinished(futs)
-   catchable.isOkOr:
-     return err(error.msg)
-   for fut in catchable.get():
-     if fut.failed():
-       warn "a rendezvous request failed", cause = fut.error.msg
-     elif fut.finished():
-       let res = fut.value()
-       let records = res.valueOr:
-         warn "a rendezvous request failed", cause = $error
-         continue
-       for record in records:
-         rendezvousPeerFoundTotal.inc()
-         self.peerManager.addPeer(record)
-   info "waku rendezvous initial request finished"
-   return ok()
+ return res
proc periodicRegistration(self: WakuRendezVous) {.async.} =
info "waku rendezvous periodic registration started",
@@ -237,22 +143,6 @@ proc periodicRegistration(self: WakuRendezVous) {.async.} =
# Back to normal interval if no errors
self.registrationInterval = DefaultRegistrationInterval
- proc periodicRequests(self: WakuRendezVous) {.async.} =
-   info "waku rendezvous periodic requests started", interval = self.requestInterval
-   # infinite loop
-   while true:
-     (await self.initialRequestAll()).isOkOr:
-       error "waku rendezvous requests failed", error = error
-     await sleepAsync(self.requestInterval)
-     # Exponential backoff
-     self.requestInterval += self.requestInterval
-     if self.requestInterval >= 1.days:
-       break
proc new*(
T: type WakuRendezVous,
switch: Switch,
@@ -260,46 +150,91 @@ proc new*(
clusterId: uint16,
getShards: GetShards,
getCapabilities: GetCapabilities,
+ getPeerRecord: GetWakuPeerRecord,
): Result[T, string] {.raises: [].} =
- let rvCatchable = catch:
-   RendezVous.new(switch = switch, minDuration = DefaultRegistrationTTL)
+ let rng = newRng()
+ let wrv = T(
+   rng: rng,
+   salt: string.fromBytes(generateBytes(rng[], 8)),
+   registered: initOffsettedSeq[RegisteredData](),
+   expiredDT: Moment.now() - 1.days,
+   sema: newAsyncSemaphore(SemaphoreDefaultSize),
+   minDuration: rendezvous.MinimumAcceptedDuration,
+   maxDuration: rendezvous.MaximumDuration,
+   minTTL: rendezvous.MinimumAcceptedDuration.seconds.uint64,
+   maxTTL: rendezvous.MaximumDuration.seconds.uint64,
+   peerRecordValidator: checkWakuPeerRecord,
+ )
- let rv = rvCatchable.valueOr:
-   return err(error.msg)
- let mountCatchable = catch:
-   switch.mount(rv)
- mountCatchable.isOkOr:
-   return err(error.msg)
- var wrv = WakuRendezVous()
- wrv.rendezvous = rv
wrv.peerManager = peerManager
wrv.clusterId = clusterId
wrv.getShards = getShards
wrv.getCapabilities = getCapabilities
wrv.registrationInterval = DefaultRegistrationInterval
- wrv.requestInterval = DefaultRequestsInterval
+ wrv.getPeerRecord = getPeerRecord
+ wrv.switch = switch
+ wrv.codec = WakuRendezVousCodec
proc handleStream(
conn: Connection, proto: string
) {.async: (raises: [CancelledError]).} =
try:
let
buf = await conn.readLp(4096)
msg = Message.decode(buf).tryGet()
case msg.msgType
of MessageType.Register:
#TODO: override this to store peers registered with us in peerstore with their info as well.
await wrv.register(conn, msg.register.tryGet(), wrv.getPeerRecord())
of MessageType.RegisterResponse:
trace "Got an unexpected Register Response", response = msg.registerResponse
of MessageType.Unregister:
wrv.unregister(conn, msg.unregister.tryGet())
of MessageType.Discover:
await wrv.discover(conn, msg.discover.tryGet())
of MessageType.DiscoverResponse:
trace "Got an unexpected Discover Response", response = msg.discoverResponse
except CancelledError as exc:
trace "cancelled rendezvous handler"
raise exc
except CatchableError as exc:
trace "exception in rendezvous handler", description = exc.msg
finally:
await conn.close()
wrv.handler = handleStream
info "waku rendezvous initialized",
- clusterId = clusterId, shards = getShards(), capabilities = getCapabilities()
+ clusterId = clusterId,
+ shards = getShards(),
+ capabilities = getCapabilities(),
+ wakuPeerRecord = getPeerRecord()
return ok(wrv)
proc start*(self: WakuRendezVous) {.async: (raises: []).} =
# Start the parent GenericRendezVous (starts the register deletion loop)
if self.started:
warn "waku rendezvous already started"
return
try:
await procCall GenericRendezVous[WakuPeerRecord](self).start()
except CancelledError as exc:
error "failed to start GenericRendezVous", cause = exc.msg
return
# start registering forever
self.periodicRegistrationFut = self.periodicRegistration()
- self.periodicRequestFut = self.periodicRequests()
info "waku rendezvous discovery started"
proc stopWait*(self: WakuRendezVous) {.async: (raises: []).} =
if not self.periodicRegistrationFut.isNil():
await self.periodicRegistrationFut.cancelAndWait()
- if not self.periodicRequestFut.isNil():
-   await self.periodicRequestFut.cancelAndWait()
# Stop the parent GenericRendezVous (stops the register deletion loop)
await GenericRendezVous[WakuPeerRecord](self).stop()
info "waku rendezvous discovery stopped"


@@ -0,0 +1,74 @@
import std/times, sugar
import
libp2p/[
protocols/rendezvous,
signed_envelope,
multicodec,
multiaddress,
protobuf/minprotobuf,
peerid,
]
type WakuPeerRecord* = object
# Considering only mix as of now, but we can keep extending this to include all capabilities that are part of the Waku ENR
peerId*: PeerId
seqNo*: uint64
addresses*: seq[MultiAddress]
mixKey*: string
proc payloadDomain*(T: typedesc[WakuPeerRecord]): string =
$multiCodec("libp2p-custom-peer-record")
proc payloadType*(T: typedesc[WakuPeerRecord]): seq[byte] =
@[(byte) 0x30, (byte) 0x00, (byte) 0x00]
proc init*(
T: typedesc[WakuPeerRecord],
peerId: PeerId,
seqNo = getTime().toUnix().uint64,
addresses: seq[MultiAddress],
mixKey: string,
): T =
WakuPeerRecord(peerId: peerId, seqNo: seqNo, addresses: addresses, mixKey: mixKey)
proc decode*(
T: typedesc[WakuPeerRecord], buffer: seq[byte]
): Result[WakuPeerRecord, ProtoError] =
let pb = initProtoBuffer(buffer)
var record = WakuPeerRecord()
?pb.getRequiredField(1, record.peerId)
?pb.getRequiredField(2, record.seqNo)
discard ?pb.getRepeatedField(3, record.addresses)
if record.addresses.len == 0:
return err(ProtoError.RequiredFieldMissing)
?pb.getRequiredField(4, record.mixKey)
return ok(record)
proc encode*(record: WakuPeerRecord): seq[byte] =
var pb = initProtoBuffer()
pb.write(1, record.peerId)
pb.write(2, record.seqNo)
for address in record.addresses:
pb.write(3, address)
pb.write(4, record.mixKey)
pb.finish()
return pb.buffer
proc checkWakuPeerRecord*(
_: WakuPeerRecord, spr: seq[byte], peerId: PeerId
): Result[void, string] {.gcsafe.} =
if spr.len == 0:
return err("Empty peer record")
let signedEnv = ?SignedPayload[WakuPeerRecord].decode(spr).mapErr(x => $x)
if signedEnv.data.peerId != peerId:
return err("Bad Peer ID")
return ok()
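A hedged round-trip sketch (peerId and maddr are placeholders for a real PeerId and MultiAddress, not constructed here); decode also rejects a record that carries no addresses:

let record = WakuPeerRecord.init(
  peerId = peerId, # placeholder PeerId
  addresses = @[maddr], # placeholder MultiAddress; decode requires at least one
  mixKey = "ab12cd34", # hex-encoded Curve25519 public key
)
let decoded = WakuPeerRecord.decode(record.encode()).expect("round-trip")
assert decoded.peerId == record.peerId and decoded.mixKey == record.mixKey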


@@ -229,9 +229,20 @@ method register*(
var gasPrice: int
g.retryWrapper(gasPrice, "Failed to get gas price"):
- int(await ethRpc.provider.eth_gasPrice()) * 2
+ let fetchedGasPrice = uint64(await ethRpc.provider.eth_gasPrice())
+ ## Multiply by 2 to speed up the transaction
+ ## Check for overflow when casting to int
+ if fetchedGasPrice > uint64(high(int) div 2):
+   warn "Gas price overflow detected, capping at maximum int value",
+     fetchedGasPrice = fetchedGasPrice, maxInt = high(int)
+   high(int)
+ else:
+   let calculatedGasPrice = int(fetchedGasPrice) * 2
+   debug "Gas price calculated",
+     fetchedGasPrice = fetchedGasPrice, gasPrice = calculatedGasPrice
+   calculatedGasPrice
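# (Illustration) The guard above avoids signed overflow when doubling: on a
# 64-bit target, high(int) div 2 == 4_611_686_018_427_387_903, so any fetched
# price above that bound is capped at high(int) instead of wrapping negative
# after the `* 2`.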
let idCommitmentHex = identityCredential.idCommitment.inHex()
info "identityCredential idCommitmentHex", idCommitment = idCommitmentHex
debug "identityCredential idCommitmentHex", idCommitment = idCommitmentHex
let idCommitment = identityCredential.idCommitment.toUInt256()
let idCommitmentsToErase: seq[UInt256] = @[]
info "registering the member",
@@ -248,11 +259,10 @@ method register*(
var tsReceipt: ReceiptObject
g.retryWrapper(tsReceipt, "Failed to get the transaction receipt"):
await ethRpc.getMinedTransactionReceipt(txHash)
info "registration transaction mined", txHash = txHash
debug "registration transaction mined", txHash = txHash
g.registrationTxHash = some(txHash)
# the receipt topic holds the hash of the signature of the raised events
# TODO: make this robust. search within the event list for the event
info "ts receipt", receipt = tsReceipt[]
debug "ts receipt", receipt = tsReceipt[]
if tsReceipt.status.isNone():
raise newException(ValueError, "Transaction failed: status is None")
@@ -261,18 +271,27 @@ method register*(
ValueError, "Transaction failed with status: " & $tsReceipt.status.get()
)
- ## Extract MembershipRegistered event from transaction logs (third event)
- let thirdTopic = tsReceipt.logs[2].topics[0]
- info "third topic", thirdTopic = thirdTopic
- if thirdTopic !=
-     cast[FixedBytes[32]](keccak.keccak256.digest(
-       "MembershipRegistered(uint256,uint256,uint32)"
-     ).data):
-   raise newException(ValueError, "register: unexpected event signature")
+ ## Search through all transaction logs to find the MembershipRegistered event
+ let expectedEventSignature = cast[FixedBytes[32]](keccak.keccak256.digest(
+   "MembershipRegistered(uint256,uint256,uint32)"
+ ).data)
- ## Parse MembershipRegistered event data: rateCommitment(256) || membershipRateLimit(256) || index(32)
- let arguments = tsReceipt.logs[2].data
- info "tx log data", arguments = arguments
+ var membershipRegisteredLog: Option[LogObject]
+ for log in tsReceipt.logs:
+   if log.topics.len > 0 and log.topics[0] == expectedEventSignature:
+     membershipRegisteredLog = some(log)
+     break
+ if membershipRegisteredLog.isNone():
+   raise newException(
+     ValueError, "register: MembershipRegistered event not found in transaction logs"
+   )
+ let registrationLog = membershipRegisteredLog.get()
+ ## Parse MembershipRegistered event data: idCommitment(256) || membershipRateLimit(256) || index(32)
+ let arguments = registrationLog.data
+ trace "registration transaction log data", arguments = arguments
let
## Extract membership index from transaction log data (big endian)
membershipIndex = UInt256.fromBytesBE(arguments[64 .. 95])
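# (Layout note) The non-indexed event arguments are three 32-byte ABI words:
# bytes [0 ..< 32] idCommitment (uint256), bytes [32 ..< 64] membershipRateLimit
# (uint256), bytes [64 ..< 96] index (uint32, right-aligned, big endian); hence
# UInt256.fromBytesBE(arguments[64 .. 95]) above.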