Compare commits: master..v0.35.0

No commits in common. "master" and "v0.35.0" have entirely different histories.

592 changed files with 27,290 additions and 46,864 deletions.


@@ -12,6 +12,7 @@ assignees: ''
 Update `nwaku` "vendor" dependencies.
 ### Items to bump
+- [ ] negentropy
 - [ ] dnsclient.nim ( update to the latest tag version )
 - [ ] nim-bearssl
 - [ ] nimbus-build-system
@@ -37,12 +38,12 @@ Update `nwaku` "vendor" dependencies.
 - [ ] nim-sqlite3-abi ( update to the latest tag version )
 - [ ] nim-stew
 - [ ] nim-stint
-- [ ] nim-taskpools ( update to the latest tag version )
+- [ ] nim-taskpools
-- [ ] nim-testutils ( update to the latest tag version )
+- [ ] nim-testutils
 - [ ] nim-toml-serialization
 - [ ] nim-unicodedb
-- [ ] nim-unittest2 ( update to the latest tag version )
+- [ ] nim-unittest2
-- [ ] nim-web3 ( update to the latest tag version )
+- [ ] nim-web3
-- [ ] nim-websock ( update to the latest tag version )
+- [ ] nim-websock
 - [ ] nim-zlib
-- [ ] zerokit ( this should be kept in version `v0.7.0` )
+- [ ] zerokit ( this should be kept in version `v0.5.1` )
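For the "update to the latest tag version" items above, the usual flow is to fetch tags inside the vendor submodule and check out the newest one. A minimal sketch of picking the latest tag, demonstrated on a throwaway repository (the repo, tag names, and the `v:refname` sort key are illustrative, not taken from this checklist):

```shell
# Throwaway repo with two tags, standing in for e.g. vendor/nim-unittest2.
set -e
cd "$(mktemp -d)"
git init -q
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "init"
git tag v0.1.0
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "second"
git tag v0.2.0

# Newest tag by version-aware sort; check it out to pin the vendor module.
latest=$(git tag -l --sort=-v:refname | head -n1)
git checkout -q "$latest"
echo "$latest"
```

In a real bump you would run this inside the vendor directory, then commit the new submodule pointer in the superproject.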


@@ -1,56 +0,0 @@
---
name: Prepare Beta Release
about: Execute tasks for the creation and publishing of a new beta release
title: 'Prepare beta release 0.0.0'
labels: beta-release
assignees: ''
---
<!--
Add appropriate release number to title!
For detailed info on the release process refer to https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md
-->
### Items to complete
All items below are to be completed by the owner of the given release.
- [ ] Create release branch with major and minor only ( e.g. release/v0.X ) if it doesn't exist.
- [ ] Assign release candidate tag to the release branch HEAD (e.g. `v0.X.0-beta-rc.0`, `v0.X.0-beta-rc.1`, ... `v0.X.0-beta-rc.N`).
- [ ] Generate and edit release notes in CHANGELOG.md.
- [ ] **Waku test and fleets validation**
- [ ] Ensure all the unit tests (specifically js-waku tests) are green against the release candidate.
- [ ] Deploy the release candidate to `waku.test` only through [deploy-waku-test job](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-test/) and wait for it to finish (Jenkins access required; ask the infra team if you don't have it).
- After completion, disable [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to master.
- Verify the deployed version at https://fleets.waku.org/.
- Confirm the container image exists on [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab).
- [ ] Analyze Kibana logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test`.
- Most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")`.
- [ ] Re-enable the `waku.test` fleet to resume auto-deployment of the latest `master` commit.
- [ ] **Proceed with release**
- [ ] Assign a final release tag (`v0.X.0-beta`) to the same commit that contains the validated release-candidate tag (e.g. `v0.X.0-beta-rc.N`) and submit a PR from the release branch to `master`.
- [ ] Update [nwaku-compose](https://github.com/logos-messaging/nwaku-compose) and [waku-simulator](https://github.com/logos-messaging/waku-simulator) according to the new release.
- [ ] Bump nwaku dependency in [waku-rust-bindings](https://github.com/logos-messaging/waku-rust-bindings) and make sure all examples and tests work.
- [ ] Bump nwaku dependency in [waku-go-bindings](https://github.com/logos-messaging/waku-go-bindings) and make sure all tests work.
- [ ] Create GitHub release (https://github.com/logos-messaging/nwaku/releases).
- [ ] Submit a PR to merge the release branch back to `master`. Make sure you use the option "Merge pull request (Create a merge commit)" to perform the merge. Ping repo admin if this option is not available.
- [ ] **Promote release to fleets**
- [ ] Ask the PM lead to announce the release.
- [ ] Update infra config with any deprecated arguments or changed options.
- [ ] Update waku.sandbox with [this deployment job](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-sandbox/).
### Links
- [Release process](https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md)
- [Release notes](https://github.com/logos-messaging/nwaku/blob/master/CHANGELOG.md)
- [Fleet ownership](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64)
- [Infra-nim-waku](https://github.com/status-im/infra-nim-waku)
- [Jenkins](https://ci.infra.status.im/job/nim-waku/)
- [Fleets](https://fleets.waku.org/)
- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab)
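The branch-and-tag steps at the top of this checklist can be sketched as shell commands. The version `v0.36` is a placeholder, and the throwaway repository only stands in for a real nwaku clone:

```shell
set -e
cd "$(mktemp -d)"
git init -q
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "chore: seed"

# Create the release branch with major and minor only, if it doesn't exist yet.
git rev-parse --verify release/v0.36 >/dev/null 2>&1 || git checkout -q -b release/v0.36

# Tag the branch HEAD as a release candidate; bump the -rc.N suffix per new candidate.
git tag v0.36.0-beta-rc.0

# After validation, the final tag points at the same commit as the last RC.
git tag v0.36.0-beta "$(git rev-list -n 1 v0.36.0-beta-rc.0)"
git tag --points-at HEAD
```

Pushing the branch and both tags to the real repository is deliberately omitted here.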


@@ -1,76 +0,0 @@
---
name: Prepare Full Release
about: Execute tasks for the creation and publishing of a new full release
title: 'Prepare full release 0.0.0'
labels: full-release
assignees: ''
---
<!--
Add appropriate release number to title!
For detailed info on the release process refer to https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md
-->
### Items to complete
All items below are to be completed by the owner of the given release.
- [ ] Create release branch with major and minor only ( e.g. release/v0.X ) if it doesn't exist.
- [ ] Assign release candidate tag to the release branch HEAD (e.g. `v0.X.0-rc.0`, `v0.X.0-rc.1`, ... `v0.X.0-rc.N`).
- [ ] Generate and edit release notes in CHANGELOG.md.
- [ ] **Validation of release candidate**
- [ ] **Automated testing**
- [ ] Ensure all the unit tests (specifically js-waku tests) are green against the release candidate.
- [ ] Ask Vac-QA and Vac-DST to perform the available tests against the release candidate.
- [ ] Vac-DST (an additional report is needed; see [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f))
- [ ] **Waku fleet testing**
- [ ] Deploy the release candidate to `waku.test` and `waku.sandbox` fleets.
- Start the [deployment job](https://ci.infra.status.im/job/nim-waku/) for both fleets and wait for it to finish (Jenkins access required; ask the infra team if you don't have it).
- After completion, disable [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to `master`.
- Verify the deployed version at https://fleets.waku.org/.
- Confirm the container image exists on [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab).
- [ ] Search _Kibana_ logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test` and `waku.sandbox`.
- Most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")` OR `(fleet: "waku.sandbox" AND message: "SIGSEGV")`.
- [ ] Re-enable the `waku.test` fleet to resume auto-deployment of the latest `master` commit.
- [ ] **Status fleet testing**
- [ ] Deploy release candidate to `status.staging`
- [ ] Perform [sanity check](https://www.notion.so/How-to-test-Nwaku-on-Status-12c6e4b9bf06420ca868bd199129b425) and log results as comments in this issue.
- [ ] Connect 2 instances to the `status.staging` fleet, one in relay mode, the other as a light client.
- 1:1 Chats with each other
- Send and receive messages in a community
- Close one instance, send messages with second instance, reopen first instance and confirm messages sent while offline are retrieved from store
- [ ] Perform checks based on _end user impact_
- [ ] Inform other (Waku and Status) CCs to point their instances to `status.staging` for a few days. Ping Status colleagues on their Discord server or in the [Status community](https://status.app/c/G3kAAMSQtb05kog3aGbr3kiaxN4tF5xy4BAGEkkLwILk2z3GcoYlm5hSJXGn7J3laft-tnTwDWmYJ18dP_3bgX96dqr_8E3qKAvxDf3NrrCMUBp4R9EYkQez9XSM4486mXoC3mIln2zc-TNdvjdfL9eHVZ-mGgs=#zQ3shZeEJqTC1xhGUjxuS4rtHSrhJ8vUYp64v6qWkLpvdy9L9) (this is not a blocking point.)
- [ ] Ask Status-QA to perform sanity checks (as described above) and checks based on _end user impact_; specify the version being tested
- [ ] Ask Status-QA or infra to run the automated Status e2e tests against `status.staging`
- [ ] Get other CCs' sign-off: they should comment on this PR, e.g., "Used the app for a week, no problem." If problems are reported, resolve them and create a new RC.
- [ ] **Get Status-QA sign-off**, ensuring that the `status.test` update will not disturb ongoing activities.
- [ ] **Proceed with release**
- [ ] Assign a final release tag (`v0.X.0`) to the same commit that contains the validated release-candidate tag (e.g. `v0.X.0-rc.N`).
- [ ] Update [nwaku-compose](https://github.com/logos-messaging/nwaku-compose) and [waku-simulator](https://github.com/logos-messaging/waku-simulator) according to the new release.
- [ ] Bump nwaku dependency in [waku-rust-bindings](https://github.com/logos-messaging/waku-rust-bindings) and make sure all examples and tests work.
- [ ] Bump nwaku dependency in [waku-go-bindings](https://github.com/logos-messaging/waku-go-bindings) and make sure all tests work.
- [ ] Create GitHub release (https://github.com/logos-messaging/nwaku/releases).
- [ ] Submit a PR to merge the release branch back to `master`. Make sure you use the option "Merge pull request (Create a merge commit)" to perform the merge. Ping repo admin if this option is not available.
- [ ] **Promote release to fleets**
- [ ] Ask the PM lead to announce the release.
- [ ] Update infra config with any deprecated arguments or changed options.
### Links
- [Release process](https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md)
- [Release notes](https://github.com/logos-messaging/nwaku/blob/master/CHANGELOG.md)
- [Fleet ownership](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64)
- [Infra-nim-waku](https://github.com/status-im/infra-nim-waku)
- [Jenkins](https://ci.infra.status.im/job/nim-waku/)
- [Fleets](https://fleets.waku.org/)
- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab)
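The Kibana crash search used in this checklist has a simple local analogue: grep the fleet logs for the same `SIGSEGV` marker. A sketch on fabricated log lines (the file and its contents are invented for illustration):

```shell
# Fabricated sample of fleet logs; only the SIGSEGV marker matters here.
log="$(mktemp)"
printf '%s\n' \
  'INF peer connected' \
  'SIGSEGV: Illegal storage access. (Attempt to read from nil?)' \
  'INF message relayed' > "$log"

# Same filter the checklist applies in Kibana, expressed as grep.
if grep -q 'SIGSEGV' "$log"; then
  echo "crash found"
else
  echo "clean"
fi
```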


@@ -0,0 +1,72 @@
---
name: Prepare release
about: Execute tasks for the creation and publishing of a new release
title: 'Prepare release 0.0.0'
labels: release
assignees: ''
---
<!--
Add appropriate release number to title!
For detailed info on the release process refer to https://github.com/waku-org/nwaku/blob/master/docs/contributors/release-process.md
-->
### Items to complete
All items below are to be completed by the owner of the given release.
- [ ] Create release branch
- [ ] Assign release candidate tag to the release branch HEAD (e.g. `v0.30.0-rc.0`)
- [ ] Generate and edit release notes in CHANGELOG.md
- [ ] Review possible update of [config-options](https://github.com/waku-org/docs.waku.org/blob/develop/docs/guides/nwaku/config-options.md)
- [ ] _End user impact_: Summarize impact of changes on Status end users (can be a comment in this issue).
- [ ] **Validate release candidate**
- [ ] Bump nwaku dependency in [waku-rust-bindings](https://github.com/waku-org/waku-rust-bindings) and make sure all examples and tests work
- [ ] Automated testing
- [ ] Ensure js-waku tests are green against the release candidate
- [ ] Ask Vac-QA and Vac-DST to perform available tests against release candidate
- [ ] Vac-QA
- [ ] Vac-DST (an additional report is needed; see [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f))
- [ ] **On Waku fleets**
- [ ] Lock `waku.test` fleet to release candidate version
- [ ] Continuously stress `waku.test` fleet for a week (e.g. from `wakudev`)
- [ ] Search _Kibana_ logs from the previous month (since last release was deployed), for possible crashes or errors in `waku.test` and `waku.sandbox`.
- Most relevant logs are `(fleet: "waku.test" OR fleet: "waku.sandbox") AND message: "SIGSEGV"`
- [ ] Run the release candidate with `waku-simulator` and ensure that nodes connect to each other
- [ ] Unlock `waku.test` to resume auto-deployment of latest `master` commit
- [ ] **On Status fleet**
- [ ] Deploy release candidate to `status.staging`
- [ ] Perform [sanity check](https://www.notion.so/How-to-test-Nwaku-on-Status-12c6e4b9bf06420ca868bd199129b425) and log results as comments in this issue.
- [ ] Connect 2 instances to the `status.staging` fleet, one in relay mode, the other as a light client.
- [ ] 1:1 Chats with each other
- [ ] Send and receive messages in a community
- [ ] Close one instance, send messages with second instance, reopen first instance and confirm messages sent while offline are retrieved from store
- [ ] Perform checks based on _end user impact_
- [ ] Inform other (Waku and Status) CCs to point their instances to `status.staging` for a few days. Ping Status colleagues on their Discord server or in the [Status community](https://status.app/c/G3kAAMSQtb05kog3aGbr3kiaxN4tF5xy4BAGEkkLwILk2z3GcoYlm5hSJXGn7J3laft-tnTwDWmYJ18dP_3bgX96dqr_8E3qKAvxDf3NrrCMUBp4R9EYkQez9XSM4486mXoC3mIln2zc-TNdvjdfL9eHVZ-mGgs=#zQ3shZeEJqTC1xhGUjxuS4rtHSrhJ8vUYp64v6qWkLpvdy9L9) (this is not a blocking point.)
- [ ] Ask Status-QA to perform sanity checks (as described above) plus checks based on _end user impact_; specify the version being tested
- [ ] Ask Status-QA or infra to run the automated Status e2e tests against `status.staging`
- [ ] Get other CCs' sign-off: they should comment on this PR, e.g., "Used the app for a week, no problem." If problems are reported, resolve them and create a new RC.
- [ ] **Get Status-QA sign-off**, ensuring that the `status.test` update will not disturb ongoing activities.
- [ ] **Proceed with release**
- [ ] Assign a release tag to the same commit that contains the validated release-candidate tag
- [ ] Create GitHub release
- [ ] Deploy the release to DockerHub
- [ ] Announce the release
- [ ] **Promote release to fleets**.
- [ ] Update infra config with any deprecated arguments or changed options
- [ ] [Deploy final release to `waku.sandbox` fleet](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-sandbox)
- [ ] [Deploy final release to `status.staging` fleet](https://ci.infra.status.im/job/nim-waku/job/deploy-shards-staging/)
- [ ] [Deploy final release to `status.prod` fleet](https://ci.infra.status.im/job/nim-waku/job/deploy-shards-test/)
- [ ] **Post release**
- [ ] Submit a PR from the release branch to master. Merge it using the "Create a merge commit" option.
- [ ] Update waku-org/nwaku-compose with the new release version.
- [ ] Update version in js-waku repo. [update only this](https://github.com/waku-org/js-waku/blob/7c0ce7b2eca31cab837da0251e1e4255151be2f7/.github/workflows/ci.yml#L135) by submitting a PR.


@@ -1,8 +1,26 @@
-## Description
+# Description
+<!--- Describe your changes to provide context for reviewrs -->
-## Changes
+# Changes
+<!-- List of detailed changes -->
+- [ ] ...
+- [ ] ...
+<!--
+## How to test
+1.
+1.
+1.
+-->
+<!--
 ## Issue
 closes #
+-->


@@ -54,9 +54,9 @@ jobs:
 strategy:
 fail-fast: false
 matrix:
-os: [ubuntu-22.04, macos-15]
+os: [ubuntu-22.04, macos-13]
 runs-on: ${{ matrix.os }}
-timeout-minutes: 45
+timeout-minutes: 60
 name: build-${{ matrix.os }}
 steps:
@@ -76,28 +76,18 @@ jobs:
 .git/modules
 key: ${{ runner.os }}-vendor-modules-${{ steps.submodules.outputs.hash }}
-- name: Make update
-run: make update
 - name: Build binaries
 run: make V=1 QUICK_AND_DIRTY_COMPILER=1 all tools
-build-windows:
-needs: changes
-if: ${{ needs.changes.outputs.v2 == 'true' || needs.changes.outputs.common == 'true' }}
-uses: ./.github/workflows/windows-build.yml
-with:
-branch: ${{ github.ref }}
 test:
 needs: changes
 if: ${{ needs.changes.outputs.v2 == 'true' || needs.changes.outputs.common == 'true' }}
 strategy:
 fail-fast: false
 matrix:
-os: [ubuntu-22.04, macos-15]
+os: [ubuntu-22.04, macos-13]
 runs-on: ${{ matrix.os }}
-timeout-minutes: 45
+timeout-minutes: 60
 name: test-${{ matrix.os }}
 steps:
@@ -117,9 +107,6 @@ jobs:
 .git/modules
 key: ${{ runner.os }}-vendor-modules-${{ steps.submodules.outputs.hash }}
-- name: Make update
-run: make update
 - name: Run tests
 run: |
 postgres_enabled=0
@@ -132,36 +119,38 @@ jobs:
 export NIMFLAGS="--colors:off -d:chronicles_colors:none"
 export USE_LIBBACKTRACE=0
-make V=1 LOG_LEVEL=DEBUG QUICK_AND_DIRTY_COMPILER=1 POSTGRES=$postgres_enabled test
-make V=1 LOG_LEVEL=DEBUG QUICK_AND_DIRTY_COMPILER=1 POSTGRES=$postgres_enabled testwakunode2
+make V=1 LOG_LEVEL=DEBUG QUICK_AND_DIRTY_COMPILER=1 POSTGRES=$postgres_enabled test testwakunode2
 build-docker-image:
 needs: changes
 if: ${{ needs.changes.outputs.v2 == 'true' || needs.changes.outputs.common == 'true' || needs.changes.outputs.docker == 'true' }}
-uses: logos-messaging/logos-messaging-nim/.github/workflows/container-image.yml@10dc3d3eb4b6a3d4313f7b2cc4a85a925e9ce039
+uses: waku-org/nwaku/.github/workflows/container-image.yml@master
 secrets: inherit
 nwaku-nwaku-interop-tests:
 needs: build-docker-image
-uses: logos-messaging/logos-messaging-interop-tests/.github/workflows/nim_waku_PR.yml@SMOKE_TEST_STABLE
+uses: waku-org/waku-interop-tests/.github/workflows/nim_waku_PR.yml@SMOKE_TEST_0.0.1
 with:
 node_nwaku: ${{ needs.build-docker-image.outputs.image }}
 secrets: inherit
 js-waku-node:
 needs: build-docker-image
-uses: logos-messaging/js-waku/.github/workflows/test-node.yml@master
+uses: waku-org/js-waku/.github/workflows/test-node.yml@master
 with:
 nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
 test_type: node
+debug: waku*
 js-waku-node-optional:
 needs: build-docker-image
-uses: logos-messaging/js-waku/.github/workflows/test-node.yml@master
+uses: waku-org/js-waku/.github/workflows/test-node.yml@master
 with:
 nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
 test_type: node-optional
+debug: waku*
 lint:
 name: "Lint"


@@ -41,7 +41,7 @@ jobs:
 env:
 QUAY_PASSWORD: ${{ secrets.QUAY_PASSWORD }}
 QUAY_USER: ${{ secrets.QUAY_USER }}
 - name: Checkout code
 if: ${{ steps.secrets.outcome == 'success' }}
 uses: actions/checkout@v4
@@ -65,7 +65,6 @@ jobs:
 id: build
 if: ${{ steps.secrets.outcome == 'success' }}
 run: |
-make update
 make -j${NPROC} V=1 QUICK_AND_DIRTY_COMPILER=1 NIMFLAGS="-d:disableMarchNative -d:postgres -d:chronicles_colors:none" wakunode2


@@ -8,6 +8,47 @@ on:
 - synchronize
 jobs:
+main:
+name: Validate PR title
+runs-on: ubuntu-22.04
+permissions:
+pull-requests: write
+steps:
+- uses: amannn/action-semantic-pull-request@v5
+id: lint_pr_title
+with:
+types: |
+chore
+docs
+feat
+fix
+refactor
+style
+test
+env:
+GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+- uses: marocchino/sticky-pull-request-comment@v2
+# When the previous steps fails, the workflow would stop. By adding this
+# condition you can continue the execution with the populated error message.
+if: always() && (steps.lint_pr_title.outputs.error_message != null)
+with:
+header: pr-title-lint-error
+message: |
+Hey there and thank you for opening this pull request! 👋🏼
+We require pull request titles to follow the [Conventional Commits specification](https://www.conventionalcommits.org/en/v1.0.0/) and it looks like your proposed title needs to be adjusted.
+Details:
+> ${{ steps.lint_pr_title.outputs.error_message }}
+# Delete a previous comment when the issue has been resolved
+- if: ${{ steps.lint_pr_title.outputs.error_message == null }}
+uses: marocchino/sticky-pull-request-comment@v2
+with:
+header: pr-title-lint-error
+delete: true
 labels:
 runs-on: ubuntu-22.04
@@ -40,6 +81,7 @@ jobs:
 Please also make sure the label `release-notes` is added to make sure any changes to the user interface are properly announced in changelog and release notes.
 comment_tag: configs
 - name: Comment DB schema change
 uses: thollander/actions-comment-pull-request@v2
 if: ${{steps.filter.outputs.db_schema == 'true'}}
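The semantic-PR step added above enforces Conventional Commits titles with the listed types. A local pre-check can mirror it with a regex; the regex and sample titles here are illustrative, not the action's exact implementation:

```shell
# Allowed types mirror the workflow's `types` list.
check_title() {
  echo "$1" | grep -Eq '^(chore|docs|feat|fix|refactor|style|test)(\([a-z0-9_-]+\))?!?: .+'
}

check_title "feat(store): add resume support" && echo "ok: feat(store)"
check_title "update readme" || echo "rejected: missing type prefix"
```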


@@ -34,10 +34,10 @@ jobs:
 needs: tag-name
 strategy:
 matrix:
-os: [ubuntu-22.04, macos-15]
+os: [ubuntu-22.04, macos-13]
 arch: [amd64]
 include:
-- os: macos-15
+- os: macos-13
 arch: arm64
 runs-on: ${{ matrix.os }}
 steps:
@@ -47,7 +47,7 @@
 - name: prep variables
 id: vars
 run: |
 ARCH=${{matrix.arch}}
 echo "arch=${ARCH}" >> $GITHUB_OUTPUT
@@ -76,14 +76,14 @@
 tar -cvzf ${{steps.vars.outputs.nwakutools}} ./build/wakucanary ./build/networkmonitor
 - name: upload artifacts
-uses: actions/upload-artifact@v4
+uses: actions/upload-artifact@v3
 with:
 name: wakunode2
 path: ${{steps.vars.outputs.nwaku}}
 retention-days: 2
 - name: upload artifacts
-uses: actions/upload-artifact@v4
+uses: actions/upload-artifact@v3
 with:
 name: wakutools
 path: ${{steps.vars.outputs.nwakutools}}
@@ -91,14 +91,14 @@
 build-docker-image:
 needs: tag-name
-uses: logos-messaging/nwaku/.github/workflows/container-image.yml@master
+uses: waku-org/nwaku/.github/workflows/container-image.yml@master
 with:
 image_tag: ${{ needs.tag-name.outputs.tag }}
 secrets: inherit
 js-waku-node:
 needs: build-docker-image
-uses: logos-messaging/js-waku/.github/workflows/test-node.yml@master
+uses: waku-org/js-waku/.github/workflows/test-node.yml@master
 with:
 nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
 test_type: node
@@ -106,7 +106,7 @@
 js-waku-node-optional:
 needs: build-docker-image
-uses: logos-messaging/js-waku/.github/workflows/test-node.yml@master
+uses: waku-org/js-waku/.github/workflows/test-node.yml@master
 with:
 nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
 test_type: node-optional
@@ -150,7 +150,7 @@
 -u $(id -u) \
 docker.io/wakuorg/sv4git:latest \
 release-notes ${RELEASE_NOTES_TAG} --previous $(git tag -l --sort -creatordate | grep -e "^v[0-9]*\.[0-9]*\.[0-9]*$") |\
-sed -E 's@#([0-9]+)@[#\1](https://github.com/logos-messaging/nwaku/issues/\1)@g' > release_notes.md
+sed -E 's@#([0-9]+)@[#\1](https://github.com/waku-org/nwaku/issues/\1)@g' > release_notes.md
 sed -i "s/^## .*/Generated at $(date)/" release_notes.md
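The `sed` expression in the last hunk turns bare issue references into Markdown links. Run on a sample line (the commit subject is invented) it behaves like this:

```shell
# Rewrite "#1234" into a Markdown link to the nwaku issue tracker,
# the same substitution the release-notes step applies.
echo "fix store pagination #3012" |
  sed -E 's@#([0-9]+)@[#\1](https://github.com/waku-org/nwaku/issues/\1)@g'
```

Using `@` as the `s` delimiter avoids escaping the slashes in the URL.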


@@ -14,10 +14,10 @@ jobs:
 build-and-upload:
 strategy:
 matrix:
-os: [ubuntu-22.04, macos-15]
+os: [ubuntu-22.04, macos-13]
 arch: [amd64]
 include:
-- os: macos-15
+- os: macos-13
 arch: arm64
 runs-on: ${{ matrix.os }}
 timeout-minutes: 60
@@ -41,84 +41,25 @@ jobs:
 .git/modules
 key: ${{ runner.os }}-${{matrix.arch}}-submodules-${{ steps.submodules.outputs.hash }}
-- name: Get tag
+- name: prep variables
-id: version
-run: |
-# Use full tag, e.g., v0.37.0
-echo "version=${GITHUB_REF_NAME}" >> $GITHUB_OUTPUT
-- name: Prep variables
 id: vars
 run: |
-VERSION=${{ steps.version.outputs.version }}
+NWAKU_ARTIFACT_NAME=$(echo "nwaku-${{matrix.arch}}-${{runner.os}}.tar.gz" | tr "[:upper:]" "[:lower:]")
-NWAKU_ARTIFACT_NAME=$(echo "waku-${{matrix.arch}}-${{runner.os}}.tar.gz" | tr "[:upper:]" "[:lower:]")
+echo "nwaku=${NWAKU_ARTIFACT_NAME}" >> $GITHUB_OUTPUT
-echo "waku=${NWAKU_ARTIFACT_NAME}" >> $GITHUB_OUTPUT
-if [[ "${{ runner.os }}" == "Linux" ]]; then
+- name: Install dependencies
-LIBWAKU_ARTIFACT_NAME=$(echo "libwaku-${VERSION}-${{matrix.arch}}-${{runner.os}}-linux.deb" | tr "[:upper:]" "[:lower:]")
-fi
-if [[ "${{ runner.os }}" == "macOS" ]]; then
-LIBWAKU_ARTIFACT_NAME=$(echo "libwaku-${VERSION}-${{matrix.arch}}-macos.tar.gz" | tr "[:upper:]" "[:lower:]")
-fi
-echo "libwaku=${LIBWAKU_ARTIFACT_NAME}" >> $GITHUB_OUTPUT
-- name: Install build dependencies
-run: |
-if [[ "${{ runner.os }}" == "Linux" ]]; then
-sudo apt-get update && sudo apt-get install -y build-essential dpkg-dev
-fi
-- name: Build Waku artifacts
 run: |
 OS=$([[ "${{runner.os}}" == "macOS" ]] && echo "macosx" || echo "linux")
 make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}}" V=1 update
 make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}} -d:postgres" CI=false wakunode2
 make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}}" CI=false chat2
-tar -cvzf ${{steps.vars.outputs.waku}} ./build/
+tar -cvzf ${{steps.vars.outputs.nwaku}} ./build/
-make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}} -d:postgres" CI=false libwaku
+- name: Upload asset
-make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}} -d:postgres" CI=false STATIC=1 libwaku
-- name: Create distributable libwaku package
-run: |
-VERSION=${{ steps.version.outputs.version }}
-if [[ "${{ runner.os }}" == "Linux" ]]; then
-rm -rf pkg
-mkdir -p pkg/DEBIAN pkg/usr/local/lib pkg/usr/local/include
-cp build/libwaku.so pkg/usr/local/lib/
-cp build/libwaku.a pkg/usr/local/lib/
-cp library/libwaku.h pkg/usr/local/include/
-echo "Package: waku" >> pkg/DEBIAN/control
-echo "Version: ${VERSION}" >> pkg/DEBIAN/control
-echo "Priority: optional" >> pkg/DEBIAN/control
-echo "Section: libs" >> pkg/DEBIAN/control
-echo "Architecture: ${{matrix.arch}}" >> pkg/DEBIAN/control
-echo "Maintainer: Waku Team <ivansete@status.im>" >> pkg/DEBIAN/control
-echo "Description: Waku library" >> pkg/DEBIAN/control
-dpkg-deb --build pkg ${{steps.vars.outputs.libwaku}}
-fi
-if [[ "${{ runner.os }}" == "macOS" ]]; then
-tar -cvzf ${{steps.vars.outputs.libwaku}} ./build/libwaku.dylib ./build/libwaku.a ./library/libwaku.h
-fi
-- name: Upload waku artifact
 uses: actions/upload-artifact@v4.4.0
 with:
-name: waku-${{ steps.version.outputs.version }}-${{ matrix.arch }}-${{ runner.os }}
+name: ${{steps.vars.outputs.nwaku}}
-path: ${{ steps.vars.outputs.waku }}
+path: ${{steps.vars.outputs.nwaku}}
-if-no-files-found: error
-- name: Upload libwaku artifact
-uses: actions/upload-artifact@v4.4.0
-with:
-name: libwaku-${{ steps.version.outputs.version }}-${{ matrix.arch }}-${{ runner.os }}
-path: ${{ steps.vars.outputs.libwaku }}
 if-no-files-found: error
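The `prep variables` step above lowercases the composed artifact name with `tr`. With the workflow's matrix and runner context stubbed in locally (the `arch` and `runner_os` values are examples):

```shell
# Stub the workflow's matrix/runner context.
arch=amd64
runner_os=Linux

# Same pipeline as the workflow step: compose the name, then lowercase it.
NWAKU_ARTIFACT_NAME=$(echo "nwaku-${arch}-${runner_os}.tar.gz" | tr "[:upper:]" "[:lower:]")
echo "$NWAKU_ARTIFACT_NAME"
```

The `tr` pass is what turns `Linux`/`macOS` from `runner.os` into the lowercase names used for release assets.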


@@ -1,104 +0,0 @@
name: ci / build-windows
on:
workflow_call:
inputs:
branch:
required: true
type: string
jobs:
build:
runs-on: windows-latest
defaults:
run:
shell: msys2 {0}
env:
MSYSTEM: MINGW64
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Setup MSYS2
uses: msys2/setup-msys2@v2
with:
update: true
install: >-
git
base-devel
mingw-w64-x86_64-toolchain
make
cmake
upx
mingw-w64-x86_64-rust
mingw-w64-x86_64-postgresql
mingw-w64-x86_64-gcc
mingw-w64-x86_64-gcc-libs
mingw-w64-x86_64-libwinpthread-git
mingw-w64-x86_64-zlib
mingw-w64-x86_64-openssl
mingw-w64-x86_64-python
mingw-w64-x86_64-cmake
mingw-w64-x86_64-llvm
mingw-w64-x86_64-clang
- name: Add UPX to PATH
run: |
echo "/usr/bin:$PATH" >> $GITHUB_PATH
echo "/mingw64/bin:$PATH" >> $GITHUB_PATH
echo "/usr/lib:$PATH" >> $GITHUB_PATH
echo "/mingw64/lib:$PATH" >> $GITHUB_PATH
- name: Verify dependencies
run: |
which upx gcc g++ make cmake cargo rustc python
- name: Updating submodules
run: git submodule update --init --recursive
- name: Creating tmp directory
run: mkdir -p tmp
- name: Building Nim
run: |
cd vendor/nimbus-build-system/vendor/Nim
./build_all.bat
cd ../../../..
- name: Building miniupnpc
run: |
cd vendor/nim-nat-traversal/vendor/miniupnp/miniupnpc
make -f Makefile.mingw CC=gcc CXX=g++ libminiupnpc.a V=1
cd ../../../../..
- name: Building libnatpmp
run: |
cd ./vendor/nim-nat-traversal/vendor/libnatpmp-upstream
make CC="gcc -fPIC -D_WIN32_WINNT=0x0600 -DNATPMP_STATICLIB" libnatpmp.a V=1
cd ../../../../
- name: Building wakunode2.exe
run: |
make wakunode2 LOG_LEVEL=DEBUG V=3 -j8
- name: Building libwaku.dll
run: |
make libwaku STATIC=0 LOG_LEVEL=DEBUG V=1 -j
- name: Check Executable
run: |
if [ -f "./build/wakunode2.exe" ]; then
echo "wakunode2.exe build successful"
else
echo "Build failed: wakunode2.exe not found"
exit 1
fi
if [ -f "./build/libwaku.dll" ]; then
echo "libwaku.dll build successful"
else
echo "Build failed: libwaku.dll not found"
exit 1
fi

.gitignore (vendored) — 17 changed lines

@@ -59,10 +59,6 @@ nimbus-build-system.paths
 /examples/nodejs/build/
 /examples/rust/target/
-# Xcode user data
-xcuserdata/
-*.xcuserstate
 # Coverage
 coverage_html_report/
@@ -76,16 +72,3 @@ coverage_html_report/
 **/rln_tree/
 **/certs/
-# simple qt example
-.qmake.stash
-main-qt
-waku_handler.moc.cpp
-# Nix build result
-result
-# llms
-AGENTS.md
-nimble.develop
-nimble.paths
-nimbledeps

.gitmodules vendored (14 changed lines)

@@ -168,7 +168,7 @@
 path = vendor/db_connector
 url = https://github.com/nim-lang/db_connector.git
 ignore = untracked
-branch = devel
+branch = master
 [submodule "vendor/nph"]
 ignore = untracked
 branch = master
@@ -179,13 +179,13 @@
 url = https://github.com/status-im/nim-minilru.git
 ignore = untracked
 branch = master
-[submodule "vendor/waku-rlnv2-contract"]
-path = vendor/waku-rlnv2-contract
-url = https://github.com/logos-messaging/waku-rlnv2-contract.git
+[submodule "vendor/nim-quic"]
+path = vendor/nim-quic
+url = https://github.com/status-im/nim-quic.git
 ignore = untracked
 branch = master
-[submodule "vendor/nim-ffi"]
-path = vendor/nim-ffi
-url = https://github.com/logos-messaging/nim-ffi/
+[submodule "vendor/nim-ngtcp2"]
+path = vendor/nim-ngtcp2
+url = https://github.com/vacp2p/nim-ngtcp2.git
 ignore = untracked
 branch = master

AGENTS.md (509 deleted lines)

@@ -1,509 +0,0 @@
# AGENTS.md - AI Coding Context
This file provides essential context for LLMs assisting with Logos Messaging development.
## Project Identity
Logos Messaging is designed as a shared public network for generalized messaging, not application-specific infrastructure.
This project is a Nim implementation of a libp2p protocol suite for private, censorship-resistant P2P messaging. It targets resource-restricted devices and privacy-preserving communication.
Logos Messaging was formerly known as Waku. Waku-related terminology remains within the codebase for historical reasons.
### Design Philosophy
Key architectural decisions:
Resource-restricted first: Protocols differentiate between full nodes (relay) and light clients (filter, lightpush, store). Light clients can participate without maintaining full message history or relay capabilities. This explains the client/server split in protocol implementations.
Privacy through unlinkability: RLN (Rate Limiting Nullifier) provides DoS protection while preserving sender anonymity. Messages are routed through pubsub topics with automatic sharding across 8 shards. Code prioritizes metadata privacy alongside content encryption.
Scalability via sharding: The network uses automatic content-topic-based sharding to distribute traffic. This is why you'll see sharding logic throughout the codebase and why pubsub topic selection is protocol-level, not application-level.
See [documentation](https://docs.waku.org/learn/) for architectural details.
### Core Protocols
- Relay: Pub/sub message routing using GossipSub
- Store: Historical message retrieval and persistence
- Filter: Lightweight message filtering for resource-restricted clients
- Lightpush: Lightweight message publishing for clients
- Peer Exchange: Peer discovery mechanism
- RLN Relay: Rate limiting nullifier for spam protection
- Metadata: Cluster and shard metadata exchange between peers
- Mix: Mixnet protocol for enhanced privacy through onion routing
- Rendezvous: Alternative peer discovery mechanism
### Key Terminology
- ENR (Ethereum Node Record): Node identity and capability advertisement
- Multiaddr: libp2p addressing format (e.g., `/ip4/127.0.0.1/tcp/60000/p2p/16Uiu2...`)
- PubsubTopic: Gossipsub topic for message routing (e.g., `/waku/2/default-waku/proto`)
- ContentTopic: Application-level message categorization (e.g., `/my-app/1/chat/proto`)
- Sharding: Partitioning network traffic across topics (static or auto-sharding)
- RLN (Rate Limiting Nullifier): Zero-knowledge proof system for spam prevention
### Specifications
All specs are at [rfc.vac.dev/waku](https://rfc.vac.dev/waku). RFCs use `WAKU2-XXX` format (not legacy `WAKU-XXX`).
## Architecture
### Protocol Module Pattern
Each protocol typically follows this structure:
```
waku_<protocol>/
├── protocol.nim # Main protocol type and handler logic
├── client.nim # Client-side API
├── rpc.nim # RPC message types
├── rpc_codec.nim # Protobuf encoding/decoding
├── common.nim # Shared types and constants
└── protocol_metrics.nim # Prometheus metrics
```
### WakuNode Architecture
- WakuNode (`waku/node/waku_node.nim`) is the central orchestrator
- Protocols are "mounted" onto the node's switch (libp2p component)
- PeerManager handles peer selection and connection management
- Switch provides libp2p transport, security, and multiplexing
Example protocol type definition:
```nim
type WakuFilter* = ref object of LPProtocol
subscriptions*: FilterSubscriptions
peerManager: PeerManager
messageCache: TimedCache[string]
```
## Development Essentials
### Build Requirements
- Nim 2.x (check `waku.nimble` for minimum version)
- Rust toolchain (required for RLN dependencies)
- Build system: Make with nimbus-build-system
### Build System
The project uses Makefile with nimbus-build-system (Status's Nim build framework):
```bash
# Initial build (updates submodules)
make wakunode2
# After git pull, update submodules
make update
# Build with custom flags
make wakunode2 NIMFLAGS="-d:chronicles_log_level=DEBUG"
```
Note: The build system uses `--mm:refc` memory management (automatically enforced). Only relevant if compiling outside the standard build system.
### Common Make Targets
```bash
make wakunode2 # Build main node binary
make test # Run all tests
make testcommon # Run common tests only
make libwakuStatic # Build static C library
make chat2 # Build chat example
make install-nph # Install git hook for auto-formatting
```
### Testing
```bash
# Run all tests
make test
# Run specific test file
make test tests/test_waku_enr.nim
# Run specific test case from file
make test tests/test_waku_enr.nim "check capabilities support"
# Build and run test separately (for development iteration)
make test tests/test_waku_enr.nim
```
Test structure uses `testutils/unittests`:
```nim
import testutils/unittests
suite "Waku ENR - Capabilities":
test "check capabilities support":
## Given
let bitfield: CapabilitiesBitfield = 0b0000_1101u8
## Then
check:
bitfield.supportsCapability(Capabilities.Relay)
not bitfield.supportsCapability(Capabilities.Store)
```
### Code Formatting
Mandatory: All code must be formatted with `nph` (vendored in `vendor/nph`)
```bash
# Format specific file
make nph/waku/waku_core.nim
# Install git pre-commit hook (auto-formats on commit)
make install-nph
```
The nph formatter handles all formatting details automatically, especially with the pre-commit hook installed. Focus on semantic correctness.
### Logging
Uses `chronicles` library with compile-time configuration:
```nim
import chronicles
logScope:
topics = "waku lightpush"
info "handling request", peerId = peerId, topic = pubsubTopic
error "request failed", error = msg
```
Compile with log level:
```bash
nim c -d:chronicles_log_level=TRACE myfile.nim
```
## Code Conventions
Common pitfalls:
- Always handle Result types explicitly
- Avoid global mutable state: Pass state through parameters
- Keep functions focused: Under 50 lines when possible
- Prefer compile-time checks (`static assert`) over runtime checks
### Naming
- Files/Directories: `snake_case` (e.g., `waku_lightpush`, `peer_manager`)
- Procedures: `camelCase` (e.g., `handleRequest`, `pushMessage`)
- Types: `PascalCase` (e.g., `WakuFilter`, `PubsubTopic`)
- Constants: `PascalCase` (e.g., `MaxContentTopicsPerRequest`)
- Constructors: `func init(T: type Xxx, params): T`
- For ref types: `func new(T: type Xxx, params): ref T`
- Exceptions: `XxxError` for CatchableError, `XxxDefect` for Defect
- ref object types: `XxxRef` suffix
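A minimal sketch of these conventions in practice (all names below are hypothetical, chosen only for illustration):

```nim
type
  PeerStoreError* = object of CatchableError # CatchableError subtype: `XxxError`

  PeerStore* = object
    capacity: int

func init*(T: type PeerStore, capacity: int): T =
  # Value-type constructor follows the `init` convention.
  PeerStore(capacity: capacity)

let store = PeerStore.init(capacity = 16)
```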
### Imports Organization
Group imports: stdlib, external libs, internal modules:
```nim
import
std/[options, sequtils], # stdlib
results, chronicles, chronos, # external
libp2p/peerid
import
../node/peer_manager, # internal (separate import block)
../waku_core,
./common
```
### Async Programming
Uses chronos, not stdlib `asyncdispatch`:
```nim
proc handleRequest(
wl: WakuLightPush, peerId: PeerId
): Future[WakuLightPushResult] {.async.} =
let res = await wl.pushHandler(peerId, pubsubTopic, message)
return res
```
### Error Handling
The project uses both Result types and exceptions:
Result types from nim-results are used for protocol and API-level errors:
```nim
proc subscribe(
wf: WakuFilter, peerId: PeerID
): Future[FilterSubscribeResult] {.async.} =
if contentTopics.len > MaxContentTopicsPerRequest:
return err(FilterSubscribeError.badRequest("exceeds maximum"))
# Handle Result with isOkOr
(await wf.subscriptions.addSubscription(peerId, criteria)).isOkOr:
return err(FilterSubscribeError.serviceUnavailable(error))
ok()
```
Exceptions still used for:
- chronos async failures (CancelledError, etc.)
- Database/system errors
- Library interop
Most files start with `{.push raises: [].}` to disable exception tracking, then use try/except blocks where needed.
### Pragma Usage
```nim
{.push raises: [].} # Disable default exception tracking (at file top)
proc myProc(): Result[T, E] {.async.} = # Async proc
```
### Protocol Inheritance
Protocols inherit from libp2p's `LPProtocol`:
```nim
type WakuLightPush* = ref object of LPProtocol
rng*: ref rand.HmacDrbgContext
peerManager*: PeerManager
pushHandler*: PushMessageHandler
```
### Type Visibility
- Public exports use `*` suffix: `type WakuFilter* = ...`
- Fields without `*` are module-private
## Style Guide Essentials
This section summarizes key Nim style guidelines relevant to this project. Full guide: https://status-im.github.io/nim-style-guide/
### Language Features
Import and Export
- Use explicit import paths with std/ prefix for stdlib
- Group imports: stdlib, external, internal (separate blocks)
- Export modules whose types appear in public API
- Avoid include
Macros and Templates
- Avoid macros and templates - prefer simple constructs
- Avoid generating public API with macros
- Put logic in templates, use macros only for glue code
Object Construction
- Prefer Type(field: value) syntax
- Use Type.init(params) convention for constructors
- Default zero-initialization should be valid state
- Avoid using result variable for construction
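For example, under these rules a (hypothetical) config type would be built with named fields in a single expression:

```nim
type RelayConfig = object # illustrative type, not part of the codebase
  port: int
  host: string

# Preferred: named-field construction as one expression.
let cfg = RelayConfig(port: 60000, host: "0.0.0.0")

# Avoided: `var cfg: RelayConfig` followed by field-by-field assignment,
# or building the object through the implicit `result` variable.
```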
ref object Types
- Avoid ref object unless needed for:
- Resource handles requiring reference semantics
- Shared ownership
- Reference-based data structures (trees, lists)
- Stable pointer for FFI
- Use explicit ref MyType where possible
- Name ref object types with Ref suffix: XxxRef
Memory Management
- Prefer stack-based and statically sized types in core code
- Use heap allocation in glue layers
- Avoid alloca
- For FFI: use create/dealloc or createShared/deallocShared
Variable Usage
- Use most restrictive of const, let, var (prefer const over let over var)
- Prefer expressions for initialization over var then assignment
- Avoid result variable - use explicit return or expression-based returns
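A small sketch of the preferred bindings:

```nim
# Most restrictive binding wins: const over let over var.
const MaxPeers = 50 # known at compile time

# Expression-based initialization instead of `var` + later assignment.
let label =
  if MaxPeers > 10: "large"
  else: "small"
```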
Functions
- Prefer func over proc
- Avoid public (*) symbols not part of intended API
- Prefer openArray over seq for function parameters
Methods (runtime polymorphism)
- Avoid method keyword for dynamic dispatch
- Prefer manual vtable with proc closures for polymorphism
- Methods lack support for generics
Miscellaneous
- Annotate callback proc types with {.raises: [], gcsafe.}
- Avoid explicit {.inline.} pragma
- Avoid converters
- Avoid finalizers
Type Guidelines
Binary Data
- Use byte for binary data
- Use seq[byte] for dynamic arrays
- Convert string to seq[byte] early if stdlib returns binary as string
Integers
- Prefer signed (int, int64) for counting, lengths, indexing
- Use unsigned with explicit size (uint8, uint64) for binary data, bit ops
- Avoid Natural
- Check ranges before converting to int
- Avoid casting pointers to int
- Avoid range types
Strings
- Use string for text
- Use seq[byte] for binary data instead of string
### Error Handling
Philosophy
- Prefer Result, Opt for explicit error handling
- Use Exceptions only for legacy code compatibility
Result Types
- Use Result[T, E] for operations that can fail
- Use cstring for simple error messages: Result[T, cstring]
- Use enum for errors needing differentiation: Result[T, SomeErrorEnum]
- Use Opt[T] for simple optional values
- Annotate all modules: {.push raises: [].} at top
Exceptions (when unavoidable)
- Inherit from CatchableError, name XxxError
- Use Defect for panics/logic errors, name XxxDefect
- Annotate functions explicitly: {.raises: [SpecificError].}
- Catch specific error types, avoid catching CatchableError
- Use expression-based try blocks
- Isolate legacy exception code with try/except, convert to Result
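The last two points can be sketched together: wrap exception-raising code in an expression-based `try` and convert the failure to a Result at the boundary (the proc name and messages are illustrative):

```nim
import std/strutils
import results

proc parsePort(s: string): Result[int, cstring] =
  # Isolate the exception-raising stdlib call, convert to Result.
  let port =
    try:
      parseInt(s)
    except ValueError:
      return err("invalid port")
  if port < 0 or port > 65535:
    return err("port out of range")
  ok(port)
```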
Common Defect Sources
- Overflow in signed arithmetic
- Array/seq indexing with []
- Implicit range type conversions
Status Codes
- Avoid status code pattern
- Use Result instead
### Library Usage
Standard Library
- Use judiciously, prefer focused packages
- Prefer these replacements:
- async: chronos
- bitops: stew/bitops2
- endians: stew/endians2
- exceptions: results
- io: stew/io2
Results Library
- Use cstring errors for diagnostics without differentiation
- Use enum errors when caller needs to act on specific errors
- Use complex types when additional error context needed
- Use isOkOr pattern for chaining
Wrappers (C/FFI)
- Prefer native Nim when available
- For C libraries: use {.compile.} to build from source
- Create xxx_abi.nim for raw ABI wrapper
- Avoid C++ libraries
Miscellaneous
- Print hex output in lowercase, accept both cases
### Common Pitfalls
- Defects lack tracking by {.raises.}
- nil ref causes runtime crashes
- result variable disables branch checking
- Exception hierarchy unclear between Nim versions
- Range types have compiler bugs
- Finalizers infect all instances of type
## Common Workflows
### Adding a New Protocol
1. Create directory: `waku/waku_myprotocol/`
2. Define core files:
- `rpc.nim` - Message types
- `rpc_codec.nim` - Protobuf encoding
- `protocol.nim` - Protocol handler
- `client.nim` - Client API
- `common.nim` - Shared types
3. Define protocol type in `protocol.nim`:
```nim
type WakuMyProtocol* = ref object of LPProtocol
peerManager: PeerManager
# ... fields
```
4. Implement request handler
5. Mount in WakuNode (`waku/node/waku_node.nim`)
6. Add tests in `tests/waku_myprotocol/`
7. Export module via `waku/waku_myprotocol.nim`
### Adding a REST API Endpoint
1. Define handler in `waku/rest_api/endpoint/myprotocol/`
2. Implement endpoint following pattern:
```nim
proc installMyProtocolApiHandlers*(
router: var RestRouter, node: WakuNode
) =
router.api(MethodGet, "/waku/v2/myprotocol/endpoint") do () -> RestApiResponse:
# Implementation
return RestApiResponse.jsonResponse(data, status = Http200)
```
3. Register in `waku/rest_api/handlers.nim`
### Adding Database Migration
For message_store (SQLite):
1. Create `migrations/message_store/NNNNN_description.up.sql`
2. Create corresponding `.down.sql` for rollback
3. Increment version number sequentially
4. Test migration locally before committing
For PostgreSQL: add in `migrations/message_store_postgres/`
### Running Single Test During Development
```bash
# Build test binary
make test tests/waku_filter_v2/test_waku_client.nim
# Binary location
./build/tests/waku_filter_v2/test_waku_client.nim.bin
# Or combine
make test tests/waku_filter_v2/test_waku_client.nim "specific test name"
```
### Debugging with Chronicles
Set log level and filter topics:
```bash
nim c -r \
-d:chronicles_log_level=TRACE \
-d:chronicles_disabled_topics="eth,dnsdisc" \
tests/mytest.nim
```
## Key Constraints
### Vendor Directory
- Never edit files directly in vendor - it is auto-generated from git submodules
- Always run `make update` after pulling changes
- Managed by `nimbus-build-system`
### Chronicles Performance
- Log levels are configured at compile time for performance
- Runtime filtering is available but should be used sparingly: `-d:chronicles_runtime_filtering=on`
- Default sinks are optimized for production
### Memory Management
- Uses `refc` (reference counting with cycle collection)
- Automatically enforced by the build system (hardcoded in `waku.nimble`)
- Do not override unless absolutely necessary, as it breaks compatibility
### RLN Dependencies
- RLN code requires a Rust toolchain, which explains Rust imports in some modules
- Pre-built `librln` libraries are checked into the repository
## Quick Reference
Language: Nim 2.x | License: MIT or Apache 2.0
### Important Files
- `Makefile` - Primary build interface
- `waku.nimble` - Package definition and build tasks (called via nimbus-build-system)
- `vendor/nimbus-build-system/` - Status's build framework
- `waku/node/waku_node.nim` - Core node implementation
- `apps/wakunode2/wakunode2.nim` - Main CLI application
- `waku/factory/waku_conf.nim` - Configuration types
- `library/libwaku.nim` - C bindings entry point
### Testing Entry Points
- `tests/all_tests_waku.nim` - All Waku protocol tests
- `tests/all_tests_wakunode2.nim` - Node application tests
- `tests/all_tests_common.nim` - Common utilities tests
### Key Dependencies
- `chronos` - Async framework
- `nim-results` - Result type for error handling
- `chronicles` - Logging
- `libp2p` - P2P networking
- `confutils` - CLI argument parsing
- `presto` - REST server
- `nimcrypto` - Cryptographic primitives
Note: For specific version requirements, check `waku.nimble`.


@@ -1,193 +1,3 @@
## v0.37.1-beta (2025-12-10)
### Bug Fixes
- Remove ENR cache from peer exchange ([#3652](https://github.com/logos-messaging/logos-messaging-nim/pull/3652)) ([7920368a](https://github.com/logos-messaging/logos-messaging-nim/commit/7920368a36687cd5f12afa52d59866792d8457ca))
## v0.37.0-beta (2025-10-01)
### Notes
- Deprecated parameters:
- `tree_path` and `rlnDB` (RLN-related storage paths)
- `--dns-discovery` (fully removed, including dns-discovery-name-server)
- `keepAlive` (deprecated, config updated accordingly)
- Legacy `store` protocol is no longer supported by default.
- Improved sharding configuration: now explicit and shard-specific metrics added.
- Mix nodes are limited to IPv4 addresses only.
- [lightpush legacy](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/19/lightpush.md) is being deprecated. Use [lightpush v3](https://github.com/waku-org/specs/blob/master/standards/core/lightpush.md) instead.
### Features
- Waku API: create node via API ([#3580](https://github.com/waku-org/nwaku/pull/3580)) ([bc8acf76](https://github.com/waku-org/nwaku/commit/bc8acf76))
- Waku Sync: full topic support ([#3275](https://github.com/waku-org/nwaku/pull/3275)) ([9327da5a](https://github.com/waku-org/nwaku/commit/9327da5a))
- Mix PoC implementation ([#3284](https://github.com/waku-org/nwaku/pull/3284)) ([eb7a3d13](https://github.com/waku-org/nwaku/commit/eb7a3d13))
- Rendezvous: add request interval option ([#3569](https://github.com/waku-org/nwaku/pull/3569)) ([cc7a6406](https://github.com/waku-org/nwaku/commit/cc7a6406))
- Shard-specific metrics tracking ([#3520](https://github.com/waku-org/nwaku/pull/3520)) ([c3da29fd](https://github.com/waku-org/nwaku/commit/c3da29fd))
- Libwaku: build Windows DLL for Status-go ([#3460](https://github.com/waku-org/nwaku/pull/3460)) ([5c38a53f](https://github.com/waku-org/nwaku/commit/5c38a53f))
- RLN: add Stateless RLN support ([#3621](https://github.com/waku-org/nwaku/pull/3621))
- LOG: Raise log level of operational messages from debug to info for better visibility ([#3622](https://github.com/waku-org/nwaku/pull/3622))
### Bug Fixes
- Prevent invalid pubsub topic subscription via Relay REST API ([#3559](https://github.com/waku-org/nwaku/pull/3559)) ([a36601ab](https://github.com/waku-org/nwaku/commit/a36601ab))
- Fixed node crash when RLN is unregistered ([#3573](https://github.com/waku-org/nwaku/pull/3573)) ([3d0c6279](https://github.com/waku-org/nwaku/commit/3d0c6279))
- REST: fixed sync protocol issues ([#3503](https://github.com/waku-org/nwaku/pull/3503)) ([393e3cce](https://github.com/waku-org/nwaku/commit/393e3cce))
- Regex pattern fix for `username:password@` in URLs ([#3517](https://github.com/waku-org/nwaku/pull/3517)) ([89a3f735](https://github.com/waku-org/nwaku/commit/89a3f735))
- Sharding: applied modulus fix ([#3530](https://github.com/waku-org/nwaku/pull/3530)) ([f68d7999](https://github.com/waku-org/nwaku/commit/f68d7999))
- Metrics: switched to counter instead of gauge ([#3355](https://github.com/waku-org/nwaku/pull/3355)) ([a27eec90](https://github.com/waku-org/nwaku/commit/a27eec90))
- Fixed lightpush metrics and diagnostics ([#3486](https://github.com/waku-org/nwaku/pull/3486)) ([0ed3fc80](https://github.com/waku-org/nwaku/commit/0ed3fc80))
- Misc sync, dashboard, and CI fixes ([#3434](https://github.com/waku-org/nwaku/pull/3434), [#3508](https://github.com/waku-org/nwaku/pull/3508), [#3464](https://github.com/waku-org/nwaku/pull/3464))
### Changes
- Enable peer-exchange by default ([#3557](https://github.com/waku-org/nwaku/pull/3557)) ([7df526f8](https://github.com/waku-org/nwaku/commit/7df526f8))
- Refactor peer-exchange client and service implementations ([#3523](https://github.com/waku-org/nwaku/pull/3523)) ([4379f9ec](https://github.com/waku-org/nwaku/commit/4379f9ec))
- Updated rendezvous to use callback-based shard/capability updates ([#3558](https://github.com/waku-org/nwaku/pull/3558)) ([028bf297](https://github.com/waku-org/nwaku/commit/028bf297))
- Config updates and explicit sharding setup ([#3468](https://github.com/waku-org/nwaku/pull/3468)) ([994d485b](https://github.com/waku-org/nwaku/commit/994d485b))
- Bumped libp2p to v1.13.0 ([#3574](https://github.com/waku-org/nwaku/pull/3574)) ([b1616e55](https://github.com/waku-org/nwaku/commit/b1616e55))
- Removed legacy dependencies (e.g., libpcre in Docker builds) ([#3552](https://github.com/waku-org/nwaku/pull/3552)) ([4db4f830](https://github.com/waku-org/nwaku/commit/4db4f830))
- Benchmarks for RLN proof generation & verification ([#3567](https://github.com/waku-org/nwaku/pull/3567)) ([794c3a85](https://github.com/waku-org/nwaku/commit/794c3a85))
- Various CI/CD & infra updates ([#3515](https://github.com/waku-org/nwaku/pull/3515), [#3505](https://github.com/waku-org/nwaku/pull/3505))
### This release supports the following [libp2p protocols](https://docs.libp2p.io/concepts/protocols/):
| Protocol | Spec status | Protocol id |
| ---: | :---: | :--- |
| [`11/WAKU2-RELAY`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/11/relay.md) | `stable` | `/vac/waku/relay/2.0.0` |
| [`12/WAKU2-FILTER`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/12/filter.md) | `draft` | `/vac/waku/filter/2.0.0-beta1` <br />`/vac/waku/filter-subscribe/2.0.0-beta1` <br />`/vac/waku/filter-push/2.0.0-beta1` |
| [`13/WAKU2-STORE`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/13/store.md) | `draft` | `/vac/waku/store/2.0.0-beta4` |
| [`19/WAKU2-LIGHTPUSH`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/19/lightpush.md) | `draft` | `/vac/waku/lightpush/2.0.0-beta1` |
| [`WAKU2-LIGHTPUSH v3`](https://github.com/waku-org/specs/blob/master/standards/core/lightpush.md) | `draft` | `/vac/waku/lightpush/3.0.0` |
| [`66/WAKU2-METADATA`](https://github.com/waku-org/specs/blob/master/standards/core/metadata.md) | `raw` | `/vac/waku/metadata/1.0.0` |
| [`WAKU-SYNC`](https://github.com/waku-org/specs/blob/master/standards/core/sync.md) | `draft` | `/vac/waku/sync/1.0.0` |
## v0.36.0 (2025-06-20)
### Notes
- Extended REST API for better debugging
- Extended `/health` report
- Very detailed access to peers and actual status through [`/admin/v1/peers/...` endpoints](https://waku-org.github.io/waku-rest-api/#get-/admin/v1/peers/stats)
- Dynamic log level change with [`/admin/v1/log-level`](https://waku-org.github.io/waku-rest-api/#post-/admin/v1/log-level/-logLevel-)
- The `rln-relay-eth-client-address` parameter must now be passed as an array of RPC addresses.
- New `preset` parameter: `preset=twn` selects the RLN-protected Waku Network (cluster 1) and overrides conflicting values.
- Removed `dns-addrs` parameter as it was duplicated and unused.
- Removed `rln-relay-id-key`, `rln-relay-id-commitment-key`, `rln-relay-bandwidth-threshold` parameters.
- Effectively removed `pubsub-topic`, which was deprecated in `v0.33.0`.
- Removed `store-sync-max-payload-size` parameter.
- Removed `dns-discovery-name-server` and `discv5-only` parameters.
### Features
- Update implementation for new contract abi ([#3390](https://github.com/waku-org/nwaku/issues/3390)) ([ee4058b2d](https://github.com/waku-org/nwaku/commit/ee4058b2d))
- Lightpush v3 for lite-protocol-tester ([#3455](https://github.com/waku-org/nwaku/issues/3455)) ([3f3c59488](https://github.com/waku-org/nwaku/commit/3f3c59488))
- Retrieve metrics from libwaku ([#3452](https://github.com/waku-org/nwaku/issues/3452)) ([f016ede60](https://github.com/waku-org/nwaku/commit/f016ede60))
- Dynamic logging via REST API ([#3451](https://github.com/waku-org/nwaku/issues/3451)) ([9fe8ef8d2](https://github.com/waku-org/nwaku/commit/9fe8ef8d2))
- Add waku_disconnect_all_peers to libwaku ([#3438](https://github.com/waku-org/nwaku/issues/3438)) ([7f51d103b](https://github.com/waku-org/nwaku/commit/7f51d103b))
- Extend node /health REST endpoint with all protocol's state ([#3419](https://github.com/waku-org/nwaku/issues/3419)) ([1632496a2](https://github.com/waku-org/nwaku/commit/1632496a2))
- Deprecate sync / local merkle tree ([#3312](https://github.com/waku-org/nwaku/issues/3312)) ([50fe7d727](https://github.com/waku-org/nwaku/commit/50fe7d727))
- Refactor waku sync DOS protection ([#3391](https://github.com/waku-org/nwaku/issues/3391)) ([a81f9498c](https://github.com/waku-org/nwaku/commit/a81f9498c))
- Waku Sync dashboard new panel & update ([#3379](https://github.com/waku-org/nwaku/issues/3379)) ([5ed6aae10](https://github.com/waku-org/nwaku/commit/5ed6aae10))
- Enhance Waku Sync logs and metrics ([#3370](https://github.com/waku-org/nwaku/issues/3370)) ([f6c680a46](https://github.com/waku-org/nwaku/commit/f6c680a46))
- Add waku_get_connected_peers_info to libwaku ([#3356](https://github.com/waku-org/nwaku/issues/3356)) ([0eb9c6200](https://github.com/waku-org/nwaku/commit/0eb9c6200))
- Add waku_relay_get_peers_in_mesh to libwaku ([#3352](https://github.com/waku-org/nwaku/issues/3352)) ([ef9074443](https://github.com/waku-org/nwaku/commit/ef9074443))
- Add waku_relay_get_connected_peers to libwaku ([#3353](https://github.com/waku-org/nwaku/issues/3353)) ([7250d7392](https://github.com/waku-org/nwaku/commit/7250d7392))
- Introduce `preset` option ([#3346](https://github.com/waku-org/nwaku/issues/3346)) ([0eaf90465](https://github.com/waku-org/nwaku/commit/0eaf90465))
- Add store sync dashboard panel ([#3307](https://github.com/waku-org/nwaku/issues/3307)) ([ef8ee233f](https://github.com/waku-org/nwaku/commit/ef8ee233f))
### Bug Fixes
- Fix typo from DIRVER to DRIVER ([#3442](https://github.com/waku-org/nwaku/issues/3442)) ([b9a4d7702](https://github.com/waku-org/nwaku/commit/b9a4d7702))
- Fix discv5 protocol id in libwaku ([#3447](https://github.com/waku-org/nwaku/issues/3447)) ([f7be4c2f0](https://github.com/waku-org/nwaku/commit/f7be4c2f0))
- Fix dnsresolver ([#3440](https://github.com/waku-org/nwaku/issues/3440)) ([e42e28cc6](https://github.com/waku-org/nwaku/commit/e42e28cc6))
- Misc sync fixes, added debug logging ([#3411](https://github.com/waku-org/nwaku/issues/3411)) ([b9efa874d](https://github.com/waku-org/nwaku/commit/b9efa874d))
- Relay unsubscribe ([#3422](https://github.com/waku-org/nwaku/issues/3422)) ([9fc631e10](https://github.com/waku-org/nwaku/commit/9fc631e10))
- Fix build_rln.sh update version to download v0.7.0 ([#3425](https://github.com/waku-org/nwaku/issues/3425)) ([2678303bf](https://github.com/waku-org/nwaku/commit/2678303bf))
- Timestamp based validation ([#3406](https://github.com/waku-org/nwaku/issues/3406)) ([1512bdaf0](https://github.com/waku-org/nwaku/commit/1512bdaf0))
- Enable WebSocket connections also when only websocket-secure-support is enabled ([#3417](https://github.com/waku-org/nwaku/issues/3417)) ([698fe6525](https://github.com/waku-org/nwaku/commit/698fe6525))
- Fix addPeer unintentionally overriding the metadata of a previously stored peer with default/empty values ([#3403](https://github.com/waku-org/nwaku/issues/3403)) ([5cccaaac6](https://github.com/waku-org/nwaku/commit/5cccaaac6))
- Fix bad HttpCode conversion, add missing lightpush v3 rest api tests ([#3389](https://github.com/waku-org/nwaku/issues/3389)) ([7ff055e42](https://github.com/waku-org/nwaku/commit/7ff055e42))
- Adjust mistaken comments and broken link ([#3381](https://github.com/waku-org/nwaku/issues/3381)) ([237f7abbb](https://github.com/waku-org/nwaku/commit/237f7abbb))
- Avoid libwaku's redundant allocs ([#3380](https://github.com/waku-org/nwaku/issues/3380)) ([ac454a30b](https://github.com/waku-org/nwaku/commit/ac454a30b))
- Avoid performing nil check for userData ([#3365](https://github.com/waku-org/nwaku/issues/3365)) ([b8707b6a5](https://github.com/waku-org/nwaku/commit/b8707b6a5))
- Fix waku sync timing ([#3337](https://github.com/waku-org/nwaku/issues/3337)) ([b01b1837d](https://github.com/waku-org/nwaku/commit/b01b1837d))
- Fix filter out ephemeral msg from waku sync ([#3332](https://github.com/waku-org/nwaku/issues/3332)) ([4b963d8f5](https://github.com/waku-org/nwaku/commit/4b963d8f5))
- Apply latest nph formatting ([#3334](https://github.com/waku-org/nwaku/issues/3334)) ([77105a6c2](https://github.com/waku-org/nwaku/commit/77105a6c2))
- waku sync 2.0 codecs ENR support ([#3326](https://github.com/waku-org/nwaku/issues/3326)) ([bf735e777](https://github.com/waku-org/nwaku/commit/bf735e777))
- waku sync mounting ([#3321](https://github.com/waku-org/nwaku/issues/3321)) ([380d2e338](https://github.com/waku-org/nwaku/commit/380d2e338))
- Fix rest-relay-cache-capacity ([#3454](https://github.com/waku-org/nwaku/issues/3454)) ([fed4dc280](https://github.com/waku-org/nwaku/commit/fed4dc280))
### Changes
- Lower waku sync log lvl ([#3461](https://github.com/waku-org/nwaku/issues/3461)) ([4277a5349](https://github.com/waku-org/nwaku/commit/4277a5349))
- Refactor to unify online and health monitors ([#3456](https://github.com/waku-org/nwaku/issues/3456)) ([2e40f2971](https://github.com/waku-org/nwaku/commit/2e40f2971))
- Refactor rm discv5-only ([#3453](https://github.com/waku-org/nwaku/issues/3453)) ([b998430d5](https://github.com/waku-org/nwaku/commit/b998430d5))
- Add extra debug REST helper via getting peer statistics ([#3443](https://github.com/waku-org/nwaku/issues/3443)) ([f4ad7a332](https://github.com/waku-org/nwaku/commit/f4ad7a332))
- Expose online state in libwaku ([#3433](https://github.com/waku-org/nwaku/issues/3433)) ([e7f5c8cb2](https://github.com/waku-org/nwaku/commit/e7f5c8cb2))
- Add heaptrack support build for Nim v2.0.12 builds ([#3424](https://github.com/waku-org/nwaku/issues/3424)) ([91885fb9e](https://github.com/waku-org/nwaku/commit/91885fb9e))
- Remove debug for js-waku ([#3423](https://github.com/waku-org/nwaku/issues/3423)) ([5628dc6ad](https://github.com/waku-org/nwaku/commit/5628dc6ad))
- Bump dependencies for v0.36 ([#3410](https://github.com/waku-org/nwaku/issues/3410)) ([005815746](https://github.com/waku-org/nwaku/commit/005815746))
- Enhance feedback on error CLI ([#3405](https://github.com/waku-org/nwaku/issues/3405)) ([3464d81a6](https://github.com/waku-org/nwaku/commit/3464d81a6))
- Allow multiple rln eth clients ([#3402](https://github.com/waku-org/nwaku/issues/3402)) ([861710bc7](https://github.com/waku-org/nwaku/commit/861710bc7))
- Separate internal and CLI configurations ([#3357](https://github.com/waku-org/nwaku/issues/3357)) ([dd8d66431](https://github.com/waku-org/nwaku/commit/dd8d66431))
- Avoid double relay subscription ([#3396](https://github.com/waku-org/nwaku/issues/3396)) ([7d5eb9374](https://github.com/waku-org/nwaku/commit/7d5eb9374)), ([#3429](https://github.com/waku-org/nwaku/issues/3429)) ([ee5932ebc](https://github.com/waku-org/nwaku/commit/ee5932ebc))
- Improve disconnection handling ([#3385](https://github.com/waku-org/nwaku/issues/3385)) ([1ec9b8d96](https://github.com/waku-org/nwaku/commit/1ec9b8d96))
- Return all peers from REST admin ([#3395](https://github.com/waku-org/nwaku/issues/3395)) ([f6fdd960f](https://github.com/waku-org/nwaku/commit/f6fdd960f))
- Simplify rln_relay code a little ([#3392](https://github.com/waku-org/nwaku/issues/3392)) ([7a6c00bd0](https://github.com/waku-org/nwaku/commit/7a6c00bd0))
- Extended the /admin/v1 REST API with different options to look at the current connected/relay/mesh state of the node ([#3382](https://github.com/waku-org/nwaku/issues/3382)) ([3db00f39e](https://github.com/waku-org/nwaku/commit/3db00f39e))
- Timestamp set to now in publish if not provided ([#3373](https://github.com/waku-org/nwaku/issues/3373)) ([f7b424451](https://github.com/waku-org/nwaku/commit/f7b424451))
- Update lite-protocol-tester for handling shard argument ([#3371](https://github.com/waku-org/nwaku/issues/3371)) ([5ab69edd7](https://github.com/waku-org/nwaku/commit/5ab69edd7))
- Fix unused and deprecated imports ([#3368](https://github.com/waku-org/nwaku/issues/3368)) ([6ebb49a14](https://github.com/waku-org/nwaku/commit/6ebb49a14))
- Expect camelCase JSON for libwaku store queries ([#3366](https://github.com/waku-org/nwaku/issues/3366)) ([ccb4ed51d](https://github.com/waku-org/nwaku/commit/ccb4ed51d))
- Maintenance of the C and C++ simple examples ([#3367](https://github.com/waku-org/nwaku/issues/3367)) ([25d30d44d](https://github.com/waku-org/nwaku/commit/25d30d44d))
- Skip two flaky tests ([#3364](https://github.com/waku-org/nwaku/issues/3364)) ([b672617b2](https://github.com/waku-org/nwaku/commit/b672617b2))
- Retrieve protocols in new added peer from discv5 ([#3354](https://github.com/waku-org/nwaku/issues/3354)) ([df58643ea](https://github.com/waku-org/nwaku/commit/df58643ea))
- Better keystore management ([#3358](https://github.com/waku-org/nwaku/issues/3358)) ([a914fdccc](https://github.com/waku-org/nwaku/commit/a914fdccc))
- Remove pubsub topics arguments ([#3350](https://github.com/waku-org/nwaku/issues/3350)) ([9778b45c6](https://github.com/waku-org/nwaku/commit/9778b45c6))
- New performance measurement metrics for non-relay protocols ([#3299](https://github.com/waku-org/nwaku/issues/3299)) ([68c50a09a](https://github.com/waku-org/nwaku/commit/68c50a09a))
- Start triggering CI for windows build ([#3316](https://github.com/waku-org/nwaku/issues/3316)) ([55ac6ba9f](https://github.com/waku-org/nwaku/commit/55ac6ba9f))
- Less logs for rendezvous ([#3319](https://github.com/waku-org/nwaku/issues/3319)) ([6df05bae2](https://github.com/waku-org/nwaku/commit/6df05bae2))
- Add test reporting doc to benchmarks dir ([#3238](https://github.com/waku-org/nwaku/issues/3238)) ([94554a6e0](https://github.com/waku-org/nwaku/commit/94554a6e0))
- Improve epoch monitoring ([#3197](https://github.com/waku-org/nwaku/issues/3197)) ([b0c025f81](https://github.com/waku-org/nwaku/commit/b0c025f81))
### This release supports the following [libp2p protocols](https://docs.libp2p.io/concepts/protocols/):
| Protocol | Spec status | Protocol id |
| ---: | :---: | :--- |
| [`11/WAKU2-RELAY`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/11/relay.md) | `stable` | `/vac/waku/relay/2.0.0` |
| [`12/WAKU2-FILTER`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/12/filter.md) | `draft` | `/vac/waku/filter/2.0.0-beta1` <br />`/vac/waku/filter-subscribe/2.0.0-beta1` <br />`/vac/waku/filter-push/2.0.0-beta1` |
| [`13/WAKU2-STORE`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/13/store.md) | `draft` | `/vac/waku/store/2.0.0-beta4` |
| [`19/WAKU2-LIGHTPUSH`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/19/lightpush.md) | `draft` | `/vac/waku/lightpush/2.0.0-beta1` |
| [`WAKU2-LIGHTPUSH v3`](https://github.com/waku-org/specs/blob/master/standards/core/lightpush.md) | `draft` | `/vac/waku/lightpush/3.0.0` |
| [`66/WAKU2-METADATA`](https://github.com/waku-org/specs/blob/master/standards/core/metadata.md) | `raw` | `/vac/waku/metadata/1.0.0` |
| [`WAKU-SYNC`](https://github.com/waku-org/specs/blob/feat--waku-sync/standards/core/sync.md) | `draft` | `/vac/waku/sync/1.0.0` |
## v0.35.1 (2025-03-30)
### Bug fixes
* Update RLN references ([#3287](https://github.com/waku-org/nwaku/pull/3287)) ([ea961fa](https://github.com/waku-org/nwaku/pull/3287/commits/ea961faf4ed4f8287a2043a6b5d84b660745072b))
**Info:** before upgrading to this version, make sure you delete the previous rln_tree folder, i.e.,
the one that is passed through this CLI: `--rln-relay-tree-path`.
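As a minimal shell sketch of the cleanup step above (the path below is a hypothetical example; substitute whatever value you actually pass to `--rln-relay-tree-path`):

```shell
# Hypothetical tree path used for illustration; use your --rln-relay-tree-path value
TREE_PATH="${HOME}/.nwaku/rln_tree"

# Remove the old RLN Merkle tree so the upgraded node rebuilds it from scratch
if [ -d "$TREE_PATH" ]; then
  rm -rf "$TREE_PATH"
fi
```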
### Features
* lightpush v3 ([#3279](https://github.com/waku-org/nwaku/pull/3279)) ([e0b563ff](https://github.com/waku-org/nwaku/commit/e0b563ffe5af20bd26d37cd9b4eb9ed9eb82ff80))
Upgrade of the Waku Lightpush protocol with enhanced error handling. Read the specification [here](https://github.com/waku-org/specs/blob/master/standards/core/lightpush.md)
This release supports the following [libp2p protocols](https://docs.libp2p.io/concepts/protocols/):
| Protocol | Spec status | Protocol id |
| ---: | :---: | :--- |
| [`11/WAKU2-RELAY`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/11/relay.md) | `stable` | `/vac/waku/relay/2.0.0` |
| [`12/WAKU2-FILTER`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/12/filter.md) | `draft` | `/vac/waku/filter/2.0.0-beta1` <br />`/vac/waku/filter-subscribe/2.0.0-beta1` <br />`/vac/waku/filter-push/2.0.0-beta1` |
| [`13/WAKU2-STORE`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/13/store.md) | `draft` | `/vac/waku/store/2.0.0-beta4` |
| [`19/WAKU2-LIGHTPUSH`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/19/lightpush.md) | `draft` | `/vac/waku/lightpush/2.0.0-beta1` |
| [`WAKU2-LIGHTPUSH v3`](https://github.com/waku-org/specs/blob/master/standards/core/lightpush.md) | `draft` | `/vac/waku/lightpush/3.0.0` |
| [`66/WAKU2-METADATA`](https://github.com/waku-org/specs/blob/master/standards/core/metadata.md) | `raw` | `/vac/waku/metadata/1.0.0` |
| [`WAKU-SYNC`](https://github.com/waku-org/specs/blob/feat--waku-sync/standards/core/sync.md) | `draft` | `/vac/waku/sync/1.0.0` |
## v0.35.0 (2025-03-03)
### Notes

@@ -1,14 +1,13 @@
 # BUILD NIM APP ----------------------------------------------------------------
-FROM rustlang/rust:nightly-alpine3.19 AS nim-build
+FROM rust:1.77.1-alpine3.18 AS nim-build

 ARG NIMFLAGS
 ARG MAKE_TARGET=wakunode2
 ARG NIM_COMMIT
 ARG LOG_LEVEL=TRACE
-ARG HEAPTRACK_BUILD=0

 # Get build tools and required header files
-RUN apk add --no-cache bash git build-base openssl-dev linux-headers curl jq
+RUN apk add --no-cache bash git build-base openssl-dev pcre-dev linux-headers curl jq

 WORKDIR /app
 COPY . .
@@ -19,10 +18,6 @@ RUN apk update && apk upgrade
 # Ran separately from 'make' to avoid re-doing
 RUN git submodule update --init --recursive

-RUN if [ "$HEAPTRACK_BUILD" = "1" ]; then \
-      git apply --directory=vendor/nimbus-build-system/vendor/Nim docs/tutorial/nim.2.2.4_heaptracker_addon.patch; \
-    fi
-
 # Slowest build step for the sake of caching layers
 RUN make -j$(nproc) deps QUICK_AND_DIRTY_COMPILER=1 ${NIM_COMMIT}
@@ -32,7 +27,7 @@ RUN make -j$(nproc) ${NIM_COMMIT} $MAKE_TARGET LOG_LEVEL=${LOG_LEVEL} NIMFLAGS="
 # PRODUCTION IMAGE -------------------------------------------------------------
-FROM alpine:3.18 AS prod
+FROM alpine:3.18 as prod

 ARG MAKE_TARGET=wakunode2
@@ -46,7 +41,10 @@ LABEL version="unknown"
 EXPOSE 30303 60000 8545

 # Referenced in the binary
-RUN apk add --no-cache libgcc libpq-dev bind-tools
+RUN apk add --no-cache libgcc pcre-dev libpq-dev bind-tools
+
+# Fix for 'Error loading shared library libpcre.so.3: No such file or directory'
+RUN ln -s /usr/lib/libpcre.so /usr/lib/libpcre.so.3

 # Copy to separate location to accomodate different MAKE_TARGET values
 COPY --from=nim-build /app/build/$MAKE_TARGET /usr/local/bin/
@@ -80,7 +78,7 @@ RUN make -j$(nproc)
 # Debug image
-FROM prod AS debug-with-heaptrack
+FROM prod AS debug
 RUN apk add --no-cache gdb libunwind

@ -1,56 +0,0 @@
# BUILD NIM APP ----------------------------------------------------------------
FROM rustlang/rust:nightly-alpine3.19 AS nim-build
ARG NIMFLAGS
ARG MAKE_TARGET=lightpushwithmix
ARG NIM_COMMIT
ARG LOG_LEVEL=TRACE
# Get build tools and required header files
RUN apk add --no-cache bash git build-base openssl-dev linux-headers curl jq
WORKDIR /app
COPY . .
# workaround for alpine issue: https://github.com/alpinelinux/docker-alpine/issues/383
RUN apk update && apk upgrade
# Ran separately from 'make' to avoid re-doing
RUN git submodule update --init --recursive
# Slowest build step for the sake of caching layers
RUN make -j$(nproc) deps QUICK_AND_DIRTY_COMPILER=1 ${NIM_COMMIT}
# Build the final node binary
RUN make -j$(nproc) ${NIM_COMMIT} $MAKE_TARGET LOG_LEVEL=${LOG_LEVEL} NIMFLAGS="${NIMFLAGS}"
# REFERENCE IMAGE as BASE for specialized PRODUCTION IMAGES----------------------------------------
FROM alpine:3.18 AS base_lpt
ARG MAKE_TARGET=lightpushwithmix
LABEL maintainer="prem@waku.org"
LABEL source="https://github.com/waku-org/nwaku"
LABEL description="Lite Push With Mix: Waku light-client"
LABEL commit="unknown"
LABEL version="unknown"
# DevP2P, LibP2P, and JSON RPC ports
EXPOSE 30303 60000 8545
# Referenced in the binary
RUN apk add --no-cache libgcc libpq-dev \
wget \
iproute2 \
python3 \
jq
COPY --from=nim-build /app/build/lightpush_publisher_mix /usr/bin/
RUN chmod +x /usr/bin/lightpush_publisher_mix
# Standalone image to be used manually and in lpt-runner -------------------------------------------
FROM base_lpt AS standalone_lpt
ENTRYPOINT ["/usr/bin/lightpush_publisher_mix"]

Makefile
@@ -4,8 +4,8 @@
 # - MIT license
 # at your option. This file may not be copied, modified, or distributed except
 # according to those terms.

-export BUILD_SYSTEM_DIR := vendor/nimbus-build-system
-export EXCLUDED_NIM_PACKAGES := vendor/nim-dnsdisc/vendor
+BUILD_SYSTEM_DIR := vendor/nimbus-build-system
+EXCLUDED_NIM_PACKAGES := vendor/nim-dnsdisc/vendor
 LINK_PCRE := 0
 FORMAT_MSG := "\\x1B[95mFormatting:\\x1B[39m"
 # we don't want an error here, so we can handle things later, in the ".DEFAULT" target
@@ -34,18 +34,15 @@ ifneq (,$(findstring MINGW,$(detected_OS)))
 endif

 ifeq ($(detected_OS),Windows)
-# Update MINGW_PATH to standard MinGW location
-MINGW_PATH = /mingw64
-NIM_PARAMS += --passC:"-I$(MINGW_PATH)/include"
-NIM_PARAMS += --passL:"-L$(MINGW_PATH)/lib"
-NIM_PARAMS += --passL:"-Lvendor/nim-nat-traversal/vendor/miniupnp/miniupnpc"
-NIM_PARAMS += --passL:"-Lvendor/nim-nat-traversal/vendor/libnatpmp-upstream"
-LIBS = -lws2_32 -lbcrypt -liphlpapi -luserenv -lntdll -lminiupnpc -lnatpmp -lpq
+# Define a new temporary directory for Windows
+TMP_DIR := $(CURDIR)/tmp
+$(shell mkdir -p $(TMP_DIR))
+export TMP := $(TMP_DIR)
+export TEMP := $(TMP_DIR)
+
+# Add the necessary libraries to the linker flags
+LIBS = -static -lws2_32 -lbcrypt -luserenv -lntdll -lminiupnpc
 NIM_PARAMS += $(foreach lib,$(LIBS),--passL:"$(lib)")
-export PATH := /c/msys64/usr/bin:/c/msys64/mingw64/bin:/c/msys64/usr/lib:/c/msys64/mingw64/lib:$(PATH)
 endif

 ##########
@@ -56,21 +53,7 @@ endif
 # default target, because it's the first one that doesn't start with '.'
 all: | wakunode2 example2 chat2 chat2bridge libwaku

-test_file := $(word 2,$(MAKECMDGOALS))
-define test_name
-$(shell echo '$(MAKECMDGOALS)' | cut -d' ' -f3-)
-endef
-test:
-ifeq ($(strip $(test_file)),)
-	$(MAKE) testcommon
-	$(MAKE) testwaku
-else
-	$(MAKE) compile-test TEST_FILE="$(test_file)" TEST_NAME="$(call test_name)"
-endif
-
-# this prevents make from erroring on unknown targets like "Index"
-%:
-	@true
+test: | testcommon testwaku

 waku.nims:
 	ln -s waku.nimble $@
@@ -98,17 +81,16 @@ NIM_PARAMS := $(NIM_PARAMS) -d:git_version=\"$(GIT_VERSION)\"
 HEAPTRACKER ?= 0
 HEAPTRACKER_INJECT ?= 0
 ifeq ($(HEAPTRACKER), 1)
-# Assumes Nim's lib/system/alloc.nim is patched!
-TARGET := debug-with-heaptrack
+# Needed to make nimbus-build-system use the Nim's 'heaptrack_support' branch
+DOCKER_NIM_COMMIT := NIM_COMMIT=heaptrack_support
+TARGET := prod-with-heaptrack
 ifeq ($(HEAPTRACKER_INJECT), 1)
 # the Nim compiler will load 'libheaptrack_inject.so'
 HEAPTRACK_PARAMS := -d:heaptracker -d:heaptracker_inject
-NIM_PARAMS := $(NIM_PARAMS) -d:heaptracker -d:heaptracker_inject
 else
 # the Nim compiler will load 'libheaptrack_preload.so'
 HEAPTRACK_PARAMS := -d:heaptracker
-NIM_PARAMS := $(NIM_PARAMS) -d:heaptracker
 endif
 endif
@@ -119,10 +101,6 @@ endif
 ##################
 .PHONY: deps libbacktrace

-FOUNDRY_VERSION := 1.5.0
-PNPM_VERSION := 10.23.0
-
 rustup:
 ifeq (, $(shell which cargo))
 # Install Rustup if it's not installed
@@ -131,8 +109,11 @@ ifeq (, $(shell which cargo))
 	curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain stable
 endif

-rln-deps: rustup
-	./scripts/install_rln_tests_dependencies.sh $(FOUNDRY_VERSION) $(PNPM_VERSION)
+anvil: rustup
+ifeq (, $(shell which anvil 2> /dev/null))
+# Install Anvil if it's not installed
+	./scripts/install_anvil.sh
+endif

 deps: | deps-common nat-libs waku.nims
@@ -150,9 +131,6 @@ ifeq ($(USE_LIBBACKTRACE), 0)
 NIM_PARAMS := $(NIM_PARAMS) -d:disable_libbacktrace
 endif

-# enable experimental exit is dest feature in libp2p mix
-NIM_PARAMS := $(NIM_PARAMS) -d:libp2p_mix_experimental_exit_is_dest
-
 libbacktrace:
 	+ $(MAKE) -C vendor/nim-libbacktrace --no-print-directory BUILD_CXX_LIB=0
@@ -174,12 +152,6 @@ endif
 clean: | clean-libbacktrace

-### Create nimble links (used when building with Nix)
-nimbus-build-system-nimble-dir:
-	NIMBLE_DIR="$(CURDIR)/$(NIMBLE_DIR)" \
-	PWD_CMD="$(PWD)" \
-	$(CURDIR)/scripts/generate_nimble_links.sh
-
 ##################
 ## RLN ##
@@ -187,7 +159,7 @@ nimbus-build-system-nimble-dir:
 .PHONY: librln

 LIBRLN_BUILDDIR := $(CURDIR)/vendor/zerokit
-LIBRLN_VERSION := v0.9.0
+LIBRLN_VERSION := v0.5.1

 ifeq ($(detected_OS),Windows)
 LIBRLN_FILE := rln.lib
@@ -224,14 +196,13 @@ testcommon: | build deps
 ##########
 .PHONY: testwaku wakunode2 testwakunode2 example2 chat2 chat2bridge liteprotocoltester

-# install rln-deps only for the testwaku target
-testwaku: | build deps rln-deps librln
+# install anvil only for the testwaku target
+testwaku: | build deps anvil librln
 	echo -e $(BUILD_MSG) "build/$@" && \
 		$(ENV_SCRIPT) nim test -d:os=$(shell uname) $(NIM_PARAMS) waku.nims

 wakunode2: | build deps librln
 	echo -e $(BUILD_MSG) "build/$@" && \
-		\
 		$(ENV_SCRIPT) nim wakunode2 $(NIM_PARAMS) waku.nims

 benchmarks: | build deps librln
@@ -250,10 +221,6 @@ chat2: | build deps librln
 	echo -e $(BUILD_MSG) "build/$@" && \
 		$(ENV_SCRIPT) nim chat2 $(NIM_PARAMS) waku.nims

-chat2mix: | build deps librln
-	echo -e $(BUILD_MSG) "build/$@" && \
-		$(ENV_SCRIPT) nim chat2mix $(NIM_PARAMS) waku.nims
-
 rln-db-inspector: | build deps librln
 	echo -e $(BUILD_MSG) "build/$@" && \
 		$(ENV_SCRIPT) nim rln_db_inspector $(NIM_PARAMS) waku.nims
@@ -266,18 +233,13 @@ liteprotocoltester: | build deps librln
 	echo -e $(BUILD_MSG) "build/$@" && \
 		$(ENV_SCRIPT) nim liteprotocoltester $(NIM_PARAMS) waku.nims

-lightpushwithmix: | build deps librln
-	echo -e $(BUILD_MSG) "build/$@" && \
-		$(ENV_SCRIPT) nim lightpushwithmix $(NIM_PARAMS) waku.nims
-
 build/%: | build deps librln
 	echo -e $(BUILD_MSG) "build/$*" && \
 		$(ENV_SCRIPT) nim buildone $(NIM_PARAMS) waku.nims $*

-compile-test: | build deps librln
-	echo -e $(BUILD_MSG) "$(TEST_FILE)" "\"$(TEST_NAME)\"" && \
-		$(ENV_SCRIPT) nim buildTest $(NIM_PARAMS) waku.nims $(TEST_FILE) && \
-		$(ENV_SCRIPT) nim execTest $(NIM_PARAMS) waku.nims $(TEST_FILE) "\"$(TEST_NAME)\""; \
+test/%: | build deps librln
+	echo -e $(BUILD_MSG) "test/$*" && \
+		$(ENV_SCRIPT) nim testone $(NIM_PARAMS) waku.nims $*

 ################
 ## Waku tools ##
@@ -367,24 +329,11 @@ docker-image:
 		--build-arg="NIMFLAGS=$(DOCKER_IMAGE_NIMFLAGS)" \
 		--build-arg="NIM_COMMIT=$(DOCKER_NIM_COMMIT)" \
 		--build-arg="LOG_LEVEL=$(LOG_LEVEL)" \
-		--build-arg="HEAPTRACK_BUILD=$(HEAPTRACKER)" \
 		--label="commit=$(shell git rev-parse HEAD)" \
 		--label="version=$(GIT_VERSION)" \
 		--target $(TARGET) \
 		--tag $(DOCKER_IMAGE_NAME) .

-docker-quick-image: MAKE_TARGET ?= wakunode2
-docker-quick-image: DOCKER_IMAGE_TAG ?= $(MAKE_TARGET)-$(GIT_VERSION)
-docker-quick-image: DOCKER_IMAGE_NAME ?= wakuorg/nwaku:$(DOCKER_IMAGE_TAG)
-docker-quick-image: NIM_PARAMS := $(NIM_PARAMS) -d:chronicles_colors:none -d:insecure -d:postgres --passL:$(LIBRLN_FILE) --passL:-lm
-docker-quick-image: | build deps librln wakunode2
-	docker build \
-		--build-arg="MAKE_TARGET=$(MAKE_TARGET)" \
-		--tag $(DOCKER_IMAGE_NAME) \
-		--target $(TARGET) \
-		--file docker/binaries/Dockerfile.bn.local \
-		.
-
 docker-push:
 	docker push $(DOCKER_IMAGE_NAME)
@@ -412,14 +361,6 @@ docker-liteprotocoltester:
 		--file apps/liteprotocoltester/Dockerfile.liteprotocoltester.compile \
 		.

-docker-quick-liteprotocoltester: DOCKER_LPT_TAG ?= latest
-docker-quick-liteprotocoltester: DOCKER_LPT_NAME ?= wakuorg/liteprotocoltester:$(DOCKER_LPT_TAG)
-docker-quick-liteprotocoltester: | liteprotocoltester
-	docker build \
-		--tag $(DOCKER_LPT_NAME) \
-		--file apps/liteprotocoltester/Dockerfile.liteprotocoltester \
-		.
-
 docker-liteprotocoltester-push:
 	docker push $(DOCKER_LPT_NAME)
@@ -430,27 +371,16 @@ docker-liteprotocoltester-push:
 .PHONY: cbindings cwaku_example libwaku

 STATIC ?= 0
-BUILD_COMMAND ?= libwakuDynamic
-
-ifeq ($(detected_OS),Windows)
-LIB_EXT_DYNAMIC = dll
-LIB_EXT_STATIC = lib
-else ifeq ($(detected_OS),Darwin)
-LIB_EXT_DYNAMIC = dylib
-LIB_EXT_STATIC = a
-else ifeq ($(detected_OS),Linux)
-LIB_EXT_DYNAMIC = so
-LIB_EXT_STATIC = a
-endif
-
-LIB_EXT := $(LIB_EXT_DYNAMIC)
-ifeq ($(STATIC), 1)
-LIB_EXT = $(LIB_EXT_STATIC)
-BUILD_COMMAND = libwakuStatic
-endif

 libwaku: | build deps librln
-	echo -e $(BUILD_MSG) "build/$@.$(LIB_EXT)" && $(ENV_SCRIPT) nim $(BUILD_COMMAND) $(NIM_PARAMS) waku.nims $@.$(LIB_EXT)
+	rm -f build/libwaku*
+ifeq ($(STATIC), 1)
+	echo -e $(BUILD_MSG) "build/$@.a" && \
+		$(ENV_SCRIPT) nim libwakuStatic $(NIM_PARAMS) waku.nims
+else
+	echo -e $(BUILD_MSG) "build/$@.so" && \
+		$(ENV_SCRIPT) nim libwakuDynamic $(NIM_PARAMS) waku.nims
+endif

 #####################
 ## Mobile Bindings ##
@@ -517,51 +447,6 @@ libwaku-android:
 	# It's likely this architecture is not used so we might just not support it.
 	# $(MAKE) libwaku-android-arm

-#################
-## iOS Bindings #
-#################
-.PHONY: libwaku-ios-precheck \
-	libwaku-ios-device \
-	libwaku-ios-simulator \
-	libwaku-ios
-
-IOS_DEPLOYMENT_TARGET ?= 18.0
-
-# Get SDK paths dynamically using xcrun
-define get_ios_sdk_path
-$(shell xcrun --sdk $(1) --show-sdk-path 2>/dev/null)
-endef
-
-libwaku-ios-precheck:
-ifeq ($(detected_OS),Darwin)
-	@command -v xcrun >/dev/null 2>&1 || { echo "Error: Xcode command line tools not installed"; exit 1; }
-else
-	$(error iOS builds are only supported on macOS)
-endif
-
-# Build for iOS architecture
-build-libwaku-for-ios-arch:
-	IOS_SDK=$(IOS_SDK) IOS_ARCH=$(IOS_ARCH) IOS_SDK_PATH=$(IOS_SDK_PATH) $(ENV_SCRIPT) nim libWakuIOS $(NIM_PARAMS) waku.nims
-
-# iOS device (arm64)
-libwaku-ios-device: IOS_ARCH=arm64
-libwaku-ios-device: IOS_SDK=iphoneos
-libwaku-ios-device: IOS_SDK_PATH=$(call get_ios_sdk_path,iphoneos)
-libwaku-ios-device: | libwaku-ios-precheck build deps
-	$(MAKE) build-libwaku-for-ios-arch IOS_ARCH=$(IOS_ARCH) IOS_SDK=$(IOS_SDK) IOS_SDK_PATH=$(IOS_SDK_PATH)
-
-# iOS simulator (arm64 - Apple Silicon Macs)
-libwaku-ios-simulator: IOS_ARCH=arm64
-libwaku-ios-simulator: IOS_SDK=iphonesimulator
-libwaku-ios-simulator: IOS_SDK_PATH=$(call get_ios_sdk_path,iphonesimulator)
-libwaku-ios-simulator: | libwaku-ios-precheck build deps
-	$(MAKE) build-libwaku-for-ios-arch IOS_ARCH=$(IOS_ARCH) IOS_SDK=$(IOS_SDK) IOS_SDK_PATH=$(IOS_SDK_PATH)
-
-# Build all iOS targets
-libwaku-ios:
-	$(MAKE) libwaku-ios-device
-	$(MAKE) libwaku-ios-simulator
-
 cwaku_example: | build libwaku
 	echo -e $(BUILD_MSG) "build/$@" && \
 		cc -o "build/$@" \
@@ -607,3 +492,4 @@ release-notes:
 		sed -E 's@#([0-9]+)@[#\1](https://github.com/waku-org/nwaku/issues/\1)@g'
 # I could not get the tool to replace issue ids with links, so using sed for now,
 # asked here: https://github.com/bvieira/sv4git/discussions/101
+

@@ -1,21 +1,19 @@
-# Logos Messaging Nim
+# Nwaku

 ## Introduction

-The logos-messaging-nim, a.k.a. lmn or nwaku, repository implements a set of libp2p protocols aimed to bring
-private communications.
+The nwaku repository implements Waku, and provides tools related to it.

-- Nim implementation of [these specs](https://github.com/vacp2p/rfc-index/tree/main/waku).
-- C library that exposes the implemented protocols.
-- CLI application that allows you to run an lmn node.
-- Examples.
+- A Nim implementation of the [Waku (v2) protocol](https://specs.vac.dev/specs/waku/v2/waku-v2.html).
+- CLI application `wakunode2` that allows you to run a Waku node.
+- Examples of Waku usage.
 - Various tests of above.

 For more details see the [source code](waku/README.md)

 ## How to Build & Run ( Linux, MacOS & WSL )

-These instructions are generic. For more detailed instructions, see the source code above.
+These instructions are generic. For more detailed instructions, see the Waku source code above.

 ### Prerequisites
@@ -23,13 +21,6 @@ The standard developer tools, including a C compiler, GNU Make, Bash, and Git. M
 > In some distributions (Fedora linux for example), you may need to install `which` utility separately. Nimbus build system is relying on it.

-You'll also need an installation of Rust and its toolchain (specifically `rustc` and `cargo`).
-The easiest way to install these, is using `rustup`:
-
-```bash
-curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
-```
-
 ### Wakunode

 ```bash
@@ -61,42 +52,14 @@ If you encounter difficulties building the project on WSL, consider placing the
 ### How to Build & Run ( Windows )

-### Windows Build Instructions
-Goal: Get rid of windows specific procedures and make the build process the same as linux/macos.
-
-#### 1. Install Required Tools
-- **Git Bash Terminal**: Download and install from https://git-scm.com/download/win
-- **MSYS2**:
-  a. Download installer from https://www.msys2.org
-  b. Install at "C:\" (default location). Remove/rename the msys folder in case of previous installation.
-  c. Use the mingw64 terminal from msys64 directory for package installation.
-
-#### 2. Install Dependencies
-Open MSYS2 mingw64 terminal and run the following one-by-one :
-```bash
-pacman -Syu --noconfirm
-pacman -S --noconfirm --needed mingw-w64-x86_64-toolchain
-pacman -S --noconfirm --needed base-devel make cmake upx
-pacman -S --noconfirm --needed mingw-w64-x86_64-rust
-pacman -S --noconfirm --needed mingw-w64-x86_64-postgresql
-pacman -S --noconfirm --needed mingw-w64-x86_64-gcc
-pacman -S --noconfirm --needed mingw-w64-x86_64-gcc-libs
-pacman -S --noconfirm --needed mingw-w64-x86_64-libwinpthread-git
-pacman -S --noconfirm --needed mingw-w64-x86_64-zlib
-pacman -S --noconfirm --needed mingw-w64-x86_64-openssl
-pacman -S --noconfirm --needed mingw-w64-x86_64-python
-```
-
-#### 3. Build Wakunode
-- Open Git Bash as administrator
-- clone nwaku and cd nwaku
-- Execute: `./scripts/build_windows.sh`
-
-#### 4. Troubleshooting
-If `wakunode2.exe` isn't generated:
-- **Missing Dependencies**: Verify with:
-  `which make cmake gcc g++ rustc cargo python3 upx`
-  If missing, revisit Step 2 or ensure MSYS2 is at `C:\`
-- **Installation Conflicts**: Remove existing MinGW/MSYS2/Git Bash installations and perform fresh install
+Note: This is a work in progress. The current setup procedure is as follows:
+
+1. Clone the repository and checkout master branch
+2. Ensure prerequisites are installed (Make, GCC, MSYS2/MinGW)
+3. Run scripts/windows_setup.sh

 ### Developing
@@ -112,19 +75,11 @@ source env.sh
 ```

 If everything went well, you should see your prompt suffixed with `[Nimbus env]$`. Now you can run `nim` commands as usual.

-### Test Suite
+### Waku Protocol Test Suite

 ```bash
 # Run all the Waku tests
 make test
-
-# Run a specific test file
-make test <test_file_path>
-# e.g. : make test tests/wakunode2/test_all.nim
-
-# Run a specific test name from a specific test file
-make test <test_file_path> <test_name>
-# e.g. : make test tests/wakunode2/test_all.nim "node setup is successful with default configuration"
 ```

 ### Building single test files
@@ -143,9 +98,6 @@ Binary will be created as `<path to your test file.nim>.bin` under the `build` d
 make test/tests/common/test_enr_builder.nim
 ```

-### Testing against `js-waku`
-
-Refer to [js-waku repo](https://github.com/waku-org/js-waku/tree/master/packages/tests) for instructions.
-
 ## Formatting

 Nim files are expected to be formatted using the [`nph`](https://github.com/arnetheduck/nph) version present in `vendor/nph`.

@ -1,73 +1,49 @@
import import
std/[strutils, times, sequtils, osproc], math, results, options, testutils/unittests math,
std/sequtils,
import results,
options,
waku/[ waku/[
waku_rln_relay/protocol_types, waku_rln_relay/protocol_types,
waku_rln_relay/rln, waku_rln_relay/rln,
waku_rln_relay, waku_rln_relay,
waku_rln_relay/conversion_utils, waku_rln_relay/conversion_utils,
waku_rln_relay/group_manager/on_chain/group_manager, waku_rln_relay/group_manager/static/group_manager,
], ]
tests/waku_rln_relay/utils_onchain

-proc benchmark(
-    manager: OnChainGroupManager, registerCount: int, messageLimit: int
-): Future[string] {.async, gcsafe.} =
-  # Register a new member so that we can later generate proofs
-  let idCredentials = generateCredentials(registerCount)
-  var start_time = getTime()
-  for i in 0 .. registerCount - 1:
-    try:
-      await manager.register(idCredentials[i], UserMessageLimit(messageLimit + 1))
-    except Exception, CatchableError:
-      assert false, "exception raised: " & getCurrentExceptionMsg()
-    info "registration finished",
-      iter = i, elapsed_ms = (getTime() - start_time).inMilliseconds
-
-  discard await manager.updateRoots()
-  manager.merkleProofCache = (await manager.fetchMerkleProofElements()).valueOr:
-    error "Failed to fetch Merkle proof", error = error
-    quit(QuitFailure)
-
-  let epoch = default(Epoch)
-  info "epoch in bytes", epochHex = epoch.inHex()
-
+import std/[times, os]
+
+proc main(): Future[string] {.async, gcsafe.} =
+  let rlnIns = createRLNInstance(20).get()
+  let credentials = toSeq(0 .. 1000).mapIt(membershipKeyGen(rlnIns).get())
+
+  let manager = StaticGroupManager(
+    rlnInstance: rlnIns,
+    groupSize: 1000,
+    membershipIndex: some(MembershipIndex(900)),
+    groupKeys: credentials,
+  )
+  await manager.init()
+
   let data: seq[byte] = newSeq[byte](1024)
   var proofGenTimes: seq[times.Duration] = @[]
   var proofVerTimes: seq[times.Duration] = @[]
-  start_time = getTime()
-  for i in 1 .. messageLimit:
-    var generate_time = getTime()
-    let proof = manager.generateProof(data, epoch, MessageId(i.uint8)).valueOr:
-      raiseAssert $error
-    proofGenTimes.add(getTime() - generate_time)
-    let verify_time = getTime()
-    let ok = manager.verifyProof(data, proof).valueOr:
-      raiseAssert $error
-    proofVerTimes.add(getTime() - verify_time)
-    info "iteration finished",
-      iter = i, elapsed_ms = (getTime() - start_time).inMilliseconds
+  for i in 0 .. 50:
+    var time = getTime()
+    let proof = manager.generateProof(data, default(Epoch)).get()
+    proofGenTimes.add(getTime() - time)
+
+    time = getTime()
+    let res = manager.verifyProof(data, proof).get()
+    proofVerTimes.add(getTime() - time)

   echo "Proof generation times: ", sum(proofGenTimes) div len(proofGenTimes)
   echo "Proof verification times: ", sum(proofVerTimes) div len(proofVerTimes)

-proc main() =
-  # Start a local Ethereum JSON-RPC (Anvil) so that the group-manager setup can connect.
-  let anvilProc = runAnvil()
-  defer:
-    stopAnvil(anvilProc)
-
-  # Set up an On-chain group manager (includes contract deployment)
-  let manager = waitFor setupOnchainGroupManager()
-  (waitFor manager.init()).isOkOr:
-    raiseAssert $error
-
-  discard waitFor benchmark(manager, 200, 20)
-
 when isMainModule:
-  main()
+  try:
+    waitFor(main())
+  except CatchableError as e:
+    raise e
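Both sides of this diff use the same micro-benchmark pattern: take a timestamp before and after each operation, collect `times.Duration` samples, and report the integer mean with `div`. A minimal, self-contained sketch of that pattern (the squaring workload is an illustrative stand-in, not code from the diff):

```nim
import std/[times, math, sequtils]

proc averageDuration(samples: seq[Duration]): Duration =
  # Integer mean, mirroring the `sum(...) div len(...)` reporting above
  sum(samples) div len(samples)

when isMainModule:
  var samples: seq[Duration] = @[]
  for i in 0 .. 10:
    let start = getTime()
    discard toSeq(0 .. 10_000).mapIt(it * it) # stand-in for proof generation
    samples.add(getTime() - start)
  echo "average: ", averageDuration(samples)
```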


@ -11,6 +11,7 @@ import
confutils, confutils,
chronicles, chronicles,
chronos, chronos,
stew/shims/net as stewNet,
eth/keys, eth/keys,
bearssl, bearssl,
stew/[byteutils, results], stew/[byteutils, results],
@@ -32,8 +33,8 @@ import
 import
   waku/[
     waku_core,
-    waku_lightpush_legacy/common,
-    waku_lightpush_legacy/rpc,
+    waku_lightpush/common,
+    waku_lightpush/rpc,
     waku_enr,
     discovery/waku_dnsdisc,
     waku_store_legacy,
@@ -132,14 +133,25 @@ proc showChatPrompt(c: Chat) =
   except IOError:
     discard

-proc getChatLine(payload: seq[byte]): string =
+proc getChatLine(c: Chat, msg: WakuMessage): Result[string, string] =
   # No payload encoding/encryption from Waku
-  let pb = Chat2Message.init(payload).valueOr:
-    return string.fromBytes(payload)
-  return $pb
+  let
+    pb = Chat2Message.init(msg.payload)
+    chatLine =
+      if pb.isOk:
+        pb[].toString()
+      else:
+        string.fromBytes(msg.payload)
+  return ok(chatline)

 proc printReceivedMessage(c: Chat, msg: WakuMessage) =
-  let chatLine = getChatLine(msg.payload)
+  let
+    pb = Chat2Message.init(msg.payload)
+    chatLine =
+      if pb.isOk:
+        pb[].toString()
+      else:
+        string.fromBytes(msg.payload)
   try:
     echo &"{chatLine}"
   except ValueError:
@@ -162,16 +174,18 @@ proc startMetricsServer(
 ): Result[MetricsHttpServerRef, string] =
   info "Starting metrics HTTP server", serverIp = $serverIp, serverPort = $serverPort

-  let server = MetricsHttpServerRef.new($serverIp, serverPort).valueOr:
-    return err("metrics HTTP server start failed: " & $error)
+  let metricsServerRes = MetricsHttpServerRef.new($serverIp, serverPort)
+  if metricsServerRes.isErr():
+    return err("metrics HTTP server start failed: " & $metricsServerRes.error)
+  let server = metricsServerRes.value

   try:
     waitFor server.start()
   except CatchableError:
     return err("metrics HTTP server start failed: " & getCurrentExceptionMsg())

   info "Metrics HTTP server started", serverIp = $serverIp, serverPort = $serverPort
-  ok(server)
+  ok(metricsServerRes.value)

 proc publish(c: Chat, line: string) =
   # First create a Chat2Message protobuf with this line of text
@@ -189,17 +203,19 @@ proc publish(c: Chat, line: string) =
     version: 0,
     timestamp: getNanosecondTime(time),
   )

   if not isNil(c.node.wakuRlnRelay):
     # for future version when we support more than one rln protected content topic,
     # we should check the message content topic as well
-    if c.node.wakuRlnRelay.appendRLNProof(message, float64(time)).isErr():
-      info "could not append rate limit proof to the message"
+    let appendRes = c.node.wakuRlnRelay.appendRLNProof(message, float64(time))
+    if appendRes.isErr():
+      debug "could not append rate limit proof to the message"
     else:
-      info "rate limit proof is appended to the message"
-      let proof = RateLimitProof.init(message.proof).valueOr:
+      debug "rate limit proof is appended to the message"
+      let decodeRes = RateLimitProof.init(message.proof)
+      if decodeRes.isErr():
         error "could not decode the RLN proof"
-        return
+      let proof = decodeRes.get()

       # TODO move it to log after dogfooding
       let msgEpoch = fromEpoch(proof.epoch)
       if fromEpoch(c.node.wakuRlnRelay.lastEpoch) == msgEpoch:
@@ -211,9 +227,9 @@ proc publish(c: Chat, line: string) =
         c.node.wakuRlnRelay.lastEpoch = proof.epoch

   try:
-    if not c.node.wakuLegacyLightPush.isNil():
+    if not c.node.wakuLightPush.isNil():
       # Attempt lightpush
-      (waitFor c.node.legacyLightpushPublish(some(DefaultPubsubTopic), message)).isOkOr:
+      (waitFor c.node.lightpushPublish(some(DefaultPubsubTopic), message)).isOkOr:
         error "failed to publish lightpush message", error = error
     else:
       (waitFor c.node.publish(some(DefaultPubsubTopic), message)).isOkOr:
@@ -317,19 +333,27 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
   if conf.logLevel != LogLevel.NONE:
     setLogLevel(conf.logLevel)

-  let (extIp, extTcpPort, extUdpPort) = setupNat(
+  let natRes = setupNat(
     conf.nat,
     clientId,
     Port(uint16(conf.tcpPort) + conf.portsShift),
     Port(uint16(conf.udpPort) + conf.portsShift),
-  ).valueOr:
-    raise newException(ValueError, "setupNat error " & error)
+  )
+
+  if natRes.isErr():
+    raise newException(ValueError, "setupNat error " & natRes.error)
+
+  let (extIp, extTcpPort, extUdpPort) = natRes.get()

   var enrBuilder = EnrBuilder.init(nodeKey)

-  let record = enrBuilder.build().valueOr:
-    error "failed to create enr record", error = error
-    quit(QuitFailure)
+  let recordRes = enrBuilder.build()
+  let record =
+    if recordRes.isErr():
+      error "failed to create enr record", error = recordRes.error
+      quit(QuitFailure)
+    else:
+      recordRes.get()

   let node = block:
     var builder = WakuNodeBuilder.init()
@@ -357,9 +381,7 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
   if conf.relay:
     let shards =
       conf.shards.mapIt(RelayShard(clusterId: conf.clusterId, shardId: uint16(it)))
-    (await node.mountRelay()).isOkOr:
-      echo "failed to mount relay: " & error
-      return
+    await node.mountRelay(shards)

   await node.mountLibp2pPing()
@@ -396,16 +418,16 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
     dnsDiscoveryUrl = some(
       "enrtree://AIRVQ5DDA4FFWLRBCHJWUWOO6X6S4ZTZ5B667LQ6AJU6PEYDLRD5O@sandbox.waku.nodes.status.im"
     )
-  elif conf.dnsDiscoveryUrl != "":
+  elif conf.dnsDiscovery and conf.dnsDiscoveryUrl != "":
     # No pre-selected fleet. Discover nodes via DNS using user config
-    info "Discovering nodes using Waku DNS discovery", url = conf.dnsDiscoveryUrl
+    debug "Discovering nodes using Waku DNS discovery", url = conf.dnsDiscoveryUrl
     dnsDiscoveryUrl = some(conf.dnsDiscoveryUrl)

   var discoveredNodes: seq[RemotePeerInfo]
   if dnsDiscoveryUrl.isSome:
     var nameServers: seq[TransportAddress]
-    for ip in conf.dnsAddrsNameServers:
+    for ip in conf.dnsDiscoveryNameServers:
       nameServers.add(initTAddress(ip, Port(53))) # Assume all servers use port 53

     let dnsResolver = DnsResolver.new(nameServers)
@@ -415,7 +437,7 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
       let resolved = await dnsResolver.resolveTxt(domain)
       return resolved[0] # Use only first answer

-    let wakuDnsDiscovery = WakuDnsDiscovery.init(dnsDiscoveryUrl.get(), resolver)
+    var wakuDnsDiscovery = WakuDnsDiscovery.init(dnsDiscoveryUrl.get(), resolver)
     if wakuDnsDiscovery.isOk:
       let discoveredPeers = await wakuDnsDiscovery.get().findPeers()
       if discoveredPeers.isOk:
@@ -423,10 +445,8 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
         discoveredNodes = discoveredPeers.get()
         echo "Discovered and connecting to " & $discoveredNodes
         waitFor chat.node.connectToNodes(discoveredNodes)
-      else:
-        warn "Failed to find peers via DNS discovery", error = discoveredPeers.error
     else:
-      warn "Failed to init Waku DNS discovery", error = wakuDnsDiscovery.error
+      warn "Failed to init Waku DNS discovery"

   let peerInfo = node.switch.peerInfo
   let listenStr = $peerInfo.addrs[0] & "/p2p/" & $peerInfo.peerId
@@ -462,37 +482,36 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
         else:
           newSeq[byte](0)

-      let chatLine = getChatLine(payload)
+      let
+        pb = Chat2Message.init(payload)
+        chatLine =
+          if pb.isOk:
+            pb[].toString()
+          else:
+            string.fromBytes(payload)
       echo &"{chatLine}"
     info "Hit store handler"

-  block storeQueryBlock:
-    let queryRes = (
-      await node.query(
-        StoreQueryRequest(contentTopics: @[chat.contentTopic]), storenode.get()
-      )
-    ).valueOr:
-      error "Store query failed", error = error
-      break storeQueryBlock
-    storeHandler(queryRes)
+  let queryRes = await node.query(
+    StoreQueryRequest(contentTopics: @[chat.contentTopic]), storenode.get()
+  )
+  if queryRes.isOk():
+    storeHandler(queryRes.value)

 # NOTE Must be mounted after relay
 if conf.lightpushnode != "":
   let peerInfo = parsePeerInfo(conf.lightpushnode)
   if peerInfo.isOk():
-    (await node.mountLegacyLightPush()).isOkOr:
-      error "failed to mount legacy lightpush", error = error
-      quit(QuitFailure)
-    node.mountLegacyLightPushClient()
+    await mountLightPush(node)
+    node.mountLightPushClient()
     node.peerManager.addServicePeer(peerInfo.value, WakuLightpushCodec)
   else:
     error "LightPush not mounted. Couldn't parse conf.lightpushnode",
       error = peerInfo.error

 if conf.filternode != "":
-  if (let peerInfo = parsePeerInfo(conf.filternode); peerInfo.isErr()):
-    error "Filter not mounted. Couldn't parse conf.filternode", error = peerInfo.error
-  else:
+  let peerInfo = parsePeerInfo(conf.filternode)
+  if peerInfo.isOk():
     await node.mountFilter()
     await node.mountFilterClient()
@@ -503,6 +522,8 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
         chat.printReceivedMessage(msg)

     # TODO: Here to support FilterV2 relevant subscription.
+  else:
+    error "Filter not mounted. Couldn't parse conf.filternode", error = peerInfo.error

 # Subscribe to a topic, if relay is mounted
 if conf.relay:
@@ -513,35 +534,33 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
         chat.printReceivedMessage(msg)

     node.subscribe(
-      (kind: PubsubSub, topic: DefaultPubsubTopic), WakuRelayHandler(handler)
-    ).isOkOr:
-      error "failed to subscribe to pubsub topic",
-        topic = DefaultPubsubTopic, error = error
+      (kind: PubsubSub, topic: DefaultPubsubTopic), some(WakuRelayHandler(handler))
+    )

   if conf.rlnRelay:
     info "WakuRLNRelay is enabled"

     proc spamHandler(wakuMessage: WakuMessage) {.gcsafe, closure.} =
-      info "spam handler is called"
-      let chatLineResult = getChatLine(wakuMessage.payload)
-      echo "spam message is found and discarded : " & chatLineResult
+      debug "spam handler is called"
+      let chatLineResult = chat.getChatLine(wakuMessage)
+      if chatLineResult.isOk():
+        echo "A spam message is found and discarded : ", chatLineResult.value
+      else:
+        echo "A spam message is found and discarded"
       chat.prompt = false
       showChatPrompt(chat)

     echo "rln-relay preparation is in progress..."

     let rlnConf = WakuRlnConfig(
-      dynamic: conf.rlnRelayDynamic,
-      credIndex: conf.rlnRelayCredIndex,
-      chainId: UInt256.fromBytesBE(conf.rlnRelayChainId.toBytesBE()),
-      ethClientUrls: conf.ethClientUrls.mapIt(string(it)),
-      creds: some(
-        RlnRelayCreds(
-          path: conf.rlnRelayCredPath, password: conf.rlnRelayCredPassword
-        )
-      ),
-      userMessageLimit: conf.rlnRelayUserMessageLimit,
-      epochSizeSec: conf.rlnEpochSizeSec,
+      rlnRelayDynamic: conf.rlnRelayDynamic,
+      rlnRelayCredIndex: conf.rlnRelayCredIndex,
+      rlnRelayEthContractAddress: conf.rlnRelayEthContractAddress,
+      rlnRelayEthClientAddress: string(conf.rlnRelayethClientAddress),
+      rlnRelayCredPath: conf.rlnRelayCredPath,
+      rlnRelayCredPassword: conf.rlnRelayCredPassword,
+      rlnRelayUserMessageLimit: conf.rlnRelayUserMessageLimit,
+      rlnEpochSizeSec: conf.rlnEpochSizeSec,
     )

     waitFor node.mountRlnRelay(rlnConf, spamHandler = some(spamHandler))
@@ -564,6 +583,9 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
   await chat.readWriteLoop()

+  if conf.keepAlive:
+    node.startKeepalive()
+
   runForever()

 proc main(rng: ref HmacDrbgContext) {.async.} =


@@ -18,8 +18,7 @@ type
     prod
     test

-  EthRpcUrl* = distinct string
+  EthRpcUrl = distinct string

   Chat2Conf* = object ## General node config
     logLevel* {.
       desc: "Sets the log level.", defaultValue: LogLevel.INFO, name: "log-level"
@@ -158,8 +157,7 @@ type
     ## DNS discovery config
     dnsDiscovery* {.
-      desc:
-        "Deprecated, please set dns-discovery-url instead. Enable discovering nodes via DNS",
+      desc: "Enable discovering nodes via DNS",
       defaultValue: false,
       name: "dns-discovery"
     .}: bool
@@ -170,11 +168,10 @@ type
       name: "dns-discovery-url"
     .}: string

-    dnsAddrsNameServers* {.
-      desc:
-        "DNS name server IPs to query for DNS multiaddrs resolution. Argument may be repeated.",
+    dnsDiscoveryNameServers* {.
+      desc: "DNS name server IPs to query. Argument may be repeated.",
       defaultValue: @[parseIpAddress("1.1.1.1"), parseIpAddress("1.0.0.1")],
-      name: "dns-addrs-name-server"
+      name: "dns-discovery-name-server"
     .}: seq[IpAddress]

     ## Chat2 configuration
@@ -215,13 +212,6 @@ type
       name: "rln-relay"
     .}: bool

-    rlnRelayChainId* {.
-      desc:
-        "Chain ID of the provided contract (optional, will fetch from RPC provider if not used)",
-      defaultValue: 0,
-      name: "rln-relay-chain-id"
-    .}: uint
-
     rlnRelayCredPath* {.
       desc: "The path for peristing rln-relay credential",
       defaultValue: "",
@@ -250,12 +240,11 @@ type
       name: "rln-relay-id-commitment-key"
     .}: string

-    ethClientUrls* {.
-      desc:
-        "HTTP address of an Ethereum testnet client e.g., http://localhost:8540/. Argument may be repeated.",
-      defaultValue: newSeq[EthRpcUrl](0),
+    rlnRelayEthClientAddress* {.
+      desc: "HTTP address of an Ethereum testnet client e.g., http://localhost:8540/",
+      defaultValue: "http://localhost:8540/",
       name: "rln-relay-eth-client-address"
-    .}: seq[EthRpcUrl]
+    .}: EthRpcUrl

     rlnRelayEthContractAddress* {.
       desc: "Address of membership contract on an Ethereum testnet",


@@ -23,7 +23,6 @@ import
     waku_store,
     factory/builder,
     common/utils/matterbridge_client,
-    common/rate_limit/setting,
   ],
   # Chat 2 imports
   ../chat2/chat2,
@@ -126,20 +125,25 @@ proc toMatterbridge(
   assert chat2Msg.isOk

-  if not cmb.mbClient
-    .postMessage(text = string.fromBytes(chat2Msg[].payload), username = chat2Msg[].nick)
-    .containsValue(true):
+  let postRes = cmb.mbClient.postMessage(
+    text = string.fromBytes(chat2Msg[].payload), username = chat2Msg[].nick
+  )
+
+  if postRes.isErr() or (postRes[] == false):
     chat2_mb_dropped.inc(labelValues = ["duplicate"])
     error "Matterbridge host unreachable. Dropping message."

 proc pollMatterbridge(cmb: Chat2MatterBridge, handler: MbMessageHandler) {.async.} =
   while cmb.running:
-    let msg = cmb.mbClient.getMessages().valueOr:
-      error "Matterbridge host unreachable. Sleeping before retrying."
-      await sleepAsync(chronos.seconds(10))
-      continue
-
-    for jsonNode in msg:
-      await handler(jsonNode)
+    let getRes = cmb.mbClient.getMessages()
+    if getRes.isOk():
+      for jsonNode in getRes[]:
+        await handler(jsonNode)
+    else:
+      error "Matterbridge host unreachable. Sleeping before retrying."
+      await sleepAsync(chronos.seconds(10))

     await sleepAsync(cmb.pollPeriod)

 ##############
@@ -164,7 +168,9 @@ proc new*(
   let mbClient = MatterbridgeClient.new(mbHostUri, mbGateway)

   # Let's verify the Matterbridge configuration before continuing
-  if mbClient.isHealthy().valueOr(false):
+  let clientHealth = mbClient.isHealthy()
+  if clientHealth.isOk() and clientHealth[]:
     info "Reached Matterbridge host", host = mbClient.host
   else:
     raise newException(ValueError, "Matterbridge client not reachable/healthy")
@@ -194,7 +200,7 @@ proc start*(cmb: Chat2MatterBridge) {.async.} =
   cmb.running = true

-  info "Start polling Matterbridge"
+  debug "Start polling Matterbridge"

   # Start Matterbridge polling (@TODO: use streaming interface)
   proc mbHandler(jsonNode: JsonNode) {.async.} =
@@ -204,15 +210,12 @@ proc start*(cmb: Chat2MatterBridge) {.async.} =
   asyncSpawn cmb.pollMatterbridge(mbHandler)

   # Start Waku v2 node
-  info "Start listening on Waku v2"
+  debug "Start listening on Waku v2"
   await cmb.nodev2.start()

   # Always mount relay for bridge
   # `triggerSelf` is false on a `bridge` to avoid duplicates
-  (await cmb.nodev2.mountRelay()).isOkOr:
-    error "failed to mount relay", error = error
-    return
+  await cmb.nodev2.mountRelay()
   cmb.nodev2.wakuRelay.triggerSelf = false

   # Bridging
@@ -226,9 +229,7 @@ proc start*(cmb: Chat2MatterBridge) {.async.} =
     except:
       error "exception in relayHandler: " & getCurrentExceptionMsg()

-  cmb.nodev2.subscribe((kind: PubsubSub, topic: DefaultPubsubTopic), relayHandler).isOkOr:
-    error "failed to subscribe to relay", topic = DefaultPubsubTopic, error = error
-    return
+  cmb.nodev2.subscribe((kind: PubsubSub, topic: DefaultPubsubTopic), some(relayHandler))

 proc stop*(cmb: Chat2MatterBridge) {.async: (raises: [Exception]).} =
   info "Stopping Chat2MatterBridge"
@@ -240,7 +241,7 @@ proc stop*(cmb: Chat2MatterBridge) {.async: (raises: [Exception]).} =
 {.pop.}
   # @TODO confutils.nim(775, 17) Error: can raise an unlisted exception: ref IOError
 when isMainModule:
-  import waku/common/utils/nat, waku/rest_api/message_cache
+  import waku/common/utils/nat, waku/waku_api/message_cache

   let
     rng = newRng()
@@ -249,21 +250,25 @@ when isMainModule:
   if conf.logLevel != LogLevel.NONE:
     setLogLevel(conf.logLevel)

-  let (nodev2ExtIp, nodev2ExtPort, _) = setupNat(
+  let natRes = setupNat(
     conf.nat,
     clientId,
     Port(uint16(conf.libp2pTcpPort) + conf.portsShift),
     Port(uint16(conf.udpPort) + conf.portsShift),
-  ).valueOr:
-    raise newException(ValueError, "setupNat error " & error)
+  )
+  if natRes.isErr():
+    error "Error in setupNat", error = natRes.error

-  ## The following heuristic assumes that, in absence of manual
-  ## config, the external port is the same as the bind port.
-  let extPort =
-    if nodev2ExtIp.isSome() and nodev2ExtPort.isNone():
-      some(Port(uint16(conf.libp2pTcpPort) + conf.portsShift))
-    else:
-      nodev2ExtPort
+  # Load address configuration
+  let
+    (nodev2ExtIp, nodev2ExtPort, _) = natRes.get()
+    ## The following heuristic assumes that, in absence of manual
+    ## config, the external port is the same as the bind port.
+    extPort =
+      if nodev2ExtIp.isSome() and nodev2ExtPort.isNone():
+        some(Port(uint16(conf.libp2pTcpPort) + conf.portsShift))
+      else:
+        nodev2ExtPort

   let bridge = Chat2Matterbridge.new(
     mbHostUri = "http://" & $initTAddress(conf.mbHostAddress, Port(conf.mbHostPort)),


@@ -91,7 +91,7 @@ type Chat2MatterbridgeConf* = object
       name: "filternode"
     .}: string

   # Matterbridge options
   mbHostAddress* {.
     desc: "Listening address of the Matterbridge host",
     defaultValue: parseIpAddress("127.0.0.1"),
@@ -126,9 +126,11 @@ proc completeCmdArg*(T: type keys.KeyPair, val: string): seq[string] =
   return @[]

 proc parseCmdArg*(T: type crypto.PrivateKey, p: string): T =
-  let key = SkPrivateKey.init(p).valueOr:
-    raise newException(ValueError, "Invalid private key")
-  return crypto.PrivateKey(scheme: Secp256k1, skkey: key)
+  let key = SkPrivateKey.init(p)
+  if key.isOk():
+    crypto.PrivateKey(scheme: Secp256k1, skkey: key.get())
+  else:
+    raise newException(ValueError, "Invalid private key")

 proc completeCmdArg*(T: type crypto.PrivateKey, val: string): seq[string] =
   return @[]


@@ -1,663 +0,0 @@
## chat2 is an example of usage of Waku v2. For suggested usage options, please
## see dingpu tutorial in docs folder.

when not (compileOption("threads")):
  {.fatal: "Please, compile this program with the --threads:on option!".}

{.push raises: [].}

import std/[strformat, strutils, times, options, random, sequtils]
import
  confutils,
  chronicles,
  chronos,
  eth/keys,
  bearssl,
  results,
  stew/[byteutils],
  metrics,
  metrics/chronos_httpserver
import
  libp2p/[
    switch, # manage transports, a single entry point for dialing and listening
    crypto/crypto, # cryptographic functions
    stream/connection, # create and close stream read / write connections
    multiaddress,
    # encode different addressing schemes. For example, /ip4/7.7.7.7/tcp/6543 means it is using IPv4 protocol and TCP
    peerinfo,
    # manage the information of a peer, such as peer ID and public / private key
    peerid, # Implement how peers interact
    protobuf/minprotobuf, # message serialisation/deserialisation from and to protobufs
    nameresolving/dnsresolver,
    protocols/mix/curve25519,
  ] # define DNS resolution
import
  waku/[
    waku_core,
    waku_lightpush/common,
    waku_lightpush/rpc,
    waku_enr,
    discovery/waku_dnsdisc,
    waku_node,
    node/waku_metrics,
    node/peer_manager,
    factory/builder,
    common/utils/nat,
    waku_store/common,
    waku_filter_v2/client,
    common/logging,
  ],
  ./config_chat2mix

import libp2p/protocols/pubsub/rpc/messages, libp2p/protocols/pubsub/pubsub
import ../../waku/waku_rln_relay

logScope:
  topics = "chat2 mix"

const Help =
  """
  Commands: /[?|help|connect|nick|exit]
  help: Prints this help
  connect: dials a remote peer
  nick: change nickname for current chat session
  exit: exits chat session
"""

# XXX Connected is a bit annoying, because incoming connections don't trigger state change
# Could poll connection pool or something here, I suppose
# TODO Ensure connected turns true on incoming connections, or get rid of it
type Chat = ref object
  node: WakuNode # waku node for publishing, subscribing, etc
  transp: StreamTransport # transport streams between read & write file descriptor
  subscribed: bool # indicates if a node is subscribed or not to a topic
  connected: bool # if the node is connected to another peer
  started: bool # if the node has started
  nick: string # nickname for this chat session
  prompt: bool # chat prompt is showing
  contentTopic: string # default content topic for chat messages
  conf: Chat2Conf # configuration for chat2

type
  PrivateKey* = crypto.PrivateKey
  Topic* = waku_core.PubsubTopic

const MinMixNodePoolSize = 4
#####################
## chat2 protobufs ##
#####################

type
  SelectResult*[T] = Result[T, string]

  Chat2Message* = object
    timestamp*: int64
    nick*: string
    payload*: seq[byte]

proc getPubsubTopic*(
    conf: Chat2Conf, node: WakuNode, contentTopic: string
): PubsubTopic =
  let shard = node.wakuAutoSharding.get().getShard(contentTopic).valueOr:
    echo "Could not parse content topic: " & error
    return "" #TODO: fix this.
  return $RelayShard(clusterId: conf.clusterId, shardId: shard.shardId)

proc init*(T: type Chat2Message, buffer: seq[byte]): ProtoResult[T] =
  var msg = Chat2Message()
  let pb = initProtoBuffer(buffer)

  var timestamp: uint64
  discard ?pb.getField(1, timestamp)
  msg.timestamp = int64(timestamp)

  discard ?pb.getField(2, msg.nick)
  discard ?pb.getField(3, msg.payload)

  ok(msg)

proc encode*(message: Chat2Message): ProtoBuffer =
  var serialised = initProtoBuffer()
  serialised.write(1, uint64(message.timestamp))
  serialised.write(2, message.nick)
  serialised.write(3, message.payload)
  return serialised

proc `$`*(message: Chat2Message): string =
  # Get message date and timestamp in local time
  let time = message.timestamp.fromUnix().local().format("'<'MMM' 'dd,' 'HH:mm'>'")
  return time & " " & message.nick & ": " & string.fromBytes(message.payload)

#####################
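The `Chat2Message` helpers above form a small protobuf roundtrip: `encode` writes fields 1-3 and `init` reads them back, with `$` producing the timestamped chat line. A hedged usage sketch, assuming the module's imports (`minprotobuf`, `stew/byteutils`, `std/times`) are in scope; the nick and payload values are illustrative only:

```nim
# Illustrative roundtrip; not part of the deleted file.
let original = Chat2Message(
  timestamp: getTime().toUnix(), nick: "alice", payload: "hi there".toBytes()
)
let decoded = Chat2Message.init(original.encode().buffer)
doAssert decoded.isOk
doAssert decoded.get().nick == "alice"
echo $decoded.get() # prints a line like "<Nov 03, 12:00> alice: hi there"
```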
proc connectToNodes(c: Chat, nodes: seq[string]) {.async.} =
  echo "Connecting to nodes"
  await c.node.connectToNodes(nodes)
  c.connected = true

proc showChatPrompt(c: Chat) =
  if not c.prompt:
    try:
      stdout.write(">> ")
      stdout.flushFile()
      c.prompt = true
    except IOError:
      discard

proc getChatLine(payload: seq[byte]): string =
  # No payload encoding/encryption from Waku
  let pb = Chat2Message.init(payload).valueOr:
    return string.fromBytes(payload)
  return $pb

proc printReceivedMessage(c: Chat, msg: WakuMessage) =
  let chatLine = getChatLine(msg.payload)
  try:
    echo &"{chatLine}"
  except ValueError:
    # Formatting fail. Print chat line in any case.
    echo chatLine
  c.prompt = false
  showChatPrompt(c)
  trace "Printing message", chatLine, contentTopic = msg.contentTopic

proc readNick(transp: StreamTransport): Future[string] {.async.} =
  # Chat prompt
  stdout.write("Choose a nickname >> ")
  stdout.flushFile()
  return await transp.readLine()

proc startMetricsServer(
    serverIp: IpAddress, serverPort: Port
): Result[MetricsHttpServerRef, string] =
  info "Starting metrics HTTP server", serverIp = $serverIp, serverPort = $serverPort

  let server = MetricsHttpServerRef.new($serverIp, serverPort).valueOr:
    return err("metrics HTTP server start failed: " & $error)
  try:
    waitFor server.start()
  except CatchableError:
    return err("metrics HTTP server start failed: " & getCurrentExceptionMsg())

  info "Metrics HTTP server started", serverIp = $serverIp, serverPort = $serverPort
  ok(server)
proc publish(c: Chat, line: string) {.async.} =
  # First create a Chat2Message protobuf with this line of text
  let time = getTime().toUnix()
  let chat2pb =
    Chat2Message(timestamp: time, nick: c.nick, payload: line.toBytes()).encode()

  ## @TODO: error handling on failure
  proc handler(response: LightPushResponse) {.gcsafe, closure.} =
    trace "lightpush response received", response = response

  var message = WakuMessage(
    payload: chat2pb.buffer,
    contentTopic: c.contentTopic,
    version: 0,
    timestamp: getNanosecondTime(time),
  )

  try:
    if not c.node.wakuLightpushClient.isNil():
      # Attempt lightpush with mix
      (
        waitFor c.node.lightpushPublish(
          some(c.conf.getPubsubTopic(c.node, c.contentTopic)),
          message,
          none(RemotePeerInfo),
          true,
        )
      ).isOkOr:
        error "failed to publish lightpush message", error = error
    else:
      error "failed to publish message as lightpush client is not initialized"
  except CatchableError:
    error "caught error publishing message: ", error = getCurrentExceptionMsg()
# TODO This should read or be subscribe handler subscribe
proc readAndPrint(c: Chat) {.async.} =
while true:
# while p.connected:
# # TODO: echo &"{p.id} -> "
#
# echo cast[string](await p.conn.readLp(1024))
#echo "readAndPrint subscribe NYI"
await sleepAsync(100)
# TODO Implement
proc writeAndPrint(c: Chat) {.async.} =
while true:
# Connect state not updated on incoming WakuRelay connections
# if not c.connected:
# echo "type an address or wait for a connection:"
# echo "type /[help|?] for help"
# Chat prompt
showChatPrompt(c)
let line = await c.transp.readLine()
if line.startsWith("/help") or line.startsWith("/?") or not c.started:
echo Help
continue
# if line.startsWith("/disconnect"):
# echo "Ending current session"
# if p.connected and p.conn.closed.not:
# await p.conn.close()
# p.connected = false
elif line.startsWith("/connect"):
# TODO Should be able to connect to multiple peers for Waku chat
if c.connected:
echo "already connected to at least one peer"
continue
echo "enter address of remote peer"
let address = await c.transp.readLine()
if address.len > 0:
await c.connectToNodes(@[address])
elif line.startsWith("/nick"):
# Set a new nickname
c.nick = await readNick(c.transp)
echo "You are now known as " & c.nick
elif line.startsWith("/exit"):
echo "quitting..."
try:
await c.node.stop()
except CatchableError:
echo "exception happened when stopping: " & getCurrentExceptionMsg()
quit(QuitSuccess)
else:
# XXX connected state problematic
if c.started:
echo "publishing message: " & line
await c.publish(line)
# TODO Connect to peer logic?
else:
try:
if line.startsWith("/") and "p2p" in line:
await c.connectToNodes(@[line])
except CatchableError:
echo &"unable to dial remote peer {line}"
echo getCurrentExceptionMsg()
proc readWriteLoop(c: Chat) {.async.} =
asyncSpawn c.writeAndPrint() # execute the async function but does not block
asyncSpawn c.readAndPrint()
proc readInput(wfd: AsyncFD) {.thread, raises: [Defect, CatchableError].} =
## This procedure performs reading from `stdin` and sends data over
## pipe to main thread.
let transp = fromPipe(wfd)
while true:
let line = stdin.readLine()
discard waitFor transp.write(line & "\r\n")
var alreadyUsedServicePeers {.threadvar.}: seq[RemotePeerInfo]
proc selectRandomServicePeer*(
pm: PeerManager, actualPeer: Option[RemotePeerInfo], codec: string
): Result[RemotePeerInfo, void] =
if actualPeer.isSome():
alreadyUsedServicePeers.add(actualPeer.get())
let supportivePeers = pm.switch.peerStore.getPeersByProtocol(codec).filterIt(
it notin alreadyUsedServicePeers
)
if supportivePeers.len == 0:
return err()
let rndPeerIndex = rand(0 .. supportivePeers.len - 1)
return ok(supportivePeers[rndPeerIndex])
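The `selectRandomServicePeer` helper above picks a random peer that supports a given protocol codec while excluding peers that were already tried. A minimal Python sketch of the same selection logic (names and the dict-based peer store are illustrative, not part of nwaku):

```python
import random

def select_random_service_peer(peers_by_codec, codec, already_used, actual_peer=None):
    """Pick a random peer supporting `codec`, excluding already-used ones.

    `peers_by_codec` maps codec -> list of peer ids, mirroring the peer-store
    lookup. Returns None when no suitable peer remains (the Nim version
    returns err()).
    """
    if actual_peer is not None:
        # Remember the current peer so it is not selected again
        already_used.append(actual_peer)
    candidates = [p for p in peers_by_codec.get(codec, []) if p not in already_used]
    if not candidates:
        return None
    return random.choice(candidates)
```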
proc maintainSubscription(
wakuNode: WakuNode,
filterPubsubTopic: PubsubTopic,
filterContentTopic: ContentTopic,
filterPeer: RemotePeerInfo,
preventPeerSwitch: bool,
) {.async.} =
var actualFilterPeer = filterPeer
const maxFailedSubscribes = 3
const maxFailedServiceNodeSwitches = 10
var noFailedSubscribes = 0
var noFailedServiceNodeSwitches = 0
# Use chronos.Duration explicitly to avoid mismatch with std/times.Duration
let RetryWait = chronos.seconds(2) # Quick retry interval
let SubscriptionMaintenance = chronos.seconds(30) # Subscription maintenance interval
while true:
info "maintaining subscription at", peer = constructMultiaddrStr(actualFilterPeer)
# First use filter-ping to check if we have an active subscription
let pingErr = (await wakuNode.wakuFilterClient.ping(actualFilterPeer)).errorOr:
await sleepAsync(SubscriptionMaintenance)
info "subscription is live."
continue
# No subscription found. Let's subscribe.
error "ping failed.", error = pingErr
trace "no subscription found. Sending subscribe request"
let subscribeErr = (
await wakuNode.filterSubscribe(
some(filterPubsubTopic), filterContentTopic, actualFilterPeer
)
).errorOr:
await sleepAsync(SubscriptionMaintenance)
if noFailedSubscribes > 0:
noFailedSubscribes -= 1
notice "subscribe request successful."
continue
noFailedSubscribes += 1
error "Subscribe request failed.",
error = subscribeErr, peer = actualFilterPeer, failCount = noFailedSubscribes
# TODO: disconnect from failed actualFilterPeer
# asyncSpawn(wakuNode.peerManager.switch.disconnect(p))
# wakunode.peerManager.peerStore.delete(actualFilterPeer)
if noFailedSubscribes < maxFailedSubscribes:
await sleepAsync(RetryWait) # Wait a bit before retrying
elif not preventPeerSwitch:
# try again with a new peer without delay; reassign (not shadow with `let`)
# so the new peer is used on the next loop iteration
actualFilterPeer = selectRandomServicePeer(
wakuNode.peerManager, some(actualFilterPeer), WakuFilterSubscribeCodec
).valueOr:
error "Failed to find new service peer. Exiting."
noFailedServiceNodeSwitches += 1
break
info "Found new peer for codec",
codec = filterPubsubTopic, peer = constructMultiaddrStr(actualFilterPeer)
noFailedSubscribes = 0
else:
await sleepAsync(SubscriptionMaintenance)
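The `maintainSubscription` loop above is a small state machine: ping the service peer; if the subscription is alive, sleep for the maintenance interval; otherwise resubscribe, backing off briefly on failure and switching service peers after `maxFailedSubscribes` consecutive failures. A condensed, framework-free Python sketch of that control flow (function names, the iteration bound, and the callback shapes are placeholders, not nwaku APIs):

```python
def maintain_subscription(ping, subscribe, pick_new_peer, peer,
                          max_failed_subscribes=3, max_iterations=100):
    """Drive ping/resubscribe cycles; return the peer in use when the loop ends.

    ping(peer) and subscribe(peer) return True on success; pick_new_peer(old)
    returns a replacement peer or None. max_iterations bounds the demo loop
    where the real code runs forever with sleeps.
    """
    failed = 0
    for _ in range(max_iterations):
        if ping(peer):
            continue  # subscription is live; real code sleeps ~30 s here
        if subscribe(peer):
            failed = max(0, failed - 1)  # decay the failure counter on success
            continue
        failed += 1
        if failed >= max_failed_subscribes:
            new_peer = pick_new_peer(peer)
            if new_peer is None:
                break  # no service peer left; give up
            peer, failed = new_peer, 0
        # real code sleeps ~2 s before the quick retry
    return peer
```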
{.pop.}
# @TODO confutils.nim(775, 17) Error: can raise an unlisted exception: ref IOError
proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
let
transp = fromPipe(rfd)
conf = Chat2Conf.load()
nodekey =
if conf.nodekey.isSome():
conf.nodekey.get()
else:
PrivateKey.random(Secp256k1, rng[]).tryGet()
# set log level
if conf.logLevel != LogLevel.NONE:
setLogLevel(conf.logLevel)
let (extIp, extTcpPort, extUdpPort) = setupNat(
conf.nat,
clientId,
Port(uint16(conf.tcpPort) + conf.portsShift),
Port(uint16(conf.udpPort) + conf.portsShift),
).valueOr:
raise newException(ValueError, "setupNat error " & error)
var enrBuilder = EnrBuilder.init(nodeKey)
enrBuilder.withWakuRelaySharding(
RelayShards(clusterId: conf.clusterId, shardIds: conf.shards)
).isOkOr:
error "failed to add sharded topics to ENR", error = error
quit(QuitFailure)
let record = enrBuilder.build().valueOr:
error "failed to create enr record", error = error
quit(QuitFailure)
let node = block:
var builder = WakuNodeBuilder.init()
builder.withNodeKey(nodeKey)
builder.withRecord(record)
builder
.withNetworkConfigurationDetails(
conf.listenAddress,
Port(uint16(conf.tcpPort) + conf.portsShift),
extIp,
extTcpPort,
wsBindPort = Port(uint16(conf.websocketPort) + conf.portsShift),
wsEnabled = conf.websocketSupport,
wssEnabled = conf.websocketSecureSupport,
)
.tryGet()
builder.build().tryGet()
node.mountAutoSharding(conf.clusterId, conf.numShardsInNetwork).isOkOr:
error "failed to mount waku sharding: ", error = error
quit(QuitFailure)
node.mountMetadata(conf.clusterId, conf.shards).isOkOr:
error "failed to mount waku metadata protocol: ", err = error
quit(QuitFailure)
let (mixPrivKey, mixPubKey) = generateKeyPair().valueOr:
error "failed to generate mix key pair", error = error
return
(await node.mountMix(conf.clusterId, mixPrivKey, conf.mixnodes)).isOkOr:
error "failed to mount waku mix protocol: ", error = $error
quit(QuitFailure)
await node.mountRendezvousClient(conf.clusterId)
await node.start()
node.peerManager.start()
await node.mountLibp2pPing()
await node.mountPeerExchangeClient()
let pubsubTopic = conf.getPubsubTopic(node, conf.contentTopic)
echo "pubsub topic is: " & pubsubTopic
let nick = await readNick(transp)
echo "Welcome, " & nick & "!"
var chat = Chat(
node: node,
transp: transp,
subscribed: true,
connected: false,
started: true,
nick: nick,
prompt: false,
contentTopic: conf.contentTopic,
conf: conf,
)
var dnsDiscoveryUrl = none(string)
if conf.fleet != Fleet.none:
# Use DNS discovery to connect to selected fleet
echo "Connecting to " & $conf.fleet & " fleet using DNS discovery..."
if conf.fleet == Fleet.test:
dnsDiscoveryUrl = some(
"enrtree://AOGYWMBYOUIMOENHXCHILPKY3ZRFEULMFI4DOM442QSZ73TT2A7VI@test.waku.nodes.status.im"
)
else:
# Connect to sandbox by default
dnsDiscoveryUrl = some(
"enrtree://AIRVQ5DDA4FFWLRBCHJWUWOO6X6S4ZTZ5B667LQ6AJU6PEYDLRD5O@sandbox.waku.nodes.status.im"
)
elif conf.dnsDiscoveryUrl != "":
# No pre-selected fleet. Discover nodes via DNS using user config
info "Discovering nodes using Waku DNS discovery", url = conf.dnsDiscoveryUrl
dnsDiscoveryUrl = some(conf.dnsDiscoveryUrl)
var discoveredNodes: seq[RemotePeerInfo]
if dnsDiscoveryUrl.isSome:
var nameServers: seq[TransportAddress]
for ip in conf.dnsDiscoveryNameServers:
nameServers.add(initTAddress(ip, Port(53))) # Assume all servers use port 53
let dnsResolver = DnsResolver.new(nameServers)
proc resolver(domain: string): Future[string] {.async, gcsafe.} =
trace "resolving", domain = domain
let resolved = await dnsResolver.resolveTxt(domain)
return resolved[0] # Use only first answer
let wakuDnsDiscovery = WakuDnsDiscovery.init(dnsDiscoveryUrl.get(), resolver)
if wakuDnsDiscovery.isOk:
let discoveredPeers = await wakuDnsDiscovery.get().findPeers()
if discoveredPeers.isOk:
info "Connecting to discovered peers"
discoveredNodes = discoveredPeers.get()
echo "Discovered and connecting to " & $discoveredNodes
waitFor chat.node.connectToNodes(discoveredNodes)
else:
warn "Failed to find peers via DNS discovery", error = discoveredPeers.error
else:
warn "Failed to init Waku DNS discovery", error = wakuDnsDiscovery.error
let peerInfo = node.switch.peerInfo
let listenStr = $peerInfo.addrs[0] & "/p2p/" & $peerInfo.peerId
echo &"Listening on\n {listenStr}"
if (conf.storenode != "") or (conf.store == true):
await node.mountStore()
var storenode: Option[RemotePeerInfo]
if conf.storenode != "":
let peerInfo = parsePeerInfo(conf.storenode)
if peerInfo.isOk():
storenode = some(peerInfo.value)
else:
error "Incorrect conf.storenode", error = peerInfo.error
elif discoveredNodes.len > 0:
echo "Store enabled, but no store nodes configured. Choosing one at random from discovered peers"
storenode = some(discoveredNodes[rand(0 .. len(discoveredNodes) - 1)])
if storenode.isSome():
# We have a viable storenode. Let's query it for historical messages.
echo "Connecting to storenode: " & $(storenode.get())
node.mountStoreClient()
node.peerManager.addServicePeer(storenode.get(), WakuStoreCodec)
proc storeHandler(response: StoreQueryResponse) {.gcsafe.} =
for msg in response.messages:
let payload =
if msg.message.isSome():
msg.message.get().payload
else:
newSeq[byte](0)
let chatLine = getChatLine(payload)
echo &"{chatLine}"
info "Hit store handler"
let queryRes = await node.query(
StoreQueryRequest(contentTopics: @[chat.contentTopic]), storenode.get()
)
if queryRes.isOk():
storeHandler(queryRes.value)
if conf.edgemode: #Mount light protocol clients
node.mountLightPushClient()
await node.mountFilterClient()
let filterHandler = proc(
pubsubTopic: PubsubTopic, msg: WakuMessage
): Future[void] {.async, closure.} =
trace "Hit filter handler", contentTopic = msg.contentTopic
chat.printReceivedMessage(msg)
node.wakuFilterClient.registerPushHandler(filterHandler)
var servicePeerInfo: RemotePeerInfo
if conf.serviceNode != "":
servicePeerInfo = parsePeerInfo(conf.serviceNode).valueOr:
error "Couldn't parse conf.serviceNode", error = error
RemotePeerInfo()
if servicePeerInfo == nil or $servicePeerInfo.peerId == "":
# Assuming that service node supports all services
servicePeerInfo = selectRandomServicePeer(
node.peerManager, none(RemotePeerInfo), WakuLightpushCodec
).valueOr:
error "Couldn't find any service peer"
quit(QuitFailure)
node.peerManager.addServicePeer(servicePeerInfo, WakuLightpushCodec)
node.peerManager.addServicePeer(servicePeerInfo, WakuPeerExchangeCodec)
#node.peerManager.addServicePeer(servicePeerInfo, WakuRendezVousCodec)
# Start maintaining subscription
asyncSpawn maintainSubscription(
node, pubsubTopic, conf.contentTopic, servicePeerInfo, false
)
echo "waiting for mix nodes to be discovered..."
# Poll peer exchange until enough mix nodes are discovered
while node.getMixNodePoolSize() < MinMixNodePoolSize:
info "waiting for mix nodes to be discovered",
currentpoolSize = node.getMixNodePoolSize()
discard await node.fetchPeerExchangePeers()
await sleepAsync(1000)
notice "ready to publish with mix node pool size ",
currentpoolSize = node.getMixNodePoolSize()
echo "ready to publish messages now"
# Once min mixnodes are discovered loop as per default setting
node.startPeerExchangeLoop()
if conf.metricsLogging:
startMetricsLog()
if conf.metricsServer:
let metricsServer = startMetricsServer(
conf.metricsServerAddress, Port(conf.metricsServerPort + conf.portsShift)
)
await chat.readWriteLoop()
runForever()
proc main(rng: ref HmacDrbgContext) {.async.} =
let (rfd, wfd) = createAsyncPipe()
if rfd == asyncInvalidPipe or wfd == asyncInvalidPipe:
raise newException(ValueError, "Could not initialize pipe!")
var thread: Thread[AsyncFD]
thread.createThread(readInput, wfd)
try:
await processInput(rfd, rng)
# Handle only ConfigurationError for now
# TODO: Throw other errors from the mounting procedure
except ConfigurationError as e:
raise e
when isMainModule: # isMainModule = true when the module is compiled as the main file
let rng = crypto.newRng()
try:
waitFor(main(rng))
except CatchableError as e:
raise e
## Dump of things that can be improved:
##
## - Incoming dialed peer does not change connected state (not relying on it for now)
## - Unclear if staticnode argument works (can enter manually)
## - Don't trigger self / double publish own messages
## - Test/default to cluster node connection (diff protocol version)
## - Redirect logs to separate file
## - Expose basic publish/subscribe etc commands with /syntax
## - Show part of peerid to know who sent message
## - Deal with protobuf messages (e.g. other chat protocol, or encrypted)


@ -1,315 +0,0 @@
import chronicles, chronos, std/strutils, regex
import
eth/keys,
libp2p/crypto/crypto,
libp2p/crypto/secp,
libp2p/crypto/curve25519,
libp2p/multiaddress,
libp2p/multicodec,
nimcrypto/utils,
confutils,
confutils/defs,
confutils/std/net
import waku/waku_core, waku/waku_mix
type
Fleet* = enum
none
sandbox
test
EthRpcUrl* = distinct string
Chat2Conf* = object ## General node config
edgemode* {.
defaultValue: true, desc: "Run the app in edge mode", name: "edge-mode"
.}: bool
logLevel* {.
desc: "Sets the log level.", defaultValue: LogLevel.INFO, name: "log-level"
.}: LogLevel
nodekey* {.desc: "P2P node private key as 64 char hex string.", name: "nodekey".}:
Option[crypto.PrivateKey]
listenAddress* {.
defaultValue: defaultListenAddress(config),
desc: "Listening address for the LibP2P traffic.",
name: "listen-address"
.}: IpAddress
tcpPort* {.desc: "TCP listening port.", defaultValue: 60000, name: "tcp-port".}:
Port
udpPort* {.desc: "UDP listening port.", defaultValue: 60000, name: "udp-port".}:
Port
portsShift* {.
desc: "Add a shift to all port numbers.", defaultValue: 0, name: "ports-shift"
.}: uint16
nat* {.
desc:
"Specify method to use for determining public address. " &
"Must be one of: any, none, upnp, pmp, extip:<IP>.",
defaultValue: "any"
.}: string
## Persistence config
dbPath* {.
desc: "The database path for persistent storage", defaultValue: "", name: "db-path"
.}: string
persistPeers* {.
desc: "Enable peer persistence: true|false",
defaultValue: false,
name: "persist-peers"
.}: bool
persistMessages* {.
desc: "Enable message persistence: true|false",
defaultValue: false,
name: "persist-messages"
.}: bool
## Relay config
relay* {.
desc: "Enable relay protocol: true|false", defaultValue: true, name: "relay"
.}: bool
staticnodes* {.
desc: "Peer multiaddr to directly connect with. Argument may be repeated.",
name: "staticnode",
defaultValue: @[]
.}: seq[string]
mixnodes* {.
desc:
"Multiaddress and mix-key of mix node to be statically specified in format multiaddr:mixPubKey. Argument may be repeated.",
name: "mixnode"
.}: seq[MixNodePubInfo]
keepAlive* {.
desc: "Enable keep-alive for idle connections: true|false",
defaultValue: false,
name: "keep-alive"
.}: bool
clusterId* {.
desc:
"Cluster id that the node is running in. Node in a different cluster id is disconnected.",
defaultValue: 1,
name: "cluster-id"
.}: uint16
numShardsInNetwork* {.
desc: "Number of shards in the network",
defaultValue: 8,
name: "num-shards-in-network"
.}: uint32
shards* {.
desc:
"Shards index to subscribe to [0..NUM_SHARDS_IN_NETWORK-1]. Argument may be repeated.",
defaultValue:
@[
uint16(0),
uint16(1),
uint16(2),
uint16(3),
uint16(4),
uint16(5),
uint16(6),
uint16(7),
],
name: "shard"
.}: seq[uint16]
## Store config
store* {.
desc: "Enable store protocol: true|false", defaultValue: false, name: "store"
.}: bool
storenode* {.
desc: "Peer multiaddr to query for storage.", defaultValue: "", name: "storenode"
.}: string
## Filter config
filter* {.
desc: "Enable filter protocol: true|false", defaultValue: false, name: "filter"
.}: bool
## Lightpush config
lightpush* {.
desc: "Enable lightpush protocol: true|false",
defaultValue: false,
name: "lightpush"
.}: bool
servicenode* {.
desc: "Peer multiaddr to request lightpush and filter services",
defaultValue: "",
name: "servicenode"
.}: string
## Metrics config
metricsServer* {.
desc: "Enable the metrics server: true|false",
defaultValue: false,
name: "metrics-server"
.}: bool
metricsServerAddress* {.
desc: "Listening address of the metrics server.",
defaultValue: parseIpAddress("127.0.0.1"),
name: "metrics-server-address"
.}: IpAddress
metricsServerPort* {.
desc: "Listening HTTP port of the metrics server.",
defaultValue: 8008,
name: "metrics-server-port"
.}: uint16
metricsLogging* {.
desc: "Enable metrics logging: true|false",
defaultValue: true,
name: "metrics-logging"
.}: bool
## DNS discovery config
dnsDiscovery* {.
desc:
"Deprecated, please set dns-discovery-url instead. Enable discovering nodes via DNS",
defaultValue: false,
name: "dns-discovery"
.}: bool
dnsDiscoveryUrl* {.
desc: "URL for DNS node list in format 'enrtree://<key>@<fqdn>'",
defaultValue: "",
name: "dns-discovery-url"
.}: string
dnsDiscoveryNameServers* {.
desc: "DNS name server IPs to query. Argument may be repeated.",
defaultValue: @[parseIpAddress("1.1.1.1"), parseIpAddress("1.0.0.1")],
name: "dns-discovery-name-server"
.}: seq[IpAddress]
## Chat2 configuration
fleet* {.
desc:
"Select the fleet to connect to. This sets the DNS discovery URL to the selected fleet.",
defaultValue: Fleet.test,
name: "fleet"
.}: Fleet
contentTopic* {.
desc: "Content topic for chat messages.",
defaultValue: "/toy-chat-mix/2/huilong/proto",
name: "content-topic"
.}: string
## Websocket Configuration
websocketSupport* {.
desc: "Enable websocket: true|false",
defaultValue: false,
name: "websocket-support"
.}: bool
websocketPort* {.
desc: "WebSocket listening port.", defaultValue: 8000, name: "websocket-port"
.}: Port
websocketSecureSupport* {.
desc: "WebSocket Secure Support.",
defaultValue: false,
name: "websocket-secure-support"
.}: bool
proc parseCmdArg*(T: type MixNodePubInfo, p: string): T =
let elements = p.split(":")
if elements.len != 2:
raise newException(
ValueError, "Invalid format for mix node expected multiaddr:mixPublicKey"
)
let multiaddr = MultiAddress.init(elements[0]).valueOr:
raise newException(ValueError, "Invalid multiaddress format")
if not multiaddr.contains(multiCodec("ip4")).get():
raise newException(
ValueError, "Invalid format for ip address, expected a ipv4 multiaddress"
)
return MixNodePubInfo(
multiaddr: elements[0], pubKey: intoCurve25519Key(utils.fromHex(elements[1]))
)
# NOTE: Keys are different in nim-libp2p
proc parseCmdArg*(T: type crypto.PrivateKey, p: string): T =
try:
let key = SkPrivateKey.init(utils.fromHex(p)).tryGet()
# XXX: Here at the moment
result = crypto.PrivateKey(scheme: Secp256k1, skkey: key)
except CatchableError as e:
raise newException(ValueError, "Invalid private key")
proc completeCmdArg*(T: type crypto.PrivateKey, val: string): seq[string] =
return @[]
proc parseCmdArg*(T: type IpAddress, p: string): T =
try:
result = parseIpAddress(p)
except CatchableError as e:
raise newException(ValueError, "Invalid IP address")
proc completeCmdArg*(T: type IpAddress, val: string): seq[string] =
return @[]
proc parseCmdArg*(T: type Port, p: string): T =
try:
result = Port(parseInt(p))
except CatchableError as e:
raise newException(ValueError, "Invalid Port number")
proc completeCmdArg*(T: type Port, val: string): seq[string] =
return @[]
proc parseCmdArg*(T: type Option[uint], p: string): T =
try:
some(parseUint(p))
except CatchableError:
raise newException(ValueError, "Invalid unsigned integer")
proc completeCmdArg*(T: type EthRpcUrl, val: string): seq[string] =
return @[]
proc parseCmdArg*(T: type EthRpcUrl, s: string): T =
## allowed patterns:
## http://url:port
## https://url:port
## http://url:port/path
## https://url:port/path
## http://url/with/path
## http://url:port/path?query
## https://url:port/path?query
## disallowed patterns:
## any valid/invalid ws or wss url
var httpPattern =
re2"^(https?):\/\/((localhost)|([\w_-]+(?:(?:\.[\w_-]+)+)))(:[0-9]{1,5})?([\w.,@?^=%&:\/~+#-]*[\w@?^=%&\/~+#-])*"
var wsPattern =
re2"^(wss?):\/\/((localhost)|([\w_-]+(?:(?:\.[\w_-]+)+)))(:[0-9]{1,5})?([\w.,@?^=%&:\/~+#-]*[\w@?^=%&\/~+#-])*"
if regex.match(s, wsPattern):
raise newException(
ValueError, "Websocket RPC URL is not supported, Please use an HTTP URL"
)
if not regex.match(s, httpPattern):
raise newException(ValueError, "Invalid HTTP RPC URL")
return EthRpcUrl(s)
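The `parseCmdArg` for `EthRpcUrl` above accepts http(s) endpoints and explicitly rejects ws(s) ones, checking the websocket pattern first. The same check can be sketched in Python with simplified regexes (the patterns below are abbreviated from the Nim originals, not identical to them):

```python
import re

# Simplified versions of the Nim httpPattern / wsPattern
HTTP_PATTERN = re.compile(
    r"^(https?)://((localhost)|([\w_-]+(?:\.[\w_-]+)+))(:[0-9]{1,5})?(/[\w./?%&=~+#-]*)?$"
)
WS_PATTERN = re.compile(r"^(wss?)://")

def parse_eth_rpc_url(s):
    """Validate an Ethereum RPC URL the way the Nim parser does:
    reject websocket URLs outright, then require a well-formed HTTP URL."""
    if WS_PATTERN.match(s):
        raise ValueError("Websocket RPC URL is not supported, please use an HTTP URL")
    if not HTTP_PATTERN.match(s):
        raise ValueError("Invalid HTTP RPC URL")
    return s
```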
func defaultListenAddress*(conf: Chat2Conf): IpAddress =
# TODO: How should we select between IPv4 and IPv6
# Maybe there should be a config option for this.
(static parseIpAddress("0.0.0.0"))


@ -1,4 +0,0 @@
-d:chronicles_line_numbers
-d:chronicles_runtime_filtering:on
-d:discv5_protocol_id:d5waku
path = "../.."


@ -12,16 +12,16 @@ MIN_MESSAGE_SIZE=15Kb
 MAX_MESSAGE_SIZE=145Kb
 ## for wakusim
-#SHARD=0
+#PUBSUB=/waku/2/rs/66/0
 #CONTENT_TOPIC=/tester/2/light-pubsub-test/wakusim
 #CLUSTER_ID=66
 ## for status.prod
-#SHARDS=32
+PUBSUB=/waku/2/rs/16/32
 CONTENT_TOPIC=/tester/2/light-pubsub-test/fleet
 CLUSTER_ID=16
 ## for TWN
-#SHARD=4
+#PUBSUB=/waku/2/rs/1/4
 #CONTENT_TOPIC=/tester/2/light-pubsub-test/twn
 #CLUSTER_ID=1


@ -1,33 +1,37 @@
 # TESTING IMAGE --------------------------------------------------------------
 ## NOTICE: This is a short cut build file for ubuntu users who compiles nwaku in ubuntu distro.
 ## This is used for faster turnaround time for testing the compiled binary.
 ## Prerequisites: compiled liteprotocoltester binary in build/ directory
 FROM ubuntu:noble AS prod
 LABEL maintainer="zoltan@status.im"
 LABEL source="https://github.com/waku-org/nwaku"
 LABEL description="Lite Protocol Tester: Waku light-client"
 LABEL commit="unknown"
 LABEL version="unknown"
 # DevP2P, LibP2P, and JSON RPC ports
 EXPOSE 30303 60000 8545
 # Referenced in the binary
 RUN apt-get update && apt-get install -y --no-install-recommends \
     libgcc1 \
+    libpcre3 \
     libpq-dev \
     wget \
     iproute2 \
     && rm -rf /var/lib/apt/lists/*
+# Fix for 'Error loading shared library libpcre.so.3: No such file or directory'
+RUN ln -s /usr/lib/libpcre.so /usr/lib/libpcre.so.3
 COPY build/liteprotocoltester /usr/bin/
 COPY apps/liteprotocoltester/run_tester_node.sh /usr/bin/
 COPY apps/liteprotocoltester/run_tester_node_on_fleet.sh /usr/bin/
 ENTRYPOINT ["/usr/bin/run_tester_node.sh", "/usr/bin/liteprotocoltester"]
 # # By default just show help if called without arguments
 CMD ["--help"]


@ -7,7 +7,7 @@ ARG NIM_COMMIT
 ARG LOG_LEVEL=TRACE
 # Get build tools and required header files
-RUN apk add --no-cache bash git build-base openssl-dev linux-headers curl jq
+RUN apk add --no-cache bash git build-base openssl-dev pcre-dev linux-headers curl jq
 WORKDIR /app
 COPY . .
@ -40,11 +40,14 @@ LABEL version="unknown"
 EXPOSE 30303 60000 8545
 # Referenced in the binary
-RUN apk add --no-cache libgcc libpq-dev \
+RUN apk add --no-cache libgcc pcre-dev libpq-dev \
     wget \
     iproute2 \
     python3
+# Fix for 'Error loading shared library libpcre.so.3: No such file or directory'
+RUN ln -s /usr/lib/libpcre.so /usr/lib/libpcre.so.3
 COPY --from=nim-build /app/build/liteprotocoltester /usr/bin/
 RUN chmod +x /usr/bin/liteprotocoltester
@ -52,8 +55,6 @@ RUN chmod +x /usr/bin/liteprotocoltester
 FROM base_lpt AS standalone_lpt
 COPY --from=nim-build /app/apps/liteprotocoltester/run_tester_node.sh /usr/bin/
-COPY --from=nim-build /app/apps/liteprotocoltester/run_tester_node_on_fleet.sh /usr/bin/
 RUN chmod +x /usr/bin/run_tester_node.sh
 ENTRYPOINT ["/usr/bin/run_tester_node.sh", "/usr/bin/liteprotocoltester"]


@ -127,7 +127,7 @@ Run a SENDER role liteprotocoltester and a RECEIVER role one on different termin
 | ---: | :--- | :--- |
 | NUM_MESSAGES | Number of message to publish, 0 means infinite | 120 |
 | MESSAGE_INTERVAL_MILLIS | Frequency of messages in milliseconds | 1000 |
-| SHARD | Used shard for testing | 0 |
+| PUBSUB | Used pubsub_topic for testing | /waku/2/rs/66/0 |
 | CONTENT_TOPIC | content_topic for testing | /tester/1/light-pubsub-example/proto |
 | CLUSTER_ID | cluster_id of the network | 16 |
 | START_PUBLISHING_AFTER_SECS | Delay in seconds before starting to publish to let service node connected | 5 |
@ -272,7 +272,7 @@ export NUM_MESSAGES=200
 export MESSAGE_INTERVAL_MILLIS=1000
 export MIN_MESSAGE_SIZE=15Kb
 export MAX_MESSAGE_SIZE=145Kb
-export SHARD=32
+export PUBSUB=/waku/2/rs/16/32
 export CONTENT_TOPIC=/tester/2/light-pubsub-test/fleet
 export CLUSTER_ID=16
@ -307,7 +307,7 @@ export NUM_MESSAGES=300
 export MESSAGE_INTERVAL_MILLIS=7000
 export MIN_MESSAGE_SIZE=15Kb
 export MAX_MESSAGE_SIZE=145Kb
-export SHARD=4
+export PUBSUB=/waku/2/rs/1/4
 export CONTENT_TOPIC=/tester/2/light-pubsub-test/twn
 export CLUSTER_ID=1


@ -14,8 +14,8 @@ import
   libp2p/wire
 import
-  tools/confutils/cli_args,
   waku/[
+    factory/external_config,
     node/peer_manager,
     waku_lightpush/common,
     waku_relay,
@ -27,9 +27,22 @@ import
 logScope:
   topics = "diagnose connections"
+proc `$`*(cap: Capabilities): string =
+  case cap
+  of Capabilities.Relay:
+    return "Relay"
+  of Capabilities.Store:
+    return "Store"
+  of Capabilities.Filter:
+    return "Filter"
+  of Capabilities.Lightpush:
+    return "Lightpush"
+  of Capabilities.Sync:
+    return "Sync"
 proc allPeers(pm: PeerManager): string =
   var allStr: string = ""
-  for idx, peer in pm.switch.peerStore.peers():
+  for idx, peer in pm.wakuPeerStore.peers():
     allStr.add(
       " " & $idx & ". | " & constructMultiaddrStr(peer) & " | agent: " &
       peer.getAgent() & " | protos: " & $peer.protocols & " | caps: " &
@ -38,10 +51,10 @@ proc allPeers(pm: PeerManager): string =
   return allStr
 proc logSelfPeers*(pm: PeerManager) =
-  let selfLighpushPeers = pm.switch.peerStore.getPeersByProtocol(WakuLightPushCodec)
-  let selfRelayPeers = pm.switch.peerStore.getPeersByProtocol(WakuRelayCodec)
-  let selfFilterPeers = pm.switch.peerStore.getPeersByProtocol(WakuFilterSubscribeCodec)
-  let selfPxPeers = pm.switch.peerStore.getPeersByProtocol(WakuPeerExchangeCodec)
+  let selfLighpushPeers = pm.wakuPeerStore.getPeersByProtocol(WakuLightPushCodec)
+  let selfRelayPeers = pm.wakuPeerStore.getPeersByProtocol(WakuRelayCodec)
+  let selfFilterPeers = pm.wakuPeerStore.getPeersByProtocol(WakuFilterSubscribeCodec)
+  let selfPxPeers = pm.wakuPeerStore.getPeersByProtocol(WakuPeerExchangeCodec)
   let printable = catch:
     """*------------------------------------------------------------------------------------------*
@ -59,4 +72,7 @@ proc logSelfPeers*(pm: PeerManager) =
 {allPeers(pm)}
 *------------------------------------------------------------------------------------------*""".fmt()
-  echo printable.valueOr("Error while printing statistics: " & error.msg)
+  if printable.isErr():
+    echo "Error while printing statistics: " & printable.error().msg
+  else:
+    echo printable.get()


@ -16,7 +16,7 @@ x-rln-environment: &rln_env
 x-test-running-conditions: &test_running_conditions
   NUM_MESSAGES: ${NUM_MESSAGES:-120}
   MESSAGE_INTERVAL_MILLIS: "${MESSAGE_INTERVAL_MILLIS:-1000}"
-  SHARD: ${SHARD:-0}
+  PUBSUB: ${PUBSUB:-/waku/2/rs/66/0}
   CONTENT_TOPIC: ${CONTENT_TOPIC:-/tester/2/light-pubsub-test/wakusim}
   CLUSTER_ID: ${CLUSTER_ID:-66}
   MIN_MESSAGE_SIZE: ${MIN_MESSAGE_SIZE:-1Kb}


@ -9,14 +9,14 @@ x-logging: &logging
 x-eth-client-address: &eth_client_address ${ETH_CLIENT_ADDRESS:-} # Add your ETH_CLIENT_ADDRESS after the "-"
 x-rln-environment: &rln_env
-  RLN_RELAY_CONTRACT_ADDRESS: ${RLN_RELAY_CONTRACT_ADDRESS:-0xB9cd878C90E49F797B4431fBF4fb333108CB90e6}
+  RLN_RELAY_CONTRACT_ADDRESS: ${RLN_RELAY_CONTRACT_ADDRESS:-0xF471d71E9b1455bBF4b85d475afb9BB0954A29c4}
   RLN_RELAY_CRED_PATH: ${RLN_RELAY_CRED_PATH:-} # Optional: Add your RLN_RELAY_CRED_PATH after the "-"
   RLN_RELAY_CRED_PASSWORD: ${RLN_RELAY_CRED_PASSWORD:-} # Optional: Add your RLN_RELAY_CRED_PASSWORD after the "-"
 x-test-running-conditions: &test_running_conditions
   NUM_MESSAGES: ${NUM_MESSAGES:-120}
   MESSAGE_INTERVAL_MILLIS: "${MESSAGE_INTERVAL_MILLIS:-1000}"
-  SHARD: ${SHARD:-0}
+  PUBSUB: ${PUBSUB:-/waku/2/rs/66/0}
   CONTENT_TOPIC: ${CONTENT_TOPIC:-/tester/2/light-pubsub-test/wakusim}
   CLUSTER_ID: ${CLUSTER_ID:-66}
   MIN_MESSAGE_SIZE: ${MIN_MESSAGE_SIZE:-1Kb}


@@ -54,67 +54,69 @@ proc maintainSubscription(
   var noFailedSubscribes = 0
   var noFailedServiceNodeSwitches = 0
   var isFirstPingOnNewPeer = true
-  const RetryWaitMs = 2.seconds # Quick retry interval
-  const SubscriptionMaintenanceMs = 30.seconds # Subscription maintenance interval
   while true:
     info "maintaining subscription at", peer = constructMultiaddrStr(actualFilterPeer)
     # First use filter-ping to check if we have an active subscription
-    let pingErr = (await wakuNode.wakuFilterClient.ping(actualFilterPeer)).errorOr:
-      await sleepAsync(SubscriptionMaintenanceMs)
-      info "subscription is live."
-      continue
-    if isFirstPingOnNewPeer == false:
-      # Very first ping expected to fail as we have not yet subscribed at all
-      lpt_receiver_lost_subscription_count.inc()
-    isFirstPingOnNewPeer = false
-    # No subscription found. Let's subscribe.
-    error "ping failed.", error = pingErr
-    trace "no subscription found. Sending subscribe request"
-    let subscribeErr = (
-      await wakuNode.filterSubscribe(
+    let pingRes = await wakuNode.wakuFilterClient.ping(actualFilterPeer)
+    if pingRes.isErr():
+      if isFirstPingOnNewPeer == false:
+        # Very first ping expected to fail as we have not yet subscribed at all
+        lpt_receiver_lost_subscription_count.inc()
+      isFirstPingOnNewPeer = false
+      # No subscription found. Let's subscribe.
+      error "ping failed.", err = pingRes.error
+      trace "no subscription found. Sending subscribe request"
+      let subscribeRes = await wakuNode.filterSubscribe(
         some(filterPubsubTopic), filterContentTopic, actualFilterPeer
       )
-    ).errorOr:
-      await sleepAsync(SubscriptionMaintenanceMs)
-      if noFailedSubscribes > 0:
-        noFailedSubscribes -= 1
-      notice "subscribe request successful."
-      continue
-    noFailedSubscribes += 1
-    lpt_service_peer_failure_count.inc(
-      labelValues = ["receiver", actualFilterPeer.getAgent()]
-    )
-    error "Subscribe request failed.",
-      err = subscribeErr, peer = actualFilterPeer, failCount = noFailedSubscribes
+      if subscribeRes.isErr():
+        noFailedSubscribes += 1
+        lpt_service_peer_failure_count.inc(
+          labelValues = ["receiver", actualFilterPeer.getAgent()]
+        )
+        error "Subscribe request failed.",
+          err = subscribeRes.error,
+          peer = actualFilterPeer,
+          failCount = noFailedSubscribes
       # TODO: disconnet from failed actualFilterPeer
       # asyncSpawn(wakuNode.peerManager.switch.disconnect(p))
       # wakunode.peerManager.peerStore.delete(actualFilterPeer)
       if noFailedSubscribes < maxFailedSubscribes:
-      await sleepAsync(RetryWaitMs) # Wait a bit before retrying
-    elif not preventPeerSwitch:
-      # try again with new peer without delay
-      actualFilterPeer = selectRandomServicePeer(
+        await sleepAsync(2.seconds) # Wait a bit before retrying
+        continue
+      elif not preventPeerSwitch:
+        let peerOpt = selectRandomServicePeer(
           wakuNode.peerManager, some(actualFilterPeer), WakuFilterSubscribeCodec
-      ).valueOr:
-        error "Failed to find new service peer. Exiting."
-        noFailedServiceNodeSwitches += 1
-        break
+        )
+        if peerOpt.isOk():
+          actualFilterPeer = peerOpt.get()
       info "Found new peer for codec",
         codec = filterPubsubTopic, peer = constructMultiaddrStr(actualFilterPeer)
       noFailedSubscribes = 0
       lpt_change_service_peer_count.inc(labelValues = ["receiver"])
       isFirstPingOnNewPeer = true
+          continue # try again with new peer without delay
+        else:
+          error "Failed to find new service peer. Exiting."
+          noFailedServiceNodeSwitches += 1
+          break
+      else:
+        if noFailedSubscribes > 0:
+          noFailedSubscribes -= 1
+        notice "subscribe request successful."
     else:
-      await sleepAsync(SubscriptionMaintenanceMs)
+      info "subscription is live."
+    await sleepAsync(30.seconds) # Subscription maintenance interval

-proc setupAndListen*(
+proc setupAndSubscribe*(
   wakuNode: WakuNode, conf: LiteProtocolTesterConf, servicePeer: RemotePeerInfo
 ) =
   if isNil(wakuNode.wakuFilterClient):
@@ -128,9 +130,7 @@ proc setupAndListen*(
   var stats: PerPeerStatistics
   actualFilterPeer = servicePeer

-  let pushHandler = proc(
-    pubsubTopic: PubsubTopic, message: WakuMessage
-  ): Future[void] {.async, closure.} =
+  let pushHandler = proc(pubsubTopic: PubsubTopic, message: WakuMessage) {.async.} =
     let payloadStr = string.fromBytes(message.payload)
     let testerMessage = js.Json.decode(payloadStr, ProtocolTesterMessage)
     let msgHash = computeMessageHash(pubsubTopic, message).to0xHex
@@ -163,7 +163,7 @@ proc setupAndListen*(
   if conf.numMessages > 0 and
       waitFor stats.checkIfAllMessagesReceived(maxWaitForLastMessage):
-    waitFor unsubscribe(wakuNode, conf.getPubsubTopic(), conf.contentTopics[0])
+    waitFor unsubscribe(wakuNode, conf.pubsubTopics[0], conf.contentTopics[0])
     info "All messages received. Exiting."

     ## for gracefull shutdown through signal hooks
@@ -176,5 +176,5 @@ proc setupAndListen*(
   # Start maintaining subscription
   asyncSpawn maintainSubscription(
-    wakuNode, conf.getPubsubTopic(), conf.contentTopics[0], conf.fixedServicePeer
+    wakuNode, conf.pubsubTopics[0], conf.contentTopics[0], conf.fixedServicePeer
   )
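The retry policy in `maintainSubscription` above (retry quickly up to a failure limit, then switch to a random alternative service peer, and let successes slowly forgive past failures) can be sketched independently of Waku. The following Python sketch is illustrative only; names such as `maintain`, `max_failed`, and the `subscribe` callback are not from the codebase:

```python
import random

def maintain(peers, subscribe, max_failed=3):
    """Sketch of the retry-then-switch-peer policy.

    subscribe(peer) -> bool stands in for the filter subscribe request;
    on repeated failure a new random peer is picked, mirroring
    selectRandomServicePeer in the diff above."""
    peer = peers[0]
    failures = 0
    switches = 0
    while True:
        if subscribe(peer):
            # success decrements the failure counter, as in the diff
            failures = max(0, failures - 1)
            return peer, switches
        failures += 1
        if failures >= max_failed:
            candidates = [p for p in peers if p != peer]
            if not candidates:
                raise RuntimeError("Failed to find new service peer")
            peer = random.choice(candidates)  # switch service peer
            failures = 0
            switches += 1
```

The real loop never returns (it keeps maintaining the subscription); the return here only makes the policy easy to exercise in isolation.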

View File

@@ -4,7 +4,7 @@ NUM_MESSAGES=300
 MESSAGE_INTERVAL_MILLIS=1000
 MIN_MESSAGE_SIZE=15Kb
 MAX_MESSAGE_SIZE=145Kb
-SHARD=32
+PUBSUB=/waku/2/rs/16/32
 CONTENT_TOPIC=/tester/2/light-pubsub-test-at-infra/status-prod
 CLUSTER_ID=16
 LIGHTPUSH_BOOTSTRAP=enr:-QEKuED9AJm2HGgrRpVaJY2nj68ao_QiPeUT43sK-aRM7sMJ6R4G11OSDOwnvVacgN1sTw-K7soC5dzHDFZgZkHU0u-XAYJpZIJ2NIJpcISnYxMvim11bHRpYWRkcnO4WgAqNiVib290LTAxLmRvLWFtczMuc3RhdHVzLnByb2Quc3RhdHVzLmltBnZfACw2JWJvb3QtMDEuZG8tYW1zMy5zdGF0dXMucHJvZC5zdGF0dXMuaW0GAbveA4Jyc40AEAUAAQAgAEAAgAEAiXNlY3AyNTZrMaEC3rRtFQSgc24uWewzXaxTY8hDAHB8sgnxr9k8Rjb5GeSDdGNwgnZfg3VkcIIjKIV3YWt1Mg0

View File

@@ -1,24 +0,0 @@
-import chronos, results, options
-import waku/[waku_node, waku_core]
-
-import publisher_base
-
-type LegacyPublisher* = ref object of PublisherBase
-
-proc new*(T: type LegacyPublisher, wakuNode: WakuNode): T =
-  if isNil(wakuNode.wakuLegacyLightpushClient):
-    wakuNode.mountLegacyLightPushClient()
-  return LegacyPublisher(wakuNode: wakuNode)
-
-method send*(
-    self: LegacyPublisher,
-    topic: PubsubTopic,
-    message: WakuMessage,
-    servicePeer: RemotePeerInfo,
-): Future[Result[void, string]] {.async.} =
-  # when error it must return original error desc due the text is used for distinction between error types in metrics.
-  discard (
-    await self.wakuNode.legacyLightpushPublish(some(topic), message, servicePeer)
-  ).valueOr:
-    return err(error)
-  return ok()

View File

@@ -21,17 +21,14 @@ import
   ./tester_message,
   ./lpt_metrics,
   ./diagnose_connections,
-  ./service_peer_management,
-  ./publisher_base,
-  ./legacy_publisher,
-  ./v3_publisher
+  ./service_peer_management

 randomize()

 type SizeRange* = tuple[min: uint64, max: uint64]

-var RANDOM_PAYLOAD {.threadvar.}: seq[byte]
-RANDOM_PAYLOAD = urandom(1024 * 1024)
+var RANDOM_PALYLOAD {.threadvar.}: seq[byte]
+RANDOM_PALYLOAD = urandom(1024 * 1024)
   # 1MiB of random payload to be used to extend message

 proc prepareMessage(
@@ -62,8 +59,9 @@ proc prepareMessage(
   if renderSize < len(contentPayload).uint64:
     renderSize = len(contentPayload).uint64

-  let finalPayload =
-    concat(contentPayload, RANDOM_PAYLOAD[0 .. renderSize - len(contentPayload).uint64])
+  let finalPayload = concat(
+    contentPayload, RANDOM_PALYLOAD[0 .. renderSize - len(contentPayload).uint64]
+  )
   let message = WakuMessage(
     payload: finalPayload, # content of the message
     contentTopic: contentTopic, # content topic to publish to
@@ -89,7 +87,10 @@ proc reportSentMessages() =
      |{numMessagesToSend+failedToSendCount:>11} |{messagesSent:>11} |{failedToSendCount:>11} |
      *----------------------------------------*""".fmt()

-  echo report.valueOr("Error while printing statistics")
+  if report.isErr:
+    echo "Error while printing statistics"
+  else:
+    echo report.get()

   echo "*--------------------------------------------------------------------------------------------------*"
   echo "| Failure cause | count |"
@@ -107,7 +108,6 @@ proc reportSentMessages() =
 proc publishMessages(
   wakuNode: WakuNode,
-  publisher: PublisherBase,
   servicePeer: RemotePeerInfo,
   lightpushPubsubTopic: PubsubTopic,
   lightpushContentTopic: ContentTopic,
@@ -145,18 +145,13 @@ proc publishMessages(
       lightpushContentTopic,
       renderMsgSize,
     )
-
-    let publishStartTime = Moment.now()
-
-    let wlpRes = await publisher.send(lightpushPubsubTopic, message, actualServicePeer)
-
-    let publishDuration = Moment.now() - publishStartTime
+    let wlpRes = await wakuNode.lightpushPublish(
+      some(lightpushPubsubTopic), message, actualServicePeer
+    )

     let msgHash = computeMessageHash(lightpushPubsubTopic, message).to0xHex

     if wlpRes.isOk():
-      lpt_publish_duration_seconds.observe(publishDuration.milliseconds.float / 1000)
       sentMessages[messagesSent] = (hash: msgHash, relayed: true)
       notice "published message using lightpush",
         index = messagesSent + 1,
@@ -187,34 +182,34 @@ proc publishMessages(
       )
       if not preventPeerSwitch and noFailedPush > maxFailedPush:
         info "Max push failure limit reached, Try switching peer."
-        actualServicePeer = selectRandomServicePeer(
+        let peerOpt = selectRandomServicePeer(
           wakuNode.peerManager, some(actualServicePeer), WakuLightPushCodec
-        ).valueOr:
+        )
+        if peerOpt.isOk():
+          actualServicePeer = peerOpt.get()
+
+          info "New service peer in use",
+            codec = lightpushPubsubTopic,
+            peer = constructMultiaddrStr(actualServicePeer)
+
+          noFailedPush = 0
+          noOfServicePeerSwitches += 1
+          lpt_change_service_peer_count.inc(labelValues = ["publisher"])
+          continue # try again with new peer without delay
+        else:
           error "Failed to find new service peer. Exiting."
           noFailedServiceNodeSwitches += 1
           break
-
-        info "New service peer in use",
-          codec = lightpushPubsubTopic,
-          peer = constructMultiaddrStr(actualServicePeer)
-
-        noFailedPush = 0
-        noOfServicePeerSwitches += 1
-        lpt_change_service_peer_count.inc(labelValues = ["publisher"])
-        continue # try again with new peer without delay

     await sleepAsync(messageInterval)

 proc setupAndPublish*(
   wakuNode: WakuNode, conf: LiteProtocolTesterConf, servicePeer: RemotePeerInfo
 ) =
-  var publisher: PublisherBase
-  if conf.lightpushVersion == LightpushVersion.LEGACY:
-    info "Using legacy lightpush protocol for publishing messages"
-    publisher = LegacyPublisher.new(wakuNode)
-  else:
-    info "Using lightpush v3 protocol for publishing messages"
-    publisher = V3Publisher.new(wakuNode)
+  if isNil(wakuNode.wakuLightpushClient):
+    # if we have not yet initialized lightpush client, then do it as the only way we can get here is
+    # by having a service peer discovered.
+    wakuNode.mountLightPushClient()

   # give some time to receiver side to set up
   let waitTillStartTesting = conf.startPublishingAfter.seconds
@@ -255,9 +250,8 @@ proc setupAndPublish*(
   # Start maintaining subscription
   asyncSpawn publishMessages(
     wakuNode,
-    publisher,
     servicePeer,
-    conf.getPubsubTopic(),
+    conf.pubsubTopics[0],
     conf.contentTopics[0],
     conf.numMessages,
     (min: parsedMinMsgSize, max: parsedMaxMsgSize),

View File

@@ -11,14 +11,16 @@ import
   confutils

 import
-  tools/confutils/cli_args,
   waku/[
     common/enr,
     common/logging,
-    factory/waku as waku_factory,
+    factory/waku,
+    factory/external_config,
     waku_node,
+    node/health_monitor,
     node/waku_metrics,
     node/peer_manager,
+    waku_api/rest/builder as rest_server_builder,
     waku_lightpush/common,
     waku_filter_v2,
     waku_peer_exchange/protocol,
@@ -26,8 +28,8 @@ import
     waku_core/multiaddrstr,
   ],
   ./tester_config,
-  ./publisher,
-  ./receiver,
+  ./lightpush_publisher,
+  ./filter_subscriber,
   ./diagnose_connections,
   ./service_peer_management
@@ -47,16 +49,19 @@ when isMainModule:
   ## 5. Start monitoring tools and external interfaces
   ## 6. Setup graceful shutdown hooks

-  const versionString = "version / git commit hash: " & waku_factory.git_version
+  const versionString = "version / git commit hash: " & waku.git_version

-  let conf = LiteProtocolTesterConf.load(version = versionString).valueOr:
-    error "failure while loading the configuration", error = error
+  let confRes = LiteProtocolTesterConf.load(version = versionString)
+  if confRes.isErr():
+    error "failure while loading the configuration", error = confRes.error
     quit(QuitFailure)

+  var conf = confRes.get()
+
   ## Logging setup
   logging.setupLog(conf.logLevel, conf.logFormat)

-  info "Running Lite Protocol Tester node", version = waku_factory.git_version
+  info "Running Lite Protocol Tester node", version = waku.git_version
   logConfig(conf)

   ##Prepare Waku configuration
@@ -64,13 +69,13 @@ when isMainModule:
   ## - override according to tester functionality
   ##
-  var wakuNodeConf: WakuNodeConf
+  var wakuConf: WakuNodeConf
   if conf.configFile.isSome():
     try:
       var configFile {.threadvar.}: InputFile
       configFile = conf.configFile.get()
-      wakuNodeConf = WakuNodeConf.load(
+      wakuConf = WakuNodeConf.load(
         version = versionString,
         printUsage = false,
         secondarySources = proc(
@@ -83,54 +88,81 @@ when isMainModule:
       error "Loading Waku configuration failed", error = getCurrentExceptionMsg()
       quit(QuitFailure)

-  wakuNodeConf.logLevel = conf.logLevel
-  wakuNodeConf.logFormat = conf.logFormat
-  wakuNodeConf.nat = conf.nat
-  wakuNodeConf.maxConnections = 500
-  wakuNodeConf.restAddress = conf.restAddress
-  wakuNodeConf.restPort = conf.restPort
-  wakuNodeConf.restAllowOrigin = conf.restAllowOrigin
-  wakuNodeConf.dnsAddrsNameServers =
-    @[parseIpAddress("8.8.8.8"), parseIpAddress("1.1.1.1")]
-  wakuNodeConf.shards = @[conf.shard]
-  wakuNodeConf.contentTopics = conf.contentTopics
-  wakuNodeConf.clusterId = conf.clusterId
+  wakuConf.logLevel = conf.logLevel
+  wakuConf.logFormat = conf.logFormat
+  wakuConf.nat = conf.nat
+  wakuConf.maxConnections = 500
+  wakuConf.restAddress = conf.restAddress
+  wakuConf.restPort = conf.restPort
+  wakuConf.restAllowOrigin = conf.restAllowOrigin
+  wakuConf.dnsAddrs = true
+  wakuConf.dnsAddrsNameServers = @[parseIpAddress("8.8.8.8"), parseIpAddress("1.1.1.1")]
+  wakuConf.pubsubTopics = conf.pubsubTopics
+  wakuConf.contentTopics = conf.contentTopics
+  wakuConf.clusterId = conf.clusterId
   ## TODO: Depending on the tester needs we might extend here with shards, clusterId, etc...

-  wakuNodeConf.metricsServer = true
-  wakuNodeConf.metricsServerAddress = parseIpAddress("0.0.0.0")
-  wakuNodeConf.metricsServerPort = conf.metricsPort
+  wakuConf.metricsServer = true
+  wakuConf.metricsServerAddress = parseIpAddress("0.0.0.0")
+  wakuConf.metricsServerPort = conf.metricsPort

   # If bootstrap option is chosen we expect our clients will not mounted
   # so we will mount PeerExchange manually to gather possible service peers,
   # if got some we will mount the client protocols afterward.
-  wakuNodeConf.peerExchange = false
-  wakuNodeConf.relay = false
-  wakuNodeConf.filter = false
-  wakuNodeConf.lightpush = false
-  wakuNodeConf.store = false
-  wakuNodeConf.rest = false
-  wakuNodeConf.relayServiceRatio = "40:60"
+  wakuConf.peerExchange = false
+  wakuConf.relay = false
+  wakuConf.filter = false
+  wakuConf.lightpush = false
+  wakuConf.store = false
+  wakuConf.rest = false

-  let wakuConf = wakuNodeConf.toWakuConf().valueOr:
-    error "Issue converting toWakuConf", error = $error
+  # NOTE: {.threadvar.} is used to make the global variable GC safe for the closure uses it
+  # It will always be called from main thread anyway.
+  # Ref: https://nim-lang.org/docs/manual.html#threads-gc-safety
+  var nodeHealthMonitor {.threadvar.}: WakuNodeHealthMonitor
+  nodeHealthMonitor = WakuNodeHealthMonitor()
+  nodeHealthMonitor.setOverallHealth(HealthStatus.INITIALIZING)
+
+  let restServer = rest_server_builder.startRestServerEsentials(
+    nodeHealthMonitor, wakuConf
+  ).valueOr:
+    error "Starting esential REST server failed.", error = $error
     quit(QuitFailure)

-  var waku = (waitFor Waku.new(wakuConf)).valueOr:
+  var wakuApp = Waku.new(wakuConf).valueOr:
     error "Waku initialization failed", error = error
     quit(QuitFailure)

-  (waitFor startWaku(addr waku)).isOkOr:
+  wakuApp.restServer = restServer
+
+  nodeHealthMonitor.setNode(wakuApp.node)
+
+  (waitFor startWaku(addr wakuApp)).isOkOr:
     error "Starting waku failed", error = error
     quit(QuitFailure)

-  info "Setting up shutdown hooks"
+  rest_server_builder.startRestServerProtocolSupport(
+    restServer, wakuApp.node, wakuApp.wakuDiscv5, wakuConf
+  ).isOkOr:
+    error "Starting protocols support REST server failed.", error = $error
+    quit(QuitFailure)

-  proc asyncStopper(waku: Waku) {.async: (raises: [Exception]).} =
-    await waku.stop()
+  wakuApp.metricsServer = waku_metrics.startMetricsServerAndLogging(wakuConf).valueOr:
+    error "Starting monitoring and external interfaces failed", error = error
+    quit(QuitFailure)
+
+  nodeHealthMonitor.setOverallHealth(HealthStatus.READY)
+
+  debug "Setting up shutdown hooks"
+  ## Setup shutdown hooks for this process.
+  ## Stop node gracefully on shutdown.
+
+  proc asyncStopper(wakuApp: Waku) {.async: (raises: [Exception]).} =
+    nodeHealthMonitor.setOverallHealth(HealthStatus.SHUTTING_DOWN)
+    await wakuApp.stop()
     quit(QuitSuccess)

   # Handle Ctrl-C SIGINT
@@ -139,7 +171,7 @@ when isMainModule:
     # workaround for https://github.com/nim-lang/Nim/issues/4057
     setupForeignThreadGc()
     notice "Shutting down after receiving SIGINT"
-    asyncSpawn asyncStopper(waku)
+    asyncSpawn asyncStopper(wakuApp)

   setControlCHook(handleCtrlC)
@@ -147,7 +179,7 @@ when isMainModule:
   when defined(posix):
     proc handleSigterm(signal: cint) {.noconv.} =
       notice "Shutting down after receiving SIGTERM"
-      asyncSpawn asyncStopper(waku)
+      asyncSpawn asyncStopper(wakuApp)

     c_signal(ansi_c.SIGTERM, handleSigterm)
@@ -160,7 +192,7 @@ when isMainModule:
       # Not available in -d:release mode
       writeStackTrace()

-      waitFor waku.stop()
+      waitFor wakuApp.stop()
       quit(QuitFailure)

     c_signal(ansi_c.SIGSEGV, handleSigsegv)
@@ -170,8 +202,10 @@ when isMainModule:
   var codec = WakuLightPushCodec
   # mounting relevant client, for PX filter client must be mounted ahead
   if conf.testFunc == TesterFunctionality.SENDER:
+    wakuApp.node.mountLightPushClient()
     codec = WakuLightPushCodec
   else:
+    waitFor wakuApp.node.mountFilterClient()
     codec = WakuFilterSubscribeCodec

   var lookForServiceNode = false
@@ -179,17 +213,17 @@ when isMainModule:
   if conf.serviceNode.len == 0:
     if conf.bootstrapNode.len > 0:
       info "Bootstrapping with PeerExchange to gather random service node"
-      let futForServiceNode = pxLookupServiceNode(waku.node, conf)
+      let futForServiceNode = pxLookupServiceNode(wakuApp.node, conf)
       if not (waitFor futForServiceNode.withTimeout(20.minutes)):
         error "Service node not found in time via PX"
         quit(QuitFailure)

-      futForServiceNode.read().isOkOr:
+      if futForServiceNode.read().isErr():
         error "Service node for test not found via PX"
         quit(QuitFailure)

       serviceNodePeerInfo = selectRandomServicePeer(
-        waku.node.peerManager, none(RemotePeerInfo), codec
+        wakuApp.node.peerManager, none(RemotePeerInfo), codec
       ).valueOr:
         error "Service node selection failed"
         quit(QuitFailure)
@@ -204,11 +238,11 @@ when isMainModule:
   info "Service node to be used", serviceNode = $serviceNodePeerInfo

-  logSelfPeers(waku.node.peerManager)
+  logSelfPeers(wakuApp.node.peerManager)

   if conf.testFunc == TesterFunctionality.SENDER:
-    setupAndPublish(waku.node, conf, serviceNodePeerInfo)
+    setupAndPublish(wakuApp.node, conf, serviceNodePeerInfo)
   else:
-    setupAndListen(waku.node, conf, serviceNodePeerInfo)
+    setupAndSubscribe(wakuApp.node, conf, serviceNodePeerInfo)

   runForever()

View File

@@ -47,10 +47,3 @@ declarePublicGauge lpt_px_peers,
 declarePublicGauge lpt_dialed_peers, "Number of peers successfully dialed", ["agent"]
 declarePublicGauge lpt_dial_failures, "Number of dial failures by cause", ["agent"]
-
-declarePublicHistogram lpt_publish_duration_seconds,
-  "duration to lightpush messages",
-  buckets = [
-    0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1.0, 2.5, 5.0, 7.5, 10.0,
-    15.0, 20.0, 30.0, Inf,
-  ]

View File

@@ -24,8 +24,8 @@ def run_tester_node(predefined_test_env):
     return os.system(script_cmd)

 if __name__ == "__main__":
-    if len(sys.argv) < 2 or sys.argv[1] not in ["RECEIVER", "SENDER", "SENDERV3"]:
-        print("Error: First argument must be either 'RECEIVER' or 'SENDER' or 'SENDERV3'")
+    if len(sys.argv) < 2 or sys.argv[1] not in ["RECEIVER", "SENDER"]:
+        print("Error: First argument must be either 'RECEIVER' or 'SENDER'")
         sys.exit(1)

     predefined_test_env_file = '/usr/bin/infra.env'

View File

@@ -1,14 +0,0 @@
-import chronos, results
-import waku/[waku_node, waku_core]
-
-type PublisherBase* = ref object of RootObj
-  wakuNode*: WakuNode
-
-method send*(
-    self: PublisherBase,
-    topic: PubsubTopic,
-    message: WakuMessage,
-    servicePeer: RemotePeerInfo,
-): Future[Result[void, string]] {.base, async.} =
-  discard
-  # when error it must return original error desc due the text is used for distinction between error types in metrics.

View File

@@ -5,10 +5,10 @@ IP=$(ip a | grep "inet " | grep -Fv 127.0.0.1 | sed 's/.*inet \([^/]*\).*/\1/')
 echo "Service node IP: ${IP}"

-if [ -n "${SHARD}" ]; then
-    SHARD=--shard="${SHARD}"
+if [ -n "${PUBSUB}" ]; then
+    PUBSUB=--pubsub-topic="${PUBSUB}"
 else
-    SHARD=--shard="0"
+    PUBSUB=--pubsub-topic="/waku/2/rs/66/0"
 fi

 if [ -n "${CLUSTER_ID}" ]; then
@@ -59,5 +59,5 @@ exec /usr/bin/wakunode\
       --metrics-server-port=8003\
       --metrics-server-address=0.0.0.0\
       --nat=extip:${IP}\
-      ${SHARD}\
+      ${PUBSUB}\
       ${CLUSTER_ID}

View File

@@ -25,12 +25,7 @@ fi
 FUNCTION=$2

 if [ "${FUNCTION}" = "SENDER" ]; then
-    FUNCTION="--test-func=SENDER --lightpush-version=LEGACY"
-    SERVICENAME=lightpush-service
-fi
-
-if [ "${FUNCTION}" = "SENDERV3" ]; then
-    FUNCTION="--test-func=SENDER --lightpush-version=V3"
+    FUNCTION=--test-func=SENDER
     SERVICENAME=lightpush-service
 fi
@@ -98,10 +93,10 @@ else
     FULL_NODE=--bootstrap-node="${SERIVCE_NODE_ADDR}"
 fi

-if [ -n "${SHARD}" ]; then
-    SHARD=--shard="${SHARD}"
+if [ -n "${PUBSUB}" ]; then
+    PUBSUB=--pubsub-topic="${PUBSUB}"
 else
-    SHARD=--shard="0"
+    PUBSUB=--pubsub-topic="/waku/2/rs/66/0"
 fi

 if [ -n "${CONTENT_TOPIC}" ]; then
@@ -133,25 +128,19 @@ if [ -n "${MESSAGE_INTERVAL_MILLIS}" ]; then
     MESSAGE_INTERVAL_MILLIS=--message-interval="${MESSAGE_INTERVAL_MILLIS}"
 fi

-if [ -n "${LOG_LEVEL}" ]; then
-    LOG_LEVEL=--log-level=${LOG_LEVEL}
-else
-    LOG_LEVEL=--log-level=INFO
-fi
-
 echo "Running binary: ${BINARY_PATH}"
 echo "Tester node: ${FUNCTION}"
 echo "Using service node: ${SERIVCE_NODE_ADDR}"
 echo "My external IP: ${MY_EXT_IP}"

 exec "${BINARY_PATH}"\
+      --log-level=INFO\
       --nat=extip:${MY_EXT_IP}\
       --test-peers\
-      ${LOG_LEVEL}\
       ${FULL_NODE}\
       ${MESSAGE_INTERVAL_MILLIS}\
       ${NUM_MESSAGES}\
-      ${SHARD}\
+      ${PUBSUB}\
       ${CONTENT_TOPIC}\
       ${CLUSTER_ID}\
       ${FUNCTION}\

View File

@@ -26,15 +26,7 @@ fi
 FUNCTION=$2

 if [ "${FUNCTION}" = "SENDER" ]; then
-    FUNCTION="--test-func=SENDER --lightpush-version=LEGACY"
-    SERIVCE_NODE_ADDR=${LIGHTPUSH_SERVICE_PEER:-${LIGHTPUSH_BOOTSTRAP:-}}
-    NODE_ARG=${LIGHTPUSH_SERVICE_PEER:+--service-node="${LIGHTPUSH_SERVICE_PEER}"}
-    NODE_ARG=${NODE_ARG:---bootstrap-node="${LIGHTPUSH_BOOTSTRAP}"}
-    METRICS_PORT=--metrics-port="${PUBLISHER_METRICS_PORT:-8003}"
-fi
-
-if [ "${FUNCTION}" = "SENDERV3" ]; then
-    FUNCTION="--test-func=SENDER --lightpush-version=V3"
+    FUNCTION=--test-func=SENDER
     SERIVCE_NODE_ADDR=${LIGHTPUSH_SERVICE_PEER:-${LIGHTPUSH_BOOTSTRAP:-}}
     NODE_ARG=${LIGHTPUSH_SERVICE_PEER:+--service-node="${LIGHTPUSH_SERVICE_PEER}"}
     NODE_ARG=${NODE_ARG:---bootstrap-node="${LIGHTPUSH_BOOTSTRAP}"}
@@ -56,10 +48,10 @@ fi
 MY_EXT_IP=$(wget -qO- --no-check-certificate https://api4.ipify.org)

-if [ -n "${SHARD}" ]; then
-    SHARD=--shard="${SHARD}"
+if [ -n "${PUBSUB}" ]; then
+    PUBSUB=--pubsub-topic="${PUBSUB}"
 else
-    SHARD=--shard="0"
+    PUBSUB=--pubsub-topic="/waku/2/rs/66/0"
 fi

 if [ -n "${CONTENT_TOPIC}" ]; then
@@ -91,25 +83,19 @@ if [ -n "${MESSAGE_INTERVAL_MILLIS}" ]; then
     MESSAGE_INTERVAL_MILLIS=--message-interval="${MESSAGE_INTERVAL_MILLIS}"
 fi

-if [ -n "${LOG_LEVEL}" ]; then
-    LOG_LEVEL=--log-level=${LOG_LEVEL}
-else
-    LOG_LEVEL=--log-level=INFO
-fi
-
 echo "Running binary: ${BINARY_PATH}"
 echo "Node function is: ${FUNCTION}"
 echo "Using service/bootstrap node as: ${NODE_ARG}"
 echo "My external IP: ${MY_EXT_IP}"

 exec "${BINARY_PATH}"\
+      --log-level=INFO\
       --nat=extip:${MY_EXT_IP}\
       --test-peers\
-      ${LOG_LEVEL}\
       ${NODE_ARG}\
       ${MESSAGE_INTERVAL_MILLIS}\
       ${NUM_MESSAGES}\
-      ${SHARD}\
+      ${PUBSUB}\
       ${CONTENT_TOPIC}\
       ${CLUSTER_ID}\
       ${FUNCTION}\

View File

@@ -26,15 +26,7 @@ fi
 FUNCTION=$2

 if [ "${FUNCTION}" = "SENDER" ]; then
-    FUNCTION="--test-func=SENDER --lightpush-version=LEGACY"
-    SERIVCE_NODE_ADDR=${LIGHTPUSH_SERVICE_PEER:-${LIGHTPUSH_BOOTSTRAP:-}}
-    NODE_ARG=${LIGHTPUSH_SERVICE_PEER:+--service-node="${LIGHTPUSH_SERVICE_PEER}"}
-    NODE_ARG=${NODE_ARG:---bootstrap-node="${LIGHTPUSH_BOOTSTRAP}"}
-    METRICS_PORT=--metrics-port="${PUBLISHER_METRICS_PORT:-8003}"
-fi
-
-if [ "${FUNCTION}" = "SENDERV3" ]; then
-    FUNCTION="--test-func=SENDER --lightpush-version=V3"
+    FUNCTION=--test-func=SENDER
     SERIVCE_NODE_ADDR=${LIGHTPUSH_SERVICE_PEER:-${LIGHTPUSH_BOOTSTRAP:-}}
     NODE_ARG=${LIGHTPUSH_SERVICE_PEER:+--service-node="${LIGHTPUSH_SERVICE_PEER}"}
     NODE_ARG=${NODE_ARG:---bootstrap-node="${LIGHTPUSH_BOOTSTRAP}"}
@@ -56,10 +48,10 @@ fi
 MY_EXT_IP=$(wget -qO- --no-check-certificate https://api4.ipify.org)

-if [ -n "${SHARD}" ]; then
-    SHARD=--shard=${SHARD}
+if [ -n "${PUBSUB}" ]; then
+    PUBSUB=--pubsub-topic="${PUBSUB}"
 else
-    SHARD=--shard=0
+    PUBSUB=--pubsub-topic="/waku/2/rs/66/0"
 fi

 if [ -n "${CONTENT_TOPIC}" ]; then
@@ -87,14 +79,8 @@ if [ -n "${NUM_MESSAGES}" ]; then
     NUM_MESSAGES=--num-messages="${NUM_MESSAGES}"
 fi

-if [ -n "${MESSAGE_INTERVAL_MILLIS}" ]; then
-    MESSAGE_INTERVAL_MILLIS=--message-interval="${MESSAGE_INTERVAL_MILLIS}"
-fi
-
-if [ -n "${LOG_LEVEL}" ]; then
-    LOG_LEVEL=--log-level=${LOG_LEVEL}
-else
-    LOG_LEVEL=--log-level=INFO
+if [ -n "${DELAY_MESSAGES}" ]; then
+    DELAY_MESSAGES=--delay-messages="${DELAY_MESSAGES}"
 fi

 echo "Running binary: ${BINARY_PATH}"
@@ -103,12 +89,12 @@ echo "Using service/bootstrap node as: ${NODE_ARG}"
 echo "My external IP: ${MY_EXT_IP}"

 exec "${BINARY_PATH}"\
+      --log-level=INFO\
       --nat=extip:${MY_EXT_IP}\
-      ${LOG_LEVEL}\
       ${NODE_ARG}\
-      ${MESSAGE_INTERVAL_MILLIS}\
+      ${DELAY_MESSAGES}\
       ${NUM_MESSAGES}\
-      ${SHARD}\
+      ${PUBSUB}\
       ${CONTENT_TOPIC}\
       ${CLUSTER_ID}\
       ${FUNCTION}\


@@ -11,8 +11,8 @@ import
 libp2p/wire
 import
-tools/confutils/cli_args,
 waku/[
+factory/external_config,
 common/enr,
 waku_node,
 node/peer_manager,
@@ -61,7 +61,7 @@ proc selectRandomCapablePeer*(
 elif codec.contains("filter"):
 cap = Capabilities.Filter
-var supportivePeers = pm.switch.peerStore.getPeersByCapability(cap)
+var supportivePeers = pm.wakuPeerStore.getPeersByCapability(cap)
 trace "Found supportive peers count", count = supportivePeers.len()
 trace "Found supportive peers", supportivePeers = $supportivePeers
@@ -73,7 +73,7 @@ proc selectRandomCapablePeer*(
 let rndPeerIndex = rand(0 .. supportivePeers.len - 1)
 let randomPeer = supportivePeers[rndPeerIndex]
-info "Dialing random peer",
+debug "Dialing random peer",
 idx = $rndPeerIndex, peer = constructMultiaddrStr(randomPeer)
 supportivePeers.delete(rndPeerIndex .. rndPeerIndex)
@@ -82,12 +82,12 @@ proc selectRandomCapablePeer*(
 if (await connOpt.withTimeout(10.seconds)):
 if connOpt.value().isSome():
 found = some(randomPeer)
-info "Dialing successful",
+debug "Dialing successful",
 peer = constructMultiaddrStr(randomPeer), codec = codec
 else:
-info "Dialing failed", peer = constructMultiaddrStr(randomPeer), codec = codec
+debug "Dialing failed", peer = constructMultiaddrStr(randomPeer), codec = codec
 else:
-info "Timeout dialing service peer",
+debug "Timeout dialing service peer",
 peer = constructMultiaddrStr(randomPeer), codec = codec
 return found
@@ -102,11 +102,11 @@ proc tryCallAllPxPeers*(
 elif codec.contains("filter"):
 capability = Capabilities.Filter
-var supportivePeers = pm.switch.peerStore.getPeersByCapability(capability)
+var supportivePeers = pm.wakuPeerStore.getPeersByCapability(capability)
 lpt_px_peers.set(supportivePeers.len)
-info "Found supportive peers count", count = supportivePeers.len()
-info "Found supportive peers", supportivePeers = $supportivePeers
+debug "Found supportive peers count", count = supportivePeers.len()
+debug "Found supportive peers", supportivePeers = $supportivePeers
 if supportivePeers.len == 0:
 return none(seq[RemotePeerInfo])
@@ -116,7 +116,7 @@ proc tryCallAllPxPeers*(
 let rndPeerIndex = rand(0 .. supportivePeers.len - 1)
 let randomPeer = supportivePeers[rndPeerIndex]
-info "Dialing random peer",
+debug "Dialing random peer",
 idx = $rndPeerIndex, peer = constructMultiaddrStr(randomPeer)
 supportivePeers.delete(rndPeerIndex, rndPeerIndex)
@@ -158,7 +158,9 @@ proc tryCallAllPxPeers*(
 proc pxLookupServiceNode*(
 node: WakuNode, conf: LiteProtocolTesterConf
 ): Future[Result[bool, void]] {.async.} =
-let codec: string = conf.getCodec()
+var codec: string = WakuLightPushCodec
+if conf.testFunc == TesterFunctionality.RECEIVER:
+codec = WakuFilterSubscribeCodec
 if node.wakuPeerExchange.isNil():
 let peerExchangeNode = translateToRemotePeerInfo(conf.bootstrapNode).valueOr:
@@ -181,20 +183,20 @@ proc pxLookupServiceNode*(
 if not await futPeers.withTimeout(30.seconds):
 notice "Cannot get peers from PX", round = 5 - trialCount
 else:
-futPeers.value().isOkOr:
+if futPeers.value().isErr():
 info "PeerExchange reported error", error = futPeers.read().error
 return err()
 if conf.testPeers:
 let peersOpt =
-await tryCallAllPxPeers(node.peerManager, codec, conf.getPubsubTopic())
+await tryCallAllPxPeers(node.peerManager, codec, conf.pubsubTopics[0])
 if peersOpt.isSome():
 info "Found service peers for codec",
 codec = codec, peer_count = peersOpt.get().len()
 return ok(peersOpt.get().len > 0)
 else:
 let peerOpt =
-await selectRandomCapablePeer(node.peerManager, codec, conf.getPubsubTopic())
+await selectRandomCapablePeer(node.peerManager, codec, conf.pubsubTopics[0])
 if peerOpt.isSome():
 info "Found service peer for codec", codec = codec, peer = peerOpt.get()
 return ok(true)
@@ -213,7 +215,7 @@ proc selectRandomServicePeer*(
 if actualPeer.isSome():
 alreadyUsedServicePeers.add(actualPeer.get())
-let supportivePeers = pm.switch.peerStore.getPeersByProtocol(codec).filterIt(
+let supportivePeers = pm.wakuPeerStore.getPeersByProtocol(codec).filterIt(
 it notin alreadyUsedServicePeers
 )
 if supportivePeers.len == 0:


@@ -8,8 +8,6 @@ import
 results,
 libp2p/peerid
-from std/sugar import `=>`
 import ./tester_message, ./lpt_metrics
 type
@@ -116,7 +114,12 @@ proc addMessage*(
 if not self.contains(peerId):
 self[peerId] = Statistics.init()
-let shortSenderId = PeerId.init(msg.sender).map(p => p.shortLog()).valueOr(msg.sender)
+let shortSenderId = block:
+let senderPeer = PeerId.init(msg.sender)
+if senderPeer.isErr():
+msg.sender
+else:
+senderPeer.get().shortLog()
 discard catch:
 self[peerId].addMessage(shortSenderId, msg, msgHash)
@@ -217,7 +220,10 @@ proc echoStat*(self: Statistics, peerId: string) =
 | {self.missingIndices()} |
 *------------------------------------------------------------------------------------------*""".fmt()
-echo printable.valueOr("Error while printing statistics: " & error.msg)
+if printable.isErr():
+echo "Error while printing statistics: " & printable.error().msg
+else:
+echo printable.get()
 proc jsonStat*(self: Statistics): string =
 let minL, maxL, avgL = self.calcLatency()
@@ -237,18 +243,20 @@ proc jsonStat*(self: Statistics): string =
 }},
 "lostIndices": {self.missingIndices()}
 }}""".fmt()
-return json.valueOr("{\"result:\": \"" & error.msg & "\"}")
+if json.isErr:
+return "{\"result:\": \"" & json.error.msg & "\"}"
+return json.get()
 proc echoStats*(self: var PerPeerStatistics) =
 for peerId, stats in self.pairs:
 let peerLine = catch:
 "Receiver statistics from peer {peerId}".fmt()
-peerLine.isOkOr:
+if peerLine.isErr:
 echo "Error while printing statistics"
-continue
+else:
 echo peerLine.get()
 stats.echoStat(peerId)
 proc jsonStats*(self: PerPeerStatistics): string =
 try:


@@ -12,9 +12,13 @@ import
 secp256k1
 import
-../../tools/confutils/
-[cli_args, envvar as confEnvvarDefs, envvar_net as confEnvvarNet],
-waku/[common/logging, waku_core, waku_core/topics/pubsub_topic]
+waku/[
+common/confutils/envvar/defs as confEnvvarDefs,
+common/confutils/envvar/std/net as confEnvvarNet,
+common/logging,
+factory/external_config,
+waku_core,
+]
 export confTomlDefs, confTomlNet, confEnvvarDefs, confEnvvarNet
@@ -28,10 +32,6 @@ type TesterFunctionality* = enum
 SENDER # pumps messages to the network
 RECEIVER # gather and analyze messages from the network
-type LightpushVersion* = enum
-LEGACY # legacy lightpush protocol
-V3 # lightpush v3 protocol
 type LiteProtocolTesterConf* = object
 configFile* {.
 desc:
@@ -79,12 +79,6 @@ type LiteProtocolTesterConf* = object
 name: "test-func"
 .}: TesterFunctionality
-lightpushVersion* {.
-desc: "Version of the sender to use. Supported values: legacy, v3.",
-defaultValue: LightpushVersion.LEGACY,
-name: "lightpush-version"
-.}: LightpushVersion
 numMessages* {.
 desc: "Number of messages to send.", defaultValue: 120, name: "num-messages"
 .}: uint32
@@ -101,9 +95,18 @@ type LiteProtocolTesterConf* = object
 name: "message-interval"
 .}: uint32
-shard* {.desc: "Shards index to subscribe to. ", defaultValue: 0, name: "shard".}:
-uint16
+pubsubTopics* {.
+desc: "Default pubsub topic to subscribe to. Argument may be repeated.",
+defaultValue: @[LitePubsubTopic],
+name: "pubsub-topic"
+.}: seq[PubsubTopic]
+## TODO: extend lite protocol tester configuration based on testing needs
+# shards* {.
+#   desc: "Shards index to subscribe to [0..NUM_SHARDS_IN_NETWORK-1]. Argument may be repeated.",
+#   defaultValue: @[],
+#   name: "shard"
+# .}: seq[uint16]
 contentTopics* {.
 desc: "Default content topic to subscribe to. Argument may be repeated.",
 defaultValue: @[LiteContentTopic],
@@ -192,17 +195,4 @@ proc load*(T: type LiteProtocolTesterConf, version = ""): ConfResult[T] =
 except CatchableError:
 err(getCurrentExceptionMsg())
-proc getPubsubTopic*(conf: LiteProtocolTesterConf): PubsubTopic =
-return $RelayShard(clusterId: conf.clusterId, shardId: conf.shard)
-proc getCodec*(conf: LiteProtocolTesterConf): string =
-return
-if conf.testFunc == TesterFunctionality.RECEIVER:
-WakuFilterSubscribeCodec
-else:
-if conf.lightpushVersion == LightpushVersion.LEGACY:
-WakuLegacyLightPushCodec
-else:
-WakuLightPushCodec
 {.pop.}


@@ -6,7 +6,7 @@ import
 json_serialization/std/options,
 json_serialization/lexer
-import waku/rest_api/endpoint/serdes
+import ../../waku/waku_api/rest/serdes
 type ProtocolTesterMessage* = object
 sender*: string


@@ -1,29 +0,0 @@
-import results, options, chronos
-import waku/[waku_node, waku_core, waku_lightpush, waku_lightpush/common]
-import publisher_base
-type V3Publisher* = ref object of PublisherBase
-proc new*(T: type V3Publisher, wakuNode: WakuNode): T =
-if isNil(wakuNode.wakuLightpushClient):
-wakuNode.mountLightPushClient()
-return V3Publisher(wakuNode: wakuNode)
-method send*(
-self: V3Publisher,
-topic: PubsubTopic,
-message: WakuMessage,
-servicePeer: RemotePeerInfo,
-): Future[Result[void, string]] {.async.} =
-# when error it must return original error desc due the text is used for distinction between error types in metrics.
-discard (
-await self.wakuNode.lightpushPublish(some(topic), message, some(servicePeer))
-).valueOr:
-if error.code == LightPushErrorCode.NO_PEERS_TO_RELAY and
-error.desc != some("No peers for topic, skipping publish"):
-# TODO: We need better separation of errors happening on the client side or the server side.
-return err("dial_failure")
-else:
-return err($error.code)
-return ok()


@@ -29,6 +29,7 @@ The following options are available:
 --rln-relay Enable spam protection through rln-relay: true|false [=true].
 --rln-relay-dynamic Enable waku-rln-relay with on-chain dynamic group management: true|false
 [=true].
+--rln-relay-tree-path Path to the RLN merkle tree sled db (https://github.com/spacejam/sled).
 --rln-relay-eth-client-address HTTP address of an Ethereum testnet client e.g., http://localhost:8540/
 [=http://localhost:8540/].
 --rln-relay-eth-contract-address Address of membership contract on an Ethereum testnet.


@@ -1,7 +1,7 @@
 {.push raises: [].}
 import
-std/[net, tables, strutils, times, sequtils, random, sugar],
+std/[net, tables, strutils, times, sequtils, random],
 results,
 chronicles,
 chronicles/topics_registry,
@@ -183,14 +183,16 @@ proc setConnectedPeersMetrics(
 for maddr in peerInfo.addrs:
 if $maddr notin customPeerInfo.maddrs:
 customPeerInfo.maddrs.add $maddr
-let typedRecord = discNode.toTypedRecord().valueOr:
+let typedRecord = discNode.toTypedRecord()
+if not typedRecord.isOk():
 warn "could not convert record to typed record", record = discNode
 continue
-let ipAddr = typedRecord.ip.valueOr:
-warn "ip field is not set", record = typedRecord
+if not typedRecord.get().ip.isSome():
+warn "ip field is not set", record = typedRecord.get()
 continue
-customPeerInfo.ip = $ipAddr.join(".")
+let ip = $typedRecord.get().ip.get().join(".")
+customPeerInfo.ip = ip
 # try to ping the peer
 if shouldReconnect(customPeerInfo):
@@ -213,7 +215,7 @@ proc setConnectedPeersMetrics(
 continue
 var customPeerInfo = allPeers[peerIdStr]
-info "connected to peer", peer = customPeerInfo[]
+debug "connected to peer", peer = customPeerInfo[]
 # after connection, get supported protocols
 let lp2pPeerStore = node.switch.peerStore
@@ -352,16 +354,16 @@ proc crawlNetwork(
 await sleepAsync(crawlInterval.millis - elapsed.millis)
 proc retrieveDynamicBootstrapNodes(
-dnsDiscoveryUrl: string, dnsAddrsNameServers: seq[IpAddress]
+dnsDiscovery: bool, dnsDiscoveryUrl: string, dnsDiscoveryNameServers: seq[IpAddress]
 ): Future[Result[seq[RemotePeerInfo], string]] {.async.} =
 ## Retrieve dynamic bootstrap nodes (DNS discovery)
-if dnsDiscoveryUrl != "":
+if dnsDiscovery and dnsDiscoveryUrl != "":
 # DNS discovery
-info "Discovering nodes using Waku DNS discovery", url = dnsDiscoveryUrl
+debug "Discovering nodes using Waku DNS discovery", url = dnsDiscoveryUrl
 var nameServers: seq[TransportAddress]
-for ip in dnsAddrsNameServers:
+for ip in dnsDiscoveryNameServers:
 nameServers.add(initTAddress(ip, Port(53))) # Assume all servers use port 53
 let dnsResolver = DnsResolver.new(nameServers)
@@ -372,11 +374,16 @@ proc retrieveDynamicBootstrapNodes(
 if resolved.len > 0:
 return resolved[0] # Use only first answer
-var wakuDnsDiscovery = WakuDnsDiscovery.init(dnsDiscoveryUrl, resolver).errorOr:
-return (await value.findPeers()).mapErr(e => $e)
-warn "Failed to init Waku DNS discovery"
+var wakuDnsDiscovery = WakuDnsDiscovery.init(dnsDiscoveryUrl, resolver)
+if wakuDnsDiscovery.isOk():
+return (await wakuDnsDiscovery.get().findPeers()).mapErr(
+proc(e: cstring): string =
+$e
+)
+else:
+warn "Failed to init Waku DNS discovery"
-info "No method for retrieving dynamic bootstrap nodes specified."
+debug "No method for retrieving dynamic bootstrap nodes specified."
 ok(newSeq[RemotePeerInfo]()) # Return an empty seq by default
 proc getBootstrapFromDiscDns(
@@ -384,10 +391,11 @@ proc getBootstrapFromDiscDns(
 ): Future[Result[seq[enr.Record], string]] {.async.} =
 try:
 let dnsNameServers = @[parseIpAddress("1.1.1.1"), parseIpAddress("1.0.0.1")]
-let dynamicBootstrapNodes = (
-await retrieveDynamicBootstrapNodes(conf.dnsDiscoveryUrl, dnsNameServers)
-).valueOr:
-return err("Failed retrieving dynamic bootstrap nodes: " & $error)
+let dynamicBootstrapNodesRes =
+await retrieveDynamicBootstrapNodes(true, conf.dnsDiscoveryUrl, dnsNameServers)
+if not dynamicBootstrapNodesRes.isOk():
+error("failed discovering peers from DNS")
+let dynamicBootstrapNodes = dynamicBootstrapNodesRes.get()
 # select dynamic bootstrap nodes that have an ENR containing a udp port.
 # Discv5 only supports UDP https://github.com/ethereum/devp2p/blob/master/discv5/discv5-theory.md)
@@ -403,7 +411,7 @@ proc getBootstrapFromDiscDns(
 discv5BootstrapEnrs.add(enr)
 return ok(discv5BootstrapEnrs)
 except CatchableError:
-error("failed discovering peers from DNS: " & getCurrentExceptionMsg())
+error("failed discovering peers from DNS")
 proc initAndStartApp(
 conf: NetworkMonitorConf
@@ -443,29 +451,39 @@ proc initAndStartApp(
 error "failed to add sharded topics to ENR", error = error
 return err("failed to add sharded topics to ENR: " & $error)
-let record = builder.build().valueOr:
-return err("cannot build record: " & $error)
+let recordRes = builder.build()
+let record =
+if recordRes.isErr():
+return err("cannot build record: " & $recordRes.error)
+else:
+recordRes.get()
 var nodeBuilder = WakuNodeBuilder.init()
 nodeBuilder.withNodeKey(key)
 nodeBuilder.withRecord(record)
-nodeBuilder.withSwitchConfiguration(maxConnections = some(MaxConnectedPeers))
+nodeBUilder.withSwitchConfiguration(maxConnections = some(MaxConnectedPeers))
 nodeBuilder.withPeerManagerConfig(
 maxConnections = MaxConnectedPeers,
 relayServiceRatio = "13.33:86.67",
 shardAware = true,
 )
-nodeBuilder.withNetworkConfigurationDetails(bindIp, nodeTcpPort).isOkOr:
-return err("node building error" & $error)
+let res = nodeBuilder.withNetworkConfigurationDetails(bindIp, nodeTcpPort)
+if res.isErr():
+return err("node building error" & $res.error)
-let node = nodeBuilder.build().valueOr:
-return err("node building error" & $error)
+let nodeRes = nodeBuilder.build()
+let node =
+if nodeRes.isErr():
+return err("node building error" & $res.error)
+else:
+nodeRes.get()
-var discv5BootstrapEnrs = (await getBootstrapFromDiscDns(conf)).valueOr:
+var discv5BootstrapEnrsRes = await getBootstrapFromDiscDns(conf)
+if discv5BootstrapEnrsRes.isErr():
 error("failed discovering peers from DNS")
-quit(QuitFailure)
+var discv5BootstrapEnrs = discv5BootstrapEnrsRes.get()
 # parse enrURIs from the configuration and add the resulting ENRs to the discv5BootstrapEnrs seq
 for enrUri in conf.bootstrapNodes:
@@ -536,32 +554,31 @@ proc subscribeAndHandleMessages(
 else:
 msgPerContentTopic[msg.contentTopic] = 1
-node.subscribe((kind: PubsubSub, topic: pubsubTopic), WakuRelayHandler(handler)).isOkOr:
-error "failed to subscribe to pubsub topic", pubsubTopic, error
-quit(1)
+node.subscribe((kind: PubsubSub, topic: pubsubTopic), some(WakuRelayHandler(handler)))
 when isMainModule:
 # known issue: confutils.nim(775, 17) Error: can raise an unlisted exception: ref IOError
 {.pop.}
-var conf = NetworkMonitorConf.loadConfig().valueOr:
-error "could not load cli variables", error = error
-quit(QuitFailure)
+let confRes = NetworkMonitorConf.loadConfig()
+if confRes.isErr():
+error "could not load cli variables", err = confRes.error
+quit(1)
+var conf = confRes.get()
 info "cli flags", conf = conf
 if conf.clusterId == 1:
-let twnNetworkConf = NetworkConf.TheWakuNetworkConf()
-conf.bootstrapNodes = twnNetworkConf.discv5BootstrapNodes
-conf.rlnRelayDynamic = twnNetworkConf.rlnRelayDynamic
-conf.rlnRelayEthContractAddress = twnNetworkConf.rlnRelayEthContractAddress
-conf.rlnEpochSizeSec = twnNetworkConf.rlnEpochSizeSec
-conf.rlnRelayUserMessageLimit = twnNetworkConf.rlnRelayUserMessageLimit
-conf.numShardsInNetwork = twnNetworkConf.shardingConf.numShardsInCluster
+let twnClusterConf = ClusterConf.TheWakuNetworkConf()
+conf.bootstrapNodes = twnClusterConf.discv5BootstrapNodes
+conf.rlnRelayDynamic = twnClusterConf.rlnRelayDynamic
+conf.rlnRelayEthContractAddress = twnClusterConf.rlnRelayEthContractAddress
+conf.rlnEpochSizeSec = twnClusterConf.rlnEpochSizeSec
+conf.rlnRelayUserMessageLimit = twnClusterConf.rlnRelayUserMessageLimit
+conf.numShardsInNetwork = twnClusterConf.numShardsInNetwork
 if conf.shards.len == 0:
-conf.shards =
-toSeq(uint16(0) .. uint16(twnNetworkConf.shardingConf.numShardsInCluster - 1))
+conf.shards = toSeq(uint16(0) .. uint16(twnClusterConf.numShardsInNetwork - 1))
 if conf.logLevel != LogLevel.NONE:
 setLogLevel(conf.logLevel)
@@ -574,31 +591,35 @@ when isMainModule:
 # start metrics server
 if conf.metricsServer:
-startMetricsServer(conf.metricsServerAddress, Port(conf.metricsServerPort)).isOkOr:
-error "could not start metrics server", error = error
-quit(QuitFailure)
+let res =
+startMetricsServer(conf.metricsServerAddress, Port(conf.metricsServerPort))
+if res.isErr():
+error "could not start metrics server", err = res.error
+quit(1)
 # start rest server for custom metrics
-startRestApiServer(conf, allPeersInfo, msgPerContentTopic).isOkOr:
-error "could not start rest api server", error = error
-quit(QuitFailure)
+let res = startRestApiServer(conf, allPeersInfo, msgPerContentTopic)
+if res.isErr():
+error "could not start rest api server", err = res.error
+quit(1)
 # create a rest client
-let restClient = RestClientRef.new(
-url = "http://ip-api.com", connectTimeout = ctime.seconds(2)
-).valueOr:
-error "could not start rest api client", error = error
-quit(QuitFailure)
+let clientRest =
+RestClientRef.new(url = "http://ip-api.com", connectTimeout = ctime.seconds(2))
+if clientRest.isErr():
+error "could not start rest api client", err = res.error
+quit(1)
+let restClient = clientRest.get()
 # start waku node
-let (node, discv5) = (waitFor initAndStartApp(conf)).valueOr:
-error "could not start node", error = error
-quit(QuitFailure)
+let nodeRes = waitFor initAndStartApp(conf)
+if nodeRes.isErr():
+error "could not start node"
+quit 1
+let (node, discv5) = nodeRes.get()
-(waitFor node.mountRelay()).isOkOr:
-error "failed to mount waku relay protocol: ", error = error
-quit(QuitFailure)
+waitFor node.mountRelay()
 waitFor node.mountLibp2pPing()
 var onFatalErrorAction = proc(msg: string) {.gcsafe, closure.} =
@@ -609,24 +630,26 @@ when isMainModule:
 if conf.rlnRelay and conf.rlnRelayEthContractAddress != "":
 let rlnConf = WakuRlnConfig(
-dynamic: conf.rlnRelayDynamic,
-credIndex: some(uint(0)),
-ethContractAddress: conf.rlnRelayEthContractAddress,
-ethClientUrls: conf.ethClientUrls.mapIt(string(it)),
-epochSizeSec: conf.rlnEpochSizeSec,
-creds: none(RlnRelayCreds),
+rlnRelayDynamic: conf.rlnRelayDynamic,
+rlnRelayCredIndex: some(uint(0)),
+rlnRelayEthContractAddress: conf.rlnRelayEthContractAddress,
+rlnRelayEthClientAddress: string(conf.rlnRelayethClientAddress),
+rlnRelayCredPath: "",
+rlnRelayCredPassword: "",
+rlnRelayTreePath: conf.rlnRelayTreePath,
+rlnEpochSizeSec: conf.rlnEpochSizeSec,
 onFatalErrorAction: onFatalErrorAction,
 )
 try:
 waitFor node.mountRlnRelay(rlnConf)
 except CatchableError:
-error "failed to setup RLN", error = getCurrentExceptionMsg()
-quit(QuitFailure)
+error "failed to setup RLN", err = getCurrentExceptionMsg()
+quit 1
-node.mountMetadata(conf.clusterId, conf.shards).isOkOr:
-error "failed to mount waku metadata protocol: ", error = error
-quit(QuitFailure)
+node.mountMetadata(conf.clusterId).isOkOr:
+error "failed to mount waku metadata protocol: ", err = error
+quit 1
 for shard in conf.shards:
 # Subscribe the node to the shards, to count messages


@@ -5,14 +5,10 @@ import
 chronos,
 std/strutils,
 results,
+stew/shims/net,
 regex
-const git_version* {.strdefine.} = "n/a"
-type EthRpcUrl* = distinct string
-proc `$`*(u: EthRpcUrl): string =
-string(u)
+type EthRpcUrl = distinct string
 type NetworkMonitorConf* = object
 logLevel* {.
@@ -80,12 +76,17 @@ type NetworkMonitorConf* = object
 name: "rln-relay-dynamic"
 .}: bool
-ethClientUrls* {.
-desc:
-"HTTP address of an Ethereum testnet client e.g., http://localhost:8540/. Argument may be repeated.",
-defaultValue: newSeq[EthRpcUrl](0),
+rlnRelayTreePath* {.
+desc: "Path to the RLN merkle tree sled db (https://github.com/spacejam/sled)",
+defaultValue: "",
+name: "rln-relay-tree-path"
+.}: string
+rlnRelayEthClientAddress* {.
+desc: "HTTP address of an Ethereum testnet client e.g., http://localhost:8540/",
+defaultValue: "http://localhost:8540/",
 name: "rln-relay-eth-client-address"
-.}: seq[EthRpcUrl]
+.}: EthRpcUrl
 rlnRelayEthContractAddress* {.
 desc: "Address of membership contract on an Ethereum testnet",


@@ -3,6 +3,7 @@
 import
 std/json,
 results,
+stew/shims/net,
 chronicles,
 chronicles/topics_registry,
 chronos,
@@ -31,7 +32,7 @@ proc decodeBytes*(
 try:
 let jsonContent = parseJson(res)
 if $jsonContent["status"].getStr() != "success":
-error "query failed", result = $jsonContent
+error "query failed", result = jsonContent
 return ok(
 NodeLocation(


@@ -1,20 +1,12 @@
 # RPC URL for accessing testnet via HTTP.
-# e.g. https://linea-sepolia.infura.io/v3/123aa110320f4aec179150fba1e1b1b1
+# e.g. https://sepolia.infura.io/v3/123aa110320f4aec179150fba1e1b1b1
 RLN_RELAY_ETH_CLIENT_ADDRESS=
-# Account of testnet where you have Linea Sepolia ETH that would be staked into RLN contract.
-ETH_TESTNET_ACCOUNT=
-# Private key of testnet where you have Linea Sepolia ETH that would be staked into RLN contract.
+# Private key of testnet where you have sepolia ETH that would be staked into RLN contract.
 # Note: make sure you don't use the '0x' prefix.
 # e.g. 0116196e9a8abed42dd1a22eb63fa2a5a17b0c27d716b87ded2c54f1bf192a0b
 ETH_TESTNET_KEY=
-# Address of the RLN contract on Linea Sepolia.
-RLN_CONTRACT_ADDRESS=0xB9cd878C90E49F797B4431fBF4fb333108CB90e6
-# Address of the RLN Membership Token contract on Linea Sepolia used to pay for membership.
-TOKEN_CONTRACT_ADDRESS=0x185A0015aC462a0aECb81beCc0497b649a64B9ea
 # Password you would like to use to protect your RLN membership.
 RLN_RELAY_CRED_PASSWORD=
@@ -23,8 +15,7 @@ NWAKU_IMAGE=
 NODEKEY=
 DOMAIN=
 EXTRA_ARGS=
-STORAGE_SIZE=
+RLN_RELAY_CONTRACT_ADDRESS=
 # -------------------- SONDA CONFIG ------------------
 METRICS_PORT=8004


@ -30,13 +30,13 @@ It works by running a `nwaku` node, publishing a message from it every fixed int
2. If you want to query nodes in `cluster-id` 1, then you have to follow the steps of registering an RLN membership. Otherwise, you can skip this step. 2. If you want to query nodes in `cluster-id` 1, then you have to follow the steps of registering an RLN membership. Otherwise, you can skip this step.
For it, you need: For it, you need:
* Ethereum Linea Sepolia WebSocket endpoint. Get one free from [Infura](https://linea-sepolia.infura.io/). * Ethereum Sepolia WebSocket endpoint. Get one free from [Infura](https://www.infura.io/).
* Ethereum Linea Sepolia account with minimum 0.01ETH. Get some [here](https://docs.metamask.io/developer-tools/faucet/). * Ethereum Sepolia account with some balance <0.01 Eth. Get some [here](https://www.infura.io/faucet/sepolia).
* A password to protect your rln membership. * A password to protect your rln membership.
Fill the `RLN_RELAY_ETH_CLIENT_ADDRESS`, `ETH_TESTNET_KEY` and `RLN_RELAY_CRED_PASSWORD` env variables and run Fill the `RLN_RELAY_ETH_CLIENT_ADDRESS`, `ETH_TESTNET_KEY` and `RLN_RELAY_CRED_PASSWORD` env variables and run
``` ```
./register_rln.sh ./register_rln.sh
``` ```
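Before invoking the script, it helps to fail fast when one of the three required variables is unset; a minimal guard sketch (the `require_env` helper and the placeholder values are illustrative, not part of the repo):

```shell
#!/usr/bin/env bash
# Fail fast if a required environment variable is empty or unset.
require_env() {
  eval "_val=\${$1}"   # indirect lookup of the variable named by $1
  if [ -z "$_val" ]; then
    echo "error: $1 is not set" >&2
    return 1
  fi
}

# Placeholder values standing in for a real .env file.
RLN_RELAY_ETH_CLIENT_ADDRESS="wss://linea-sepolia.example/ws"
ETH_TESTNET_KEY="0116196e9a8abed42dd1a22eb63fa2a5a17b0c27d716b87ded2c54f1bf192a0b"
RLN_RELAY_CRED_PASSWORD="changeme"

for var in RLN_RELAY_ETH_CLIENT_ADDRESS ETH_TESTNET_KEY RLN_RELAY_CRED_PASSWORD; do
  require_env "$var" || exit 1
done
echo "all required variables set"
```

Running the guard before `./register_rln.sh` turns a confusing mid-registration failure into an immediate, named error.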

View File

@ -9,7 +9,7 @@ x-logging: &logging
x-rln-relay-eth-client-address: &rln_relay_eth_client_address ${RLN_RELAY_ETH_CLIENT_ADDRESS:-} # Add your RLN_RELAY_ETH_CLIENT_ADDRESS after the "-" x-rln-relay-eth-client-address: &rln_relay_eth_client_address ${RLN_RELAY_ETH_CLIENT_ADDRESS:-} # Add your RLN_RELAY_ETH_CLIENT_ADDRESS after the "-"
x-rln-environment: &rln_env x-rln-environment: &rln_env
RLN_RELAY_CONTRACT_ADDRESS: ${RLN_RELAY_CONTRACT_ADDRESS:-0xB9cd878C90E49F797B4431fBF4fb333108CB90e6} RLN_RELAY_CONTRACT_ADDRESS: ${RLN_RELAY_CONTRACT_ADDRESS:-0xCB33Aa5B38d79E3D9Fa8B10afF38AA201399a7e3}
RLN_RELAY_CRED_PATH: ${RLN_RELAY_CRED_PATH:-} # Optional: Add your RLN_RELAY_CRED_PATH after the "-" RLN_RELAY_CRED_PATH: ${RLN_RELAY_CRED_PATH:-} # Optional: Add your RLN_RELAY_CRED_PATH after the "-"
RLN_RELAY_CRED_PASSWORD: ${RLN_RELAY_CRED_PASSWORD:-} # Optional: Add your RLN_RELAY_CRED_PASSWORD after the "-" RLN_RELAY_CRED_PASSWORD: ${RLN_RELAY_CRED_PASSWORD:-} # Optional: Add your RLN_RELAY_CRED_PASSWORD after the "-"

View File

@ -24,7 +24,7 @@ fi
docker run -v $(pwd)/keystore:/keystore/:Z harbor.status.im/wakuorg/nwaku:v0.30.1 generateRlnKeystore \ docker run -v $(pwd)/keystore:/keystore/:Z harbor.status.im/wakuorg/nwaku:v0.30.1 generateRlnKeystore \
--rln-relay-eth-client-address=${RLN_RELAY_ETH_CLIENT_ADDRESS} \ --rln-relay-eth-client-address=${RLN_RELAY_ETH_CLIENT_ADDRESS} \
--rln-relay-eth-private-key=${ETH_TESTNET_KEY} \ --rln-relay-eth-private-key=${ETH_TESTNET_KEY} \
--rln-relay-eth-contract-address=0xB9cd878C90E49F797B4431fBF4fb333108CB90e6 \ --rln-relay-eth-contract-address=0xCB33Aa5B38d79E3D9Fa8B10afF38AA201399a7e3 \
--rln-relay-cred-path=/keystore/keystore.json \ --rln-relay-cred-path=/keystore/keystore.json \
--rln-relay-cred-password="${RLN_RELAY_CRED_PASSWORD}" \ --rln-relay-cred-password="${RLN_RELAY_CRED_PASSWORD}" \
--rln-relay-user-message-limit=20 \ --rln-relay-user-message-limit=20 \

View File

@ -61,6 +61,7 @@ fi
if [ "${CLUSTER_ID}" -eq 1 ]; then if [ "${CLUSTER_ID}" -eq 1 ]; then
RLN_RELAY_CRED_PATH=--rln-relay-cred-path=${RLN_RELAY_CRED_PATH:-/keystore/keystore.json} RLN_RELAY_CRED_PATH=--rln-relay-cred-path=${RLN_RELAY_CRED_PATH:-/keystore/keystore.json}
RLN_TREE_PATH=--rln-relay-tree-path="/etc/rln_tree"
fi fi
if [ -n "${RLN_RELAY_CRED_PASSWORD}" ]; then if [ -n "${RLN_RELAY_CRED_PASSWORD}" ]; then
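The `${RLN_RELAY_CRED_PATH:-/keystore/keystore.json}` expansion above falls back to the bundled default only when the variable is unset or empty; a quick sketch of that shell pattern:

```shell
#!/usr/bin/env bash
# ${VAR:-default} substitutes the default when VAR is unset or empty.
unset RLN_RELAY_CRED_PATH
echo "${RLN_RELAY_CRED_PATH:-/keystore/keystore.json}"   # -> /keystore/keystore.json

RLN_RELAY_CRED_PATH=/custom/keystore.json
echo "${RLN_RELAY_CRED_PATH:-/keystore/keystore.json}"   # -> /custom/keystore.json
```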

View File

@ -32,31 +32,21 @@ $ make wakucanary
And used as follows. A reachable node that supports both `store` and `filter` protocols. And used as follows. A reachable node that supports both `store` and `filter` protocols.
```console ```console
$ ./build/wakucanary \ $ ./build/wakucanary --address=/dns4/node-01.ac-cn-hongkong-c.waku.sandbox.status.im/tcp/30303/p2p/16Uiu2HAmSJvSJphxRdbnigUV5bjRRZFBhTtWFTSyiKaQByCjwmpV --protocol=store --protocol=filter
--address=/dns4/store-01.do-ams3.status.staging.status.im/tcp/30303/p2p/16Uiu2HAm3xVDaz6SRJ6kErwC21zBJEZjavVXg7VSkoWzaV1aMA3F \
--protocol=store \
--protocol=filter \
--cluster-id=16 \
--shard=64
$ echo $? $ echo $?
0 0
``` ```
A node that can't be reached. A node that can't be reached.
```console ```console
$ ./build/wakucanary \ $ ./build/wakucanary --address=/dns4/node-01.ac-cn-hongkong-c.waku.sandbox.status.im/tcp/1000/p2p/16Uiu2HAmSJvSJphxRdbnigUV5bjRRZFBhTtWFTSyiKaQByCjwmpV --protocol=store --protocol=filter
--address=/dns4/store-01.do-ams3.status.staging.status.im/tcp/1000/p2p/16Uiu2HAm3xVDaz6SRJ6kErwC21zBJEZjavVXg7VSkoWzaV1aMA3F \
--protocol=store \
--protocol=filter \
--cluster-id=16 \
--shard=64
$ echo $? $ echo $?
1 1
``` ```
Note that a domain name can also be used. Note that a domain name can also be used.
```console ```console
--- not defined yet $ ./build/wakucanary --address=/dns4/node-01.do-ams3.status.test.status.im/tcp/30303/p2p/16Uiu2HAkukebeXjTQ9QDBeNDWuGfbaSg79wkkhK4vPocLgR6QFDf --protocol=store --protocol=filter
$ echo $? $ echo $?
0 0
``` ```
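As the examples show, the binary's exit code encodes the result (0 means reachable with all requested protocols supported, non-zero otherwise), so wrappers only need to branch on `$?`; a minimal sketch (the helper name is illustrative):

```shell
#!/usr/bin/env bash
# Interpret wakucanary's exit code: 0 means the peer was reachable and
# supported every requested protocol; anything else means failure.
report_canary_result() {
  if [ "$1" -eq 0 ]; then
    echo "SUCCESS: peer reachable and protocols supported"
  else
    echo "FAILURE: peer unreachable or protocol unsupported (exit $1)"
  fi
}

# Typical use: ./build/wakucanary ... ; report_canary_result $?
report_canary_result 0
report_canary_result 1
```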

View File

@ -1,50 +0,0 @@
#!/bin/bash
# This script builds the canary app and runs a basic check connecting to a well-known peer via TCP.
set -e
PEER_ADDRESS="/dns4/store-01.do-ams3.status.staging.status.im/tcp/30303/p2p/16Uiu2HAm3xVDaz6SRJ6kErwC21zBJEZjavVXg7VSkoWzaV1aMA3F"
PROTOCOL="relay"
LOG_DIR="logs"
CLUSTER="16"
SHARD="64"
TIMESTAMP=$(date +"%Y-%m-%d_%H-%M-%S")
LOG_FILE="$LOG_DIR/canary_run_$TIMESTAMP.log"
mkdir -p "$LOG_DIR"
echo "Building Waku Canary app..."
( cd ../../../ && make wakucanary ) >> "$LOG_FILE" 2>&1
echo "Running Waku Canary against:"
echo " Peer : $PEER_ADDRESS"
echo " Protocol: $PROTOCOL"
echo "Log file : $LOG_FILE"
echo "-----------------------------------"
{
echo "=== Canary Run: $TIMESTAMP ==="
echo "Peer : $PEER_ADDRESS"
echo "Protocol : $PROTOCOL"
echo "LogLevel : DEBUG"
echo "-----------------------------------"
../../../build/wakucanary \
--address="$PEER_ADDRESS" \
--protocol="$PROTOCOL" \
--cluster-id="$CLUSTER" \
--shard="$SHARD" \
--log-level=DEBUG
echo "-----------------------------------"
echo "Exit code: $?"
} 2>&1 | tee "$LOG_FILE"
EXIT_CODE=${PIPESTATUS[0]}
if [ $EXIT_CODE -eq 0 ]; then
echo "SUCCESS: Connected to peer and protocol '$PROTOCOL' is supported."
else
echo "FAILURE: Could not connect or protocol '$PROTOCOL' is unsupported."
fi
exit $EXIT_CODE

View File

@ -1,46 +0,0 @@
#!/bin/bash
# === Configuration ===
WAKUCANARY_BINARY="../../../build/wakucanary"
PEER_ADDRESS="/dns4/store-01.do-ams3.status.staging.status.im/tcp/30303/p2p/16Uiu2HAm3xVDaz6SRJ6kErwC21zBJEZjavVXg7VSkoWzaV1aMA3F"
TIMEOUT=5
LOG_LEVEL="info"
PROTOCOLS=("store" "relay" "lightpush" "filter")
# === Logging Setup ===
LOG_DIR="logs"
mkdir -p "$LOG_DIR"
TIMESTAMP=$(date +"%Y-%m-%d_%H-%M-%S")
LOG_FILE="$LOG_DIR/ping_test_$TIMESTAMP.log"
echo "Building Waku Canary app..."
( cd ../../../ && make wakucanary ) >> "$LOG_FILE" 2>&1
echo "Protocol Support Test - $TIMESTAMP" | tee -a "$LOG_FILE"
echo "Peer: $PEER_ADDRESS" | tee -a "$LOG_FILE"
echo "---------------------------------------" | tee -a "$LOG_FILE"
# === Protocol Testing Loop ===
for PROTOCOL in "${PROTOCOLS[@]}"; do
TIMESTAMP=$(date +"%Y-%m-%d_%H-%M-%S")
LOG_FILE="$LOG_DIR/ping_test_${PROTOCOL}_$TIMESTAMP.log"
{
echo "=== Canary Run: $TIMESTAMP ==="
echo "Peer : $PEER_ADDRESS"
echo "Protocol : $PROTOCOL"
echo "LogLevel : DEBUG"
echo "-----------------------------------"
$WAKUCANARY_BINARY \
--address="$PEER_ADDRESS" \
--protocol="$PROTOCOL" \
--log-level=DEBUG
echo "-----------------------------------"
echo "Exit code: $?"
} 2>&1 | tee "$LOG_FILE"
echo "✅ Log saved to: $LOG_FILE"
echo ""
done
echo "All protocol checks completed. Logs saved to: $LOG_DIR"

View File

@ -1,51 +0,0 @@
#!/bin/bash
# This script builds the canary app and runs a basic check connecting to a well-known peer via TCP.
set -e
PEER_ADDRESS="/ip4/127.0.0.1/tcp/7777/ws/p2p/16Uiu2HAm4ng2DaLPniRoZtMQbLdjYYWnXjrrJkGoXWCoBWAdn1tu"
PROTOCOL="relay"
LOG_DIR="logs"
CLUSTER="16"
SHARD="64"
TIMESTAMP=$(date +"%Y-%m-%d_%H-%M-%S")
LOG_FILE="$LOG_DIR/canary_run_$TIMESTAMP.log"
mkdir -p "$LOG_DIR"
echo "Building Waku Canary app..."
( cd ../../../ && make wakucanary ) >> "$LOG_FILE" 2>&1
echo "Running Waku Canary against:"
echo " Peer : $PEER_ADDRESS"
echo " Protocol: $PROTOCOL"
echo "Log file : $LOG_FILE"
echo "-----------------------------------"
{
echo "=== Canary Run: $TIMESTAMP ==="
echo "Peer : $PEER_ADDRESS"
echo "Protocol : $PROTOCOL"
echo "LogLevel : DEBUG"
echo "-----------------------------------"
../../../build/wakucanary \
--address="$PEER_ADDRESS" \
--protocol="$PROTOCOL" \
--cluster-id="$CLUSTER" \
--shard="$SHARD" \
--log-level=DEBUG
echo "-----------------------------------"
echo "Exit code: $?"
} 2>&1 | tee "$LOG_FILE"
EXIT_CODE=${PIPESTATUS[0]}
if [ $EXIT_CODE -eq 0 ]; then
echo "SUCCESS: Connected to peer and protocol '$PROTOCOL' is supported."
else
echo "FAILURE: Could not connect or protocol '$PROTOCOL' is unsupported."
fi
exit $EXIT_CODE

View File

@ -1,43 +0,0 @@
#!/bin/bash
WAKUCANARY_BINARY="../../../build/wakucanary"
NODE_PORT=60000
WSS_PORT=$((NODE_PORT + 1000))
PEER_ID="16Uiu2HAmB6JQpewXScGoQ2syqmimbe4GviLxRwfsR8dCpwaGBPSE"
PROTOCOL="relay"
KEY_PATH="./certs/client.key"
CERT_PATH="./certs/client.crt"
LOG_DIR="logs"
mkdir -p "$LOG_DIR"
PEER_ADDRESS="/ip4/127.0.0.1/tcp/$WSS_PORT/wss/p2p/$PEER_ID"
TIMESTAMP=$(date +"%Y-%m-%d_%H-%M-%S")
LOG_FILE="$LOG_DIR/wss_cert_test_$TIMESTAMP.log"
echo "Building Waku Canary app..."
( cd ../../../ && make wakucanary ) >> "$LOG_FILE" 2>&1
{
echo "=== Canary WSS + Cert Test ==="
echo "Timestamp : $TIMESTAMP"
echo "Node Port : $NODE_PORT"
echo "WSS Port : $WSS_PORT"
echo "Peer ID : $PEER_ID"
echo "Protocol : $PROTOCOL"
echo "Key Path : $KEY_PATH"
echo "Cert Path : $CERT_PATH"
echo "Address : $PEER_ADDRESS"
echo "------------------------------------------"
$WAKUCANARY_BINARY \
--address="$PEER_ADDRESS" \
--protocol="$PROTOCOL" \
--log-level=DEBUG \
--websocket-secure-key-path="$KEY_PATH" \
--websocket-secure-cert-path="$CERT_PATH"
echo "------------------------------------------"
echo "Exit code: $?"
} 2>&1 | tee "$LOG_FILE"
echo "✅ Log saved to: $LOG_FILE"

View File

@ -1,7 +1,8 @@
import import
std/[strutils, sequtils, tables, strformat], std/[strutils, sequtils, tables],
confutils, confutils,
chronos, chronos,
stew/shims/net,
chronicles/topics_registry, chronicles/topics_registry,
os os
import import
@ -20,15 +21,6 @@ const ProtocolsTable = {
"relay": "/vac/waku/relay/", "relay": "/vac/waku/relay/",
"lightpush": "/vac/waku/lightpush/", "lightpush": "/vac/waku/lightpush/",
"filter": "/vac/waku/filter-subscribe/2", "filter": "/vac/waku/filter-subscribe/2",
"filter-push": "/vac/waku/filter-push/",
"ipfs-id": "/ipfs/id/",
"autonat": "/libp2p/autonat/",
"circuit-relay": "/libp2p/circuit/relay/",
"metadata": "/vac/waku/metadata/",
"rendezvous": "/rendezvous/",
"ipfs-ping": "/ipfs/ping/",
"peer-exchange": "/vac/waku/peer-exchange/",
"mix": "mix/1.0.0",
}.toTable }.toTable
const WebSocketPortOffset = 1000 const WebSocketPortOffset = 1000
@ -113,48 +105,37 @@ proc parseCmdArg*(T: type chronos.Duration, p: string): T =
proc completeCmdArg*(T: type chronos.Duration, val: string): seq[string] = proc completeCmdArg*(T: type chronos.Duration, val: string): seq[string] =
return @[] return @[]
# checks if rawProtocols (skipping version) are supported in nodeProtocols
proc areProtocolsSupported( proc areProtocolsSupported(
toValidateProtocols: seq[string], nodeProtocols: seq[string] rawProtocols: seq[string], nodeProtocols: seq[string]
): bool = ): bool =
## Checks if all toValidateProtocols are contained in nodeProtocols.
## nodeProtocols contains the full list of protocols currently informed by the node under analysis.
## toValidateProtocols contains the protocols, without version number, that we want to check if they are supported by the node.
var numOfSupportedProt: int = 0 var numOfSupportedProt: int = 0
for rawProtocol in toValidateProtocols: for nodeProtocol in nodeProtocols:
let protocolTag = ProtocolsTable[rawProtocol] for rawProtocol in rawProtocols:
info "Checking if protocol is supported", expected_protocol_tag = protocolTag let protocolTag = ProtocolsTable[rawProtocol]
var protocolSupported = false
for nodeProtocol in nodeProtocols:
if nodeProtocol.startsWith(protocolTag): if nodeProtocol.startsWith(protocolTag):
info "The node supports the protocol", supported_protocol = nodeProtocol info "Supported protocol ok", expected = protocolTag, supported = nodeProtocol
numOfSupportedProt += 1 numOfSupportedProt += 1
protocolSupported = true
break break
if not protocolSupported: if numOfSupportedProt == rawProtocols.len:
error "The node does not support the protocol", expected_protocol = protocolTag
if numOfSupportedProt == toValidateProtocols.len:
return true return true
return false return false
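The comparison logic above checks that each requested protocol tag, with its version suffix stripped, is a prefix of some protocol the node advertises. The same check can be sketched outside Nim; the shell helper below is illustrative only, and the tag strings mirror the `ProtocolsTable` entries:

```shell
#!/usr/bin/env bash
# Return 0 iff every expected tag is a prefix of at least one advertised protocol id.
protocols_supported() {
  local expected="$1"   # whitespace-separated tags, e.g. "/vac/waku/relay/"
  local advertised="$2" # whitespace-separated full protocol ids
  local tag proto found
  for tag in $expected; do
    found=1
    for proto in $advertised; do
      case "$proto" in "$tag"*) found=0; break ;; esac
    done
    [ "$found" -eq 0 ] || return 1
  done
  return 0
}

advertised="/vac/waku/store/2.0.0-beta4 /vac/waku/relay/2.0.0 /ipfs/ping/1.0.0"
if protocols_supported "/vac/waku/store/ /vac/waku/relay/" "$advertised"; then
  echo "supported"
fi
```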
proc pingNode( proc pingNode(
node: WakuNode, peerInfo: RemotePeerInfo node: WakuNode, peerInfo: RemotePeerInfo
): Future[bool] {.async, gcsafe.} = ): Future[void] {.async, gcsafe.} =
try: try:
let conn = await node.switch.dial(peerInfo.peerId, peerInfo.addrs, PingCodec) let conn = await node.switch.dial(peerInfo.peerId, peerInfo.addrs, PingCodec)
let pingDelay = await node.libp2pPing.ping(conn) let pingDelay = await node.libp2pPing.ping(conn)
info "Peer response time (ms)", peerId = peerInfo.peerId, ping = pingDelay.millis info "Peer response time (ms)", peerId = peerInfo.peerId, ping = pingDelay.millis
return true
except CatchableError: except CatchableError:
var msg = getCurrentExceptionMsg() var msg = getCurrentExceptionMsg()
if msg == "Future operation cancelled!": if msg == "Future operation cancelled!":
msg = "timedout" msg = "timedout"
error "Failed to ping the peer", peer = peerInfo, err = msg error "Failed to ping the peer", peer = peerInfo, err = msg
return false
proc main(rng: ref HmacDrbgContext): Future[int] {.async.} = proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
let conf: WakuCanaryConf = WakuCanaryConf.load() let conf: WakuCanaryConf = WakuCanaryConf.load()
@ -183,9 +164,12 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
protocols = conf.protocols, protocols = conf.protocols,
logLevel = conf.logLevel logLevel = conf.logLevel
let peer = parsePeerInfo(conf.address).valueOr: let peerRes = parsePeerInfo(conf.address)
error "Couldn't parse 'conf.address'", error = error if peerRes.isErr():
quit(QuitFailure) error "Couldn't parse 'conf.address'", error = peerRes.error
return 1
let peer = peerRes.value
let let
nodeKey = crypto.PrivateKey.random(Secp256k1, rng[])[] nodeKey = crypto.PrivateKey.random(Secp256k1, rng[])[]
@ -211,22 +195,27 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
let netConfig = NetConfig.init( let netConfig = NetConfig.init(
bindIp = bindIp, bindIp = bindIp,
bindPort = nodeTcpPort, bindPort = nodeTcpPort,
wsBindPort = some(wsBindPort), wsBindPort = wsBindPort,
wsEnabled = isWs, wsEnabled = isWs,
wssEnabled = isWss, wssEnabled = isWss,
) )
var enrBuilder = EnrBuilder.init(nodeKey) var enrBuilder = EnrBuilder.init(nodeKey)
enrBuilder.withWakuRelaySharding( let relayShards = RelayShards.init(conf.clusterId, conf.shards).valueOr:
RelayShards(clusterId: conf.clusterId, shardIds: conf.shards) error "Relay shards initialization failed", error = error
).isOkOr: return 1
error "could not initialize ENR with shards", error enrBuilder.withWakuRelaySharding(relayShards).isOkOr:
quit(QuitFailure) error "Building ENR with relay sharding failed", error = error
return 1
let record = enrBuilder.build().valueOr: let recordRes = enrBuilder.build()
error "failed to create enr record", error = error let record =
quit(QuitFailure) if recordRes.isErr():
error "failed to create enr record", error = recordRes.error
quit(QuitFailure)
else:
recordRes.get()
if isWss and if isWss and
(conf.websocketSecureKeyPath.len == 0 or conf.websocketSecureCertPath.len == 0): (conf.websocketSecureKeyPath.len == 0 or conf.websocketSecureCertPath.len == 0):
@ -235,7 +224,7 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
createDir(CertsDirectory) createDir(CertsDirectory)
if generateSelfSignedCertificate(certPath, keyPath) != 0: if generateSelfSignedCertificate(certPath, keyPath) != 0:
error "Error generating key and certificate" error "Error generating key and certificate"
quit(QuitFailure) return 1
builder.withRecord(record) builder.withRecord(record)
builder.withNetworkConfiguration(netConfig.tryGet()) builder.withNetworkConfiguration(netConfig.tryGet())
@ -244,17 +233,15 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
) )
let node = builder.build().tryGet() let node = builder.build().tryGet()
node.mountMetadata(conf.clusterId).isOkOr:
error "failed to mount waku metadata protocol: ", err = error
if conf.ping: if conf.ping:
try: try:
await mountLibp2pPing(node) await mountLibp2pPing(node)
except CatchableError: except CatchableError:
error "failed to mount libp2p ping protocol: " & getCurrentExceptionMsg() error "failed to mount libp2p ping protocol: " & getCurrentExceptionMsg()
quit(QuitFailure) return 1
node.mountMetadata(conf.clusterId, conf.shards).isOkOr:
error "failed to mount metadata protocol", error
quit(QuitFailure)
await node.start() await node.start()
@ -265,34 +252,23 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
let timedOut = not await node.connectToNodes(@[peer]).withTimeout(conf.timeout) let timedOut = not await node.connectToNodes(@[peer]).withTimeout(conf.timeout)
if timedOut: if timedOut:
error "Timedout after", timeout = conf.timeout error "Timedout after", timeout = conf.timeout
quit(QuitFailure) return 1
let lp2pPeerStore = node.switch.peerStore let lp2pPeerStore = node.switch.peerStore
let conStatus = node.peerManager.switch.peerStore[ConnectionBook][peer.peerId] let conStatus = node.peerManager.wakuPeerStore[ConnectionBook][peer.peerId]
var pingSuccess = true
if conf.ping: if conf.ping:
try: discard await pingFut
pingSuccess = await pingFut
except CatchableError as exc:
pingSuccess = false
error "Ping operation failed or timed out", error = exc.msg
if conStatus in [Connected, CanConnect]: if conStatus in [Connected, CanConnect]:
let nodeProtocols = lp2pPeerStore[ProtoBook][peer.peerId] let nodeProtocols = lp2pPeerStore[ProtoBook][peer.peerId]
if not areProtocolsSupported(conf.protocols, nodeProtocols): if not areProtocolsSupported(conf.protocols, nodeProtocols):
error "Not all protocols are supported", error "Not all protocols are supported",
expected = conf.protocols, supported = nodeProtocols expected = conf.protocols, supported = nodeProtocols
quit(QuitFailure) return 1
# Check ping result if ping was enabled
if conf.ping and not pingSuccess:
error "Node is reachable and supports protocols but ping failed - connection may be unstable"
quit(QuitFailure)
elif conStatus == CannotConnect: elif conStatus == CannotConnect:
error "Could not connect", peerId = peer.peerId error "Could not connect", peerId = peer.peerId
quit(QuitFailure) return 1
return 0 return 0
when isMainModule: when isMainModule:

View File

@ -9,13 +9,15 @@ import
system/ansi_c, system/ansi_c,
libp2p/crypto/crypto libp2p/crypto/crypto
import import
../../tools/[rln_keystore_generator/rln_keystore_generator, confutils/cli_args], ../../tools/rln_keystore_generator/rln_keystore_generator,
../../tools/rln_db_inspector/rln_db_inspector,
waku/[ waku/[
common/logging, common/logging,
factory/external_config,
factory/waku, factory/waku,
node/health_monitor, node/health_monitor,
rest_api/endpoint/builder as rest_server_builder, node/waku_metrics,
waku_core/message/default_values, waku_api/rest/builder as rest_server_builder,
] ]
logScope: logScope:
@ -36,33 +38,65 @@ when isMainModule:
const versionString = "version / git commit hash: " & waku.git_version const versionString = "version / git commit hash: " & waku.git_version
var wakuNodeConf = WakuNodeConf.load(version = versionString).valueOr: var conf = WakuNodeConf.load(version = versionString).valueOr:
error "failure while loading the configuration", error = error error "failure while loading the configuration", error = error
quit(QuitFailure) quit(QuitFailure)
## Also called within Waku.new. The call to startRestServerEssentials needs the following line ## Also called within Waku.new. The call to startRestServerEsentials needs the following line
logging.setupLog(wakuNodeConf.logLevel, wakuNodeConf.logFormat) logging.setupLog(conf.logLevel, conf.logFormat)
case wakuNodeConf.cmd case conf.cmd
of generateRlnKeystore: of generateRlnKeystore:
let conf = wakuNodeConf.toKeystoreGeneratorConf()
doRlnKeystoreGenerator(conf) doRlnKeystoreGenerator(conf)
of inspectRlnDb:
doInspectRlnDb(conf)
of noCommand: of noCommand:
let conf = wakuNodeConf.toWakuConf().valueOr: # NOTE: {.threadvar.} is used to make the global variable GC safe for the closure uses it
error "Waku configuration failed", error = error # It will always be called from main thread anyway.
# Ref: https://nim-lang.org/docs/manual.html#threads-gc-safety
var nodeHealthMonitor {.threadvar.}: WakuNodeHealthMonitor
nodeHealthMonitor = WakuNodeHealthMonitor()
nodeHealthMonitor.setOverallHealth(HealthStatus.INITIALIZING)
var confCopy = conf
let restServer = rest_server_builder.startRestServerEsentials(
nodeHealthMonitor, confCopy
).valueOr:
error "Starting esential REST server failed.", error = $error
quit(QuitFailure) quit(QuitFailure)
var waku = (waitFor Waku.new(conf)).valueOr: var waku = Waku.new(confCopy).valueOr:
error "Waku initialization failed", error = error error "Waku initialization failed", error = error
quit(QuitFailure) quit(QuitFailure)
waku.restServer = restServer
nodeHealthMonitor.setNode(waku.node)
(waitFor startWaku(addr waku)).isOkOr: (waitFor startWaku(addr waku)).isOkOr:
error "Starting waku failed", error = error error "Starting waku failed", error = error
quit(QuitFailure) quit(QuitFailure)
info "Setting up shutdown hooks" rest_server_builder.startRestServerProtocolSupport(
proc asyncStopper(waku: Waku) {.async: (raises: [Exception]).} = restServer, waku.node, waku.wakuDiscv5, confCopy
await waku.stop() ).isOkOr:
error "Starting protocols support REST server failed.", error = $error
quit(QuitFailure)
waku.metricsServer = waku_metrics.startMetricsServerAndLogging(confCopy).valueOr:
error "Starting monitoring and external interfaces failed", error = error
quit(QuitFailure)
nodeHealthMonitor.setOverallHealth(HealthStatus.READY)
debug "Setting up shutdown hooks"
## Setup shutdown hooks for this process.
## Stop node gracefully on shutdown.
proc asyncStopper(node: Waku) {.async: (raises: [Exception]).} =
nodeHealthMonitor.setOverallHealth(HealthStatus.SHUTTING_DOWN)
await node.stop()
quit(QuitSuccess) quit(QuitSuccess)
# Handle Ctrl-C SIGINT # Handle Ctrl-C SIGINT

View File

@ -2,19 +2,11 @@
library 'status-jenkins-lib@v1.8.17' library 'status-jenkins-lib@v1.8.17'
pipeline { pipeline {
agent { agent { label 'linux' }
docker {
label 'linuxcontainer'
image 'harbor.status.im/infra/ci-build-containers:linux-base-1.0.0'
args '--volume=/var/run/docker.sock:/var/run/docker.sock ' +
'--user jenkins'
}
}
options { options {
timestamps() timestamps()
timeout(time: 20, unit: 'MINUTES') timeout(time: 20, unit: 'MINUTES')
disableRestartFromStage()
buildDiscarder(logRotator( buildDiscarder(logRotator(
numToKeepStr: '10', numToKeepStr: '10',
daysToKeepStr: '30', daysToKeepStr: '30',

View File

@ -36,7 +36,6 @@ pipeline {
options { options {
timestamps() timestamps()
disableRestartFromStage()
/* Prevent Jenkins jobs from running forever */ /* Prevent Jenkins jobs from running forever */
timeout(time: 30, unit: 'MINUTES') timeout(time: 30, unit: 'MINUTES')
/* Limit builds retained. */ /* Limit builds retained. */

View File

@ -2,18 +2,10 @@
library 'status-jenkins-lib@v1.8.17' library 'status-jenkins-lib@v1.8.17'
pipeline { pipeline {
agent { agent { label 'linux' }
docker {
label 'linuxcontainer'
image 'harbor.status.im/infra/ci-build-containers:linux-base-1.0.0'
args '--volume=/var/run/docker.sock:/var/run/docker.sock ' +
'--user jenkins'
}
}
options { options {
timestamps() timestamps()
disableRestartFromStage()
timeout(time: 20, unit: 'MINUTES') timeout(time: 20, unit: 'MINUTES')
buildDiscarder(logRotator( buildDiscarder(logRotator(
numToKeepStr: '10', numToKeepStr: '10',
@ -77,33 +69,17 @@ pipeline {
stages { stages {
stage('Build') { stage('Build') {
steps { script { steps { script {
if (params.HEAPTRACK) { image = docker.build(
echo 'Building with heaptrack support' "${params.IMAGE_NAME}:${params.IMAGE_TAG ?: env.GIT_COMMIT.take(8)}",
image = docker.build( "--label=build='${env.BUILD_URL}' " +
"${params.IMAGE_NAME}:${params.IMAGE_TAG ?: env.GIT_COMMIT.take(8)}", "--label=commit='${git.commit()}' " +
"--label=build='${env.BUILD_URL}' " + "--label=version='${git.describe('--tags')}' " +
"--label=commit='${git.commit()}' " + "--build-arg=MAKE_TARGET='${params.MAKE_TARGET}' " +
"--label=version='${git.describe('--tags')}' " + "--build-arg=NIMFLAGS='${params.NIMFLAGS} -d:postgres ' " +
"--build-arg=MAKE_TARGET='${params.MAKE_TARGET}' " + "--build-arg=LOG_LEVEL='${params.LOWEST_LOG_LEVEL_ALLOWED}' " +
"--build-arg=NIMFLAGS='${params.NIMFLAGS} -d:postgres -d:heaptracker ' " + "--build-arg=DEBUG='${params.DEBUG ? "1" : "0"} ' " +
"--build-arg=LOG_LEVEL='${params.LOWEST_LOG_LEVEL_ALLOWED}' " + "--target=${params.HEAPTRACK ? "prod-with-heaptrack" : "prod"} ."
"--build-arg=DEBUG='${params.DEBUG ? "1" : "0"} ' " + )
"--build-arg=NIM_COMMIT='NIM_COMMIT=heaptrack_support_v2.0.12' " +
"--target='debug-with-heaptrack' ."
)
} else {
image = docker.build(
"${params.IMAGE_NAME}:${params.IMAGE_TAG ?: env.GIT_COMMIT.take(8)}",
"--label=build='${env.BUILD_URL}' " +
"--label=commit='${git.commit()}' " +
"--label=version='${git.describe('--tags')}' " +
"--build-arg=MAKE_TARGET='${params.MAKE_TARGET}' " +
"--build-arg=NIMFLAGS='${params.NIMFLAGS} -d:postgres ' " +
"--build-arg=LOG_LEVEL='${params.LOWEST_LOG_LEVEL_ALLOWED}' " +
"--build-arg=DEBUG='${params.DEBUG ? "1" : "0"} ' " +
"--target='prod' ."
)
}
} } } }
} }

View File

@ -7,7 +7,6 @@ else:
if defined(windows): if defined(windows):
switch("passL", "rln.lib") switch("passL", "rln.lib")
switch("define", "postgres=false")
# Automatically add all vendor subdirectories # Automatically add all vendor subdirectories
for dir in walkDir("./vendor"): for dir in walkDir("./vendor"):

View File

@ -1,5 +1,5 @@
# Dockerfile to build a distributable container image from pre-existing binaries # Dockerfile to build a distributable container image from pre-existing binaries
FROM debian:bookworm-slim AS prod FROM debian:stable-slim as prod
ARG MAKE_TARGET=wakunode2 ARG MAKE_TARGET=wakunode2
@ -13,9 +13,12 @@ EXPOSE 30303 60000 8545
# Referenced in the binary # Referenced in the binary
RUN apt-get update &&\ RUN apt-get update &&\
apt-get install -y libpq-dev curl iproute2 wget dnsutils &&\ apt-get install -y libpcre3 libpq-dev curl iproute2 wget &&\
apt-get clean && rm -rf /var/lib/apt/lists/* apt-get clean && rm -rf /var/lib/apt/lists/*
# Fix for 'Error loading shared library libpcre.so.3: No such file or directory'
RUN ln -s /usr/lib/libpcre.so /usr/lib/libpcre.so.3
# Copy to separate location to accommodate different MAKE_TARGET values # Copy to separate location to accommodate different MAKE_TARGET values
ADD ./build/$MAKE_TARGET /usr/local/bin/ ADD ./build/$MAKE_TARGET /usr/local/bin/

View File

@ -1,60 +0,0 @@
# Dockerfile to build a distributable container image from pre-existing binaries
# FROM debian:stable-slim AS prod
FROM ubuntu:24.04 AS prod
ARG MAKE_TARGET=wakunode2
LABEL maintainer="vaclav@status.im"
LABEL source="https://github.com/waku-org/nwaku"
LABEL description="Wakunode: Waku client"
LABEL commit="unknown"
# DevP2P, LibP2P, and JSON RPC ports
EXPOSE 30303 60000 8545
# Referenced in the binary
RUN apt-get update &&\
apt-get install -y libpq-dev curl iproute2 wget jq dnsutils &&\
apt-get clean && rm -rf /var/lib/apt/lists/*
# Copy to separate location to accommodate different MAKE_TARGET values
ADD ./build/$MAKE_TARGET /usr/local/bin/
# Copy migration scripts for DB upgrades
ADD ./migrations/ /app/migrations/
# Symlink the correct wakunode binary
RUN ln -sv /usr/local/bin/$MAKE_TARGET /usr/bin/wakunode
ENTRYPOINT ["/usr/bin/wakunode"]
# By default just show help if called without arguments
CMD ["--help"]
# Build debug tools: heaptrack
FROM ubuntu:24.04 AS heaptrack-build
RUN apt update
RUN apt install -y gdb git g++ make cmake zlib1g-dev libboost-all-dev libunwind-dev
RUN git clone https://github.com/KDE/heaptrack.git /heaptrack
WORKDIR /heaptrack/build
# Pin to a commit that builds properly. We will revisit this for new releases.
RUN git reset --hard f9cc35ebbdde92a292fe3870fe011ad2874da0ca
RUN cmake -DCMAKE_BUILD_TYPE=Release ..
RUN make -j$(nproc)
# Debug image
FROM prod AS debug-with-heaptrack
RUN apt update
RUN apt install -y gdb libunwind8
# Add heaptrack
COPY --from=heaptrack-build /heaptrack/build/ /heaptrack/build/
ENV LD_LIBRARY_PATH=/heaptrack/build/lib/heaptrack/
RUN ln -s /heaptrack/build/bin/heaptrack /usr/local/bin/heaptrack
ENTRYPOINT ["/heaptrack/build/bin/heaptrack", "/usr/bin/wakunode"]

View File

@ -38,9 +38,6 @@ A particular OpenAPI spec can be easily imported into [Postman](https://www.post
curl http://localhost:8645/debug/v1/info -s | jq curl http://localhost:8645/debug/v1/info -s | jq
``` ```
### Store API
The `page_size` flag in the Store API has a default value of 20 and a max value of 100.
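With the 100-item cap, a client that wants N messages needs ceil(N / page_size) requests; a quick sketch of that arithmetic (no real endpoint is called, and the helper name is illustrative):

```shell
#!/usr/bin/env bash
# Number of Store API requests needed to fetch `total` messages
# when each response is capped at `page_size` items (max 100).
pages_needed() {
  local total="$1" page_size="$2"
  echo $(( (total + page_size - 1) / page_size ))
}

pages_needed 250 100  # 3 requests at the maximum page size
pages_needed 250 20   # 13 requests at the default page size
```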
### Node configuration ### Node configuration
Find details [here](https://github.com/waku-org/nwaku/tree/master/docs/operators/how-to/configure-rest-api.md) Find details [here](https://github.com/waku-org/nwaku/tree/master/docs/operators/how-to/configure-rest-api.md)

View File

@ -1,90 +0,0 @@
---
title: Performance Benchmarks and Test Reports
---
## Introduction
This page summarises key performance metrics for nwaku and provides links to detailed test reports.
> ## TL;DR
>
> - Average Waku bandwidth usage: ~**10 KB/s** (minus discv5 Discovery) for 1KB message size and message injection rate of 1msg/s.
>   Confirmed for topologies of up to 2000 Relay nodes.
> - Average time for a message to propagate to 100% of nodes: **0.4s** for topologies of up to 2000 Relay nodes.
> - Average per-node bandwidth usage of the discv5 protocol: **8 KB/s** for incoming traffic and **7.4 KB/s** for outgoing traffic,
>   in a network with 100 continuously online nodes.
> - Future improvements: A messaging API is currently in development to streamline interactions with the Waku protocol suite.
>   Once completed, it will enable benchmarking at the messaging API level, allowing applications to more easily compare their
>   own performance results.
## Insights
### Relay Bandwidth Usage: nwaku v0.34.0
The average per-node `libp2p` bandwidth usage in a 1000-node Relay network with 1KB messages at varying injection rates.
| Message Injection Rate | Average libp2p incoming bandwidth (KB/s) | Average libp2p outgoing bandwidth (KB/s) |
|------------------------|------------------------------------------|------------------------------------------|
| 1 msg/s | ~10.1 | ~10.3 |
| 1 msg/10s | ~1.8 | ~1.9 |
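At 1 msg/s with 1 KB messages, the raw injected payload is only 1 KB/s per publisher, so the ~10 KB/s per-node figure implies roughly a 10x amplification from gossipsub mesh duplication and protocol overhead. This is an inference from the table, not a reported number; the arithmetic:

```shell
#!/usr/bin/env bash
# Rough amplification factor: measured per-node bandwidth vs. raw injected payload.
injected_kbps=1.0    # 1 msg/s * 1 KB
measured_kbps=10.1   # average libp2p incoming bandwidth from the table
awk -v m="$measured_kbps" -v i="$injected_kbps" \
  'BEGIN { printf "amplification ~ %.1fx\n", m / i }'
```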
### Message Propagation Latency: nwaku v0.34.0-rc1
The message propagation latency is measured as the total time for a message to reach all nodes.
We compare the latency in different network configurations for the following simulation parameters:
- Total messages published: 600
- Message size: 1KB
- Message injection rate: 1msg/s
The different network configurations tested are:
- Relay Config: 1000 nodes with relay enabled
- Mixed Config: 210 nodes, consisting of bootstrap nodes, filter clients and servers, lightpush clients and servers, store nodes
- Non-persistent Relay Config: 500 persistent relay nodes, 10 store nodes and 100 non-persistent relay nodes
Click on a specific config to see the detailed test report.
| Config | Average Message Propagation Latency (s) | Max Message Propagation Latency (s)|
|------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|------------------------------------|
| [Relay](https://www.notion.so/Waku-regression-testing-v0-34-1618f96fb65c803bb7bad6ecd6bafff9) (1000 nodes) | 0.05 | 1.6 |
| [Mixed](https://www.notion.so/Mixed-environment-analysis-1688f96fb65c809eb235c59b97d6e15b) (210 nodes) | 0.0125 | 0.007 |
| [Non-persistent Relay](https://www.notion.so/High-Churn-Relay-Store-Reliability-16c8f96fb65c8008bacaf5e86881160c) (510 nodes)| 0.0125 | 0.25 |
### Discv5 Bandwidth Usage: nwaku v0.34.0
The average bandwidth usage of discv5 for a network of 100 nodes with a message injection rate of 0 or 1 msg/s.
The measurements are based on a stable network where all nodes have already connected to peers to form a healthy mesh.
|Message size |Average discv5 incoming bandwidth (KB/s)|Average discv5 outgoing bandwidth (KB/s)|
|-------------------- |----------------------------------------|----------------------------------------|
| no message injection| 7.88 | 6.70 |
| 1KB | 8.04 | 7.40 |
| 10KB | 8.03 | 7.45 |
## Testing
### DST
The VAC DST team performs regression testing on all new **nwaku** releases, comparing performance with previous versions.
They simulate large Waku networks with a variety of network and protocol configurations that are representative of real-world usage.
**Test Reports**: [DST Reports](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f)
### QA
The VAC QA team performs interoperability tests for **nwaku** and **go-waku** using the latest main branch builds.
These tests run daily and verify protocol functionality by targeting specific features of each protocol.
**Test Reports**: [QA Reports](https://discord.com/channels/1110799176264056863/1196933819614363678)
### nwaku
The **nwaku** team follows a structured release procedure for all release candidates.
This involves deploying RCs to the `status.staging` fleet for validation and performing sanity checks.
**Release Process**: [nwaku Release Procedure](https://github.com/waku-org/nwaku/blob/master/.github/ISSUE_TEMPLATE/prepare_release.md)
### Research
The Waku Research team conducts a variety of benchmarking, performance testing, proof-of-concept validations and debugging efforts.
They also maintain a Waku simulator designed for small-scale, single-purpose, on-demand testing.
**Test Reports**: [Waku Research Reports](https://www.notion.so/Miscellaneous-2c02516248db4a28ba8cb2797a40d1bb)
**Waku Simulator**: [Waku Simulator Book](https://waku-org.github.io/waku-simulator/)

View File

@@ -6,52 +6,44 @@ For more context, see https://trunkbaseddevelopment.com/branch-for-release/

## How to do releases

### Prerequisites

- All issues under the corresponding release [milestone](https://github.com/waku-org/nwaku/milestones) have been closed or, after consultation, deferred to the next release.
- All submodules are up to date.

> Updating submodules requires a PR (and very often several "fixes" to maintain compatibility with the changes in submodules). That PR process must be done and merged a couple of days before the release.
> In case the submodules update has a low effort and/or risk for the release, follow the ["Update submodules"](./git-submodules.md) instructions.
> If the effort or risk is too high, consider postponing the submodules upgrade for the subsequent release or delaying the current release until the submodules updates are included in the release candidate.

### Release types

- **Full release**: follow the entire [Release process](#release-process--step-by-step).
- **Beta release**: skip just the `6a` and `6c` steps of the [Release process](#release-process--step-by-step).
- Choose the appropriate release process based on the release type:
  - [Full Release](../../.github/ISSUE_TEMPLATE/prepare_full_release.md)
  - [Beta Release](../../.github/ISSUE_TEMPLATE/prepare_beta_release.md)

### Release process ( step by step )

1. Checkout a release branch from master
   ```
   git checkout -b release/v0.X.0
   ```
2. Update `CHANGELOG.md` and ensure it is up to date. Use the helper Make target to get a PR-based release-notes/changelog update.
   ```
   make release-notes
   ```
3. Create a release-candidate tag with the same name as the release and an `-rc.N` suffix a few days before the official release and push it
   ```
   git tag -as v0.X.0-rc.0 -m "Initial release."
   git push origin v0.X.0-rc.0
   ```
   This will trigger a [workflow](../../.github/workflows/pre-release.yml) which will build RC artifacts and create and publish a GitHub release.
4. Open a PR from the release branch for others to review the included changes and the release-notes.
5. In case additional changes are needed, create a new RC tag.
   Make sure the new tag is associated with the CHANGELOG update.

@@ -60,57 +52,25 @@

   # Make changes, rebase and create new tag
   # Squash to one commit and make a nice commit message
   git rebase -i origin/master
   git tag -as v0.X.0-rc.1 -m "Initial release."
   git push origin v0.X.0-rc.1
   ```
   Similarly use v0.X.0-rc.2, v0.X.0-rc.3 etc. for additional RC tags.
6. **Validation of release candidate**

   6a. **Automated testing**
   - Ensure all the unit tests (specifically js-waku tests) are green against the release candidate.
   - Ask Vac-QA and Vac-DST to run their available tests against the release candidate; share all release candidates with both teams.
   > We need an additional report like [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f) specifically from the DST team.

   6b. **Waku fleet testing**
   - Start the `waku.sandbox` and `waku.test` [deployment job](https://ci.infra.status.im/job/nim-waku/) and wait for it to complete. If it fails, debug it.
   - After completion, disable the [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to `master`.
   - Verify at https://fleets.waku.org/ that the fleet is locked to the release candidate version.
   - Check whether the image has been created at [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab).
   - Search _Kibana_ logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test` and `waku.sandbox`.
     - The most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")` OR `(fleet: "waku.sandbox" AND message: "SIGSEGV")`.
   - Enable the `waku.test` fleet again to resume auto-deployment of the latest `master` commit.

   6c. **Status fleet testing**
   - Deploy the release candidate to `status.staging`.
   - Perform a [sanity check](https://www.notion.so/How-to-test-Nwaku-on-Status-12c6e4b9bf06420ca868bd199129b425) and log results as comments in this issue.
     - Connect 2 instances to the `status.staging` fleet, one in relay mode, the other as a light client.
     - 1:1 chats with each other
     - Send and receive messages in a community
     - Close one instance, send messages with the second instance, reopen the first instance and confirm messages sent while offline are retrieved from store
   - Perform checks based on _end-user impact_.
   - Inform other (Waku and Status) CCs to point their instances to `status.staging` for a few days. Ping Status colleagues on their Discord server or the [Status community](https://status.app) (not a blocking point).
   - Ask Status-QA to perform sanity checks (as described above) and checks based on _end-user impact_; specify the version being tested.
   - Ask Status-QA or infra to run the automated Status e2e tests against `status.staging`.
   - Get other CCs' sign-off: they should comment on this PR, e.g., "Used the app for a week, no problem." If problems are reported, resolve them and create a new RC.
   - **Get Status-QA sign-off**, ensuring that the `status.test` update will not disturb ongoing activities.
7. Once the release-candidate has been validated, create a final release tag and push it.
   We also need to merge the release branch back into master as a final step.
   ```
   git checkout release/v0.X.0
   git tag -as v0.X.0 -m "Final release." # use v0.X.0-beta as the tag if you are creating a beta release
   git push origin v0.X.0
   git switch master
   git pull
   git merge release/v0.X.0
   ```
8. Update `waku-rust-bindings`, `waku-simulator` and `nwaku-compose` to use the new release.
9. Create a [GitHub release](https://github.com/waku-org/nwaku/releases) from the release tag.
   * Add binaries produced by the ["Upload Release Asset"](https://github.com/waku-org/nwaku/actions/workflows/release-assets.yml) workflow. Where possible, test the binaries before uploading to the release.

@@ -120,10 +80,22 @@ We also need to merge the release branch back into master as a final step.

2. Deploy the release image to [Dockerhub](https://hub.docker.com/r/wakuorg/nwaku) by triggering [the manual Jenkins deployment job](https://ci.infra.status.im/job/nim-waku/job/docker-manual/).
   > Ensure the following build parameters are set:
   > - `MAKE_TARGET`: `wakunode2`
   > - `IMAGE_TAG`: the release tag (e.g. `v0.36.0`)
   > - `IMAGE_NAME`: `wakuorg/nwaku`
   > - `NIMFLAGS`: `--colors:off -d:disableMarchNative -d:chronicles_colors:none -d:postgres`
   > - `GIT_REF`: the release tag (e.g. `v0.36.0`)

### Performing a patch release

@@ -144,14 +116,4 @@ We also need to merge the release branch back into master as a final step.

4. Once the release-candidate has been validated and the changelog PR has been merged, cherry-pick the changelog update from master to the release branch. Create a final release tag and push it.
5. Create a [GitHub release](https://github.com/waku-org/nwaku/releases) from the release tag and follow the same post-release process as usual.

### Links

- [Release process](https://github.com/waku-org/nwaku/blob/master/docs/contributors/release-process.md)
- [Release notes](https://github.com/waku-org/nwaku/blob/master/CHANGELOG.md)
- [Fleet ownership](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64)
- [Infra-nim-waku](https://github.com/status-im/infra-nim-waku)
- [Jenkins](https://ci.infra.status.im/job/nim-waku/)
- [Fleets](https://fleets.waku.org/)
- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab)
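The RC tag sequence used in the release process (`-rc.0`, `-rc.1`, …) can be derived mechanically from the previous tag. A minimal sketch with a hypothetical `next_rc` helper — this is not part of the repo tooling, just an illustration of the naming convention:

```shell
# next_rc <previous-rc-tag-or-empty> <release-tag>
# Prints the next release-candidate tag for the given release line.
next_rc() {
  prev="$1"; release="$2"
  if [ -z "$prev" ]; then
    echo "${release}-rc.0"
  else
    n="${prev##*-rc.}"               # strip everything up to the RC number
    echo "${release}-rc.$((n + 1))"
  fi
}

next_rc ""           v0.36.0   # → v0.36.0-rc.0
next_rc v0.36.0-rc.0 v0.36.0   # → v0.36.0-rc.1
```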

View File

@@ -9,6 +9,7 @@ The following command line options are available:

```
--dns-discovery         Enable DNS Discovery
--dns-discovery-url     URL for DNS node list in format 'enrtree://<key>@<fqdn>'
```

- `--dns-discovery` is used to enable DNS discovery on the node.

@@ -16,6 +17,8 @@ Waku DNS discovery is disabled by default.

- `--dns-discovery-url` is mandatory if DNS discovery is enabled.
  It contains the URL for the node list.
  The URL must be in the format `enrtree://<key>@<fqdn>` where `<fqdn>` is the fully qualified domain name and `<key>` is the base32 encoding of the compressed 32-byte public key that signed the list at that location.

A node will attempt connection to all discovered nodes.
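The `enrtree://<key>@<fqdn>` shape can be split with plain shell parameter expansion; a small sketch (shape check only, no DNS lookups), using the `test.waku.nodes.status.im` list as an example:

```shell
# Split an 'enrtree://<key>@<fqdn>' URL into its key and domain parts
url="enrtree://AOGYWMBYOUIMOENHXCHILPKY3ZRFEULMFI4DOM442QSZ73TT2A7VI@test.waku.nodes.status.im"

case "$url" in
  enrtree://*@*) ;;                          # minimal shape check
  *) echo "not an enrtree URL" >&2; exit 1 ;;
esac

rest="${url#enrtree://}"   # drop the scheme
key="${rest%%@*}"          # base32 public key that signed the list
fqdn="${rest#*@}"          # fully qualified domain name hosting the list

echo "key:  $key"
echo "fqdn: $fqdn"
```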

View File

@@ -1,3 +1,4 @@

# Configure a REST API node

A subset of the node configuration can be used to modify the behaviour of the HTTP REST API.

@@ -20,5 +21,3 @@ Example:

```shell
wakunode2 --rest=true
```

The `page_size` flag in the Store API has a default value of 20 and a max value of 100.

View File

@@ -33,8 +33,8 @@ make wakunode2

Follow [Step 10](../droplet-quickstart.md#10-run-nwaku) of the [droplet quickstart](../droplet-quickstart.md) guide, while replacing the run command with -

```bash
export LINEA_SEPOLIA_HTTP_NODE_ADDRESS=<HTTP RPC URL to a Linea Sepolia Node>
export RLN_RELAY_CONTRACT_ADDRESS="0xB9cd878C90E49F797B4431fBF4fb333108CB90e6" # Replace this with any compatible implementation
$WAKUNODE_DIR/wakunode2 \
  --store:true \
  --persist-messages \

@@ -44,7 +44,7 @@ $WAKUNODE_DIR/wakunode2 \

  --rln-relay:true \
  --rln-relay-dynamic:true \
  --rln-relay-eth-contract-address:"$RLN_RELAY_CONTRACT_ADDRESS" \
  --rln-relay-eth-client-address:"$LINEA_SEPOLIA_HTTP_NODE_ADDRESS"
```

OR

@@ -53,9 +53,9 @@ If you are running the nwaku node within docker, follow [Step 2](../docker-quick

```bash
export WAKU_FLEET=<enrtree URL of the fleet>
export LINEA_SEPOLIA_HTTP_NODE_ADDRESS=<HTTP RPC URL to a Linea Sepolia Node>
export RLN_RELAY_CONTRACT_ADDRESS="0xB9cd878C90E49F797B4431fBF4fb333108CB90e6" # Replace this with any compatible implementation
docker run -i -t -p 60000:60000 -p 9000:9000/udp wakuorg/nwaku:v0.36.0 \
  --dns-discovery:true \
  --dns-discovery-url:"$WAKU_FLEET" \
  --discv5-discovery \

@@ -63,7 +63,7 @@ docker run -i -t -p 60000:60000 -p 9000:9000/udp wakuorg/nwaku:v0.36.0 \

  --rln-relay:true \
  --rln-relay-dynamic:true \
  --rln-relay-eth-contract-address:"$RLN_RELAY_CONTRACT_ADDRESS" \
  --rln-relay-eth-client-address:"$LINEA_SEPOLIA_HTTP_NODE_ADDRESS"
```

> Note: You can choose to keep connections to other nodes alive by adding the `--keep-alive` flag.

@@ -74,7 +74,7 @@ runtime arguments -

1. `--rln-relay`: Allows waku-rln-relay to be mounted into the setup of the nwaku node
2. `--rln-relay-dynamic`: Enables waku-rln-relay to connect to an ethereum node to fetch the membership group
3. `--rln-relay-eth-contract-address`: The contract address of an RLN membership group
4. `--rln-relay-eth-client-address`: The HTTP url to a Linea Sepolia ethereum node

You should now have nwaku running, with RLN enabled!

View File

@@ -33,10 +33,12 @@ The following command line options are available for both `wakunode2` or `chat2`:

```
--dns-discovery         Enable DNS Discovery
--dns-discovery-url     URL for DNS node list in format 'enrtree://<key>@<fqdn>'
```

- `--dns-discovery` is used to enable DNS discovery on the node. Waku DNS discovery is disabled by default.
- `--dns-discovery-url` is mandatory if DNS discovery is enabled. It contains the URL for the node list. The URL must be in the format `enrtree://<key>@<fqdn>` where `<fqdn>` is the fully qualified domain name and `<key>` is the base32 encoding of the compressed 32-byte public key that signed the list at that location. See [EIP-1459](https://eips.ethereum.org/EIPS/eip-1459#specification) or the example below to illustrate.

A node will attempt connection to all discovered nodes.

@@ -61,9 +63,9 @@ Similarly, for `chat2`:

The node will discover and attempt connection to all `waku.test` nodes during setup procedures.

To use specific DNS name servers, one or more `--dns-addrs-name-server` arguments can be added:

```
./build/wakunode2 --dns-discovery:true --dns-discovery-url:enrtree://AOGYWMBYOUIMOENHXCHILPKY3ZRFEULMFI4DOM442QSZ73TT2A7VI@test.waku.nodes.status.im --dns-addrs-name-server:8.8.8.8 --dns-addrs-name-server:8.8.4.4
```

View File

@@ -33,33 +33,20 @@ It operates in two modes:

- `sudo apt install libkf5kio-dev`
- `sudo apt install libkf5iconthemes-dev`
- `make`
- On completion, the `bin/heaptrack_gui` and `bin/heaptrack` binaries will be generated.
  - heaptrack: needed to generate the report.
  - heaptrack_gui: needed to analyse the report.

## Heaptrack & Nwaku

nwaku supports heaptrack, but it needs a special compilation setting.

### Patch the Nim compiler to register allocations with Heaptrack

Currently we rely on the official Nim repository, so we need to patch the Nim compiler to register allocations and deallocations with Heaptrack.

For Nim 2.2.4, we created a patch that can be applied as follows:

```bash
git apply --directory=vendor/nimbus-build-system/vendor/Nim docs/tutorial/nim.2.2.4_heaptracker_addon.patch
git add .
git commit -m "Add heaptrack support to Nim compiler - temporary patch"
```

> Until heaptrack support is available in official Nim, it is important to keep this patch in the `nimbus-build-system` repository.
> Committing ensures that `make update` will not override the patch unintentionally.
> We are planning to make it available through an official PR for Nim.

Once the patch is applied, we can build wakunode2 with heaptrack support.

### Build nwaku with heaptrack support

`make -j<nproc> HEAPTRACKER=1 wakunode2`

### Create nwaku memory report with heaptrack

@@ -82,18 +69,9 @@ Having Docker properly installed in your machine, do the next:

- cd to the `nwaku` root folder.
- ```sudo make docker-image DOCKER_IMAGE_NAME=docker_repo:docker_tag HEAPTRACKER=1```
- Alternatively, you can use the `docker-quick-image` target; this is faster but creates an Ubuntu-based image, so your local build environment must match.

That will create a Docker image with both nwaku and heaptrack. The container's entry point is `ENTRYPOINT ["/heaptrack/build/bin/heaptrack", "/usr/bin/wakunode"]`, so the memory report starts being generated from the beginning.

#### Notice for using a heaptrack-supporting image with `docker compose`

Note that wakunode2 should be started as

```
exec /heaptrack/build/bin/heaptrack /usr/bin/wakunode \
  ... all the arguments you want to pass to wakunode ...
```

### Extract report file from a running Docker container

Bear in mind that if you restart the container, the previous report will get lost. Therefore, before restarting, it is important to extract it from the container once you consider it has enough information.

View File

@@ -1,44 +0,0 @@
diff --git a/lib/system/alloc.nim b/lib/system/alloc.nim
index e2dd43075..7f8c8e04e 100644
--- a/lib/system/alloc.nim
+++ b/lib/system/alloc.nim
@@ -1,4 +1,4 @@
-#
+#!fmt: off
#
# Nim's Runtime Library
# (c) Copyright 2012 Andreas Rumpf
@@ -862,6 +862,15 @@ when defined(gcDestructors):
dec maxIters
if it == nil: break
+when defined(heaptracker):
+ const heaptrackLib =
+ when defined(heaptracker_inject):
+ "libheaptrack_inject.so"
+ else:
+ "libheaptrack_preload.so"
+ proc heaptrack_malloc(a: pointer, size: int) {.cdecl, importc, dynlib: heaptrackLib.}
+ proc heaptrack_free(a: pointer) {.cdecl, importc, dynlib: heaptrackLib.}
+
proc rawAlloc(a: var MemRegion, requestedSize: int): pointer =
when defined(nimTypeNames):
inc(a.allocCounter)
@@ -984,6 +993,8 @@ proc rawAlloc(a: var MemRegion, requestedSize: int): pointer =
sysAssert(isAccessible(a, result), "rawAlloc 14")
sysAssert(allocInv(a), "rawAlloc: end")
when logAlloc: cprintf("var pointer_%p = alloc(%ld) # %p\n", result, requestedSize, addr a)
+ when defined(heaptracker):
+ heaptrack_malloc(result, requestedSize)
proc rawAlloc0(a: var MemRegion, requestedSize: int): pointer =
result = rawAlloc(a, requestedSize)
@@ -992,6 +1003,8 @@ proc rawAlloc0(a: var MemRegion, requestedSize: int): pointer =
proc rawDealloc(a: var MemRegion, p: pointer) =
when defined(nimTypeNames):
inc(a.deallocCounter)
+ when defined(heaptracker):
+ heaptrack_free(p)
#sysAssert(isAllocatedPtr(a, p), "rawDealloc: no allocated pointer")
sysAssert(allocInv(a), "rawDealloc: begin")
var c = pageAddr(p)

View File

@ -1,7 +1,7 @@
# Spam-protected chat2 application with on-chain group management # Spam-protected chat2 application with on-chain group management
This document is a tutorial on how to run the chat2 application in the spam-protected mode using the Waku-RLN-Relay protocol and with dynamic/on-chain group management. This document is a tutorial on how to run the chat2 application in the spam-protected mode using the Waku-RLN-Relay protocol and with dynamic/on-chain group management.
In the on-chain/dynamic group management, the state of the group members i.e., their identity commitment keys is moderated via a membership smart contract deployed on the Linea Sepolia network which is one of the test-nets. In the on-chain/dynamic group management, the state of the group members i.e., their identity commitment keys is moderated via a membership smart contract deployed on the Sepolia network which is one of the Ethereum test-nets.
Members can be dynamically added to the group and the group size can grow up to 2^20 members. Members can be dynamically added to the group and the group size can grow up to 2^20 members.
This differs from the prior test scenarios in which the RLN group was static and the set of members' keys was hardcoded and fixed. This differs from the prior test scenarios in which the RLN group was static and the set of members' keys was hardcoded and fixed.
@ -45,7 +45,7 @@ Run the following command to set up your chat2 client.
--content-topic:/toy-chat/3/mingde/proto \ --content-topic:/toy-chat/3/mingde/proto \
--rln-relay:true \ --rln-relay:true \
--rln-relay-dynamic:true \ --rln-relay-dynamic:true \
--rln-relay-eth-contract-address:0xB9cd878C90E49F797B4431fBF4fb333108CB90e6 \ --rln-relay-eth-contract-address:0xF471d71E9b1455bBF4b85d475afb9BB0954A29c4 \
--rln-relay-cred-path:xxx/xx/rlnKeystore.json \ --rln-relay-cred-path:xxx/xx/rlnKeystore.json \
--rln-relay-cred-password:xxxx \ --rln-relay-cred-password:xxxx \
--rln-relay-eth-client-address:xxxx \ --rln-relay-eth-client-address:xxxx \
@ -58,11 +58,11 @@ In this command
- the `rln-relay` flag is set to `true` to enable the Waku-RLN-Relay protocol for spam protection. - the `rln-relay` flag is set to `true` to enable the Waku-RLN-Relay protocol for spam protection.
- the `--rln-relay-dynamic` flag is set to `true` to enable the on-chain mode of Waku-RLN-Relay protocol with dynamic group management. - the `--rln-relay-dynamic` flag is set to `true` to enable the on-chain mode of Waku-RLN-Relay protocol with dynamic group management.
- the `--rln-relay-eth-contract-address` option gets the address of the membership contract. - the `--rln-relay-eth-contract-address` option gets the address of the membership contract.
The current address of the contract is `0xB9cd878C90E49F797B4431fBF4fb333108CB90e6`. The current address of the contract is `0xF471d71E9b1455bBF4b85d475afb9BB0954A29c4`.
You may check the state of the contract on the [Linea Sepolia testnet](https://sepolia.lineascan.build/address/0xB9cd878C90E49F797B4431fBF4fb333108CB90e6). You may check the state of the contract on the [Sepolia testnet](https://sepolia.etherscan.io/address/0xF471d71E9b1455bBF4b85d475afb9BB0954A29c4).
- the `--rln-relay-cred-path` option denotes the path to the keystore file described above. - the `--rln-relay-cred-path` option denotes the path to the keystore file described above.
- the `--rln-relay-cred-password` option denotes the password to the keystore. - the `--rln-relay-cred-password` option denotes the password to the keystore.
- the `rln-relay-eth-client-address` is the WebSocket address of the hosted node on the Linea Sepolia testnet. - the `rln-relay-eth-client-address` is the WebSocket address of the hosted node on the Sepolia testnet.
You need to replace the `xxxx` with the actual node's address. You need to replace the `xxxx` with the actual node's address.
For `rln-relay-eth-client-address`, if you do not know how to obtain it, you may use the following tutorial on the [prerequisites of running on-chain spam-protected chat2](./pre-requisites-of-running-on-chain-spam-protected-chat2.md). For `rln-relay-eth-client-address`, if you do not know how to obtain it, you may use the following tutorial on the [prerequisites of running on-chain spam-protected chat2](./pre-requisites-of-running-on-chain-spam-protected-chat2.md).
@ -166,7 +166,7 @@ You can check this fact by looking at `Bob`'s console, where `message3` is missi
**Alice** **Alice**
```bash ```bash
./build/chat2 --fleet:test --content-topic:/toy-chat/3/mingde/proto --rln-relay:true --rln-relay-dynamic:true --rln-relay-eth-contract-address:0xB9cd878C90E49F797B4431fBF4fb333108CB90e6 --rln-relay-cred-path:rlnKeystore.json --rln-relay-cred-password:password --rln-relay-eth-client-address:https://sepolia.infura.io/v3/12345678901234567890123456789012 --ports-shift=1 ./build/chat2 --fleet:test --content-topic:/toy-chat/3/mingde/proto --rln-relay:true --rln-relay-dynamic:true --rln-relay-eth-contract-address:0xF471d71E9b1455bBF4b85d475afb9BB0954A29c4 --rln-relay-cred-path:rlnKeystore.json --rln-relay-cred-password:password --rln-relay-eth-client-address:https://sepolia.infura.io/v3/12345678901234567890123456789012 --ports-shift=1
``` ```
``` ```
@ -209,7 +209,7 @@ your rln identity commitment key is: bd093cbf14fb933d53f596c33f98b3df83b7e9f7a19
**Bob** **Bob**
```bash ```bash
./build/chat2 --fleet:test --content-topic:/toy-chat/3/mingde/proto --rln-relay:true --rln-relay-dynamic:true --rln-relay-eth-contract-address:0xB9cd878C90E49F797B4431fBF4fb333108CB90e6 --rln-relay-cred-path:rlnKeystore.json --rln-relay-cred-index:1 --rln-relay-cred-password:password --rln-relay-eth-client-address:https://sepolia.infura.io/v3/12345678901234567890123456789012 --ports-shift=2 ./build/chat2 --fleet:test --content-topic:/toy-chat/3/mingde/proto --rln-relay:true --rln-relay-dynamic:true --rln-relay-eth-contract-address:0xF471d71E9b1455bBF4b85d475afb9BB0954A29c4 --rln-relay-cred-path:rlnKeystore.json --rln-relay-cred-index:1 --rln-relay-cred-password:password --rln-relay-eth-client-address:https://sepolia.infura.io/v3/12345678901234567890123456789012 --ports-shift=2
``` ```
``` ```
@ -0,0 +1,36 @@
# rln-db-inspector
This document describes how to run and use the `rln-db-inspector` tool.
It is meant to be used to debug and fetch the metadata stored in the RLN tree db.
## Pre-requisites
1. An existing RLN tree db
## Usage
1. First, we compile the binary
```bash
make -j16 wakunode2
```
This command will fetch the rln static library and link it automatically.
2. Define the arguments you wish to use
```bash
export RLN_TREE_DB_PATH="xxx"
```
3. Run the db inspector
```bash
./build/wakunode2 inspectRlnDb \
--rln-relay-tree-path:$RLN_TREE_DB_PATH
```
This does the following:
  a. Loads the tree db from the provided path.
  b. Logs the metadata, including the number of leaves set, the past 5 Merkle roots, and the last synced block number.
@ -21,9 +21,9 @@ It is meant to be used to generate and persist a set of valid RLN credentials to
2. Define the arguments you wish to use 2. Define the arguments you wish to use
```bash ```bash
export RPC_URL="https://linea-sepolia.infura.io/v3/..." export RPC_URL="https://sepolia.infura.io/v3/..."
export PRIVATE_KEY="0x..." export PRIVATE_KEY="0x..."
export RLN_CONTRACT_ADDRESS="0xB9cd878C90E49F797B4431fBF4fb333108CB90e6" export RLN_CONTRACT_ADDRESS="0xF471d71E9b1455bBF4b85d475afb9BB0954A29c4"
export RLN_CREDENTIAL_PATH="rlnKeystore.json" export RLN_CREDENTIAL_PATH="rlnKeystore.json"
export RLN_CREDENTIAL_PASSWORD="xxx" export RLN_CREDENTIAL_PASSWORD="xxx"
``` ```
@ -7,21 +7,9 @@ Make all examples.
make example2 make example2
``` ```
## Waku API ## basic2
Uses the simplified Waku API to create and start a node, TODO
you need an RPC endpoint for Linea Sepolia for RLN:
```console
./build/waku_api --ethRpcEndpoint=https://linea-sepolia.infura.io/v3/<your key>
```
If you can't be bothered but still want to see some action,
just run the binary and it will use a non-RLN network:
```console
./build/waku_api
```
## publisher/subscriber ## publisher/subscriber
@ -1,18 +0,0 @@
## App description
This is a very simple example that shows how to invoke libwaku functions from a C program.
## Build
1. Open terminal
2. cd to nwaku root folder
3. make cwaku_example -j8
This will create libwaku.so and cwaku_example binary within the build folder.
## Run
1. Open terminal
2. cd to nwaku root folder
3. export LD_LIBRARY_PATH=build
4. `./build/cwaku_example --host=0.0.0.0 --port=60001`
Use `./build/cwaku_example --help` to see some other options.
@ -14,319 +14,306 @@
#include "base64.h" #include "base64.h"
#include "../../library/libwaku.h" #include "../../library/libwaku.h"
// Shared synchronization variables // Shared synchronization variables
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER; pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER; pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
int callback_executed = 0; int callback_executed = 0;
void waitForCallback() void waitForCallback() {
{ pthread_mutex_lock(&mutex);
pthread_mutex_lock(&mutex); while (!callback_executed) {
while (!callback_executed) pthread_cond_wait(&cond, &mutex);
{ }
pthread_cond_wait(&cond, &mutex); callback_executed = 0;
} pthread_mutex_unlock(&mutex);
callback_executed = 0;
pthread_mutex_unlock(&mutex);
} }
#define WAKU_CALL(call) \
do \
{ \
int ret = call; \
if (ret != 0) \
{ \
printf("Failed the call to: %s. Returned code: %d\n", #call, ret); \
exit(1); \
} \
waitForCallback(); \
} while (0)
struct ConfigNode #define WAKU_CALL(call) \
{ do { \
char host[128]; int ret = call; \
int port; if (ret != 0) { \
char key[128]; printf("Failed the call to: %s. Returned code: %d\n", #call, ret); \
int relay; exit(1); \
char peers[2048]; } \
int store; waitForCallback(); \
char storeNode[2048]; } while (0)
char storeRetentionPolicy[64];
char storeDbUrl[256]; struct ConfigNode {
int storeVacuum; char host[128];
int storeDbMigration; int port;
int storeMaxNumDbConnections; char key[128];
int relay;
char peers[2048];
int store;
char storeNode[2048];
char storeRetentionPolicy[64];
char storeDbUrl[256];
int storeVacuum;
int storeDbMigration;
int storeMaxNumDbConnections;
}; };
// libwaku Context // libwaku Context
void *ctx; void* ctx;
// For the case of C language we don't need to store a particular userData // For the case of C language we don't need to store a particular userData
void *userData = NULL; void* userData = NULL;
// Arguments parsing // Arguments parsing
static char doc[] = "\nC example that shows how to use the waku library."; static char doc[] = "\nC example that shows how to use the waku library.";
static char args_doc[] = ""; static char args_doc[] = "";
static struct argp_option options[] = { static struct argp_option options[] = {
{"host", 'h', "HOST", 0, "IP to listen on for LibP2P traffic. (default: \"0.0.0.0\")"}, { "host", 'h', "HOST", 0, "IP to listen on for LibP2P traffic. (default: \"0.0.0.0\")"},
{"port", 'p', "PORT", 0, "TCP listening port. (default: \"60000\")"}, { "port", 'p', "PORT", 0, "TCP listening port. (default: \"60000\")"},
{"key", 'k', "KEY", 0, "P2P node private key as 64 char hex string."}, { "key", 'k', "KEY", 0, "P2P node private key as 64 char hex string."},
{"relay", 'r', "RELAY", 0, "Enable relay protocol: 1 or 0. (default: 1)"}, { "relay", 'r', "RELAY", 0, "Enable relay protocol: 1 or 0. (default: 1)"},
{"peers", 'a', "PEERS", 0, "Comma-separated list of peer-multiaddress to connect\ { "peers", 'a', "PEERS", 0, "Comma-separated list of peer-multiaddress to connect\
to. (default: \"\") e.g. \"/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\""}, to. (default: \"\") e.g. \"/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\""},
{0}}; { 0 }
};
static error_t parse_opt(int key, char *arg, struct argp_state *state) static error_t parse_opt(int key, char *arg, struct argp_state *state) {
{
struct ConfigNode *cfgNode = state->input; struct ConfigNode *cfgNode = state->input;
switch (key) switch (key) {
{ case 'h':
case 'h': snprintf(cfgNode->host, 128, "%s", arg);
snprintf(cfgNode->host, 128, "%s", arg); break;
break; case 'p':
case 'p': cfgNode->port = atoi(arg);
cfgNode->port = atoi(arg); break;
break; case 'k':
case 'k': snprintf(cfgNode->key, 128, "%s", arg);
snprintf(cfgNode->key, 128, "%s", arg); break;
break; case 'r':
case 'r': cfgNode->relay = atoi(arg);
cfgNode->relay = atoi(arg); break;
break; case 'a':
case 'a': snprintf(cfgNode->peers, 2048, "%s", arg);
snprintf(cfgNode->peers, 2048, "%s", arg); break;
break; case ARGP_KEY_ARG:
case ARGP_KEY_ARG: if (state->arg_num >= 1) /* Too many arguments. */
if (state->arg_num >= 1) /* Too many arguments. */ argp_usage(state);
argp_usage(state); break;
break; case ARGP_KEY_END:
case ARGP_KEY_END: break;
break; default:
default: return ARGP_ERR_UNKNOWN;
return ARGP_ERR_UNKNOWN; }
}
return 0; return 0;
} }
void signal_cond() static struct argp argp = { options, parse_opt, args_doc, doc, 0, 0, 0 };
{
pthread_mutex_lock(&mutex); void event_handler(int callerRet, const char* msg, size_t len, void* userData) {
callback_executed = 1; if (callerRet == RET_ERR) {
pthread_cond_signal(&cond); printf("Error: %s\n", msg);
pthread_mutex_unlock(&mutex); exit(1);
}
else if (callerRet == RET_OK) {
printf("Receiving event: %s\n", msg);
}
pthread_mutex_lock(&mutex);
callback_executed = 1;
pthread_cond_signal(&cond);
pthread_mutex_unlock(&mutex);
} }
static struct argp argp = {options, parse_opt, args_doc, doc, 0, 0, 0}; void on_event_received(int callerRet, const char* msg, size_t len, void* userData) {
if (callerRet == RET_ERR) {
void event_handler(int callerRet, const char *msg, size_t len, void *userData) printf("Error: %s\n", msg);
{ exit(1);
if (callerRet == RET_ERR) }
{ else if (callerRet == RET_OK) {
printf("Error: %s\n", msg); printf("Receiving event: %s\n", msg);
exit(1); }
}
else if (callerRet == RET_OK)
{
printf("Receiving event: %s\n", msg);
}
signal_cond();
} }
void on_event_received(int callerRet, const char *msg, size_t len, void *userData) char* contentTopic = NULL;
{ void handle_content_topic(int callerRet, const char* msg, size_t len, void* userData) {
if (callerRet == RET_ERR) if (contentTopic != NULL) {
{ free(contentTopic);
printf("Error: %s\n", msg); }
exit(1);
} contentTopic = malloc(len * sizeof(char) + 1);
else if (callerRet == RET_OK) strcpy(contentTopic, msg);
{
printf("Receiving event: %s\n", msg);
}
} }
char *contentTopic = NULL; char* publishResponse = NULL;
void handle_content_topic(int callerRet, const char *msg, size_t len, void *userData) void handle_publish_ok(int callerRet, const char* msg, size_t len, void* userData) {
{ printf("Publish Ok: %s %lu\n", msg, len);
if (contentTopic != NULL)
{
free(contentTopic);
}
contentTopic = malloc(len * sizeof(char) + 1); if (publishResponse != NULL) {
strcpy(contentTopic, msg); free(publishResponse);
signal_cond(); }
}
char *publishResponse = NULL; publishResponse = malloc(len * sizeof(char) + 1);
void handle_publish_ok(int callerRet, const char *msg, size_t len, void *userData) strcpy(publishResponse, msg);
{
printf("Publish Ok: %s %lu\n", msg, len);
if (publishResponse != NULL)
{
free(publishResponse);
}
publishResponse = malloc(len * sizeof(char) + 1);
strcpy(publishResponse, msg);
} }
#define MAX_MSG_SIZE 65535 #define MAX_MSG_SIZE 65535
void publish_message(const char *msg) void publish_message(char* pubsubTopic, const char* msg) {
{ char jsonWakuMsg[MAX_MSG_SIZE];
char jsonWakuMsg[MAX_MSG_SIZE]; char *msgPayload = b64_encode(msg, strlen(msg));
char *msgPayload = b64_encode(msg, strlen(msg));
WAKU_CALL(waku_content_topic(ctx, WAKU_CALL( waku_content_topic(RET_OK,
handle_content_topic, "appName",
userData, 1,
"appName", "contentTopicName",
1, "encoding",
"contentTopicName", handle_content_topic,
"encoding")); userData) );
snprintf(jsonWakuMsg,
MAX_MSG_SIZE,
"{\"payload\":\"%s\",\"contentTopic\":\"%s\"}",
msgPayload, contentTopic);
free(msgPayload); snprintf(jsonWakuMsg,
MAX_MSG_SIZE,
"{\"payload\":\"%s\",\"content_topic\":\"%s\"}",
msgPayload, contentTopic);
WAKU_CALL(waku_relay_publish(ctx, free(msgPayload);
event_handler,
userData, WAKU_CALL( waku_relay_publish(&ctx,
"/waku/2/rs/16/32", pubsubTopic,
jsonWakuMsg, jsonWakuMsg,
10000 /*timeout ms*/)); 10000 /*timeout ms*/,
event_handler,
userData) );
printf("waku relay response [%s]\n", publishResponse);
} }
void show_help_and_exit() void show_help_and_exit() {
{ printf("Wrong parameters\n");
printf("Wrong parameters\n"); exit(1);
exit(1);
} }
void print_default_pubsub_topic(int callerRet, const char *msg, size_t len, void *userData) void print_default_pubsub_topic(int callerRet, const char* msg, size_t len, void* userData) {
{ printf("Default pubsub topic: %s\n", msg);
printf("Default pubsub topic: %s\n", msg);
signal_cond(); pthread_mutex_lock(&mutex);
callback_executed = 1;
pthread_cond_signal(&cond);
pthread_mutex_unlock(&mutex);
} }
void print_waku_version(int callerRet, const char *msg, size_t len, void *userData) void print_waku_version(int callerRet, const char* msg, size_t len, void* userData) {
{ printf("Git Version: %s\n", msg);
printf("Git Version: %s\n", msg);
signal_cond(); pthread_mutex_lock(&mutex);
callback_executed = 1;
pthread_cond_signal(&cond);
pthread_mutex_unlock(&mutex);
} }
// Beginning of UI program logic // Beginning of UI program logic
enum PROGRAM_STATE enum PROGRAM_STATE {
{ MAIN_MENU,
MAIN_MENU, SUBSCRIBE_TOPIC_MENU,
SUBSCRIBE_TOPIC_MENU, CONNECT_TO_OTHER_NODE_MENU,
CONNECT_TO_OTHER_NODE_MENU, PUBLISH_MESSAGE_MENU
PUBLISH_MESSAGE_MENU
}; };
enum PROGRAM_STATE current_state = MAIN_MENU; enum PROGRAM_STATE current_state = MAIN_MENU;
void show_main_menu() void show_main_menu() {
{ printf("\nPlease, select an option:\n");
printf("\nPlease, select an option:\n"); printf("\t1.) Subscribe to topic\n");
printf("\t1.) Subscribe to topic\n"); printf("\t2.) Connect to other node\n");
printf("\t2.) Connect to other node\n"); printf("\t3.) Publish a message\n");
printf("\t3.) Publish a message\n");
} }
void handle_user_input() void handle_user_input() {
{ char cmd[1024];
char cmd[1024]; memset(cmd, 0, 1024);
memset(cmd, 0, 1024); int numRead = read(0, cmd, 1024);
int numRead = read(0, cmd, 1024); if (numRead <= 0) {
if (numRead <= 0) return;
{ }
return;
}
switch (atoi(cmd)) int c;
{ while ( (c = getchar()) != '\n' && c != EOF ) { }
case SUBSCRIBE_TOPIC_MENU:
{
printf("Indicate the Pubsubtopic to subscribe:\n");
char pubsubTopic[128];
scanf("%127s", pubsubTopic);
WAKU_CALL(waku_relay_subscribe(ctx, switch (atoi(cmd))
event_handler, {
userData, case SUBSCRIBE_TOPIC_MENU:
pubsubTopic)); {
printf("The subscription went well\n"); printf("Indicate the Pubsubtopic to subscribe:\n");
char pubsubTopic[128];
scanf("%127s", pubsubTopic);
show_main_menu(); WAKU_CALL( waku_relay_subscribe(&ctx,
} pubsubTopic,
break; event_handler,
userData) );
printf("The subscription went well\n");
case CONNECT_TO_OTHER_NODE_MENU: show_main_menu();
// printf("Connecting to a node. Please indicate the peer Multiaddress:\n"); }
// printf("e.g.: /ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\n");
// char peerAddr[512];
// scanf("%511s", peerAddr);
// WAKU_CALL(waku_connect(ctx, peerAddr, 10000 /* timeoutMs */, event_handler, userData));
show_main_menu();
break; break;
case PUBLISH_MESSAGE_MENU: case CONNECT_TO_OTHER_NODE_MENU:
{ printf("Connecting to a node. Please indicate the peer Multiaddress:\n");
printf("Type the message to publish:\n"); printf("e.g.: /ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\n");
char msg[1024]; char peerAddr[512];
scanf("%1023s", msg); scanf("%511s", peerAddr);
WAKU_CALL(waku_connect(&ctx, peerAddr, 10000 /* timeoutMs */, event_handler, userData));
publish_message(msg); show_main_menu();
show_main_menu();
}
break;
case MAIN_MENU:
break; break;
}
case PUBLISH_MESSAGE_MENU:
{
printf("Indicate the Pubsubtopic:\n");
char pubsubTopic[128];
scanf("%127s", pubsubTopic);
printf("Type the message to publish:\n");
char msg[1024];
scanf("%1023s", msg);
publish_message(pubsubTopic, msg);
show_main_menu();
}
break;
case MAIN_MENU:
break;
}
} }
// End of UI program logic // End of UI program logic
int main(int argc, char **argv) int main(int argc, char** argv) {
{ struct ConfigNode cfgNode;
struct ConfigNode cfgNode; // default values
// default values snprintf(cfgNode.host, 128, "0.0.0.0");
snprintf(cfgNode.host, 128, "0.0.0.0"); cfgNode.port = 60000;
cfgNode.port = 60000; cfgNode.relay = 1;
cfgNode.relay = 1;
cfgNode.store = 0; cfgNode.store = 0;
snprintf(cfgNode.storeNode, 2048, ""); snprintf(cfgNode.storeNode, 2048, "");
snprintf(cfgNode.storeRetentionPolicy, 64, "time:6000000"); snprintf(cfgNode.storeRetentionPolicy, 64, "time:6000000");
snprintf(cfgNode.storeDbUrl, 256, "postgres://postgres:test123@localhost:5432/postgres"); snprintf(cfgNode.storeDbUrl, 256, "postgres://postgres:test123@localhost:5432/postgres");
cfgNode.storeVacuum = 0; cfgNode.storeVacuum = 0;
cfgNode.storeDbMigration = 0; cfgNode.storeDbMigration = 0;
cfgNode.storeMaxNumDbConnections = 30; cfgNode.storeMaxNumDbConnections = 30;
if (argp_parse(&argp, argc, argv, 0, 0, &cfgNode) == ARGP_ERR_UNKNOWN) if (argp_parse(&argp, argc, argv, 0, 0, &cfgNode)
{ == ARGP_ERR_UNKNOWN) {
show_help_and_exit(); show_help_and_exit();
} }
char jsonConfig[5000]; char jsonConfig[5000];
snprintf(jsonConfig, 5000, "{ \ snprintf(jsonConfig, 5000, "{ \
\"clusterId\": 16, \
\"shards\": [ 1, 32, 64, 128, 256 ], \
\"numShardsInNetwork\": 257, \
\"listenAddress\": \"%s\", \ \"listenAddress\": \"%s\", \
\"tcpPort\": %d, \ \"tcpPort\": %d, \
\"nodekey\": \"%s\", \
\"relay\": %s, \ \"relay\": %s, \
\"store\": %s, \ \"store\": %s, \
\"storeMessageDbUrl\": \"%s\", \ \"storeMessageDbUrl\": \"%s\", \
@ -335,60 +322,63 @@ int main(int argc, char **argv)
\"logLevel\": \"DEBUG\", \ \"logLevel\": \"DEBUG\", \
\"discv5Discovery\": true, \ \"discv5Discovery\": true, \
\"discv5BootstrapNodes\": \ \"discv5BootstrapNodes\": \
[\"enr:-QEKuED9AJm2HGgrRpVaJY2nj68ao_QiPeUT43sK-aRM7sMJ6R4G11OSDOwnvVacgN1sTw-K7soC5dzHDFZgZkHU0u-XAYJpZIJ2NIJpcISnYxMvim11bHRpYWRkcnO4WgAqNiVib290LTAxLmRvLWFtczMuc3RhdHVzLnByb2Quc3RhdHVzLmltBnZfACw2JWJvb3QtMDEuZG8tYW1zMy5zdGF0dXMucHJvZC5zdGF0dXMuaW0GAbveA4Jyc40AEAUAAQAgAEAAgAEAiXNlY3AyNTZrMaEC3rRtFQSgc24uWewzXaxTY8hDAHB8sgnxr9k8Rjb5GeSDdGNwgnZfg3VkcIIjKIV3YWt1Mg0\", \"enr:-QEcuED7ww5vo2rKc1pyBp7fubBUH-8STHEZHo7InjVjLblEVyDGkjdTI9VdqmYQOn95vuQH-Htku17WSTzEufx-Wg4mAYJpZIJ2NIJpcIQihw1Xim11bHRpYWRkcnO4bAAzNi5ib290LTAxLmdjLXVzLWNlbnRyYWwxLWEuc3RhdHVzLnByb2Quc3RhdHVzLmltBnZfADU2LmJvb3QtMDEuZ2MtdXMtY2VudHJhbDEtYS5zdGF0dXMucHJvZC5zdGF0dXMuaW0GAbveA4Jyc40AEAUAAQAgAEAAgAEAiXNlY3AyNTZrMaECxjqgDQ0WyRSOilYU32DA5k_XNlDis3m1VdXkK9xM6kODdGNwgnZfg3VkcIIjKIV3YWt1Mg0\", \"enr:-QEcuEAoShWGyN66wwusE3Ri8hXBaIkoHZHybUB8cCPv5v3ypEf9OCg4cfslJxZFANl90s-jmMOugLUyBx4EfOBNJ6_VAYJpZIJ2NIJpcIQI2hdMim11bHRpYWRkcnO4bAAzNi5ib290LTAxLmFjLWNuLWhvbmdrb25nLWMuc3RhdHVzLnByb2Quc3RhdHVzLmltBnZfADU2LmJvb3QtMDEuYWMtY24taG9uZ2tvbmctYy5zdGF0dXMucHJvZC5zdGF0dXMuaW0GAbveA4Jyc40AEAUAAQAgAEAAgAEAiXNlY3AyNTZrMaEDP7CbRk-YKJwOFFM4Z9ney0GPc7WPJaCwGkpNRyla7mCDdGNwgnZfg3VkcIIjKIV3YWt1Mg0\"], \ [\"enr:-QESuEB4Dchgjn7gfAvwB00CxTA-nGiyk-aALI-H4dYSZD3rUk7bZHmP8d2U6xDiQ2vZffpo45Jp7zKNdnwDUx6g4o6XAYJpZIJ2NIJpcIRA4VDAim11bHRpYWRkcnO4XAArNiZub2RlLTAxLmRvLWFtczMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwAtNiZub2RlLTAxLmRvLWFtczMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQOvD3S3jUNICsrOILlmhENiWAMmMVlAl6-Q8wRB7hidY4N0Y3CCdl-DdWRwgiMohXdha3UyDw\", \"enr:-QEkuEBIkb8q8_mrorHndoXH9t5N6ZfD-jehQCrYeoJDPHqT0l0wyaONa2-piRQsi3oVKAzDShDVeoQhy0uwN1xbZfPZAYJpZIJ2NIJpcIQiQlleim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwA2Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQKnGt-GSgqPSf3IAPM7bFgTlpczpMZZLF3geeoNNsxzSoN0Y3CCdl-DdWRwgiMohXdha3UyDw\"], \
\"discv5UdpPort\": 9999, \ \"discv5UdpPort\": 9999, \
\"dnsDiscoveryUrl\": \"enrtree://AMOJVZX4V6EXP7NTJPMAYJYST2QP6AJXYW76IU6VGJS7UVSNDYZG4@boot.prod.status.nodes.status.im\", \ \"dnsDiscovery\": true, \
\"dnsDiscoveryUrl\": \"enrtree://AOGYWMBYOUIMOENHXCHILPKY3ZRFEULMFI4DOM442QSZ73TT2A7VI@test.waku.nodes.status.im\", \
\"dnsDiscoveryNameServers\": [\"8.8.8.8\", \"1.0.0.1\"] \ \"dnsDiscoveryNameServers\": [\"8.8.8.8\", \"1.0.0.1\"] \
}", }", cfgNode.host,
cfgNode.host, cfgNode.port,
cfgNode.port, cfgNode.key,
cfgNode.relay ? "true" : "false", cfgNode.relay ? "true":"false",
cfgNode.store ? "true" : "false", cfgNode.store ? "true":"false",
cfgNode.storeDbUrl, cfgNode.storeDbUrl,
cfgNode.storeRetentionPolicy, cfgNode.storeRetentionPolicy,
cfgNode.storeMaxNumDbConnections); cfgNode.storeMaxNumDbConnections);
ctx = waku_new(jsonConfig, event_handler, userData); ctx = waku_new(jsonConfig, event_handler, userData);
waitForCallback(); waitForCallback();
WAKU_CALL(waku_default_pubsub_topic(ctx, print_default_pubsub_topic, userData)); WAKU_CALL( waku_default_pubsub_topic(ctx, print_default_pubsub_topic, userData) );
WAKU_CALL(waku_version(ctx, print_waku_version, userData)); WAKU_CALL( waku_version(ctx, print_waku_version, userData) );
printf("Bind addr: %s:%u\n", cfgNode.host, cfgNode.port); printf("Bind addr: %s:%u\n", cfgNode.host, cfgNode.port);
printf("Waku Relay enabled: %s\n", cfgNode.relay == 1 ? "YES" : "NO"); printf("Waku Relay enabled: %s\n", cfgNode.relay == 1 ? "YES": "NO");
set_event_callback(ctx, on_event_received, userData); waku_set_event_callback(ctx, on_event_received, userData);
waku_start(ctx, event_handler, userData); waku_start(ctx, event_handler, userData);
waitForCallback(); waitForCallback();
WAKU_CALL(waku_listen_addresses(ctx, event_handler, userData)); WAKU_CALL( waku_listen_addresses(ctx, event_handler, userData) );
WAKU_CALL(waku_relay_subscribe(ctx, printf("Establishing connection with: %s\n", cfgNode.peers);
event_handler,
userData,
"/waku/2/rs/16/32"));
WAKU_CALL(waku_discv5_update_bootnodes(ctx, WAKU_CALL( waku_connect(ctx,
event_handler, cfgNode.peers,
userData, 10000 /* timeoutMs */,
"[\"enr:-QEkuEBIkb8q8_mrorHndoXH9t5N6ZfD-jehQCrYeoJDPHqT0l0wyaONa2-piRQsi3oVKAzDShDVeoQhy0uwN1xbZfPZAYJpZIJ2NIJpcIQiQlleim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwA2Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQKnGt-GSgqPSf3IAPM7bFgTlpczpMZZLF3geeoNNsxzSoN0Y3CCdl-DdWRwgiMohXdha3UyDw\",\"enr:-QEkuEB3WHNS-xA3RDpfu9A2Qycr3bN3u7VoArMEiDIFZJ66F1EB3d4wxZN1hcdcOX-RfuXB-MQauhJGQbpz3qUofOtLAYJpZIJ2NIJpcIQI2SVcim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmFjLWNuLWhvbmdrb25nLWMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwA2Ni9ub2RlLTAxLmFjLWNuLWhvbmdrb25nLWMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQPK35Nnz0cWUtSAhBp7zvHEhyU_AqeQUlqzLiLxfP2L4oN0Y3CCdl-DdWRwgiMohXdha3UyDw\"]")); event_handler,
userData) );
WAKU_CALL(waku_get_peerids_from_peerstore(ctx, WAKU_CALL( waku_relay_subscribe(ctx,
event_handler, "/waku/2/rs/0/0",
userData)); event_handler,
userData) );
show_main_menu(); WAKU_CALL( waku_discv5_update_bootnodes(ctx,
while (1) "[\"enr:-QEkuEBIkb8q8_mrorHndoXH9t5N6ZfD-jehQCrYeoJDPHqT0l0wyaONa2-piRQsi3oVKAzDShDVeoQhy0uwN1xbZfPZAYJpZIJ2NIJpcIQiQlleim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwA2Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQKnGt-GSgqPSf3IAPM7bFgTlpczpMZZLF3geeoNNsxzSoN0Y3CCdl-DdWRwgiMohXdha3UyDw\",\"enr:-QEkuEB3WHNS-xA3RDpfu9A2Qycr3bN3u7VoArMEiDIFZJ66F1EB3d4wxZN1hcdcOX-RfuXB-MQauhJGQbpz3qUofOtLAYJpZIJ2NIJpcIQI2SVcim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmFjLWNuLWhvbmdrb25nLWMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwA2Ni9ub2RlLTAxLmFjLWNuLWhvbmdrb25nLWMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQPK35Nnz0cWUtSAhBp7zvHEhyU_AqeQUlqzLiLxfP2L4oN0Y3CCdl-DdWRwgiMohXdha3UyDw\"]",
{ event_handler,
handle_user_input(); userData) );
// Uncomment the following if need to test the metrics retrieval WAKU_CALL( waku_get_peerids_from_peerstore(ctx,
// WAKU_CALL( waku_get_metrics(ctx, event_handler,
// event_handler, userData) );
// userData) );
}
pthread_mutex_destroy(&mutex); show_main_menu();
pthread_cond_destroy(&cond); while(1) {
handle_user_input();
}
pthread_mutex_destroy(&mutex);
pthread_cond_destroy(&cond);
} }
@ -1,18 +0,0 @@
## App description
This is a very simple example that shows how to invoke libwaku functions from a C++ program.
## Build
1. Open terminal
2. cd to nwaku root folder
3. make cppwaku_example -j8
This will create libwaku.so and cppwaku_example binary within the build folder.
## Run
1. Open terminal
2. cd to nwaku root folder
3. export LD_LIBRARY_PATH=build
4. `./build/cppwaku_example --host=0.0.0.0 --port=60001`
Use `./build/cppwaku_example --help` to see some other options.
@ -16,48 +16,20 @@
#include "base64.h" #include "base64.h"
#include "../../library/libwaku.h" #include "../../library/libwaku.h"
// Shared synchronization variables #define WAKU_CALL(call) \
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER; do { \
pthread_cond_t cond = PTHREAD_COND_INITIALIZER; int ret = call; \
int callback_executed = 0; if (ret != 0) { \
std::cout << "Failed the call to: " << #call << ". Code: " << ret << "\n"; \
} \
} while (0)
void waitForCallback() struct ConfigNode {
{ char host[128];
pthread_mutex_lock(&mutex); int port;
while (!callback_executed) char key[128];
{ int relay;
pthread_cond_wait(&cond, &mutex); char peers[2048];
}
callback_executed = 0;
pthread_mutex_unlock(&mutex);
}
void signal_cond()
{
pthread_mutex_lock(&mutex);
callback_executed = 1;
pthread_cond_signal(&cond);
pthread_mutex_unlock(&mutex);
}
#define WAKU_CALL(call) \
do \
{ \
int ret = call; \
if (ret != 0) \
{ \
std::cout << "Failed the call to: " << #call << ". Code: " << ret << "\n"; \
} \
waitForCallback(); \
} while (0)
struct ConfigNode
{
char host[128];
int port;
char key[128];
int relay;
char peers[2048];
}; };
// Arguments parsing // Arguments parsing
@ -65,76 +37,52 @@ static char doc[] = "\nC example that shows how to use the waku library.";
static char args_doc[] = ""; static char args_doc[] = "";
static struct argp_option options[] = { static struct argp_option options[] = {
{"host", 'h', "HOST", 0, "IP to listen on for LibP2P traffic. (default: \"0.0.0.0\")"}, { "host", 'h', "HOST", 0, "IP to listen on for LibP2P traffic. (default: \"0.0.0.0\")"},
{"port", 'p', "PORT", 0, "TCP listening port. (default: \"60000\")"}, { "port", 'p', "PORT", 0, "TCP listening port. (default: \"60000\")"},
{"key", 'k', "KEY", 0, "P2P node private key as 64 char hex string."}, { "key", 'k', "KEY", 0, "P2P node private key as 64 char hex string."},
{"relay", 'r', "RELAY", 0, "Enable relay protocol: 1 or 0. (default: 1)"}, { "relay", 'r', "RELAY", 0, "Enable relay protocol: 1 or 0. (default: 1)"},
{"peers", 'a', "PEERS", 0, "Comma-separated list of peer-multiaddress to connect\ { "peers", 'a', "PEERS", 0, "Comma-separated list of peer-multiaddress to connect\
to. (default: \"\") e.g. \"/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\""}, to. (default: \"\") e.g. \"/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\""},
{0}}; { 0 }
};
static error_t parse_opt(int key, char *arg, struct argp_state *state) static error_t parse_opt(int key, char *arg, struct argp_state *state) {
{
struct ConfigNode *cfgNode = (ConfigNode *)state->input; struct ConfigNode *cfgNode = (ConfigNode *) state->input;
switch (key) switch (key) {
{ case 'h':
case 'h': snprintf(cfgNode->host, 128, "%s", arg);
snprintf(cfgNode->host, 128, "%s", arg); break;
break; case 'p':
case 'p': cfgNode->port = atoi(arg);
cfgNode->port = atoi(arg); break;
break; case 'k':
case 'k': snprintf(cfgNode->key, 128, "%s", arg);
snprintf(cfgNode->key, 128, "%s", arg); break;
break; case 'r':
case 'r': cfgNode->relay = atoi(arg);
cfgNode->relay = atoi(arg); break;
break; case 'a':
case 'a': snprintf(cfgNode->peers, 2048, "%s", arg);
snprintf(cfgNode->peers, 2048, "%s", arg); break;
break; case ARGP_KEY_ARG:
case ARGP_KEY_ARG: if (state->arg_num >= 1) /* Too many arguments. */
if (state->arg_num >= 1) /* Too many arguments. */
argp_usage(state); argp_usage(state);
break; break;
case ARGP_KEY_END: case ARGP_KEY_END:
break; break;
default: default:
return ARGP_ERR_UNKNOWN; return ARGP_ERR_UNKNOWN;
} }
return 0; return 0;
} }
void event_handler(const char *msg, size_t len) static struct argp argp = { options, parse_opt, args_doc, doc, 0, 0, 0 };
{
printf("Receiving event: %s\n", msg);
}
void handle_error(const char *msg, size_t len)
{
printf("handle_error: %s\n", msg);
exit(1);
}
template <class F>
auto cify(F &&f)
{
static F fn = std::forward<F>(f);
return [](int callerRet, const char *msg, size_t len, void *userData)
{
signal_cond();
return fn(msg, len);
};
}
static struct argp argp = {options, parse_opt, args_doc, doc, 0, 0, 0};
// Beginning of UI program logic // Beginning of UI program logic
enum PROGRAM_STATE enum PROGRAM_STATE {
{
MAIN_MENU, MAIN_MENU,
SUBSCRIBE_TOPIC_MENU, SUBSCRIBE_TOPIC_MENU,
CONNECT_TO_OTHER_NODE_MENU, CONNECT_TO_OTHER_NODE_MENU,
@ -143,24 +91,24 @@ enum PROGRAM_STATE
enum PROGRAM_STATE current_state = MAIN_MENU; enum PROGRAM_STATE current_state = MAIN_MENU;
void show_main_menu() void show_main_menu() {
{
printf("\nPlease, select an option:\n"); printf("\nPlease, select an option:\n");
printf("\t1.) Subscribe to topic\n"); printf("\t1.) Subscribe to topic\n");
printf("\t2.) Connect to other node\n"); printf("\t2.) Connect to other node\n");
printf("\t3.) Publish a message\n"); printf("\t3.) Publish a message\n");
} }
void handle_user_input(void *ctx) void handle_user_input() {
{
char cmd[1024]; char cmd[1024];
memset(cmd, 0, 1024); memset(cmd, 0, 1024);
int numRead = read(0, cmd, 1024); int numRead = read(0, cmd, 1024);
if (numRead <= 0) if (numRead <= 0) {
{
return; return;
} }
int c;
while ( (c = getchar()) != '\n' && c != EOF ) { }
switch (atoi(cmd)) switch (atoi(cmd))
{ {
case SUBSCRIBE_TOPIC_MENU: case SUBSCRIBE_TOPIC_MENU:
@ -168,13 +116,10 @@ void handle_user_input(void *ctx)
printf("Indicate the Pubsubtopic to subscribe:\n"); printf("Indicate the Pubsubtopic to subscribe:\n");
char pubsubTopic[128]; char pubsubTopic[128];
scanf("%127s", pubsubTopic); scanf("%127s", pubsubTopic);
// if (!waku_relay_subscribe(pubsubTopic, &mResp)) {
WAKU_CALL(waku_relay_subscribe(ctx, // printf("Error subscribing to PubsubTopic: %s\n", mResp->data);
cify([&](const char *msg, size_t len) // }
{ event_handler(msg, len); }), // printf("Waku Relay subscription response: %s\n", mResp->data);
nullptr,
pubsubTopic));
printf("The subscription went well\n");
show_main_menu(); show_main_menu();
} }
@ -185,48 +130,41 @@ void handle_user_input(void *ctx)
printf("e.g.: /ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\n"); printf("e.g.: /ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\n");
char peerAddr[512]; char peerAddr[512];
scanf("%511s", peerAddr); scanf("%511s", peerAddr);
WAKU_CALL(waku_connect(ctx, // if (!waku_connect(peerAddr, 10000 /* timeoutMs */, &mResp)) {
cify([&](const char *msg, size_t len) // printf("Couldn't connect to the remote peer: %s\n", mResp->data);
{ event_handler(msg, len); }), // }
nullptr,
peerAddr,
10000 /* timeoutMs */));
show_main_menu(); show_main_menu();
break; break;
case PUBLISH_MESSAGE_MENU: case PUBLISH_MESSAGE_MENU:
{ {
printf("Type the message to publish:\n"); printf("Indicate the Pubsubtopic:\n");
char pubsubTopic[128];
scanf("%127s", pubsubTopic);
      printf("Type the message to publish:\n");
char msg[1024]; char msg[1024];
scanf("%1023s", msg); scanf("%1023s", msg);
char jsonWakuMsg[2048]; char jsonWakuMsg[1024];
std::vector<char> msgPayload; std::vector<char> msgPayload;
b64_encode(msg, strlen(msg), msgPayload); b64_encode(msg, strlen(msg), msgPayload);
std::string contentTopic; // waku_content_topic("appName",
waku_content_topic(ctx, // 1,
cify([&contentTopic](const char *msg, size_t len) // "contentTopicName",
{ contentTopic = msg; }), // "encoding",
nullptr, // &mResp);
"appName",
1,
"contentTopicName",
"encoding");
snprintf(jsonWakuMsg, // snprintf(jsonWakuMsg,
2048, // 1024,
"{\"payload\":\"%s\",\"contentTopic\":\"%s\"}", // "{\"payload\":\"%s\",\"content_topic\":\"%s\"}",
msgPayload.data(), contentTopic.c_str()); // msgPayload, mResp->data);
WAKU_CALL(waku_relay_publish(ctx, // free(msgPayload);
cify([&](const char *msg, size_t len)
{ event_handler(msg, len); }),
nullptr,
"/waku/2/rs/16/32",
jsonWakuMsg,
10000 /*timeout ms*/));
// waku_relay_publish(pubsubTopic, jsonWakuMsg, 10000 /*timeout ms*/, &mResp);
// printf("waku relay response [%s]\n", mResp->data);
show_main_menu(); show_main_menu();
} }
break; break;
@@ -238,14 +176,29 @@ void handle_user_input(void *ctx)
// End of UI program logic // End of UI program logic
void show_help_and_exit() void show_help_and_exit() {
{
printf("Wrong parameters\n"); printf("Wrong parameters\n");
exit(1); exit(1);
} }
int main(int argc, char **argv) void event_handler(const char* msg, size_t len) {
{ printf("Receiving message %s\n", msg);
}
void handle_error(const char* msg, size_t len) {
printf("Error: %s\n", msg);
exit(1);
}
template <class F>
auto cify(F&& f) {
static F fn = std::forward<F>(f);
return [](const char* msg, size_t len) {
return fn(msg, len);
};
}
int main(int argc, char** argv) {
struct ConfigNode cfgNode; struct ConfigNode cfgNode;
// default values // default values
snprintf(cfgNode.host, 128, "0.0.0.0"); snprintf(cfgNode.host, 128, "0.0.0.0");
@@ -254,83 +207,65 @@ int main(int argc, char **argv)
cfgNode.port = 60000; cfgNode.port = 60000;
cfgNode.relay = 1; cfgNode.relay = 1;
if (argp_parse(&argp, argc, argv, 0, 0, &cfgNode) == ARGP_ERR_UNKNOWN) if (argp_parse(&argp, argc, argv, 0, 0, &cfgNode)
{ == ARGP_ERR_UNKNOWN) {
show_help_and_exit(); show_help_and_exit();
} }
char jsonConfig[2048]; char jsonConfig[1024];
snprintf(jsonConfig, 2048, "{ \ snprintf(jsonConfig, 1024, "{ \
\"host\": \"%s\", \ \"host\": \"%s\", \
\"port\": %d, \ \"port\": %d, \
\"relay\": true, \ \"key\": \"%s\", \
\"clusterId\": 16, \ \"relay\": %s, \
\"shards\": [ 1, 32, 64, 128, 256 ], \ \"logLevel\": \"DEBUG\" \
\"logLevel\": \"FATAL\", \ }", cfgNode.host,
\"discv5Discovery\": true, \ cfgNode.port,
\"discv5BootstrapNodes\": \ cfgNode.key,
[\"enr:-QESuEB4Dchgjn7gfAvwB00CxTA-nGiyk-aALI-H4dYSZD3rUk7bZHmP8d2U6xDiQ2vZffpo45Jp7zKNdnwDUx6g4o6XAYJpZIJ2NIJpcIRA4VDAim11bHRpYWRkcnO4XAArNiZub2RlLTAxLmRvLWFtczMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwAtNiZub2RlLTAxLmRvLWFtczMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQOvD3S3jUNICsrOILlmhENiWAMmMVlAl6-Q8wRB7hidY4N0Y3CCdl-DdWRwgiMohXdha3UyDw\", \"enr:-QEkuEBIkb8q8_mrorHndoXH9t5N6ZfD-jehQCrYeoJDPHqT0l0wyaONa2-piRQsi3oVKAzDShDVeoQhy0uwN1xbZfPZAYJpZIJ2NIJpcIQiQlleim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwA2Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQKnGt-GSgqPSf3IAPM7bFgTlpczpMZZLF3geeoNNsxzSoN0Y3CCdl-DdWRwgiMohXdha3UyDw\"], \ cfgNode.relay ? "true":"false");
\"discv5UdpPort\": 9999, \
\"dnsDiscoveryUrl\": \"enrtree://AMOJVZX4V6EXP7NTJPMAYJYST2QP6AJXYW76IU6VGJS7UVSNDYZG4@boot.prod.status.nodes.status.im\", \
\"dnsDiscoveryNameServers\": [\"8.8.8.8\", \"1.0.0.1\"] \
}",
cfgNode.host,
cfgNode.port);
void *ctx = WAKU_CALL(waku_new(jsonConfig, cify([](const char* msg, size_t len) {
waku_new(jsonConfig, std::cout << "Error: " << msg << std::endl;
cify([](const char *msg, size_t len) exit(1);
{ std::cout << "waku_new feedback: " << msg << std::endl; }), })));
nullptr);
waitForCallback();
// example on how to retrieve a value from the `libwaku` callback. // example on how to retrieve a value from the `libwaku` callback.
std::string defaultPubsubTopic; std::string defaultPubsubTopic;
WAKU_CALL( WAKU_CALL(waku_default_pubsub_topic(cify([&defaultPubsubTopic](const char* msg, size_t len) {
waku_default_pubsub_topic( defaultPubsubTopic = msg;
ctx, })));
cify([&defaultPubsubTopic](const char *msg, size_t len)
{ defaultPubsubTopic = msg; }),
nullptr));
std::cout << "Default pubsub topic: " << defaultPubsubTopic << std::endl; std::cout << "Default pubsub topic: " << defaultPubsubTopic << std::endl;
WAKU_CALL(waku_version(ctx, WAKU_CALL(waku_version(cify([&](const char* msg, size_t len) {
cify([&](const char *msg, size_t len) std::cout << "Git Version: " << msg << std::endl;
{ std::cout << "Git Version: " << msg << std::endl; }), })));
nullptr));
printf("Bind addr: %s:%u\n", cfgNode.host, cfgNode.port); printf("Bind addr: %s:%u\n", cfgNode.host, cfgNode.port);
printf("Waku Relay enabled: %s\n", cfgNode.relay == 1 ? "YES" : "NO"); printf("Waku Relay enabled: %s\n", cfgNode.relay == 1 ? "YES": "NO");
std::string pubsubTopic; std::string pubsubTopic;
WAKU_CALL(waku_pubsub_topic(ctx, WAKU_CALL(waku_pubsub_topic("example", cify([&](const char* msg, size_t len) {
cify([&](const char *msg, size_t len) pubsubTopic = msg;
{ pubsubTopic = msg; }), })));
nullptr,
"example"));
std::cout << "Custom pubsub topic: " << pubsubTopic << std::endl; std::cout << "Custom pubsub topic: " << pubsubTopic << std::endl;
set_event_callback(ctx, waku_set_event_callback(event_handler);
cify([&](const char *msg, size_t len) waku_start();
{ event_handler(msg, len); }),
nullptr);
WAKU_CALL(waku_start(ctx, WAKU_CALL( waku_connect(cfgNode.peers,
cify([&](const char *msg, size_t len) 10000 /* timeoutMs */,
{ event_handler(msg, len); }), handle_error) );
nullptr));
WAKU_CALL(waku_relay_subscribe(ctx, WAKU_CALL( waku_relay_subscribe(defaultPubsubTopic.c_str(),
cify([&](const char *msg, size_t len) handle_error) );
{ event_handler(msg, len); }),
nullptr, std::cout << "Establishing connection with: " << cfgNode.peers << std::endl;
defaultPubsubTopic.c_str())); WAKU_CALL(waku_connect(cfgNode.peers, 10000 /* timeoutMs */, handle_error));
show_main_menu(); show_main_menu();
while (1) while(1) {
{ handle_user_input();
handle_user_input(ctx);
} }
} }


@@ -1,38 +1,30 @@
import ## Example showing how a resource restricted client may
std/[tables, sequtils], ## subscribe to messages without relay
stew/byteutils,
chronicles,
chronos,
confutils,
libp2p/crypto/crypto,
eth/keys,
eth/p2p/discoveryv5/enr
import import chronicles, chronos, stew/byteutils, results
waku/[ import waku/[common/logging, node/peer_manager, waku_core, waku_filter_v2/client]
common/logging,
node/peer_manager,
waku_core,
waku_node,
waku_enr,
discovery/waku_discv5,
factory/builder,
waku_relay,
waku_filter_v2/client,
]
# careful if running pub and sub in the same machine
const wakuPort = 50000
const clusterId = 1
const shardId = @[0'u16]
const const
FilterPeer = FilterPeer =
"/ip4/64.225.80.192/tcp/30303/p2p/16Uiu2HAmNaeL4p3WEYzC9mgXBmBWSgWjPHRvatZTXnp8Jgv3iKsb" "/ip4/34.16.1.67/tcp/30303/p2p/16Uiu2HAmDCp8XJ9z1ev18zuv8NHekAsjNyezAvmMfFEJkiharitG"
FilterPubsubTopic = PubsubTopic("/waku/2/rs/1/0") # node-01.gc-us-central1-a.waku.test.status.im on waku.test
FilterPubsubTopic = PubsubTopic("/waku/2/rs/0/0")
FilterContentTopic = ContentTopic("/examples/1/light-pubsub-example/proto") FilterContentTopic = ContentTopic("/examples/1/light-pubsub-example/proto")
proc unsubscribe(
wfc: WakuFilterClient,
filterPeer: RemotePeerInfo,
filterPubsubTopic: PubsubTopic,
filterContentTopic: ContentTopic,
) {.async.} =
notice "unsubscribing from filter"
let unsubscribeRes =
await wfc.unsubscribe(filterPeer, filterPubsubTopic, @[filterContentTopic])
if unsubscribeRes.isErr:
notice "unsubscribe request failed", err = unsubscribeRes.error
else:
notice "unsubscribe request successful"
proc messagePushHandler( proc messagePushHandler(
pubsubTopic: PubsubTopic, message: WakuMessage pubsubTopic: PubsubTopic, message: WakuMessage
) {.async, gcsafe.} = ) {.async, gcsafe.} =
@@ -43,69 +35,55 @@ proc messagePushHandler(
contentTopic = message.contentTopic, contentTopic = message.contentTopic,
timestamp = message.timestamp timestamp = message.timestamp
proc setupAndSubscribe(rng: ref HmacDrbgContext) {.async.} = proc maintainSubscription(
# use notice to filter all waku messaging wfc: WakuFilterClient,
setupLog(logging.LogLevel.NOTICE, logging.LogFormat.TEXT) filterPeer: RemotePeerInfo,
filterPubsubTopic: PubsubTopic,
notice "starting subscriber", wakuPort = wakuPort filterContentTopic: ContentTopic,
let ) {.async.} =
nodeKey = crypto.PrivateKey.random(Secp256k1, rng[])[]
ip = parseIpAddress("0.0.0.0")
flags = CapabilitiesBitfield.init(relay = true)
let relayShards = RelayShards.init(clusterId, shardId).valueOr:
error "Relay shards initialization failed", error = error
quit(QuitFailure)
var enrBuilder = EnrBuilder.init(nodeKey)
enrBuilder.withWakuRelaySharding(relayShards).expect(
"Building ENR with relay sharding failed"
)
let record = enrBuilder.build().valueOr:
error "failed to create enr record", error = error
quit(QuitFailure)
var builder = WakuNodeBuilder.init()
builder.withNodeKey(nodeKey)
builder.withRecord(record)
builder.withNetworkConfigurationDetails(ip, Port(wakuPort)).tryGet()
let node = builder.build().tryGet()
node.mountMetadata(clusterId, shardId).expect(
"failed to mount waku metadata protocol"
)
await node.mountFilterClient()
await node.start()
node.peerManager.start()
node.wakuFilterClient.registerPushHandler(messagePushHandler)
let filterPeer = parsePeerInfo(FilterPeer).get()
while true: while true:
notice "maintaining subscription" notice "maintaining subscription"
# First use filter-ping to check if we have an active subscription # First use filter-ping to check if we have an active subscription
if (await node.wakuFilterClient.ping(filterPeer)).isErr(): let pingRes = await wfc.ping(filterPeer)
if pingRes.isErr():
# No subscription found. Let's subscribe. # No subscription found. Let's subscribe.
notice "no subscription found. Sending subscribe request" notice "no subscription found. Sending subscribe request"
( let subscribeRes =
await node.wakuFilterClient.subscribe( await wfc.subscribe(filterPeer, filterPubsubTopic, @[filterContentTopic])
filterPeer, FilterPubsubTopic, @[FilterContentTopic]
) if subscribeRes.isErr():
).isOkOr: notice "subscribe request failed. Quitting.", err = subscribeRes.error
notice "subscribe request failed. Quitting.", error = error
break break
notice "subscribe request successful." else:
notice "subscribe request successful."
else: else:
notice "subscription found." notice "subscription found."
await sleepAsync(60.seconds) # Subscription maintenance interval await sleepAsync(60.seconds) # Subscription maintenance interval
proc setupAndSubscribe(rng: ref HmacDrbgContext) =
let filterPeer = parsePeerInfo(FilterPeer).get()
setupLog(logging.LogLevel.NOTICE, logging.LogFormat.TEXT)
notice "starting filter subscriber"
var
switch = newStandardSwitch()
pm = PeerManager.new(switch)
wfc = WakuFilterClient.new(pm, rng)
# Mount filter client protocol
switch.mount(wfc)
wfc.registerPushHandler(messagePushHandler)
# Start maintaining subscription
asyncSpawn maintainSubscription(
wfc, filterPeer, FilterPubsubTopic, FilterContentTopic
)
when isMainModule: when isMainModule:
let rng = crypto.newRng() let rng = newRng()
asyncSpawn setupAndSubscribe(rng) setupAndSubscribe(rng)
runForever() runForever()


@@ -71,32 +71,32 @@ package main
static void* cGoWakuNew(const char* configJson, void* resp) { static void* cGoWakuNew(const char* configJson, void* resp) {
// We pass NULL because we are not interested in retrieving data from this callback // We pass NULL because we are not interested in retrieving data from this callback
void* ret = waku_new(configJson, (FFICallBack) callback, resp); void* ret = waku_new(configJson, (WakuCallBack) callback, resp);
return ret; return ret;
} }
static void cGoWakuStart(void* wakuCtx, void* resp) { static void cGoWakuStart(void* wakuCtx, void* resp) {
WAKU_CALL(waku_start(wakuCtx, (FFICallBack) callback, resp)); WAKU_CALL(waku_start(wakuCtx, (WakuCallBack) callback, resp));
} }
static void cGoWakuStop(void* wakuCtx, void* resp) { static void cGoWakuStop(void* wakuCtx, void* resp) {
WAKU_CALL(waku_stop(wakuCtx, (FFICallBack) callback, resp)); WAKU_CALL(waku_stop(wakuCtx, (WakuCallBack) callback, resp));
} }
static void cGoWakuDestroy(void* wakuCtx, void* resp) { static void cGoWakuDestroy(void* wakuCtx, void* resp) {
WAKU_CALL(waku_destroy(wakuCtx, (FFICallBack) callback, resp)); WAKU_CALL(waku_destroy(wakuCtx, (WakuCallBack) callback, resp));
} }
static void cGoWakuStartDiscV5(void* wakuCtx, void* resp) { static void cGoWakuStartDiscV5(void* wakuCtx, void* resp) {
WAKU_CALL(waku_start_discv5(wakuCtx, (FFICallBack) callback, resp)); WAKU_CALL(waku_start_discv5(wakuCtx, (WakuCallBack) callback, resp));
} }
static void cGoWakuStopDiscV5(void* wakuCtx, void* resp) { static void cGoWakuStopDiscV5(void* wakuCtx, void* resp) {
WAKU_CALL(waku_stop_discv5(wakuCtx, (FFICallBack) callback, resp)); WAKU_CALL(waku_stop_discv5(wakuCtx, (WakuCallBack) callback, resp));
} }
static void cGoWakuVersion(void* wakuCtx, void* resp) { static void cGoWakuVersion(void* wakuCtx, void* resp) {
WAKU_CALL(waku_version(wakuCtx, (FFICallBack) callback, resp)); WAKU_CALL(waku_version(wakuCtx, (WakuCallBack) callback, resp));
} }
static void cGoWakuSetEventCallback(void* wakuCtx) { static void cGoWakuSetEventCallback(void* wakuCtx) {
@@ -112,7 +112,7 @@ package main
// This technique is needed because cgo only allows to export Go functions and not methods. // This technique is needed because cgo only allows to export Go functions and not methods.
set_event_callback(wakuCtx, (FFICallBack) globalEventCallback, wakuCtx); waku_set_event_callback(wakuCtx, (WakuCallBack) globalEventCallback, wakuCtx);
} }
static void cGoWakuContentTopic(void* wakuCtx, static void cGoWakuContentTopic(void* wakuCtx,
@@ -123,21 +123,20 @@ package main
void* resp) { void* resp) {
WAKU_CALL( waku_content_topic(wakuCtx, WAKU_CALL( waku_content_topic(wakuCtx,
(FFICallBack) callback,
resp,
appName, appName,
appVersion, appVersion,
contentTopicName, contentTopicName,
encoding encoding,
) ); (WakuCallBack) callback,
resp) );
} }
static void cGoWakuPubsubTopic(void* wakuCtx, char* topicName, void* resp) { static void cGoWakuPubsubTopic(void* wakuCtx, char* topicName, void* resp) {
WAKU_CALL( waku_pubsub_topic(wakuCtx, (FFICallBack) callback, resp, topicName) ); WAKU_CALL( waku_pubsub_topic(wakuCtx, topicName, (WakuCallBack) callback, resp) );
} }
static void cGoWakuDefaultPubsubTopic(void* wakuCtx, void* resp) { static void cGoWakuDefaultPubsubTopic(void* wakuCtx, void* resp) {
WAKU_CALL (waku_default_pubsub_topic(wakuCtx, (FFICallBack) callback, resp)); WAKU_CALL (waku_default_pubsub_topic(wakuCtx, (WakuCallBack) callback, resp));
} }
static void cGoWakuRelayPublish(void* wakuCtx, static void cGoWakuRelayPublish(void* wakuCtx,
@@ -147,36 +146,34 @@ package main
void* resp) { void* resp) {
WAKU_CALL (waku_relay_publish(wakuCtx, WAKU_CALL (waku_relay_publish(wakuCtx,
(FFICallBack) callback,
resp,
pubSubTopic, pubSubTopic,
jsonWakuMessage, jsonWakuMessage,
timeoutMs timeoutMs,
)); (WakuCallBack) callback,
resp));
} }
static void cGoWakuRelaySubscribe(void* wakuCtx, char* pubSubTopic, void* resp) { static void cGoWakuRelaySubscribe(void* wakuCtx, char* pubSubTopic, void* resp) {
WAKU_CALL ( waku_relay_subscribe(wakuCtx, WAKU_CALL ( waku_relay_subscribe(wakuCtx,
(FFICallBack) callback, pubSubTopic,
resp, (WakuCallBack) callback,
pubSubTopic) ); resp) );
} }
static void cGoWakuRelayUnsubscribe(void* wakuCtx, char* pubSubTopic, void* resp) { static void cGoWakuRelayUnsubscribe(void* wakuCtx, char* pubSubTopic, void* resp) {
WAKU_CALL ( waku_relay_unsubscribe(wakuCtx, WAKU_CALL ( waku_relay_unsubscribe(wakuCtx,
(FFICallBack) callback, pubSubTopic,
resp, (WakuCallBack) callback,
pubSubTopic) ); resp) );
} }
static void cGoWakuConnect(void* wakuCtx, char* peerMultiAddr, int timeoutMs, void* resp) { static void cGoWakuConnect(void* wakuCtx, char* peerMultiAddr, int timeoutMs, void* resp) {
WAKU_CALL( waku_connect(wakuCtx, WAKU_CALL( waku_connect(wakuCtx,
(FFICallBack) callback,
resp,
peerMultiAddr, peerMultiAddr,
timeoutMs timeoutMs,
) ); (WakuCallBack) callback,
resp) );
} }
static void cGoWakuDialPeerById(void* wakuCtx, static void cGoWakuDialPeerById(void* wakuCtx,
@@ -186,44 +183,42 @@ package main
void* resp) { void* resp) {
WAKU_CALL( waku_dial_peer_by_id(wakuCtx, WAKU_CALL( waku_dial_peer_by_id(wakuCtx,
(FFICallBack) callback,
resp,
peerId, peerId,
protocol, protocol,
timeoutMs timeoutMs,
) ); (WakuCallBack) callback,
resp) );
} }
static void cGoWakuDisconnectPeerById(void* wakuCtx, char* peerId, void* resp) { static void cGoWakuDisconnectPeerById(void* wakuCtx, char* peerId, void* resp) {
WAKU_CALL( waku_disconnect_peer_by_id(wakuCtx, WAKU_CALL( waku_disconnect_peer_by_id(wakuCtx,
(FFICallBack) callback, peerId,
resp, (WakuCallBack) callback,
peerId resp) );
) );
} }
static void cGoWakuListenAddresses(void* wakuCtx, void* resp) { static void cGoWakuListenAddresses(void* wakuCtx, void* resp) {
WAKU_CALL (waku_listen_addresses(wakuCtx, (FFICallBack) callback, resp) ); WAKU_CALL (waku_listen_addresses(wakuCtx, (WakuCallBack) callback, resp) );
} }
static void cGoWakuGetMyENR(void* ctx, void* resp) { static void cGoWakuGetMyENR(void* ctx, void* resp) {
WAKU_CALL (waku_get_my_enr(ctx, (FFICallBack) callback, resp) ); WAKU_CALL (waku_get_my_enr(ctx, (WakuCallBack) callback, resp) );
} }
static void cGoWakuGetMyPeerId(void* ctx, void* resp) { static void cGoWakuGetMyPeerId(void* ctx, void* resp) {
WAKU_CALL (waku_get_my_peerid(ctx, (FFICallBack) callback, resp) ); WAKU_CALL (waku_get_my_peerid(ctx, (WakuCallBack) callback, resp) );
} }
static void cGoWakuListPeersInMesh(void* ctx, char* pubSubTopic, void* resp) { static void cGoWakuListPeersInMesh(void* ctx, char* pubSubTopic, void* resp) {
WAKU_CALL (waku_relay_get_num_peers_in_mesh(ctx, (FFICallBack) callback, resp, pubSubTopic) ); WAKU_CALL (waku_relay_get_num_peers_in_mesh(ctx, pubSubTopic, (WakuCallBack) callback, resp) );
} }
static void cGoWakuGetNumConnectedPeers(void* ctx, char* pubSubTopic, void* resp) { static void cGoWakuGetNumConnectedPeers(void* ctx, char* pubSubTopic, void* resp) {
WAKU_CALL (waku_relay_get_num_connected_peers(ctx, (FFICallBack) callback, resp, pubSubTopic) ); WAKU_CALL (waku_relay_get_num_connected_peers(ctx, pubSubTopic, (WakuCallBack) callback, resp) );
} }
static void cGoWakuGetPeerIdsFromPeerStore(void* wakuCtx, void* resp) { static void cGoWakuGetPeerIdsFromPeerStore(void* wakuCtx, void* resp) {
WAKU_CALL (waku_get_peerids_from_peerstore(wakuCtx, (FFICallBack) callback, resp) ); WAKU_CALL (waku_get_peerids_from_peerstore(wakuCtx, (WakuCallBack) callback, resp) );
} }
static void cGoWakuLightpushPublish(void* wakuCtx, static void cGoWakuLightpushPublish(void* wakuCtx,
@@ -232,11 +227,10 @@ package main
void* resp) { void* resp) {
WAKU_CALL (waku_lightpush_publish(wakuCtx, WAKU_CALL (waku_lightpush_publish(wakuCtx,
(FFICallBack) callback,
resp,
pubSubTopic, pubSubTopic,
jsonWakuMessage jsonWakuMessage,
)); (WakuCallBack) callback,
resp));
} }
static void cGoWakuStoreQuery(void* wakuCtx, static void cGoWakuStoreQuery(void* wakuCtx,
@@ -246,12 +240,11 @@ package main
void* resp) { void* resp) {
WAKU_CALL (waku_store_query(wakuCtx, WAKU_CALL (waku_store_query(wakuCtx,
(FFICallBack) callback,
resp,
jsonQuery, jsonQuery,
peerAddr, peerAddr,
timeoutMs timeoutMs,
)); (WakuCallBack) callback,
resp));
} }
static void cGoWakuPeerExchangeQuery(void* wakuCtx, static void cGoWakuPeerExchangeQuery(void* wakuCtx,
@@ -259,10 +252,9 @@ package main
void* resp) { void* resp) {
WAKU_CALL (waku_peer_exchange_request(wakuCtx, WAKU_CALL (waku_peer_exchange_request(wakuCtx,
(FFICallBack) callback, numPeers,
resp, (WakuCallBack) callback,
numPeers resp));
));
} }
static void cGoWakuGetPeerIdsByProtocol(void* wakuCtx, static void cGoWakuGetPeerIdsByProtocol(void* wakuCtx,
@@ -270,10 +262,9 @@ package main
void* resp) { void* resp) {
WAKU_CALL (waku_get_peerids_by_protocol(wakuCtx, WAKU_CALL (waku_get_peerids_by_protocol(wakuCtx,
(FFICallBack) callback, protocol,
resp, (WakuCallBack) callback,
protocol resp));
));
} }
*/ */


@@ -1,331 +0,0 @@
// !$*UTF8*$!
{
archiveVersion = 1;
classes = {
};
objectVersion = 63;
objects = {
/* Begin PBXBuildFile section */
45714AF6D1D12AF5C36694FB /* WakuExampleApp.swift in Sources */ = {isa = PBXBuildFile; fileRef = 0671AF6DCB0D788B0C1E9C8B /* WakuExampleApp.swift */; };
6468FA3F5F760D3FCAD6CDBF /* ContentView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 7D8744E36DADC11F38A1CC99 /* ContentView.swift */; };
C4EA202B782038F96336401F /* WakuNode.swift in Sources */ = {isa = PBXBuildFile; fileRef = 638A565C495A63CFF7396FBC /* WakuNode.swift */; };
/* End PBXBuildFile section */
/* Begin PBXFileReference section */
0671AF6DCB0D788B0C1E9C8B /* WakuExampleApp.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = WakuExampleApp.swift; sourceTree = "<group>"; };
31BE20DB2755A11000723420 /* libwaku.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = libwaku.h; sourceTree = "<group>"; };
5C5AAC91E0166D28BFA986DB /* Info.plist */ = {isa = PBXFileReference; lastKnownFileType = text.plist; path = Info.plist; sourceTree = "<group>"; };
638A565C495A63CFF7396FBC /* WakuNode.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = WakuNode.swift; sourceTree = "<group>"; };
7D8744E36DADC11F38A1CC99 /* ContentView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = ContentView.swift; sourceTree = "<group>"; };
A8655016B3DF9B0877631CE5 /* WakuExample-Bridging-Header.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = "WakuExample-Bridging-Header.h"; sourceTree = "<group>"; };
CFBE844B6E18ACB81C65F83B /* WakuExample.app */ = {isa = PBXFileReference; explicitFileType = wrapper.application; includeInIndex = 0; path = WakuExample.app; sourceTree = BUILT_PRODUCTS_DIR; };
/* End PBXFileReference section */
/* Begin PBXGroup section */
34547A6259485BD047D6375C /* Products */ = {
isa = PBXGroup;
children = (
CFBE844B6E18ACB81C65F83B /* WakuExample.app */,
);
name = Products;
sourceTree = "<group>";
};
4F76CB85EC44E951B8E75522 /* WakuExample */ = {
isa = PBXGroup;
children = (
7D8744E36DADC11F38A1CC99 /* ContentView.swift */,
5C5AAC91E0166D28BFA986DB /* Info.plist */,
31BE20DB2755A11000723420 /* libwaku.h */,
A8655016B3DF9B0877631CE5 /* WakuExample-Bridging-Header.h */,
0671AF6DCB0D788B0C1E9C8B /* WakuExampleApp.swift */,
638A565C495A63CFF7396FBC /* WakuNode.swift */,
);
path = WakuExample;
sourceTree = "<group>";
};
D40CD2446F177CAABB0A747A = {
isa = PBXGroup;
children = (
4F76CB85EC44E951B8E75522 /* WakuExample */,
34547A6259485BD047D6375C /* Products */,
);
sourceTree = "<group>";
};
/* End PBXGroup section */
/* Begin PBXNativeTarget section */
F751EF8294AD21F713D47FDA /* WakuExample */ = {
isa = PBXNativeTarget;
buildConfigurationList = 757FA0123629BD63CB254113 /* Build configuration list for PBXNativeTarget "WakuExample" */;
buildPhases = (
D3AFD8C4DA68BF5C4F7D8E10 /* Sources */,
);
buildRules = (
);
dependencies = (
);
name = WakuExample;
packageProductDependencies = (
);
productName = WakuExample;
productReference = CFBE844B6E18ACB81C65F83B /* WakuExample.app */;
productType = "com.apple.product-type.application";
};
/* End PBXNativeTarget section */
/* Begin PBXProject section */
4FF82F0F4AF8E1E34728F150 /* Project object */ = {
isa = PBXProject;
attributes = {
BuildIndependentTargetsInParallel = YES;
LastUpgradeCheck = 1500;
};
buildConfigurationList = B3A4F48294254543E79767C4 /* Build configuration list for PBXProject "WakuExample" */;
compatibilityVersion = "Xcode 14.0";
developmentRegion = en;
hasScannedForEncodings = 0;
knownRegions = (
Base,
en,
);
mainGroup = D40CD2446F177CAABB0A747A;
minimizedProjectReferenceProxies = 1;
projectDirPath = "";
projectRoot = "";
targets = (
F751EF8294AD21F713D47FDA /* WakuExample */,
);
};
/* End PBXProject section */
/* Begin PBXSourcesBuildPhase section */
D3AFD8C4DA68BF5C4F7D8E10 /* Sources */ = {
isa = PBXSourcesBuildPhase;
buildActionMask = 2147483647;
files = (
6468FA3F5F760D3FCAD6CDBF /* ContentView.swift in Sources */,
45714AF6D1D12AF5C36694FB /* WakuExampleApp.swift in Sources */,
C4EA202B782038F96336401F /* WakuNode.swift in Sources */,
);
runOnlyForDeploymentPostprocessing = 0;
};
/* End PBXSourcesBuildPhase section */
/* Begin XCBuildConfiguration section */
36939122077C66DD94082311 /* Release */ = {
isa = XCBuildConfiguration;
buildSettings = {
ASSETCATALOG_COMPILER_APPICON_NAME = AppIcon;
CODE_SIGN_IDENTITY = "iPhone Developer";
DEVELOPMENT_TEAM = 2Q52K2W84K;
HEADER_SEARCH_PATHS = "$(PROJECT_DIR)/WakuExample";
INFOPLIST_FILE = WakuExample/Info.plist;
IPHONEOS_DEPLOYMENT_TARGET = 18.6;
LD_RUNPATH_SEARCH_PATHS = (
"$(inherited)",
"@executable_path/Frameworks",
);
"LIBRARY_SEARCH_PATHS[sdk=iphoneos*]" = "$(PROJECT_DIR)/../../build/ios/iphoneos-arm64";
"LIBRARY_SEARCH_PATHS[sdk=iphonesimulator*]" = "$(PROJECT_DIR)/../../build/ios/iphonesimulator-arm64";
MACOSX_DEPLOYMENT_TARGET = 15.6;
OTHER_LDFLAGS = (
"-lc++",
"-force_load",
"$(PROJECT_DIR)/../../build/ios/iphoneos-arm64/libwaku.a",
"-lsqlite3",
"-lz",
);
PRODUCT_BUNDLE_IDENTIFIER = org.waku.example;
SDKROOT = iphoneos;
SUPPORTED_PLATFORMS = "iphoneos iphonesimulator";
SUPPORTS_MACCATALYST = NO;
SUPPORTS_MAC_DESIGNED_FOR_IPHONE_IPAD = YES;
SUPPORTS_XR_DESIGNED_FOR_IPHONE_IPAD = YES;
SWIFT_OBJC_BRIDGING_HEADER = "WakuExample/WakuExample-Bridging-Header.h";
TARGETED_DEVICE_FAMILY = "1,2";
};
name = Release;
};
9BA833A09EEDB4B3FCCD8F8E /* Release */ = {
isa = XCBuildConfiguration;
buildSettings = {
ALWAYS_SEARCH_USER_PATHS = NO;
CLANG_ANALYZER_NONNULL = YES;
CLANG_ANALYZER_NUMBER_OBJECT_CONVERSION = YES_AGGRESSIVE;
CLANG_CXX_LANGUAGE_STANDARD = "gnu++14";
CLANG_CXX_LIBRARY = "libc++";
CLANG_ENABLE_MODULES = YES;
CLANG_ENABLE_OBJC_ARC = YES;
CLANG_ENABLE_OBJC_WEAK = YES;
CLANG_WARN_BLOCK_CAPTURE_AUTORELEASING = YES;
CLANG_WARN_BOOL_CONVERSION = YES;
CLANG_WARN_COMMA = YES;
CLANG_WARN_CONSTANT_CONVERSION = YES;
CLANG_WARN_DEPRECATED_OBJC_IMPLEMENTATIONS = YES;
CLANG_WARN_DIRECT_OBJC_ISA_USAGE = YES_ERROR;
CLANG_WARN_DOCUMENTATION_COMMENTS = YES;
CLANG_WARN_EMPTY_BODY = YES;
CLANG_WARN_ENUM_CONVERSION = YES;
CLANG_WARN_INFINITE_RECURSION = YES;
CLANG_WARN_INT_CONVERSION = YES;
CLANG_WARN_NON_LITERAL_NULL_CONVERSION = YES;
CLANG_WARN_OBJC_IMPLICIT_RETAIN_SELF = YES;
CLANG_WARN_OBJC_LITERAL_CONVERSION = YES;
CLANG_WARN_OBJC_ROOT_CLASS = YES_ERROR;
CLANG_WARN_QUOTED_INCLUDE_IN_FRAMEWORK_HEADER = YES;
CLANG_WARN_RANGE_LOOP_ANALYSIS = YES;
CLANG_WARN_STRICT_PROTOTYPES = YES;
CLANG_WARN_SUSPICIOUS_MOVE = YES;
CLANG_WARN_UNGUARDED_AVAILABILITY = YES_AGGRESSIVE;
CLANG_WARN_UNREACHABLE_CODE = YES;
CLANG_WARN__DUPLICATE_METHOD_MATCH = YES;
COPY_PHASE_STRIP = NO;
DEBUG_INFORMATION_FORMAT = "dwarf-with-dsym";
ENABLE_NS_ASSERTIONS = NO;
ENABLE_STRICT_OBJC_MSGSEND = YES;
GCC_C_LANGUAGE_STANDARD = gnu11;
GCC_NO_COMMON_BLOCKS = YES;
GCC_WARN_64_TO_32_BIT_CONVERSION = YES;
GCC_WARN_ABOUT_RETURN_TYPE = YES_ERROR;
GCC_WARN_UNDECLARED_SELECTOR = YES;
GCC_WARN_UNINITIALIZED_AUTOS = YES_AGGRESSIVE;
GCC_WARN_UNUSED_FUNCTION = YES;
GCC_WARN_UNUSED_VARIABLE = YES;
IPHONEOS_DEPLOYMENT_TARGET = 18.6;
MTL_ENABLE_DEBUG_INFO = NO;
MTL_FAST_MATH = YES;
PRODUCT_NAME = "$(TARGET_NAME)";
SDKROOT = iphoneos;
SUPPORTED_PLATFORMS = "iphoneos iphonesimulator";
SUPPORTS_MACCATALYST = NO;
SWIFT_COMPILATION_MODE = wholemodule;
SWIFT_OPTIMIZATION_LEVEL = "-O";
SWIFT_VERSION = 5.0;
};
name = Release;
};
A59ABFB792FED8974231E5AC /* Debug */ = {
isa = XCBuildConfiguration;
buildSettings = {
ALWAYS_SEARCH_USER_PATHS = NO;
CLANG_ANALYZER_NONNULL = YES;
CLANG_ANALYZER_NUMBER_OBJECT_CONVERSION = YES_AGGRESSIVE;
CLANG_CXX_LANGUAGE_STANDARD = "gnu++14";
CLANG_CXX_LIBRARY = "libc++";
CLANG_ENABLE_MODULES = YES;
CLANG_ENABLE_OBJC_ARC = YES;
CLANG_ENABLE_OBJC_WEAK = YES;
CLANG_WARN_BLOCK_CAPTURE_AUTORELEASING = YES;
CLANG_WARN_BOOL_CONVERSION = YES;
CLANG_WARN_COMMA = YES;
CLANG_WARN_CONSTANT_CONVERSION = YES;
CLANG_WARN_DEPRECATED_OBJC_IMPLEMENTATIONS = YES;
CLANG_WARN_DIRECT_OBJC_ISA_USAGE = YES_ERROR;
CLANG_WARN_DOCUMENTATION_COMMENTS = YES;
CLANG_WARN_EMPTY_BODY = YES;
CLANG_WARN_ENUM_CONVERSION = YES;
CLANG_WARN_INFINITE_RECURSION = YES;
CLANG_WARN_INT_CONVERSION = YES;
CLANG_WARN_NON_LITERAL_NULL_CONVERSION = YES;
CLANG_WARN_OBJC_IMPLICIT_RETAIN_SELF = YES;
CLANG_WARN_OBJC_LITERAL_CONVERSION = YES;
CLANG_WARN_OBJC_ROOT_CLASS = YES_ERROR;
CLANG_WARN_QUOTED_INCLUDE_IN_FRAMEWORK_HEADER = YES;
CLANG_WARN_RANGE_LOOP_ANALYSIS = YES;
CLANG_WARN_STRICT_PROTOTYPES = YES;
CLANG_WARN_SUSPICIOUS_MOVE = YES;
CLANG_WARN_UNGUARDED_AVAILABILITY = YES_AGGRESSIVE;
CLANG_WARN_UNREACHABLE_CODE = YES;
CLANG_WARN__DUPLICATE_METHOD_MATCH = YES;
COPY_PHASE_STRIP = NO;
DEBUG_INFORMATION_FORMAT = dwarf;
ENABLE_STRICT_OBJC_MSGSEND = YES;
ENABLE_TESTABILITY = YES;
GCC_C_LANGUAGE_STANDARD = gnu11;
GCC_DYNAMIC_NO_PIC = NO;
GCC_NO_COMMON_BLOCKS = YES;
GCC_OPTIMIZATION_LEVEL = 0;
GCC_PREPROCESSOR_DEFINITIONS = (
"$(inherited)",
"DEBUG=1",
);
GCC_WARN_64_TO_32_BIT_CONVERSION = YES;
GCC_WARN_ABOUT_RETURN_TYPE = YES_ERROR;
GCC_WARN_UNDECLARED_SELECTOR = YES;
GCC_WARN_UNINITIALIZED_AUTOS = YES_AGGRESSIVE;
GCC_WARN_UNUSED_FUNCTION = YES;
GCC_WARN_UNUSED_VARIABLE = YES;
IPHONEOS_DEPLOYMENT_TARGET = 18.6;
MTL_ENABLE_DEBUG_INFO = INCLUDE_SOURCE;
MTL_FAST_MATH = YES;
ONLY_ACTIVE_ARCH = YES;
PRODUCT_NAME = "$(TARGET_NAME)";
SDKROOT = iphoneos;
SUPPORTED_PLATFORMS = "iphoneos iphonesimulator";
SUPPORTS_MACCATALYST = NO;
SWIFT_ACTIVE_COMPILATION_CONDITIONS = DEBUG;
SWIFT_OPTIMIZATION_LEVEL = "-Onone";
SWIFT_VERSION = 5.0;
};
name = Debug;
};
AF5ADDAA865B1F6BD4E70A79 /* Debug */ = {
isa = XCBuildConfiguration;
buildSettings = {
ASSETCATALOG_COMPILER_APPICON_NAME = AppIcon;
CODE_SIGN_IDENTITY = "iPhone Developer";
DEVELOPMENT_TEAM = 2Q52K2W84K;
HEADER_SEARCH_PATHS = "$(PROJECT_DIR)/WakuExample";
INFOPLIST_FILE = WakuExample/Info.plist;
IPHONEOS_DEPLOYMENT_TARGET = 18.6;
LD_RUNPATH_SEARCH_PATHS = (
"$(inherited)",
"@executable_path/Frameworks",
);
"LIBRARY_SEARCH_PATHS[sdk=iphoneos*]" = "$(PROJECT_DIR)/../../build/ios/iphoneos-arm64";
"LIBRARY_SEARCH_PATHS[sdk=iphonesimulator*]" = "$(PROJECT_DIR)/../../build/ios/iphonesimulator-arm64";
MACOSX_DEPLOYMENT_TARGET = 15.6;
OTHER_LDFLAGS = (
"-lc++",
"-force_load",
"$(PROJECT_DIR)/../../build/ios/iphoneos-arm64/libwaku.a",
"-lsqlite3",
"-lz",
);
PRODUCT_BUNDLE_IDENTIFIER = org.waku.example;
SDKROOT = iphoneos;
SUPPORTED_PLATFORMS = "iphoneos iphonesimulator";
SUPPORTS_MACCATALYST = NO;
SUPPORTS_MAC_DESIGNED_FOR_IPHONE_IPAD = YES;
SUPPORTS_XR_DESIGNED_FOR_IPHONE_IPAD = YES;
SWIFT_OBJC_BRIDGING_HEADER = "WakuExample/WakuExample-Bridging-Header.h";
TARGETED_DEVICE_FAMILY = "1,2";
};
name = Debug;
};
/* End XCBuildConfiguration section */
/* Begin XCConfigurationList section */
757FA0123629BD63CB254113 /* Build configuration list for PBXNativeTarget "WakuExample" */ = {
isa = XCConfigurationList;
buildConfigurations = (
AF5ADDAA865B1F6BD4E70A79 /* Debug */,
36939122077C66DD94082311 /* Release */,
);
defaultConfigurationIsVisible = 0;
defaultConfigurationName = Debug;
};
B3A4F48294254543E79767C4 /* Build configuration list for PBXProject "WakuExample" */ = {
isa = XCConfigurationList;
buildConfigurations = (
A59ABFB792FED8974231E5AC /* Debug */,
9BA833A09EEDB4B3FCCD8F8E /* Release */,
);
defaultConfigurationIsVisible = 0;
defaultConfigurationName = Debug;
};
/* End XCConfigurationList section */
};
rootObject = 4FF82F0F4AF8E1E34728F150 /* Project object */;
}


@@ -1,7 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<Workspace
version = "1.0">
<FileRef
location = "self:">
</FileRef>
</Workspace>


@@ -1,229 +0,0 @@
//
// ContentView.swift
// WakuExample
//
// Minimal chat PoC using libwaku on iOS
//
import SwiftUI
struct ContentView: View {
@StateObject private var wakuNode = WakuNode()
@State private var messageText = ""
var body: some View {
ZStack {
// Main content
VStack(spacing: 0) {
// Header with status
HStack {
Circle()
.fill(statusColor)
.frame(width: 10, height: 10)
VStack(alignment: .leading, spacing: 2) {
Text(wakuNode.status.rawValue)
.font(.caption)
if wakuNode.status == .running {
HStack(spacing: 4) {
Text(wakuNode.isConnected ? "Connected" : "Discovering...")
Text("•")
filterStatusView
}
.font(.caption2)
.foregroundColor(.secondary)
// Subscription maintenance status
if wakuNode.subscriptionMaintenanceActive {
HStack(spacing: 4) {
Image(systemName: "arrow.triangle.2.circlepath")
.foregroundColor(.blue)
Text("Maintenance active")
if wakuNode.failedSubscribeAttempts > 0 {
Text("(\(wakuNode.failedSubscribeAttempts) retries)")
.foregroundColor(.orange)
}
}
.font(.caption2)
.foregroundColor(.secondary)
}
}
}
Spacer()
if wakuNode.status == .stopped {
Button("Start") {
wakuNode.start()
}
.buttonStyle(.borderedProminent)
.controlSize(.small)
} else if wakuNode.status == .running {
if !wakuNode.filterSubscribed {
Button("Resub") {
wakuNode.resubscribe()
}
.buttonStyle(.bordered)
.controlSize(.small)
}
Button("Stop") {
wakuNode.stop()
}
.buttonStyle(.bordered)
.controlSize(.small)
}
}
.padding()
.background(Color.gray.opacity(0.1))
// Messages list
ScrollViewReader { proxy in
ScrollView {
LazyVStack(alignment: .leading, spacing: 8) {
ForEach(wakuNode.receivedMessages.reversed()) { message in
MessageBubble(message: message)
.id(message.id)
}
}
.padding()
}
.onChange(of: wakuNode.receivedMessages.count) { _, newCount in
if let lastMessage = wakuNode.receivedMessages.first {
withAnimation {
proxy.scrollTo(lastMessage.id, anchor: .bottom)
}
}
}
}
Divider()
// Message input
HStack(spacing: 12) {
TextField("Message", text: $messageText)
.textFieldStyle(.roundedBorder)
.disabled(wakuNode.status != .running)
Button(action: sendMessage) {
Image(systemName: "paperplane.fill")
.foregroundColor(.white)
.padding(10)
.background(canSend ? Color.blue : Color.gray)
.clipShape(Circle())
}
.disabled(!canSend)
}
.padding()
.background(Color.gray.opacity(0.1))
}
// Toast overlay for errors
VStack {
ForEach(wakuNode.errorQueue) { error in
ToastView(error: error) {
wakuNode.dismissError(error)
}
.transition(.asymmetric(
insertion: .move(edge: .top).combined(with: .opacity),
removal: .opacity
))
}
Spacer()
}
.padding(.top, 8)
.animation(.easeInOut(duration: 0.3), value: wakuNode.errorQueue)
}
}
private var statusColor: Color {
switch wakuNode.status {
case .stopped: return .gray
case .starting: return .yellow
case .running: return .green
case .error: return .red
}
}
@ViewBuilder
private var filterStatusView: some View {
if wakuNode.filterSubscribed {
Text("Filter OK")
.foregroundColor(.green)
} else if wakuNode.failedSubscribeAttempts > 0 {
Text("Filter retrying (\(wakuNode.failedSubscribeAttempts))")
.foregroundColor(.orange)
} else {
Text("Filter pending")
.foregroundColor(.orange)
}
}
private var canSend: Bool {
wakuNode.status == .running && wakuNode.isConnected && !messageText.trimmingCharacters(in: .whitespaces).isEmpty
}
private func sendMessage() {
let text = messageText.trimmingCharacters(in: .whitespaces)
guard !text.isEmpty else { return }
wakuNode.publish(message: text)
messageText = ""
}
}
// MARK: - Toast View
struct ToastView: View {
let error: TimestampedError
let onDismiss: () -> Void
var body: some View {
HStack(spacing: 12) {
Image(systemName: "exclamationmark.triangle.fill")
.foregroundColor(.white)
Text(error.message)
.font(.subheadline)
.foregroundColor(.white)
.lineLimit(2)
Spacer()
Button(action: onDismiss) {
Image(systemName: "xmark.circle.fill")
.foregroundColor(.white.opacity(0.8))
.font(.title3)
}
.buttonStyle(.plain)
}
.padding(.horizontal, 16)
.padding(.vertical, 12)
.background(
RoundedRectangle(cornerRadius: 12)
.fill(Color.red.opacity(0.9))
.shadow(color: .black.opacity(0.2), radius: 8, x: 0, y: 4)
)
.padding(.horizontal, 16)
.padding(.vertical, 4)
}
}
// MARK: - Message Bubble
struct MessageBubble: View {
let message: WakuMessage
var body: some View {
VStack(alignment: .leading, spacing: 4) {
Text(message.payload)
.padding(10)
.background(Color.blue.opacity(0.1))
.cornerRadius(12)
Text(message.timestamp, style: .time)
.font(.caption2)
.foregroundColor(.secondary)
}
}
}
#Preview {
ContentView()
}


@@ -1,36 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>CFBundleDevelopmentRegion</key>
<string>$(DEVELOPMENT_LANGUAGE)</string>
<key>CFBundleDisplayName</key>
<string>Waku Example</string>
<key>CFBundleExecutable</key>
<string>$(EXECUTABLE_NAME)</string>
<key>CFBundleIdentifier</key>
<string>org.waku.example</string>
<key>CFBundleInfoDictionaryVersion</key>
<string>6.0</string>
<key>CFBundleName</key>
<string>WakuExample</string>
<key>CFBundlePackageType</key>
<string>APPL</string>
<key>CFBundleShortVersionString</key>
<string>1.0</string>
<key>CFBundleVersion</key>
<string>1</string>
<key>NSAppTransportSecurity</key>
<dict>
<key>NSAllowsArbitraryLoads</key>
<true/>
</dict>
<key>UILaunchScreen</key>
<dict/>
<key>UISupportedInterfaceOrientations</key>
<array>
<string>UIInterfaceOrientationPortrait</string>
</array>
</dict>
</plist>


@@ -1,15 +0,0 @@
//
// WakuExample-Bridging-Header.h
// WakuExample
//
// Bridging header to expose libwaku C functions to Swift
//
#ifndef WakuExample_Bridging_Header_h
#define WakuExample_Bridging_Header_h
#import "libwaku.h"
#endif /* WakuExample_Bridging_Header_h */


@@ -1,19 +0,0 @@
//
// WakuExampleApp.swift
// WakuExample
//
// SwiftUI app entry point for Waku iOS example
//
import SwiftUI
@main
struct WakuExampleApp: App {
var body: some Scene {
WindowGroup {
ContentView()
}
}
}


@@ -1,739 +0,0 @@
//
// WakuNode.swift
// WakuExample
//
// Swift wrapper around libwaku C API for edge mode (lightpush + filter)
// Uses Swift actors for thread safety and UI responsiveness
//
import Foundation
// MARK: - Data Types
/// Message received from Waku network
struct WakuMessage: Identifiable, Equatable, Sendable {
let id: String // messageHash from Waku - unique identifier for deduplication
let payload: String
let contentTopic: String
let timestamp: Date
}
/// Waku node status
enum WakuNodeStatus: String, Sendable {
case stopped = "Stopped"
case starting = "Starting..."
case running = "Running"
case error = "Error"
}
/// Status updates from WakuActor to WakuNode
enum WakuStatusUpdate: Sendable {
case statusChanged(WakuNodeStatus)
case connectionChanged(isConnected: Bool)
case filterSubscriptionChanged(subscribed: Bool, failedAttempts: Int)
case maintenanceChanged(active: Bool)
case error(String)
}
/// Error with timestamp for toast queue
struct TimestampedError: Identifiable, Equatable {
let id = UUID()
let message: String
let timestamp: Date
static func == (lhs: TimestampedError, rhs: TimestampedError) -> Bool {
lhs.id == rhs.id
}
}
// MARK: - Callback Context for C API
private final class CallbackContext: @unchecked Sendable {
private let lock = NSLock()
private var _continuation: CheckedContinuation<(success: Bool, result: String?), Never>?
private var _resumed = false
var success: Bool = false
var result: String?
var continuation: CheckedContinuation<(success: Bool, result: String?), Never>? {
get {
lock.lock()
defer { lock.unlock() }
return _continuation
}
set {
lock.lock()
defer { lock.unlock() }
_continuation = newValue
}
}
/// Thread-safe resume - ensures continuation is only resumed once
/// Returns true if this call actually resumed, false if already resumed
@discardableResult
func resumeOnce(returning value: (success: Bool, result: String?)) -> Bool {
lock.lock()
defer { lock.unlock() }
guard !_resumed, let cont = _continuation else {
return false
}
_resumed = true
_continuation = nil
cont.resume(returning: value)
return true
}
}
// MARK: - WakuActor
/// Actor that isolates all Waku operations from the main thread
/// All C API calls and mutable state are contained here
actor WakuActor {
// MARK: - State
private var ctx: UnsafeMutableRawPointer?
private var seenMessageHashes: Set<String> = []
private var isSubscribed: Bool = false
private var isSubscribing: Bool = false
private var hasPeers: Bool = false
private var maintenanceTask: Task<Void, Never>?
private var eventProcessingTask: Task<Void, Never>?
// Stream continuations for communicating with UI
private var messageContinuation: AsyncStream<WakuMessage>.Continuation?
private var statusContinuation: AsyncStream<WakuStatusUpdate>.Continuation?
// Event stream from C callbacks
private var eventContinuation: AsyncStream<String>.Continuation?
// Configuration
let defaultPubsubTopic = "/waku/2/rs/1/0"
let defaultContentTopic = "/waku-ios-example/1/chat/proto"
private let staticPeer = "/dns4/node-01.do-ams3.waku.sandbox.status.im/tcp/30303/p2p/16Uiu2HAmPLe7Mzm8TsYUubgCAW1aJoeFScxrLj8ppHFivPo97bUZ"
// Subscription maintenance settings
private let maxFailedSubscribes = 3
private let retryWaitSeconds: UInt64 = 2_000_000_000 // 2 seconds in nanoseconds
private let maintenanceIntervalSeconds: UInt64 = 30_000_000_000 // 30 seconds in nanoseconds
private let maxSeenHashes = 1000
// MARK: - Static callback storage (for C callbacks)
// We need a way for C callbacks to reach the actor
// Using a simple static reference (safe because we only have one instance)
private static var sharedEventContinuation: AsyncStream<String>.Continuation?
private static let eventCallback: WakuCallBack = { ret, msg, len, userData in
guard ret == RET_OK, let msg = msg else { return }
let str = String(cString: msg)
WakuActor.sharedEventContinuation?.yield(str)
}
private static let syncCallback: WakuCallBack = { ret, msg, len, userData in
guard let userData = userData else { return }
let context = Unmanaged<CallbackContext>.fromOpaque(userData).takeUnretainedValue()
let success = (ret == RET_OK)
var resultStr: String? = nil
if let msg = msg {
resultStr = String(cString: msg)
}
context.resumeOnce(returning: (success, resultStr))
}
// MARK: - Stream Setup
func setMessageContinuation(_ continuation: AsyncStream<WakuMessage>.Continuation?) {
self.messageContinuation = continuation
}
func setStatusContinuation(_ continuation: AsyncStream<WakuStatusUpdate>.Continuation?) {
self.statusContinuation = continuation
}
// MARK: - Public API
var isRunning: Bool {
ctx != nil
}
var hasConnectedPeers: Bool {
hasPeers
}
func start() async {
guard ctx == nil else {
print("[WakuActor] Already started")
return
}
statusContinuation?.yield(.statusChanged(.starting))
// Create event stream for C callbacks
let eventStream = AsyncStream<String> { continuation in
self.eventContinuation = continuation
WakuActor.sharedEventContinuation = continuation
}
// Start event processing task
eventProcessingTask = Task { [weak self] in
for await eventJson in eventStream {
await self?.handleEvent(eventJson)
}
}
// Initialize the node
let success = await initializeNode()
if success {
statusContinuation?.yield(.statusChanged(.running))
// Connect to peer
let connected = await connectToPeer()
if connected {
hasPeers = true
statusContinuation?.yield(.connectionChanged(isConnected: true))
// Start maintenance loop
startMaintenanceLoop()
} else {
statusContinuation?.yield(.error("Failed to connect to service peer"))
}
}
}
func stop() async {
guard let context = ctx else { return }
// Stop maintenance loop
maintenanceTask?.cancel()
maintenanceTask = nil
// Stop event processing
eventProcessingTask?.cancel()
eventProcessingTask = nil
// Close event stream
eventContinuation?.finish()
eventContinuation = nil
WakuActor.sharedEventContinuation = nil
statusContinuation?.yield(.statusChanged(.stopped))
statusContinuation?.yield(.connectionChanged(isConnected: false))
statusContinuation?.yield(.filterSubscriptionChanged(subscribed: false, failedAttempts: 0))
statusContinuation?.yield(.maintenanceChanged(active: false))
// Reset state
let ctxToStop = context
ctx = nil
isSubscribed = false
isSubscribing = false
hasPeers = false
seenMessageHashes.removeAll()
// Unsubscribe and stop in background (fire and forget)
Task.detached {
// Unsubscribe
_ = await self.callWakuSync { waku_filter_unsubscribe_all(ctxToStop, WakuActor.syncCallback, $0) }
print("[WakuActor] Unsubscribed from filter")
// Stop
_ = await self.callWakuSync { waku_stop(ctxToStop, WakuActor.syncCallback, $0) }
print("[WakuActor] Node stopped")
// Destroy
_ = await self.callWakuSync { waku_destroy(ctxToStop, WakuActor.syncCallback, $0) }
print("[WakuActor] Node destroyed")
}
}
func publish(message: String, contentTopic: String? = nil) async {
guard let context = ctx else {
print("[WakuActor] Node not started")
return
}
guard hasPeers else {
print("[WakuActor] No peers connected yet")
statusContinuation?.yield(.error("No peers connected yet. Please wait..."))
return
}
let topic = contentTopic ?? defaultContentTopic
guard let payloadData = message.data(using: .utf8) else { return }
let payloadBase64 = payloadData.base64EncodedString()
let timestamp = Int64(Date().timeIntervalSince1970 * 1_000_000_000)
let jsonMessage = """
{"payload":"\(payloadBase64)","contentTopic":"\(topic)","timestamp":\(timestamp)}
"""
let result = await callWakuSync { userData in
waku_lightpush_publish(
context,
self.defaultPubsubTopic,
jsonMessage,
WakuActor.syncCallback,
userData
)
}
if result.success {
print("[WakuActor] Published message")
} else {
print("[WakuActor] Publish error: \(result.result ?? "unknown")")
statusContinuation?.yield(.error("Failed to send message"))
}
}
func resubscribe() async {
print("[WakuActor] Force resubscribe requested")
isSubscribed = false
isSubscribing = false
statusContinuation?.yield(.filterSubscriptionChanged(subscribed: false, failedAttempts: 0))
_ = await subscribe()
}
// MARK: - Private Methods
private func initializeNode() async -> Bool {
let config = """
{
"tcpPort": 60000,
"clusterId": 1,
"shards": [0],
"relay": false,
"lightpush": true,
"filter": true,
"logLevel": "DEBUG",
"discv5Discovery": true,
"discv5BootstrapNodes": [
"enr:-QESuEB4Dchgjn7gfAvwB00CxTA-nGiyk-aALI-H4dYSZD3rUk7bZHmP8d2U6xDiQ2vZffpo45Jp7zKNdnwDUx6g4o6XAYJpZIJ2NIJpcIRA4VDAim11bHRpYWRkcnO4XAArNiZub2RlLTAxLmRvLWFtczMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwAtNiZub2RlLTAxLmRvLWFtczMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQOvD3S3jUNICsrOILlmhENiWAMmMVlAl6-Q8wRB7hidY4N0Y3CCdl-DdWRwgiMohXdha3UyDw",
"enr:-QEkuEBIkb8q8_mrorHndoXH9t5N6ZfD-jehQCrYeoJDPHqT0l0wyaONa2-piRQsi3oVKAzDShDVeoQhy0uwN1xbZfPZAYJpZIJ2NIJpcIQiQlleim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwA2Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQKnGt-GSgqPSf3IAPM7bFgTlpczpMZZLF3geeoNNsxzSoN0Y3CCdl-DdWRwgiMohXdha3UyDw"
],
"discv5UdpPort": 9999,
"dnsDiscovery": true,
"dnsDiscoveryUrl": "enrtree://AOGYWMBYOUIMOENHXCHILPKY3ZRFEULMFI4DOM442QSZ73TT2A7VI@test.waku.nodes.status.im",
"dnsDiscoveryNameServers": ["8.8.8.8", "1.0.0.1"]
}
"""
// Create node - waku_new is special, it returns the context directly
let createResult = await withCheckedContinuation { (continuation: CheckedContinuation<(ctx: UnsafeMutableRawPointer?, success: Bool, result: String?), Never>) in
let callbackCtx = CallbackContext()
let userDataPtr = Unmanaged.passRetained(callbackCtx).toOpaque()
// Set up a simple callback for waku_new
let newCtx = waku_new(config, { ret, msg, len, userData in
guard let userData = userData else { return }
let context = Unmanaged<CallbackContext>.fromOpaque(userData).takeUnretainedValue()
context.success = (ret == RET_OK)
if let msg = msg {
context.result = String(cString: msg)
}
}, userDataPtr)
// Small delay to ensure callback completes
DispatchQueue.global().asyncAfter(deadline: .now() + 0.1) {
Unmanaged<CallbackContext>.fromOpaque(userDataPtr).release()
continuation.resume(returning: (newCtx, callbackCtx.success, callbackCtx.result))
}
}
guard createResult.ctx != nil else {
statusContinuation?.yield(.statusChanged(.error))
statusContinuation?.yield(.error("Failed to create node: \(createResult.result ?? "unknown")"))
return false
}
ctx = createResult.ctx
// Set event callback
waku_set_event_callback(ctx, WakuActor.eventCallback, nil)
// Start node
let startResult = await callWakuSync { userData in
waku_start(self.ctx, WakuActor.syncCallback, userData)
}
guard startResult.success else {
statusContinuation?.yield(.statusChanged(.error))
statusContinuation?.yield(.error("Failed to start node: \(startResult.result ?? "unknown")"))
ctx = nil
return false
}
print("[WakuActor] Node started")
return true
}
private func connectToPeer() async -> Bool {
guard let context = ctx else { return false }
print("[WakuActor] Connecting to static peer...")
let result = await callWakuSync { userData in
waku_connect(context, self.staticPeer, 10000, WakuActor.syncCallback, userData)
}
if result.success {
print("[WakuActor] Connected to peer successfully")
return true
} else {
print("[WakuActor] Failed to connect: \(result.result ?? "unknown")")
return false
}
}
private func subscribe(contentTopic: String? = nil) async -> Bool {
guard let context = ctx else { return false }
guard !isSubscribed && !isSubscribing else { return isSubscribed }
isSubscribing = true
let topic = contentTopic ?? defaultContentTopic
let result = await callWakuSync { userData in
waku_filter_subscribe(
context,
self.defaultPubsubTopic,
topic,
WakuActor.syncCallback,
userData
)
}
isSubscribing = false
if result.success {
print("[WakuActor] Subscribe request successful to \(topic)")
isSubscribed = true
statusContinuation?.yield(.filterSubscriptionChanged(subscribed: true, failedAttempts: 0))
return true
} else {
print("[WakuActor] Subscribe error: \(result.result ?? "unknown")")
isSubscribed = false
return false
}
}
private func pingFilterPeer() async -> Bool {
guard let context = ctx else { return false }
let result = await callWakuSync { userData in
waku_ping_peer(
context,
self.staticPeer,
10000,
WakuActor.syncCallback,
userData
)
}
return result.success
}
// MARK: - Subscription Maintenance
private func startMaintenanceLoop() {
guard maintenanceTask == nil else {
print("[WakuActor] Maintenance loop already running")
return
}
statusContinuation?.yield(.maintenanceChanged(active: true))
print("[WakuActor] Starting subscription maintenance loop")
maintenanceTask = Task { [weak self] in
guard let self = self else { return }
var failedSubscribes = 0
var isFirstPingOnConnection = true
while !Task.isCancelled {
guard await self.isRunning else { break }
print("[WakuActor] Maintaining subscription...")
let pingSuccess = await self.pingFilterPeer()
let currentlySubscribed = await self.isSubscribed
if pingSuccess && currentlySubscribed {
print("[WakuActor] Subscription is live, waiting 30s")
try? await Task.sleep(nanoseconds: self.maintenanceIntervalSeconds)
continue
}
if !isFirstPingOnConnection && !pingSuccess {
print("[WakuActor] Ping failed - subscription may be lost")
await self.statusContinuation?.yield(.filterSubscriptionChanged(subscribed: false, failedAttempts: failedSubscribes))
}
isFirstPingOnConnection = false
print("[WakuActor] No active subscription found. Sending subscribe request...")
await self.resetSubscriptionState()
let subscribeSuccess = await self.subscribe()
if subscribeSuccess {
print("[WakuActor] Subscribe request successful")
failedSubscribes = 0
try? await Task.sleep(nanoseconds: self.maintenanceIntervalSeconds)
continue
}
failedSubscribes += 1
await self.statusContinuation?.yield(.filterSubscriptionChanged(subscribed: false, failedAttempts: failedSubscribes))
print("[WakuActor] Subscribe request failed. Attempt \(failedSubscribes)/\(self.maxFailedSubscribes)")
if failedSubscribes < self.maxFailedSubscribes {
print("[WakuActor] Retrying in 2s...")
try? await Task.sleep(nanoseconds: self.retryWaitSeconds)
} else {
print("[WakuActor] Max subscribe failures reached")
await self.statusContinuation?.yield(.error("Filter subscription failed after \(self.maxFailedSubscribes) attempts"))
failedSubscribes = 0
try? await Task.sleep(nanoseconds: self.maintenanceIntervalSeconds)
}
}
print("[WakuActor] Subscription maintenance loop stopped")
await self.statusContinuation?.yield(.maintenanceChanged(active: false))
}
}
private func resetSubscriptionState() {
isSubscribed = false
isSubscribing = false
}
// MARK: - Event Handling
private func handleEvent(_ eventJson: String) {
guard let data = eventJson.data(using: .utf8),
let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
let eventType = json["eventType"] as? String else {
return
}
if eventType == "connection_change" {
handleConnectionChange(json)
} else if eventType == "message" {
handleMessage(json)
}
}
private func handleConnectionChange(_ json: [String: Any]) {
guard let peerEvent = json["peerEvent"] as? String else { return }
if peerEvent == "Joined" || peerEvent == "Identified" {
hasPeers = true
statusContinuation?.yield(.connectionChanged(isConnected: true))
} else if peerEvent == "Left" {
statusContinuation?.yield(.filterSubscriptionChanged(subscribed: false, failedAttempts: 0))
}
}
private func handleMessage(_ json: [String: Any]) {
guard let messageHash = json["messageHash"] as? String,
let wakuMessage = json["wakuMessage"] as? [String: Any],
let payloadBase64 = wakuMessage["payload"] as? String,
let contentTopic = wakuMessage["contentTopic"] as? String,
let payloadData = Data(base64Encoded: payloadBase64),
let payloadString = String(data: payloadData, encoding: .utf8) else {
return
}
// Deduplicate
guard !seenMessageHashes.contains(messageHash) else {
return
}
seenMessageHashes.insert(messageHash)
// Limit memory usage
if seenMessageHashes.count > maxSeenHashes {
seenMessageHashes.removeAll()
}
let message = WakuMessage(
id: messageHash,
payload: payloadString,
contentTopic: contentTopic,
timestamp: Date()
)
messageContinuation?.yield(message)
}
// MARK: - Helper for synchronous C calls
private func callWakuSync(_ work: @escaping (UnsafeMutableRawPointer) -> Void) async -> (success: Bool, result: String?) {
await withCheckedContinuation { continuation in
let context = CallbackContext()
context.continuation = continuation
let userDataPtr = Unmanaged.passRetained(context).toOpaque()
work(userDataPtr)
// Set a timeout to avoid hanging forever
DispatchQueue.global().asyncAfter(deadline: .now() + 15) {
// Try to resume with timeout - will be ignored if callback already resumed
let didTimeout = context.resumeOnce(returning: (false, "Timeout"))
if didTimeout {
print("[WakuActor] Call timed out")
}
Unmanaged<CallbackContext>.fromOpaque(userDataPtr).release()
}
}
}
}
// MARK: - WakuNode (MainActor UI Wrapper)
/// Main-thread UI wrapper that consumes updates from WakuActor via AsyncStreams
@MainActor
class WakuNode: ObservableObject {
// MARK: - Published Properties (UI State)
@Published var status: WakuNodeStatus = .stopped
@Published var receivedMessages: [WakuMessage] = []
@Published var errorQueue: [TimestampedError] = []
@Published var isConnected: Bool = false
@Published var filterSubscribed: Bool = false
@Published var subscriptionMaintenanceActive: Bool = false
@Published var failedSubscribeAttempts: Int = 0
// Topics (read-only access to actor's config)
var defaultPubsubTopic: String { "/waku/2/rs/1/0" }
var defaultContentTopic: String { "/waku-ios-example/1/chat/proto" }
// MARK: - Private Properties
private let actor = WakuActor()
private var messageTask: Task<Void, Never>?
private var statusTask: Task<Void, Never>?
// MARK: - Initialization
init() {}
deinit {
messageTask?.cancel()
statusTask?.cancel()
}
// MARK: - Public API
func start() {
guard status == .stopped || status == .error else {
print("[WakuNode] Already started or starting")
return
}
// Create message stream
let messageStream = AsyncStream<WakuMessage> { continuation in
Task {
await self.actor.setMessageContinuation(continuation)
}
}
// Create status stream
let statusStream = AsyncStream<WakuStatusUpdate> { continuation in
Task {
await self.actor.setStatusContinuation(continuation)
}
}
// Start consuming messages
messageTask = Task { @MainActor in
for await message in messageStream {
self.receivedMessages.insert(message, at: 0)
if self.receivedMessages.count > 100 {
self.receivedMessages.removeLast()
}
}
}
// Start consuming status updates
statusTask = Task { @MainActor in
for await update in statusStream {
self.handleStatusUpdate(update)
}
}
// Start the actor
Task {
await actor.start()
}
}
func stop() {
messageTask?.cancel()
messageTask = nil
statusTask?.cancel()
statusTask = nil
Task {
await actor.stop()
}
// Immediate UI update
status = .stopped
isConnected = false
filterSubscribed = false
subscriptionMaintenanceActive = false
failedSubscribeAttempts = 0
}
func publish(message: String, contentTopic: String? = nil) {
Task {
await actor.publish(message: message, contentTopic: contentTopic)
}
}
func resubscribe() {
Task {
await actor.resubscribe()
}
}
func dismissError(_ error: TimestampedError) {
errorQueue.removeAll { $0.id == error.id }
}
func dismissAllErrors() {
errorQueue.removeAll()
}
// MARK: - Private Methods
private func handleStatusUpdate(_ update: WakuStatusUpdate) {
switch update {
case .statusChanged(let newStatus):
status = newStatus
case .connectionChanged(let connected):
isConnected = connected
case .filterSubscriptionChanged(let subscribed, let attempts):
filterSubscribed = subscribed
failedSubscribeAttempts = attempts
case .maintenanceChanged(let active):
subscriptionMaintenanceActive = active
case .error(let message):
let error = TimestampedError(message: message, timestamp: Date())
errorQueue.append(error)
// Schedule auto-dismiss after 10 seconds
let errorId = error.id
Task { @MainActor in
try? await Task.sleep(nanoseconds: 10_000_000_000)
self.errorQueue.removeAll { $0.id == errorId }
}
}
}
}


@@ -1,253 +0,0 @@
// Written manually, based on the header generated by the Nim compiler.
// To see the Nim-generated header, run `make libwaku` from the repo root;
// the header is then created at nimcache/release/libwaku/libwaku.h
#ifndef __libwaku__
#define __libwaku__
#include <stddef.h>
#include <stdint.h>
// The possible returned values for the functions that return int
#define RET_OK 0
#define RET_ERR 1
#define RET_MISSING_CALLBACK 2
#ifdef __cplusplus
extern "C" {
#endif
typedef void (*WakuCallBack) (int callerRet, const char* msg, size_t len, void* userData);
// Creates a new instance of the waku node.
// Sets up the waku node from the given configuration.
// Returns a pointer to the Context needed by the rest of the API functions.
void* waku_new(
const char* configJson,
WakuCallBack callback,
void* userData);
int waku_start(void* ctx,
WakuCallBack callback,
void* userData);
int waku_stop(void* ctx,
WakuCallBack callback,
void* userData);
// Destroys an instance of a waku node created with waku_new
int waku_destroy(void* ctx,
WakuCallBack callback,
void* userData);
int waku_version(void* ctx,
WakuCallBack callback,
void* userData);
// Sets a callback that will be invoked whenever an event occurs.
// The passed callback must be fast, non-blocking, and thread-safe.
void waku_set_event_callback(void* ctx,
WakuCallBack callback,
void* userData);
int waku_content_topic(void* ctx,
const char* appName,
unsigned int appVersion,
const char* contentTopicName,
const char* encoding,
WakuCallBack callback,
void* userData);
int waku_pubsub_topic(void* ctx,
const char* topicName,
WakuCallBack callback,
void* userData);
int waku_default_pubsub_topic(void* ctx,
WakuCallBack callback,
void* userData);
int waku_relay_publish(void* ctx,
const char* pubSubTopic,
const char* jsonWakuMessage,
unsigned int timeoutMs,
WakuCallBack callback,
void* userData);
int waku_lightpush_publish(void* ctx,
const char* pubSubTopic,
const char* jsonWakuMessage,
WakuCallBack callback,
void* userData);
int waku_relay_subscribe(void* ctx,
const char* pubSubTopic,
WakuCallBack callback,
void* userData);
int waku_relay_add_protected_shard(void* ctx,
int clusterId,
int shardId,
char* publicKey,
WakuCallBack callback,
void* userData);
int waku_relay_unsubscribe(void* ctx,
const char* pubSubTopic,
WakuCallBack callback,
void* userData);
int waku_filter_subscribe(void* ctx,
const char* pubSubTopic,
const char* contentTopics,
WakuCallBack callback,
void* userData);
int waku_filter_unsubscribe(void* ctx,
const char* pubSubTopic,
const char* contentTopics,
WakuCallBack callback,
void* userData);
int waku_filter_unsubscribe_all(void* ctx,
WakuCallBack callback,
void* userData);
int waku_relay_get_num_connected_peers(void* ctx,
const char* pubSubTopic,
WakuCallBack callback,
void* userData);
int waku_relay_get_connected_peers(void* ctx,
const char* pubSubTopic,
WakuCallBack callback,
void* userData);
int waku_relay_get_num_peers_in_mesh(void* ctx,
const char* pubSubTopic,
WakuCallBack callback,
void* userData);
int waku_relay_get_peers_in_mesh(void* ctx,
const char* pubSubTopic,
WakuCallBack callback,
void* userData);
int waku_store_query(void* ctx,
const char* jsonQuery,
const char* peerAddr,
int timeoutMs,
WakuCallBack callback,
void* userData);
int waku_connect(void* ctx,
const char* peerMultiAddr,
unsigned int timeoutMs,
WakuCallBack callback,
void* userData);
int waku_disconnect_peer_by_id(void* ctx,
const char* peerId,
WakuCallBack callback,
void* userData);
int waku_disconnect_all_peers(void* ctx,
WakuCallBack callback,
void* userData);
int waku_dial_peer(void* ctx,
const char* peerMultiAddr,
const char* protocol,
int timeoutMs,
WakuCallBack callback,
void* userData);
int waku_dial_peer_by_id(void* ctx,
const char* peerId,
const char* protocol,
int timeoutMs,
WakuCallBack callback,
void* userData);
int waku_get_peerids_from_peerstore(void* ctx,
WakuCallBack callback,
void* userData);
int waku_get_connected_peers_info(void* ctx,
WakuCallBack callback,
void* userData);
int waku_get_peerids_by_protocol(void* ctx,
const char* protocol,
WakuCallBack callback,
void* userData);
int waku_listen_addresses(void* ctx,
WakuCallBack callback,
void* userData);
int waku_get_connected_peers(void* ctx,
WakuCallBack callback,
void* userData);
// Returns a list of multiaddresses given a URL to a DNS-discoverable ENR tree
// Parameters
// char* entTreeUrl: URL containing a discoverable ENR tree
// char* nameDnsServer: the nameserver used to resolve the ENR tree URL
// int timeoutMs: timeout in milliseconds for the call
int waku_dns_discovery(void* ctx,
const char* entTreeUrl,
const char* nameDnsServer,
int timeoutMs,
WakuCallBack callback,
void* userData);
// Updates the bootnode list used for discovering new peers via DiscoveryV5
// bootnodes - JSON array containing the bootnode ENRs i.e. `["enr:...", "enr:..."]`
int waku_discv5_update_bootnodes(void* ctx,
char* bootnodes,
WakuCallBack callback,
void* userData);
int waku_start_discv5(void* ctx,
WakuCallBack callback,
void* userData);
int waku_stop_discv5(void* ctx,
WakuCallBack callback,
void* userData);
// Retrieves the ENR information
int waku_get_my_enr(void* ctx,
WakuCallBack callback,
void* userData);
int waku_get_my_peerid(void* ctx,
WakuCallBack callback,
void* userData);
int waku_get_metrics(void* ctx,
WakuCallBack callback,
void* userData);
int waku_peer_exchange_request(void* ctx,
int numPeers,
WakuCallBack callback,
void* userData);
int waku_ping_peer(void* ctx,
const char* peerAddr,
int timeoutMs,
WakuCallBack callback,
void* userData);
int waku_is_online(void* ctx,
WakuCallBack callback,
void* userData);
#ifdef __cplusplus
}
#endif
#endif /* __libwaku__ */

Some files were not shown because too many files have changed in this diff.