Mirror of https://github.com/logos-messaging/logos-messaging-nim.git, synced 2026-01-02 14:03:06 +00:00

Compare commits: 31 commits, v0.37.1-beta...master
Commit SHA1s: dafdee9f5f, 96196ab8bc, e3dd6203ae, 834eea945d, 2d40cb9d62, 7c24a15459, bc5059083e, 3323325526, 2477c4980f, 10dc3d3eb4, 9e2b3830e9, 7d1c6abaac, 868d43164e, 12952d070f, 7920368a36, 2cf4fe559a, a8590a0a7d, 8c30a8e1bb, 54f4ad8fa2, ae74b9018a, 7eb1fdb0ac, c6cf34df06, 1e73213a36, c0a7debfd1, 454b098ac5, 088e3108c8, b0cd75f4cb, 31e1a81552, e54851d9d6, adeb1a928e, cd5909fafe
.github/ISSUE_TEMPLATE/prepare_beta_release.md (vendored, new file, 56 lines)

@@ -0,0 +1,56 @@
---
name: Prepare Beta Release
about: Execute tasks for the creation and publishing of a new beta release
title: 'Prepare beta release 0.0.0'
labels: beta-release
assignees: ''

---

<!--
Add the appropriate release number to the title!

For detailed info on the release process, refer to https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md
-->

### Items to complete

All items below are to be completed by the owner of the given release.

- [ ] Create a release branch named with major and minor version only (e.g. `release/v0.X`) if it doesn't exist.
- [ ] Assign a release candidate tag to the release branch HEAD (e.g. `v0.X.0-beta-rc.0`, `v0.X.0-beta-rc.1`, ... `v0.X.0-beta-rc.N`).
- [ ] Generate and edit the release notes in CHANGELOG.md.

- [ ] **Waku test and fleets validation**
  - [ ] Ensure all the unit tests (specifically the js-waku tests) are green against the release candidate.
  - [ ] Deploy the release candidate to `waku.test` only, through the [deploy-waku-test job](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-test/), and wait for it to finish (Jenkins access required; ask the infra team if you don't have it).
    - After completion, disable the [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to master.
    - Verify the deployed version at https://fleets.waku.org/.
    - Confirm the container image exists on [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab).
  - [ ] Analyze Kibana logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test`.
    - The most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")`.
  - [ ] Re-enable the `waku.test` fleet to resume auto-deployment of the latest `master` commit.

- [ ] **Proceed with release**
  - [ ] Assign a final release tag (`v0.X.0-beta`) to the same commit that carries the validated release-candidate tag (e.g. `v0.X.0-beta-rc.N`) and submit a PR from the release branch to `master`.
  - [ ] Update [nwaku-compose](https://github.com/logos-messaging/nwaku-compose) and [waku-simulator](https://github.com/logos-messaging/waku-simulator) according to the new release.
  - [ ] Bump the nwaku dependency in [waku-rust-bindings](https://github.com/logos-messaging/waku-rust-bindings) and make sure all examples and tests work.
  - [ ] Bump the nwaku dependency in [waku-go-bindings](https://github.com/logos-messaging/waku-go-bindings) and make sure all tests work.
  - [ ] Create the GitHub release (https://github.com/logos-messaging/nwaku/releases).
  - [ ] Submit a PR to merge the release branch back to `master`. Make sure you use the "Merge pull request (Create a merge commit)" option to perform the merge. Ping a repo admin if this option is not available.

- [ ] **Promote release to fleets**
  - [ ] Ask the PM lead to announce the release.
  - [ ] Update the infra config with any deprecated arguments or changed options.
  - [ ] Update `waku.sandbox` with [this deployment job](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-sandbox/).

### Links

- [Release process](https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md)
- [Release notes](https://github.com/logos-messaging/nwaku/blob/master/CHANGELOG.md)
- [Fleet ownership](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64)
- [Infra-nim-waku](https://github.com/status-im/infra-nim-waku)
- [Jenkins](https://ci.infra.status.im/job/nim-waku/)
- [Fleets](https://fleets.waku.org/)
- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab)
.github/ISSUE_TEMPLATE/prepare_full_release.md (vendored, new file, 76 lines)

@@ -0,0 +1,76 @@
---
name: Prepare Full Release
about: Execute tasks for the creation and publishing of a new full release
title: 'Prepare full release 0.0.0'
labels: full-release
assignees: ''

---

<!--
Add the appropriate release number to the title!

For detailed info on the release process, refer to https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md
-->

### Items to complete

All items below are to be completed by the owner of the given release.

- [ ] Create a release branch named with major and minor version only (e.g. `release/v0.X`) if it doesn't exist.
- [ ] Assign a release candidate tag to the release branch HEAD (e.g. `v0.X.0-rc.0`, `v0.X.0-rc.1`, ... `v0.X.0-rc.N`).
- [ ] Generate and edit the release notes in CHANGELOG.md.

- [ ] **Validation of release candidate**

  - [ ] **Automated testing**
    - [ ] Ensure all the unit tests (specifically the js-waku tests) are green against the release candidate.
    - [ ] Ask Vac-QA and Vac-DST to perform the available tests against the release candidate.
      - [ ] Vac-DST (an additional report is needed; see [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f))

  - [ ] **Waku fleet testing**
    - [ ] Deploy the release candidate to the `waku.test` and `waku.sandbox` fleets.
      - Start the [deployment job](https://ci.infra.status.im/job/nim-waku/) for both fleets and wait for it to finish (Jenkins access required; ask the infra team if you don't have it).
      - After completion, disable the [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to `master`.
      - Verify the deployed version at https://fleets.waku.org/.
      - Confirm the container image exists on [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab).
    - [ ] Search _Kibana_ logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test` and `waku.sandbox`.
      - The most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")` OR `(fleet: "waku.sandbox" AND message: "SIGSEGV")`.
    - [ ] Re-enable the `waku.test` fleet to resume auto-deployment of the latest `master` commit.

  - [ ] **Status fleet testing**
    - [ ] Deploy the release candidate to `status.staging`.
    - [ ] Perform the [sanity check](https://www.notion.so/How-to-test-Nwaku-on-Status-12c6e4b9bf06420ca868bd199129b425) and log the results as comments in this issue.
    - [ ] Connect 2 instances to the `status.staging` fleet, one in relay mode, the other one as a light client.
      - 1:1 chats with each other
      - Send and receive messages in a community
      - Close one instance, send messages with the second instance, then reopen the first instance and confirm that messages sent while it was offline are retrieved from the store
    - [ ] Perform checks based on _end user impact_.
    - [ ] Inform other (Waku and Status) CCs to point their instances to `status.staging` for a few days. Ping Status colleagues on their Discord server or in the [Status community](https://status.app/c/G3kAAMSQtb05kog3aGbr3kiaxN4tF5xy4BAGEkkLwILk2z3GcoYlm5hSJXGn7J3laft-tnTwDWmYJ18dP_3bgX96dqr_8E3qKAvxDf3NrrCMUBp4R9EYkQez9XSM4486mXoC3mIln2zc-TNdvjdfL9eHVZ-mGgs=#zQ3shZeEJqTC1xhGUjxuS4rtHSrhJ8vUYp64v6qWkLpvdy9L9) (this is not a blocking point).
    - [ ] Ask Status-QA to perform sanity checks (as described above) and checks based on _end user impact_; specify the version being tested.
    - [ ] Ask Status-QA or infra to run the automated Status e2e tests against `status.staging`.
    - [ ] Get other CCs' sign-off: they should comment on this PR, e.g., "Used the app for a week, no problem." If problems are reported, resolve them and create a new RC.
    - [ ] **Get Status-QA sign-off**, ensuring that the `status.test` update will not disturb ongoing activities.

- [ ] **Proceed with release**

  - [ ] Assign a final release tag (`v0.X.0`) to the same commit that carries the validated release-candidate tag (e.g. `v0.X.0-rc.N`).
  - [ ] Update [nwaku-compose](https://github.com/logos-messaging/nwaku-compose) and [waku-simulator](https://github.com/logos-messaging/waku-simulator) according to the new release.
  - [ ] Bump the nwaku dependency in [waku-rust-bindings](https://github.com/logos-messaging/waku-rust-bindings) and make sure all examples and tests work.
  - [ ] Bump the nwaku dependency in [waku-go-bindings](https://github.com/logos-messaging/waku-go-bindings) and make sure all tests work.
  - [ ] Create the GitHub release (https://github.com/logos-messaging/nwaku/releases).
  - [ ] Submit a PR to merge the release branch back to `master`. Make sure you use the "Merge pull request (Create a merge commit)" option to perform the merge. Ping a repo admin if this option is not available.

- [ ] **Promote release to fleets**
  - [ ] Ask the PM lead to announce the release.
  - [ ] Update the infra config with any deprecated arguments or changed options.

### Links

- [Release process](https://github.com/logos-messaging/nwaku/blob/master/docs/contributors/release-process.md)
- [Release notes](https://github.com/logos-messaging/nwaku/blob/master/CHANGELOG.md)
- [Fleet ownership](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64)
- [Infra-nim-waku](https://github.com/status-im/infra-nim-waku)
- [Jenkins](https://ci.infra.status.im/job/nim-waku/)
- [Fleets](https://fleets.waku.org/)
- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab)
.github/ISSUE_TEMPLATE/prepare_release.md (vendored, deleted file, 72 lines)

@@ -1,72 +0,0 @@
---
name: Prepare release
about: Execute tasks for the creation and publishing of a new release
title: 'Prepare release 0.0.0'
labels: release
assignees: ''

---

<!--
Add appropriate release number to title!

For detailed info on the release process refer to https://github.com/waku-org/nwaku/blob/master/docs/contributors/release-process.md
-->

### Items to complete

All items below are to be completed by the owner of the given release.

- [ ] Create release branch
- [ ] Assign release candidate tag to the release branch HEAD. e.g. v0.30.0-rc.0
- [ ] Generate and edit releases notes in CHANGELOG.md
- [ ] Review possible update of [config-options](https://github.com/waku-org/docs.waku.org/blob/develop/docs/guides/nwaku/config-options.md)
- [ ] _End user impact_: Summarize impact of changes on Status end users (can be a comment in this issue).
- [ ] **Validate release candidate**
  - [ ] Bump nwaku dependency in [waku-rust-bindings](https://github.com/waku-org/waku-rust-bindings) and make sure all examples and tests work

  - [ ] Automated testing
    - [ ] Ensures js-waku tests are green against release candidate
    - [ ] Ask Vac-QA and Vac-DST to perform available tests against release candidate
      - [ ] Vac-QA
      - [ ] Vac-DST (we need additional report. see [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f))

  - [ ] **On Waku fleets**
    - [ ] Lock `waku.test` fleet to release candidate version
    - [ ] Continuously stress `waku.test` fleet for a week (e.g. from `wakudev`)
    - [ ] Search _Kibana_ logs from the previous month (since last release was deployed), for possible crashes or errors in `waku.test` and `waku.sandbox`.
      - Most relevant logs are `(fleet: "waku.test" OR fleet: "waku.sandbox") AND message: "SIGSEGV"`
    - [ ] Run release candidate with `waku-simulator`, ensure that nodes connected to each other
    - [ ] Unlock `waku.test` to resume auto-deployment of latest `master` commit

  - [ ] **On Status fleet**
    - [ ] Deploy release candidate to `status.staging`
    - [ ] Perform [sanity check](https://www.notion.so/How-to-test-Nwaku-on-Status-12c6e4b9bf06420ca868bd199129b425) and log results as comments in this issue.
    - [ ] Connect 2 instances to `status.staging` fleet, one in relay mode, the other one in light client.
      - [ ] 1:1 Chats with each other
      - [ ] Send and receive messages in a community
      - [ ] Close one instance, send messages with second instance, reopen first instance and confirm messages sent while offline are retrieved from store
    - [ ] Perform checks based _end user impact_
    - [ ] Inform other (Waku and Status) CCs to point their instance to `status.staging` for a few days. Ping Status colleagues from their Discord server or [Status community](https://status.app/c/G3kAAMSQtb05kog3aGbr3kiaxN4tF5xy4BAGEkkLwILk2z3GcoYlm5hSJXGn7J3laft-tnTwDWmYJ18dP_3bgX96dqr_8E3qKAvxDf3NrrCMUBp4R9EYkQez9XSM4486mXoC3mIln2zc-TNdvjdfL9eHVZ-mGgs=#zQ3shZeEJqTC1xhGUjxuS4rtHSrhJ8vUYp64v6qWkLpvdy9L9) (not blocking point.)
    - [ ] Ask Status-QA to perform sanity checks (as described above) + checks based on _end user impact_; do specify the version being tested
    - [ ] Ask Status-QA or infra to run the automated Status e2e tests against `status.staging`
    - [ ] Get other CCs sign-off: they comment on this PR "used app for a week, no problem", or problem reported, resolved and new RC
    - [ ] **Get Status-QA sign-off**. Ensuring that `status.test` update will not disturb ongoing activities.

- [ ] **Proceed with release**

  - [ ] Assign a release tag to the same commit that contains the validated release-candidate tag
  - [ ] Create GitHub release
  - [ ] Deploy the release to DockerHub
  - [ ] Announce the release

- [ ] **Promote release to fleets**.
  - [ ] Update infra config with any deprecated arguments or changed options
  - [ ] [Deploy final release to `waku.sandbox` fleet](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-sandbox)
  - [ ] [Deploy final release to `status.staging` fleet](https://ci.infra.status.im/job/nim-waku/job/deploy-shards-staging/)
  - [ ] [Deploy final release to `status.prod` fleet](https://ci.infra.status.im/job/nim-waku/job/deploy-shards-test/)

- [ ] **Post release**
  - [ ] Submit a PR from the release branch to master. Important to commit the PR with "create a merge commit" option.
  - [ ] Update waku-org/nwaku-compose with the new release version.
  - [ ] Update version in js-waku repo. [update only this](https://github.com/waku-org/js-waku/blob/7c0ce7b2eca31cab837da0251e1e4255151be2f7/.github/workflows/ci.yml#L135) by submitting a PR.
.github/workflows/ci.yml (vendored, 18 changed lines)
@@ -76,9 +76,12 @@ jobs:
             .git/modules
           key: ${{ runner.os }}-vendor-modules-${{ steps.submodules.outputs.hash }}

+      - name: Make update
+        run: make update
+
       - name: Build binaries
         run: make V=1 QUICK_AND_DIRTY_COMPILER=1 all tools

   build-windows:
     needs: changes
     if: ${{ needs.changes.outputs.v2 == 'true' || needs.changes.outputs.common == 'true' }}
@@ -114,6 +117,9 @@ jobs:
             .git/modules
           key: ${{ runner.os }}-vendor-modules-${{ steps.submodules.outputs.hash }}

+      - name: Make update
+        run: make update
+
       - name: Run tests
         run: |
           postgres_enabled=0
@@ -121,7 +127,7 @@ jobs:
             sudo docker run --rm -d -e POSTGRES_PASSWORD=test123 -p 5432:5432 postgres:15.4-alpine3.18
             postgres_enabled=1
           fi

           export MAKEFLAGS="-j1"
           export NIMFLAGS="--colors:off -d:chronicles_colors:none"
           export USE_LIBBACKTRACE=0
@@ -132,12 +138,12 @@ jobs:
   build-docker-image:
     needs: changes
     if: ${{ needs.changes.outputs.v2 == 'true' || needs.changes.outputs.common == 'true' || needs.changes.outputs.docker == 'true' }}
-    uses: waku-org/nwaku/.github/workflows/container-image.yml@master
+    uses: logos-messaging/logos-messaging-nim/.github/workflows/container-image.yml@10dc3d3eb4b6a3d4313f7b2cc4a85a925e9ce039
     secrets: inherit

   nwaku-nwaku-interop-tests:
     needs: build-docker-image
-    uses: waku-org/waku-interop-tests/.github/workflows/nim_waku_PR.yml@SMOKE_TEST_0.0.1
+    uses: logos-messaging/logos-messaging-interop-tests/.github/workflows/nim_waku_PR.yml@SMOKE_TEST_STABLE
     with:
       node_nwaku: ${{ needs.build-docker-image.outputs.image }}

@@ -145,14 +151,14 @@ jobs:

   js-waku-node:
     needs: build-docker-image
-    uses: waku-org/js-waku/.github/workflows/test-node.yml@master
+    uses: logos-messaging/js-waku/.github/workflows/test-node.yml@master
     with:
       nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
       test_type: node

   js-waku-node-optional:
     needs: build-docker-image
-    uses: waku-org/js-waku/.github/workflows/test-node.yml@master
+    uses: logos-messaging/js-waku/.github/workflows/test-node.yml@master
     with:
       nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
       test_type: node-optional
.github/workflows/container-image.yml (vendored, 3 changed lines)
@@ -41,7 +41,7 @@ jobs:
         env:
           QUAY_PASSWORD: ${{ secrets.QUAY_PASSWORD }}
           QUAY_USER: ${{ secrets.QUAY_USER }}

      - name: Checkout code
        if: ${{ steps.secrets.outcome == 'success' }}
        uses: actions/checkout@v4
@@ -65,6 +65,7 @@ jobs:
        id: build
        if: ${{ steps.secrets.outcome == 'success' }}
        run: |
+          make update

          make -j${NPROC} V=1 QUICK_AND_DIRTY_COMPILER=1 NIMFLAGS="-d:disableMarchNative -d:postgres -d:chronicles_colors:none" wakunode2
.github/workflows/pre-release.yml (vendored, 10 changed lines)
@@ -47,7 +47,7 @@ jobs:
      - name: prep variables
        id: vars
        run: |
          ARCH=${{matrix.arch}}

          echo "arch=${ARCH}" >> $GITHUB_OUTPUT

@@ -91,14 +91,14 @@ jobs:

  build-docker-image:
    needs: tag-name
-    uses: waku-org/nwaku/.github/workflows/container-image.yml@master
+    uses: logos-messaging/nwaku/.github/workflows/container-image.yml@master
    with:
      image_tag: ${{ needs.tag-name.outputs.tag }}
    secrets: inherit

  js-waku-node:
    needs: build-docker-image
-    uses: waku-org/js-waku/.github/workflows/test-node.yml@master
+    uses: logos-messaging/js-waku/.github/workflows/test-node.yml@master
    with:
      nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
      test_type: node
@@ -106,7 +106,7 @@ jobs:

  js-waku-node-optional:
    needs: build-docker-image
-    uses: waku-org/js-waku/.github/workflows/test-node.yml@master
+    uses: logos-messaging/js-waku/.github/workflows/test-node.yml@master
    with:
      nim_wakunode_image: ${{ needs.build-docker-image.outputs.image }}
      test_type: node-optional
@@ -150,7 +150,7 @@ jobs:
          -u $(id -u) \
          docker.io/wakuorg/sv4git:latest \
          release-notes ${RELEASE_NOTES_TAG} --previous $(git tag -l --sort -creatordate | grep -e "^v[0-9]*\.[0-9]*\.[0-9]*$") |\
-          sed -E 's@#([0-9]+)@[#\1](https://github.com/waku-org/nwaku/issues/\1)@g' > release_notes.md
+          sed -E 's@#([0-9]+)@[#\1](https://github.com/logos-messaging/nwaku/issues/\1)@g' > release_notes.md

          sed -i "s/^## .*/Generated at $(date)/" release_notes.md
.github/workflows/release-assets.yml (vendored, 75 changed lines)
@@ -41,25 +41,84 @@ jobs:
            .git/modules
          key: ${{ runner.os }}-${{matrix.arch}}-submodules-${{ steps.submodules.outputs.hash }}

-      - name: prep variables
+      - name: Get tag
+        id: version
+        run: |
+          # Use full tag, e.g., v0.37.0
+          echo "version=${GITHUB_REF_NAME}" >> $GITHUB_OUTPUT
+
+      - name: Prep variables
        id: vars
        run: |
-          NWAKU_ARTIFACT_NAME=$(echo "nwaku-${{matrix.arch}}-${{runner.os}}.tar.gz" | tr "[:upper:]" "[:lower:]")
-          echo "nwaku=${NWAKU_ARTIFACT_NAME}" >> $GITHUB_OUTPUT
+          VERSION=${{ steps.version.outputs.version }}
+
+          NWAKU_ARTIFACT_NAME=$(echo "waku-${{matrix.arch}}-${{runner.os}}.tar.gz" | tr "[:upper:]" "[:lower:]")
+          echo "waku=${NWAKU_ARTIFACT_NAME}" >> $GITHUB_OUTPUT
+
+          if [[ "${{ runner.os }}" == "Linux" ]]; then
+            LIBWAKU_ARTIFACT_NAME=$(echo "libwaku-${VERSION}-${{matrix.arch}}-${{runner.os}}-linux.deb" | tr "[:upper:]" "[:lower:]")
+          fi
+
+          if [[ "${{ runner.os }}" == "macOS" ]]; then
+            LIBWAKU_ARTIFACT_NAME=$(echo "libwaku-${VERSION}-${{matrix.arch}}-macos.tar.gz" | tr "[:upper:]" "[:lower:]")
+          fi
+
+          echo "libwaku=${LIBWAKU_ARTIFACT_NAME}" >> $GITHUB_OUTPUT

-      - name: Install dependencies
+      - name: Install build dependencies
+        run: |
+          if [[ "${{ runner.os }}" == "Linux" ]]; then
+            sudo apt-get update && sudo apt-get install -y build-essential dpkg-dev
+          fi
+
+      - name: Build Waku artifacts
        run: |
          OS=$([[ "${{runner.os}}" == "macOS" ]] && echo "macosx" || echo "linux")

          make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}}" V=1 update
          make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}} -d:postgres" CI=false wakunode2
          make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}}" CI=false chat2
-          tar -cvzf ${{steps.vars.outputs.nwaku}} ./build/
+          tar -cvzf ${{steps.vars.outputs.waku}} ./build/

-      - name: Upload asset
+          make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}} -d:postgres" CI=false libwaku
+          make -j${NPROC} NIMFLAGS="--parallelBuild:${NPROC} -d:disableMarchNative --os:${OS} --cpu:${{matrix.arch}} -d:postgres" CI=false STATIC=1 libwaku
+
+      - name: Create distributable libwaku package
+        run: |
+          VERSION=${{ steps.version.outputs.version }}
+
+          if [[ "${{ runner.os }}" == "Linux" ]]; then
+            rm -rf pkg
+            mkdir -p pkg/DEBIAN pkg/usr/local/lib pkg/usr/local/include
+            cp build/libwaku.so pkg/usr/local/lib/
+            cp build/libwaku.a pkg/usr/local/lib/
+            cp library/libwaku.h pkg/usr/local/include/
+
+            echo "Package: waku" >> pkg/DEBIAN/control
+            echo "Version: ${VERSION}" >> pkg/DEBIAN/control
+            echo "Priority: optional" >> pkg/DEBIAN/control
+            echo "Section: libs" >> pkg/DEBIAN/control
+            echo "Architecture: ${{matrix.arch}}" >> pkg/DEBIAN/control
+            echo "Maintainer: Waku Team <ivansete@status.im>" >> pkg/DEBIAN/control
+            echo "Description: Waku library" >> pkg/DEBIAN/control
+
+            dpkg-deb --build pkg ${{steps.vars.outputs.libwaku}}
+          fi
+
+          if [[ "${{ runner.os }}" == "macOS" ]]; then
+            tar -cvzf ${{steps.vars.outputs.libwaku}} ./build/libwaku.dylib ./build/libwaku.a ./library/libwaku.h
+          fi
+
+      - name: Upload waku artifact
        uses: actions/upload-artifact@v4.4.0
        with:
-          name: ${{steps.vars.outputs.nwaku}}
-          path: ${{steps.vars.outputs.nwaku}}
+          name: waku-${{ steps.version.outputs.version }}-${{ matrix.arch }}-${{ runner.os }}
+          path: ${{ steps.vars.outputs.waku }}
+          if-no-files-found: error
+
+      - name: Upload libwaku artifact
+        uses: actions/upload-artifact@v4.4.0
+        with:
+          name: libwaku-${{ steps.version.outputs.version }}-${{ matrix.arch }}-${{ runner.os }}
+          path: ${{ steps.vars.outputs.libwaku }}
          if-no-files-found: error
.gitignore (vendored, 10 changed lines)
@@ -59,6 +59,10 @@ nimbus-build-system.paths
 /examples/nodejs/build/
 /examples/rust/target/

+# Xcode user data
+xcuserdata/
+*.xcuserstate
+
 # Coverage
 coverage_html_report/
@@ -79,3 +83,9 @@ waku_handler.moc.cpp

 # Nix build result
 result
+
+# llms
+AGENTS.md
+nimble.develop
+nimble.paths
+nimbledeps
.gitmodules (vendored, 7 changed lines)
@@ -181,6 +181,11 @@
 	branch = master
 [submodule "vendor/waku-rlnv2-contract"]
 	path = vendor/waku-rlnv2-contract
-	url = https://github.com/waku-org/waku-rlnv2-contract.git
+	url = https://github.com/logos-messaging/waku-rlnv2-contract.git
+	ignore = untracked
+	branch = master
+[submodule "vendor/nim-ffi"]
+	path = vendor/nim-ffi
+	url = https://github.com/logos-messaging/nim-ffi/
 	ignore = untracked
 	branch = master
AGENTS.md (new file, 509 lines)

@@ -0,0 +1,509 @@
# AGENTS.md - AI Coding Context

This file provides essential context for LLMs assisting with Logos Messaging development.

## Project Identity

Logos Messaging is designed as a shared public network for generalized messaging, not application-specific infrastructure.

This project is a Nim implementation of a libp2p protocol suite for private, censorship-resistant P2P messaging. It targets resource-restricted devices and privacy-preserving communication.

Logos Messaging was formerly known as Waku. Waku-related terminology remains in the codebase for historical reasons.

### Design Philosophy

Key architectural decisions:

Resource-restricted first: Protocols differentiate between full nodes (relay) and light clients (filter, lightpush, store). Light clients can participate without maintaining full message history or relay capabilities. This explains the client/server split in protocol implementations.

Privacy through unlinkability: RLN (Rate Limiting Nullifier) provides DoS protection while preserving sender anonymity. Messages are routed through pubsub topics with automatic sharding across 8 shards. The code prioritizes metadata privacy alongside content encryption.

Scalability via sharding: The network uses automatic content-topic-based sharding to distribute traffic. This is why you'll see sharding logic throughout the codebase, and why pubsub topic selection is protocol-level, not application-level.

See the [documentation](https://docs.waku.org/learn/) for architectural details.

### Core Protocols

- Relay: Pub/sub message routing using GossipSub
- Store: Historical message retrieval and persistence
- Filter: Lightweight message filtering for resource-restricted clients
- Lightpush: Lightweight message publishing for clients
- Peer Exchange: Peer discovery mechanism
- RLN Relay: Rate limiting nullifier for spam protection
- Metadata: Cluster and shard metadata exchange between peers
- Mix: Mixnet protocol for enhanced privacy through onion routing
- Rendezvous: Alternative peer discovery mechanism

### Key Terminology

- ENR (Ethereum Node Record): Node identity and capability advertisement
- Multiaddr: libp2p addressing format (e.g., `/ip4/127.0.0.1/tcp/60000/p2p/16Uiu2...`)
- PubsubTopic: Gossipsub topic for message routing (e.g., `/waku/2/default-waku/proto`)
- ContentTopic: Application-level message categorization (e.g., `/my-app/1/chat/proto`)
- Sharding: Partitioning network traffic across topics (static or auto-sharding)
- RLN (Rate Limiting Nullifier): Zero-knowledge proof system for spam prevention

### Specifications

All specs are at [rfc.vac.dev/waku](https://rfc.vac.dev/waku). RFCs use the `WAKU2-XXX` format (not the legacy `WAKU-XXX`).

## Architecture

### Protocol Module Pattern

Each protocol typically follows this structure:

```
waku_<protocol>/
├── protocol.nim          # Main protocol type and handler logic
├── client.nim            # Client-side API
├── rpc.nim               # RPC message types
├── rpc_codec.nim         # Protobuf encoding/decoding
├── common.nim            # Shared types and constants
└── protocol_metrics.nim  # Prometheus metrics
```

### WakuNode Architecture

- WakuNode (`waku/node/waku_node.nim`) is the central orchestrator
- Protocols are "mounted" onto the node's switch (libp2p component)
- PeerManager handles peer selection and connection management
- Switch provides libp2p transport, security, and multiplexing

Example protocol type definition:

```nim
type WakuFilter* = ref object of LPProtocol
  subscriptions*: FilterSubscriptions
  peerManager: PeerManager
  messageCache: TimedCache[string]
```

## Development Essentials

### Build Requirements

- Nim 2.x (check `waku.nimble` for the minimum version)
- Rust toolchain (required for RLN dependencies)
- Build system: Make with nimbus-build-system

### Build System

The project uses a Makefile with nimbus-build-system (Status's Nim build framework):

```bash
# Initial build (updates submodules)
make wakunode2

# After git pull, update submodules
make update

# Build with custom flags
make wakunode2 NIMFLAGS="-d:chronicles_log_level=DEBUG"
```

Note: The build system uses `--mm:refc` memory management (automatically enforced). This is only relevant if compiling outside the standard build system.

### Common Make Targets

```bash
make wakunode2      # Build main node binary
make test           # Run all tests
make testcommon     # Run common tests only
make libwakuStatic  # Build static C library
make chat2          # Build chat example
make install-nph    # Install git hook for auto-formatting
```

### Testing

```bash
# Run all tests
make test

# Run specific test file
make test tests/test_waku_enr.nim

# Run specific test case from file
make test tests/test_waku_enr.nim "check capabilities support"

# Build and run test separately (for development iteration)
make test tests/test_waku_enr.nim
```

Test structure uses `testutils/unittests`:

```nim
import testutils/unittests

suite "Waku ENR - Capabilities":
  test "check capabilities support":
    ## Given
    let bitfield: CapabilitiesBitfield = 0b0000_1101u8

    ## Then
    check:
      bitfield.supportsCapability(Capabilities.Relay)
      not bitfield.supportsCapability(Capabilities.Store)
```

### Code Formatting

Mandatory: All code must be formatted with `nph` (vendored in `vendor/nph`).

```bash
# Format specific file
make nph/waku/waku_core.nim

# Install git pre-commit hook (auto-formats on commit)
make install-nph
```

The nph formatter handles all formatting details automatically, especially with the pre-commit hook installed. Focus on semantic correctness.

### Logging

Uses the `chronicles` library with compile-time configuration:

```nim
import chronicles

logScope:
  topics = "waku lightpush"

info "handling request", peerId = peerId, topic = pubsubTopic
error "request failed", error = msg
```

Compile with a log level:

```bash
nim c -d:chronicles_log_level=TRACE myfile.nim
```

## Code Conventions

Common pitfalls:
- Always handle Result types explicitly
- Avoid global mutable state: pass state through parameters
- Keep functions focused: under 50 lines when possible
- Prefer compile-time checks (`static assert`) over runtime checks

### Naming

- Files/Directories: `snake_case` (e.g., `waku_lightpush`, `peer_manager`)
- Procedures: `camelCase` (e.g., `handleRequest`, `pushMessage`)
- Types: `PascalCase` (e.g., `WakuFilter`, `PubsubTopic`)
- Constants: `PascalCase` (e.g., `MaxContentTopicsPerRequest`)
- Constructors: `func init(T: type Xxx, params): T`
  - For ref types: `func new(T: type Xxx, params): ref T`
- Exceptions: `XxxError` for CatchableError, `XxxDefect` for Defect
- ref object types: `XxxRef` suffix

### Imports Organization

Group imports: stdlib, external libs, internal modules:

```nim
import
  std/[options, sequtils],       # stdlib
  results, chronicles, chronos,  # external
  libp2p/peerid
import
  ../node/peer_manager,          # internal (separate import block)
  ../waku_core,
  ./common
```

### Async Programming

Uses chronos, not the stdlib `asyncdispatch`:

```nim
proc handleRequest(
    wl: WakuLightPush, peerId: PeerId
): Future[WakuLightPushResult] {.async.} =
  let res = await wl.pushHandler(peerId, pubsubTopic, message)
  return res
```

### Error Handling

The project uses both Result types and exceptions.

Result types from nim-results are used for protocol and API-level errors:

```nim
proc subscribe(
    wf: WakuFilter, peerId: PeerID
): Future[FilterSubscribeResult] {.async.} =
  if contentTopics.len > MaxContentTopicsPerRequest:
    return err(FilterSubscribeError.badRequest("exceeds maximum"))

  # Handle Result with isOkOr
  (await wf.subscriptions.addSubscription(peerId, criteria)).isOkOr:
    return err(FilterSubscribeError.serviceUnavailable(error))

  ok()
```

Exceptions are still used for:
- chronos async failures (CancelledError, etc.)
- Database/system errors
- Library interop

Most files start with `{.push raises: [].}` to disable exception tracking, then use try/except blocks where needed.

### Pragma Usage

```nim
{.push raises: [].}  # Disable default exception tracking (at file top)

proc myProc(): Result[T, E] {.async.} =  # Async proc
```

### Protocol Inheritance

Protocols inherit from libp2p's `LPProtocol`:

```nim
type WakuLightPush* = ref object of LPProtocol
  rng*: ref rand.HmacDrbgContext
  peerManager*: PeerManager
  pushHandler*: PushMessageHandler
```

### Type Visibility

- Public exports use the `*` suffix: `type WakuFilter* = ...`
- Fields without `*` are module-private

## Style Guide Essentials

This section summarizes key Nim style guidelines relevant to this project. Full guide: https://status-im.github.io/nim-style-guide/

### Language Features

Import and Export
- Use explicit import paths with the std/ prefix for stdlib
- Group imports: stdlib, external, internal (separate blocks)
- Export modules whose types appear in the public API
- Avoid include

Macros and Templates
- Avoid macros and templates - prefer simple constructs
- Avoid generating public API with macros
- Put logic in templates, use macros only for glue code

Object Construction
- Prefer Type(field: value) syntax
- Use the Type.init(params) convention for constructors
- Default zero-initialization should be a valid state
- Avoid using the result variable for construction
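
A minimal sketch of these conventions, using a hypothetical `PeerEntry` type (not from the codebase):

```nim
type PeerEntry = object
  peerId: string
  score: int # default zero-initialization (empty id, score 0) is a valid state

# Prefer Type(field: value) construction
let entry = PeerEntry(peerId: "16Uiu2...", score: 10)

# Type.init(params) convention, expression-based (no `result` variable)
func init(T: type PeerEntry, peerId: string): T =
  PeerEntry(peerId: peerId, score: 0)

let fresh = PeerEntry.init("16Uiu2...")
```
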
ref object Types
- Avoid ref object unless needed for:
  - Resource handles requiring reference semantics
  - Shared ownership
  - Reference-based data structures (trees, lists)
  - Stable pointer for FFI
- Use explicit ref MyType where possible
- Name ref object types with the Ref suffix: XxxRef
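
For instance, a hypothetical `Session`/`SessionRef` pair following both guidelines:

```nim
type
  Session = object
    id: int
  SessionRef = ref Session # explicit `ref`, named with the Ref suffix

# Allocate only where shared ownership is genuinely required
let shared = SessionRef(id: 1)
```
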
Memory Management
- Prefer stack-based and statically sized types in core code
- Use heap allocation in glue layers
- Avoid alloca
- For FFI: use create/dealloc or createShared/deallocShared
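
A minimal sketch of the create/dealloc guidance, with a hypothetical `Ctx` handle meant to cross an FFI boundary:

```nim
type Ctx = object
  counter: int

let ctx = create(Ctx) # zero-initialized heap cell with a stable address
ctx.counter = 1       # `ptr` fields auto-dereference
# ... hand `ctx` across the FFI boundary ...
dealloc(ctx)          # manual lifetime: must be freed explicitly
```
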
Variable Usage
- Use the most restrictive of const, let, var (prefer const over let over var)
- Prefer expressions for initialization over var then assignment
- Avoid the result variable - use explicit return or expression-based returns
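
As a small illustration, an if-expression initializes a `let` in one step where a mutable `var` plus assignment would otherwise be needed:

```nim
let isLight = true
# Expression-based initialization: no uninitialized `var`, no later mutation
let mode =
  if isLight: "light"
  else: "relay"
```
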
Functions
- Prefer func over proc
- Avoid public (*) symbols not part of the intended API
- Prefer openArray over seq for function parameters
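
A small sketch (hypothetical `countNonZero` helper): `func` guarantees no side effects, and `openArray[byte]` accepts both `seq[byte]` and fixed-size arrays:

```nim
func countNonZero(data: openArray[byte]): int =
  var n = 0
  for b in data:
    if b != 0:
      inc n
  n # expression-based return instead of the implicit `result` variable

doAssert countNonZero(@[1'u8, 0, 2]) == 2 # works with seq[byte]
doAssert countNonZero([0'u8, 0, 7]) == 1  # and with array[3, byte]
```
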
Methods (runtime polymorphism)
- Avoid the method keyword for dynamic dispatch
- Prefer a manual vtable with proc closures for polymorphism
- Methods lack support for generics
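
A minimal sketch of the manual-vtable pattern, with hypothetical `Transport`/`newBufferedTransport` names; the closure fields also carry the `{.raises: [], gcsafe.}` callback annotations recommended below:

```nim
type Transport = object
  # The "vtable": proc closure fields standing in for dynamic dispatch
  send: proc(data: seq[byte]) {.gcsafe, raises: [].}
  close: proc() {.gcsafe, raises: [].}

proc newBufferedTransport(): Transport =
  var buffer: seq[seq[byte]] = @[] # state captured by both closures
  let sendImpl = proc(data: seq[byte]) {.gcsafe, raises: [].} =
    buffer.add(data)
  let closeImpl = proc() {.gcsafe, raises: [].} =
    buffer.setLen(0)
  Transport(send: sendImpl, close: closeImpl)
```
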
Miscellaneous
- Annotate callback proc types with {.raises: [], gcsafe.}
- Avoid the explicit {.inline.} pragma
- Avoid converters
- Avoid finalizers

Type Guidelines

Binary Data
- Use byte for binary data
- Use seq[byte] for dynamic arrays
- Convert string to seq[byte] early if stdlib returns binary as string
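
For example, converting at the boundary (this assumes `stew/byteutils` is available, as the stew helpers referenced elsewhere in this guide suggest):

```nim
import stew/byteutils

# Convert binary-carrying strings to seq[byte] at the boundary
let raw: string = "\x01\x02\xff"
let data: seq[byte] = raw.toBytes()
doAssert data.len == 3
```
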
Integers
- Prefer signed (int, int64) for counting, lengths, indexing
- Use unsigned with explicit size (uint8, uint64) for binary data, bit ops
- Avoid Natural
- Check ranges before converting to int
- Avoid casting pointers to int
- Avoid range types
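
A small sketch of the range-check guidance (hypothetical `toIndex` helper): validate before narrowing, returning a `Result` instead of risking a `RangeDefect` on conversion:

```nim
import results

func toIndex(length: uint64): Result[int, cstring] =
  if length > uint64(int.high):
    return err("length exceeds int range") # reject instead of overflowing
  ok(int(length))

doAssert toIndex(42'u64).get() == 42
doAssert toIndex(uint64.high).isErr()
```
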
Strings
- Use string for text
- Use seq[byte] for binary data instead of string

### Error Handling

Philosophy
- Prefer Result, Opt for explicit error handling
- Use exceptions only for legacy code compatibility

Result Types
- Use Result[T, E] for operations that can fail
- Use cstring for simple error messages: Result[T, cstring]
- Use an enum for errors needing differentiation: Result[T, SomeErrorEnum]
- Use Opt[T] for simple optional values
- Annotate all modules with {.push raises: [].} at the top

Exceptions (when unavoidable)
- Inherit from CatchableError, name XxxError
- Use Defect for panics/logic errors, name XxxDefect
- Annotate functions explicitly: {.raises: [SpecificError].}
- Catch specific error types, avoid catching CatchableError
- Use expression-based try blocks
- Isolate legacy exception code with try/except, convert to Result
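
A sketch combining two of these points: an expression-based `try` that isolates a raising stdlib call (`parseInt`) behind a `Result` (hypothetical `parsePort` helper):

```nim
import std/strutils
import results

func parsePort(s: string): Result[uint16, cstring] =
  let port =
    try:
      parseInt(s) # stdlib call that raises ValueError on bad input
    except ValueError:
      return err("not a number")
  if port < 0 or port > int(uint16.high):
    return err("port out of range")
  ok(uint16(port))
```
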
Common Defect Sources
- Overflow in signed arithmetic
- Array/seq indexing with []
- Implicit range type conversions

Status Codes
- Avoid the status code pattern
- Use Result instead

### Library Usage

Standard Library
- Use judiciously, prefer focused packages
- Prefer these replacements:
  - async: chronos
  - bitops: stew/bitops2
  - endians: stew/endians2
  - exceptions: results
  - io: stew/io2

Results Library
- Use cstring errors for diagnostics without differentiation
- Use enum errors when the caller needs to act on specific errors
- Use complex types when additional error context is needed
- Use the isOkOr pattern for chaining

Wrappers (C/FFI)
- Prefer native Nim when available
- For C libraries: use {.compile.} to build from source
- Create xxx_abi.nim for the raw ABI wrapper
- Avoid C++ libraries
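
A minimal sketch of the `{.compile.}` plus raw-ABI pattern; `adler.c` and `adler32` are hypothetical stand-ins for a real C source file and symbol:

```nim
# adler_abi.nim - raw ABI wrapper, mirroring the C signature one-to-one
{.compile: "adler.c".} # C source compiled together with the Nim program

proc adler32(data: pointer, len: csize_t): uint32 {.importc, cdecl.}
```
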
Miscellaneous
- Print hex output in lowercase, accept both cases

### Common Pitfalls

- Defects are not tracked by {.raises.}
- A nil ref causes runtime crashes
- The result variable disables branch checking
- The exception hierarchy is unclear between Nim versions
- Range types have compiler bugs
- Finalizers infect all instances of a type

## Common Workflows

### Adding a New Protocol

1. Create the directory: `waku/waku_myprotocol/`
2. Define the core files:
   - `rpc.nim` - Message types
   - `rpc_codec.nim` - Protobuf encoding
   - `protocol.nim` - Protocol handler
   - `client.nim` - Client API
   - `common.nim` - Shared types
3. Define the protocol type in `protocol.nim`:

```nim
type WakuMyProtocol* = ref object of LPProtocol
  peerManager: PeerManager
  # ... fields
```

4. Implement the request handler
5. Mount it in WakuNode (`waku/node/waku_node.nim`)
6. Add tests in `tests/waku_myprotocol/`
7. Export the module via `waku/waku_myprotocol.nim`

### Adding a REST API Endpoint

1. Define the handler in `waku/rest_api/endpoint/myprotocol/`
2. Implement the endpoint following this pattern:

```nim
proc installMyProtocolApiHandlers*(
    router: var RestRouter, node: WakuNode
) =
  router.api(MethodGet, "/waku/v2/myprotocol/endpoint") do () -> RestApiResponse:
    # Implementation
    return RestApiResponse.jsonResponse(data, status = Http200)
```

3. Register it in `waku/rest_api/handlers.nim`

### Adding Database Migration

For message_store (SQLite):
1. Create `migrations/message_store/NNNNN_description.up.sql`
2. Create the corresponding `.down.sql` for rollback
3. Increment the version number sequentially
4. Test the migration locally before committing

For PostgreSQL: add it in `migrations/message_store_postgres/`

### Running Single Test During Development

```bash
# Build test binary
make test tests/waku_filter_v2/test_waku_client.nim

# Binary location
./build/tests/waku_filter_v2/test_waku_client.nim.bin

# Or combine
make test tests/waku_filter_v2/test_waku_client.nim "specific test name"
```

### Debugging with Chronicles

Set the log level and filter topics:

```bash
nim c -r \
  -d:chronicles_log_level=TRACE \
  -d:chronicles_disabled_topics="eth,dnsdisc" \
  tests/mytest.nim
```

## Key Constraints

### Vendor Directory

- Never edit files directly in vendor - it is populated from git submodules
- Always run `make update` after pulling changes
- Managed by `nimbus-build-system`

### Chronicles Performance

- Log levels are configured at compile time for performance
- Runtime filtering is available but should be used sparingly: `-d:chronicles_runtime_filtering=on`
- Default sinks are optimized for production

### Memory Management

- Uses `refc` (reference counting with cycle collection)
- Automatically enforced by the build system (hardcoded in `waku.nimble`)
- Do not override unless absolutely necessary, as it breaks compatibility

### RLN Dependencies

- RLN code requires a Rust toolchain, which explains the Rust imports in some modules
- Pre-built `librln` libraries are checked into the repository

## Quick Reference

Language: Nim 2.x | License: MIT or Apache 2.0

### Important Files

- `Makefile` - Primary build interface
- `waku.nimble` - Package definition and build tasks (called via nimbus-build-system)
- `vendor/nimbus-build-system/` - Status's build framework
- `waku/node/waku_node.nim` - Core node implementation
- `apps/wakunode2/wakunode2.nim` - Main CLI application
- `waku/factory/waku_conf.nim` - Configuration types
- `library/libwaku.nim` - C bindings entry point

### Testing Entry Points

- `tests/all_tests_waku.nim` - All Waku protocol tests
- `tests/all_tests_wakunode2.nim` - Node application tests
- `tests/all_tests_common.nim` - Common utilities tests

### Key Dependencies

- `chronos` - Async framework
- `nim-results` - Result type for error handling
- `chronicles` - Logging
- `libp2p` - P2P networking
- `confutils` - CLI argument parsing
- `presto` - REST server
- `nimcrypto` - Cryptographic primitives

Note: For specific version requirements, check `waku.nimble`.
CHANGELOG.md (65 changed lines)

@@ -1,3 +1,68 @@
|
|||||||
|
## v0.37.1-beta (2025-12-10)
|
||||||
|
|
||||||
|
### Bug Fixes
|
||||||
|
|
||||||
|
- Remove ENR cache from peer exchange ([#3652](https://github.com/logos-messaging/logos-messaging-nim/pull/3652)) ([7920368a](https://github.com/logos-messaging/logos-messaging-nim/commit/7920368a36687cd5f12afa52d59866792d8457ca))
|
||||||
|
|
## v0.37.0-beta (2025-10-01)

### Notes

- Deprecated parameters:
  - `tree_path` and `rlnDB` (RLN-related storage paths)
  - `--dns-discovery` (fully removed, including dns-discovery-name-server)
  - `keepAlive` (deprecated, config updated accordingly)
- Legacy `store` protocol is no longer supported by default.
- Sharding configuration is now explicit, and shard-specific metrics have been added.
- Mix nodes are limited to IPv4 addresses only.
- [lightpush legacy](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/19/lightpush.md) is being deprecated. Use [lightpush v3](https://github.com/waku-org/specs/blob/master/standards/core/lightpush.md) instead.

### Features

- Waku API: create node via API ([#3580](https://github.com/waku-org/nwaku/pull/3580)) ([bc8acf76](https://github.com/waku-org/nwaku/commit/bc8acf76))
- Waku Sync: full topic support ([#3275](https://github.com/waku-org/nwaku/pull/3275)) ([9327da5a](https://github.com/waku-org/nwaku/commit/9327da5a))
- Mix PoC implementation ([#3284](https://github.com/waku-org/nwaku/pull/3284)) ([eb7a3d13](https://github.com/waku-org/nwaku/commit/eb7a3d13))
- Rendezvous: add request interval option ([#3569](https://github.com/waku-org/nwaku/pull/3569)) ([cc7a6406](https://github.com/waku-org/nwaku/commit/cc7a6406))
- Shard-specific metrics tracking ([#3520](https://github.com/waku-org/nwaku/pull/3520)) ([c3da29fd](https://github.com/waku-org/nwaku/commit/c3da29fd))
- Libwaku: build Windows DLL for Status-go ([#3460](https://github.com/waku-org/nwaku/pull/3460)) ([5c38a53f](https://github.com/waku-org/nwaku/commit/5c38a53f))
- RLN: add Stateless RLN support ([#3621](https://github.com/waku-org/nwaku/pull/3621))
- Log: raise the level of numerous operational messages from debug to info for better visibility ([#3622](https://github.com/waku-org/nwaku/pull/3622))

### Bug Fixes

- Prevent invalid pubsub topic subscription via Relay REST API ([#3559](https://github.com/waku-org/nwaku/pull/3559)) ([a36601ab](https://github.com/waku-org/nwaku/commit/a36601ab))
- Fixed node crash when RLN is unregistered ([#3573](https://github.com/waku-org/nwaku/pull/3573)) ([3d0c6279](https://github.com/waku-org/nwaku/commit/3d0c6279))
- REST: fixed sync protocol issues ([#3503](https://github.com/waku-org/nwaku/pull/3503)) ([393e3cce](https://github.com/waku-org/nwaku/commit/393e3cce))
- Regex pattern fix for `username:password@` in URLs ([#3517](https://github.com/waku-org/nwaku/pull/3517)) ([89a3f735](https://github.com/waku-org/nwaku/commit/89a3f735))
- Sharding: applied modulus fix ([#3530](https://github.com/waku-org/nwaku/pull/3530)) ([f68d7999](https://github.com/waku-org/nwaku/commit/f68d7999))
- Metrics: switched to counter instead of gauge ([#3355](https://github.com/waku-org/nwaku/pull/3355)) ([a27eec90](https://github.com/waku-org/nwaku/commit/a27eec90))
- Fixed lightpush metrics and diagnostics ([#3486](https://github.com/waku-org/nwaku/pull/3486)) ([0ed3fc80](https://github.com/waku-org/nwaku/commit/0ed3fc80))
- Misc sync, dashboard, and CI fixes ([#3434](https://github.com/waku-org/nwaku/pull/3434), [#3508](https://github.com/waku-org/nwaku/pull/3508), [#3464](https://github.com/waku-org/nwaku/pull/3464))

### Changes

- Enable peer-exchange by default ([#3557](https://github.com/waku-org/nwaku/pull/3557)) ([7df526f8](https://github.com/waku-org/nwaku/commit/7df526f8))
- Refactor peer-exchange client and service implementations ([#3523](https://github.com/waku-org/nwaku/pull/3523)) ([4379f9ec](https://github.com/waku-org/nwaku/commit/4379f9ec))
- Updated rendezvous to use callback-based shard/capability updates ([#3558](https://github.com/waku-org/nwaku/pull/3558)) ([028bf297](https://github.com/waku-org/nwaku/commit/028bf297))
- Config updates and explicit sharding setup ([#3468](https://github.com/waku-org/nwaku/pull/3468)) ([994d485b](https://github.com/waku-org/nwaku/commit/994d485b))
- Bumped libp2p to v1.13.0 ([#3574](https://github.com/waku-org/nwaku/pull/3574)) ([b1616e55](https://github.com/waku-org/nwaku/commit/b1616e55))
- Removed legacy dependencies (e.g., libpcre in Docker builds) ([#3552](https://github.com/waku-org/nwaku/pull/3552)) ([4db4f830](https://github.com/waku-org/nwaku/commit/4db4f830))
- Benchmarks for RLN proof generation & verification ([#3567](https://github.com/waku-org/nwaku/pull/3567)) ([794c3a85](https://github.com/waku-org/nwaku/commit/794c3a85))
- Various CI/CD & infra updates ([#3515](https://github.com/waku-org/nwaku/pull/3515), [#3505](https://github.com/waku-org/nwaku/pull/3505))

### This release supports the following [libp2p protocols](https://docs.libp2p.io/concepts/protocols/):

| Protocol | Spec status | Protocol id |
| ---: | :---: | :--- |
| [`11/WAKU2-RELAY`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/11/relay.md) | `stable` | `/vac/waku/relay/2.0.0` |
| [`12/WAKU2-FILTER`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/12/filter.md) | `draft` | `/vac/waku/filter/2.0.0-beta1` <br />`/vac/waku/filter-subscribe/2.0.0-beta1` <br />`/vac/waku/filter-push/2.0.0-beta1` |
| [`13/WAKU2-STORE`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/13/store.md) | `draft` | `/vac/waku/store/2.0.0-beta4` |
| [`19/WAKU2-LIGHTPUSH`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/19/lightpush.md) | `draft` | `/vac/waku/lightpush/2.0.0-beta1` |
| [`WAKU2-LIGHTPUSH v3`](https://github.com/waku-org/specs/blob/master/standards/core/lightpush.md) | `draft` | `/vac/waku/lightpush/3.0.0` |
| [`66/WAKU2-METADATA`](https://github.com/waku-org/specs/blob/master/standards/core/metadata.md) | `raw` | `/vac/waku/metadata/1.0.0` |
| [`WAKU-SYNC`](https://github.com/waku-org/specs/blob/master/standards/core/sync.md) | `draft` | `/vac/waku/sync/1.0.0` |

## v0.36.0 (2025-06-20)

### Notes

@@ -1,5 +1,5 @@
 # BUILD NIM APP ----------------------------------------------------------------
-FROM rust:1.81.0-alpine3.19 AS nim-build
+FROM rustlang/rust:nightly-alpine3.19 AS nim-build

 ARG NIMFLAGS
 ARG MAKE_TARGET=lightpushwithmix
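A minimal sketch of how this image might be built locally; the `lightpush-mix` tag is a hypothetical name and only `MAKE_TARGET` and `NIMFLAGS` are taken from the Dockerfile above:

```shell
# Build with the default Make target declared by the Dockerfile's ARG.
docker build --build-arg MAKE_TARGET=lightpushwithmix -t lightpush-mix .
```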
85 Makefile

@@ -43,6 +43,9 @@ ifeq ($(detected_OS),Windows)
 LIBS = -lws2_32 -lbcrypt -liphlpapi -luserenv -lntdll -lminiupnpc -lnatpmp -lpq
 NIM_PARAMS += $(foreach lib,$(LIBS),--passL:"$(lib)")
+
+export PATH := /c/msys64/usr/bin:/c/msys64/mingw64/bin:/c/msys64/usr/lib:/c/msys64/mingw64/lib:$(PATH)
+
 endif

 ##########
@@ -116,6 +119,10 @@ endif
 ##################
 .PHONY: deps libbacktrace

+FOUNDRY_VERSION := 1.5.0
+PNPM_VERSION := 10.23.0
+
 rustup:
 ifeq (, $(shell which cargo))
 # Install Rustup if it's not installed
@@ -125,7 +132,7 @@ ifeq (, $(shell which cargo))
 endif

 rln-deps: rustup
-	./scripts/install_rln_tests_dependencies.sh
+	./scripts/install_rln_tests_dependencies.sh $(FOUNDRY_VERSION) $(PNPM_VERSION)

 deps: | deps-common nat-libs waku.nims
@@ -143,6 +150,9 @@ ifeq ($(USE_LIBBACKTRACE), 0)
 NIM_PARAMS := $(NIM_PARAMS) -d:disable_libbacktrace
 endif

+# enable experimental exit is dest feature in libp2p mix
+NIM_PARAMS := $(NIM_PARAMS) -d:libp2p_mix_experimental_exit_is_dest
+
 libbacktrace:
 	+ $(MAKE) -C vendor/nim-libbacktrace --no-print-directory BUILD_CXX_LIB=0
@@ -420,18 +430,27 @@ docker-liteprotocoltester-push:
 .PHONY: cbindings cwaku_example libwaku

 STATIC ?= 0
+BUILD_COMMAND ?= libwakuDynamic
+
+ifeq ($(detected_OS),Windows)
+LIB_EXT_DYNAMIC = dll
+LIB_EXT_STATIC = lib
+else ifeq ($(detected_OS),Darwin)
+LIB_EXT_DYNAMIC = dylib
+LIB_EXT_STATIC = a
+else ifeq ($(detected_OS),Linux)
+LIB_EXT_DYNAMIC = so
+LIB_EXT_STATIC = a
+endif
+
+LIB_EXT := $(LIB_EXT_DYNAMIC)
+ifeq ($(STATIC), 1)
+LIB_EXT = $(LIB_EXT_STATIC)
+BUILD_COMMAND = libwakuStatic
+endif

 libwaku: | build deps librln
-	rm -f build/libwaku*
-ifeq ($(STATIC), 1)
-	echo -e $(BUILD_MSG) "build/$@.a" && $(ENV_SCRIPT) nim libwakuStatic $(NIM_PARAMS) waku.nims
-else ifeq ($(detected_OS),Windows)
-	echo -e $(BUILD_MSG) "build/$@.dll" && $(ENV_SCRIPT) nim libwakuDynamic $(NIM_PARAMS) waku.nims
-else
-	echo -e $(BUILD_MSG) "build/$@.so" && $(ENV_SCRIPT) nim libwakuDynamic $(NIM_PARAMS) waku.nims
-endif
+	echo -e $(BUILD_MSG) "build/$@.$(LIB_EXT)" && $(ENV_SCRIPT) nim $(BUILD_COMMAND) $(NIM_PARAMS) waku.nims $@.$(LIB_EXT)

 #####################
 ## Mobile Bindings ##
@@ -498,6 +517,51 @@ libwaku-android:
 # It's likely this architecture is not used so we might just not support it.
 # $(MAKE) libwaku-android-arm

+#################
+## iOS Bindings #
+#################
+.PHONY: libwaku-ios-precheck \
+	libwaku-ios-device \
+	libwaku-ios-simulator \
+	libwaku-ios
+
+IOS_DEPLOYMENT_TARGET ?= 18.0
+
+# Get SDK paths dynamically using xcrun
+define get_ios_sdk_path
+$(shell xcrun --sdk $(1) --show-sdk-path 2>/dev/null)
+endef
+
+libwaku-ios-precheck:
+ifeq ($(detected_OS),Darwin)
+	@command -v xcrun >/dev/null 2>&1 || { echo "Error: Xcode command line tools not installed"; exit 1; }
+else
+	$(error iOS builds are only supported on macOS)
+endif
+
+# Build for iOS architecture
+build-libwaku-for-ios-arch:
+	IOS_SDK=$(IOS_SDK) IOS_ARCH=$(IOS_ARCH) IOS_SDK_PATH=$(IOS_SDK_PATH) $(ENV_SCRIPT) nim libWakuIOS $(NIM_PARAMS) waku.nims
+
+# iOS device (arm64)
+libwaku-ios-device: IOS_ARCH=arm64
+libwaku-ios-device: IOS_SDK=iphoneos
+libwaku-ios-device: IOS_SDK_PATH=$(call get_ios_sdk_path,iphoneos)
+libwaku-ios-device: | libwaku-ios-precheck build deps
+	$(MAKE) build-libwaku-for-ios-arch IOS_ARCH=$(IOS_ARCH) IOS_SDK=$(IOS_SDK) IOS_SDK_PATH=$(IOS_SDK_PATH)
+
+# iOS simulator (arm64 - Apple Silicon Macs)
+libwaku-ios-simulator: IOS_ARCH=arm64
+libwaku-ios-simulator: IOS_SDK=iphonesimulator
+libwaku-ios-simulator: IOS_SDK_PATH=$(call get_ios_sdk_path,iphonesimulator)
+libwaku-ios-simulator: | libwaku-ios-precheck build deps
+	$(MAKE) build-libwaku-for-ios-arch IOS_ARCH=$(IOS_ARCH) IOS_SDK=$(IOS_SDK) IOS_SDK_PATH=$(IOS_SDK_PATH)
+
+# Build all iOS targets
+libwaku-ios:
+	$(MAKE) libwaku-ios-device
+	$(MAKE) libwaku-ios-simulator

 cwaku_example: | build libwaku
 	echo -e $(BUILD_MSG) "build/$@" && \
 	cc -o "build/$@" \
@@ -543,4 +607,3 @@ release-notes:
 	sed -E 's@#([0-9]+)@[#\1](https://github.com/waku-org/nwaku/issues/\1)@g'
 # I could not get the tool to replace issue ids with links, so using sed for now,
 # asked here: https://github.com/bvieira/sv4git/discussions/101
-
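A quick sketch of how the new `STATIC`/`LIB_EXT`/`BUILD_COMMAND` logic is driven from the command line; the targets below are taken directly from the Makefile diff above, while the exact output filenames follow its `LIB_EXT` selection:

```shell
# Dynamic library (default): build/libwaku.so on Linux, .dylib on macOS, .dll on Windows
make libwaku

# Static library: selects the libwakuStatic Nim task and the .a/.lib extension
make libwaku STATIC=1

# iOS device + simulator builds (macOS only; other hosts fail early via libwaku-ios-precheck)
make libwaku-ios
```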
14 README.md

@@ -1,19 +1,21 @@
-# Nwaku
+# Logos Messaging Nim

 ## Introduction

-The nwaku repository implements Waku, and provides tools related to it.
+The logos-messaging-nim, a.k.a. lmn or nwaku, repository implements a set of libp2p protocols aimed to bring
+private communications.

-- A Nim implementation of the [Waku (v2) protocol](https://specs.vac.dev/specs/waku/v2/waku-v2.html).
-- CLI application `wakunode2` that allows you to run a Waku node.
-- Examples of Waku usage.
+- Nim implementation of [these specs](https://github.com/vacp2p/rfc-index/tree/main/waku).
+- C library that exposes the implemented protocols.
+- CLI application that allows you to run an lmn node.
+- Examples.
 - Various tests of above.

 For more details see the [source code](waku/README.md)

 ## How to Build & Run ( Linux, MacOS & WSL )

-These instructions are generic. For more detailed instructions, see the Waku source code above.
+These instructions are generic. For more detailed instructions, see the source code above.

 ### Prerequisites
@@ -480,7 +480,9 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
   if conf.lightpushnode != "":
     let peerInfo = parsePeerInfo(conf.lightpushnode)
     if peerInfo.isOk():
-      await mountLegacyLightPush(node)
+      (await node.mountLegacyLightPush()).isOkOr:
+        error "failed to mount legacy lightpush", error = error
+        quit(QuitFailure)
       node.mountLegacyLightPushClient()
       node.peerManager.addServicePeer(peerInfo.value, WakuLightpushCodec)
     else:

@@ -82,6 +82,8 @@ type
   PrivateKey* = crypto.PrivateKey
   Topic* = waku_core.PubsubTopic

+const MinMixNodePoolSize = 4
+
 #####################
 ## chat2 protobufs ##
 #####################

@@ -124,7 +126,7 @@ proc encode*(message: Chat2Message): ProtoBuffer =

   return serialised

-proc toString*(message: Chat2Message): string =
+proc `$`*(message: Chat2Message): string =
   # Get message date and timestamp in local time
   let time = message.timestamp.fromUnix().local().format("'<'MMM' 'dd,' 'HH:mm'>'")

@@ -331,13 +333,14 @@ proc maintainSubscription(
   const maxFailedServiceNodeSwitches = 10
   var noFailedSubscribes = 0
   var noFailedServiceNodeSwitches = 0
-  const RetryWaitMs = 2.seconds # Quick retry interval
-  const SubscriptionMaintenanceMs = 30.seconds # Subscription maintenance interval
+  # Use chronos.Duration explicitly to avoid mismatch with std/times.Duration
+  let RetryWait = chronos.seconds(2) # Quick retry interval
+  let SubscriptionMaintenance = chronos.seconds(30) # Subscription maintenance interval
   while true:
     info "maintaining subscription at", peer = constructMultiaddrStr(actualFilterPeer)
     # First use filter-ping to check if we have an active subscription
     let pingErr = (await wakuNode.wakuFilterClient.ping(actualFilterPeer)).errorOr:
-      await sleepAsync(SubscriptionMaintenanceMs)
+      await sleepAsync(SubscriptionMaintenance)
       info "subscription is live."
       continue

@@ -350,7 +353,7 @@ proc maintainSubscription(
         some(filterPubsubTopic), filterContentTopic, actualFilterPeer
       )
     ).errorOr:
-      await sleepAsync(SubscriptionMaintenanceMs)
+      await sleepAsync(SubscriptionMaintenance)
       if noFailedSubscribes > 0:
         noFailedSubscribes -= 1
       notice "subscribe request successful."

@@ -365,7 +368,7 @@ proc maintainSubscription(
       # wakunode.peerManager.peerStore.delete(actualFilterPeer)

       if noFailedSubscribes < maxFailedSubscribes:
-        await sleepAsync(RetryWaitMs) # Wait a bit before retrying
+        await sleepAsync(RetryWait) # Wait a bit before retrying
       elif not preventPeerSwitch:
         # try again with new peer without delay
         let actualFilterPeer = selectRandomServicePeer(

@@ -380,7 +383,7 @@ proc maintainSubscription(

         noFailedSubscribes = 0
       else:
-        await sleepAsync(SubscriptionMaintenanceMs)
+        await sleepAsync(SubscriptionMaintenance)

 {.pop.}
 # @TODO confutils.nim(775, 17) Error: can raise an unlisted exception: ref IOError

@@ -450,6 +453,8 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
     (await node.mountMix(conf.clusterId, mixPrivKey, conf.mixnodes)).isOkOr:
       error "failed to mount waku mix protocol: ", error = $error
       quit(QuitFailure)
+
+    await node.mountRendezvousClient(conf.clusterId)

   await node.start()

   node.peerManager.start()

@@ -587,9 +592,9 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
       error "Couldn't find any service peer"
       quit(QuitFailure)

-    #await mountLegacyLightPush(node)
     node.peerManager.addServicePeer(servicePeerInfo, WakuLightpushCodec)
     node.peerManager.addServicePeer(servicePeerInfo, WakuPeerExchangeCodec)
+    #node.peerManager.addServicePeer(servicePeerInfo, WakuRendezVousCodec)

     # Start maintaining subscription
     asyncSpawn maintainSubscription(

@@ -597,12 +602,12 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
     )
     echo "waiting for mix nodes to be discovered..."
     while true:
-      if node.getMixNodePoolSize() >= 3:
+      if node.getMixNodePoolSize() >= MinMixNodePoolSize:
        break
      discard await node.fetchPeerExchangePeers()
      await sleepAsync(1000)

-    while node.getMixNodePoolSize() < 3:
+    while node.getMixNodePoolSize() < MinMixNodePoolSize:
      info "waiting for mix nodes to be discovered",
        currentpoolSize = node.getMixNodePoolSize()
      await sleepAsync(1000)
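As a rough usage sketch only: the binary name and flag spelling below are assumptions inferred from the `conf.lightpushnode` field read in the diff above, not confirmed by it.

```shell
# Hypothetical invocation of the example client: point it at a lightpush
# service node; mix peers are then discovered via peer exchange until the
# pool reaches MinMixNodePoolSize (4).
./build/chat2 \
  --lightpushnode=/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN
```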
@@ -143,16 +143,18 @@ proc areProtocolsSupported(

 proc pingNode(
     node: WakuNode, peerInfo: RemotePeerInfo
-): Future[void] {.async, gcsafe.} =
+): Future[bool] {.async, gcsafe.} =
   try:
     let conn = await node.switch.dial(peerInfo.peerId, peerInfo.addrs, PingCodec)
     let pingDelay = await node.libp2pPing.ping(conn)
     info "Peer response time (ms)", peerId = peerInfo.peerId, ping = pingDelay.millis
+    return true
   except CatchableError:
     var msg = getCurrentExceptionMsg()
     if msg == "Future operation cancelled!":
       msg = "timedout"
     error "Failed to ping the peer", peer = peerInfo, err = msg
+    return false

 proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
   let conf: WakuCanaryConf = WakuCanaryConf.load()

@@ -268,8 +270,13 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
     let lp2pPeerStore = node.switch.peerStore
     let conStatus = node.peerManager.switch.peerStore[ConnectionBook][peer.peerId]

+    var pingSuccess = true
     if conf.ping:
-      discard await pingFut
+      try:
+        pingSuccess = await pingFut
+      except CatchableError as exc:
+        pingSuccess = false
+        error "Ping operation failed or timed out", error = exc.msg

     if conStatus in [Connected, CanConnect]:
       let nodeProtocols = lp2pPeerStore[ProtoBook][peer.peerId]

@@ -278,6 +285,11 @@ proc main(rng: ref HmacDrbgContext): Future[int] {.async.} =
         error "Not all protocols are supported",
           expected = conf.protocols, supported = nodeProtocols
         quit(QuitFailure)
+
+      # Check ping result if ping was enabled
+      if conf.ping and not pingSuccess:
+        error "Node is reachable and supports protocols but ping failed - connection may be unstable"
+        quit(QuitFailure)
     elif conStatus == CannotConnect:
       error "Could not connect", peerId = peer.peerId
       quit(QuitFailure)
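A sketch of how the stricter ping handling surfaces to an operator; the `--ping` flag name is inferred from the `conf.ping` field above and the other flag spellings are assumptions, so verify them against the canary's `--help` output:

```shell
# With the change above, a node that supports the requested protocols but
# fails the libp2p ping now makes the canary exit non-zero.
./build/wakucanary \
  --address=/ip4/127.0.0.1/tcp/60000/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN \
  --protocol=relay \
  --ping=true
echo "exit code: $?"
```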
@@ -38,6 +38,9 @@ A particular OpenAPI spec can be easily imported into [Postman](https://www.post
 curl http://localhost:8645/debug/v1/info -s | jq
 ```

+### Store API
+
+The `page_size` flag in the Store API has a default value of 20 and a max value of 100.
+
 ### Node configuration
 Find details [here](https://github.com/waku-org/nwaku/tree/master/docs/operators/how-to/configure-rest-api.md)
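To see the paging limit in practice, a request along these lines should work against a local node; the endpoint path and the `pageSize` query-parameter spelling are assumptions here, so check them against the OpenAPI spec mentioned above:

```shell
# Ask the Store API for up to 50 messages; values above 100 are capped.
curl -s "http://localhost:8645/store/v3/messages?pageSize=50" | jq
```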
@@ -6,44 +6,52 @@ For more context, see https://trunkbaseddevelopment.com/branch-for-release/

 ## How to do releases

-### Before release
+### Prerequisites

-Ensure all items in this list are ticked:
-- [ ] All issues under the corresponding release [milestone](https://github.com/waku-org/nwaku/milestones) has been closed or, after consultation, deferred to a next release.
-- [ ] All submodules are up to date.
-> **IMPORTANT:** Updating submodules requires a PR (and very often several "fixes" to maintain compatibility with the changes in submodules). That PR process must be done and merged a couple of days before the release.
+- All issues under the corresponding release [milestone](https://github.com/waku-org/nwaku/milestones) have been closed or, after consultation, deferred to the next release.
+- All submodules are up to date.
+> Updating submodules requires a PR (and very often several "fixes" to maintain compatibility with the changes in submodules). That PR process must be done and merged a couple of days before the release.
 > In case the submodules update has a low effort and/or risk for the release, follow the ["Update submodules"](./git-submodules.md) instructions.
 > If the effort or risk is too high, consider postponing the submodules upgrade for the subsequent release or delaying the current release until the submodules updates are included in the release candidate.
-- [ ] The [js-waku CI tests](https://github.com/waku-org/js-waku/actions/workflows/ci.yml) pass against the release candidate (i.e. nwaku latest `master`).
-> **NOTE:** This serves as a basic regression test against typical clients of nwaku.
-> The specific job that needs to pass is named `node_with_nwaku_master`.

-### Performing the release
+### Release types
+
+- **Full release**: follow the entire [Release process](#release-process--step-by-step).
+- **Beta release**: skip just `6a` and `6c` steps from [Release process](#release-process--step-by-step).
+- Choose the appropriate release process based on the release type:
+  - [Full Release](../../.github/ISSUE_TEMPLATE/prepare_full_release.md)
+  - [Beta Release](../../.github/ISSUE_TEMPLATE/prepare_beta_release.md)
+
+### Release process ( step by step )

 1. Checkout a release branch from master

 ```
-git checkout -b release/v0.1.0
+git checkout -b release/v0.X.0
 ```

-1. Update `CHANGELOG.md` and ensure it is up to date. Use the helper Make target to get PR based release-notes/changelog update.
+2. Update `CHANGELOG.md` and ensure it is up to date. Use the helper Make target to get PR based release-notes/changelog update.

 ```
 make release-notes
 ```

-1. Create a release-candidate tag with the same name as release and `-rc.N` suffix a few days before the official release and push it
+3. Create a release-candidate tag with the same name as release and `-rc.N` suffix a few days before the official release and push it

 ```
-git tag -as v0.1.0-rc.0 -m "Initial release."
-git push origin v0.1.0-rc.0
+git tag -as v0.X.0-rc.0 -m "Initial release."
+git push origin v0.X.0-rc.0
 ```

-This will trigger a [workflow](../../.github/workflows/pre-release.yml) which will build RC artifacts and create and publish a Github release
+This will trigger a [workflow](../../.github/workflows/pre-release.yml) which will build RC artifacts and create and publish a GitHub release

-1. Open a PR from the release branch for others to review the included changes and the release-notes
+4. Open a PR from the release branch for others to review the included changes and the release-notes

-1. In case additional changes are needed, create a new RC tag
+5. In case additional changes are needed, create a new RC tag

 Make sure the new tag is associated
 with CHANGELOG update.
@@ -52,25 +60,57 @@ Ensure all items in this list are ticked:
 # Make changes, rebase and create new tag
 # Squash to one commit and make a nice commit message
 git rebase -i origin/master
-git tag -as v0.1.0-rc.1 -m "Initial release."
-git push origin v0.1.0-rc.1
+git tag -as v0.X.0-rc.1 -m "Initial release."
+git push origin v0.X.0-rc.1
 ```

-1. Validate the release. For the release validation process, please refer to the following [guide](https://www.notion.so/Release-Process-61234f335b904cd0943a5033ed8f42b4#47af557e7f9744c68fdbe5240bf93ca9)
-
-1. Once the release-candidate has been validated, create a final release tag and push it.
-We also need to merge release branch back to master as a final step.
+Similarly use v0.X.0-rc.2, v0.X.0-rc.3 etc. for additional RC tags.
+
+6. **Validation of release candidate**
+
+6a. **Automated testing**
+   - Ensure all the unit tests (specifically js-waku tests) are green against the release candidate.
+   - Ask Vac-QA and Vac-DST to run their available tests against the release candidate; share all release candidates with both teams.
+   > We need an additional report like [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f) specifically from the DST team.
+
+6b. **Waku fleet testing**
+   - Start job on `waku.sandbox` and `waku.test` [Deployment job](https://ci.infra.status.im/job/nim-waku/), wait for completion of the job. If it fails, then debug it.
+   - After completion, disable [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to `master`.
+   - Verify at https://fleets.waku.org/ that the fleet is locked to the release candidate version.
+   - Check if the image is created at [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab).
+   - Search _Kibana_ logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test` and `waku.sandbox`.
+     - Most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")` OR `(fleet: "waku.sandbox" AND message: "SIGSEGV")`.
+   - Enable the `waku.test` fleet again to resume auto-deployment of the latest `master` commit.
+
+6c. **Status fleet testing**
+   - Deploy release candidate to `status.staging`
+   - Perform [sanity check](https://www.notion.so/How-to-test-Nwaku-on-Status-12c6e4b9bf06420ca868bd199129b425) and log results as comments in this issue.
+     - Connect 2 instances to `status.staging` fleet, one in relay mode, the other one in light client.
+     - 1:1 Chats with each other
+     - Send and receive messages in a community
+     - Close one instance, send messages with second instance, reopen first instance and confirm messages sent while offline are retrieved from store
+   - Perform checks based on _end-user impact_.
+   - Inform other (Waku and Status) CCs to point their instances to `status.staging` for a few days. Ping Status colleagues from their Discord server or [Status community](https://status.app) (not a blocking point).
+   - Ask Status-QA to perform sanity checks (as described above) and checks based on _end user impact_; specify the version being tested.
+   - Ask Status-QA or infra to run the automated Status e2e tests against `status.staging`.
+   - Get other CCs' sign-off: they should comment on this PR, e.g., "Used the app for a week, no problem." If problems are reported, resolve them and create a new RC.
+   - **Get Status-QA sign-off**, ensuring that the `status.test` update will not disturb ongoing activities.
+
+7. Once the release-candidate has been validated, create a final release tag and push it.
+We also need to merge the release branch back into master as a final step.

 ```
-git checkout release/v0.1.0
-git tag -as v0.1.0 -m "Initial release."
-git push origin v0.1.0
+git checkout release/v0.X.0
+git tag -as v0.X.0 -m "final release." (use v0.X.0-beta as the tag if you are creating a beta release)
+git push origin v0.X.0
 git switch master
 git pull
-git merge release/v0.1.0
+git merge release/v0.X.0
 ```

+8. Update `waku-rust-bindings`, `waku-simulator` and `nwaku-compose` to use the new release.
+
-1. Create a [Github release](https://github.com/waku-org/nwaku/releases) from the release tag.
+9. Create a [GitHub release](https://github.com/waku-org/nwaku/releases) from the release tag.

 * Add binaries produced by the ["Upload Release Asset"](https://github.com/waku-org/nwaku/actions/workflows/release-assets.yml) workflow. Where possible, test the binaries before uploading to the release.

@@ -80,22 +120,10 @@ We also need to merge release branch back to master as a final step.
 2. Deploy the release image to [Dockerhub](https://hub.docker.com/r/wakuorg/nwaku) by triggering [the manual Jenkins deployment job](https://ci.infra.status.im/job/nim-waku/job/docker-manual/).
 > Ensure the following build parameters are set:
 > - `MAKE_TARGET`: `wakunode2`
-> - `IMAGE_TAG`: the release tag (e.g. `v0.16.0`)
+> - `IMAGE_TAG`: the release tag (e.g. `v0.36.0`)
 > - `IMAGE_NAME`: `wakuorg/nwaku`
 > - `NIMFLAGS`: `--colors:off -d:disableMarchNative -d:chronicles_colors:none -d:postgres`
-> - `GIT_REF` the release tag (e.g. `v0.16.0`)
-3. Update the default nwaku image in [nwaku-compose](https://github.com/waku-org/nwaku-compose/blob/master/docker-compose.yml)
-4. Deploy the release to appropriate fleets:
-- Inform clients
-> **NOTE:** known clients are currently using some version of js-waku, go-waku, nwaku or waku-rs.
-> Clients are reachable via the corresponding channels on the Vac Discord server.
-> It should be enough to inform clients on the `#nwaku` and `#announce` channels on Discord.
-> Informal conversations with specific repo maintainers are often part of this process.
-- Check if nwaku configuration parameters changed. If so [update fleet configuration](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64) in [infra-nim-waku](https://github.com/status-im/infra-nim-waku)
-- Deploy release to the `waku.sandbox` fleet from [Jenkins](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-sandbox/).
-- Ensure that nodes successfully start up and monitor health using [Grafana](https://grafana.infra.status.im/d/qrp_ZCTGz/nim-waku-v2?orgId=1) and [Kibana](https://kibana.infra.status.im/goto/a7728e70-eb26-11ec-81d1-210eb3022c76).
-- If necessary, revert by deploying the previous release. Download logs and open a bug report issue.
-5. Submit a PR to merge the release branch back to `master`. Make sure you use the option `Merge pull request (Create a merge commit)` to perform such merge.
+> - `GIT_REF` the release tag (e.g. `v0.36.0`)

 ### Performing a patch release
@@ -116,4 +144,14 @@ We also need to merge release branch back to master as a final step.

 4. Once the release-candidate has been validated and changelog PR got merged, cherry-pick the changelog update from master to the release branch. Create a final release tag and push it.

-5. Create a [Github release](https://github.com/waku-org/nwaku/releases) from the release tag and follow the same post-release process as usual.
+5. Create a [GitHub release](https://github.com/waku-org/nwaku/releases) from the release tag and follow the same post-release process as usual.
+
+### Links
+
+- [Release process](https://github.com/waku-org/nwaku/blob/master/docs/contributors/release-process.md)
+- [Release notes](https://github.com/waku-org/nwaku/blob/master/CHANGELOG.md)
+- [Fleet ownership](https://www.notion.so/Fleet-Ownership-7532aad8896d46599abac3c274189741?pvs=4#d2d2f0fe4b3c429fbd860a1d64f89a64)
+- [Infra-nim-waku](https://github.com/status-im/infra-nim-waku)
+- [Jenkins](https://ci.infra.status.im/job/nim-waku/)
+- [Fleets](https://fleets.waku.org/)
+- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab)
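For beta releases specifically, step 7's tagging commands become the variant below, a direct restatement of the parenthetical note in the diff above (the `-m` message text is illustrative):

```shell
git checkout release/v0.X.0
git tag -as v0.X.0-beta -m "beta release."
git push origin v0.X.0-beta
```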
@@ -1,4 +1,3 @@
-
 # Configure a REST API node

 A subset of the node configuration can be used to modify the behaviour of the HTTP REST API.
@@ -21,3 +20,5 @@ Example:
 ```shell
 wakunode2 --rest=true
 ```
+
+The `page_size` flag in the Store API has a default value of 20 and a max value of 100.
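A slightly fuller invocation for illustration; `--rest-address` and `--rest-port` are common nwaku REST options but are an assumption here, not taken from the page above:

```shell
# Enable the REST API and bind it explicitly (flag names assumed).
wakunode2 --rest=true --rest-address=127.0.0.1 --rest-port=8645
curl -s http://127.0.0.1:8645/debug/v1/info | jq
```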
@@ -19,283 +19,309 @@ pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
 pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
 int callback_executed = 0;

-void waitForCallback() {
+void waitForCallback()
+{
   pthread_mutex_lock(&mutex);
-  while (!callback_executed) {
+  while (!callback_executed)
+  {
     pthread_cond_wait(&cond, &mutex);
   }
   callback_executed = 0;
   pthread_mutex_unlock(&mutex);
 }

 #define WAKU_CALL(call) \
-  do { \
-    int ret = call; \
-    if (ret != 0) { \
-      printf("Failed the call to: %s. Returned code: %d\n", #call, ret); \
-      exit(1); \
-    } \
-    waitForCallback(); \
-  } while (0)
+  do \
+  { \
+    int ret = call; \
+    if (ret != 0) \
+    { \
+      printf("Failed the call to: %s. Returned code: %d\n", #call, ret); \
+      exit(1); \
+    } \
+    waitForCallback(); \
+  } while (0)

-struct ConfigNode {
+struct ConfigNode
+{
   char host[128];
   int port;
   char key[128];
   int relay;
   char peers[2048];
   int store;
   char storeNode[2048];
   char storeRetentionPolicy[64];
   char storeDbUrl[256];
   int storeVacuum;
   int storeDbMigration;
   int storeMaxNumDbConnections;
 };

 // libwaku Context
-void* ctx;
+void *ctx;

 // For the case of C language we don't need to store a particular userData
-void* userData = NULL;
+void *userData = NULL;

 // Arguments parsing
 static char doc[] = "\nC example that shows how to use the waku library.";
 static char args_doc[] = "";

 static struct argp_option options[] = {
-    { "host", 'h', "HOST", 0, "IP to listen for for LibP2P traffic. (default: \"0.0.0.0\")"},
-    { "port", 'p', "PORT", 0, "TCP listening port. (default: \"60000\")"},
-    { "key", 'k', "KEY", 0, "P2P node private key as 64 char hex string."},
-    { "relay", 'r', "RELAY", 0, "Enable relay protocol: 1 or 0. (default: 1)"},
-    { "peers", 'a', "PEERS", 0, "Comma-separated list of peer-multiaddress to connect\
+    {"host", 'h', "HOST", 0, "IP to listen for for LibP2P traffic. (default: \"0.0.0.0\")"},
+    {"port", 'p', "PORT", 0, "TCP listening port. (default: \"60000\")"},
+    {"key", 'k', "KEY", 0, "P2P node private key as 64 char hex string."},
+    {"relay", 'r', "RELAY", 0, "Enable relay protocol: 1 or 0. (default: 1)"},
+    {"peers", 'a', "PEERS", 0, "Comma-separated list of peer-multiaddress to connect\
 to. (default: \"\") e.g. \"/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\""},
-    { 0 }
-};
+    {0}};

-static error_t parse_opt(int key, char *arg, struct argp_state *state) {
+static error_t parse_opt(int key, char *arg, struct argp_state *state)
+{
   struct ConfigNode *cfgNode = state->input;
-  switch (key) {
+  switch (key)
+  {
   case 'h':
     snprintf(cfgNode->host, 128, "%s", arg);
     break;
   case 'p':
     cfgNode->port = atoi(arg);
     break;
   case 'k':
     snprintf(cfgNode->key, 128, "%s", arg);
     break;
   case 'r':
     cfgNode->relay = atoi(arg);
     break;
   case 'a':
     snprintf(cfgNode->peers, 2048, "%s", arg);
     break;
   case ARGP_KEY_ARG:
     if (state->arg_num >= 1) /* Too many arguments. */
       argp_usage(state);
     break;
   case ARGP_KEY_END:
     break;
   default:
     return ARGP_ERR_UNKNOWN;
   }

   return 0;
 }

-void signal_cond() {
+void signal_cond()
+{
   pthread_mutex_lock(&mutex);
   callback_executed = 1;
   pthread_cond_signal(&cond);
   pthread_mutex_unlock(&mutex);
 }

-static struct argp argp = { options, parse_opt, args_doc, doc, 0, 0, 0 };
+static struct argp argp = {options, parse_opt, args_doc, doc, 0, 0, 0};

-void event_handler(int callerRet, const char* msg, size_t len, void* userData) {
-  if (callerRet == RET_ERR) {
+void event_handler(int callerRet, const char *msg, size_t len, void *userData)
+{
+  if (callerRet == RET_ERR)
+  {
     printf("Error: %s\n", msg);
     exit(1);
   }
-  else if (callerRet == RET_OK) {
+  else if (callerRet == RET_OK)
+  {
     printf("Receiving event: %s\n", msg);
   }

   signal_cond();
 }

-void on_event_received(int callerRet, const char* msg, size_t len, void* userData) {
-  if (callerRet == RET_ERR) {
+void on_event_received(int callerRet, const char *msg, size_t len, void *userData)
+{
+  if (callerRet == RET_ERR)
+  {
     printf("Error: %s\n", msg);
     exit(1);
   }
-  else if (callerRet == RET_OK) {
+  else if (callerRet == RET_OK)
+  {
     printf("Receiving event: %s\n", msg);
   }
 }

-char* contentTopic = NULL;
-void handle_content_topic(int callerRet, const char* msg, size_t len, void* userData) {
-  if (contentTopic != NULL) {
+char *contentTopic = NULL;
+void handle_content_topic(int callerRet, const char *msg, size_t len, void *userData)
+{
+  if (contentTopic != NULL)
+  {
     free(contentTopic);
   }

   contentTopic = malloc(len * sizeof(char) + 1);
   strcpy(contentTopic, msg);
   signal_cond();
 }

-char* publishResponse = NULL;
-void handle_publish_ok(int callerRet, const char* msg, size_t len, void* userData) {
+char *publishResponse = NULL;
+void handle_publish_ok(int callerRet, const char *msg, size_t len, void *userData)
+{
   printf("Publish Ok: %s %lu\n", msg, len);

-  if (publishResponse != NULL) {
+  if (publishResponse != NULL)
+  {
     free(publishResponse);
   }

   publishResponse = malloc(len * sizeof(char) + 1);
   strcpy(publishResponse, msg);
 }

 #define MAX_MSG_SIZE 65535

-void publish_message(const char* msg) {
+void publish_message(const char *msg)
+{
   char jsonWakuMsg[MAX_MSG_SIZE];
   char *msgPayload = b64_encode(msg, strlen(msg));

-  WAKU_CALL( waku_content_topic(ctx,
-                                "appName",
-                                1,
-                                "contentTopicName",
-                                "encoding",
-                                handle_content_topic,
-                                userData) );
+  WAKU_CALL(waku_content_topic(ctx,
+                               handle_content_topic,
+                               userData,
+                               "appName",
+                               1,
+                               "contentTopicName",
+                               "encoding"));
   snprintf(jsonWakuMsg,
            MAX_MSG_SIZE,
            "{\"payload\":\"%s\",\"contentTopic\":\"%s\"}",
            msgPayload, contentTopic);

   free(msgPayload);

-  WAKU_CALL( waku_relay_publish(ctx,
-                                "/waku/2/rs/16/32",
-                                jsonWakuMsg,
-                                10000 /*timeout ms*/,
-                                event_handler,
-                                userData) );
+  WAKU_CALL(waku_relay_publish(ctx,
+                               event_handler,
+                               userData,
+                               "/waku/2/rs/16/32",
+                               jsonWakuMsg,
+                               10000 /*timeout ms*/));
 }

-void show_help_and_exit() {
+void show_help_and_exit()
+{
   printf("Wrong parameters\n");
   exit(1);
 }

-void print_default_pubsub_topic(int callerRet, const char* msg, size_t len, void* userData) {
+void print_default_pubsub_topic(int callerRet, const char *msg, size_t len, void *userData)
+{
   printf("Default pubsub topic: %s\n", msg);
   signal_cond();
 }

-void print_waku_version(int callerRet, const char* msg, size_t len, void* userData) {
+void print_waku_version(int callerRet, const char *msg, size_t len, void *userData)
+{
   printf("Git Version: %s\n", msg);
   signal_cond();
 }

 // Beginning of UI program logic

-enum PROGRAM_STATE {
+enum PROGRAM_STATE
+{
   MAIN_MENU,
   SUBSCRIBE_TOPIC_MENU,
   CONNECT_TO_OTHER_NODE_MENU,
   PUBLISH_MESSAGE_MENU
 };

 enum PROGRAM_STATE current_state = MAIN_MENU;

-void show_main_menu() {
+void show_main_menu()
+{
   printf("\nPlease, select an option:\n");
   printf("\t1.) Subscribe to topic\n");
   printf("\t2.) Connect to other node\n");
   printf("\t3.) Publish a message\n");
 }

-void handle_user_input() {
+void handle_user_input()
+{
   char cmd[1024];
   memset(cmd, 0, 1024);
   int numRead = read(0, cmd, 1024);
-  if (numRead <= 0) {
+  if (numRead <= 0)
+  {
     return;
   }

   switch (atoi(cmd))
   {
   case SUBSCRIBE_TOPIC_MENU:
   {
     printf("Indicate the Pubsubtopic to subscribe:\n");
     char pubsubTopic[128];
     scanf("%127s", pubsubTopic);

-    WAKU_CALL( waku_relay_subscribe(ctx,
-                                    pubsubTopic,
-                                    event_handler,
-                                    userData) );
+    WAKU_CALL(waku_relay_subscribe(ctx,
+                                   event_handler,
+                                   userData,
+                                   pubsubTopic));
     printf("The subscription went well\n");

     show_main_menu();
   }
   break;

   case CONNECT_TO_OTHER_NODE_MENU:
-    printf("Connecting to a node. Please indicate the peer Multiaddress:\n");
-    printf("e.g.: /ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\n");
-    char peerAddr[512];
-    scanf("%511s", peerAddr);
-    WAKU_CALL(waku_connect(ctx, peerAddr, 10000 /* timeoutMs */, event_handler, userData));
+    // printf("Connecting to a node. Please indicate the peer Multiaddress:\n");
+    // printf("e.g.: /ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\n");
+    // char peerAddr[512];
+    // scanf("%511s", peerAddr);
+    // WAKU_CALL(waku_connect(ctx, peerAddr, 10000 /* timeoutMs */, event_handler, userData));
     show_main_menu();
     break;

   case PUBLISH_MESSAGE_MENU:
   {
     printf("Type the message to publish:\n");
     char msg[1024];
     scanf("%1023s", msg);

     publish_message(msg);

     show_main_menu();
   }
   break;

   case MAIN_MENU:
     break;
   }
 }

 // End of UI program logic

-int main(int argc, char** argv) {
+int main(int argc, char **argv)
+{
   struct ConfigNode cfgNode;
   // default values
   snprintf(cfgNode.host, 128, "0.0.0.0");
   cfgNode.port = 60000;
   cfgNode.relay = 1;

   cfgNode.store = 0;
   snprintf(cfgNode.storeNode, 2048, "");
   snprintf(cfgNode.storeRetentionPolicy, 64, "time:6000000");
   snprintf(cfgNode.storeDbUrl, 256, "postgres://postgres:test123@localhost:5432/postgres");
   cfgNode.storeVacuum = 0;
   cfgNode.storeDbMigration = 0;
   cfgNode.storeMaxNumDbConnections = 30;

-  if (argp_parse(&argp, argc, argv, 0, 0, &cfgNode)
-      == ARGP_ERR_UNKNOWN) {
+  if (argp_parse(&argp, argc, argv, 0, 0, &cfgNode) == ARGP_ERR_UNKNOWN)
+  {
     show_help_and_exit();
   }

   char jsonConfig[5000];
   snprintf(jsonConfig, 5000, "{ \
     \"clusterId\": 16, \
     \"shards\": [ 1, 32, 64, 128, 256 ], \
     \"numShardsInNetwork\": 257, \
@@ -313,54 +339,56 @@ int main(int argc, char** argv) {
     \"discv5UdpPort\": 9999, \
     \"dnsDiscoveryUrl\": \"enrtree://AMOJVZX4V6EXP7NTJPMAYJYST2QP6AJXYW76IU6VGJS7UVSNDYZG4@boot.prod.status.nodes.status.im\", \
     \"dnsDiscoveryNameServers\": [\"8.8.8.8\", \"1.0.0.1\"] \
-    }", cfgNode.host,
-        cfgNode.port,
-        cfgNode.relay ? "true":"false",
-        cfgNode.store ? "true":"false",
-        cfgNode.storeDbUrl,
-        cfgNode.storeRetentionPolicy,
-        cfgNode.storeMaxNumDbConnections);
+    }",
+           cfgNode.host,
+           cfgNode.port,
+           cfgNode.relay ? "true" : "false",
+           cfgNode.store ? "true" : "false",
+           cfgNode.storeDbUrl,
+           cfgNode.storeRetentionPolicy,
+           cfgNode.storeMaxNumDbConnections);

   ctx = waku_new(jsonConfig, event_handler, userData);
   waitForCallback();

-  WAKU_CALL( waku_default_pubsub_topic(ctx, print_default_pubsub_topic, userData) );
-  WAKU_CALL( waku_version(ctx, print_waku_version, userData) );
+  WAKU_CALL(waku_default_pubsub_topic(ctx, print_default_pubsub_topic, userData));
+  WAKU_CALL(waku_version(ctx, print_waku_version, userData));

   printf("Bind addr: %s:%u\n", cfgNode.host, cfgNode.port);
-  printf("Waku Relay enabled: %s\n", cfgNode.relay == 1 ? "YES": "NO");
+  printf("Waku Relay enabled: %s\n", cfgNode.relay == 1 ? "YES" : "NO");

-  waku_set_event_callback(ctx, on_event_received, userData);
+  set_event_callback(ctx, on_event_received, userData);

   waku_start(ctx, event_handler, userData);
   waitForCallback();

-  WAKU_CALL( waku_listen_addresses(ctx, event_handler, userData) );
+  WAKU_CALL(waku_listen_addresses(ctx, event_handler, userData));

-  WAKU_CALL( waku_relay_subscribe(ctx,
-                                  "/waku/2/rs/0/0",
-                                  event_handler,
-                                  userData) );
+  WAKU_CALL(waku_relay_subscribe(ctx,
+                                 event_handler,
+                                 userData,
+                                 "/waku/2/rs/16/32"));

-  WAKU_CALL( waku_discv5_update_bootnodes(ctx,
-                                          "[\"enr:-QEkuEBIkb8q8_mrorHndoXH9t5N6ZfD-jehQCrYeoJDPHqT0l0wyaONa2-piRQsi3oVKAzDShDVeoQhy0uwN1xbZfPZAYJpZIJ2NIJpcIQiQlleim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwA2Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQKnGt-GSgqPSf3IAPM7bFgTlpczpMZZLF3geeoNNsxzSoN0Y3CCdl-DdWRwgiMohXdha3UyDw\",\"enr:-QEkuEB3WHNS-xA3RDpfu9A2Qycr3bN3u7VoArMEiDIFZJ66F1EB3d4wxZN1hcdcOX-RfuXB-MQauhJGQbpz3qUofOtLAYJpZIJ2NIJpcIQI2SVcim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmFjLWNuLWhvbmdrb25nLWMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwA2Ni9ub2RlLTAxLmFjLWNuLWhvbmdrb25nLWMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQPK35Nnz0cWUtSAhBp7zvHEhyU_AqeQUlqzLiLxfP2L4oN0Y3CCdl-DdWRwgiMohXdha3UyDw\"]",
-                                          event_handler,
-                                          userData) );
+  WAKU_CALL(waku_discv5_update_bootnodes(ctx,
+                                         event_handler,
+                                         userData,
+                                         "[\"enr:-QEkuEBIkb8q8_mrorHndoXH9t5N6ZfD-jehQCrYeoJDPHqT0l0wyaONa2-piRQsi3oVKAzDShDVeoQhy0uwN1xbZfPZAYJpZIJ2NIJpcIQiQlleim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwA2Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQKnGt-GSgqPSf3IAPM7bFgTlpczpMZZLF3geeoNNsxzSoN0Y3CCdl-DdWRwgiMohXdha3UyDw\",\"enr:-QEkuEB3WHNS-xA3RDpfu9A2Qycr3bN3u7VoArMEiDIFZJ66F1EB3d4wxZN1hcdcOX-RfuXB-MQauhJGQbpz3qUofOtLAYJpZIJ2NIJpcIQI2SVcim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmFjLWNuLWhvbmdrb25nLWMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwA2Ni9ub2RlLTAxLmFjLWNuLWhvbmdrb25nLWMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQPK35Nnz0cWUtSAhBp7zvHEhyU_AqeQUlqzLiLxfP2L4oN0Y3CCdl-DdWRwgiMohXdha3UyDw\"]"));

-  WAKU_CALL( waku_get_peerids_from_peerstore(ctx,
-                                             event_handler,
-                                             userData) );
+  WAKU_CALL(waku_get_peerids_from_peerstore(ctx,
+                                            event_handler,
+                                            userData));

   show_main_menu();
-  while(1) {
+  while (1)
+  {
     handle_user_input();

     // Uncomment the following if need to test the metrics retrieval
     // WAKU_CALL( waku_get_metrics(ctx,
     //                             event_handler,
     //                             userData) );
   }

   pthread_mutex_destroy(&mutex);
   pthread_cond_destroy(&cond);
 }
@ -21,37 +21,43 @@ pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
 pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
 int callback_executed = 0;
 
-void waitForCallback() {
+void waitForCallback()
+{
  pthread_mutex_lock(&mutex);
- while (!callback_executed) {
+ while (!callback_executed)
+ {
   pthread_cond_wait(&cond, &mutex);
  }
  callback_executed = 0;
  pthread_mutex_unlock(&mutex);
 }
 
-void signal_cond() {
+void signal_cond()
+{
  pthread_mutex_lock(&mutex);
  callback_executed = 1;
  pthread_cond_signal(&cond);
  pthread_mutex_unlock(&mutex);
 }
 
 #define WAKU_CALL(call) \
-do { \
- int ret = call; \
- if (ret != 0) { \
-  std::cout << "Failed the call to: " << #call << ". Code: " << ret << "\n"; \
- } \
- waitForCallback(); \
-} while (0)
+do \
+{ \
+ int ret = call; \
+ if (ret != 0) \
+ { \
+  std::cout << "Failed the call to: " << #call << ". Code: " << ret << "\n"; \
+ } \
+ waitForCallback(); \
+} while (0)
 
-struct ConfigNode {
+struct ConfigNode
+{
  char host[128];
  int port;
  char key[128];
  int relay;
  char peers[2048];
 };
 
 // Arguments parsing
@ -59,70 +65,76 @@ static char doc[] = "\nC example that shows how to use the waku library.";
 static char args_doc[] = "";
 
 static struct argp_option options[] = {
- { "host", 'h', "HOST", 0, "IP to listen for for LibP2P traffic. (default: \"0.0.0.0\")"},
- { "port", 'p', "PORT", 0, "TCP listening port. (default: \"60000\")"},
- { "key", 'k', "KEY", 0, "P2P node private key as 64 char hex string."},
- { "relay", 'r', "RELAY", 0, "Enable relay protocol: 1 or 0. (default: 1)"},
- { "peers", 'a', "PEERS", 0, "Comma-separated list of peer-multiaddress to connect\
-   to. (default: \"\") e.g. \"/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\""},
- { 0 }
-};
+ {"host", 'h', "HOST", 0, "IP to listen for for LibP2P traffic. (default: \"0.0.0.0\")"},
+ {"port", 'p', "PORT", 0, "TCP listening port. (default: \"60000\")"},
+ {"key", 'k', "KEY", 0, "P2P node private key as 64 char hex string."},
+ {"relay", 'r', "RELAY", 0, "Enable relay protocol: 1 or 0. (default: 1)"},
+ {"peers", 'a', "PEERS", 0, "Comma-separated list of peer-multiaddress to connect\
+   to. (default: \"\") e.g. \"/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\""},
+ {0}};
 
-static error_t parse_opt(int key, char *arg, struct argp_state *state) {
+static error_t parse_opt(int key, char *arg, struct argp_state *state)
+{
 
- struct ConfigNode *cfgNode = (ConfigNode *) state->input;
- switch (key) {
+ struct ConfigNode *cfgNode = (ConfigNode *)state->input;
+ switch (key)
+ {
  case 'h':
   snprintf(cfgNode->host, 128, "%s", arg);
   break;
  case 'p':
   cfgNode->port = atoi(arg);
   break;
  case 'k':
   snprintf(cfgNode->key, 128, "%s", arg);
   break;
  case 'r':
   cfgNode->relay = atoi(arg);
   break;
  case 'a':
   snprintf(cfgNode->peers, 2048, "%s", arg);
   break;
  case ARGP_KEY_ARG:
   if (state->arg_num >= 1) /* Too many arguments. */
    argp_usage(state);
   break;
  case ARGP_KEY_END:
   break;
  default:
   return ARGP_ERR_UNKNOWN;
  }
 
  return 0;
 }
 
-void event_handler(const char* msg, size_t len) {
+void event_handler(const char *msg, size_t len)
+{
  printf("Receiving event: %s\n", msg);
 }
 
-void handle_error(const char* msg, size_t len) {
+void handle_error(const char *msg, size_t len)
+{
  printf("handle_error: %s\n", msg);
  exit(1);
 }
 
 template <class F>
-auto cify(F&& f) {
+auto cify(F &&f)
+{
  static F fn = std::forward<F>(f);
- return [](int callerRet, const char* msg, size_t len, void* userData) {
+ return [](int callerRet, const char *msg, size_t len, void *userData)
+ {
   signal_cond();
   return fn(msg, len);
  };
 }
 
-static struct argp argp = { options, parse_opt, args_doc, doc, 0, 0, 0 };
+static struct argp argp = {options, parse_opt, args_doc, doc, 0, 0, 0};
 
 // Beginning of UI program logic
 
-enum PROGRAM_STATE {
+enum PROGRAM_STATE
+{
  MAIN_MENU,
  SUBSCRIBE_TOPIC_MENU,
  CONNECT_TO_OTHER_NODE_MENU,
@ -131,18 +143,21 @@ enum PROGRAM_STATE {
 
 enum PROGRAM_STATE current_state = MAIN_MENU;
 
-void show_main_menu() {
+void show_main_menu()
+{
  printf("\nPlease, select an option:\n");
  printf("\t1.) Subscribe to topic\n");
  printf("\t2.) Connect to other node\n");
  printf("\t3.) Publish a message\n");
 }
 
-void handle_user_input(void* ctx) {
+void handle_user_input(void *ctx)
+{
  char cmd[1024];
  memset(cmd, 0, 1024);
  int numRead = read(0, cmd, 1024);
- if (numRead <= 0) {
+ if (numRead <= 0)
+ {
   return;
  }
 
@ -154,12 +169,11 @@ void handle_user_input(void* ctx) {
   char pubsubTopic[128];
   scanf("%127s", pubsubTopic);
 
-  WAKU_CALL( waku_relay_subscribe(ctx,
-                                  pubsubTopic,
-                                  cify([&](const char* msg, size_t len) {
-                                    event_handler(msg, len);
-                                  }),
-                                  nullptr) );
+  WAKU_CALL(waku_relay_subscribe(ctx,
+                                 cify([&](const char *msg, size_t len)
+                                      { event_handler(msg, len); }),
+                                 nullptr,
+                                 pubsubTopic));
   printf("The subscription went well\n");
 
   show_main_menu();
@ -171,15 +185,14 @@ void handle_user_input(void* ctx) {
   printf("e.g.: /ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmVFXtAfSj4EiR7mL2KvL4EE2wztuQgUSBoj2Jx2KeXFLN\n");
   char peerAddr[512];
   scanf("%511s", peerAddr);
-  WAKU_CALL( waku_connect(ctx,
-                          peerAddr,
-                          10000 /* timeoutMs */,
-                          cify([&](const char* msg, size_t len) {
-                            event_handler(msg, len);
-                          }),
-                          nullptr));
+  WAKU_CALL(waku_connect(ctx,
+                         cify([&](const char *msg, size_t len)
+                              { event_handler(msg, len); }),
+                         nullptr,
+                         peerAddr,
+                         10000 /* timeoutMs */));
 
   show_main_menu();
   break;
 
  case PUBLISH_MESSAGE_MENU:
  {
@ -193,28 +206,26 @@ void handle_user_input(void* ctx) {
 
   std::string contentTopic;
   waku_content_topic(ctx,
+                     cify([&contentTopic](const char *msg, size_t len)
+                          { contentTopic = msg; }),
+                     nullptr,
                      "appName",
                      1,
                      "contentTopicName",
-                     "encoding",
-                     cify([&contentTopic](const char* msg, size_t len) {
-                       contentTopic = msg;
-                     }),
-                     nullptr);
+                     "encoding");
 
   snprintf(jsonWakuMsg,
            2048,
            "{\"payload\":\"%s\",\"contentTopic\":\"%s\"}",
            msgPayload.data(), contentTopic.c_str());
 
-  WAKU_CALL( waku_relay_publish(ctx,
-                                "/waku/2/rs/16/32",
-                                jsonWakuMsg,
-                                10000 /*timeout ms*/,
-                                cify([&](const char* msg, size_t len) {
-                                  event_handler(msg, len);
-                                }),
-                                nullptr) );
+  WAKU_CALL(waku_relay_publish(ctx,
                                cify([&](const char *msg, size_t len)
                                     { event_handler(msg, len); }),
+                               nullptr,
+                               "/waku/2/rs/16/32",
+                               jsonWakuMsg,
+                               10000 /*timeout ms*/));
 
   show_main_menu();
  }
@ -227,12 +238,14 @@ void handle_user_input(void* ctx) {
 
 // End of UI program logic
 
-void show_help_and_exit() {
+void show_help_and_exit()
+{
  printf("Wrong parameters\n");
  exit(1);
 }
 
-int main(int argc, char** argv) {
+int main(int argc, char **argv)
+{
  struct ConfigNode cfgNode;
  // default values
  snprintf(cfgNode.host, 128, "0.0.0.0");
@ -241,8 +254,8 @@ int main(int argc, char** argv) {
  cfgNode.port = 60000;
  cfgNode.relay = 1;
 
- if (argp_parse(&argp, argc, argv, 0, 0, &cfgNode)
-     == ARGP_ERR_UNKNOWN) {
+ if (argp_parse(&argp, argc, argv, 0, 0, &cfgNode) == ARGP_ERR_UNKNOWN)
+ {
   show_help_and_exit();
  }
 
@ -260,72 +273,64 @@ int main(int argc, char** argv) {
         \"discv5UdpPort\": 9999, \
         \"dnsDiscoveryUrl\": \"enrtree://AMOJVZX4V6EXP7NTJPMAYJYST2QP6AJXYW76IU6VGJS7UVSNDYZG4@boot.prod.status.nodes.status.im\", \
         \"dnsDiscoveryNameServers\": [\"8.8.8.8\", \"1.0.0.1\"] \
-     }", cfgNode.host,
-     cfgNode.port);
+     }",
+     cfgNode.host,
+     cfgNode.port);
 
- void* ctx =
-  waku_new(jsonConfig,
-           cify([](const char* msg, size_t len) {
-             std::cout << "waku_new feedback: " << msg << std::endl;
-           }
-           ),
-           nullptr
-  );
+ void *ctx =
+  waku_new(jsonConfig,
+           cify([](const char *msg, size_t len)
+                { std::cout << "waku_new feedback: " << msg << std::endl; }),
+           nullptr);
 waitForCallback();
 
 // example on how to retrieve a value from the `libwaku` callback.
 std::string defaultPubsubTopic;
 WAKU_CALL(
  waku_default_pubsub_topic(
   ctx,
-  cify([&defaultPubsubTopic](const char* msg, size_t len) {
-    defaultPubsubTopic = msg;
-  }
-  ),
-  nullptr));
+  cify([&defaultPubsubTopic](const char *msg, size_t len)
+       { defaultPubsubTopic = msg; }),
+  nullptr));
 
 std::cout << "Default pubsub topic: " << defaultPubsubTopic << std::endl;
 
 WAKU_CALL(waku_version(ctx,
-                       cify([&](const char* msg, size_t len) {
-                         std::cout << "Git Version: " << msg << std::endl;
-                       }),
+                       cify([&](const char *msg, size_t len)
+                            { std::cout << "Git Version: " << msg << std::endl; }),
                        nullptr));
 
 printf("Bind addr: %s:%u\n", cfgNode.host, cfgNode.port);
-printf("Waku Relay enabled: %s\n", cfgNode.relay == 1 ? "YES": "NO");
+printf("Waku Relay enabled: %s\n", cfgNode.relay == 1 ? "YES" : "NO");
 
 std::string pubsubTopic;
 WAKU_CALL(waku_pubsub_topic(ctx,
-                            "example",
-                            cify([&](const char* msg, size_t len) {
-                              pubsubTopic = msg;
-                            }),
-                            nullptr));
+                            cify([&](const char *msg, size_t len)
+                                 { pubsubTopic = msg; }),
+                            nullptr,
+                            "example"));
 
 std::cout << "Custom pubsub topic: " << pubsubTopic << std::endl;
 
-waku_set_event_callback(ctx,
-                        cify([&](const char* msg, size_t len) {
-                          event_handler(msg, len);
-                        }),
-                        nullptr);
+set_event_callback(ctx,
                   cify([&](const char *msg, size_t len)
                        { event_handler(msg, len); }),
+                   nullptr);
 
-WAKU_CALL( waku_start(ctx,
-                      cify([&](const char* msg, size_t len) {
-                        event_handler(msg, len);
-                      }),
-                      nullptr));
+WAKU_CALL(waku_start(ctx,
                     cify([&](const char *msg, size_t len)
                          { event_handler(msg, len); }),
+                     nullptr));
 
-WAKU_CALL( waku_relay_subscribe(ctx,
-                                defaultPubsubTopic.c_str(),
-                                cify([&](const char* msg, size_t len) {
-                                  event_handler(msg, len);
-                                }),
-                                nullptr) );
+WAKU_CALL(waku_relay_subscribe(ctx,
                               cify([&](const char *msg, size_t len)
                                    { event_handler(msg, len); }),
+                               nullptr,
+                               defaultPubsubTopic.c_str()));
 
 show_main_menu();
-while(1) {
+while (1)
+{
  handle_user_input(ctx);
 }
}
@ -71,32 +71,32 @@ package main
 
 static void* cGoWakuNew(const char* configJson, void* resp) {
  // We pass NULL because we are not interested in retrieving data from this callback
- void* ret = waku_new(configJson, (WakuCallBack) callback, resp);
+ void* ret = waku_new(configJson, (FFICallBack) callback, resp);
  return ret;
 }
 
 static void cGoWakuStart(void* wakuCtx, void* resp) {
- WAKU_CALL(waku_start(wakuCtx, (WakuCallBack) callback, resp));
+ WAKU_CALL(waku_start(wakuCtx, (FFICallBack) callback, resp));
 }
 
 static void cGoWakuStop(void* wakuCtx, void* resp) {
- WAKU_CALL(waku_stop(wakuCtx, (WakuCallBack) callback, resp));
+ WAKU_CALL(waku_stop(wakuCtx, (FFICallBack) callback, resp));
 }
 
 static void cGoWakuDestroy(void* wakuCtx, void* resp) {
- WAKU_CALL(waku_destroy(wakuCtx, (WakuCallBack) callback, resp));
+ WAKU_CALL(waku_destroy(wakuCtx, (FFICallBack) callback, resp));
 }
 
 static void cGoWakuStartDiscV5(void* wakuCtx, void* resp) {
- WAKU_CALL(waku_start_discv5(wakuCtx, (WakuCallBack) callback, resp));
+ WAKU_CALL(waku_start_discv5(wakuCtx, (FFICallBack) callback, resp));
 }
 
 static void cGoWakuStopDiscV5(void* wakuCtx, void* resp) {
- WAKU_CALL(waku_stop_discv5(wakuCtx, (WakuCallBack) callback, resp));
+ WAKU_CALL(waku_stop_discv5(wakuCtx, (FFICallBack) callback, resp));
 }
 
 static void cGoWakuVersion(void* wakuCtx, void* resp) {
- WAKU_CALL(waku_version(wakuCtx, (WakuCallBack) callback, resp));
+ WAKU_CALL(waku_version(wakuCtx, (FFICallBack) callback, resp));
 }
 
 static void cGoWakuSetEventCallback(void* wakuCtx) {
@ -112,7 +112,7 @@ package main
 
  // This technique is needed because cgo only allows to export Go functions and not methods.
 
- waku_set_event_callback(wakuCtx, (WakuCallBack) globalEventCallback, wakuCtx);
+ set_event_callback(wakuCtx, (FFICallBack) globalEventCallback, wakuCtx);
 }
 
 static void cGoWakuContentTopic(void* wakuCtx,
@ -123,20 +123,21 @@ package main
                                 void* resp) {
 
  WAKU_CALL( waku_content_topic(wakuCtx,
+                               (FFICallBack) callback,
+                               resp,
                                appName,
                                appVersion,
                                contentTopicName,
-                               encoding,
-                               (WakuCallBack) callback,
-                               resp) );
+                               encoding
+                               ) );
 }
 
 static void cGoWakuPubsubTopic(void* wakuCtx, char* topicName, void* resp) {
- WAKU_CALL( waku_pubsub_topic(wakuCtx, topicName, (WakuCallBack) callback, resp) );
+ WAKU_CALL( waku_pubsub_topic(wakuCtx, (FFICallBack) callback, resp, topicName) );
 }
 
 static void cGoWakuDefaultPubsubTopic(void* wakuCtx, void* resp) {
- WAKU_CALL (waku_default_pubsub_topic(wakuCtx, (WakuCallBack) callback, resp));
+ WAKU_CALL (waku_default_pubsub_topic(wakuCtx, (FFICallBack) callback, resp));
 }
 
 static void cGoWakuRelayPublish(void* wakuCtx,
@ -146,34 +147,36 @@ package main
                                 void* resp) {
 
  WAKU_CALL (waku_relay_publish(wakuCtx,
+                               (FFICallBack) callback,
+                               resp,
                                pubSubTopic,
                                jsonWakuMessage,
-                               timeoutMs,
-                               (WakuCallBack) callback,
-                               resp));
+                               timeoutMs
+                               ));
 }
 
 static void cGoWakuRelaySubscribe(void* wakuCtx, char* pubSubTopic, void* resp) {
  WAKU_CALL ( waku_relay_subscribe(wakuCtx,
-                                  pubSubTopic,
-                                  (WakuCallBack) callback,
-                                  resp) );
+                                  (FFICallBack) callback,
+                                  resp,
+                                  pubSubTopic) );
 }
 
 static void cGoWakuRelayUnsubscribe(void* wakuCtx, char* pubSubTopic, void* resp) {
 
  WAKU_CALL ( waku_relay_unsubscribe(wakuCtx,
-                                    pubSubTopic,
-                                    (WakuCallBack) callback,
-                                    resp) );
+                                    (FFICallBack) callback,
+                                    resp,
+                                    pubSubTopic) );
 }
 
 static void cGoWakuConnect(void* wakuCtx, char* peerMultiAddr, int timeoutMs, void* resp) {
  WAKU_CALL( waku_connect(wakuCtx,
+                         (FFICallBack) callback,
+                         resp,
                          peerMultiAddr,
-                         timeoutMs,
-                         (WakuCallBack) callback,
-                         resp) );
+                         timeoutMs
+                         ) );
 }
 
 static void cGoWakuDialPeerById(void* wakuCtx,
@ -183,42 +186,44 @@ package main
                                 void* resp) {
 
  WAKU_CALL( waku_dial_peer_by_id(wakuCtx,
+                                 (FFICallBack) callback,
+                                 resp,
                                  peerId,
                                  protocol,
-                                 timeoutMs,
-                                 (WakuCallBack) callback,
-                                 resp) );
+                                 timeoutMs
+                                 ) );
 }
 
 static void cGoWakuDisconnectPeerById(void* wakuCtx, char* peerId, void* resp) {
  WAKU_CALL( waku_disconnect_peer_by_id(wakuCtx,
-                                       peerId,
-                                       (WakuCallBack) callback,
-                                       resp) );
+                                       (FFICallBack) callback,
+                                       resp,
+                                       peerId
+                                       ) );
 }
 
 static void cGoWakuListenAddresses(void* wakuCtx, void* resp) {
- WAKU_CALL (waku_listen_addresses(wakuCtx, (WakuCallBack) callback, resp) );
+ WAKU_CALL (waku_listen_addresses(wakuCtx, (FFICallBack) callback, resp) );
 }
 
 static void cGoWakuGetMyENR(void* ctx, void* resp) {
- WAKU_CALL (waku_get_my_enr(ctx, (WakuCallBack) callback, resp) );
+ WAKU_CALL (waku_get_my_enr(ctx, (FFICallBack) callback, resp) );
 }
 
 static void cGoWakuGetMyPeerId(void* ctx, void* resp) {
- WAKU_CALL (waku_get_my_peerid(ctx, (WakuCallBack) callback, resp) );
+ WAKU_CALL (waku_get_my_peerid(ctx, (FFICallBack) callback, resp) );
 }
 
 static void cGoWakuListPeersInMesh(void* ctx, char* pubSubTopic, void* resp) {
- WAKU_CALL (waku_relay_get_num_peers_in_mesh(ctx, pubSubTopic, (WakuCallBack) callback, resp) );
+ WAKU_CALL (waku_relay_get_num_peers_in_mesh(ctx, (FFICallBack) callback, resp, pubSubTopic) );
 }
 
 static void cGoWakuGetNumConnectedPeers(void* ctx, char* pubSubTopic, void* resp) {
- WAKU_CALL (waku_relay_get_num_connected_peers(ctx, pubSubTopic, (WakuCallBack) callback, resp) );
+ WAKU_CALL (waku_relay_get_num_connected_peers(ctx, (FFICallBack) callback, resp, pubSubTopic) );
 }
 
 static void cGoWakuGetPeerIdsFromPeerStore(void* wakuCtx, void* resp) {
- WAKU_CALL (waku_get_peerids_from_peerstore(wakuCtx, (WakuCallBack) callback, resp) );
+ WAKU_CALL (waku_get_peerids_from_peerstore(wakuCtx, (FFICallBack) callback, resp) );
 }
 
 static void cGoWakuLightpushPublish(void* wakuCtx,
@ -227,10 +232,11 @@ package main
                                     void* resp) {
 
  WAKU_CALL (waku_lightpush_publish(wakuCtx,
+                                   (FFICallBack) callback,
+                                   resp,
                                    pubSubTopic,
-                                   jsonWakuMessage,
-                                   (WakuCallBack) callback,
-                                   resp));
+                                   jsonWakuMessage
+                                   ));
 }
 
 static void cGoWakuStoreQuery(void* wakuCtx,
@ -240,11 +246,12 @@ package main
                               void* resp) {
 
  WAKU_CALL (waku_store_query(wakuCtx,
+                             (FFICallBack) callback,
+                             resp,
                              jsonQuery,
                              peerAddr,
-                             timeoutMs,
-                             (WakuCallBack) callback,
-                             resp));
+                             timeoutMs
+                             ));
 }
 
 static void cGoWakuPeerExchangeQuery(void* wakuCtx,
@ -252,9 +259,10 @@ package main
                                      void* resp) {
 
  WAKU_CALL (waku_peer_exchange_request(wakuCtx,
-                                       numPeers,
-                                       (WakuCallBack) callback,
-                                       resp));
+                                       (FFICallBack) callback,
+                                       resp,
+                                       numPeers
+                                       ));
 }
 
 static void cGoWakuGetPeerIdsByProtocol(void* wakuCtx,
@ -262,9 +270,10 @@ package main
                                         void* resp) {
 
  WAKU_CALL (waku_get_peerids_by_protocol(wakuCtx,
-                                         protocol,
-                                         (WakuCallBack) callback,
-                                         resp));
+                                         (FFICallBack) callback,
+                                         resp,
+                                         protocol
+                                         ));
 }
 
 */
331 examples/ios/WakuExample.xcodeproj/project.pbxproj Normal file
@ -0,0 +1,331 @@
// !$*UTF8*$!
{
    archiveVersion = 1;
    classes = {
    };
    objectVersion = 63;
    objects = {

/* Begin PBXBuildFile section */
        45714AF6D1D12AF5C36694FB /* WakuExampleApp.swift in Sources */ = {isa = PBXBuildFile; fileRef = 0671AF6DCB0D788B0C1E9C8B /* WakuExampleApp.swift */; };
        6468FA3F5F760D3FCAD6CDBF /* ContentView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 7D8744E36DADC11F38A1CC99 /* ContentView.swift */; };
        C4EA202B782038F96336401F /* WakuNode.swift in Sources */ = {isa = PBXBuildFile; fileRef = 638A565C495A63CFF7396FBC /* WakuNode.swift */; };
/* End PBXBuildFile section */

/* Begin PBXFileReference section */
        0671AF6DCB0D788B0C1E9C8B /* WakuExampleApp.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = WakuExampleApp.swift; sourceTree = "<group>"; };
        31BE20DB2755A11000723420 /* libwaku.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = libwaku.h; sourceTree = "<group>"; };
        5C5AAC91E0166D28BFA986DB /* Info.plist */ = {isa = PBXFileReference; lastKnownFileType = text.plist; path = Info.plist; sourceTree = "<group>"; };
        638A565C495A63CFF7396FBC /* WakuNode.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = WakuNode.swift; sourceTree = "<group>"; };
        7D8744E36DADC11F38A1CC99 /* ContentView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = ContentView.swift; sourceTree = "<group>"; };
        A8655016B3DF9B0877631CE5 /* WakuExample-Bridging-Header.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = "WakuExample-Bridging-Header.h"; sourceTree = "<group>"; };
        CFBE844B6E18ACB81C65F83B /* WakuExample.app */ = {isa = PBXFileReference; explicitFileType = wrapper.application; includeInIndex = 0; path = WakuExample.app; sourceTree = BUILT_PRODUCTS_DIR; };
/* End PBXFileReference section */

/* Begin PBXGroup section */
        34547A6259485BD047D6375C /* Products */ = {
            isa = PBXGroup;
            children = (
                CFBE844B6E18ACB81C65F83B /* WakuExample.app */,
            );
            name = Products;
            sourceTree = "<group>";
        };
        4F76CB85EC44E951B8E75522 /* WakuExample */ = {
            isa = PBXGroup;
            children = (
                7D8744E36DADC11F38A1CC99 /* ContentView.swift */,
                5C5AAC91E0166D28BFA986DB /* Info.plist */,
                31BE20DB2755A11000723420 /* libwaku.h */,
                A8655016B3DF9B0877631CE5 /* WakuExample-Bridging-Header.h */,
                0671AF6DCB0D788B0C1E9C8B /* WakuExampleApp.swift */,
                638A565C495A63CFF7396FBC /* WakuNode.swift */,
            );
            path = WakuExample;
            sourceTree = "<group>";
        };
        D40CD2446F177CAABB0A747A = {
            isa = PBXGroup;
            children = (
                4F76CB85EC44E951B8E75522 /* WakuExample */,
                34547A6259485BD047D6375C /* Products */,
            );
            sourceTree = "<group>";
        };
/* End PBXGroup section */

/* Begin PBXNativeTarget section */
        F751EF8294AD21F713D47FDA /* WakuExample */ = {
            isa = PBXNativeTarget;
            buildConfigurationList = 757FA0123629BD63CB254113 /* Build configuration list for PBXNativeTarget "WakuExample" */;
            buildPhases = (
                D3AFD8C4DA68BF5C4F7D8E10 /* Sources */,
            );
            buildRules = (
            );
            dependencies = (
            );
            name = WakuExample;
            packageProductDependencies = (
            );
            productName = WakuExample;
            productReference = CFBE844B6E18ACB81C65F83B /* WakuExample.app */;
            productType = "com.apple.product-type.application";
        };
/* End PBXNativeTarget section */

/* Begin PBXProject section */
        4FF82F0F4AF8E1E34728F150 /* Project object */ = {
            isa = PBXProject;
            attributes = {
                BuildIndependentTargetsInParallel = YES;
                LastUpgradeCheck = 1500;
            };
            buildConfigurationList = B3A4F48294254543E79767C4 /* Build configuration list for PBXProject "WakuExample" */;
            compatibilityVersion = "Xcode 14.0";
            developmentRegion = en;
            hasScannedForEncodings = 0;
            knownRegions = (
                Base,
                en,
            );
            mainGroup = D40CD2446F177CAABB0A747A;
            minimizedProjectReferenceProxies = 1;
            projectDirPath = "";
            projectRoot = "";
            targets = (
                F751EF8294AD21F713D47FDA /* WakuExample */,
            );
        };
/* End PBXProject section */

/* Begin PBXSourcesBuildPhase section */
        D3AFD8C4DA68BF5C4F7D8E10 /* Sources */ = {
            isa = PBXSourcesBuildPhase;
            buildActionMask = 2147483647;
            files = (
                6468FA3F5F760D3FCAD6CDBF /* ContentView.swift in Sources */,
                45714AF6D1D12AF5C36694FB /* WakuExampleApp.swift in Sources */,
                C4EA202B782038F96336401F /* WakuNode.swift in Sources */,
            );
            runOnlyForDeploymentPostprocessing = 0;
        };
/* End PBXSourcesBuildPhase section */

/* Begin XCBuildConfiguration section */
        36939122077C66DD94082311 /* Release */ = {
            isa = XCBuildConfiguration;
            buildSettings = {
                ASSETCATALOG_COMPILER_APPICON_NAME = AppIcon;
                CODE_SIGN_IDENTITY = "iPhone Developer";
                DEVELOPMENT_TEAM = 2Q52K2W84K;
                HEADER_SEARCH_PATHS = "$(PROJECT_DIR)/WakuExample";
                INFOPLIST_FILE = WakuExample/Info.plist;
                IPHONEOS_DEPLOYMENT_TARGET = 18.6;
                LD_RUNPATH_SEARCH_PATHS = (
                    "$(inherited)",
                    "@executable_path/Frameworks",
                );
                "LIBRARY_SEARCH_PATHS[sdk=iphoneos*]" = "$(PROJECT_DIR)/../../build/ios/iphoneos-arm64";
                "LIBRARY_SEARCH_PATHS[sdk=iphonesimulator*]" = "$(PROJECT_DIR)/../../build/ios/iphonesimulator-arm64";
                MACOSX_DEPLOYMENT_TARGET = 15.6;
                OTHER_LDFLAGS = (
                    "-lc++",
                    "-force_load",
                    "$(PROJECT_DIR)/../../build/ios/iphoneos-arm64/libwaku.a",
                    "-lsqlite3",
                    "-lz",
                );
                PRODUCT_BUNDLE_IDENTIFIER = org.waku.example;
                SDKROOT = iphoneos;
                SUPPORTED_PLATFORMS = "iphoneos iphonesimulator";
                SUPPORTS_MACCATALYST = NO;
                SUPPORTS_MAC_DESIGNED_FOR_IPHONE_IPAD = YES;
                SUPPORTS_XR_DESIGNED_FOR_IPHONE_IPAD = YES;
                SWIFT_OBJC_BRIDGING_HEADER = "WakuExample/WakuExample-Bridging-Header.h";
                TARGETED_DEVICE_FAMILY = "1,2";
            };
            name = Release;
        };
        9BA833A09EEDB4B3FCCD8F8E /* Release */ = {
            isa = XCBuildConfiguration;
            buildSettings = {
                ALWAYS_SEARCH_USER_PATHS = NO;
                CLANG_ANALYZER_NONNULL = YES;
                CLANG_ANALYZER_NUMBER_OBJECT_CONVERSION = YES_AGGRESSIVE;
                CLANG_CXX_LANGUAGE_STANDARD = "gnu++14";
                CLANG_CXX_LIBRARY = "libc++";
                CLANG_ENABLE_MODULES = YES;
                CLANG_ENABLE_OBJC_ARC = YES;
                CLANG_ENABLE_OBJC_WEAK = YES;
                CLANG_WARN_BLOCK_CAPTURE_AUTORELEASING = YES;
                CLANG_WARN_BOOL_CONVERSION = YES;
                CLANG_WARN_COMMA = YES;
                CLANG_WARN_CONSTANT_CONVERSION = YES;
                CLANG_WARN_DEPRECATED_OBJC_IMPLEMENTATIONS = YES;
                CLANG_WARN_DIRECT_OBJC_ISA_USAGE = YES_ERROR;
                CLANG_WARN_DOCUMENTATION_COMMENTS = YES;
                CLANG_WARN_EMPTY_BODY = YES;
                CLANG_WARN_ENUM_CONVERSION = YES;
                CLANG_WARN_INFINITE_RECURSION = YES;
                CLANG_WARN_INT_CONVERSION = YES;
                CLANG_WARN_NON_LITERAL_NULL_CONVERSION = YES;
                CLANG_WARN_OBJC_IMPLICIT_RETAIN_SELF = YES;
                CLANG_WARN_OBJC_LITERAL_CONVERSION = YES;
                CLANG_WARN_OBJC_ROOT_CLASS = YES_ERROR;
                CLANG_WARN_QUOTED_INCLUDE_IN_FRAMEWORK_HEADER = YES;
                CLANG_WARN_RANGE_LOOP_ANALYSIS = YES;
                CLANG_WARN_STRICT_PROTOTYPES = YES;
                CLANG_WARN_SUSPICIOUS_MOVE = YES;
                CLANG_WARN_UNGUARDED_AVAILABILITY = YES_AGGRESSIVE;
                CLANG_WARN_UNREACHABLE_CODE = YES;
                CLANG_WARN__DUPLICATE_METHOD_MATCH = YES;
                COPY_PHASE_STRIP = NO;
                DEBUG_INFORMATION_FORMAT = "dwarf-with-dsym";
                ENABLE_NS_ASSERTIONS = NO;
                ENABLE_STRICT_OBJC_MSGSEND = YES;
                GCC_C_LANGUAGE_STANDARD = gnu11;
                GCC_NO_COMMON_BLOCKS = YES;
                GCC_WARN_64_TO_32_BIT_CONVERSION = YES;
                GCC_WARN_ABOUT_RETURN_TYPE = YES_ERROR;
                GCC_WARN_UNDECLARED_SELECTOR = YES;
                GCC_WARN_UNINITIALIZED_AUTOS = YES_AGGRESSIVE;
                GCC_WARN_UNUSED_FUNCTION = YES;
                GCC_WARN_UNUSED_VARIABLE = YES;
                IPHONEOS_DEPLOYMENT_TARGET = 18.6;
                MTL_ENABLE_DEBUG_INFO = NO;
                MTL_FAST_MATH = YES;
                PRODUCT_NAME = "$(TARGET_NAME)";
                SDKROOT = iphoneos;
                SUPPORTED_PLATFORMS = "iphoneos iphonesimulator";
                SUPPORTS_MACCATALYST = NO;
                SWIFT_COMPILATION_MODE = wholemodule;
                SWIFT_OPTIMIZATION_LEVEL = "-O";
                SWIFT_VERSION = 5.0;
            };
            name = Release;
        };
        A59ABFB792FED8974231E5AC /* Debug */ = {
            isa = XCBuildConfiguration;
            buildSettings = {
                ALWAYS_SEARCH_USER_PATHS = NO;
                CLANG_ANALYZER_NONNULL = YES;
                CLANG_ANALYZER_NUMBER_OBJECT_CONVERSION = YES_AGGRESSIVE;
                CLANG_CXX_LANGUAGE_STANDARD = "gnu++14";
                CLANG_CXX_LIBRARY = "libc++";
                CLANG_ENABLE_MODULES = YES;
                CLANG_ENABLE_OBJC_ARC = YES;
                CLANG_ENABLE_OBJC_WEAK = YES;
                CLANG_WARN_BLOCK_CAPTURE_AUTORELEASING = YES;
                CLANG_WARN_BOOL_CONVERSION = YES;
                CLANG_WARN_COMMA = YES;
                CLANG_WARN_CONSTANT_CONVERSION = YES;
                CLANG_WARN_DEPRECATED_OBJC_IMPLEMENTATIONS = YES;
                CLANG_WARN_DIRECT_OBJC_ISA_USAGE = YES_ERROR;
                CLANG_WARN_DOCUMENTATION_COMMENTS = YES;
                CLANG_WARN_EMPTY_BODY = YES;
                CLANG_WARN_ENUM_CONVERSION = YES;
                CLANG_WARN_INFINITE_RECURSION = YES;
                CLANG_WARN_INT_CONVERSION = YES;
                CLANG_WARN_NON_LITERAL_NULL_CONVERSION = YES;
                CLANG_WARN_OBJC_IMPLICIT_RETAIN_SELF = YES;
                CLANG_WARN_OBJC_LITERAL_CONVERSION = YES;
                CLANG_WARN_OBJC_ROOT_CLASS = YES_ERROR;
                CLANG_WARN_QUOTED_INCLUDE_IN_FRAMEWORK_HEADER = YES;
                CLANG_WARN_RANGE_LOOP_ANALYSIS = YES;
                CLANG_WARN_STRICT_PROTOTYPES = YES;
                CLANG_WARN_SUSPICIOUS_MOVE = YES;
                CLANG_WARN_UNGUARDED_AVAILABILITY = YES_AGGRESSIVE;
                CLANG_WARN_UNREACHABLE_CODE = YES;
                CLANG_WARN__DUPLICATE_METHOD_MATCH = YES;
                COPY_PHASE_STRIP = NO;
                DEBUG_INFORMATION_FORMAT = dwarf;
                ENABLE_STRICT_OBJC_MSGSEND = YES;
                ENABLE_TESTABILITY = YES;
                GCC_C_LANGUAGE_STANDARD = gnu11;
                GCC_DYNAMIC_NO_PIC = NO;
                GCC_NO_COMMON_BLOCKS = YES;
                GCC_OPTIMIZATION_LEVEL = 0;
                GCC_PREPROCESSOR_DEFINITIONS = (
                    "$(inherited)",
                    "DEBUG=1",
                );
                GCC_WARN_64_TO_32_BIT_CONVERSION = YES;
                GCC_WARN_ABOUT_RETURN_TYPE = YES_ERROR;
                GCC_WARN_UNDECLARED_SELECTOR = YES;
                GCC_WARN_UNINITIALIZED_AUTOS = YES_AGGRESSIVE;
                GCC_WARN_UNUSED_FUNCTION = YES;
                GCC_WARN_UNUSED_VARIABLE = YES;
                IPHONEOS_DEPLOYMENT_TARGET = 18.6;
                MTL_ENABLE_DEBUG_INFO = INCLUDE_SOURCE;
                MTL_FAST_MATH = YES;
                ONLY_ACTIVE_ARCH = YES;
                PRODUCT_NAME = "$(TARGET_NAME)";
                SDKROOT = iphoneos;
                SUPPORTED_PLATFORMS = "iphoneos iphonesimulator";
                SUPPORTS_MACCATALYST = NO;
                SWIFT_ACTIVE_COMPILATION_CONDITIONS = DEBUG;
                SWIFT_OPTIMIZATION_LEVEL = "-Onone";
                SWIFT_VERSION = 5.0;
            };
            name = Debug;
        };
        AF5ADDAA865B1F6BD4E70A79 /* Debug */ = {
            isa = XCBuildConfiguration;
            buildSettings = {
                ASSETCATALOG_COMPILER_APPICON_NAME = AppIcon;
                CODE_SIGN_IDENTITY = "iPhone Developer";
                DEVELOPMENT_TEAM = 2Q52K2W84K;
                HEADER_SEARCH_PATHS = "$(PROJECT_DIR)/WakuExample";
                INFOPLIST_FILE = WakuExample/Info.plist;
                IPHONEOS_DEPLOYMENT_TARGET = 18.6;
                LD_RUNPATH_SEARCH_PATHS = (
                    "$(inherited)",
                    "@executable_path/Frameworks",
                );
                "LIBRARY_SEARCH_PATHS[sdk=iphoneos*]" = "$(PROJECT_DIR)/../../build/ios/iphoneos-arm64";
                "LIBRARY_SEARCH_PATHS[sdk=iphonesimulator*]" = "$(PROJECT_DIR)/../../build/ios/iphonesimulator-arm64";
                MACOSX_DEPLOYMENT_TARGET = 15.6;
                OTHER_LDFLAGS = (
                    "-lc++",
                    "-force_load",
                    "$(PROJECT_DIR)/../../build/ios/iphoneos-arm64/libwaku.a",
                    "-lsqlite3",
                    "-lz",
                );
                PRODUCT_BUNDLE_IDENTIFIER = org.waku.example;
                SDKROOT = iphoneos;
                SUPPORTED_PLATFORMS = "iphoneos iphonesimulator";
                SUPPORTS_MACCATALYST = NO;
                SUPPORTS_MAC_DESIGNED_FOR_IPHONE_IPAD = YES;
                SUPPORTS_XR_DESIGNED_FOR_IPHONE_IPAD = YES;
                SWIFT_OBJC_BRIDGING_HEADER = "WakuExample/WakuExample-Bridging-Header.h";
                TARGETED_DEVICE_FAMILY = "1,2";
            };
            name = Debug;
        };
/* End XCBuildConfiguration section */

/* Begin XCConfigurationList section */
        757FA0123629BD63CB254113 /* Build configuration list for PBXNativeTarget "WakuExample" */ = {
            isa = XCConfigurationList;
            buildConfigurations = (
                AF5ADDAA865B1F6BD4E70A79 /* Debug */,
                36939122077C66DD94082311 /* Release */,
            );
            defaultConfigurationIsVisible = 0;
            defaultConfigurationName = Debug;
        };
        B3A4F48294254543E79767C4 /* Build configuration list for PBXProject "WakuExample" */ = {
            isa = XCConfigurationList;
            buildConfigurations = (
                A59ABFB792FED8974231E5AC /* Debug */,
                9BA833A09EEDB4B3FCCD8F8E /* Release */,
            );
            defaultConfigurationIsVisible = 0;
            defaultConfigurationName = Debug;
        };
/* End XCConfigurationList section */
    };
    rootObject = 4FF82F0F4AF8E1E34728F150 /* Project object */;
}
7 examples/ios/WakuExample.xcodeproj/project.xcworkspace/contents.xcworkspacedata generated Normal file
@ -0,0 +1,7 @@
<?xml version="1.0" encoding="UTF-8"?>
<Workspace
   version = "1.0">
   <FileRef
      location = "self:">
   </FileRef>
</Workspace>
229 examples/ios/WakuExample/ContentView.swift Normal file
@ -0,0 +1,229 @@
//
//  ContentView.swift
//  WakuExample
//
//  Minimal chat PoC using libwaku on iOS
//

import SwiftUI

struct ContentView: View {
    @StateObject private var wakuNode = WakuNode()
    @State private var messageText = ""

    var body: some View {
        ZStack {
            // Main content
            VStack(spacing: 0) {
                // Header with status
                HStack {
                    Circle()
                        .fill(statusColor)
                        .frame(width: 10, height: 10)
                    VStack(alignment: .leading, spacing: 2) {
                        Text(wakuNode.status.rawValue)
                            .font(.caption)
                        if wakuNode.status == .running {
                            HStack(spacing: 4) {
                                Text(wakuNode.isConnected ? "Connected" : "Discovering...")
                                Text("•")
                                filterStatusView
                            }
                            .font(.caption2)
                            .foregroundColor(.secondary)

                            // Subscription maintenance status
                            if wakuNode.subscriptionMaintenanceActive {
                                HStack(spacing: 4) {
                                    Image(systemName: "arrow.triangle.2.circlepath")
                                        .foregroundColor(.blue)
                                    Text("Maintenance active")
                                    if wakuNode.failedSubscribeAttempts > 0 {
                                        Text("(\(wakuNode.failedSubscribeAttempts) retries)")
                                            .foregroundColor(.orange)
                                    }
                                }
                                .font(.caption2)
                                .foregroundColor(.secondary)
                            }
                        }
                    }
                    Spacer()
                    if wakuNode.status == .stopped {
                        Button("Start") {
                            wakuNode.start()
                        }
                        .buttonStyle(.borderedProminent)
                        .controlSize(.small)
                    } else if wakuNode.status == .running {
                        if !wakuNode.filterSubscribed {
                            Button("Resub") {
                                wakuNode.resubscribe()
                            }
                            .buttonStyle(.bordered)
                            .controlSize(.small)
                        }
                        Button("Stop") {
                            wakuNode.stop()
                        }
                        .buttonStyle(.bordered)
                        .controlSize(.small)
                    }
                }
                .padding()
                .background(Color.gray.opacity(0.1))

                // Messages list
                ScrollViewReader { proxy in
                    ScrollView {
                        LazyVStack(alignment: .leading, spacing: 8) {
                            ForEach(wakuNode.receivedMessages.reversed()) { message in
                                MessageBubble(message: message)
                                    .id(message.id)
                            }
                        }
                        .padding()
                    }
                    .onChange(of: wakuNode.receivedMessages.count) { _, newCount in
                        if let lastMessage = wakuNode.receivedMessages.first {
                            withAnimation {
                                proxy.scrollTo(lastMessage.id, anchor: .bottom)
                            }
                        }
                    }
                }

                Divider()

                // Message input
                HStack(spacing: 12) {
                    TextField("Message", text: $messageText)
                        .textFieldStyle(.roundedBorder)
                        .disabled(wakuNode.status != .running)

                    Button(action: sendMessage) {
                        Image(systemName: "paperplane.fill")
                            .foregroundColor(.white)
                            .padding(10)
                            .background(canSend ? Color.blue : Color.gray)
                            .clipShape(Circle())
                    }
                    .disabled(!canSend)
                }
                .padding()
                .background(Color.gray.opacity(0.1))
            }

            // Toast overlay for errors
            VStack {
                ForEach(wakuNode.errorQueue) { error in
                    ToastView(error: error) {
                        wakuNode.dismissError(error)
                    }
                    .transition(.asymmetric(
                        insertion: .move(edge: .top).combined(with: .opacity),
                        removal: .opacity
                    ))
                }
                Spacer()
            }
            .padding(.top, 8)
            .animation(.easeInOut(duration: 0.3), value: wakuNode.errorQueue)
        }
    }

    private var statusColor: Color {
        switch wakuNode.status {
        case .stopped: return .gray
        case .starting: return .yellow
        case .running: return .green
        case .error: return .red
        }
    }

    @ViewBuilder
    private var filterStatusView: some View {
        if wakuNode.filterSubscribed {
            Text("Filter OK")
                .foregroundColor(.green)
        } else if wakuNode.failedSubscribeAttempts > 0 {
            Text("Filter retrying (\(wakuNode.failedSubscribeAttempts))")
                .foregroundColor(.orange)
        } else {
            Text("Filter pending")
                .foregroundColor(.orange)
        }
    }

    private var canSend: Bool {
        wakuNode.status == .running && wakuNode.isConnected && !messageText.trimmingCharacters(in: .whitespaces).isEmpty
    }

    private func sendMessage() {
        let text = messageText.trimmingCharacters(in: .whitespaces)
        guard !text.isEmpty else { return }

        wakuNode.publish(message: text)
        messageText = ""
    }
}

// MARK: - Toast View

struct ToastView: View {
    let error: TimestampedError
    let onDismiss: () -> Void

    var body: some View {
        HStack(spacing: 12) {
            Image(systemName: "exclamationmark.triangle.fill")
                .foregroundColor(.white)

            Text(error.message)
                .font(.subheadline)
                .foregroundColor(.white)
                .lineLimit(2)

            Spacer()

            Button(action: onDismiss) {
                Image(systemName: "xmark.circle.fill")
                    .foregroundColor(.white.opacity(0.8))
                    .font(.title3)
            }
            .buttonStyle(.plain)
        }
        .padding(.horizontal, 16)
        .padding(.vertical, 12)
        .background(
            RoundedRectangle(cornerRadius: 12)
                .fill(Color.red.opacity(0.9))
                .shadow(color: .black.opacity(0.2), radius: 8, x: 0, y: 4)
        )
        .padding(.horizontal, 16)
        .padding(.vertical, 4)
    }
}

// MARK: - Message Bubble

struct MessageBubble: View {
    let message: WakuMessage

    var body: some View {
        VStack(alignment: .leading, spacing: 4) {
            Text(message.payload)
                .padding(10)
                .background(Color.blue.opacity(0.1))
                .cornerRadius(12)

            Text(message.timestamp, style: .time)
                .font(.caption2)
                .foregroundColor(.secondary)
        }
    }
}

#Preview {
    ContentView()
}
36 examples/ios/WakuExample/Info.plist Normal file
@ -0,0 +1,36 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>CFBundleDevelopmentRegion</key>
    <string>$(DEVELOPMENT_LANGUAGE)</string>
    <key>CFBundleDisplayName</key>
    <string>Waku Example</string>
    <key>CFBundleExecutable</key>
    <string>$(EXECUTABLE_NAME)</string>
    <key>CFBundleIdentifier</key>
    <string>org.waku.example</string>
    <key>CFBundleInfoDictionaryVersion</key>
    <string>6.0</string>
    <key>CFBundleName</key>
    <string>WakuExample</string>
    <key>CFBundlePackageType</key>
    <string>APPL</string>
    <key>CFBundleShortVersionString</key>
    <string>1.0</string>
    <key>CFBundleVersion</key>
    <string>1</string>
    <key>NSAppTransportSecurity</key>
    <dict>
        <key>NSAllowsArbitraryLoads</key>
        <true/>
    </dict>
    <key>UILaunchScreen</key>
    <dict/>
    <key>UISupportedInterfaceOrientations</key>
    <array>
        <string>UIInterfaceOrientationPortrait</string>
    </array>
</dict>
</plist>
15 examples/ios/WakuExample/WakuExample-Bridging-Header.h Normal file
@ -0,0 +1,15 @@
//
//  WakuExample-Bridging-Header.h
//  WakuExample
//
//  Bridging header to expose libwaku C functions to Swift
//

#ifndef WakuExample_Bridging_Header_h
#define WakuExample_Bridging_Header_h

#import "libwaku.h"

#endif /* WakuExample_Bridging_Header_h */
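
The bridging header above is the only glue between Swift and the static libwaku.a: once it imports libwaku.h, the C entry points and the WakuCallBack type become callable from Swift directly. As a minimal sketch of the callback-to-async bridge that WakuNode.swift builds on this (assuming only that libwaku.h declares waku_version(ctx, callback, userData) as in the cgo shims above, and that the callback fires exactly once per request; VersionBox and wakuVersion are illustrative names, not part of this change set):

    import Foundation

    // Carries the continuation across the C boundary; a C callback cannot
    // capture Swift state, so state travels through the userData pointer.
    private final class VersionBox {
        let continuation: CheckedContinuation<String?, Never>
        init(_ continuation: CheckedContinuation<String?, Never>) {
            self.continuation = continuation
        }
    }

    // C-compatible callback: resumes the parked continuation and balances
    // the passRetained below with takeRetainedValue (valid because the
    // callback is assumed to fire once per call).
    private let versionCallback: WakuCallBack = { ret, msg, len, userData in
        guard let userData = userData else { return }
        let box = Unmanaged<VersionBox>.fromOpaque(userData).takeRetainedValue()
        let result = (ret == RET_OK) ? msg.map { String(cString: $0) } : nil
        box.continuation.resume(returning: result)
    }

    // Await the node's version string through the C API.
    func wakuVersion(ctx: UnsafeMutableRawPointer) async -> String? {
        await withCheckedContinuation { continuation in
            let box = VersionBox(continuation)
            waku_version(ctx, versionCallback, Unmanaged.passRetained(box).toOpaque())
        }
    }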
19 examples/ios/WakuExample/WakuExampleApp.swift Normal file
@ -0,0 +1,19 @@
//
//  WakuExampleApp.swift
//  WakuExample
//
//  SwiftUI app entry point for Waku iOS example
//

import SwiftUI

@main
struct WakuExampleApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}
739
examples/ios/WakuExample/WakuNode.swift
Normal file
739
examples/ios/WakuExample/WakuNode.swift
Normal file
@ -0,0 +1,739 @@
//
//  WakuNode.swift
//  WakuExample
//
//  Swift wrapper around libwaku C API for edge mode (lightpush + filter)
//  Uses Swift actors for thread safety and UI responsiveness
//

import Foundation

// MARK: - Data Types

/// Message received from Waku network
struct WakuMessage: Identifiable, Equatable, Sendable {
    let id: String // messageHash from Waku - unique identifier for deduplication
    let payload: String
    let contentTopic: String
    let timestamp: Date
}

/// Waku node status
enum WakuNodeStatus: String, Sendable {
    case stopped = "Stopped"
    case starting = "Starting..."
    case running = "Running"
    case error = "Error"
}

/// Status updates from WakuActor to WakuNode
enum WakuStatusUpdate: Sendable {
    case statusChanged(WakuNodeStatus)
    case connectionChanged(isConnected: Bool)
    case filterSubscriptionChanged(subscribed: Bool, failedAttempts: Int)
    case maintenanceChanged(active: Bool)
    case error(String)
}

/// Error with timestamp for toast queue
struct TimestampedError: Identifiable, Equatable {
    let id = UUID()
    let message: String
    let timestamp: Date

    static func == (lhs: TimestampedError, rhs: TimestampedError) -> Bool {
        lhs.id == rhs.id
    }
}

// MARK: - Callback Context for C API

private final class CallbackContext: @unchecked Sendable {
    private let lock = NSLock()
    private var _continuation: CheckedContinuation<(success: Bool, result: String?), Never>?
    private var _resumed = false
    var success: Bool = false
    var result: String?

    var continuation: CheckedContinuation<(success: Bool, result: String?), Never>? {
        get {
            lock.lock()
            defer { lock.unlock() }
            return _continuation
        }
        set {
            lock.lock()
            defer { lock.unlock() }
            _continuation = newValue
        }
    }

    /// Thread-safe resume - ensures continuation is only resumed once
    /// Returns true if this call actually resumed, false if already resumed
    @discardableResult
    func resumeOnce(returning value: (success: Bool, result: String?)) -> Bool {
        lock.lock()
        defer { lock.unlock() }

        guard !_resumed, let cont = _continuation else {
            return false
        }

        _resumed = true
        _continuation = nil
        cont.resume(returning: value)
        return true
    }
}
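// Note: resuming a CheckedContinuation more than once is a runtime error in Swift,
// and two parties race to resume here: the C callback and the 15-second timeout in
// callWakuSync below. resumeOnce makes whichever arrives second a silent no-op
// instead of a crash.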

// MARK: - WakuActor

/// Actor that isolates all Waku operations from the main thread
/// All C API calls and mutable state are contained here
actor WakuActor {

    // MARK: - State

    private var ctx: UnsafeMutableRawPointer?
    private var seenMessageHashes: Set<String> = []
    private var isSubscribed: Bool = false
    private var isSubscribing: Bool = false
    private var hasPeers: Bool = false
    private var maintenanceTask: Task<Void, Never>?
    private var eventProcessingTask: Task<Void, Never>?

    // Stream continuations for communicating with UI
    private var messageContinuation: AsyncStream<WakuMessage>.Continuation?
    private var statusContinuation: AsyncStream<WakuStatusUpdate>.Continuation?

    // Event stream from C callbacks
    private var eventContinuation: AsyncStream<String>.Continuation?

    // Configuration
    let defaultPubsubTopic = "/waku/2/rs/1/0"
    let defaultContentTopic = "/waku-ios-example/1/chat/proto"
    private let staticPeer = "/dns4/node-01.do-ams3.waku.sandbox.status.im/tcp/30303/p2p/16Uiu2HAmPLe7Mzm8TsYUubgCAW1aJoeFScxrLj8ppHFivPo97bUZ"

    // Subscription maintenance settings
    private let maxFailedSubscribes = 3
    private let retryWaitSeconds: UInt64 = 2_000_000_000 // 2 seconds in nanoseconds
    private let maintenanceIntervalSeconds: UInt64 = 30_000_000_000 // 30 seconds in nanoseconds
    private let maxSeenHashes = 1000

    // MARK: - Static callback storage (for C callbacks)

    // We need a way for C callbacks to reach the actor.
    // Using a simple static reference (safe because we only have one instance).
    private static var sharedEventContinuation: AsyncStream<String>.Continuation?

    private static let eventCallback: WakuCallBack = { ret, msg, len, userData in
        guard ret == RET_OK, let msg = msg else { return }
        let str = String(cString: msg)
        WakuActor.sharedEventContinuation?.yield(str)
    }

    private static let syncCallback: WakuCallBack = { ret, msg, len, userData in
        guard let userData = userData else { return }
        let context = Unmanaged<CallbackContext>.fromOpaque(userData).takeUnretainedValue()
        let success = (ret == RET_OK)
        var resultStr: String? = nil
        if let msg = msg {
            resultStr = String(cString: msg)
        }
        context.resumeOnce(returning: (success, resultStr))
    }
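    // Note: a WakuCallBack is a C function pointer, so these closures cannot capture
    // Swift state. The event callback reaches the actor through the static
    // sharedEventContinuation above; the sync callback recovers its CallbackContext
    // from the userData pointer via Unmanaged.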

    // MARK: - Stream Setup

    func setMessageContinuation(_ continuation: AsyncStream<WakuMessage>.Continuation?) {
        self.messageContinuation = continuation
    }

    func setStatusContinuation(_ continuation: AsyncStream<WakuStatusUpdate>.Continuation?) {
        self.statusContinuation = continuation
    }

    // MARK: - Public API

    var isRunning: Bool {
        ctx != nil
    }

    var hasConnectedPeers: Bool {
        hasPeers
    }

    func start() async {
        guard ctx == nil else {
            print("[WakuActor] Already started")
            return
        }

        statusContinuation?.yield(.statusChanged(.starting))

        // Create event stream for C callbacks
        let eventStream = AsyncStream<String> { continuation in
            self.eventContinuation = continuation
            WakuActor.sharedEventContinuation = continuation
        }

        // Start event processing task
        eventProcessingTask = Task { [weak self] in
            for await eventJson in eventStream {
                await self?.handleEvent(eventJson)
            }
        }

        // Initialize the node
        let success = await initializeNode()

        if success {
            statusContinuation?.yield(.statusChanged(.running))

            // Connect to peer
            let connected = await connectToPeer()
            if connected {
                hasPeers = true
                statusContinuation?.yield(.connectionChanged(isConnected: true))

                // Start maintenance loop
                startMaintenanceLoop()
            } else {
                statusContinuation?.yield(.error("Failed to connect to service peer"))
            }
        }
    }

    func stop() async {
        guard let context = ctx else { return }

        // Stop maintenance loop
        maintenanceTask?.cancel()
        maintenanceTask = nil

        // Stop event processing
        eventProcessingTask?.cancel()
        eventProcessingTask = nil

        // Close event stream
        eventContinuation?.finish()
        eventContinuation = nil
        WakuActor.sharedEventContinuation = nil

        statusContinuation?.yield(.statusChanged(.stopped))
        statusContinuation?.yield(.connectionChanged(isConnected: false))
        statusContinuation?.yield(.filterSubscriptionChanged(subscribed: false, failedAttempts: 0))
        statusContinuation?.yield(.maintenanceChanged(active: false))

        // Reset state
        let ctxToStop = context
        ctx = nil
        isSubscribed = false
        isSubscribing = false
        hasPeers = false
        seenMessageHashes.removeAll()

        // Unsubscribe and stop in background (fire and forget)
        Task.detached {
            // Unsubscribe
            _ = await self.callWakuSync { waku_filter_unsubscribe_all(ctxToStop, WakuActor.syncCallback, $0) }
            print("[WakuActor] Unsubscribed from filter")

            // Stop
            _ = await self.callWakuSync { waku_stop(ctxToStop, WakuActor.syncCallback, $0) }
            print("[WakuActor] Node stopped")

            // Destroy
            _ = await self.callWakuSync { waku_destroy(ctxToStop, WakuActor.syncCallback, $0) }
            print("[WakuActor] Node destroyed")
        }
    }

    func publish(message: String, contentTopic: String? = nil) async {
        guard let context = ctx else {
            print("[WakuActor] Node not started")
            return
        }

        guard hasPeers else {
            print("[WakuActor] No peers connected yet")
            statusContinuation?.yield(.error("No peers connected yet. Please wait..."))
            return
        }

        let topic = contentTopic ?? defaultContentTopic
        guard let payloadData = message.data(using: .utf8) else { return }
        let payloadBase64 = payloadData.base64EncodedString()
        let timestamp = Int64(Date().timeIntervalSince1970 * 1_000_000_000)
        let jsonMessage = """
        {"payload":"\(payloadBase64)","contentTopic":"\(topic)","timestamp":\(timestamp)}
        """

        let result = await callWakuSync { userData in
            waku_lightpush_publish(
                context,
                self.defaultPubsubTopic,
                jsonMessage,
                WakuActor.syncCallback,
                userData
            )
        }

        if result.success {
            print("[WakuActor] Published message")
        } else {
            print("[WakuActor] Publish error: \(result.result ?? "unknown")")
            statusContinuation?.yield(.error("Failed to send message"))
        }
    }

    func resubscribe() async {
        print("[WakuActor] Force resubscribe requested")
        isSubscribed = false
        isSubscribing = false
        statusContinuation?.yield(.filterSubscriptionChanged(subscribed: false, failedAttempts: 0))
        _ = await subscribe()
    }

    // MARK: - Private Methods

    private func initializeNode() async -> Bool {
        let config = """
        {
            "tcpPort": 60000,
            "clusterId": 1,
            "shards": [0],
            "relay": false,
            "lightpush": true,
            "filter": true,
            "logLevel": "DEBUG",
            "discv5Discovery": true,
            "discv5BootstrapNodes": [
                "enr:-QESuEB4Dchgjn7gfAvwB00CxTA-nGiyk-aALI-H4dYSZD3rUk7bZHmP8d2U6xDiQ2vZffpo45Jp7zKNdnwDUx6g4o6XAYJpZIJ2NIJpcIRA4VDAim11bHRpYWRkcnO4XAArNiZub2RlLTAxLmRvLWFtczMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwAtNiZub2RlLTAxLmRvLWFtczMud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQOvD3S3jUNICsrOILlmhENiWAMmMVlAl6-Q8wRB7hidY4N0Y3CCdl-DdWRwgiMohXdha3UyDw",
                "enr:-QEkuEBIkb8q8_mrorHndoXH9t5N6ZfD-jehQCrYeoJDPHqT0l0wyaONa2-piRQsi3oVKAzDShDVeoQhy0uwN1xbZfPZAYJpZIJ2NIJpcIQiQlleim11bHRpYWRkcnO4bgA0Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQZ2XwA2Ni9ub2RlLTAxLmdjLXVzLWNlbnRyYWwxLWEud2FrdS5zYW5kYm94LnN0YXR1cy5pbQYfQN4DgnJzkwABCAAAAAEAAgADAAQABQAGAAeJc2VjcDI1NmsxoQKnGt-GSgqPSf3IAPM7bFgTlpczpMZZLF3geeoNNsxzSoN0Y3CCdl-DdWRwgiMohXdha3UyDw"
            ],
            "discv5UdpPort": 9999,
            "dnsDiscovery": true,
            "dnsDiscoveryUrl": "enrtree://AOGYWMBYOUIMOENHXCHILPKY3ZRFEULMFI4DOM442QSZ73TT2A7VI@test.waku.nodes.status.im",
            "dnsDiscoveryNameServers": ["8.8.8.8", "1.0.0.1"]
        }
        """

        // Create node - waku_new is special, it returns the context directly
        let createResult = await withCheckedContinuation { (continuation: CheckedContinuation<(ctx: UnsafeMutableRawPointer?, success: Bool, result: String?), Never>) in
            let callbackCtx = CallbackContext()
            let userDataPtr = Unmanaged.passRetained(callbackCtx).toOpaque()

            // Set up a simple callback for waku_new
            let newCtx = waku_new(config, { ret, msg, len, userData in
                guard let userData = userData else { return }
                let context = Unmanaged<CallbackContext>.fromOpaque(userData).takeUnretainedValue()
                context.success = (ret == RET_OK)
                if let msg = msg {
                    context.result = String(cString: msg)
                }
            }, userDataPtr)

            // Small delay to ensure callback completes
            DispatchQueue.global().asyncAfter(deadline: .now() + 0.1) {
                Unmanaged<CallbackContext>.fromOpaque(userDataPtr).release()
                continuation.resume(returning: (newCtx, callbackCtx.success, callbackCtx.result))
            }
        }

        guard createResult.ctx != nil else {
            statusContinuation?.yield(.statusChanged(.error))
            statusContinuation?.yield(.error("Failed to create node: \(createResult.result ?? "unknown")"))
            return false
        }

        ctx = createResult.ctx

        // Set event callback
        waku_set_event_callback(ctx, WakuActor.eventCallback, nil)

        // Start node
        let startResult = await callWakuSync { userData in
            waku_start(self.ctx, WakuActor.syncCallback, userData)
        }

        guard startResult.success else {
            statusContinuation?.yield(.statusChanged(.error))
            statusContinuation?.yield(.error("Failed to start node: \(startResult.result ?? "unknown")"))
            ctx = nil
            return false
        }

        print("[WakuActor] Node started")
        return true
    }

    private func connectToPeer() async -> Bool {
        guard let context = ctx else { return false }

        print("[WakuActor] Connecting to static peer...")

        let result = await callWakuSync { userData in
            waku_connect(context, self.staticPeer, 10000, WakuActor.syncCallback, userData)
        }

        if result.success {
            print("[WakuActor] Connected to peer successfully")
            return true
        } else {
            print("[WakuActor] Failed to connect: \(result.result ?? "unknown")")
            return false
        }
    }

    private func subscribe(contentTopic: String? = nil) async -> Bool {
        guard let context = ctx else { return false }
        guard !isSubscribed && !isSubscribing else { return isSubscribed }

        isSubscribing = true
        let topic = contentTopic ?? defaultContentTopic

        let result = await callWakuSync { userData in
            waku_filter_subscribe(
                context,
                self.defaultPubsubTopic,
                topic,
                WakuActor.syncCallback,
                userData
            )
        }

        isSubscribing = false

        if result.success {
            print("[WakuActor] Subscribe request successful to \(topic)")
            isSubscribed = true
            statusContinuation?.yield(.filterSubscriptionChanged(subscribed: true, failedAttempts: 0))
            return true
        } else {
            print("[WakuActor] Subscribe error: \(result.result ?? "unknown")")
            isSubscribed = false
            return false
        }
    }

    private func pingFilterPeer() async -> Bool {
        guard let context = ctx else { return false }

        let result = await callWakuSync { userData in
            waku_ping_peer(
                context,
                self.staticPeer,
                10000,
                WakuActor.syncCallback,
                userData
            )
        }

        return result.success
    }

    // MARK: - Subscription Maintenance

    private func startMaintenanceLoop() {
        guard maintenanceTask == nil else {
            print("[WakuActor] Maintenance loop already running")
            return
        }

        statusContinuation?.yield(.maintenanceChanged(active: true))
        print("[WakuActor] Starting subscription maintenance loop")

        maintenanceTask = Task { [weak self] in
            guard let self = self else { return }

            var failedSubscribes = 0
            var isFirstPingOnConnection = true

            while !Task.isCancelled {
                guard await self.isRunning else { break }

                print("[WakuActor] Maintaining subscription...")

                let pingSuccess = await self.pingFilterPeer()
                let currentlySubscribed = await self.isSubscribed

                if pingSuccess && currentlySubscribed {
                    print("[WakuActor] Subscription is live, waiting 30s")
                    try? await Task.sleep(nanoseconds: self.maintenanceIntervalSeconds)
                    continue
                }

                if !isFirstPingOnConnection && !pingSuccess {
                    print("[WakuActor] Ping failed - subscription may be lost")
                    await self.statusContinuation?.yield(.filterSubscriptionChanged(subscribed: false, failedAttempts: failedSubscribes))
                }
                isFirstPingOnConnection = false

                print("[WakuActor] No active subscription found. Sending subscribe request...")

                await self.resetSubscriptionState()
                let subscribeSuccess = await self.subscribe()

                if subscribeSuccess {
                    print("[WakuActor] Subscribe request successful")
                    failedSubscribes = 0
                    try? await Task.sleep(nanoseconds: self.maintenanceIntervalSeconds)
                    continue
                }

                failedSubscribes += 1
                await self.statusContinuation?.yield(.filterSubscriptionChanged(subscribed: false, failedAttempts: failedSubscribes))
                print("[WakuActor] Subscribe request failed. Attempt \(failedSubscribes)/\(self.maxFailedSubscribes)")

                if failedSubscribes < self.maxFailedSubscribes {
                    print("[WakuActor] Retrying in 2s...")
                    try? await Task.sleep(nanoseconds: self.retryWaitSeconds)
                } else {
                    print("[WakuActor] Max subscribe failures reached")
                    await self.statusContinuation?.yield(.error("Filter subscription failed after \(self.maxFailedSubscribes) attempts"))
                    failedSubscribes = 0
                    try? await Task.sleep(nanoseconds: self.maintenanceIntervalSeconds)
                }
            }

            print("[WakuActor] Subscription maintenance loop stopped")
            await self.statusContinuation?.yield(.maintenanceChanged(active: false))
        }
    }
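    // Note: a filter subscription is soft state held by the remote service node, so
    // the loop above re-validates it: ping success plus the local subscribed flag
    // means "still live, sleep 30s"; anything else triggers a resubscribe, with up
    // to maxFailedSubscribes quick retries (2s apart) before reporting an error and
    // backing off to the regular 30s interval.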

    private func resetSubscriptionState() {
        isSubscribed = false
        isSubscribing = false
    }

    // MARK: - Event Handling

    private func handleEvent(_ eventJson: String) {
        guard let data = eventJson.data(using: .utf8),
              let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
              let eventType = json["eventType"] as? String else {
            return
        }

        if eventType == "connection_change" {
            handleConnectionChange(json)
        } else if eventType == "message" {
            handleMessage(json)
        }
    }

    private func handleConnectionChange(_ json: [String: Any]) {
        guard let peerEvent = json["peerEvent"] as? String else { return }

        if peerEvent == "Joined" || peerEvent == "Identified" {
            hasPeers = true
            statusContinuation?.yield(.connectionChanged(isConnected: true))
        } else if peerEvent == "Left" {
            statusContinuation?.yield(.filterSubscriptionChanged(subscribed: false, failedAttempts: 0))
        }
    }

    private func handleMessage(_ json: [String: Any]) {
        guard let messageHash = json["messageHash"] as? String,
              let wakuMessage = json["wakuMessage"] as? [String: Any],
              let payloadBase64 = wakuMessage["payload"] as? String,
              let contentTopic = wakuMessage["contentTopic"] as? String,
              let payloadData = Data(base64Encoded: payloadBase64),
              let payloadString = String(data: payloadData, encoding: .utf8) else {
            return
        }

        // Deduplicate
        guard !seenMessageHashes.contains(messageHash) else {
            return
        }

        seenMessageHashes.insert(messageHash)

        // Limit memory usage
        if seenMessageHashes.count > maxSeenHashes {
            seenMessageHashes.removeAll()
        }

        let message = WakuMessage(
            id: messageHash,
            payload: payloadString,
            contentTopic: contentTopic,
            timestamp: Date()
        )

        messageContinuation?.yield(message)
    }

    // MARK: - Helper for synchronous C calls

    private func callWakuSync(_ work: @escaping (UnsafeMutableRawPointer) -> Void) async -> (success: Bool, result: String?) {
        await withCheckedContinuation { continuation in
            let context = CallbackContext()
            context.continuation = continuation
            let userDataPtr = Unmanaged.passRetained(context).toOpaque()

            work(userDataPtr)

            // Set a timeout to avoid hanging forever
            DispatchQueue.global().asyncAfter(deadline: .now() + 15) {
                // Try to resume with timeout - will be ignored if callback already resumed
                let didTimeout = context.resumeOnce(returning: (false, "Timeout"))
                if didTimeout {
                    print("[WakuActor] Call timed out")
                }
                Unmanaged<CallbackContext>.fromOpaque(userDataPtr).release()
            }
        }
    }
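    // Note: passRetained keeps the CallbackContext alive (+1 reference) while C holds
    // the raw pointer; the matching release happens unconditionally on the timeout
    // path 15s later, so the pointer outlives any possible callback invocation.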
}

// MARK: - WakuNode (MainActor UI Wrapper)

/// Main-thread UI wrapper that consumes updates from WakuActor via AsyncStreams
@MainActor
class WakuNode: ObservableObject {

    // MARK: - Published Properties (UI State)

    @Published var status: WakuNodeStatus = .stopped
    @Published var receivedMessages: [WakuMessage] = []
    @Published var errorQueue: [TimestampedError] = []
    @Published var isConnected: Bool = false
    @Published var filterSubscribed: Bool = false
    @Published var subscriptionMaintenanceActive: Bool = false
    @Published var failedSubscribeAttempts: Int = 0

    // Topics (read-only access to actor's config)
    var defaultPubsubTopic: String { "/waku/2/rs/1/0" }
    var defaultContentTopic: String { "/waku-ios-example/1/chat/proto" }

    // MARK: - Private Properties

    private let actor = WakuActor()
    private var messageTask: Task<Void, Never>?
    private var statusTask: Task<Void, Never>?

    // MARK: - Initialization

    init() {}

    deinit {
        messageTask?.cancel()
        statusTask?.cancel()
    }

    // MARK: - Public API

    func start() {
        guard status == .stopped || status == .error else {
            print("[WakuNode] Already started or starting")
            return
        }

        // Create message stream
        let messageStream = AsyncStream<WakuMessage> { continuation in
            Task {
                await self.actor.setMessageContinuation(continuation)
            }
        }

        // Create status stream
        let statusStream = AsyncStream<WakuStatusUpdate> { continuation in
            Task {
                await self.actor.setStatusContinuation(continuation)
            }
        }

        // Start consuming messages
        messageTask = Task { @MainActor in
            for await message in messageStream {
                self.receivedMessages.insert(message, at: 0)
                if self.receivedMessages.count > 100 {
                    self.receivedMessages.removeLast()
                }
            }
        }

        // Start consuming status updates
        statusTask = Task { @MainActor in
            for await update in statusStream {
                self.handleStatusUpdate(update)
            }
        }

        // Start the actor
        Task {
            await actor.start()
        }
    }

    func stop() {
        messageTask?.cancel()
        messageTask = nil
        statusTask?.cancel()
        statusTask = nil

        Task {
            await actor.stop()
        }

        // Immediate UI update
        status = .stopped
        isConnected = false
        filterSubscribed = false
        subscriptionMaintenanceActive = false
        failedSubscribeAttempts = 0
    }

    func publish(message: String, contentTopic: String? = nil) {
        Task {
            await actor.publish(message: message, contentTopic: contentTopic)
        }
    }

    func resubscribe() {
        Task {
            await actor.resubscribe()
        }
    }

    func dismissError(_ error: TimestampedError) {
        errorQueue.removeAll { $0.id == error.id }
    }

    func dismissAllErrors() {
        errorQueue.removeAll()
    }

    // MARK: - Private Methods

    private func handleStatusUpdate(_ update: WakuStatusUpdate) {
        switch update {
        case .statusChanged(let newStatus):
            status = newStatus

        case .connectionChanged(let connected):
            isConnected = connected

        case .filterSubscriptionChanged(let subscribed, let attempts):
            filterSubscribed = subscribed
            failedSubscribeAttempts = attempts

        case .maintenanceChanged(let active):
            subscriptionMaintenanceActive = active

        case .error(let message):
            let error = TimestampedError(message: message, timestamp: Date())
            errorQueue.append(error)

            // Schedule auto-dismiss after 10 seconds
            let errorId = error.id
            Task { @MainActor in
                try? await Task.sleep(nanoseconds: 10_000_000_000)
                self.errorQueue.removeAll { $0.id == errorId }
            }
        }
    }
}
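Everything UI-facing in WakuNode is plain @Published state, so a SwiftUI view only needs to observe an instance of it. The example app's actual ContentView is not shown in this diff; the following is a minimal hypothetical sketch (view name and layout are illustrative only) of how the wiring could look:

import SwiftUI

// Hypothetical usage sketch, not part of the diff.
struct MinimalWakuView: View {
    @StateObject private var node = WakuNode()
    @State private var draft = ""

    var body: some View {
        VStack {
            Text("Status: \(node.status.rawValue)")
            List(node.receivedMessages) { message in
                Text(message.payload)
            }
            TextField("Message", text: $draft)
            Button("Send") { node.publish(message: draft) }
                .disabled(!node.isConnected)
        }
        .onAppear { node.start() }
        .onDisappear { node.stop() }
    }
}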
253 examples/ios/WakuExample/libwaku.h Normal file
@@ -0,0 +1,253 @@
// Generated manually and inspired by the one generated by the Nim Compiler.
// In order to see the header file generated by Nim just run `make libwaku`
// from the root repo folder and the header should be created in
// nimcache/release/libwaku/libwaku.h
#ifndef __libwaku__
#define __libwaku__

#include <stddef.h>
#include <stdint.h>

// The possible returned values for the functions that return int
#define RET_OK 0
#define RET_ERR 1
#define RET_MISSING_CALLBACK 2

#ifdef __cplusplus
extern "C" {
#endif

typedef void (*WakuCallBack) (int callerRet, const char* msg, size_t len, void* userData);

// Creates a new instance of the waku node.
// Sets up the waku node from the given configuration.
// Returns a pointer to the Context needed by the rest of the API functions.
void* waku_new(const char* configJson, WakuCallBack callback, void* userData);

int waku_start(void* ctx, WakuCallBack callback, void* userData);

int waku_stop(void* ctx, WakuCallBack callback, void* userData);

// Destroys an instance of a waku node created with waku_new
int waku_destroy(void* ctx, WakuCallBack callback, void* userData);

int waku_version(void* ctx, WakuCallBack callback, void* userData);

// Sets a callback that will be invoked whenever an event occurs.
// It is crucial that the passed callback is fast, non-blocking and potentially thread-safe.
void waku_set_event_callback(void* ctx, WakuCallBack callback, void* userData);

int waku_content_topic(void* ctx, const char* appName, unsigned int appVersion, const char* contentTopicName, const char* encoding, WakuCallBack callback, void* userData);

int waku_pubsub_topic(void* ctx, const char* topicName, WakuCallBack callback, void* userData);

int waku_default_pubsub_topic(void* ctx, WakuCallBack callback, void* userData);

int waku_relay_publish(void* ctx, const char* pubSubTopic, const char* jsonWakuMessage, unsigned int timeoutMs, WakuCallBack callback, void* userData);

int waku_lightpush_publish(void* ctx, const char* pubSubTopic, const char* jsonWakuMessage, WakuCallBack callback, void* userData);

int waku_relay_subscribe(void* ctx, const char* pubSubTopic, WakuCallBack callback, void* userData);

int waku_relay_add_protected_shard(void* ctx, int clusterId, int shardId, char* publicKey, WakuCallBack callback, void* userData);

int waku_relay_unsubscribe(void* ctx, const char* pubSubTopic, WakuCallBack callback, void* userData);

int waku_filter_subscribe(void* ctx, const char* pubSubTopic, const char* contentTopics, WakuCallBack callback, void* userData);

int waku_filter_unsubscribe(void* ctx, const char* pubSubTopic, const char* contentTopics, WakuCallBack callback, void* userData);

int waku_filter_unsubscribe_all(void* ctx, WakuCallBack callback, void* userData);

int waku_relay_get_num_connected_peers(void* ctx, const char* pubSubTopic, WakuCallBack callback, void* userData);

int waku_relay_get_connected_peers(void* ctx, const char* pubSubTopic, WakuCallBack callback, void* userData);

int waku_relay_get_num_peers_in_mesh(void* ctx, const char* pubSubTopic, WakuCallBack callback, void* userData);

int waku_relay_get_peers_in_mesh(void* ctx, const char* pubSubTopic, WakuCallBack callback, void* userData);

int waku_store_query(void* ctx, const char* jsonQuery, const char* peerAddr, int timeoutMs, WakuCallBack callback, void* userData);

int waku_connect(void* ctx, const char* peerMultiAddr, unsigned int timeoutMs, WakuCallBack callback, void* userData);

int waku_disconnect_peer_by_id(void* ctx, const char* peerId, WakuCallBack callback, void* userData);

int waku_disconnect_all_peers(void* ctx, WakuCallBack callback, void* userData);

int waku_dial_peer(void* ctx, const char* peerMultiAddr, const char* protocol, int timeoutMs, WakuCallBack callback, void* userData);

int waku_dial_peer_by_id(void* ctx, const char* peerId, const char* protocol, int timeoutMs, WakuCallBack callback, void* userData);

int waku_get_peerids_from_peerstore(void* ctx, WakuCallBack callback, void* userData);

int waku_get_connected_peers_info(void* ctx, WakuCallBack callback, void* userData);

int waku_get_peerids_by_protocol(void* ctx, const char* protocol, WakuCallBack callback, void* userData);

int waku_listen_addresses(void* ctx, WakuCallBack callback, void* userData);

int waku_get_connected_peers(void* ctx, WakuCallBack callback, void* userData);

// Returns a list of multiaddress given a url to a DNS discoverable ENR tree
// Parameters
//   char* entTreeUrl: URL containing a discoverable ENR tree
//   char* nameDnsServer: The nameserver to resolve the ENR tree url.
//   int timeoutMs: Timeout value in milliseconds to execute the call.
int waku_dns_discovery(void* ctx, const char* entTreeUrl, const char* nameDnsServer, int timeoutMs, WakuCallBack callback, void* userData);

// Updates the bootnode list used for discovering new peers via DiscoveryV5
// bootnodes - JSON array containing the bootnode ENRs i.e. `["enr:...", "enr:..."]`
int waku_discv5_update_bootnodes(void* ctx, char* bootnodes, WakuCallBack callback, void* userData);

int waku_start_discv5(void* ctx, WakuCallBack callback, void* userData);

int waku_stop_discv5(void* ctx, WakuCallBack callback, void* userData);

// Retrieves the ENR information
int waku_get_my_enr(void* ctx, WakuCallBack callback, void* userData);

int waku_get_my_peerid(void* ctx, WakuCallBack callback, void* userData);

int waku_get_metrics(void* ctx, WakuCallBack callback, void* userData);

int waku_peer_exchange_request(void* ctx, int numPeers, WakuCallBack callback, void* userData);

int waku_ping_peer(void* ctx, const char* peerAddr, int timeoutMs, WakuCallBack callback, void* userData);

int waku_is_online(void* ctx, WakuCallBack callback, void* userData);

#ifdef __cplusplus
}
#endif

#endif /* __libwaku__ */
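Every function in this header follows the same contract: it receives the context returned by waku_new, a WakuCallBack, and an opaque userData pointer that is handed back to the callback, and the int result is one of the RET_* codes. As a minimal hypothetical sketch (assuming the bridging header configured in project.yml below exposes these symbols to Swift; the function name here is illustrative), a version query could look like:

// Hypothetical sketch, not part of the diff: a direct C-API call from Swift.
func printWakuVersion(_ ctx: UnsafeMutableRawPointer) {
    let callback: WakuCallBack = { ret, msg, _, _ in
        // Capture-free closure: C function pointers cannot hold Swift state.
        guard ret == RET_OK, let msg = msg else { return }
        print("libwaku version:", String(cString: msg))
    }
    let ret = waku_version(ctx, callback, nil)
    if ret != RET_OK {
        print("waku_version failed with code \(ret)")
    }
}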
47 examples/ios/project.yml Normal file
@@ -0,0 +1,47 @@
name: WakuExample
options:
  bundleIdPrefix: org.waku
  deploymentTarget:
    iOS: "14.0"
  xcodeVersion: "15.0"

settings:
  SWIFT_VERSION: "5.0"
  SUPPORTED_PLATFORMS: "iphoneos iphonesimulator"
  SUPPORTS_MACCATALYST: "NO"

targets:
  WakuExample:
    type: application
    platform: iOS
    supportedDestinations: [iOS]
    sources:
      - WakuExample
    settings:
      INFOPLIST_FILE: WakuExample/Info.plist
      PRODUCT_BUNDLE_IDENTIFIER: org.waku.example
      SWIFT_OBJC_BRIDGING_HEADER: WakuExample/WakuExample-Bridging-Header.h
      HEADER_SEARCH_PATHS:
        - "$(PROJECT_DIR)/WakuExample"
      "LIBRARY_SEARCH_PATHS[sdk=iphoneos*]":
        - "$(PROJECT_DIR)/../../build/ios/iphoneos-arm64"
      "LIBRARY_SEARCH_PATHS[sdk=iphonesimulator*]":
        - "$(PROJECT_DIR)/../../build/ios/iphonesimulator-arm64"
      OTHER_LDFLAGS:
        - "-lc++"
        - "-lwaku"
      IPHONEOS_DEPLOYMENT_TARGET: "14.0"
    info:
      path: WakuExample/Info.plist
      properties:
        CFBundleName: WakuExample
        CFBundleDisplayName: Waku Example
        CFBundleIdentifier: org.waku.example
        CFBundleVersion: "1"
        CFBundleShortVersionString: "1.0"
        UILaunchScreen: {}
        UISupportedInterfaceOrientations:
          - UIInterfaceOrientationPortrait
        NSAppTransportSecurity:
          NSAllowsArbitraryLoads: true
@@ -51,7 +51,6 @@ proc splitPeerIdAndAddr(maddr: string): (string, string) =
 proc setupAndPublish(rng: ref HmacDrbgContext, conf: LightPushMixConf) {.async.} =
   # use notice to filter all waku messaging
   setupLog(logging.LogLevel.DEBUG, logging.LogFormat.TEXT)
-
   notice "starting publisher", wakuPort = conf.port

   let
@@ -114,17 +113,8 @@ proc setupAndPublish(rng: ref HmacDrbgContext, conf: LightPushMixConf) {.async.}
   let dPeerId = PeerId.init(destPeerId).valueOr:
     error "Failed to initialize PeerId", error = error
     return
-  var conn: Connection
-  if not conf.mixDisabled:
-    conn = node.wakuMix.toConnection(
-      MixDestination.init(dPeerId, pxPeerInfo.addrs[0]), # destination lightpush peer
-      WakuLightPushCodec, # protocol codec which will be used over the mix connection
-      MixParameters(expectReply: Opt.some(true), numSurbs: Opt.some(byte(1))),
-        # mix parameters indicating we expect a single reply
-    ).valueOr:
-      error "failed to create mix connection", error = error
-      return

+  await node.mountRendezvousClient(clusterId)
   await node.start()
   node.peerManager.start()
   node.startPeerExchangeLoop()
@@ -145,20 +135,26 @@ proc setupAndPublish(rng: ref HmacDrbgContext, conf: LightPushMixConf) {.async.}

   var i = 0
   while i < conf.numMsgs:
+    var conn: Connection
     if conf.mixDisabled:
       let connOpt = await node.peerManager.dialPeer(dPeerId, WakuLightPushCodec)
       if connOpt.isNone():
         error "failed to dial peer with WakuLightPushCodec", target_peer_id = dPeerId
         return
       conn = connOpt.get()
+    else:
+      conn = node.wakuMix.toConnection(
+        MixDestination.exitNode(dPeerId), # destination lightpush peer
+        WakuLightPushCodec, # protocol codec which will be used over the mix connection
+        MixParameters(expectReply: Opt.some(true), numSurbs: Opt.some(byte(1))),
+          # mix parameters indicating we expect a single reply
+      ).valueOr:
+        error "failed to create mix connection", error = error
+        return
     i = i + 1
     let text =
       """Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam venenatis magna ut tortor faucibus, in vestibulum nibh commodo. Aenean eget vestibulum augue. Nullam suscipit urna non nunc efficitur, at iaculis nisl consequat. Mauris quis ultrices elit. Suspendisse lobortis odio vitae laoreet facilisis. Cras ornare sem felis, at vulputate magna aliquam ac. Duis quis est ultricies, euismod nulla ac, interdum dui. Maecenas sit amet est vitae enim commodo gravida. Proin vitae elit nulla. Donec tempor dolor lectus, in faucibus velit elementum quis. Donec non mauris eu nibh faucibus cursus ut egestas dolor. Aliquam venenatis ligula id velit pulvinar malesuada. Vestibulum scelerisque, justo non porta gravida, nulla justo tempor purus, at sollicitudin erat erat vel libero.
-Fusce nec eros eu metus tristique aliquet. Sed ut magna sagittis, vulputate diam sit amet, aliquam magna. Aenean sollicitudin velit lacus, eu ultrices magna semper at. Integer vitae felis ligula. In a eros nec risus condimentum tincidunt fermentum sit amet ex. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Nullam vitae justo maximus, fringilla tellus nec, rutrum purus. Etiam efficitur nisi dapibus euismod vestibulum. Phasellus at felis elementum, tristique nulla ac, consectetur neque.
-Maecenas hendrerit nibh eget velit rutrum, in ornare mauris molestie. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia curae; Praesent dignissim efficitur eros, sit amet rutrum justo mattis a. Fusce mollis neque at erat placerat bibendum. Ut fringilla fringilla orci, ut fringilla metus fermentum vel. In hac habitasse platea dictumst. Donec hendrerit porttitor odio. Suspendisse ornare sollicitudin mauris, sodales pulvinar velit finibus vel. Fusce id pulvinar neque. Suspendisse eget tincidunt sapien, ac accumsan turpis.
-Curabitur cursus tincidunt leo at aliquet. Nunc dapibus quam id venenatis varius. Aenean eget augue vel velit dapibus aliquam. Nulla facilisi. Curabitur cursus, turpis vel congue volutpat, tellus eros cursus lacus, eu fringilla turpis orci non ipsum. In hac habitasse platea dictumst. Nulla aliquam nisl a nunc placerat, eget dignissim felis pulvinar. Fusce sed porta mauris. Donec sodales arcu in nisl sodales, quis posuere massa ultricies. Nam feugiat massa eget felis ultricies finibus. Nunc magna nulla, interdum a elit vel, egestas efficitur urna. Ut posuere tincidunt odio in maximus. Sed at dignissim est.
-Morbi accumsan elementum ligula ut fringilla. Praesent in ex metus. Phasellus urna est, tempus sit amet elementum vitae, sollicitudin vel ipsum. Fusce hendrerit eleifend dignissim. Maecenas tempor dapibus dui quis laoreet. Cras tincidunt sed ipsum sed pellentesque. Proin ut tellus nec ipsum varius interdum. Curabitur id velit ligula. Etiam sapien nulla, cursus sodales orci eu, porta lobortis nunc. Nunc at dapibus velit. Nulla et nunc vehicula, condimentum erat quis, elementum dolor. Quisque eu metus fermentum, vestibulum tellus at, sollicitudin odio. Ut vel neque justo.
-Praesent porta porta velit, vel porttitor sem. Donec sagittis at nulla venenatis iaculis. Nullam vel eleifend felis. Nullam a pellentesque lectus. Aliquam tincidunt semper dui sed bibendum. Donec hendrerit, urna et cursus dictum, neque neque convallis magna, id condimentum sem urna quis massa. Fusce non quam vulputate, fermentum mauris at, malesuada ipsum. Mauris id pellentesque libero. Donec vel erat ullamcorper, dapibus quam id, imperdiet urna. Praesent sed ligula ut est pellentesque pharetra quis et diam. Ut placerat lorem eget mi fermentum aliquet.
+Fusce nec eros eu metus tristique aliquet.
 This is message #""" &
       $i & """ sent from a publisher using mix. End of transmission."""
     let message = WakuMessage(
@@ -168,25 +164,34 @@ proc setupAndPublish(rng: ref HmacDrbgContext, conf: LightPushMixConf) {.async.}
       timestamp: getNowInNanosecondTime(),
     ) # current timestamp

-    let res = await node.wakuLightpushClient.publishWithConn(
-      LightpushPubsubTopic, message, conn, dPeerId
-    )
+    let res =
+      await node.wakuLightpushClient.publish(some(LightpushPubsubTopic), message, conn)

-    if res.isOk():
-      lp_mix_success.inc()
-      notice "published message",
-        text = text,
-        timestamp = message.timestamp,
-        psTopic = LightpushPubsubTopic,
-        contentTopic = LightpushContentTopic
-    else:
-      error "failed to publish message", error = $res.error
+    let startTime = getNowInNanosecondTime()
+
+    (
+      await node.wakuLightpushClient.publishWithConn(
+        LightpushPubsubTopic, message, conn, dPeerId
+      )
+    ).isOkOr:
+      error "failed to publish message via mix", error = error.desc
       lp_mix_failed.inc(labelValues = ["publish_error"])
+      return
+
+    let latency = float64(getNowInNanosecondTime() - startTime) / 1_000_000.0
+    lp_mix_latency.observe(latency)
+    lp_mix_success.inc()
+    notice "published message",
+      text = text,
+      timestamp = message.timestamp,
+      latency = latency,
+      psTopic = LightpushPubsubTopic,
+      contentTopic = LightpushContentTopic

     if conf.mixDisabled:
       await conn.close()
     await sleepAsync(conf.msgIntervalMilliseconds)
-  info "###########Sent all messages via mix"
+  info "Sent all messages via mix"
   quit(0)

 when isMainModule:
@@ -6,3 +6,6 @@ declarePublicCounter lp_mix_success, "number of lightpush messages sent via mix"

 declarePublicCounter lp_mix_failed,
   "number of lightpush messages failed via mix", labels = ["error"]
+
+declarePublicHistogram lp_mix_latency,
+  "lightpush publish latency via mix in milliseconds"
@@ -102,8 +102,8 @@ print("Waku Relay enabled: {}".format(args.relay))
 # Set the event callback
 callback = callback_type(handle_event) # This line is important so that the callback is not gc'ed

-libwaku.waku_set_event_callback.argtypes = [callback_type, ctypes.c_void_p]
-libwaku.waku_set_event_callback(callback, ctypes.c_void_p(0))
+libwaku.set_event_callback.argtypes = [callback_type, ctypes.c_void_p]
+libwaku.set_event_callback(callback, ctypes.c_void_p(0))

 # Start the node
 libwaku.waku_start.argtypes = [ctypes.c_void_p,
@@ -117,32 +117,32 @@ libwaku.waku_start(ctx,

 # Subscribe to the default pubsub topic
 libwaku.waku_relay_subscribe.argtypes = [ctypes.c_void_p,
-                                         ctypes.c_char_p,
                                          callback_type,
-                                         ctypes.c_void_p]
+                                         ctypes.c_void_p,
+                                         ctypes.c_char_p]
 libwaku.waku_relay_subscribe(ctx,
-                             default_pubsub_topic.encode('utf-8'),
                              callback_type(
                                #onErrCb
                                lambda ret, msg, len:
                                  print("Error calling waku_relay_subscribe: %s" %
                                        msg.decode('utf-8'))
                              ),
-                             ctypes.c_void_p(0))
+                             ctypes.c_void_p(0),
+                             default_pubsub_topic.encode('utf-8'))

 libwaku.waku_connect.argtypes = [ctypes.c_void_p,
-                                 ctypes.c_char_p,
-                                 ctypes.c_int,
                                  callback_type,
-                                 ctypes.c_void_p]
+                                 ctypes.c_void_p,
+                                 ctypes.c_char_p,
+                                 ctypes.c_int]
 libwaku.waku_connect(ctx,
-                     args.peer.encode('utf-8'),
-                     10000,
                      # onErrCb
                      callback_type(
                        lambda ret, msg, len:
                          print("Error calling waku_connect: %s" % msg.decode('utf-8'))),
-                     ctypes.c_void_p(0))
+                     ctypes.c_void_p(0),
+                     args.peer.encode('utf-8'),
+                     10000)

 # app = Flask(__name__)
 # @app.route("/")
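Note the shape of this change: in the updated FFI the extra arguments (the pubsub topic string, the peer address, the timeout) now come after the callback/userData pair rather than before it, and the event-callback setter is exported as set_event_callback, without the waku_ prefix — the same rename that appears in the C++ example below.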
@@ -27,7 +27,7 @@ public:
     void initialize(const QString& jsonConfig, WakuCallBack event_handler, void* userData) {
         ctx = waku_new(jsonConfig.toUtf8().constData(), WakuCallBack(event_handler), userData);

-        waku_set_event_callback(ctx, on_event_received, userData);
+        set_event_callback(ctx, on_event_received, userData);
         qDebug() << "Waku context initialized, ready to start.";
     }
@@ -3,22 +3,22 @@ use std::ffi::CString;
 use std::os::raw::{c_char, c_int, c_void};
 use std::{slice, thread, time};

-pub type WakuCallback = unsafe extern "C" fn(c_int, *const c_char, usize, *const c_void);
+pub type FFICallBack = unsafe extern "C" fn(c_int, *const c_char, usize, *const c_void);

 extern "C" {
     pub fn waku_new(
         config_json: *const u8,
-        cb: WakuCallback,
+        cb: FFICallBack,
         user_data: *const c_void,
     ) -> *mut c_void;

-    pub fn waku_version(ctx: *const c_void, cb: WakuCallback, user_data: *const c_void) -> c_int;
+    pub fn waku_version(ctx: *const c_void, cb: FFICallBack, user_data: *const c_void) -> c_int;

-    pub fn waku_start(ctx: *const c_void, cb: WakuCallback, user_data: *const c_void) -> c_int;
+    pub fn waku_start(ctx: *const c_void, cb: FFICallBack, user_data: *const c_void) -> c_int;

     pub fn waku_default_pubsub_topic(
         ctx: *mut c_void,
-        cb: WakuCallback,
+        cb: FFICallBack,
         user_data: *const c_void,
     ) -> *mut c_void;
 }
@@ -40,7 +40,7 @@ pub unsafe extern "C" fn trampoline<C>(
     closure(return_val, &buffer_utf8);
 }

-pub fn get_trampoline<C>(_closure: &C) -> WakuCallback
+pub fn get_trampoline<C>(_closure: &C) -> FFICallBack
 where
     C: FnMut(i32, &str),
 {
32 flake.lock generated
@@ -22,24 +22,46 @@
         "zerokit": "zerokit"
       }
     },
-    "zerokit": {
+    "rust-overlay": {
       "inputs": {
         "nixpkgs": [
+          "zerokit",
           "nixpkgs"
         ]
       },
       "locked": {
-        "lastModified": 1743756626,
-        "narHash": "sha256-SvhfEl0bJcRsCd79jYvZbxQecGV2aT+TXjJ57WVv7Aw=",
+        "lastModified": 1748399823,
+        "narHash": "sha256-kahD8D5hOXOsGbNdoLLnqCL887cjHkx98Izc37nDjlA=",
+        "owner": "oxalica",
+        "repo": "rust-overlay",
+        "rev": "d68a69dc71bc19beb3479800392112c2f6218159",
+        "type": "github"
+      },
+      "original": {
+        "owner": "oxalica",
+        "repo": "rust-overlay",
+        "type": "github"
+      }
+    },
+    "zerokit": {
+      "inputs": {
+        "nixpkgs": [
+          "nixpkgs"
+        ],
+        "rust-overlay": "rust-overlay"
+      },
+      "locked": {
+        "lastModified": 1749115386,
+        "narHash": "sha256-UexIE2D7zr6aRajwnKongXwCZCeRZDXOL0kfjhqUFSU=",
         "owner": "vacp2p",
         "repo": "zerokit",
-        "rev": "c60e0c33fc6350a4b1c20e6b6727c44317129582",
+        "rev": "dc0b31752c91e7b4fefc441cfa6a8210ad7dba7b",
         "type": "github"
       },
       "original": {
         "owner": "vacp2p",
         "repo": "zerokit",
-        "rev": "c60e0c33fc6350a4b1c20e6b6727c44317129582",
+        "rev": "dc0b31752c91e7b4fefc441cfa6a8210ad7dba7b",
         "type": "github"
       }
     }
17 flake.nix
@@ -9,7 +9,7 @@
   inputs = {
     nixpkgs.url = "github:NixOS/nixpkgs?rev=f44bd8ca21e026135061a0a57dcf3d0775b67a49";
     zerokit = {
-      url = "github:vacp2p/zerokit?rev=c60e0c33fc6350a4b1c20e6b6727c44317129582";
+      url = "github:vacp2p/zerokit?rev=dc0b31752c91e7b4fefc441cfa6a8210ad7dba7b";
       inputs.nixpkgs.follows = "nixpkgs";
     };
   };
@@ -49,11 +49,18 @@
     libwaku-android-arm64 = pkgs.callPackage ./nix/default.nix {
       inherit stableSystems;
       src = self;
       targets = ["libwaku-android-arm64"];
-      androidArch = "aarch64-linux-android";
       abidir = "arm64-v8a";
-      zerokitPkg = zerokit.packages.${system}.zerokit-android-arm64;
+      zerokitRln = zerokit.packages.${system}.rln-android-arm64;
     };

+    wakucanary = pkgs.callPackage ./nix/default.nix {
+      inherit stableSystems;
+      src = self;
+      targets = ["wakucanary"];
+      zerokitRln = zerokit.packages.${system}.rln;
+    };
+
     default = libwaku-android-arm64;
   });
@@ -61,4 +68,4 @@
     default = pkgsFor.${system}.callPackage ./nix/shell.nix {};
   });
 };
}
@@ -1,42 +0,0 @@
-## Can be shared safely between threads
-type SharedSeq*[T] = tuple[data: ptr UncheckedArray[T], len: int]
-
-proc alloc*(str: cstring): cstring =
-  # Byte allocation from the given address.
-  # There should be the corresponding manual deallocation with deallocShared !
-  if str.isNil():
-    var ret = cast[cstring](allocShared(1)) # Allocate memory for the null terminator
-    ret[0] = '\0' # Set the null terminator
-    return ret
-
-  let ret = cast[cstring](allocShared(len(str) + 1))
-  copyMem(ret, str, len(str) + 1)
-  return ret
-
-proc alloc*(str: string): cstring =
-  ## Byte allocation from the given address.
-  ## There should be the corresponding manual deallocation with deallocShared !
-  var ret = cast[cstring](allocShared(str.len + 1))
-  let s = cast[seq[char]](str)
-  for i in 0 ..< str.len:
-    ret[i] = s[i]
-  ret[str.len] = '\0'
-  return ret
-
-proc allocSharedSeq*[T](s: seq[T]): SharedSeq[T] =
-  let data = allocShared(sizeof(T) * s.len)
-  if s.len != 0:
-    copyMem(data, unsafeAddr s[0], s.len)
-  return (cast[ptr UncheckedArray[T]](data), s.len)
-
-proc deallocSharedSeq*[T](s: var SharedSeq[T]) =
-  deallocShared(s.data)
-  s.len = 0
-
-proc toSeq*[T](s: SharedSeq[T]): seq[T] =
-  ## Creates a seq[T] from a SharedSeq[T]. No explicit dealloc is required
-  ## as req[T] is a GC managed type.
-  var ret = newSeq[T]()
-  for i in 0 ..< s.len:
-    ret.add(s.data[i])
-  return ret
10 library/declare_lib.nim Normal file
@@ -0,0 +1,10 @@
|
|||||||
|
import ffi
|
||||||
|
import waku/factory/waku
|
||||||
|
|
||||||
|
declareLibrary("waku")
|
||||||
|
|
||||||
|
proc set_event_callback(
|
||||||
|
ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
|
||||||
|
) {.dynlib, exportc, cdecl.} =
|
||||||
|
ctx[].eventCallback = cast[pointer](callback)
|
||||||
|
ctx[].eventUserData = userData
|
||||||
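
Taken together with the C header changes further down, the exported `set_event_callback` can be exercised from C roughly as follows. This is a minimal sketch, assuming `ctx` was previously obtained from `waku_new`; the event payload is an opaque JSON string whose exact shape depends on the event type.

    #include <stdio.h>
    #include <stddef.h>

    /* Matches the FFICallBack typedef in the header diff below. */
    typedef void (*FFICallBack)(int callerRet, const char *msg, size_t len, void *userData);
    extern void set_event_callback(void *ctx, FFICallBack callback, void *userData);

    /* Must be fast and non-blocking: it runs on the library's own thread. */
    static void on_event(int callerRet, const char *msg, size_t len, void *userData) {
        (void)callerRet; (void)userData;
        printf("event: %.*s\n", (int)len, msg);
    }

    static void register_events(void *ctx) {
        set_event_callback(ctx, on_event, NULL);
    }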
@@ -1,9 +0,0 @@ (deleted file)
-import system, std/json, ./json_base_event
-
-type JsonWakuNotRespondingEvent* = ref object of JsonEvent
-
-proc new*(T: type JsonWakuNotRespondingEvent): T =
-  return JsonWakuNotRespondingEvent(eventType: "waku_not_responding")
-
-method `$`*(event: JsonWakuNotRespondingEvent): string =
-  $(%*event)
@@ -1,30 +0,0 @@ (deleted file)
-################################################################################
-### Exported types
-
-type WakuCallBack* = proc(
-  callerRet: cint, msg: ptr cchar, len: csize_t, userData: pointer
-) {.cdecl, gcsafe, raises: [].}
-
-const RET_OK*: cint = 0
-const RET_ERR*: cint = 1
-const RET_MISSING_CALLBACK*: cint = 2
-
-### End of exported types
-################################################################################
-
-################################################################################
-### FFI utils
-
-template foreignThreadGc*(body: untyped) =
-  when declared(setupForeignThreadGc):
-    setupForeignThreadGc()
-
-  body
-
-  when declared(tearDownForeignThreadGc):
-    tearDownForeignThreadGc()
-
-type onDone* = proc()
-
-### End of FFI utils
-################################################################################
library/ios_bearssl_stubs.c (Normal file, 32 lines)
@@ -0,0 +1,32 @@
+/**
+ * iOS stubs for BearSSL tools functions not normally included in the library.
+ * These are typically from the BearSSL tools/ directory which is for CLI tools.
+ */
+
+#include <stddef.h>
+
+/* x509_noanchor context - simplified stub */
+typedef struct {
+  void *vtable;
+  void *inner;
+} x509_noanchor_context;
+
+/* Stub for x509_noanchor_init - used to skip anchor validation */
+void x509_noanchor_init(x509_noanchor_context *xwc, const void **inner) {
+  if (xwc && inner) {
+    xwc->inner = (void*)*inner;
+    xwc->vtable = NULL;
+  }
+}
+
+/* TAs (Trust Anchors) - empty array stub */
+/* This is typically defined by applications with their CA certificates */
+typedef struct {
+  void *dn;
+  size_t dn_len;
+  unsigned flags;
+  void *pkey;
+} br_x509_trust_anchor;
+
+const br_x509_trust_anchor TAs[1] = {{0}};
+const size_t TAs_NUM = 0;
library/ios_natpmp_stubs.c (Normal file, 14 lines)
@@ -0,0 +1,14 @@
+/**
+ * iOS stub for getgateway.c functions.
+ * iOS doesn't have net/route.h, so we provide a stub that returns failure.
+ * NAT-PMP functionality won't work but the library will link.
+ */
+
+#include <stdint.h>
+#include <netinet/in.h>
+
+/* getdefaultgateway - returns -1 (failure) on iOS */
+int getdefaultgateway(in_addr_t *addr) {
+  (void)addr; /* unused */
+  return -1; /* failure - not supported on iOS */
+}
library/kernel_api/debug_node_api.nim (Normal file, 49 lines)
@@ -0,0 +1,49 @@
+import std/json
+import
+  chronicles,
+  chronos,
+  results,
+  eth/p2p/discoveryv5/enr,
+  strutils,
+  libp2p/peerid,
+  metrics,
+  ffi
+import waku/factory/waku, waku/node/waku_node, waku/node/health_monitor, library/declare_lib
+
+proc getMultiaddresses(node: WakuNode): seq[string] =
+  return node.info().listenAddresses
+
+proc getMetrics(): string =
+  {.gcsafe.}:
+    return defaultRegistry.toText() ## defaultRegistry is {.global.} in metrics module
+
+proc waku_version(
+    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
+) {.ffi.} =
+  return ok(WakuNodeVersionString)
+
+proc waku_listen_addresses(
+    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
+) {.ffi.} =
+  ## returns a comma-separated string of the listen addresses
+  return ok(ctx.myLib[].node.getMultiaddresses().join(","))
+
+proc waku_get_my_enr(
+    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
+) {.ffi.} =
+  return ok(ctx.myLib[].node.enr.toURI())
+
+proc waku_get_my_peerid(
+    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
+) {.ffi.} =
+  return ok($ctx.myLib[].node.peerId())
+
+proc waku_get_metrics(
+    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
+) {.ffi.} =
+  return ok(getMetrics())
+
+proc waku_is_online(
+    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
+) {.ffi.} =
+  return ok($ctx.myLib[].healthMonitor.onlineMonitor.amIOnline())
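
Each `{.ffi.}` proc above surfaces in the C API with the uniform `(ctx, callback, userData, ...)` shape. A minimal sketch of reading the node version and listen addresses from C, assuming `ctx` came from `waku_new`; the result strings arrive through the callback, comma-separated in the address case:

    #include <stdio.h>
    #include <stddef.h>

    typedef void (*FFICallBack)(int callerRet, const char *msg, size_t len, void *userData);
    extern int waku_version(void *ctx, FFICallBack callback, void *userData);
    extern int waku_listen_addresses(void *ctx, FFICallBack callback, void *userData);

    /* Prints whatever string result the call produced. */
    static void print_result(int ret, const char *msg, size_t len, void *ud) {
        (void)ud;
        printf("ret=%d: %.*s\n", ret, (int)len, msg);
    }

    static void debug_demo(void *ctx) {
        waku_version(ctx, print_result, NULL);
        waku_listen_addresses(ctx, print_result, NULL);
    }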
library/kernel_api/discovery_api.nim (Normal file, 96 lines)
@@ -0,0 +1,96 @@
+import std/json
+import chronos, chronicles, results, strutils, libp2p/multiaddress, ffi
+import
+  waku/factory/waku,
+  waku/discovery/waku_dnsdisc,
+  waku/discovery/waku_discv5,
+  waku/waku_core/peers,
+  waku/node/waku_node,
+  waku/node/kernel_api,
+  library/declare_lib
+
+proc retrieveBootstrapNodes(
+    enrTreeUrl: string, ipDnsServer: string
+): Future[Result[seq[string], string]] {.async.} =
+  let dnsNameServers = @[parseIpAddress(ipDnsServer)]
+  let discoveredPeers: seq[RemotePeerInfo] = (
+    await retrieveDynamicBootstrapNodes(enrTreeUrl, dnsNameServers)
+  ).valueOr:
+    return err("failed discovering peers from DNS: " & $error)
+
+  var multiAddresses = newSeq[string]()
+
+  for discPeer in discoveredPeers:
+    for address in discPeer.addrs:
+      multiAddresses.add($address & "/p2p/" & $discPeer)
+
+  return ok(multiAddresses)
+
+proc updateDiscv5BootstrapNodes(nodes: string, waku: Waku): Result[void, string] =
+  waku.wakuDiscv5.updateBootstrapRecords(nodes).isOkOr:
+    return err("error in updateDiscv5BootstrapNodes: " & $error)
+  return ok()
+
+proc performPeerExchangeRequestTo*(
+    numPeers: uint64, waku: Waku
+): Future[Result[int, string]] {.async.} =
+  let numPeersRecv = (await waku.node.fetchPeerExchangePeers(numPeers)).valueOr:
+    return err($error)
+  return ok(numPeersRecv)
+
+proc waku_discv5_update_bootnodes(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    bootnodes: cstring,
+) {.ffi.} =
+  ## Updates the bootnode list used for discovering new peers via DiscoveryV5
+  ## bootnodes - JSON array containing the bootnode ENRs i.e. `["enr:...", "enr:..."]`
+
+  updateDiscv5BootstrapNodes($bootnodes, ctx.myLib[]).isOkOr:
+    error "UPDATE_DISCV5_BOOTSTRAP_NODES failed", error = error
+    return err($error)
+
+  return ok("discovery request processed correctly")
+
+proc waku_dns_discovery(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    enrTreeUrl: cstring,
+    nameDnsServer: cstring,
+    timeoutMs: cint,
+) {.ffi.} =
+  let nodes = (await retrieveBootstrapNodes($enrTreeUrl, $nameDnsServer)).valueOr:
+    error "GET_BOOTSTRAP_NODES failed", error = error
+    return err($error)
+
+  ## returns a comma-separated string of bootstrap nodes' multiaddresses
+  return ok(nodes.join(","))
+
+proc waku_start_discv5(
+    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
+) {.ffi.} =
+  (await ctx.myLib[].wakuDiscv5.start()).isOkOr:
+    error "START_DISCV5 failed", error = error
+    return err("error starting discv5: " & $error)
+
+  return ok("discv5 started correctly")
+
+proc waku_stop_discv5(
+    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
+) {.ffi.} =
+  await ctx.myLib[].wakuDiscv5.stop()
+  return ok("discv5 stopped correctly")
+
+proc waku_peer_exchange_request(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    numPeers: uint64,
+) {.ffi.} =
+  let numValidPeers = (await performPeerExchangeRequestTo(numPeers, ctx.myLib[])).valueOr:
+    error "waku_peer_exchange_request failed", error = error
+    return err("failed peer exchange: " & $error)
+
+  return ok($numValidPeers)
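
From C, the corresponding export resolves an ENR tree and hands back the discovered multiaddresses as one comma-separated string. A sketch, assuming a valid `ctx` and a reachable DNS server; the `enrtree:` URL below is a placeholder, and `print_result` is the callback from the earlier sketch:

    typedef void (*FFICallBack)(int callerRet, const char *msg, size_t len, void *userData);
    extern int waku_dns_discovery(void *ctx, FFICallBack callback, void *userData,
                                  const char *entTreeUrl, const char *nameDnsServer,
                                  int timeoutMs);

    static void dns_discovery_demo(void *ctx, FFICallBack cb) {
        /* Hypothetical ENR tree URL; 5-second timeout. */
        waku_dns_discovery(ctx, cb, NULL,
                           "enrtree://EXAMPLE@nodes.example.org",
                           "8.8.8.8", 5000);
    }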
@@ -1,43 +1,14 @@
 import std/[options, json, strutils, net]
-import chronos, chronicles, results, confutils, confutils/std/net
+import chronos, chronicles, results, confutils, confutils/std/net, ffi

 import
   waku/node/peer_manager/peer_manager,
   tools/confutils/cli_args,
   waku/factory/waku,
   waku/factory/node_factory,
-  waku/factory/networks_config,
   waku/factory/app_callbacks,
-  waku/rest_api/endpoint/builder
-import
-  ../../alloc
-
-type NodeLifecycleMsgType* = enum
-  CREATE_NODE
-  START_NODE
-  STOP_NODE
-
-type NodeLifecycleRequest* = object
-  operation: NodeLifecycleMsgType
-  configJson: cstring ## Only used in 'CREATE_NODE' operation
-  appCallbacks: AppCallbacks
-
-proc createShared*(
-    T: type NodeLifecycleRequest,
-    op: NodeLifecycleMsgType,
-    configJson: cstring = "",
-    appCallbacks: AppCallbacks = nil,
-): ptr type T =
-  var ret = createShared(T)
-  ret[].operation = op
-  ret[].appCallbacks = appCallbacks
-  ret[].configJson = configJson.alloc()
-  return ret
-
-proc destroyShared(self: ptr NodeLifecycleRequest) =
-  deallocShared(self[].configJson)
-  deallocShared(self)
+  waku/rest_api/endpoint/builder,
+  library/declare_lib

 proc createWaku(
     configJson: cstring, appCallbacks: AppCallbacks = nil
@@ -87,26 +58,30 @@ proc createWaku(

   return ok(wakuRes)

-proc process*(
-    self: ptr NodeLifecycleRequest, waku: ptr Waku
-): Future[Result[string, string]] {.async.} =
-  defer:
-    destroyShared(self)
-
-  case self.operation
-  of CREATE_NODE:
-    waku[] = (await createWaku(self.configJson, self.appCallbacks)).valueOr:
-      error "CREATE_NODE failed", error = error
+registerReqFFI(CreateNodeRequest, ctx: ptr FFIContext[Waku]):
+  proc(
+      configJson: cstring, appCallbacks: AppCallbacks
+  ): Future[Result[string, string]] {.async.} =
+    ctx.myLib[] = (await createWaku(configJson, cast[AppCallbacks](appCallbacks))).valueOr:
+      error "CreateNodeRequest failed", error = error
       return err($error)
-  of START_NODE:
-    (await waku.startWaku()).isOkOr:
-      error "START_NODE failed", error = error
-      return err($error)
-  of STOP_NODE:
-    try:
-      await waku[].stop()
-    except Exception:
-      error "STOP_NODE failed", error = getCurrentExceptionMsg()
-      return err(getCurrentExceptionMsg())
-
+    return ok("")
+
+proc waku_start(
+    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
+) {.ffi.} =
+  (await startWaku(ctx[].myLib)).isOkOr:
+    error "START_NODE failed", error = error
+    return err("failed to start: " & $error)
+  return ok("")
+
+proc waku_stop(
+    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
+) {.ffi.} =
+  try:
+    await ctx.myLib[].stop()
+  except Exception as exc:
+    error "STOP_NODE failed", error = exc.msg
+    return err("failed to stop: " & exc.msg)
   return ok("")
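
The request/enum plumbing is gone: each lifecycle step is now its own `{.ffi.}` export. A typical lifecycle from the C side, as a sketch (signatures from the header diff below; `print_result` as in the earlier sketch, and the config JSON keys are node configuration not shown in this diff):

    typedef void (*FFICallBack)(int callerRet, const char *msg, size_t len, void *userData);
    extern void *waku_new(const char *configJson, FFICallBack callback, void *userData);
    extern int waku_start(void *ctx, FFICallBack callback, void *userData);
    extern int waku_stop(void *ctx, FFICallBack callback, void *userData);
    extern int waku_destroy(void *ctx, FFICallBack callback, void *userData);

    static void lifecycle_demo(FFICallBack cb) {
        void *ctx = waku_new("{}", cb, NULL); /* config JSON elided */
        waku_start(ctx, cb, NULL);
        /* ... use the node ... */
        waku_stop(ctx, cb, NULL);
        waku_destroy(ctx, cb, NULL);
    }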
library/kernel_api/peer_manager_api.nim (Normal file, 123 lines)
@@ -0,0 +1,123 @@
+import std/[sequtils, strutils, tables]
+import chronicles, chronos, results, options, json, ffi
+import waku/factory/waku, waku/node/waku_node, waku/node/peer_manager, ../declare_lib
+
+type PeerInfo = object
+  protocols: seq[string]
+  addresses: seq[string]
+
+proc waku_get_peerids_from_peerstore(
+    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
+) {.ffi.} =
+  ## returns a comma-separated string of peerIDs
+  let peerIDs =
+    ctx.myLib[].node.peerManager.switch.peerStore.peers().mapIt($it.peerId).join(",")
+  return ok(peerIDs)
+
+proc waku_connect(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    peerMultiAddr: cstring,
+    timeoutMs: cuint,
+) {.ffi.} =
+  let peers = ($peerMultiAddr).split(",").mapIt(strip(it))
+  await ctx.myLib[].node.connectToNodes(peers, source = "static")
+  return ok("")
+
+proc waku_disconnect_peer_by_id(
+    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer, peerId: cstring
+) {.ffi.} =
+  let pId = PeerId.init($peerId).valueOr:
+    error "DISCONNECT_PEER_BY_ID failed", error = $error
+    return err($error)
+  await ctx.myLib[].node.peerManager.disconnectNode(pId)
+  return ok("")
+
+proc waku_disconnect_all_peers(
+    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
+) {.ffi.} =
+  await ctx.myLib[].node.peerManager.disconnectAllPeers()
+  return ok("")
+
+proc waku_dial_peer(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    peerMultiAddr: cstring,
+    protocol: cstring,
+    timeoutMs: cuint,
+) {.ffi.} =
+  let remotePeerInfo = parsePeerInfo($peerMultiAddr).valueOr:
+    error "DIAL_PEER failed", error = $error
+    return err($error)
+  let conn = await ctx.myLib[].node.peerManager.dialPeer(remotePeerInfo, $protocol)
+  if conn.isNone():
+    let msg = "failed dialing peer"
+    error "DIAL_PEER failed", error = msg, peerId = $remotePeerInfo.peerId
+    return err(msg)
+  return ok("")
+
+proc waku_dial_peer_by_id(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    peerId: cstring,
+    protocol: cstring,
+    timeoutMs: cuint,
+) {.ffi.} =
+  let pId = PeerId.init($peerId).valueOr:
+    error "DIAL_PEER_BY_ID failed", error = $error
+    return err($error)
+  let conn = await ctx.myLib[].node.peerManager.dialPeer(pId, $protocol)
+  if conn.isNone():
+    let msg = "failed dialing peer"
+    error "DIAL_PEER_BY_ID failed", error = msg, peerId = $peerId
+    return err(msg)
+
+  return ok("")
+
+proc waku_get_connected_peers_info(
+    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
+) {.ffi.} =
+  ## returns a JSON string mapping peerIDs to objects with protocols and addresses
+
+  var peersMap = initTable[string, PeerInfo]()
+  let peers = ctx.myLib[].node.peerManager.switch.peerStore.peers().filterIt(
+    it.connectedness == Connected
+  )
+
+  # Build a map of peer IDs to peer info objects
+  for peer in peers:
+    let peerIdStr = $peer.peerId
+    peersMap[peerIdStr] =
+      PeerInfo(protocols: peer.protocols, addresses: peer.addrs.mapIt($it))
+
+  # Convert the map to JSON string
+  let jsonObj = %*peersMap
+  let jsonStr = $jsonObj
+  return ok(jsonStr)
+
+proc waku_get_connected_peers(
+    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
+) {.ffi.} =
+  ## returns a comma-separated string of peerIDs
+  let
+    (inPeerIds, outPeerIds) = ctx.myLib[].node.peerManager.connectedPeers()
+    connectedPeerids = concat(inPeerIds, outPeerIds)
+
+  return ok(connectedPeerids.mapIt($it).join(","))
+
+proc waku_get_peerids_by_protocol(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    protocol: cstring,
+) {.ffi.} =
+  ## returns a comma-separated string of peerIDs that mount the given protocol
+  let connectedPeers = ctx.myLib[].node.peerManager.switch.peerStore
+    .peers($protocol)
+    .filterIt(it.connectedness == Connected)
+    .mapIt($it.peerId)
+    .join(",")
+  return ok(connectedPeers)
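
`waku_connect` accepts several multiaddresses at once, split on commas as in the Nim body above. A C sketch with placeholder addresses (`print_result` as before):

    typedef void (*FFICallBack)(int callerRet, const char *msg, size_t len, void *userData);
    extern int waku_connect(void *ctx, FFICallBack callback, void *userData,
                            const char *peerMultiAddr, unsigned int timeoutMs);

    static void connect_demo(void *ctx, FFICallBack cb) {
        /* Two peers in one call, comma-separated; addresses are placeholders. */
        waku_connect(ctx, cb, NULL,
                     "/ip4/127.0.0.1/tcp/60000/p2p/16Uiu2HAm...,"
                     "/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAm...",
                     10000);
    }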
library/kernel_api/ping_api.nim (Normal file, 43 lines)
@@ -0,0 +1,43 @@
+import std/[json, strutils]
+import chronos, results, ffi
+import libp2p/[protocols/ping, switch, multiaddress, multicodec]
+import waku/[factory/waku, waku_core/peers, node/waku_node], library/declare_lib
+
+proc waku_ping_peer(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    peerAddr: cstring,
+    timeoutMs: cuint,
+) {.ffi.} =
+  let peerInfo = peers.parsePeerInfo(($peerAddr).split(",")).valueOr:
+    return err("PingRequest failed to parse peer addr: " & $error)
+
+  let timeout = chronos.milliseconds(timeoutMs)
+  proc ping(): Future[Result[Duration, string]] {.async, gcsafe.} =
+    try:
+      let conn =
+        await ctx.myLib[].node.switch.dial(peerInfo.peerId, peerInfo.addrs, PingCodec)
+      defer:
+        await conn.close()
+
+      let pingRTT = await ctx.myLib[].node.libp2pPing.ping(conn)
+      if pingRTT == 0.nanos:
+        return err("could not ping peer: rtt-0")
+      return ok(pingRTT)
+    except CatchableError as exc:
+      return err("could not ping peer: " & exc.msg)
+
+  let pingFuture = ping()
+  let pingRTT: Duration =
+    if timeout == chronos.milliseconds(0): # No timeout expected
+      (await pingFuture).valueOr:
+        return err("ping failed, no timeout expected: " & error)
+    else:
+      let timedOut = not (await pingFuture.withTimeout(timeout))
+      if timedOut:
+        return err("ping timed out")
+      pingFuture.read().valueOr:
+        return err("failed to read ping future: " & error)
+
+  return ok($(pingRTT.nanos))
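
Note the two timeout paths above: `timeoutMs == 0` waits indefinitely, anything else races the ping against `withTimeout`. From C, as a sketch (peer address is a placeholder; the RTT comes back through the callback as a string of nanoseconds):

    typedef void (*FFICallBack)(int callerRet, const char *msg, size_t len, void *userData);
    extern int waku_ping_peer(void *ctx, FFICallBack callback, void *userData,
                              const char *peerAddr, int timeoutMs);

    static void ping_demo(void *ctx, FFICallBack cb) {
        /* 2-second timeout; pass 0 to wait without a timeout. */
        waku_ping_peer(ctx, cb, NULL,
                       "/ip4/127.0.0.1/tcp/60000/p2p/16Uiu2HAm...", /* placeholder */
                       2000);
    }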
library/kernel_api/protocols/filter_api.nim (Normal file, 109 lines)
@@ -0,0 +1,109 @@
+import options, std/[strutils, sequtils]
+import chronicles, chronos, results, ffi
+import
+  waku/waku_filter_v2/client,
+  waku/waku_core/message/message,
+  waku/factory/waku,
+  waku/waku_relay,
+  waku/waku_filter_v2/common,
+  waku/waku_core/subscription/push_handler,
+  waku/node/peer_manager/peer_manager,
+  waku/node/waku_node,
+  waku/node/kernel_api,
+  waku/waku_core/topics/pubsub_topic,
+  waku/waku_core/topics/content_topic,
+  library/events/json_message_event,
+  library/declare_lib
+
+const FilterOpTimeout = 5.seconds
+
+proc checkFilterClientMounted(waku: Waku): Result[string, string] =
+  if waku.node.wakuFilterClient.isNil():
+    let errorMsg = "wakuFilterClient is not mounted"
+    error "fail filter process", error = errorMsg
+    return err(errorMsg)
+  return ok("")
+
+proc waku_filter_subscribe(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    pubSubTopic: cstring,
+    contentTopics: cstring,
+) {.ffi.} =
+  proc onReceivedMessage(ctx: ptr FFIContext): WakuRelayHandler =
+    return proc(pubsubTopic: PubsubTopic, msg: WakuMessage) {.async.} =
+      callEventCallback(ctx, "onReceivedMessage"):
+        $JsonMessageEvent.new(pubsubTopic, msg)
+
+  checkFilterClientMounted(ctx.myLib[]).isOkOr:
+    return err($error)
+
+  var filterPushEventCallback = FilterPushHandler(onReceivedMessage(ctx))
+  ctx.myLib[].node.wakuFilterClient.registerPushHandler(filterPushEventCallback)
+
+  let peer = ctx.myLib[].node.peerManager.selectPeer(WakuFilterSubscribeCodec).valueOr:
+    let errorMsg = "could not find peer with WakuFilterSubscribeCodec when subscribing"
+    error "fail filter subscribe", error = errorMsg
+    return err(errorMsg)
+
+  let subFut = ctx.myLib[].node.filterSubscribe(
+    some(PubsubTopic($pubsubTopic)),
+    ($contentTopics).split(",").mapIt(ContentTopic(it)),
+    peer,
+  )
+  if not await subFut.withTimeout(FilterOpTimeout):
+    let errorMsg = "filter subscription timed out"
+    error "fail filter unsubscribe", error = errorMsg
+
+    return err(errorMsg)
+
+  return ok("")
+
+proc waku_filter_unsubscribe(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    pubSubTopic: cstring,
+    contentTopics: cstring,
+) {.ffi.} =
+  checkFilterClientMounted(ctx.myLib[]).isOkOr:
+    return err($error)
+
+  let peer = ctx.myLib[].node.peerManager.selectPeer(WakuFilterSubscribeCodec).valueOr:
+    let errorMsg =
+      "could not find peer with WakuFilterSubscribeCodec when unsubscribing"
+    error "fail filter process", error = errorMsg
+    return err(errorMsg)
+
+  let subFut = ctx.myLib[].node.filterUnsubscribe(
+    some(PubsubTopic($pubsubTopic)),
+    ($contentTopics).split(",").mapIt(ContentTopic(it)),
+    peer,
+  )
+  if not await subFut.withTimeout(FilterOpTimeout):
+    let errorMsg = "filter un-subscription timed out"
+    error "fail filter unsubscribe", error = errorMsg
+    return err(errorMsg)
+  return ok("")
+
+proc waku_filter_unsubscribe_all(
+    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
+) {.ffi.} =
+  checkFilterClientMounted(ctx.myLib[]).isOkOr:
+    return err($error)
+
+  let peer = ctx.myLib[].node.peerManager.selectPeer(WakuFilterSubscribeCodec).valueOr:
+    let errorMsg =
+      "could not find peer with WakuFilterSubscribeCodec when unsubscribing all"
+    error "fail filter unsubscribe all", error = errorMsg
+    return err(errorMsg)
+
+  let unsubFut = ctx.myLib[].node.filterUnsubscribeAll(peer)
+
+  if not await unsubFut.withTimeout(FilterOpTimeout):
+    let errorMsg = "filter un-subscription all timed out"
+    error "fail filter unsubscribe all", error = errorMsg
+
+    return err(errorMsg)
+  return ok("")
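
Both subscribe and unsubscribe take one pubsub topic and a comma-separated list of content topics, mirroring the `split(",")` above. A C sketch with placeholder topics:

    typedef void (*FFICallBack)(int callerRet, const char *msg, size_t len, void *userData);
    extern int waku_filter_subscribe(void *ctx, FFICallBack callback, void *userData,
                                     const char *pubSubTopic, const char *contentTopics);

    static void filter_demo(void *ctx, FFICallBack cb) {
        waku_filter_subscribe(ctx, cb, NULL,
                              "/waku/2/rs/1/0",                      /* placeholder pubsub topic */
                              "/app/1/chat/proto,/app/1/ping/proto"); /* placeholder content topics */
    }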
library/kernel_api/protocols/lightpush_api.nim (Normal file, 51 lines)
@@ -0,0 +1,51 @@
+import options, std/[json, strformat]
+import chronicles, chronos, results, ffi
+import
+  waku/waku_core/message/message,
+  waku/waku_core/codecs,
+  waku/factory/waku,
+  waku/waku_core/message,
+  waku/waku_core/topics/pubsub_topic,
+  waku/waku_lightpush_legacy/client,
+  waku/node/peer_manager/peer_manager,
+  library/events/json_message_event,
+  library/declare_lib
+
+proc waku_lightpush_publish(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    pubSubTopic: cstring,
+    jsonWakuMessage: cstring,
+) {.ffi.} =
+  if ctx.myLib[].node.wakuLightpushClient.isNil():
+    let errorMsg = "LightpushRequest waku.node.wakuLightpushClient is nil"
+    error "PUBLISH failed", error = errorMsg
+    return err(errorMsg)
+
+  var jsonMessage: JsonMessage
+  try:
+    let jsonContent = parseJson($jsonWakuMessage)
+    jsonMessage = JsonMessage.fromJsonNode(jsonContent).valueOr:
+      raise newException(JsonParsingError, $error)
+  except JsonParsingError as exc:
+    return err(fmt"Error parsing json message: {exc.msg}")
+
+  let msg = json_message_event.toWakuMessage(jsonMessage).valueOr:
+    return err("Problem building the WakuMessage: " & $error)
+
+  let peerOpt = ctx.myLib[].node.peerManager.selectPeer(WakuLightPushCodec)
+  if peerOpt.isNone():
+    let errorMsg = "failed to lightpublish message, no suitable remote peers"
+    error "PUBLISH failed", error = errorMsg
+    return err(errorMsg)
+
+  let msgHashHex = (
+    await ctx.myLib[].node.wakuLegacyLightpushClient.publish(
+      $pubsubTopic, msg, peer = peerOpt.get()
+    )
+  ).valueOr:
+    error "PUBLISH failed", error = error
+    return err($error)
+
+  return ok(msgHashHex)
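
The message argument is the JSON form parsed by `JsonMessage.fromJsonNode` above; on success the callback receives the message hash in hex. A C sketch, where the JSON field names (`payload` as base64, `contentTopic`) are assumptions based on RFC 36 rather than spelled out in this diff:

    typedef void (*FFICallBack)(int callerRet, const char *msg, size_t len, void *userData);
    extern int waku_lightpush_publish(void *ctx, FFICallBack callback, void *userData,
                                      const char *pubSubTopic, const char *jsonWakuMessage);

    static void lightpush_demo(void *ctx, FFICallBack cb) {
        /* Field names assumed from RFC 36; payload is base64-encoded ("hello"). */
        waku_lightpush_publish(ctx, cb, NULL, "/waku/2/rs/1/0",
                               "{\"payload\":\"aGVsbG8=\",\"contentTopic\":\"/app/1/chat/proto\"}");
    }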
library/kernel_api/protocols/relay_api.nim (Normal file, 171 lines)
@@ -0,0 +1,171 @@
+import std/[net, sequtils, strutils, json], strformat
+import chronicles, chronos, stew/byteutils, results, ffi
+import
+  waku/waku_core/message/message,
+  waku/factory/[validator_signed, waku],
+  tools/confutils/cli_args,
+  waku/waku_core/message,
+  waku/waku_core/topics/pubsub_topic,
+  waku/waku_core/topics,
+  waku/node/kernel_api/relay,
+  waku/waku_relay/protocol,
+  waku/node/peer_manager,
+  library/events/json_message_event,
+  library/declare_lib
+
+proc waku_relay_get_peers_in_mesh(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    pubSubTopic: cstring,
+) {.ffi.} =
+  let meshPeers = ctx.myLib[].node.wakuRelay.getPeersInMesh($pubsubTopic).valueOr:
+    error "LIST_MESH_PEERS failed", error = error
+    return err($error)
+  ## returns a comma-separated string of peerIDs
+  return ok(meshPeers.mapIt($it).join(","))
+
+proc waku_relay_get_num_peers_in_mesh(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    pubSubTopic: cstring,
+) {.ffi.} =
+  let numPeersInMesh = ctx.myLib[].node.wakuRelay.getNumPeersInMesh($pubsubTopic).valueOr:
+    error "NUM_MESH_PEERS failed", error = error
+    return err($error)
+  return ok($numPeersInMesh)
+
+proc waku_relay_get_connected_peers(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    pubSubTopic: cstring,
+) {.ffi.} =
+  ## Returns the list of all connected peers to a specific pubsub topic
+  let connPeers = ctx.myLib[].node.wakuRelay.getConnectedPeers($pubsubTopic).valueOr:
+    error "LIST_CONNECTED_PEERS failed", error = error
+    return err($error)
+  ## returns a comma-separated string of peerIDs
+  return ok(connPeers.mapIt($it).join(","))
+
+proc waku_relay_get_num_connected_peers(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    pubSubTopic: cstring,
+) {.ffi.} =
+  let numConnPeers = ctx.myLib[].node.wakuRelay.getNumConnectedPeers($pubsubTopic).valueOr:
+    error "NUM_CONNECTED_PEERS failed", error = error
+    return err($error)
+  return ok($numConnPeers)
+
+proc waku_relay_add_protected_shard(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    clusterId: cint,
+    shardId: cint,
+    publicKey: cstring,
+) {.ffi.} =
+  ## Protects a shard with a public key
+  try:
+    let relayShard = RelayShard(clusterId: uint16(clusterId), shardId: uint16(shardId))
+    let protectedShard = ProtectedShard.parseCmdArg($relayShard & ":" & $publicKey)
+    ctx.myLib[].node.wakuRelay.addSignedShardsValidator(
+      @[protectedShard], uint16(clusterId)
+    )
+  except ValueError as exc:
+    return err("ERROR in waku_relay_add_protected_shard: " & exc.msg)
+
+  return ok("")
+
+proc waku_relay_subscribe(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    pubSubTopic: cstring,
+) {.ffi.} =
+  echo "Subscribing to topic: " & $pubSubTopic & " ..."
+  proc onReceivedMessage(ctx: ptr FFIContext[Waku]): WakuRelayHandler =
+    return proc(pubsubTopic: PubsubTopic, msg: WakuMessage) {.async.} =
+      callEventCallback(ctx, "onReceivedMessage"):
+        $JsonMessageEvent.new(pubsubTopic, msg)
+
+  var cb = onReceivedMessage(ctx)
+
+  ctx.myLib[].node.subscribe(
+    (kind: SubscriptionKind.PubsubSub, topic: $pubsubTopic),
+    handler = WakuRelayHandler(cb),
+  ).isOkOr:
+    error "SUBSCRIBE failed", error = error
+    return err($error)
+  return ok("")
+
+proc waku_relay_unsubscribe(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    pubSubTopic: cstring,
+) {.ffi.} =
+  ctx.myLib[].node.unsubscribe((kind: SubscriptionKind.PubsubSub, topic: $pubsubTopic)).isOkOr:
+    error "UNSUBSCRIBE failed", error = error
+    return err($error)
+
+  return ok("")
+
+proc waku_relay_publish(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    pubSubTopic: cstring,
+    jsonWakuMessage: cstring,
+    timeoutMs: cuint,
+) {.ffi.} =
+  var
+    # https://rfc.vac.dev/spec/36/#extern-char-waku_relay_publishchar-messagejson-char-pubsubtopic-int-timeoutms
+    jsonMessage: JsonMessage
+  try:
+    let jsonContent = parseJson($jsonWakuMessage)
+    jsonMessage = JsonMessage.fromJsonNode(jsonContent).valueOr:
+      raise newException(JsonParsingError, $error)
+  except JsonParsingError as exc:
+    return err(fmt"Error parsing json message: {exc.msg}")
+
+  let msg = json_message_event.toWakuMessage(jsonMessage).valueOr:
+    return err("Problem building the WakuMessage: " & $error)
+
+  (await ctx.myLib[].node.wakuRelay.publish($pubsubTopic, msg)).isOkOr:
+    error "PUBLISH failed", error = error
+    return err($error)
+
+  let msgHash = computeMessageHash($pubSubTopic, msg).to0xHex
+  return ok(msgHash)
+
+proc waku_default_pubsub_topic(
+    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
+) {.ffi.} =
+  # https://rfc.vac.dev/spec/36/#extern-char-waku_default_pubsub_topic
+  return ok(DefaultPubsubTopic)
+
+proc waku_content_topic(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    appName: cstring,
+    appVersion: cuint,
+    contentTopicName: cstring,
+    encoding: cstring,
+) {.ffi.} =
+  # https://rfc.vac.dev/spec/36/#extern-char-waku_content_topicchar-applicationname-unsigned-int-applicationversion-char-contenttopicname-char-encoding
+
+  return ok(fmt"/{$appName}/{$appVersion}/{$contentTopicName}/{$encoding}")
+
+proc waku_pubsub_topic(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
+    topicName: cstring,
+) {.ffi.} =
+  # https://rfc.vac.dev/spec/36/#extern-char-waku_pubsub_topicchar-name-char-encoding
+  return ok(fmt"/waku/2/{$topicName}")
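
Publishing over relay follows the same JSON-message convention as lightpush; on success the callback receives the computed message hash (`computeMessageHash(...).to0xHex`). A C sketch (topic and JSON fields as in the earlier hedged examples):

    typedef void (*FFICallBack)(int callerRet, const char *msg, size_t len, void *userData);
    extern int waku_relay_subscribe(void *ctx, FFICallBack callback, void *userData,
                                    const char *pubSubTopic);
    extern int waku_relay_publish(void *ctx, FFICallBack callback, void *userData,
                                  const char *pubSubTopic, const char *jsonWakuMessage,
                                  unsigned int timeoutMs);

    static void relay_demo(void *ctx, FFICallBack cb) {
        waku_relay_subscribe(ctx, cb, NULL, "/waku/2/rs/1/0");
        waku_relay_publish(ctx, cb, NULL, "/waku/2/rs/1/0",
                           "{\"payload\":\"aGVsbG8=\",\"contentTopic\":\"/app/1/chat/proto\"}",
                           5000);
    }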
@@ -1,28 +1,16 @@
 import std/[json, sugar, strutils, options]
-import chronos, chronicles, results, stew/byteutils
+import chronos, chronicles, results, stew/byteutils, ffi
 import
-  ../../../../waku/factory/waku,
-  ../../../alloc,
-  ../../../utils,
-  ../../../../waku/waku_core/peers,
-  ../../../../waku/waku_core/time,
-  ../../../../waku/waku_core/message/digest,
-  ../../../../waku/waku_store/common,
-  ../../../../waku/waku_store/client,
-  ../../../../waku/common/paging
+  waku/factory/waku,
+  library/utils,
+  waku/waku_core/peers,
+  waku/waku_core/message/digest,
+  waku/waku_store/common,
+  waku/waku_store/client,
+  waku/common/paging,
+  library/declare_lib

-type StoreReqType* = enum
-  REMOTE_QUERY ## to perform a query to another Store node
-
-type StoreRequest* = object
-  operation: StoreReqType
-  jsonQuery: cstring
-  peerAddr: cstring
-  timeoutMs: cint
-
-func fromJsonNode(
-    T: type StoreRequest, jsonContent: JsonNode
-): Result[StoreQueryRequest, string] =
+func fromJsonNode(jsonContent: JsonNode): Result[StoreQueryRequest, string] =
   var contentTopics: seq[string]
   if jsonContent.contains("contentTopics"):
     contentTopics = collect(newSeq):
@@ -78,54 +66,29 @@ func fromJsonNode(
     )
   )

-proc createShared*(
-    T: type StoreRequest,
-    op: StoreReqType,
+proc waku_store_query(
+    ctx: ptr FFIContext[Waku],
+    callback: FFICallBack,
+    userData: pointer,
     jsonQuery: cstring,
     peerAddr: cstring,
     timeoutMs: cint,
-): ptr type T =
-  var ret = createShared(T)
-  ret[].operation = op
-  ret[].timeoutMs = timeoutMs
-  ret[].jsonQuery = jsonQuery.alloc()
-  ret[].peerAddr = peerAddr.alloc()
-  return ret
-
-proc destroyShared(self: ptr StoreRequest) =
-  deallocShared(self[].jsonQuery)
-  deallocShared(self[].peerAddr)
-  deallocShared(self)
-
-proc process_remote_query(
-    self: ptr StoreRequest, waku: ptr Waku
-): Future[Result[string, string]] {.async.} =
+) {.ffi.} =
   let jsonContentRes = catch:
-    parseJson($self[].jsonQuery)
+    parseJson($jsonQuery)

   if jsonContentRes.isErr():
     return err("StoreRequest failed parsing store request: " & jsonContentRes.error.msg)

-  let storeQueryRequest = ?StoreRequest.fromJsonNode(jsonContentRes.get())
+  let storeQueryRequest = ?fromJsonNode(jsonContentRes.get())

-  let peer = peers.parsePeerInfo(($self[].peerAddr).split(",")).valueOr:
+  let peer = peers.parsePeerInfo(($peerAddr).split(",")).valueOr:
     return err("StoreRequest failed to parse peer addr: " & $error)

-  let queryResponse = (await waku.node.wakuStoreClient.query(storeQueryRequest, peer)).valueOr:
+  let queryResponse = (
+    await ctx.myLib[].node.wakuStoreClient.query(storeQueryRequest, peer)
+  ).valueOr:
     return err("StoreRequest failed store query: " & $error)

   let res = $(%*(queryResponse.toHex()))
   return ok(res) ## returning the response in json format
-
-proc process*(
-    self: ptr StoreRequest, waku: ptr Waku
-): Future[Result[string, string]] {.async.} =
-  defer:
-    deallocShared(self)
-
-  case self.operation
-  of REMOTE_QUERY:
-    return await self.process_remote_query(waku)
-
-  error "store request not handled at all"
-  return err("store request not handled at all")
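
The store query takes a JSON-encoded `StoreQueryRequest` plus the peer to query; the response comes back as JSON through the callback. A sketch where the query field (`contentTopics`) follows the `fromJsonNode` parser above and the peer address is a placeholder:

    typedef void (*FFICallBack)(int callerRet, const char *msg, size_t len, void *userData);
    extern int waku_store_query(void *ctx, FFICallBack callback, void *userData,
                                const char *jsonQuery, const char *peerAddr, int timeoutMs);

    static void store_demo(void *ctx, FFICallBack cb) {
        waku_store_query(ctx, cb, NULL,
                         "{\"contentTopics\":[\"/app/1/chat/proto\"]}",
                         "/ip4/127.0.0.1/tcp/60000/p2p/16Uiu2HAm...", /* placeholder */
                         10000);
    }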
@@ -10,241 +10,242 @@
 #include <stdint.h>

 // The possible returned values for the functions that return int
 #define RET_OK 0
 #define RET_ERR 1
 #define RET_MISSING_CALLBACK 2

 #ifdef __cplusplus
-extern "C" {
+extern "C"
+{
 #endif

-typedef void (*WakuCallBack) (int callerRet, const char* msg, size_t len, void* userData);
+typedef void (*FFICallBack)(int callerRet, const char *msg, size_t len, void *userData);

 // Creates a new instance of the waku node.
 // Sets up the waku node from the given configuration.
 // Returns a pointer to the Context needed by the rest of the API functions.
-void* waku_new(const char* configJson, WakuCallBack callback, void* userData);
+void *waku_new(const char *configJson, FFICallBack callback, void *userData);

-int waku_start(void* ctx, WakuCallBack callback, void* userData);
+int waku_start(void *ctx, FFICallBack callback, void *userData);

-int waku_stop(void* ctx, WakuCallBack callback, void* userData);
+int waku_stop(void *ctx, FFICallBack callback, void *userData);

 // Destroys an instance of a waku node created with waku_new
-int waku_destroy(void* ctx, WakuCallBack callback, void* userData);
+int waku_destroy(void *ctx, FFICallBack callback, void *userData);

-int waku_version(void* ctx, WakuCallBack callback, void* userData);
+int waku_version(void *ctx, FFICallBack callback, void *userData);

 // Sets a callback that will be invoked whenever an event occurs.
 // It is crucial that the passed callback is fast, non-blocking and potentially thread-safe.
-void waku_set_event_callback(void* ctx, WakuCallBack callback, void* userData);
+void set_event_callback(void *ctx, FFICallBack callback, void *userData);

-int waku_content_topic(void* ctx, const char* appName, unsigned int appVersion, const char* contentTopicName, const char* encoding, WakuCallBack callback, void* userData);
+int waku_content_topic(void *ctx, FFICallBack callback, void *userData, const char *appName, unsigned int appVersion, const char *contentTopicName, const char *encoding);

-int waku_pubsub_topic(void* ctx, const char* topicName, WakuCallBack callback, void* userData);
+int waku_pubsub_topic(void *ctx, FFICallBack callback, void *userData, const char *topicName);

-int waku_default_pubsub_topic(void* ctx, WakuCallBack callback, void* userData);
+int waku_default_pubsub_topic(void *ctx, FFICallBack callback, void *userData);

-int waku_relay_publish(void* ctx, const char* pubSubTopic, const char* jsonWakuMessage, unsigned int timeoutMs, WakuCallBack callback, void* userData);
+int waku_relay_publish(void *ctx, FFICallBack callback, void *userData, const char *pubSubTopic, const char *jsonWakuMessage, unsigned int timeoutMs);

-int waku_lightpush_publish(void* ctx, const char* pubSubTopic, const char* jsonWakuMessage, WakuCallBack callback, void* userData);
+int waku_lightpush_publish(void *ctx, FFICallBack callback, void *userData, const char *pubSubTopic, const char *jsonWakuMessage);

-int waku_relay_subscribe(void* ctx, const char* pubSubTopic, WakuCallBack callback, void* userData);
+int waku_relay_subscribe(void *ctx, FFICallBack callback, void *userData, const char *pubSubTopic);

-int waku_relay_add_protected_shard(void* ctx, int clusterId, int shardId, char* publicKey, WakuCallBack callback, void* userData);
+int waku_relay_add_protected_shard(void *ctx, FFICallBack callback, void *userData, int clusterId, int shardId, char *publicKey);

-int waku_relay_unsubscribe(void* ctx, const char* pubSubTopic, WakuCallBack callback, void* userData);
+int waku_relay_unsubscribe(void *ctx, FFICallBack callback, void *userData, const char *pubSubTopic);

-int waku_filter_subscribe(void* ctx, const char* pubSubTopic, const char* contentTopics, WakuCallBack callback, void* userData);
+int waku_filter_subscribe(void *ctx, FFICallBack callback, void *userData, const char *pubSubTopic, const char *contentTopics);

-int waku_filter_unsubscribe(void* ctx, const char* pubSubTopic, const char* contentTopics, WakuCallBack callback, void* userData);
+int waku_filter_unsubscribe(void *ctx, FFICallBack callback, void *userData, const char *pubSubTopic, const char *contentTopics);

-int waku_filter_unsubscribe_all(void* ctx, WakuCallBack callback, void* userData);
+int waku_filter_unsubscribe_all(void *ctx, FFICallBack callback, void *userData);

-int waku_relay_get_num_connected_peers(void* ctx, const char* pubSubTopic, WakuCallBack callback, void* userData);
+int waku_relay_get_num_connected_peers(void *ctx, FFICallBack callback, void *userData, const char *pubSubTopic);

-int waku_relay_get_connected_peers(void* ctx, const char* pubSubTopic, WakuCallBack callback, void* userData);
+int waku_relay_get_connected_peers(void *ctx, FFICallBack callback, void *userData, const char *pubSubTopic);

-int waku_relay_get_num_peers_in_mesh(void* ctx, const char* pubSubTopic, WakuCallBack callback, void* userData);
+int waku_relay_get_num_peers_in_mesh(void *ctx, FFICallBack callback, void *userData, const char *pubSubTopic);

-int waku_relay_get_peers_in_mesh(void* ctx, const char* pubSubTopic, WakuCallBack callback, void* userData);
+int waku_relay_get_peers_in_mesh(void *ctx, FFICallBack callback, void *userData, const char *pubSubTopic);

-int waku_store_query(void* ctx, const char* jsonQuery, const char* peerAddr, int timeoutMs, WakuCallBack callback, void* userData);
+int waku_store_query(void *ctx, FFICallBack callback, void *userData, const char *jsonQuery, const char *peerAddr, int timeoutMs);

-int waku_connect(void* ctx, const char* peerMultiAddr, unsigned int timeoutMs, WakuCallBack callback, void* userData);
+int waku_connect(void *ctx, FFICallBack callback, void *userData, const char *peerMultiAddr, unsigned int timeoutMs);

-int waku_disconnect_peer_by_id(void* ctx, const char* peerId, WakuCallBack callback, void* userData);
+int waku_disconnect_peer_by_id(void *ctx, FFICallBack callback, void *userData, const char *peerId);

-int waku_disconnect_all_peers(void* ctx, WakuCallBack callback, void* userData);
+int waku_disconnect_all_peers(void *ctx, FFICallBack callback, void *userData);

-int waku_dial_peer(void* ctx, const char* peerMultiAddr, const char* protocol, int timeoutMs, WakuCallBack callback, void* userData);
+int waku_dial_peer(void *ctx, FFICallBack callback, void *userData, const char *peerMultiAddr, const char *protocol, int timeoutMs);

-int waku_dial_peer_by_id(void* ctx, const char* peerId, const char* protocol, int timeoutMs, WakuCallBack callback, void* userData);
+int waku_dial_peer_by_id(void *ctx, FFICallBack callback, void *userData, const char *peerId, const char *protocol, int timeoutMs);

-int waku_get_peerids_from_peerstore(void* ctx, WakuCallBack callback, void* userData);
+int waku_get_peerids_from_peerstore(void *ctx, FFICallBack callback, void *userData);

-int waku_get_connected_peers_info(void* ctx, WakuCallBack callback, void* userData);
+int waku_get_connected_peers_info(void *ctx, FFICallBack callback, void *userData);

-int waku_get_peerids_by_protocol(void* ctx, const char* protocol, WakuCallBack callback, void* userData);
+int waku_get_peerids_by_protocol(void *ctx, FFICallBack callback, void *userData, const char *protocol);

-int waku_listen_addresses(void* ctx, WakuCallBack callback, void* userData);
+int waku_listen_addresses(void *ctx, FFICallBack callback, void *userData);

-int waku_get_connected_peers(void* ctx, WakuCallBack callback, void* userData);
+int waku_get_connected_peers(void *ctx, FFICallBack callback, void *userData);

 // Returns a list of multiaddress given a url to a DNS discoverable ENR tree
 // Parameters
 // char* entTreeUrl: URL containing a discoverable ENR tree
 // char* nameDnsServer: The nameserver to resolve the ENR tree url.
 // int timeoutMs: Timeout value in milliseconds to execute the call.
-int waku_dns_discovery(void* ctx, const char* entTreeUrl, const char* nameDnsServer, int timeoutMs, WakuCallBack callback, void* userData);
+int waku_dns_discovery(void *ctx, FFICallBack callback, void *userData, const char *entTreeUrl, const char *nameDnsServer, int timeoutMs);

 // Updates the bootnode list used for discovering new peers via DiscoveryV5
 // bootnodes - JSON array containing the bootnode ENRs i.e. `["enr:...", "enr:..."]`
-int waku_discv5_update_bootnodes(void* ctx, char* bootnodes, WakuCallBack callback, void* userData);
+int waku_discv5_update_bootnodes(void *ctx, FFICallBack callback, void *userData, char *bootnodes);

-int waku_start_discv5(void* ctx, WakuCallBack callback, void* userData);
+int waku_start_discv5(void *ctx, FFICallBack callback, void *userData);

-int waku_stop_discv5(void* ctx, WakuCallBack callback, void* userData);
+int waku_stop_discv5(void *ctx, FFICallBack callback, void *userData);

 // Retrieves the ENR information
-int waku_get_my_enr(void* ctx, WakuCallBack callback, void* userData);
+int waku_get_my_enr(void *ctx, FFICallBack callback, void *userData);

-int waku_get_my_peerid(void* ctx, WakuCallBack callback, void* userData);
+int waku_get_my_peerid(void *ctx, FFICallBack callback, void *userData);

-int waku_get_metrics(void* ctx, WakuCallBack callback, void* userData);
+int waku_get_metrics(void *ctx, FFICallBack callback, void *userData);

-int waku_peer_exchange_request(void* ctx, int numPeers, WakuCallBack callback, void* userData);
+int waku_peer_exchange_request(void *ctx, FFICallBack callback, void *userData, int numPeers);

-int waku_ping_peer(void* ctx, const char* peerAddr, int timeoutMs, WakuCallBack callback, void* userData);
+int waku_ping_peer(void *ctx, FFICallBack callback, void *userData, const char *peerAddr, int timeoutMs);

-int waku_is_online(void* ctx, WakuCallBack callback, void* userData);
+int waku_is_online(void *ctx, FFICallBack callback, void *userData);

 #ifdef __cplusplus
 }
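
The practical upshot of this header change for existing C consumers: the callback type is renamed and every call now passes `callback, userData` immediately after `ctx`, before the call-specific arguments. For instance:

    typedef void (*FFICallBack)(int callerRet, const char *msg, size_t len, void *userData);
    extern int waku_relay_subscribe(void *ctx, FFICallBack callback, void *userData,
                                    const char *pubSubTopic);

    /* Old ordering (pre-change):
       waku_relay_subscribe(ctx, "/waku/2/rs/1/0", my_cb, my_data);
       New ordering, callback and userData right after ctx: */
    static void subscribe_demo(void *ctx, FFICallBack my_cb, void *my_data) {
        waku_relay_subscribe(ctx, my_cb, my_data, "/waku/2/rs/1/0");
    }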
@ -1,107 +1,35 @@
|
|||||||
{.pragma: exported, exportc, cdecl, raises: [].}
|
import std/[atomics, options, atomics, macros]
|
||||||
{.pragma: callback, cdecl, raises: [], gcsafe.}
|
import chronicles, chronos, chronos/threadsync, ffi
|
||||||
{.passc: "-fPIC".}
|
|
||||||
|
|
||||||
when defined(linux):
|
|
||||||
{.passl: "-Wl,-soname,libwaku.so".}
|
|
||||||
|
|
||||||
import std/[json, atomics, strformat, options, atomics]
|
|
||||||
import chronicles, chronos, chronos/threadsync
|
|
||||||
import
|
import
|
||||||
waku/common/base64,
|
|
||||||
   waku/waku_core/message/message,
-  waku/node/waku_node,
-  waku/node/peer_manager,
   waku/waku_core/topics/pubsub_topic,
-  waku/waku_core/subscription/push_handler,
   waku/waku_relay,
   ./events/json_message_event,
-  ./waku_context,
-  ./waku_thread_requests/requests/node_lifecycle_request,
-  ./waku_thread_requests/requests/peer_manager_request,
-  ./waku_thread_requests/requests/protocols/relay_request,
-  ./waku_thread_requests/requests/protocols/store_request,
-  ./waku_thread_requests/requests/protocols/lightpush_request,
-  ./waku_thread_requests/requests/protocols/filter_request,
-  ./waku_thread_requests/requests/debug_node_request,
-  ./waku_thread_requests/requests/discovery_request,
-  ./waku_thread_requests/requests/ping_request,
-  ./waku_thread_requests/waku_thread_request,
-  ./alloc,
-  ./ffi_types,
-  ../waku/factory/app_callbacks
+  ./events/json_topic_health_change_event,
+  ./events/json_connection_change_event,
+  ../waku/factory/app_callbacks,
+  waku/factory/waku,
+  waku/node/waku_node,
+  ./declare_lib

 ################################################################################
-### Wrapper around the waku node
-################################################################################
-
-################################################################################
-### Not-exported components
-
-template checkLibwakuParams*(
-    ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer
-) =
-  if not isNil(ctx):
-    ctx[].userData = userData
-
-  if isNil(callback):
-    return RET_MISSING_CALLBACK
-
-proc handleRequest(
-    ctx: ptr WakuContext,
-    requestType: RequestType,
-    content: pointer,
-    callback: WakuCallBack,
-    userData: pointer,
-): cint =
-  waku_context.sendRequestToWakuThread(ctx, requestType, content, callback, userData).isOkOr:
-    let msg = "libwaku error: " & $error
-    callback(RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), userData)
-    return RET_ERR
-
-  return RET_OK
-
-### End of not-exported components
-################################################################################
-
-################################################################################
-### Library setup
-
-# Every Nim library must have this function called - the name is derived from
-# the `--nimMainPrefix` command line option
-proc libwakuNimMain() {.importc.}
-
-# To control when the library has been initialized
-var initialized: Atomic[bool]
-
-if defined(android):
-  # Redirect chronicles to Android System logs
-  when compiles(defaultChroniclesStream.outputs[0].writer):
-    defaultChroniclesStream.outputs[0].writer = proc(
-        logLevel: LogLevel, msg: LogOutputStr
-    ) {.raises: [].} =
-      echo logLevel, msg
-
-proc initializeLibrary() {.exported.} =
-  if not initialized.exchange(true):
-    ## Every Nim library needs to call `<yourprefix>NimMain` exactly once to initialize the Nim runtime,
-    ## where `<yourprefix>` is the value given in the optional compilation flag --nimMainPrefix:yourprefix
-    libwakuNimMain()
-  when declared(setupForeignThreadGc):
-    setupForeignThreadGc()
-  when declared(nimGC_setStackBottom):
-    var locals {.volatile, noinit.}: pointer
-    locals = addr(locals)
-    nimGC_setStackBottom(locals)
-
-### End of library setup
-################################################################################
+## Include the different APIs, i.e. all procs with the {.ffi.} pragma
+################################################################################
+include
+  ./kernel_api/peer_manager_api,
+  ./kernel_api/discovery_api,
+  ./kernel_api/node_lifecycle_api,
+  ./kernel_api/debug_node_api,
+  ./kernel_api/ping_api,
+  ./kernel_api/protocols/relay_api,
+  ./kernel_api/protocols/store_api,
+  ./kernel_api/protocols/lightpush_api,
+  ./kernel_api/protocols/filter_api

 ################################################################################
 ### Exported procs

 proc waku_new(
-    configJson: cstring, callback: WakuCallback, userData: pointer
+    configJson: cstring, callback: FFICallback, userData: pointer
 ): pointer {.dynlib, exportc, cdecl.} =
   initializeLibrary()
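Every call in this file reports back through the same C-style callback rather than through a return buffer. A minimal sketch of a compatible callback (not part of the diff; `onResponse` is a name invented here, and the signature is the one the watchdog callback further below uses):

proc onResponse(
    callerRet: cint, msg: ptr cchar, len: csize_t, userData: pointer
) {.cdecl, gcsafe, raises: [].} =
  # Copy the buffer immediately; it is only guaranteed to live for the call.
  var s = newString(int(len))
  if len > 0:
    copyMem(addr s[0], msg, int(len))
  echo (if callerRet == RET_OK: "ok: " else: "error: "), s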
@@ -111,41 +39,50 @@ proc waku_new(
     return nil

   ## Create the Waku thread that will keep waiting for req from the main thread.
-  var ctx = waku_context.createWakuContext().valueOr:
-    let msg = "Error in createWakuContext: " & $error
+  var ctx = ffi.createFFIContext[Waku]().valueOr:
+    let msg = "Error in createFFIContext: " & $error
     callback(RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), userData)
     return nil

   ctx.userData = userData

+  proc onReceivedMessage(ctx: ptr FFIContext): WakuRelayHandler =
+    return proc(pubsubTopic: PubsubTopic, msg: WakuMessage) {.async.} =
+      callEventCallback(ctx, "onReceivedMessage"):
+        $JsonMessageEvent.new(pubsubTopic, msg)
+
+  proc onTopicHealthChange(ctx: ptr FFIContext): TopicHealthChangeHandler =
+    return proc(pubsubTopic: PubsubTopic, topicHealth: TopicHealth) {.async.} =
+      callEventCallback(ctx, "onTopicHealthChange"):
+        $JsonTopicHealthChangeEvent.new(pubsubTopic, topicHealth)
+
+  proc onConnectionChange(ctx: ptr FFIContext): ConnectionChangeHandler =
+    return proc(peerId: PeerId, peerEvent: PeerEventKind) {.async.} =
+      callEventCallback(ctx, "onConnectionChange"):
+        $JsonConnectionChangeEvent.new($peerId, peerEvent)
+
   let appCallbacks = AppCallbacks(
     relayHandler: onReceivedMessage(ctx),
     topicHealthChangeHandler: onTopicHealthChange(ctx),
     connectionChangeHandler: onConnectionChange(ctx),
   )

-  let retCode = handleRequest(
-    ctx,
-    RequestType.LIFECYCLE,
-    NodeLifecycleRequest.createShared(
-      NodeLifecycleMsgType.CREATE_NODE, configJson, appCallbacks
-    ),
-    callback,
-    userData,
-  )
-
-  if retCode == RET_ERR:
+  ffi.sendRequestToFFIThread(
+    ctx, CreateNodeRequest.ffiNewReq(callback, userData, configJson, appCallbacks)
+  ).isOkOr:
+    let msg = "error in sendRequestToFFIThread: " & $error
+    callback(RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), userData)
     return nil

   return ctx

 proc waku_destroy(
-    ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer
-): cint {.dynlib, exportc.} =
+    ctx: ptr FFIContext[Waku], callback: FFICallBack, userData: pointer
+): cint {.dynlib, exportc, cdecl.} =
   initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
+  checkParams(ctx, callback, userData)

-  waku_context.destroyWakuContext(ctx).isOkOr:
+  ffi.destroyFFIContext(ctx).isOkOr:
     let msg = "libwaku error: " & $error
     callback(RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), userData)
     return RET_ERR
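Taken together with waku_start and waku_stop (removed further down), the lifecycle composes as create, start, stop, destroy. A minimal in-process usage sketch against the pre-change API, assuming the library is linked directly (real consumers go through the C ABI) and that an empty JSON config falls back to node defaults; `onResponse` is the hypothetical callback sketched above:

let ctx = waku_new("{}", onResponse, nil)
if ctx != nil:
  let wakuCtx = cast[ptr WakuContext](ctx)
  discard waku_start(wakuCtx, onResponse, nil)
  # ... interact with the node through the other exported procs ...
  discard waku_stop(wakuCtx, onResponse, nil)
  discard waku_destroy(wakuCtx, onResponse, nil)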
@@ -155,699 +92,5 @@ proc waku_destroy(

   return RET_OK

-proc waku_version(
-    ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-  callback(
-    RET_OK,
-    cast[ptr cchar](WakuNodeVersionString),
-    cast[csize_t](len(WakuNodeVersionString)),
-    userData,
-  )
-
-  return RET_OK
-
-proc waku_set_event_callback(
-    ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer
-) {.dynlib, exportc.} =
-  initializeLibrary()
-  ctx[].eventCallback = cast[pointer](callback)
-  ctx[].eventUserData = userData
-
-proc waku_content_topic(
-    ctx: ptr WakuContext,
-    appName: cstring,
-    appVersion: cuint,
-    contentTopicName: cstring,
-    encoding: cstring,
-    callback: WakuCallBack,
-    userData: pointer,
-): cint {.dynlib, exportc.} =
-  # https://rfc.vac.dev/spec/36/#extern-char-waku_content_topicchar-applicationname-unsigned-int-applicationversion-char-contenttopicname-char-encoding
-
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  let contentTopic = fmt"/{$appName}/{$appVersion}/{$contentTopicName}/{$encoding}"
-  callback(
-    RET_OK, unsafeAddr contentTopic[0], cast[csize_t](len(contentTopic)), userData
-  )
-
-  return RET_OK
-
-proc waku_pubsub_topic(
-    ctx: ptr WakuContext, topicName: cstring, callback: WakuCallBack, userData: pointer
-): cint {.dynlib, exportc, cdecl.} =
-  # https://rfc.vac.dev/spec/36/#extern-char-waku_pubsub_topicchar-name-char-encoding
-
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  let outPubsubTopic = fmt"/waku/2/{$topicName}"
-  callback(
-    RET_OK, unsafeAddr outPubsubTopic[0], cast[csize_t](len(outPubsubTopic)), userData
-  )
-
-  return RET_OK
-
-proc waku_default_pubsub_topic(
-    ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer
-): cint {.dynlib, exportc.} =
-  # https://rfc.vac.dev/spec/36/#extern-char-waku_default_pubsub_topic
-
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  callback(
-    RET_OK,
-    cast[ptr cchar](DefaultPubsubTopic),
-    cast[csize_t](len(DefaultPubsubTopic)),
-    userData,
-  )
-
-  return RET_OK
-
-proc waku_relay_publish(
-    ctx: ptr WakuContext,
-    pubSubTopic: cstring,
-    jsonWakuMessage: cstring,
-    timeoutMs: cuint,
-    callback: WakuCallBack,
-    userData: pointer,
-): cint {.dynlib, exportc, cdecl.} =
-  # https://rfc.vac.dev/spec/36/#extern-char-waku_relay_publishchar-messagejson-char-pubsubtopic-int-timeoutms
-
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  var jsonMessage: JsonMessage
-  try:
-    let jsonContent = parseJson($jsonWakuMessage)
-    jsonMessage = JsonMessage.fromJsonNode(jsonContent).valueOr:
-      raise newException(JsonParsingError, $error)
-  except JsonParsingError:
-    let msg = fmt"Error parsing json message: {getCurrentExceptionMsg()}"
-    callback(RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), userData)
-    return RET_ERR
-
-  let wakuMessage = jsonMessage.toWakuMessage().valueOr:
-    let msg = "Problem building the WakuMessage: " & $error
-    callback(RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), userData)
-    return RET_ERR
-
-  handleRequest(
-    ctx,
-    RequestType.RELAY,
-    RelayRequest.createShared(RelayMsgType.PUBLISH, pubSubTopic, nil, wakuMessage),
-    callback,
-    userData,
-  )
-
-proc waku_start(
-    ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-  handleRequest(
-    ctx,
-    RequestType.LIFECYCLE,
-    NodeLifecycleRequest.createShared(NodeLifecycleMsgType.START_NODE),
-    callback,
-    userData,
-  )
-
-proc waku_stop(
-    ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-  handleRequest(
-    ctx,
-    RequestType.LIFECYCLE,
-    NodeLifecycleRequest.createShared(NodeLifecycleMsgType.STOP_NODE),
-    callback,
-    userData,
-  )
-
-proc waku_relay_subscribe(
-    ctx: ptr WakuContext,
-    pubSubTopic: cstring,
-    callback: WakuCallBack,
-    userData: pointer,
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  var cb = onReceivedMessage(ctx)
-
-  handleRequest(
-    ctx,
-    RequestType.RELAY,
-    RelayRequest.createShared(RelayMsgType.SUBSCRIBE, pubSubTopic, WakuRelayHandler(cb)),
-    callback,
-    userData,
-  )
-
-proc waku_relay_add_protected_shard(
-    ctx: ptr WakuContext,
-    clusterId: cint,
-    shardId: cint,
-    publicKey: cstring,
-    callback: WakuCallBack,
-    userData: pointer,
-): cint {.dynlib, exportc, cdecl.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.RELAY,
-    RelayRequest.createShared(
-      RelayMsgType.ADD_PROTECTED_SHARD,
-      clusterId = clusterId,
-      shardId = shardId,
-      publicKey = publicKey,
-    ),
-    callback,
-    userData,
-  )
-
-proc waku_relay_unsubscribe(
-    ctx: ptr WakuContext,
-    pubSubTopic: cstring,
-    callback: WakuCallBack,
-    userData: pointer,
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.RELAY,
-    RelayRequest.createShared(
-      RelayMsgType.UNSUBSCRIBE, pubSubTopic, WakuRelayHandler(onReceivedMessage(ctx))
-    ),
-    callback,
-    userData,
-  )
-
-proc waku_relay_get_num_connected_peers(
-    ctx: ptr WakuContext,
-    pubSubTopic: cstring,
-    callback: WakuCallBack,
-    userData: pointer,
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.RELAY,
-    RelayRequest.createShared(RelayMsgType.NUM_CONNECTED_PEERS, pubSubTopic),
-    callback,
-    userData,
-  )
-
-proc waku_relay_get_connected_peers(
-    ctx: ptr WakuContext,
-    pubSubTopic: cstring,
-    callback: WakuCallBack,
-    userData: pointer,
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.RELAY,
-    RelayRequest.createShared(RelayMsgType.LIST_CONNECTED_PEERS, pubSubTopic),
-    callback,
-    userData,
-  )
-
-proc waku_relay_get_num_peers_in_mesh(
-    ctx: ptr WakuContext,
-    pubSubTopic: cstring,
-    callback: WakuCallBack,
-    userData: pointer,
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.RELAY,
-    RelayRequest.createShared(RelayMsgType.NUM_MESH_PEERS, pubSubTopic),
-    callback,
-    userData,
-  )
-
-proc waku_relay_get_peers_in_mesh(
-    ctx: ptr WakuContext,
-    pubSubTopic: cstring,
-    callback: WakuCallBack,
-    userData: pointer,
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.RELAY,
-    RelayRequest.createShared(RelayMsgType.LIST_MESH_PEERS, pubSubTopic),
-    callback,
-    userData,
-  )
-
-proc waku_filter_subscribe(
-    ctx: ptr WakuContext,
-    pubSubTopic: cstring,
-    contentTopics: cstring,
-    callback: WakuCallBack,
-    userData: pointer,
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.FILTER,
-    FilterRequest.createShared(
-      FilterMsgType.SUBSCRIBE,
-      pubSubTopic,
-      contentTopics,
-      FilterPushHandler(onReceivedMessage(ctx)),
-    ),
-    callback,
-    userData,
-  )
-
-proc waku_filter_unsubscribe(
-    ctx: ptr WakuContext,
-    pubSubTopic: cstring,
-    contentTopics: cstring,
-    callback: WakuCallBack,
-    userData: pointer,
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.FILTER,
-    FilterRequest.createShared(FilterMsgType.UNSUBSCRIBE, pubSubTopic, contentTopics),
-    callback,
-    userData,
-  )
-
-proc waku_filter_unsubscribe_all(
-    ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.FILTER,
-    FilterRequest.createShared(FilterMsgType.UNSUBSCRIBE_ALL),
-    callback,
-    userData,
-  )
-
-proc waku_lightpush_publish(
-    ctx: ptr WakuContext,
-    pubSubTopic: cstring,
-    jsonWakuMessage: cstring,
-    callback: WakuCallBack,
-    userData: pointer,
-): cint {.dynlib, exportc, cdecl.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  var jsonMessage: JsonMessage
-  try:
-    let jsonContent = parseJson($jsonWakuMessage)
-    jsonMessage = JsonMessage.fromJsonNode(jsonContent).valueOr:
-      raise newException(JsonParsingError, $error)
-  except JsonParsingError:
-    let msg = fmt"Error parsing json message: {getCurrentExceptionMsg()}"
-    callback(RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), userData)
-    return RET_ERR
-
-  let wakuMessage = jsonMessage.toWakuMessage().valueOr:
-    let msg = "Problem building the WakuMessage: " & $error
-    callback(RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), userData)
-    return RET_ERR
-
-  handleRequest(
-    ctx,
-    RequestType.LIGHTPUSH,
-    LightpushRequest.createShared(LightpushMsgType.PUBLISH, pubSubTopic, wakuMessage),
-    callback,
-    userData,
-  )
-
-proc waku_connect(
-    ctx: ptr WakuContext,
-    peerMultiAddr: cstring,
-    timeoutMs: cuint,
-    callback: WakuCallBack,
-    userData: pointer,
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.PEER_MANAGER,
-    PeerManagementRequest.createShared(
-      PeerManagementMsgType.CONNECT_TO, $peerMultiAddr, chronos.milliseconds(timeoutMs)
-    ),
-    callback,
-    userData,
-  )
-
-proc waku_disconnect_peer_by_id(
-    ctx: ptr WakuContext, peerId: cstring, callback: WakuCallBack, userData: pointer
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.PEER_MANAGER,
-    PeerManagementRequest.createShared(
-      op = PeerManagementMsgType.DISCONNECT_PEER_BY_ID, peerId = $peerId
-    ),
-    callback,
-    userData,
-  )
-
-proc waku_disconnect_all_peers(
-    ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.PEER_MANAGER,
-    PeerManagementRequest.createShared(op = PeerManagementMsgType.DISCONNECT_ALL_PEERS),
-    callback,
-    userData,
-  )
-
-proc waku_dial_peer(
-    ctx: ptr WakuContext,
-    peerMultiAddr: cstring,
-    protocol: cstring,
-    timeoutMs: cuint,
-    callback: WakuCallBack,
-    userData: pointer,
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.PEER_MANAGER,
-    PeerManagementRequest.createShared(
-      op = PeerManagementMsgType.DIAL_PEER,
-      peerMultiAddr = $peerMultiAddr,
-      protocol = $protocol,
-    ),
-    callback,
-    userData,
-  )
-
-proc waku_dial_peer_by_id(
-    ctx: ptr WakuContext,
-    peerId: cstring,
-    protocol: cstring,
-    timeoutMs: cuint,
-    callback: WakuCallBack,
-    userData: pointer,
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.PEER_MANAGER,
-    PeerManagementRequest.createShared(
-      op = PeerManagementMsgType.DIAL_PEER_BY_ID, peerId = $peerId, protocol = $protocol
-    ),
-    callback,
-    userData,
-  )
-
-proc waku_get_peerids_from_peerstore(
-    ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.PEER_MANAGER,
-    PeerManagementRequest.createShared(PeerManagementMsgType.GET_ALL_PEER_IDS),
-    callback,
-    userData,
-  )
-
-proc waku_get_connected_peers_info(
-    ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.PEER_MANAGER,
-    PeerManagementRequest.createShared(PeerManagementMsgType.GET_CONNECTED_PEERS_INFO),
-    callback,
-    userData,
-  )
-
-proc waku_get_connected_peers(
-    ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.PEER_MANAGER,
-    PeerManagementRequest.createShared(PeerManagementMsgType.GET_CONNECTED_PEERS),
-    callback,
-    userData,
-  )
-
-proc waku_get_peerids_by_protocol(
-    ctx: ptr WakuContext, protocol: cstring, callback: WakuCallBack, userData: pointer
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.PEER_MANAGER,
-    PeerManagementRequest.createShared(
-      op = PeerManagementMsgType.GET_PEER_IDS_BY_PROTOCOL, protocol = $protocol
-    ),
-    callback,
-    userData,
-  )
-
-proc waku_store_query(
-    ctx: ptr WakuContext,
-    jsonQuery: cstring,
-    peerAddr: cstring,
-    timeoutMs: cint,
-    callback: WakuCallBack,
-    userData: pointer,
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.STORE,
-    StoreRequest.createShared(StoreReqType.REMOTE_QUERY, jsonQuery, peerAddr, timeoutMs),
-    callback,
-    userData,
-  )
-
-proc waku_listen_addresses(
-    ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.DEBUG,
-    DebugNodeRequest.createShared(DebugNodeMsgType.RETRIEVE_LISTENING_ADDRESSES),
-    callback,
-    userData,
-  )
-
-proc waku_dns_discovery(
-    ctx: ptr WakuContext,
-    entTreeUrl: cstring,
-    nameDnsServer: cstring,
-    timeoutMs: cint,
-    callback: WakuCallBack,
-    userData: pointer,
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.DISCOVERY,
-    DiscoveryRequest.createRetrieveBootstrapNodesRequest(
-      DiscoveryMsgType.GET_BOOTSTRAP_NODES, entTreeUrl, nameDnsServer, timeoutMs
-    ),
-    callback,
-    userData,
-  )
-
-proc waku_discv5_update_bootnodes(
-    ctx: ptr WakuContext, bootnodes: cstring, callback: WakuCallBack, userData: pointer
-): cint {.dynlib, exportc.} =
-  ## Updates the bootnode list used for discovering new peers via DiscoveryV5
-  ## bootnodes - JSON array containing the bootnode ENRs i.e. `["enr:...", "enr:..."]`
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.DISCOVERY,
-    DiscoveryRequest.createUpdateBootstrapNodesRequest(
-      DiscoveryMsgType.UPDATE_DISCV5_BOOTSTRAP_NODES, bootnodes
-    ),
-    callback,
-    userData,
-  )
-
-proc waku_get_my_enr(
-    ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.DEBUG,
-    DebugNodeRequest.createShared(DebugNodeMsgType.RETRIEVE_MY_ENR),
-    callback,
-    userData,
-  )
-
-proc waku_get_my_peerid(
-    ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.DEBUG,
-    DebugNodeRequest.createShared(DebugNodeMsgType.RETRIEVE_MY_PEER_ID),
-    callback,
-    userData,
-  )
-
-proc waku_get_metrics(
-    ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.DEBUG,
-    DebugNodeRequest.createShared(DebugNodeMsgType.RETRIEVE_METRICS),
-    callback,
-    userData,
-  )
-
-proc waku_start_discv5(
-    ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.DISCOVERY,
-    DiscoveryRequest.createDiscV5StartRequest(),
-    callback,
-    userData,
-  )
-
-proc waku_stop_discv5(
-    ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.DISCOVERY,
-    DiscoveryRequest.createDiscV5StopRequest(),
-    callback,
-    userData,
-  )
-
-proc waku_peer_exchange_request(
-    ctx: ptr WakuContext, numPeers: uint64, callback: WakuCallBack, userData: pointer
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.DISCOVERY,
-    DiscoveryRequest.createPeerExchangeRequest(numPeers),
-    callback,
-    userData,
-  )
-
-proc waku_ping_peer(
-    ctx: ptr WakuContext,
-    peerAddr: cstring,
-    timeoutMs: cuint,
-    callback: WakuCallBack,
-    userData: pointer,
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.PING,
-    PingRequest.createShared(peerAddr, chronos.milliseconds(timeoutMs)),
-    callback,
-    userData,
-  )
-
-proc waku_is_online(
-    ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer
-): cint {.dynlib, exportc.} =
-  initializeLibrary()
-  checkLibwakuParams(ctx, callback, userData)
-
-  handleRequest(
-    ctx,
-    RequestType.DEBUG,
-    DebugNodeRequest.createShared(DebugNodeMsgType.RETRIEVE_ONLINE_STATE),
-    callback,
-    userData,
-  )
-
-### End of exported procs
-################################################################################
+# ### End of exported procs
+# ################################################################################
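Every proc removed above follows the same recipe: initialize the runtime, validate the context and callback, then forward a typed request to the Waku thread. A sketch of what one more such endpoint would have looked like (`waku_example_peerid` is hypothetical; all identifiers it uses appear in the removed code):

proc waku_example_peerid(
    ctx: ptr WakuContext, callback: WakuCallBack, userData: pointer
): cint {.dynlib, exportc.} =
  initializeLibrary()
  checkLibwakuParams(ctx, callback, userData)
  # The request object carries the operation; the callback receives the answer.
  handleRequest(
    ctx,
    RequestType.DEBUG,
    DebugNodeRequest.createShared(DebugNodeMsgType.RETRIEVE_MY_PEER_ID),
    callback,
    userData,
  )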
@@ -1,223 +0,0 @@
-{.pragma: exported, exportc, cdecl, raises: [].}
-{.pragma: callback, cdecl, raises: [], gcsafe.}
-{.passc: "-fPIC".}
-
-import std/[options, atomics, os, net, locks]
-import chronicles, chronos, chronos/threadsync, taskpools/channels_spsc_single, results
-import
-  waku/common/logging,
-  waku/factory/waku,
-  waku/node/peer_manager,
-  waku/waku_relay/[protocol, topic_health],
-  waku/waku_core/[topics/pubsub_topic, message],
-  ./waku_thread_requests/[waku_thread_request, requests/debug_node_request],
-  ./ffi_types,
-  ./events/[
-    json_message_event, json_topic_health_change_event, json_connection_change_event,
-    json_waku_not_responding_event,
-  ]
-
-type WakuContext* = object
-  wakuThread: Thread[(ptr WakuContext)]
-  watchdogThread: Thread[(ptr WakuContext)]
-    # monitors the Waku thread and notifies the Waku SDK consumer if it hangs
-  lock: Lock
-  reqChannel: ChannelSPSCSingle[ptr WakuThreadRequest]
-  reqSignal: ThreadSignalPtr
-    # to inform The Waku Thread (a.k.a TWT) that a new request is sent
-  reqReceivedSignal: ThreadSignalPtr
-    # to inform the main thread that the request is rx by TWT
-  userData*: pointer
-  eventCallback*: pointer
-  eventUserdata*: pointer
-  running: Atomic[bool] # To control when the threads are running
-
-const git_version* {.strdefine.} = "n/a"
-const versionString = "version / git commit hash: " & waku.git_version
-
-template callEventCallback(ctx: ptr WakuContext, eventName: string, body: untyped) =
-  if isNil(ctx[].eventCallback):
-    error eventName & " - eventCallback is nil"
-    return
-
-  foreignThreadGc:
-    try:
-      let event = body
-      cast[WakuCallBack](ctx[].eventCallback)(
-        RET_OK, unsafeAddr event[0], cast[csize_t](len(event)), ctx[].eventUserData
-      )
-    except Exception, CatchableError:
-      let msg =
-        "Exception " & eventName & " when calling 'eventCallBack': " &
-        getCurrentExceptionMsg()
-      cast[WakuCallBack](ctx[].eventCallback)(
-        RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), ctx[].eventUserData
-      )
-
-proc onConnectionChange*(ctx: ptr WakuContext): ConnectionChangeHandler =
-  return proc(peerId: PeerId, peerEvent: PeerEventKind) {.async.} =
-    callEventCallback(ctx, "onConnectionChange"):
-      $JsonConnectionChangeEvent.new($peerId, peerEvent)
-
-proc onReceivedMessage*(ctx: ptr WakuContext): WakuRelayHandler =
-  return proc(pubsubTopic: PubsubTopic, msg: WakuMessage) {.async.} =
-    callEventCallback(ctx, "onReceivedMessage"):
-      $JsonMessageEvent.new(pubsubTopic, msg)
-
-proc onTopicHealthChange*(ctx: ptr WakuContext): TopicHealthChangeHandler =
-  return proc(pubsubTopic: PubsubTopic, topicHealth: TopicHealth) {.async.} =
-    callEventCallback(ctx, "onTopicHealthChange"):
-      $JsonTopicHealthChangeEvent.new(pubsubTopic, topicHealth)
-
-proc onWakuNotResponding*(ctx: ptr WakuContext) =
-  callEventCallback(ctx, "onWakuNotResponsive"):
-    $JsonWakuNotRespondingEvent.new()
-
-proc sendRequestToWakuThread*(
-    ctx: ptr WakuContext,
-    reqType: RequestType,
-    reqContent: pointer,
-    callback: WakuCallBack,
-    userData: pointer,
-    timeout = InfiniteDuration,
-): Result[void, string] =
-  ctx.lock.acquire()
-  # This lock is only necessary while we use a SP Channel and while the signalling
-  # between threads assumes that there aren't concurrent requests.
-  # Rearchitecting the signaling + migrating to a MP Channel will allow us to receive
-  # requests concurrently and spare us the need of locks
-  defer:
-    ctx.lock.release()
-
-  let req = WakuThreadRequest.createShared(reqType, reqContent, callback, userData)
-  ## Sending the request
-  let sentOk = ctx.reqChannel.trySend(req)
-  if not sentOk:
-    deallocShared(req)
-    return err("Couldn't send a request to the waku thread: " & $req[])
-
-  let fireSync = ctx.reqSignal.fireSync().valueOr:
-    deallocShared(req)
-    return err("failed fireSync: " & $error)
-
-  if not fireSync:
-    deallocShared(req)
-    return err("Couldn't fireSync in time")
-
-  ## wait until the Waku Thread properly received the request
-  ctx.reqReceivedSignal.waitSync(timeout).isOkOr:
-    deallocShared(req)
-    return err("Couldn't receive reqReceivedSignal signal")
-
-  ## Notice that in case of "ok", the deallocShared(req) is performed by the Waku Thread in the
-  ## process proc. See the 'waku_thread_request.nim' module for more details.
-  ok()
-
-proc watchdogThreadBody(ctx: ptr WakuContext) {.thread.} =
-  ## Watchdog thread that monitors the Waku thread and notifies the library user if it hangs.
-
-  let watchdogRun = proc(ctx: ptr WakuContext) {.async.} =
-    const WatchdogStartDelay = 10.seconds
-    const WatchdogTimeinterval = 1.seconds
-    const WakuNotRespondingTimeout = 3.seconds
-
-    # Give time for the node to be created and up before sending watchdog requests
-    await sleepAsync(WatchdogStartDelay)
-    while true:
-      await sleepAsync(WatchdogTimeinterval)
-
-      if ctx.running.load == false:
-        info "Watchdog thread exiting because WakuContext is not running"
-        break
-
-      let wakuCallback = proc(
-          callerRet: cint, msg: ptr cchar, len: csize_t, userData: pointer
-      ) {.cdecl, gcsafe, raises: [].} =
-        discard ## Don't do anything. Just respecting the callback signature.
-      const nilUserData = nil
-
-      trace "Sending watchdog request to Waku thread"
-
-      sendRequestToWakuThread(
-        ctx,
-        RequestType.DEBUG,
-        DebugNodeRequest.createShared(DebugNodeMsgType.CHECK_WAKU_NOT_BLOCKED),
-        wakuCallback,
-        nilUserData,
-        WakuNotRespondingTimeout,
-      ).isOkOr:
-        error "Failed to send watchdog request to Waku thread", error = $error
-        onWakuNotResponding(ctx)
-
-  waitFor watchdogRun(ctx)
-
-proc wakuThreadBody(ctx: ptr WakuContext) {.thread.} =
-  ## Waku thread that attends library user requests (stop, connect_to, etc.)
-
-  logging.setupLog(logging.LogLevel.DEBUG, logging.LogFormat.TEXT)
-
-  let wakuRun = proc(ctx: ptr WakuContext) {.async.} =
-    var waku: Waku
-    while true:
-      await ctx.reqSignal.wait()
-
-      if ctx.running.load == false:
-        break
-
-      ## Trying to get a request from the libwaku requestor thread
-      var request: ptr WakuThreadRequest
-      let recvOk = ctx.reqChannel.tryRecv(request)
-      if not recvOk:
-        error "waku thread could not receive a request"
-        continue
-
-      ## Handle the request
-      asyncSpawn WakuThreadRequest.process(request, addr waku)
-
-      ctx.reqReceivedSignal.fireSync().isOkOr:
-        error "could not fireSync back to requester thread", error = error
-
-  waitFor wakuRun(ctx)
-
-proc createWakuContext*(): Result[ptr WakuContext, string] =
-  ## This proc is called from the main thread and it creates
-  ## the Waku working thread.
-  var ctx = createShared(WakuContext, 1)
-  ctx.reqSignal = ThreadSignalPtr.new().valueOr:
-    return err("couldn't create reqSignal ThreadSignalPtr")
-  ctx.reqReceivedSignal = ThreadSignalPtr.new().valueOr:
-    return err("couldn't create reqReceivedSignal ThreadSignalPtr")
-  ctx.lock.initLock()
-
-  ctx.running.store(true)
-
-  try:
-    createThread(ctx.wakuThread, wakuThreadBody, ctx)
-  except ValueError, ResourceExhaustedError:
-    freeShared(ctx)
-    return err("failed to create the Waku thread: " & getCurrentExceptionMsg())
-
-  try:
-    createThread(ctx.watchdogThread, watchdogThreadBody, ctx)
-  except ValueError, ResourceExhaustedError:
-    freeShared(ctx)
-    return err("failed to create the watchdog thread: " & getCurrentExceptionMsg())
-
-  return ok(ctx)
-
-proc destroyWakuContext*(ctx: ptr WakuContext): Result[void, string] =
-  ctx.running.store(false)
-
-  let signaledOnTime = ctx.reqSignal.fireSync().valueOr:
-    return err("error in destroyWakuContext: " & $error)
-  if not signaledOnTime:
-    return err("failed to signal reqSignal on time in destroyWakuContext")
-
-  joinThread(ctx.wakuThread)
-  joinThread(ctx.watchdogThread)
-  ctx.lock.deinitLock()
-  ?ctx.reqSignal.close()
-  ?ctx.reqReceivedSignal.close()
-  freeShared(ctx)
-
-  return ok()
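The removed context module serializes every request through a single SPSC channel guarded by two signals, which is why the lock exists. A schematic of the handshake, kept as Nim comments since both halves are quoted in full above:

# Requester thread                       # Waku thread (wakuThreadBody)
# ctx.reqChannel.trySend(req)            #
# ctx.reqSignal.fireSync()       ----->  # await ctx.reqSignal.wait()
#                                        # ctx.reqChannel.tryRecv(request)
#                                        # asyncSpawn WakuThreadRequest.process(...)
# ctx.reqReceivedSignal.waitSync <-----  # ctx.reqReceivedSignal.fireSync()
# (requester unblocks; the user callback fires later, once processing completes)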
@@ -1,63 +0,0 @@
-import std/json
-import
-  chronicles,
-  chronos,
-  results,
-  eth/p2p/discoveryv5/enr,
-  strutils,
-  libp2p/peerid,
-  metrics
-import
-  ../../../waku/factory/waku,
-  ../../../waku/node/waku_node,
-  ../../../waku/node/health_monitor
-
-type DebugNodeMsgType* = enum
-  RETRIEVE_LISTENING_ADDRESSES
-  RETRIEVE_MY_ENR
-  RETRIEVE_MY_PEER_ID
-  RETRIEVE_METRICS
-  RETRIEVE_ONLINE_STATE
-  CHECK_WAKU_NOT_BLOCKED
-
-type DebugNodeRequest* = object
-  operation: DebugNodeMsgType
-
-proc createShared*(T: type DebugNodeRequest, op: DebugNodeMsgType): ptr type T =
-  var ret = createShared(T)
-  ret[].operation = op
-  return ret
-
-proc destroyShared(self: ptr DebugNodeRequest) =
-  deallocShared(self)
-
-proc getMultiaddresses(node: WakuNode): seq[string] =
-  return node.info().listenAddresses
-
-proc getMetrics(): string =
-  {.gcsafe.}:
-    return defaultRegistry.toText() ## defaultRegistry is {.global.} in metrics module
-
-proc process*(
-    self: ptr DebugNodeRequest, waku: Waku
-): Future[Result[string, string]] {.async.} =
-  defer:
-    destroyShared(self)
-
-  case self.operation
-  of RETRIEVE_LISTENING_ADDRESSES:
-    ## returns a comma-separated string of the listen addresses
-    return ok(waku.node.getMultiaddresses().join(","))
-  of RETRIEVE_MY_ENR:
-    return ok(waku.node.enr.toURI())
-  of RETRIEVE_MY_PEER_ID:
-    return ok($waku.node.peerId())
-  of RETRIEVE_METRICS:
-    return ok(getMetrics())
-  of RETRIEVE_ONLINE_STATE:
-    return ok($waku.healthMonitor.onlineMonitor.amIOnline())
-  of CHECK_WAKU_NOT_BLOCKED:
-    return ok("waku thread is not blocked")
-
-  error "unsupported operation in DebugNodeRequest"
-  return err("unsupported operation in DebugNodeRequest")
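A sketch of how one of these requests would be exercised in isolation, bypassing the channel machinery (hypothetical wiring; `waku` is assumed to be an already-created Waku instance):

let req = DebugNodeRequest.createShared(DebugNodeMsgType.RETRIEVE_MY_ENR)
let res = waitFor req.process(waku)  # process() also frees the shared request
if res.isOk():
  echo "ENR: ", res.get()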
@@ -1,151 +0,0 @@
-import std/json
-import chronos, chronicles, results, strutils, libp2p/multiaddress
-import
-  ../../../waku/factory/waku,
-  ../../../waku/discovery/waku_dnsdisc,
-  ../../../waku/discovery/waku_discv5,
-  ../../../waku/waku_core/peers,
-  ../../../waku/node/waku_node,
-  ../../../waku/node/kernel_api,
-  ../../alloc
-
-type DiscoveryMsgType* = enum
-  GET_BOOTSTRAP_NODES
-  UPDATE_DISCV5_BOOTSTRAP_NODES
-  START_DISCV5
-  STOP_DISCV5
-  PEER_EXCHANGE
-
-type DiscoveryRequest* = object
-  operation: DiscoveryMsgType
-
-  ## used in GET_BOOTSTRAP_NODES
-  enrTreeUrl: cstring
-  nameDnsServer: cstring
-  timeoutMs: cint
-
-  ## used in UPDATE_DISCV5_BOOTSTRAP_NODES
-  nodes: cstring
-
-  ## used in PEER_EXCHANGE
-  numPeers: uint64
-
-proc createShared(
-    T: type DiscoveryRequest,
-    op: DiscoveryMsgType,
-    enrTreeUrl: cstring,
-    nameDnsServer: cstring,
-    timeoutMs: cint,
-    nodes: cstring,
-    numPeers: uint64,
-): ptr type T =
-  var ret = createShared(T)
-  ret[].operation = op
-  ret[].enrTreeUrl = enrTreeUrl.alloc()
-  ret[].nameDnsServer = nameDnsServer.alloc()
-  ret[].timeoutMs = timeoutMs
-  ret[].nodes = nodes.alloc()
-  ret[].numPeers = numPeers
-  return ret
-
-proc createRetrieveBootstrapNodesRequest*(
-    T: type DiscoveryRequest,
-    op: DiscoveryMsgType,
-    enrTreeUrl: cstring,
-    nameDnsServer: cstring,
-    timeoutMs: cint,
-): ptr type T =
-  return T.createShared(op, enrTreeUrl, nameDnsServer, timeoutMs, "", 0)
-
-proc createUpdateBootstrapNodesRequest*(
-    T: type DiscoveryRequest, op: DiscoveryMsgType, nodes: cstring
-): ptr type T =
-  return T.createShared(op, "", "", 0, nodes, 0)
-
-proc createDiscV5StartRequest*(T: type DiscoveryRequest): ptr type T =
-  return T.createShared(START_DISCV5, "", "", 0, "", 0)
-
-proc createDiscV5StopRequest*(T: type DiscoveryRequest): ptr type T =
-  return T.createShared(STOP_DISCV5, "", "", 0, "", 0)
-
-proc createPeerExchangeRequest*(
-    T: type DiscoveryRequest, numPeers: uint64
-): ptr type T =
-  return T.createShared(PEER_EXCHANGE, "", "", 0, "", numPeers)
-
-proc destroyShared(self: ptr DiscoveryRequest) =
-  deallocShared(self[].enrTreeUrl)
-  deallocShared(self[].nameDnsServer)
-  deallocShared(self[].nodes)
-  deallocShared(self)
-
-proc retrieveBootstrapNodes(
-    enrTreeUrl: string, ipDnsServer: string
-): Future[Result[seq[string], string]] {.async.} =
-  let dnsNameServers = @[parseIpAddress(ipDnsServer)]
-  let discoveredPeers: seq[RemotePeerInfo] = (
-    await retrieveDynamicBootstrapNodes(enrTreeUrl, dnsNameServers)
-  ).valueOr:
-    return err("failed discovering peers from DNS: " & $error)
-
-  var multiAddresses = newSeq[string]()
-
-  for discPeer in discoveredPeers:
-    for address in discPeer.addrs:
-      multiAddresses.add($address & "/p2p/" & $discPeer)
-
-  return ok(multiAddresses)
-
-proc updateDiscv5BootstrapNodes(nodes: string, waku: ptr Waku): Result[void, string] =
-  waku.wakuDiscv5.updateBootstrapRecords(nodes).isOkOr:
-    return err("error in updateDiscv5BootstrapNodes: " & $error)
-  return ok()
-
-proc performPeerExchangeRequestTo(
-    numPeers: uint64, waku: ptr Waku
-): Future[Result[int, string]] {.async.} =
-  let numPeersRecv = (await waku.node.fetchPeerExchangePeers(numPeers)).valueOr:
-    return err($error)
-  return ok(numPeersRecv)
-
-proc process*(
-    self: ptr DiscoveryRequest, waku: ptr Waku
-): Future[Result[string, string]] {.async.} =
-  defer:
-    destroyShared(self)
-
-  case self.operation
-  of START_DISCV5:
-    let res = await waku.wakuDiscv5.start()
-    res.isOkOr:
-      error "START_DISCV5 failed", error = error
-      return err($error)
-
-    return ok("discv5 started correctly")
-  of STOP_DISCV5:
-    await waku.wakuDiscv5.stop()
-
-    return ok("discv5 stopped correctly")
-  of GET_BOOTSTRAP_NODES:
-    let nodes = (
-      await retrieveBootstrapNodes($self[].enrTreeUrl, $self[].nameDnsServer)
-    ).valueOr:
-      error "GET_BOOTSTRAP_NODES failed", error = error
-      return err($error)
-
-    ## returns a comma-separated string of bootstrap nodes' multiaddresses
-    return ok(nodes.join(","))
-  of UPDATE_DISCV5_BOOTSTRAP_NODES:
-    updateDiscv5BootstrapNodes($self[].nodes, waku).isOkOr:
-      error "UPDATE_DISCV5_BOOTSTRAP_NODES failed", error = error
-      return err($error)
-
-    return ok("discovery request processed correctly")
-  of PEER_EXCHANGE:
-    let numValidPeers = (await performPeerExchangeRequestTo(self[].numPeers, waku)).valueOr:
-      error "PEER_EXCHANGE failed", error = error
-      return err($error)
-    return ok($numValidPeers)
-
-  error "discovery request not handled"
-  return err("discovery request not handled")
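A sketch of building the DNS-discovery request this module served (the ENR tree URL is illustrative, the DNS server is a public resolver, and the timeout is in milliseconds):

let req = DiscoveryRequest.createRetrieveBootstrapNodesRequest(
  DiscoveryMsgType.GET_BOOTSTRAP_NODES,
  "enrtree://EXAMPLE@nodes.example.org", # hypothetical ENR tree URL
  "8.8.8.8",
  5000,
)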
@@ -1,135 +0,0 @@
-import std/[sequtils, strutils, tables]
-import chronicles, chronos, results, options, json
-import
-  ../../../waku/factory/waku,
-  ../../../waku/node/waku_node,
-  ../../alloc,
-  ../../../waku/node/peer_manager
-
-type PeerManagementMsgType* {.pure.} = enum
-  CONNECT_TO
-  GET_ALL_PEER_IDS
-  GET_CONNECTED_PEERS_INFO
-  GET_PEER_IDS_BY_PROTOCOL
-  DISCONNECT_PEER_BY_ID
-  DISCONNECT_ALL_PEERS
-  DIAL_PEER
-  DIAL_PEER_BY_ID
-  GET_CONNECTED_PEERS
-
-type PeerManagementRequest* = object
-  operation: PeerManagementMsgType
-  peerMultiAddr: cstring
-  dialTimeout: Duration
-  protocol: cstring
-  peerId: cstring
-
-type PeerInfo = object
-  protocols: seq[string]
-  addresses: seq[string]
-
-proc createShared*(
-    T: type PeerManagementRequest,
-    op: PeerManagementMsgType,
-    peerMultiAddr = "",
-    dialTimeout = chronos.milliseconds(0), ## arbitrary Duration as not all ops need dialTimeout
-    peerId = "",
-    protocol = "",
-): ptr type T =
-  var ret = createShared(T)
-  ret[].operation = op
-  ret[].peerMultiAddr = peerMultiAddr.alloc()
-  ret[].peerId = peerId.alloc()
-  ret[].protocol = protocol.alloc()
-  ret[].dialTimeout = dialTimeout
-  return ret
-
-proc destroyShared(self: ptr PeerManagementRequest) =
-  if not isNil(self[].peerMultiAddr):
-    deallocShared(self[].peerMultiAddr)
-
-  if not isNil(self[].peerId):
-    deallocShared(self[].peerId)
-
-  if not isNil(self[].protocol):
-    deallocShared(self[].protocol)
-
-  deallocShared(self)
-
-proc process*(
-    self: ptr PeerManagementRequest, waku: Waku
-): Future[Result[string, string]] {.async.} =
-  defer:
-    destroyShared(self)
-
-  case self.operation
-  of CONNECT_TO:
-    let peers = ($self[].peerMultiAddr).split(",").mapIt(strip(it))
-    await waku.node.connectToNodes(peers, source = "static")
-    return ok("")
-  of GET_ALL_PEER_IDS:
-    ## returns a comma-separated string of peerIDs
-    let peerIDs =
-      waku.node.peerManager.switch.peerStore.peers().mapIt($it.peerId).join(",")
-    return ok(peerIDs)
-  of GET_CONNECTED_PEERS_INFO:
-    ## returns a JSON string mapping peerIDs to objects with protocols and addresses
-
-    var peersMap = initTable[string, PeerInfo]()
-    let peers = waku.node.peerManager.switch.peerStore.peers().filterIt(
-      it.connectedness == Connected
-    )
-
-    # Build a map of peer IDs to peer info objects
-    for peer in peers:
-      let peerIdStr = $peer.peerId
-      peersMap[peerIdStr] =
-        PeerInfo(protocols: peer.protocols, addresses: peer.addrs.mapIt($it))
-
-    # Convert the map to JSON string
-    let jsonObj = %*peersMap
-    let jsonStr = $jsonObj
-    return ok(jsonStr)
-  of GET_PEER_IDS_BY_PROTOCOL:
-    ## returns a comma-separated string of peerIDs that mount the given protocol
-    let connectedPeers = waku.node.peerManager.switch.peerStore
-      .peers($self[].protocol)
-      .filterIt(it.connectedness == Connected)
-      .mapIt($it.peerId)
-      .join(",")
-    return ok(connectedPeers)
-  of DISCONNECT_PEER_BY_ID:
-    let peerId = PeerId.init($self[].peerId).valueOr:
-      error "DISCONNECT_PEER_BY_ID failed", error = $error
-      return err($error)
-    await waku.node.peerManager.disconnectNode(peerId)
-    return ok("")
-  of DISCONNECT_ALL_PEERS:
-    await waku.node.peerManager.disconnectAllPeers()
-    return ok("")
-  of DIAL_PEER:
-    let remotePeerInfo = parsePeerInfo($self[].peerMultiAddr).valueOr:
-      error "DIAL_PEER failed", error = $error
-      return err($error)
-    let conn = await waku.node.peerManager.dialPeer(remotePeerInfo, $self[].protocol)
-    if conn.isNone():
-      let msg = "failed dialing peer"
-      error "DIAL_PEER failed", error = msg, peerId = $remotePeerInfo.peerId
-      return err(msg)
-  of DIAL_PEER_BY_ID:
-    let peerId = PeerId.init($self[].peerId).valueOr:
-      error "DIAL_PEER_BY_ID failed", error = $error
-      return err($error)
-    let conn = await waku.node.peerManager.dialPeer(peerId, $self[].protocol)
-    if conn.isNone():
-      let msg = "failed dialing peer"
-      error "DIAL_PEER_BY_ID failed", error = msg, peerId = $peerId
-      return err(msg)
-  of GET_CONNECTED_PEERS:
-    ## returns a comma-separated string of peerIDs
-    let
-      (inPeerIds, outPeerIds) = waku.node.peerManager.connectedPeers()
-      connectedPeerids = concat(inPeerIds, outPeerIds)
-    return ok(connectedPeerids.mapIt($it).join(","))
-
-  return ok("")
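A sketch of the CONNECT_TO variant, which accepts a comma-separated multiaddr list (the address below is illustrative and truncated):

let req = PeerManagementRequest.createShared(
  PeerManagementMsgType.CONNECT_TO,
  peerMultiAddr = "/ip4/192.0.2.1/tcp/60000/p2p/16Uiu2HAm...", # hypothetical peer
  dialTimeout = chronos.milliseconds(10_000),
)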
@@ -1,54 +0,0 @@
-import std/[json, strutils]
-import chronos, results
-import libp2p/[protocols/ping, switch, multiaddress, multicodec]
-import ../../../waku/[factory/waku, waku_core/peers, node/waku_node], ../../alloc
-
-type PingRequest* = object
-  peerAddr: cstring
-  timeout: Duration
-
-proc createShared*(
-    T: type PingRequest, peerAddr: cstring, timeout: Duration
-): ptr type T =
-  var ret = createShared(T)
-  ret[].peerAddr = peerAddr.alloc()
-  ret[].timeout = timeout
-  return ret
-
-proc destroyShared(self: ptr PingRequest) =
-  deallocShared(self[].peerAddr)
-  deallocShared(self)
-
-proc process*(
-    self: ptr PingRequest, waku: ptr Waku
-): Future[Result[string, string]] {.async.} =
-  defer:
-    destroyShared(self)
-
-  let peerInfo = peers.parsePeerInfo(($self[].peerAddr).split(",")).valueOr:
-    return err("PingRequest failed to parse peer addr: " & $error)
-
-  proc ping(): Future[Result[Duration, string]] {.async, gcsafe.} =
-    try:
-      let conn = await waku.node.switch.dial(peerInfo.peerId, peerInfo.addrs, PingCodec)
-      defer:
-        await conn.close()
-
-      let pingRTT = await waku.node.libp2pPing.ping(conn)
-      if pingRTT == 0.nanos:
-        return err("could not ping peer: rtt-0")
-      return ok(pingRTT)
-    except CatchableError:
-      return err("could not ping peer: " & getCurrentExceptionMsg())
-
-  let pingFuture = ping()
-  let pingRTT: Duration =
-    if self[].timeout == chronos.milliseconds(0): # No timeout expected
-      ?(await pingFuture)
-    else:
-      let timedOut = not (await pingFuture.withTimeout(self[].timeout))
-      if timedOut:
-        return err("ping timed out")
-      ?(pingFuture.read())
-
-  ok($(pingRTT.nanos))
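A sketch of a ping with a 2-second budget; passing chronos.milliseconds(0) instead would take the no-timeout branch above (address illustrative):

let req = PingRequest.createShared(
  "/ip4/192.0.2.1/tcp/60000/p2p/16Uiu2HAm...", chronos.seconds(2)
)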
@@ -1,106 +0,0 @@
import options, std/[strutils, sequtils]
import chronicles, chronos, results
import
  ../../../../waku/waku_filter_v2/client,
  ../../../../waku/waku_core/message/message,
  ../../../../waku/factory/waku,
  ../../../../waku/waku_filter_v2/common,
  ../../../../waku/waku_core/subscription/push_handler,
  ../../../../waku/node/peer_manager/peer_manager,
  ../../../../waku/node/waku_node,
  ../../../../waku/node/kernel_api,
  ../../../../waku/waku_core/topics/pubsub_topic,
  ../../../../waku/waku_core/topics/content_topic,
  ../../../alloc

type FilterMsgType* = enum
  SUBSCRIBE
  UNSUBSCRIBE
  UNSUBSCRIBE_ALL

type FilterRequest* = object
  operation: FilterMsgType
  pubsubTopic: cstring
  contentTopics: cstring ## comma-separated list of content-topics
  filterPushEventCallback: FilterPushHandler ## handles incoming filter pushed msgs

proc createShared*(
    T: type FilterRequest,
    op: FilterMsgType,
    pubsubTopic: cstring = "",
    contentTopics: cstring = "",
    filterPushEventCallback: FilterPushHandler = nil,
): ptr type T =
  var ret = createShared(T)
  ret[].operation = op
  ret[].pubsubTopic = pubsubTopic.alloc()
  ret[].contentTopics = contentTopics.alloc()
  ret[].filterPushEventCallback = filterPushEventCallback

  return ret

proc destroyShared(self: ptr FilterRequest) =
  deallocShared(self[].pubsubTopic)
  deallocShared(self[].contentTopics)
  deallocShared(self)

proc process*(
    self: ptr FilterRequest, waku: ptr Waku
): Future[Result[string, string]] {.async.} =
  defer:
    destroyShared(self)

  const FilterOpTimeout = 5.seconds
  if waku.node.wakuFilterClient.isNil():
    let errorMsg = "FilterRequest waku.node.wakuFilterClient is nil"
    error "fail filter process", error = errorMsg, op = $(self.operation)
    return err(errorMsg)

  case self.operation
  of SUBSCRIBE:
    waku.node.wakuFilterClient.registerPushHandler(self.filterPushEventCallback)

    let peer = waku.node.peerManager.selectPeer(WakuFilterSubscribeCodec).valueOr:
      let errorMsg =
        "could not find peer with WakuFilterSubscribeCodec when subscribing"
      error "fail filter process", error = errorMsg, op = $(self.operation)
      return err(errorMsg)

    let pubsubTopic = some(PubsubTopic($self[].pubsubTopic))
    let contentTopics = ($(self[].contentTopics)).split(",").mapIt(ContentTopic(it))

    let subFut = waku.node.filterSubscribe(pubsubTopic, contentTopics, peer)
    if not await subFut.withTimeout(FilterOpTimeout):
      let errorMsg = "filter subscription timed out"
      error "fail filter process", error = errorMsg, op = $(self.operation)
      return err(errorMsg)
  of UNSUBSCRIBE:
    let peer = waku.node.peerManager.selectPeer(WakuFilterSubscribeCodec).valueOr:
      let errorMsg =
        "could not find peer with WakuFilterSubscribeCodec when unsubscribing"
      error "fail filter process", error = errorMsg, op = $(self.operation)
      return err(errorMsg)

    let pubsubTopic = some(PubsubTopic($self[].pubsubTopic))
    let contentTopics = ($(self[].contentTopics)).split(",").mapIt(ContentTopic(it))

    let subFut = waku.node.filterUnsubscribe(pubsubTopic, contentTopics, peer)
    if not await subFut.withTimeout(FilterOpTimeout):
      let errorMsg = "filter un-subscription timed out"
      error "fail filter process", error = errorMsg, op = $(self.operation)
      return err(errorMsg)
  of UNSUBSCRIBE_ALL:
    let peer = waku.node.peerManager.selectPeer(WakuFilterSubscribeCodec).valueOr:
      let errorMsg =
        "could not find peer with WakuFilterSubscribeCodec when unsubscribing all"
      error "fail filter process", error = errorMsg, op = $(self.operation)
      return err(errorMsg)

    let unsubFut = waku.node.filterUnsubscribeAll(peer)

    if not await unsubFut.withTimeout(FilterOpTimeout):
      let errorMsg = "filter un-subscription all timed out"
      error "fail filter process", error = errorMsg, op = $(self.operation)
      return err(errorMsg)

  return ok("")
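Editorial aside: the createShared/destroyShared pair above is the recurring ownership idiom in these request modules: the caller allocates the request on the shared heap, deep-copies any GC-managed or thread-local data into it, and the processing thread frees everything when done. The idiom reduced to its core, in plain Nim (MyRequest and both helpers are hypothetical names; the real code uses the project's .alloc() helper from ../alloc):

type MyRequest = object
  topic: cstring

proc createSharedCString(s: string): cstring =
  # deep copy into shared memory; copying the original cstring pointer
  # would not be safe across threads
  result = cast[cstring](allocShared0(s.len + 1)) # zeroed, so NUL-terminated
  if s.len > 0:
    copyMem(result, unsafeAddr s[0], s.len)

proc newRequest(topic: string): ptr MyRequest =
  result = createShared(MyRequest)
  result.topic = createSharedCString(topic)

proc destroyRequest(self: ptr MyRequest) =
  # the receiving thread frees both the payload and the envelope
  deallocShared(self.topic)
  deallocShared(self)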
@@ -1,109 +0,0 @@
import options
import chronicles, chronos, results
import
  ../../../../waku/waku_core/message/message,
  ../../../../waku/waku_core/codecs,
  ../../../../waku/factory/waku,
  ../../../../waku/waku_core/message,
  ../../../../waku/waku_core/time, # Timestamp
  ../../../../waku/waku_core/topics/pubsub_topic,
  ../../../../waku/waku_lightpush_legacy/client,
  ../../../../waku/waku_lightpush_legacy/common,
  ../../../../waku/node/peer_manager/peer_manager,
  ../../../alloc

type LightpushMsgType* = enum
  PUBLISH

type ThreadSafeWakuMessage* = object
  payload: SharedSeq[byte]
  contentTopic: cstring
  meta: SharedSeq[byte]
  version: uint32
  timestamp: Timestamp
  ephemeral: bool
  when defined(rln):
    proof: SharedSeq[byte]

type LightpushRequest* = object
  operation: LightpushMsgType
  pubsubTopic: cstring
  message: ThreadSafeWakuMessage # only used in 'PUBLISH' requests

proc createShared*(
    T: type LightpushRequest,
    op: LightpushMsgType,
    pubsubTopic: cstring,
    m = WakuMessage(),
): ptr type T =
  var ret = createShared(T)
  ret[].operation = op
  ret[].pubsubTopic = pubsubTopic.alloc()
  ret[].message = ThreadSafeWakuMessage(
    payload: allocSharedSeq(m.payload),
    contentTopic: m.contentTopic.alloc(),
    meta: allocSharedSeq(m.meta),
    version: m.version,
    timestamp: m.timestamp,
    ephemeral: m.ephemeral,
  )
  when defined(rln):
    ret[].message.proof = allocSharedSeq(m.proof)

  return ret

proc destroyShared(self: ptr LightpushRequest) =
  deallocSharedSeq(self[].message.payload)
  deallocShared(self[].message.contentTopic)
  deallocSharedSeq(self[].message.meta)
  when defined(rln):
    deallocSharedSeq(self[].message.proof)

  deallocShared(self)

proc toWakuMessage(m: ThreadSafeWakuMessage): WakuMessage =
  var wakuMessage = WakuMessage()

  wakuMessage.payload = m.payload.toSeq()
  wakuMessage.contentTopic = $m.contentTopic
  wakuMessage.meta = m.meta.toSeq()
  wakuMessage.version = m.version
  wakuMessage.timestamp = m.timestamp
  wakuMessage.ephemeral = m.ephemeral

  when defined(rln):
    wakuMessage.proof = m.proof

  return wakuMessage

proc process*(
    self: ptr LightpushRequest, waku: ptr Waku
): Future[Result[string, string]] {.async.} =
  defer:
    destroyShared(self)

  case self.operation
  of PUBLISH:
    let msg = self.message.toWakuMessage()
    let pubsubTopic = $self.pubsubTopic

    if waku.node.wakuLightpushClient.isNil():
      let errorMsg = "LightpushRequest waku.node.wakuLightpushClient is nil"
      error "PUBLISH failed", error = errorMsg
      return err(errorMsg)

    let peerOpt = waku.node.peerManager.selectPeer(WakuLightPushCodec)
    if peerOpt.isNone():
      let errorMsg = "failed to lightpublish message, no suitable remote peers"
      error "PUBLISH failed", error = errorMsg
      return err(errorMsg)

    let msgHashHex = (
      await waku.node.wakuLegacyLightpushClient.publish(
        pubsubTopic, msg, peer = peerOpt.get()
      )
    ).valueOr:
      error "PUBLISH failed", error = error
      return err($error)

    return ok(msgHashHex)
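Editorial aside: ThreadSafeWakuMessage stores payload and meta as SharedSeq[byte], a project helper from ../alloc, and toWakuMessage copies them back into ordinary seqs. The round-trip that implies, sketched in plain Nim under the assumption that the helper does roughly this (SharedBuf and these procs are illustrative, not the project's implementation):

type SharedBuf = object
  data: ptr UncheckedArray[byte]
  len: int

proc toShared(s: seq[byte]): SharedBuf =
  # deep copy of a GC-managed seq into the shared heap
  result.len = s.len
  if s.len > 0:
    result.data = cast[ptr UncheckedArray[byte]](allocShared(s.len))
    copyMem(result.data, unsafeAddr s[0], s.len)

proc toSeq(b: SharedBuf): seq[byte] =
  # copy back into a seq owned by the receiving thread's GC
  result = newSeq[byte](b.len)
  if b.len > 0:
    copyMem(addr result[0], b.data, b.len)

proc freeShared(b: SharedBuf) =
  if not b.data.isNil:
    deallocShared(b.data)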
@@ -1,168 +0,0 @@
import std/[net, sequtils, strutils]
import chronicles, chronos, stew/byteutils, results
import
  waku/waku_core/message/message,
  waku/factory/[validator_signed, waku],
  tools/confutils/cli_args,
  waku/waku_node,
  waku/waku_core/message,
  waku/waku_core/time, # Timestamp
  waku/waku_core/topics/pubsub_topic,
  waku/waku_core/topics,
  waku/waku_relay/protocol,
  waku/node/peer_manager

import
  ../../../alloc

type RelayMsgType* = enum
  SUBSCRIBE
  UNSUBSCRIBE
  PUBLISH
  NUM_CONNECTED_PEERS
  LIST_CONNECTED_PEERS
    ## to return the list of all connected peers to an specific pubsub topic
  NUM_MESH_PEERS
  LIST_MESH_PEERS
    ## to return the list of only the peers that conform the mesh for a particular pubsub topic
  ADD_PROTECTED_SHARD ## Protects a shard with a public key

type ThreadSafeWakuMessage* = object
  payload: SharedSeq[byte]
  contentTopic: cstring
  meta: SharedSeq[byte]
  version: uint32
  timestamp: Timestamp
  ephemeral: bool
  when defined(rln):
    proof: SharedSeq[byte]

type RelayRequest* = object
  operation: RelayMsgType
  pubsubTopic: cstring
  relayEventCallback: WakuRelayHandler # not used in 'PUBLISH' requests
  message: ThreadSafeWakuMessage # only used in 'PUBLISH' requests
  clusterId: cint # only used in 'ADD_PROTECTED_SHARD' requests
  shardId: cint # only used in 'ADD_PROTECTED_SHARD' requests
  publicKey: cstring # only used in 'ADD_PROTECTED_SHARD' requests

proc createShared*(
    T: type RelayRequest,
    op: RelayMsgType,
    pubsubTopic: cstring = nil,
    relayEventCallback: WakuRelayHandler = nil,
    m = WakuMessage(),
    clusterId: cint = 0,
    shardId: cint = 0,
    publicKey: cstring = nil,
): ptr type T =
  var ret = createShared(T)
  ret[].operation = op
  ret[].pubsubTopic = pubsubTopic.alloc()
  ret[].clusterId = clusterId
  ret[].shardId = shardId
  ret[].publicKey = publicKey.alloc()
  ret[].relayEventCallback = relayEventCallback
  ret[].message = ThreadSafeWakuMessage(
    payload: allocSharedSeq(m.payload),
    contentTopic: m.contentTopic.alloc(),
    meta: allocSharedSeq(m.meta),
    version: m.version,
    timestamp: m.timestamp,
    ephemeral: m.ephemeral,
  )
  when defined(rln):
    ret[].message.proof = allocSharedSeq(m.proof)

  return ret

proc destroyShared(self: ptr RelayRequest) =
  deallocSharedSeq(self[].message.payload)
  deallocShared(self[].message.contentTopic)
  deallocSharedSeq(self[].message.meta)
  when defined(rln):
    deallocSharedSeq(self[].message.proof)
  deallocShared(self[].pubsubTopic)
  deallocShared(self[].publicKey)
  deallocShared(self)

proc toWakuMessage(m: ThreadSafeWakuMessage): WakuMessage =
  var wakuMessage = WakuMessage()

  wakuMessage.payload = m.payload.toSeq()
  wakuMessage.contentTopic = $m.contentTopic
  wakuMessage.meta = m.meta.toSeq()
  wakuMessage.version = m.version
  wakuMessage.timestamp = m.timestamp
  wakuMessage.ephemeral = m.ephemeral

  when defined(rln):
    wakuMessage.proof = m.proof

  return wakuMessage

proc process*(
    self: ptr RelayRequest, waku: ptr Waku
): Future[Result[string, string]] {.async.} =
  defer:
    destroyShared(self)

  if waku.node.wakuRelay.isNil():
    return err("Operation not supported without Waku Relay enabled.")

  case self.operation
  of SUBSCRIBE:
    waku.node.subscribe(
      (kind: SubscriptionKind.PubsubSub, topic: $self.pubsubTopic),
      handler = self.relayEventCallback,
    ).isOkOr:
      error "SUBSCRIBE failed", error
      return err($error)
  of UNSUBSCRIBE:
    waku.node.unsubscribe((kind: SubscriptionKind.PubsubSub, topic: $self.pubsubTopic)).isOkOr:
      error "UNSUBSCRIBE failed", error
      return err($error)
  of PUBLISH:
    let msg = self.message.toWakuMessage()
    let pubsubTopic = $self.pubsubTopic

    (await waku.node.wakuRelay.publish(pubsubTopic, msg)).isOkOr:
      error "PUBLISH failed", error
      return err($error)

    let msgHash = computeMessageHash(pubSubTopic, msg).to0xHex
    return ok(msgHash)
  of NUM_CONNECTED_PEERS:
    let numConnPeers = waku.node.wakuRelay.getNumConnectedPeers($self.pubsubTopic).valueOr:
      error "NUM_CONNECTED_PEERS failed", error
      return err($error)
    return ok($numConnPeers)
  of LIST_CONNECTED_PEERS:
    let connPeers = waku.node.wakuRelay.getConnectedPeers($self.pubsubTopic).valueOr:
      error "LIST_CONNECTED_PEERS failed", error = error
      return err($error)
    ## returns a comma-separated string of peerIDs
    return ok(connPeers.mapIt($it).join(","))
  of NUM_MESH_PEERS:
    let numPeersInMesh = waku.node.wakuRelay.getNumPeersInMesh($self.pubsubTopic).valueOr:
      error "NUM_MESH_PEERS failed", error = error
      return err($error)
    return ok($numPeersInMesh)
  of LIST_MESH_PEERS:
    let meshPeers = waku.node.wakuRelay.getPeersInMesh($self.pubsubTopic).valueOr:
      error "LIST_MESH_PEERS failed", error = error
      return err($error)
    ## returns a comma-separated string of peerIDs
    return ok(meshPeers.mapIt($it).join(","))
  of ADD_PROTECTED_SHARD:
    try:
      let relayShard =
        RelayShard(clusterId: uint16(self.clusterId), shardId: uint16(self.shardId))
      let protectedShard =
        ProtectedShard.parseCmdArg($relayShard & ":" & $self.publicKey)
      waku.node.wakuRelay.addSignedShardsValidator(
        @[protectedShard], uint16(self.clusterId)
      )
    except ValueError:
      return err(getCurrentExceptionMsg())
  return ok("")
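Editorial aside: in the PUBLISH branch, computeMessageHash(pubSubTopic, msg) spells the binding differently from the `let pubsubTopic` above it. That is not a bug: Nim identifiers are style-insensitive after the first character, so both spellings resolve to the same symbol. A two-line illustration:

let pubsubTopic = "demo"
echo pubSubTopic # prints "demo"; same identifier as pubsubTopic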
@@ -1,104 +0,0 @@
## This file contains the base message request type that will be handled.
## The requests are created by the main thread and processed by
## the Waku Thread.

import std/json, results
import chronos, chronos/threadsync
import
  ../../waku/factory/waku,
  ../ffi_types,
  ./requests/node_lifecycle_request,
  ./requests/peer_manager_request,
  ./requests/protocols/relay_request,
  ./requests/protocols/store_request,
  ./requests/protocols/lightpush_request,
  ./requests/protocols/filter_request,
  ./requests/debug_node_request,
  ./requests/discovery_request,
  ./requests/ping_request

type RequestType* {.pure.} = enum
  LIFECYCLE
  PEER_MANAGER
  PING
  RELAY
  STORE
  DEBUG
  DISCOVERY
  LIGHTPUSH
  FILTER

type WakuThreadRequest* = object
  reqType: RequestType
  reqContent: pointer
  callback: WakuCallBack
  userData: pointer

proc createShared*(
    T: type WakuThreadRequest,
    reqType: RequestType,
    reqContent: pointer,
    callback: WakuCallBack,
    userData: pointer,
): ptr type T =
  var ret = createShared(T)
  ret[].reqType = reqType
  ret[].reqContent = reqContent
  ret[].callback = callback
  ret[].userData = userData
  return ret

proc handleRes[T: string | void](
    res: Result[T, string], request: ptr WakuThreadRequest
) =
  ## Handles the Result responses, which can either be Result[string, string] or
  ## Result[void, string].

  defer:
    deallocShared(request)

  if res.isErr():
    foreignThreadGc:
      let msg = "libwaku error: handleRes fireSyncRes error: " & $res.error
      request[].callback(
        RET_ERR, unsafeAddr msg[0], cast[csize_t](len(msg)), request[].userData
      )
    return

  foreignThreadGc:
    var msg: cstring = ""
    when T is string:
      msg = res.get().cstring()
    request[].callback(
      RET_OK, unsafeAddr msg[0], cast[csize_t](len(msg)), request[].userData
    )
  return

proc process*(
    T: type WakuThreadRequest, request: ptr WakuThreadRequest, waku: ptr Waku
) {.async.} =
  let retFut =
    case request[].reqType
    of LIFECYCLE:
      cast[ptr NodeLifecycleRequest](request[].reqContent).process(waku)
    of PEER_MANAGER:
      cast[ptr PeerManagementRequest](request[].reqContent).process(waku[])
    of PING:
      cast[ptr PingRequest](request[].reqContent).process(waku)
    of RELAY:
      cast[ptr RelayRequest](request[].reqContent).process(waku)
    of STORE:
      cast[ptr StoreRequest](request[].reqContent).process(waku)
    of DEBUG:
      cast[ptr DebugNodeRequest](request[].reqContent).process(waku[])
    of DISCOVERY:
      cast[ptr DiscoveryRequest](request[].reqContent).process(waku)
    of LIGHTPUSH:
      cast[ptr LightpushRequest](request[].reqContent).process(waku)
    of FILTER:
      cast[ptr FilterRequest](request[].reqContent).process(waku)

  handleRes(await retFut, request)

proc `$`*(self: WakuThreadRequest): string =
  return $self.reqType
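Editorial aside: the dispatch above in miniature. One tagged pointer crosses the thread boundary and is cast back to its concrete request type before processing. All names in this sketch are hypothetical:

type
  ReqKind = enum
    ECHO

  ThreadRequest = object
    kind: ReqKind      # the tag selects which concrete type `content` holds
    content: pointer

  EchoRequest = object
    msg: string # illustration only; the real requests hold shared-heap data

proc process(req: ThreadRequest): string =
  case req.kind
  of ECHO:
    # safe only because `kind` and the pointee type are kept in sync
    cast[ptr EchoRequest](req.content).msg

var echoReq = EchoRequest(msg: "hello")
let request = ThreadRequest(kind: ECHO, content: addr echoReq)
assert process(request) == "hello"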
@@ -1,12 +0,0 @@
{ pkgs ? import <nixpkgs> { } }:

let
  tools = pkgs.callPackage ./tools.nix {};
  sourceFile = ../vendor/nimbus-build-system/vendor/Nim/koch.nim;
in pkgs.fetchFromGitHub {
  owner = "nim-lang";
  repo = "atlas";
  rev = tools.findKeyValue "^ +AtlasStableCommit = \"([a-f0-9]+)\"$" sourceFile;
  # WARNING: Requires manual updates when Nim compiler version changes.
  hash = "sha256-G1TZdgbRPSgxXZ3VsBP2+XFCLHXVb3an65MuQx67o/k=";
}
@@ -6,7 +6,7 @@ let
 in pkgs.fetchFromGitHub {
   owner = "nim-lang";
   repo = "checksums";
-  rev = tools.findKeyValue "^ +ChecksumsStableCommit = \"([a-f0-9]+)\"$" sourceFile;
+  rev = tools.findKeyValue "^ +ChecksumsStableCommit = \"([a-f0-9]+)\".*$" sourceFile;
   # WARNING: Requires manual updates when Nim compiler version changes.
   hash = "sha256-Bm5iJoT2kAvcTexiLMFBa9oU5gf7d4rWjo3OiN7obWQ=";
 }
@@ -9,9 +9,8 @@
   stableSystems ? [
     "x86_64-linux" "aarch64-linux"
   ],
-  androidArch,
-  abidir,
-  zerokitPkg,
+  abidir ? null,
+  zerokitRln,
 }:

 assert pkgs.lib.assertMsg ((src.submodules or true) == true)
@@ -51,7 +50,7 @@ in stdenv.mkDerivation rec {
     cmake
     which
     lsb-release
-    zerokitPkg
+    zerokitRln
     nim-unwrapped-2_0
     fakeGit
     fakeCargo
@@ -84,27 +83,24 @@ in stdenv.mkDerivation rec {
     pushd vendor/nimbus-build-system/vendor/Nim
     mkdir dist
     cp -r ${callPackage ./nimble.nix {}} dist/nimble
-    chmod 777 -R dist/nimble
-    mkdir -p dist/nimble/dist
-    cp -r ${callPackage ./checksums.nix {}} dist/checksums # need both
-    cp -r ${callPackage ./checksums.nix {}} dist/nimble/dist/checksums
-    cp -r ${callPackage ./atlas.nix {}} dist/atlas
-    chmod 777 -R dist/atlas
-    mkdir dist/atlas/dist
-    cp -r ${callPackage ./sat.nix {}} dist/nimble/dist/sat
-    cp -r ${callPackage ./sat.nix {}} dist/atlas/dist/sat
+    cp -r ${callPackage ./checksums.nix {}} dist/checksums
     cp -r ${callPackage ./csources.nix {}} csources_v2
     chmod 777 -R dist/nimble csources_v2
     popd
-    mkdir -p vendor/zerokit/target/${androidArch}/release
-    cp ${zerokitPkg}/librln.so vendor/zerokit/target/${androidArch}/release/
+    cp -r ${zerokitRln}/target vendor/zerokit/
+    find vendor/zerokit/target
+    # FIXME
+    cp vendor/zerokit/target/*/release/librln.a librln_v${zerokitRln.version}.a
   '';

-  installPhase = ''
+  installPhase = if abidir != null then ''
     mkdir -p $out/jni
     cp -r ./build/android/${abidir}/* $out/jni/
     echo '${androidManifest}' > $out/jni/AndroidManifest.xml
     cd $out && zip -r libwaku.aar *
+  '' else ''
+    mkdir -p $out/bin
+    cp -r build/* $out/bin
   '';

   meta = with pkgs.lib; {
@@ -6,7 +6,7 @@ let
 in pkgs.fetchFromGitHub {
   owner = "nim-lang";
   repo = "nimble";
-  rev = tools.findKeyValue "^ +NimbleStableCommit = \"([a-f0-9]+)\".+" sourceFile;
+  rev = tools.findKeyValue "^ +NimbleStableCommit = \"([a-f0-9]+)\".*$" sourceFile;
   # WARNING: Requires manual updates when Nim compiler version changes.
   hash = "sha256-MVHf19UbOWk8Zba2scj06PxdYYOJA6OXrVyDQ9Ku6Us=";
 }
@@ -6,7 +6,8 @@ let
 in pkgs.fetchFromGitHub {
   owner = "nim-lang";
   repo = "sat";
-  rev = tools.findKeyValue "^ +SatStableCommit = \"([a-f0-9]+)\"$" sourceFile;
+  rev = tools.findKeyValue "^ +SatStableCommit = \"([a-f0-9]+)\".*$" sourceFile;
+  # WARNING: Requires manual updates when Nim compiler version changes.
   # WARNING: Requires manual updates when Nim compiler version changes.
   hash = "sha256-JFrrSV+mehG0gP7NiQ8hYthL0cjh44HNbXfuxQNhq7c=";
 }
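Editorial aside: the three rev changes above all relax the same pattern. Dropping the hard closing anchor ("$ or ".+ after the hex capture) in favour of ".*$ lets the match survive trailing text after the closing quote, such as a comment. A small Nim illustration of the difference, using std/re (the line content is hypothetical):

import std/re

let line = """  ChecksumsStableCommit = "ab12cd34"  # hypothetical trailing comment"""
var captures: array[1, string]

# Old anchor: the line must end right after the closing quote, so the
# trailing comment makes the match fail.
doAssert not line.match(re"^ +ChecksumsStableCommit = ""([a-f0-9]+)""$", captures)

# New anchor: anything may follow the closing quote.
doAssert line.match(re"^ +ChecksumsStableCommit = ""([a-f0-9]+)"".*$", captures)
doAssert captures[0] == "ab12cd34"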
@@ -2,14 +2,51 @@

 # Install Anvil

-if ! command -v anvil &> /dev/null; then
+REQUIRED_FOUNDRY_VERSION="$1"
+
+if command -v anvil &> /dev/null; then
+  # Foundry is already installed; check the current version.
+  CURRENT_FOUNDRY_VERSION=$(anvil --version 2>/dev/null | awk '{print $2}')
+
+  if [ -n "$CURRENT_FOUNDRY_VERSION" ]; then
+    # Compare CURRENT_FOUNDRY_VERSION < REQUIRED_FOUNDRY_VERSION using sort -V
+    lower_version=$(printf '%s\n%s\n' "$CURRENT_FOUNDRY_VERSION" "$REQUIRED_FOUNDRY_VERSION" | sort -V | head -n1)
+
+    if [ "$lower_version" != "$REQUIRED_FOUNDRY_VERSION" ]; then
+      echo "Anvil is already installed with version $CURRENT_FOUNDRY_VERSION, which is older than the required $REQUIRED_FOUNDRY_VERSION. Please update Foundry manually if needed."
+    fi
+  fi
+else
   BASE_DIR="${XDG_CONFIG_HOME:-$HOME}"
   FOUNDRY_DIR="${FOUNDRY_DIR:-"$BASE_DIR/.foundry"}"
   FOUNDRY_BIN_DIR="$FOUNDRY_DIR/bin"

+  echo "Installing Foundry..."
   curl -L https://foundry.paradigm.xyz | bash
-  # Extract the source path from the download result
-  echo "foundryup_path: $FOUNDRY_BIN_DIR"
-  # run foundryup
-  $FOUNDRY_BIN_DIR/foundryup
+  # Add Foundry to PATH for this script session
+  export PATH="$FOUNDRY_BIN_DIR:$PATH"
+
+  # Verify foundryup is available
+  if ! command -v foundryup >/dev/null 2>&1; then
+    echo "Error: foundryup installation failed or not found in $FOUNDRY_BIN_DIR"
+    exit 1
+  fi
+
+  # Run foundryup to install the required version
+  if [ -n "$REQUIRED_FOUNDRY_VERSION" ]; then
+    echo "Installing Foundry tools version $REQUIRED_FOUNDRY_VERSION..."
+    foundryup --install "$REQUIRED_FOUNDRY_VERSION"
+  else
+    echo "Installing latest Foundry tools..."
+    foundryup
+  fi
+
+  # Verify anvil was installed
+  if ! command -v anvil >/dev/null 2>&1; then
+    echo "Error: anvil installation failed"
+    exit 1
+  fi
+
+  echo "Anvil successfully installed: $(anvil --version)"
 fi
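Editorial aside: this script and the pnpm one below use the same sort -V idiom: print both versions, sort them version-aware, and if the required version is not the lower of the two, the current install is too old. The same comparison sketched in Nim, simplified to purely numeric dotted versions (real Foundry/pnpm version strings may carry suffixes this does not handle):

import std/[sequtils, strutils]

proc versionLess(a, b: string): bool =
  # numeric, component-wise comparison; missing components count as 0
  let pa = a.split('.').mapIt(parseInt(it))
  let pb = b.split('.').mapIt(parseInt(it))
  for i in 0 ..< max(pa.len, pb.len):
    let x = if i < pa.len: pa[i] else: 0
    let y = if i < pb.len: pb[i] else: 0
    if x != y:
      return x < y
  false

assert versionLess("1.9.0", "1.10.0") # numeric, unlike lexicographic ordering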
@@ -1,8 +1,37 @@
 #!/usr/bin/env bash

 # Install pnpm
-if ! command -v pnpm &> /dev/null; then
-  echo "pnpm is not installed, installing it now..."
-  npm i pnpm --global
+REQUIRED_PNPM_VERSION="$1"
+
+if command -v pnpm &> /dev/null; then
+  # pnpm is already installed; check the current version.
+  CURRENT_PNPM_VERSION=$(pnpm --version 2>/dev/null)
+
+  if [ -n "$CURRENT_PNPM_VERSION" ]; then
+    # Compare CURRENT_PNPM_VERSION < REQUIRED_PNPM_VERSION using sort -V
+    lower_version=$(printf '%s\n%s\n' "$CURRENT_PNPM_VERSION" "$REQUIRED_PNPM_VERSION" | sort -V | head -n1)
+
+    if [ "$lower_version" != "$REQUIRED_PNPM_VERSION" ]; then
+      echo "pnpm is already installed with version $CURRENT_PNPM_VERSION, which is older than the required $REQUIRED_PNPM_VERSION. Please update pnpm manually if needed."
+    fi
+  fi
+else
+  # Install pnpm using npm
+  if [ -n "$REQUIRED_PNPM_VERSION" ]; then
+    echo "Installing pnpm version $REQUIRED_PNPM_VERSION..."
+    npm install -g pnpm@$REQUIRED_PNPM_VERSION
+  else
+    echo "Installing latest pnpm..."
+    npm install -g pnpm
+  fi
+
+  # Verify pnpm was installed
+  if ! command -v pnpm >/dev/null 2>&1; then
+    echo "Error: pnpm installation failed"
+    exit 1
+  fi
+
+  echo "pnpm successfully installed: $(pnpm --version)"
 fi
@@ -1,7 +1,9 @@
 #!/usr/bin/env bash

 # Install Anvil
-./scripts/install_anvil.sh
+FOUNDRY_VERSION="$1"
+./scripts/install_anvil.sh "$FOUNDRY_VERSION"

-#Install pnpm
-./scripts/install_pnpm.sh
+# Install pnpm
+PNPM_VERSION="$2"
+./scripts/install_pnpm.sh "$PNPM_VERSION"
scripts/libwaku_windows_setup.mk (new file, 53 lines)
@@ -0,0 +1,53 @@
# ---------------------------------------------------------
# Windows Setup Makefile
# ---------------------------------------------------------

# Extend PATH (Make preserves environment variables)
export PATH := /c/msys64/usr/bin:/c/msys64/mingw64/bin:/c/msys64/usr/lib:/c/msys64/mingw64/lib:$(PATH)

# Tools required
DEPS = gcc g++ make cmake cargo upx rustc python

# Default target
.PHONY: windows-setup
windows-setup: check-deps update-submodules create-tmp libunwind miniupnpc libnatpmp
	@echo "Windows setup completed successfully!"

.PHONY: check-deps
check-deps:
	@echo "Checking libwaku build dependencies..."
	@for dep in $(DEPS); do \
		if ! which $$dep >/dev/null 2>&1; then \
			echo "✗ Missing dependency: $$dep"; \
			exit 1; \
		else \
			echo "✓ Found: $$dep"; \
		fi; \
	done

.PHONY: update-submodules
update-submodules:
	@echo "Updating libwaku git submodules..."
	git submodule update --init --recursive

.PHONY: create-tmp
create-tmp:
	@echo "Creating tmp directory..."
	mkdir -p tmp

.PHONY: libunwind
libunwind:
	@echo "Building libunwind..."
	cd vendor/nim-libbacktrace && make all V=1

.PHONY: miniupnpc
miniupnpc:
	@echo "Building miniupnpc..."
	cd vendor/nim-nat-traversal/vendor/miniupnp/miniupnpc && \
	make -f Makefile.mingw CC=gcc CXX=g++ libminiupnpc.a V=1

.PHONY: libnatpmp
libnatpmp:
	@echo "Building libnatpmp..."
	cd vendor/nim-nat-traversal/vendor/libnatpmp-upstream && \
	make CC="gcc -fPIC -D_WIN32_WINNT=0x0600 -DNATPMP_STATICLIB" libnatpmp.a V=1
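Editorial note: given the default target declared above, this is presumably driven as `make -f scripts/libwaku_windows_setup.mk` (or with an explicit `windows-setup` target) from the repository root under MSYS2; the exact invocation is an assumption, since the CI or documentation wiring is not shown in this diff.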
@@ -1,6 +1,6 @@
 log-level = "INFO"
 relay = true
-#mix = true
+mix = true
 filter = true
 store = false
 lightpush = true
@@ -18,7 +18,7 @@ num-shards-in-network = 1
 shard = [0]
 agent-string = "nwaku-mix"
 nodekey = "f98e3fba96c32e8d1967d460f1b79457380e1a895f7971cecc8528abe733781a"
-#mixkey = "a87db88246ec0eedda347b9b643864bee3d6933eb15ba41e6d58cb678d813258"
+mixkey = "a87db88246ec0eedda347b9b643864bee3d6933eb15ba41e6d58cb678d813258"
 rendezvous = true
 listen-address = "127.0.0.1"
 nat = "extip:127.0.0.1"
@@ -1 +1,2 @@
-../../build/chat2mix --cluster-id=2 --num-shards-in-network=1 --shard=0 --servicenode="/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o" --log-level=TRACE --mixnode="/ip4/127.0.0.1/tcp/60002/p2p/16Uiu2HAmLtKaFaSWDohToWhWUZFLtqzYZGPFuXwKrojFVF6az5UF:9231e86da6432502900a84f867004ce78632ab52cd8e30b1ec322cd795710c2a" --mixnode="/ip4/127.0.0.1/tcp/60003/p2p/16Uiu2HAmTEDHwAziWUSz6ZE23h5vxG2o4Nn7GazhMor4bVuMXTrA:275cd6889e1f29ca48e5b9edb800d1a94f49f13d393a0ecf1a07af753506de6c" --mixnode="/ip4/127.0.0.1/tcp/60004/p2p/16Uiu2HAmPwRKZajXtfb1Qsv45VVfRZgK3ENdfmnqzSrVm3BczF6f:e0ed594a8d506681be075e8e23723478388fb182477f7a469309a25e7076fc18" --mixnode="/ip4/127.0.0.1/tcp/60005/p2p/16Uiu2HAmRhxmCHBYdXt1RibXrjAUNJbduAhzaTHwFCZT4qWnqZAu:8fd7a1a7c19b403d231452a9b1ea40eb1cc76f455d918ef8980e7685f9eeeb1f"
+../../build/chat2mix --cluster-id=2 --num-shards-in-network=1 --shard=0 --servicenode="/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o" --log-level=TRACE
+#--mixnode="/ip4/127.0.0.1/tcp/60002/p2p/16Uiu2HAmLtKaFaSWDohToWhWUZFLtqzYZGPFuXwKrojFVF6az5UF:9231e86da6432502900a84f867004ce78632ab52cd8e30b1ec322cd795710c2a" --mixnode="/ip4/127.0.0.1/tcp/60003/p2p/16Uiu2HAmTEDHwAziWUSz6ZE23h5vxG2o4Nn7GazhMor4bVuMXTrA:275cd6889e1f29ca48e5b9edb800d1a94f49f13d393a0ecf1a07af753506de6c" --mixnode="/ip4/127.0.0.1/tcp/60004/p2p/16Uiu2HAmPwRKZajXtfb1Qsv45VVfRZgK3ENdfmnqzSrVm3BczF6f:e0ed594a8d506681be075e8e23723478388fb182477f7a469309a25e7076fc18" --mixnode="/ip4/127.0.0.1/tcp/60005/p2p/16Uiu2HAmRhxmCHBYdXt1RibXrjAUNJbduAhzaTHwFCZT4qWnqZAu:8fd7a1a7c19b403d231452a9b1ea40eb1cc76f455d918ef8980e7685f9eeeb1f"

@@ -1 +1,2 @@
-../../build/chat2mix --cluster-id=2 --num-shards-in-network=1 --shard=0 --servicenode="/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o" --log-level=TRACE --mixnode="/ip4/127.0.0.1/tcp/60002/p2p/16Uiu2HAmLtKaFaSWDohToWhWUZFLtqzYZGPFuXwKrojFVF6az5UF:9231e86da6432502900a84f867004ce78632ab52cd8e30b1ec322cd795710c2a" --mixnode="/ip4/127.0.0.1/tcp/60003/p2p/16Uiu2HAmTEDHwAziWUSz6ZE23h5vxG2o4Nn7GazhMor4bVuMXTrA:275cd6889e1f29ca48e5b9edb800d1a94f49f13d393a0ecf1a07af753506de6c" --mixnode="/ip4/127.0.0.1/tcp/60004/p2p/16Uiu2HAmPwRKZajXtfb1Qsv45VVfRZgK3ENdfmnqzSrVm3BczF6f:e0ed594a8d506681be075e8e23723478388fb182477f7a469309a25e7076fc18" --mixnode="/ip4/127.0.0.1/tcp/60005/p2p/16Uiu2HAmRhxmCHBYdXt1RibXrjAUNJbduAhzaTHwFCZT4qWnqZAu:8fd7a1a7c19b403d231452a9b1ea40eb1cc76f455d918ef8980e7685f9eeeb1f"
+../../build/chat2mix --cluster-id=2 --num-shards-in-network=1 --shard=0 --servicenode="/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o" --log-level=TRACE
+#--mixnode="/ip4/127.0.0.1/tcp/60002/p2p/16Uiu2HAmLtKaFaSWDohToWhWUZFLtqzYZGPFuXwKrojFVF6az5UF:9231e86da6432502900a84f867004ce78632ab52cd8e30b1ec322cd795710c2a" --mixnode="/ip4/127.0.0.1/tcp/60003/p2p/16Uiu2HAmTEDHwAziWUSz6ZE23h5vxG2o4Nn7GazhMor4bVuMXTrA:275cd6889e1f29ca48e5b9edb800d1a94f49f13d393a0ecf1a07af753506de6c" --mixnode="/ip4/127.0.0.1/tcp/60004/p2p/16Uiu2HAmPwRKZajXtfb1Qsv45VVfRZgK3ENdfmnqzSrVm3BczF6f:e0ed594a8d506681be075e8e23723478388fb182477f7a469309a25e7076fc18" --mixnode="/ip4/127.0.0.1/tcp/60005/p2p/16Uiu2HAmRhxmCHBYdXt1RibXrjAUNJbduAhzaTHwFCZT4qWnqZAu:8fd7a1a7c19b403d231452a9b1ea40eb1cc76f455d918ef8980e7685f9eeeb1f"
@@ -9,4 +9,7 @@ import
   ./test_tokenbucket,
   ./test_requestratelimiter,
   ./test_ratelimit_setting,
-  ./test_timed_map
+  ./test_timed_map,
+  ./test_event_broker,
+  ./test_request_broker,
+  ./test_multi_request_broker
tests/common/test_event_broker.nim (new file, 125 lines)
@@ -0,0 +1,125 @@
import chronos
import std/sequtils
import testutils/unittests

import waku/common/broker/event_broker

EventBroker:
  type SampleEvent = object
    value*: int
    label*: string

EventBroker:
  type BinaryEvent = object
    flag*: bool

EventBroker:
  type RefEvent = ref object
    payload*: seq[int]

template waitForListeners() =
  waitFor sleepAsync(1.milliseconds)

suite "EventBroker":
  test "delivers events to all listeners":
    var seen: seq[(int, string)] = @[]

    discard SampleEvent.listen(
      proc(evt: SampleEvent): Future[void] {.async: (raises: []).} =
        seen.add((evt.value, evt.label))
    )

    discard SampleEvent.listen(
      proc(evt: SampleEvent): Future[void] {.async: (raises: []).} =
        seen.add((evt.value * 2, evt.label & "!"))
    )

    let evt = SampleEvent(value: 5, label: "hi")
    SampleEvent.emit(evt)
    waitForListeners()

    check seen.len == 2
    check seen.anyIt(it == (5, "hi"))
    check seen.anyIt(it == (10, "hi!"))

    SampleEvent.dropAllListeners()

  test "forget removes a single listener":
    var counter = 0

    let handleA = SampleEvent.listen(
      proc(evt: SampleEvent): Future[void] {.async: (raises: []).} =
        inc counter
    )

    let handleB = SampleEvent.listen(
      proc(evt: SampleEvent): Future[void] {.async: (raises: []).} =
        inc(counter, 2)
    )

    SampleEvent.dropListener(handleA.get())
    let eventVal = SampleEvent(value: 1, label: "one")
    SampleEvent.emit(eventVal)
    waitForListeners()
    check counter == 2

    SampleEvent.dropAllListeners()

  test "forgetAll clears every listener":
    var triggered = false

    let handle1 = SampleEvent.listen(
      proc(evt: SampleEvent): Future[void] {.async: (raises: []).} =
        triggered = true
    )
    let handle2 = SampleEvent.listen(
      proc(evt: SampleEvent): Future[void] {.async: (raises: []).} =
        discard
    )

    SampleEvent.dropAllListeners()
    SampleEvent.emit(42, "noop")
    SampleEvent.emit(label = "noop", value = 42)
    waitForListeners()
    check not triggered

    let freshHandle = SampleEvent.listen(
      proc(evt: SampleEvent): Future[void] {.async: (raises: []).} =
        discard
    )
    check freshHandle.get().id > 0'u64
    SampleEvent.dropListener(freshHandle.get())

  test "broker helpers operate via typedesc":
    var toggles: seq[bool] = @[]

    let handle = BinaryEvent.listen(
      proc(evt: BinaryEvent): Future[void] {.async: (raises: []).} =
        toggles.add(evt.flag)
    )

    BinaryEvent(flag: true).emit()
    waitForListeners()
    let binaryEvent = BinaryEvent(flag: false)
    BinaryEvent.emit(binaryEvent)
    waitForListeners()

    check toggles == @[true, false]
    BinaryEvent.dropAllListeners()

  test "ref typed event":
    var counter: int = 0

    let handle = RefEvent.listen(
      proc(evt: RefEvent): Future[void] {.async: (raises: []).} =
        for n in evt.payload:
          counter += n
    )

    RefEvent(payload: @[1, 2, 3]).emit()
    waitForListeners()
    RefEvent.emit(payload = @[4, 5, 6])
    waitForListeners()

    check counter == 21 # 1+2+3 + 4+5+6
    RefEvent.dropAllListeners()
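Editorial aside: distilled from the tests above (the tests, not separate documentation, are the source for this usage, so treat it as a sketch): declare an event type inside an EventBroker block, attach listeners via typedesc, then emit either an event object or keyword fields. TickEvent is a hypothetical name; the import path is taken from the tests.

import chronos
import waku/common/broker/event_broker

EventBroker:
  type TickEvent = object
    n*: int

discard TickEvent.listen(
  proc(evt: TickEvent): Future[void] {.async: (raises: []).} =
    echo "tick ", evt.n
)

TickEvent(n: 1).emit()   # object-style emit, as in the tests
TickEvent.emit(n = 2)    # keyword-field emit, also exercised by the tests

waitFor sleepAsync(1.milliseconds) # give the async listeners a chance to run
TickEvent.dropAllListeners()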
tests/common/test_multi_request_broker.nim (new file, 234 lines)
@@ -0,0 +1,234 @@
{.used.}

import testutils/unittests
import chronos
import std/sequtils
import std/strutils

import waku/common/broker/multi_request_broker

MultiRequestBroker:
  type NoArgResponse = object
    label*: string

  proc signatureFetch*(): Future[Result[NoArgResponse, string]] {.async.}

MultiRequestBroker:
  type ArgResponse = object
    id*: string

  proc signatureFetch*(
    suffix: string, numsuffix: int
  ): Future[Result[ArgResponse, string]] {.async.}

MultiRequestBroker:
  type DualResponse = ref object
    note*: string
    suffix*: string

  proc signatureBase*(): Future[Result[DualResponse, string]] {.async.}
  proc signatureWithInput*(
    suffix: string
  ): Future[Result[DualResponse, string]] {.async.}

suite "MultiRequestBroker":
  test "aggregates zero-argument providers":
    discard NoArgResponse.setProvider(
      proc(): Future[Result[NoArgResponse, string]] {.async.} =
        ok(NoArgResponse(label: "one"))
    )

    discard NoArgResponse.setProvider(
      proc(): Future[Result[NoArgResponse, string]] {.async.} =
        discard catch:
          await sleepAsync(1.milliseconds)
        ok(NoArgResponse(label: "two"))
    )

    let responses = waitFor NoArgResponse.request()
    check responses.get().len == 2
    check responses.get().anyIt(it.label == "one")
    check responses.get().anyIt(it.label == "two")

    NoArgResponse.clearProviders()

  test "aggregates argument providers":
    discard ArgResponse.setProvider(
      proc(suffix: string, num: int): Future[Result[ArgResponse, string]] {.async.} =
        ok(ArgResponse(id: suffix & "-a-" & $num))
    )

    discard ArgResponse.setProvider(
      proc(suffix: string, num: int): Future[Result[ArgResponse, string]] {.async.} =
        ok(ArgResponse(id: suffix & "-b-" & $num))
    )

    let keyed = waitFor ArgResponse.request("topic", 1)
    check keyed.get().len == 2
    check keyed.get().anyIt(it.id == "topic-a-1")
    check keyed.get().anyIt(it.id == "topic-b-1")

    ArgResponse.clearProviders()

  test "clearProviders resets both provider lists":
    discard DualResponse.setProvider(
      proc(): Future[Result[DualResponse, string]] {.async.} =
        ok(DualResponse(note: "base", suffix: ""))
    )

    discard DualResponse.setProvider(
      proc(suffix: string): Future[Result[DualResponse, string]] {.async.} =
        ok(DualResponse(note: "base" & suffix, suffix: suffix))
    )

    let noArgs = waitFor DualResponse.request()
    check noArgs.get().len == 1

    let param = waitFor DualResponse.request("-extra")
    check param.get().len == 1
    check param.get()[0].suffix == "-extra"

    DualResponse.clearProviders()

    let emptyNoArgs = waitFor DualResponse.request()
    check emptyNoArgs.get().len == 0

    let emptyWithArgs = waitFor DualResponse.request("-extra")
    check emptyWithArgs.get().len == 0

  test "request returns empty seq when no providers registered":
    let empty = waitFor NoArgResponse.request()
    check empty.get().len == 0

  test "failed providers will fail the request":
    NoArgResponse.clearProviders()
    discard NoArgResponse.setProvider(
      proc(): Future[Result[NoArgResponse, string]] {.async.} =
        err("boom")
    )

    discard NoArgResponse.setProvider(
      proc(): Future[Result[NoArgResponse, string]] {.async.} =
        ok(NoArgResponse(label: "survivor"))
    )

    let filtered = waitFor NoArgResponse.request()
    check filtered.isErr()

    NoArgResponse.clearProviders()

  test "deduplicates identical zero-argument providers":
    NoArgResponse.clearProviders()
    var invocations = 0
    let sharedHandler = proc(): Future[Result[NoArgResponse, string]] {.async.} =
      inc invocations
      ok(NoArgResponse(label: "dup"))

    let first = NoArgResponse.setProvider(sharedHandler)
    let second = NoArgResponse.setProvider(sharedHandler)

    check first.get().id == second.get().id
    check first.get().kind == second.get().kind

    let dupResponses = waitFor NoArgResponse.request()
    check dupResponses.get().len == 1
    check invocations == 1

    NoArgResponse.clearProviders()

  test "removeProvider deletes registered handlers":
    var removedCalled = false
    var keptCalled = false

    let removable = NoArgResponse.setProvider(
      proc(): Future[Result[NoArgResponse, string]] {.async.} =
        removedCalled = true
        ok(NoArgResponse(label: "removed"))
    )

    discard NoArgResponse.setProvider(
      proc(): Future[Result[NoArgResponse, string]] {.async.} =
        keptCalled = true
        ok(NoArgResponse(label: "kept"))
    )

    NoArgResponse.removeProvider(removable.get())

    let afterRemoval = (waitFor NoArgResponse.request()).valueOr:
      assert false, "request failed"
      @[]
    check afterRemoval.len == 1
    check afterRemoval[0].label == "kept"
    check not removedCalled
    check keptCalled

    NoArgResponse.clearProviders()

  test "removeProvider works for argument signatures":
    var invoked: seq[string] = @[]

    discard ArgResponse.setProvider(
      proc(suffix: string, num: int): Future[Result[ArgResponse, string]] {.async.} =
        invoked.add("first" & suffix)
        ok(ArgResponse(id: suffix & "-one-" & $num))
    )

    let handle = ArgResponse.setProvider(
      proc(suffix: string, num: int): Future[Result[ArgResponse, string]] {.async.} =
        invoked.add("second" & suffix)
        ok(ArgResponse(id: suffix & "-two-" & $num))
    )

    ArgResponse.removeProvider(handle.get())

    let single = (waitFor ArgResponse.request("topic", 1)).valueOr:
      assert false, "request failed"
      @[]
    check single.len == 1
    check single[0].id == "topic-one-1"
    check invoked == @["firsttopic"]

    ArgResponse.clearProviders()

  test "catches exception from providers and report error":
    let firstHandler = NoArgResponse.setProvider(
      proc(): Future[Result[NoArgResponse, string]] {.async.} =
        raise newException(ValueError, "first handler raised")
        ok(NoArgResponse(label: "any"))
    )

    discard NoArgResponse.setProvider(
      proc(): Future[Result[NoArgResponse, string]] {.async.} =
        ok(NoArgResponse(label: "just ok"))
    )

    let afterException = waitFor NoArgResponse.request()
    check afterException.isErr()
    check afterException.error().contains("first handler raised")

    NoArgResponse.clearProviders()

  test "ref providers returning nil fail request":
    DualResponse.clearProviders()

    discard DualResponse.setProvider(
      proc(): Future[Result[DualResponse, string]] {.async.} =
        let nilResponse: DualResponse = nil
        ok(nilResponse)
    )

    let zeroArg = waitFor DualResponse.request()
    check zeroArg.isErr()

    DualResponse.clearProviders()

    discard DualResponse.setProvider(
      proc(suffix: string): Future[Result[DualResponse, string]] {.async.} =
        let nilResponse: DualResponse = nil
        ok(nilResponse)
    )

    let withInput = waitFor DualResponse.request("-extra")
    check withInput.isErr()

    DualResponse.clearProviders()
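Editorial aside: as the tests above show, a MultiRequestBroker fans one request out to every registered provider and aggregates the Results into a seq, failing the whole request if any provider fails. A minimal usage sketch distilled from those tests (PeerCount is a hypothetical type; the import path is taken from the tests):

import chronos
import waku/common/broker/multi_request_broker

MultiRequestBroker:
  type PeerCount = object
    n*: int

  proc signatureFetch*(): Future[Result[PeerCount, string]] {.async.}

discard PeerCount.setProvider(
  proc(): Future[Result[PeerCount, string]] {.async.} =
    ok(PeerCount(n: 3))
)

# request() aggregates one answer per provider: Result[seq[PeerCount], string]
let all = waitFor PeerCount.request()
assert all.get().len == 1
PeerCount.clearProviders()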
tests/common/test_request_broker.nim (new file, 502 lines; the mirror truncates partway through)
@@ -0,0 +1,502 @@
{.used.}

import testutils/unittests
import chronos
import std/strutils

import waku/common/broker/request_broker

## ---------------------------------------------------------------------------
## Async-mode brokers + tests
## ---------------------------------------------------------------------------

RequestBroker:
  type SimpleResponse = object
    value*: string

  proc signatureFetch*(): Future[Result[SimpleResponse, string]] {.async.}

RequestBroker:
  type KeyedResponse = object
    key*: string
    payload*: string

  proc signatureFetchWithKey*(
    key: string, subKey: int
  ): Future[Result[KeyedResponse, string]] {.async.}

RequestBroker:
  type DualResponse = object
    note*: string
    count*: int

  proc signatureNoInput*(): Future[Result[DualResponse, string]] {.async.}
  proc signatureWithInput*(
    suffix: string
  ): Future[Result[DualResponse, string]] {.async.}

RequestBroker(async):
  type ImplicitResponse = ref object
    note*: string

static:
  doAssert typeof(SimpleResponse.request()) is Future[Result[SimpleResponse, string]]

suite "RequestBroker macro (async mode)":
  test "serves zero-argument providers":
    check SimpleResponse.setProvider(
      proc(): Future[Result[SimpleResponse, string]] {.async.} =
        ok(SimpleResponse(value: "hi"))
    ).isOk()

    let res = waitFor SimpleResponse.request()
    check res.isOk()
    check res.value.value == "hi"

    SimpleResponse.clearProvider()

  test "zero-argument request errors when unset":
    let res = waitFor SimpleResponse.request()
    check res.isErr()
    check res.error.contains("no zero-arg provider")

  test "serves input-based providers":
    var seen: seq[string] = @[]
    check KeyedResponse.setProvider(
      proc(key: string, subKey: int): Future[Result[KeyedResponse, string]] {.async.} =
        seen.add(key)
        ok(KeyedResponse(key: key, payload: key & "-payload+" & $subKey))
    ).isOk()

    let res = waitFor KeyedResponse.request("topic", 1)
    check res.isOk()
    check res.value.key == "topic"
    check res.value.payload == "topic-payload+1"
    check seen == @["topic"]

    KeyedResponse.clearProvider()

  test "catches provider exception":
    check KeyedResponse.setProvider(
      proc(key: string, subKey: int): Future[Result[KeyedResponse, string]] {.async.} =
        raise newException(ValueError, "simulated failure")
    ).isOk()

    let res = waitFor KeyedResponse.request("neglected", 11)
    check res.isErr()
    check res.error.contains("simulated failure")

    KeyedResponse.clearProvider()

  test "input request errors when unset":
    let res = waitFor KeyedResponse.request("foo", 2)
    check res.isErr()
    check res.error.contains("input signature")

  test "supports both provider types simultaneously":
    check DualResponse.setProvider(
      proc(): Future[Result[DualResponse, string]] {.async.} =
        ok(DualResponse(note: "base", count: 1))
    ).isOk()

    check DualResponse.setProvider(
      proc(suffix: string): Future[Result[DualResponse, string]] {.async.} =
        ok(DualResponse(note: "base" & suffix, count: suffix.len))
    ).isOk()

    let noInput = waitFor DualResponse.request()
    check noInput.isOk()
    check noInput.value.note == "base"

    let withInput = waitFor DualResponse.request("-extra")
    check withInput.isOk()
    check withInput.value.note == "base-extra"
    check withInput.value.count == 6

    DualResponse.clearProvider()

  test "clearProvider resets both entries":
    check DualResponse.setProvider(
      proc(): Future[Result[DualResponse, string]] {.async.} =
        ok(DualResponse(note: "temp", count: 0))
    ).isOk()
    DualResponse.clearProvider()

    let res = waitFor DualResponse.request()
    check res.isErr()

  test "implicit zero-argument provider works by default":
    check ImplicitResponse.setProvider(
      proc(): Future[Result[ImplicitResponse, string]] {.async.} =
        ok(ImplicitResponse(note: "auto"))
    ).isOk()

    let res = waitFor ImplicitResponse.request()
    check res.isOk()

    ImplicitResponse.clearProvider()
    check res.value.note == "auto"

  test "implicit zero-argument request errors when unset":
    let res = waitFor ImplicitResponse.request()
    check res.isErr()
    check res.error.contains("no zero-arg provider")

  test "no provider override":
    check DualResponse.setProvider(
      proc(): Future[Result[DualResponse, string]] {.async.} =
        ok(DualResponse(note: "base", count: 1))
    ).isOk()

    check DualResponse.setProvider(
      proc(suffix: string): Future[Result[DualResponse, string]] {.async.} =
        ok(DualResponse(note: "base" & suffix, count: suffix.len))
    ).isOk()

    let overrideProc = proc(): Future[Result[DualResponse, string]] {.async.} =
      ok(DualResponse(note: "something else", count: 1))

    check DualResponse.setProvider(overrideProc).isErr()

    let noInput = waitFor DualResponse.request()
    check noInput.isOk()
    check noInput.value.note == "base"

    let stillResponse = waitFor DualResponse.request(" still works")
    check stillResponse.isOk()
    check stillResponse.value.note.contains("base still works")

    DualResponse.clearProvider()

    let noResponse = waitFor DualResponse.request()
    check noResponse.isErr()
    check noResponse.error.contains("no zero-arg provider")

    let noResponseArg = waitFor DualResponse.request("Should not work")
    check noResponseArg.isErr()
    check noResponseArg.error.contains("no provider")

    check DualResponse.setProvider(overrideProc).isOk()

    let nowSuccWithOverride = waitFor DualResponse.request()
    check nowSuccWithOverride.isOk()
    check nowSuccWithOverride.value.note == "something else"
    check nowSuccWithOverride.value.count == 1

    DualResponse.clearProvider()

## ---------------------------------------------------------------------------
## Sync-mode brokers + tests
## ---------------------------------------------------------------------------

RequestBroker(sync):
  type SimpleResponseSync = object
    value*: string

  proc signatureFetch*(): Result[SimpleResponseSync, string]

RequestBroker(sync):
  type KeyedResponseSync = object
    key*: string
    payload*: string

  proc signatureFetchWithKey*(
    key: string, subKey: int
  ): Result[KeyedResponseSync, string]

RequestBroker(sync):
  type DualResponseSync = object
    note*: string
    count*: int

  proc signatureNoInput*(): Result[DualResponseSync, string]
  proc signatureWithInput*(suffix: string): Result[DualResponseSync, string]

RequestBroker(sync):
  type ImplicitResponseSync = ref object
    note*: string

static:
  doAssert typeof(SimpleResponseSync.request()) is Result[SimpleResponseSync, string]
  doAssert not (
    typeof(SimpleResponseSync.request()) is Future[Result[SimpleResponseSync, string]]
  )
  doAssert typeof(KeyedResponseSync.request("topic", 1)) is
    Result[KeyedResponseSync, string]

suite "RequestBroker macro (sync mode)":
  test "serves zero-argument providers (sync)":
    check SimpleResponseSync.setProvider(
      proc(): Result[SimpleResponseSync, string] =
        ok(SimpleResponseSync(value: "hi"))
    ).isOk()

    let res = SimpleResponseSync.request()
    check res.isOk()
    check res.value.value == "hi"

    SimpleResponseSync.clearProvider()

  test "zero-argument request errors when unset (sync)":
    let res = SimpleResponseSync.request()
    check res.isErr()
    check res.error.contains("no zero-arg provider")

  test "serves input-based providers (sync)":
    var seen: seq[string] = @[]
    check KeyedResponseSync.setProvider(
      proc(key: string, subKey: int): Result[KeyedResponseSync, string] =
        seen.add(key)
        ok(KeyedResponseSync(key: key, payload: key & "-payload+" & $subKey))
    ).isOk()

    let res = KeyedResponseSync.request("topic", 1)
    check res.isOk()
    check res.value.key == "topic"
    check res.value.payload == "topic-payload+1"
    check seen == @["topic"]

    KeyedResponseSync.clearProvider()

  test "catches provider exception (sync)":
    check KeyedResponseSync.setProvider(
      proc(key: string, subKey: int): Result[KeyedResponseSync, string] =
        raise newException(ValueError, "simulated failure")
    ).isOk()

    let res = KeyedResponseSync.request("neglected", 11)
    check res.isErr()
    check res.error.contains("simulated failure")

    KeyedResponseSync.clearProvider()

  test "input request errors when unset (sync)":
    let res = KeyedResponseSync.request("foo", 2)
    check res.isErr()
    check res.error.contains("input signature")

  test "supports both provider types simultaneously (sync)":
    check DualResponseSync.setProvider(
      proc(): Result[DualResponseSync, string] =
        ok(DualResponseSync(note: "base", count: 1))
    ).isOk()

    check DualResponseSync.setProvider(
      proc(suffix: string): Result[DualResponseSync, string] =
        ok(DualResponseSync(note: "base" & suffix, count: suffix.len))
    ).isOk()

    let noInput = DualResponseSync.request()
    check noInput.isOk()
    check noInput.value.note == "base"

    let withInput = DualResponseSync.request("-extra")
    check withInput.isOk()
    check withInput.value.note == "base-extra"
    check withInput.value.count == 6

    DualResponseSync.clearProvider()

  test "clearProvider resets both entries (sync)":
    check DualResponseSync.setProvider(
      proc(): Result[DualResponseSync, string] =
        ok(DualResponseSync(note: "temp", count: 0))
    ).isOk()
    DualResponseSync.clearProvider()

    let res = DualResponseSync.request()
    check res.isErr()

  test "implicit zero-argument provider works by default (sync)":
    check ImplicitResponseSync.setProvider(
      proc(): Result[ImplicitResponseSync, string] =
        ok(ImplicitResponseSync(note: "auto"))
    ).isOk()

    let res = ImplicitResponseSync.request()
    check res.isOk()

    ImplicitResponseSync.clearProvider()
    check res.value.note == "auto"

  test "implicit zero-argument request errors when unset (sync)":
    let res = ImplicitResponseSync.request()
    check res.isErr()
    check res.error.contains("no zero-arg provider")

  test "implicit zero-argument provider raises error (sync)":
|
test "implicit zero-argument provider raises error (sync)":
|
||||||
|
check ImplicitResponseSync
|
||||||
|
.setProvider(
|
||||||
|
proc(): Result[ImplicitResponseSync, string] =
|
||||||
|
raise newException(ValueError, "simulated failure")
|
||||||
|
)
|
||||||
|
.isOk()
|
||||||
|
|
||||||
|
let res = ImplicitResponseSync.request()
|
||||||
|
check res.isErr()
|
||||||
|
check res.error.contains("simulated failure")
|
||||||
|
|
||||||
|
ImplicitResponseSync.clearProvider()
|
||||||
|
|
||||||
|
## ---------------------------------------------------------------------------
|
||||||
|
## POD / external type brokers + tests (distinct/alias behavior)
|
||||||
|
## ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
type ExternalDefinedTypeAsync = object
|
||||||
|
label*: string
|
||||||
|
|
||||||
|
type ExternalDefinedTypeSync = object
|
||||||
|
label*: string
|
||||||
|
|
||||||
|
type ExternalDefinedTypeShared = object
|
||||||
|
label*: string
|
||||||
|
|
||||||
|
RequestBroker:
|
||||||
|
type PodResponse = int
|
||||||
|
|
||||||
|
proc signatureFetch*(): Future[Result[PodResponse, string]] {.async.}
|
||||||
|
|
||||||
|
RequestBroker:
|
||||||
|
type ExternalAliasedResponse = ExternalDefinedTypeAsync
|
||||||
|
|
||||||
|
proc signatureFetch*(): Future[Result[ExternalAliasedResponse, string]] {.async.}
|
||||||
|
|
||||||
|
RequestBroker(sync):
|
||||||
|
type ExternalAliasedResponseSync = ExternalDefinedTypeSync
|
||||||
|
|
||||||
|
proc signatureFetch*(): Result[ExternalAliasedResponseSync, string]
|
||||||
|
|
||||||
|
RequestBroker(sync):
|
||||||
|
type DistinctStringResponseA = distinct string
|
||||||
|
|
||||||
|
RequestBroker(sync):
|
||||||
|
type DistinctStringResponseB = distinct string
|
||||||
|
|
||||||
|
RequestBroker(sync):
|
||||||
|
type ExternalDistinctResponseA = distinct ExternalDefinedTypeShared
|
||||||
|
|
||||||
|
RequestBroker(sync):
|
||||||
|
type ExternalDistinctResponseB = distinct ExternalDefinedTypeShared
|
||||||
|
|
||||||
|
suite "RequestBroker macro (POD/external types)":
|
||||||
|
test "supports non-object response types (async)":
|
||||||
|
check PodResponse
|
||||||
|
.setProvider(
|
||||||
|
proc(): Future[Result[PodResponse, string]] {.async.} =
|
||||||
|
ok(PodResponse(123))
|
||||||
|
)
|
||||||
|
.isOk()
|
||||||
|
|
||||||
|
let res = waitFor PodResponse.request()
|
||||||
|
check res.isOk()
|
||||||
|
check int(res.value) == 123
|
||||||
|
|
||||||
|
PodResponse.clearProvider()
|
||||||
|
|
||||||
|
test "supports aliased external types (async)":
|
||||||
|
check ExternalAliasedResponse
|
||||||
|
.setProvider(
|
||||||
|
proc(): Future[Result[ExternalAliasedResponse, string]] {.async.} =
|
||||||
|
ok(ExternalAliasedResponse(ExternalDefinedTypeAsync(label: "ext")))
|
||||||
|
)
|
||||||
|
.isOk()
|
||||||
|
|
||||||
|
let res = waitFor ExternalAliasedResponse.request()
|
||||||
|
check res.isOk()
|
||||||
|
check ExternalDefinedTypeAsync(res.value).label == "ext"
|
||||||
|
|
||||||
|
ExternalAliasedResponse.clearProvider()
|
||||||
|
|
||||||
|
test "supports aliased external types (sync)":
|
||||||
|
check ExternalAliasedResponseSync
|
||||||
|
.setProvider(
|
||||||
|
proc(): Result[ExternalAliasedResponseSync, string] =
|
||||||
|
ok(ExternalAliasedResponseSync(ExternalDefinedTypeSync(label: "ext")))
|
||||||
|
)
|
||||||
|
.isOk()
|
||||||
|
|
||||||
|
let res = ExternalAliasedResponseSync.request()
|
||||||
|
check res.isOk()
|
||||||
|
check ExternalDefinedTypeSync(res.value).label == "ext"
|
||||||
|
|
||||||
|
ExternalAliasedResponseSync.clearProvider()
|
||||||
|
|
||||||
|
test "distinct response types avoid overload ambiguity (sync)":
|
||||||
|
check DistinctStringResponseA
|
||||||
|
.setProvider(
|
||||||
|
proc(): Result[DistinctStringResponseA, string] =
|
||||||
|
ok(DistinctStringResponseA("a"))
|
||||||
|
)
|
||||||
|
.isOk()
|
||||||
|
|
||||||
|
check DistinctStringResponseB
|
||||||
|
.setProvider(
|
||||||
|
proc(): Result[DistinctStringResponseB, string] =
|
||||||
|
ok(DistinctStringResponseB("b"))
|
||||||
|
)
|
||||||
|
.isOk()
|
||||||
|
|
||||||
|
check ExternalDistinctResponseA
|
||||||
|
.setProvider(
|
||||||
|
proc(): Result[ExternalDistinctResponseA, string] =
|
||||||
|
ok(ExternalDistinctResponseA(ExternalDefinedTypeShared(label: "ea")))
|
||||||
|
)
|
||||||
|
.isOk()
|
||||||
|
|
||||||
|
check ExternalDistinctResponseB
|
||||||
|
.setProvider(
|
||||||
|
proc(): Result[ExternalDistinctResponseB, string] =
|
||||||
|
ok(ExternalDistinctResponseB(ExternalDefinedTypeShared(label: "eb")))
|
||||||
|
)
|
||||||
|
.isOk()
|
||||||
|
|
||||||
|
let resA = DistinctStringResponseA.request()
|
||||||
|
let resB = DistinctStringResponseB.request()
|
||||||
|
check resA.isOk()
|
||||||
|
check resB.isOk()
|
||||||
|
check string(resA.value) == "a"
|
||||||
|
check string(resB.value) == "b"
|
||||||
|
|
||||||
|
let resEA = ExternalDistinctResponseA.request()
|
||||||
|
let resEB = ExternalDistinctResponseB.request()
|
||||||
|
check resEA.isOk()
|
||||||
|
check resEB.isOk()
|
||||||
|
check ExternalDefinedTypeShared(resEA.value).label == "ea"
|
||||||
|
check ExternalDefinedTypeShared(resEB.value).label == "eb"
|
||||||
|
|
||||||
|
DistinctStringResponseA.clearProvider()
|
||||||
|
DistinctStringResponseB.clearProvider()
|
||||||
|
ExternalDistinctResponseA.clearProvider()
|
||||||
|
ExternalDistinctResponseB.clearProvider()
|
||||||
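The suites above pin down the whole broker surface: per response type, the macro generates setProvider, request, and clearProvider, in sync and async flavors. A minimal usage sketch, assuming only what the tests show (ConfigSnapshot is a hypothetical response type, not part of this diff):

RequestBroker(sync):
  type ConfigSnapshot = object
    payload*: string

  proc signatureFetch*(): Result[ConfigSnapshot, string]

# Provider side: register a single zero-arg provider; a second
# registration is rejected, as the tests assert via isErr().
discard ConfigSnapshot.setProvider(
  proc(): Result[ConfigSnapshot, string] =
    ok(ConfigSnapshot(payload: "current"))
)

# Consumer side: in sync mode request() returns Result[...] directly,
# or errs with "no zero-arg provider" when nothing is registered.
let snap = ConfigSnapshot.request()

# Clear between tests so provider state does not leak.
ConfigSnapshot.clearProvider()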
@@ -13,6 +13,7 @@ import
     node/peer_manager,
     node/waku_node,
     node/kernel_api,
+    node/kernel_api/lightpush,
     waku_lightpush_legacy,
     waku_lightpush_legacy/common,
     waku_lightpush_legacy/protocol_metrics,
@@ -56,7 +57,7 @@ suite "Waku Legacy Lightpush - End To End":
     (await server.mountRelay()).isOkOr:
       assert false, "Failed to mount relay"

-    await server.mountLegacyLightpush() # without rln-relay
+    check (await server.mountLegacyLightpush()).isOk() # without rln-relay
     client.mountLegacyLightpushClient()

     serverRemotePeerInfo = server.peerInfo.toRemotePeerInfo()
@@ -135,8 +136,8 @@ suite "RLN Proofs as a Lightpush Service":
       server = newTestWakuNode(serverKey, parseIpAddress("0.0.0.0"), Port(0))
       client = newTestWakuNode(clientKey, parseIpAddress("0.0.0.0"), Port(0))

-    anvilProc = runAnvil()
-    manager = waitFor setupOnchainGroupManager()
+    anvilProc = runAnvil(stateFile = some(DEFAULT_ANVIL_STATE_PATH))
+    manager = waitFor setupOnchainGroupManager(deployContracts = false)

     # mount rln-relay
     let wakuRlnConfig = getWakuRlnConfig(manager = manager, index = MembershipIndex(1))
@@ -147,7 +148,7 @@ suite "RLN Proofs as a Lightpush Service":
     (await server.mountRelay()).isOkOr:
       assert false, "Failed to mount relay"
     await server.mountRlnRelay(wakuRlnConfig)
-    await server.mountLegacyLightPush()
+    check (await server.mountLegacyLightPush()).isOk()
     client.mountLegacyLightPushClient()

     let manager1 = cast[OnchainGroupManager](server.wakuRlnRelay.groupManager)
@@ -213,7 +214,7 @@ suite "Waku Legacy Lightpush message delivery":
       assert false, "Failed to mount relay"
     (await bridgeNode.mountRelay()).isOkOr:
       assert false, "Failed to mount relay"
-    await bridgeNode.mountLegacyLightPush()
+    check (await bridgeNode.mountLegacyLightPush()).isOk()
     lightNode.mountLegacyLightPushClient()

     discard await lightNode.peerManager.dialPeer(
@@ -249,3 +250,19 @@ suite "Waku Legacy Lightpush message delivery":

     ## Cleanup
     await allFutures(lightNode.stop(), bridgeNode.stop(), destNode.stop())
+
+suite "Waku Legacy Lightpush mounting behavior":
+  asyncTest "fails to mount when relay is not mounted":
+    ## Given a node without Relay mounted
+    let
+      key = generateSecp256k1Key()
+      node = newTestWakuNode(key, parseIpAddress("0.0.0.0"), Port(0))
+
+    # Do not mount Relay on purpose
+    check node.wakuRelay.isNil()
+
+    ## Then mounting Legacy Lightpush must fail
+    let res = await node.mountLegacyLightPush()
+    check:
+      res.isErr()
+      res.error == MountWithoutRelayError
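The mount procs now return a Result instead of silently succeeding, and the new "mounting behavior" suite pins the failure mode when Relay is absent; the non-legacy variant in the next file gets the same treatment. The guard itself is not shown in this diff; a minimal sketch of what the tests imply, using only names the tests rely on (the body is hypothetical):

proc mountLegacyLightPush*(node: WakuNode): Future[Result[void, string]] {.async.} =
  # Lightpush hands pushed messages to Relay, so Relay must be mounted first.
  if node.wakuRelay.isNil():
    return err(MountWithoutRelayError)
  # ... mount the protocol handler as before ...
  return ok()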
@@ -13,6 +13,7 @@ import
     node/peer_manager,
     node/waku_node,
     node/kernel_api,
+    node/kernel_api/lightpush,
     waku_lightpush,
     waku_rln_relay,
   ],
@@ -55,7 +56,7 @@ suite "Waku Lightpush - End To End":

     (await server.mountRelay()).isOkOr:
       assert false, "Failed to mount relay"
-    await server.mountLightpush() # without rln-relay
+    check (await server.mountLightpush()).isOk() # without rln-relay
     client.mountLightpushClient()

     serverRemotePeerInfo = server.peerInfo.toRemotePeerInfo()
@@ -135,8 +136,8 @@ suite "RLN Proofs as a Lightpush Service":
       server = newTestWakuNode(serverKey, parseIpAddress("0.0.0.0"), Port(0))
       client = newTestWakuNode(clientKey, parseIpAddress("0.0.0.0"), Port(0))

-    anvilProc = runAnvil()
-    manager = waitFor setupOnchainGroupManager()
+    anvilProc = runAnvil(stateFile = some(DEFAULT_ANVIL_STATE_PATH))
+    manager = waitFor setupOnchainGroupManager(deployContracts = false)

     # mount rln-relay
     let wakuRlnConfig = getWakuRlnConfig(manager = manager, index = MembershipIndex(1))
@@ -147,7 +148,7 @@ suite "RLN Proofs as a Lightpush Service":
     (await server.mountRelay()).isOkOr:
       assert false, "Failed to mount relay"
     await server.mountRlnRelay(wakuRlnConfig)
-    await server.mountLightPush()
+    check (await server.mountLightPush()).isOk()
     client.mountLightPushClient()

     let manager1 = cast[OnchainGroupManager](server.wakuRlnRelay.groupManager)
@@ -213,7 +214,7 @@ suite "Waku Lightpush message delivery":
       assert false, "Failed to mount relay"
     (await bridgeNode.mountRelay()).isOkOr:
       assert false, "Failed to mount relay"
-    await bridgeNode.mountLightPush()
+    check (await bridgeNode.mountLightPush()).isOk()
     lightNode.mountLightPushClient()

     discard await lightNode.peerManager.dialPeer(
@@ -251,3 +252,19 @@ suite "Waku Lightpush message delivery":

     ## Cleanup
     await allFutures(lightNode.stop(), bridgeNode.stop(), destNode.stop())
+
+suite "Waku Lightpush mounting behavior":
+  asyncTest "fails to mount when relay is not mounted":
+    ## Given a node without Relay mounted
+    let
+      key = generateSecp256k1Key()
+      node = newTestWakuNode(key, parseIpAddress("0.0.0.0"), Port(0))
+
+    # Do not mount Relay on purpose
+    check node.wakuRelay.isNil()
+
+    ## Then mounting Lightpush must fail
+    let res = await node.mountLightPush()
+    check:
+      res.isErr()
+      res.error == MountWithoutRelayError
@@ -66,15 +66,17 @@ suite "Waku Peer Exchange":

  suite "fetchPeerExchangePeers":
    var node2 {.threadvar.}: WakuNode
+    var node3 {.threadvar.}: WakuNode

    asyncSetup:
      node = newTestWakuNode(generateSecp256k1Key(), bindIp, bindPort)
      node2 = newTestWakuNode(generateSecp256k1Key(), bindIp, bindPort)
+      node3 = newTestWakuNode(generateSecp256k1Key(), bindIp, bindPort)

-      await allFutures(node.start(), node2.start())
+      await allFutures(node.start(), node2.start(), node3.start())

    asyncTeardown:
-      await allFutures(node.stop(), node2.stop())
+      await allFutures(node.stop(), node2.stop(), node3.stop())

    asyncTest "Node fetches without mounting peer exchange":
      # When a node, without peer exchange mounted, fetches peers
@@ -104,12 +106,10 @@ suite "Waku Peer Exchange":
      await allFutures([node.mountPeerExchangeClient(), node2.mountPeerExchange()])
      check node.peerManager.switch.peerStore.peers.len == 0

-      # Mock that we discovered a node (to avoid running discv5)
-      var enr = enr.Record()
-      assert enr.fromUri(
-        "enr:-Iu4QGNuTvNRulF3A4Kb9YHiIXLr0z_CpvWkWjWKU-o95zUPR_In02AWek4nsSk7G_-YDcaT4bDRPzt5JIWvFqkXSNcBgmlkgnY0gmlwhE0WsGeJc2VjcDI1NmsxoQKp9VzU2FAh7fwOwSpg1M_Ekz4zzl0Fpbg6po2ZwgVwQYN0Y3CC6mCFd2FrdTIB"
-      ), "Failed to parse ENR"
-      node2.wakuPeerExchange.enrCache.add(enr)
+      # Simulate node2 discovering node3 via Discv5
+      var rpInfo = node3.peerInfo.toRemotePeerInfo()
+      rpInfo.enr = some(node3.enr)
+      node2.peerManager.addPeer(rpInfo, PeerOrigin.Discv5)

      # Set node2 as service peer (default one) for px protocol
      node.peerManager.addServicePeer(
@@ -121,10 +121,8 @@ suite "Waku Peer Exchange":
      check res.tryGet() == 1

      # Check that the peer ended up in the peerstore
-      let rpInfo = enr.toRemotePeerInfo.get()
      check:
        node.peerManager.switch.peerStore.peers.anyIt(it.peerId == rpInfo.peerId)
-        node.peerManager.switch.peerStore.peers.anyIt(it.addrs == rpInfo.addrs)

  suite "setPeerExchangePeer":
    var node2 {.threadvar.}: WakuNode
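The hard-coded ENR fixtures are gone across these suites: tests now start a real extra node and hand it to the peer manager as if Discv5 had discovered it. The same three lines recur throughout this and the later peer-exchange diffs; a hypothetical helper (not in the diff) capturing the pattern:

proc simulateDiscv5Discovery(host: WakuNode, discovered: WakuNode) =
  # Register `discovered` with `host` as a Discv5-origin peer, ENR included.
  var info = discovered.peerInfo.toRemotePeerInfo()
  info.enr = some(discovered.enr)
  host.peerManager.addPeer(info, PeerOrigin.Discv5)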
@@ -282,7 +282,7 @@ suite "Sharding":
  asyncTest "lightpush":
    # Given a connected server and client subscribed to the same pubsub topic
    client.mountLegacyLightPushClient()
-    await server.mountLightpush()
+    check (await server.mountLightpush()).isOk()

    let
      topic = "/waku/2/rs/0/1"
@@ -405,7 +405,7 @@ suite "Sharding":
  asyncTest "lightpush (automatic sharding filtering)":
    # Given a connected server and client using the same content topic (with two different formats)
    client.mountLegacyLightPushClient()
-    await server.mountLightpush()
+    check (await server.mountLightpush()).isOk()

    let
      contentTopicShort = "/toychat/2/huilong/proto"
@@ -563,7 +563,7 @@ suite "Sharding":
  asyncTest "lightpush - exclusion (automatic sharding filtering)":
    # Given a connected server and client using different content topics
    client.mountLegacyLightPushClient()
-    await server.mountLightpush()
+    check (await server.mountLightpush()).isOk()

    let
      contentTopic1 = "/toychat/2/huilong/proto"
@@ -874,7 +874,7 @@ suite "Sharding":
  asyncTest "Waku LightPush Sharding (Static Sharding)":
    # Given a connected server and client using two different pubsub topics
    client.mountLegacyLightPushClient()
-    await server.mountLightpush()
+    check (await server.mountLightpush()).isOk()

    # Given a connected server and client subscribed to multiple pubsub topics
    let
@@ -1,12 +1,20 @@
 {.used.}

-import std/options, chronos, testutils/unittests, libp2p/builders
+import
+  std/options,
+  chronos,
+  testutils/unittests,
+  libp2p/builders,
+  libp2p/protocols/rendezvous

 import
   waku/waku_core/peers,
+  waku/waku_core/codecs,
   waku/node/waku_node,
   waku/node/peer_manager/peer_manager,
   waku/waku_rendezvous/protocol,
+  waku/waku_rendezvous/common,
+  waku/waku_rendezvous/waku_peer_record,
   ./testlib/[wakucore, wakunode]

 procSuite "Waku Rendezvous":
@@ -50,18 +58,26 @@ procSuite "Waku Rendezvous":
    node2.peerManager.addPeer(peerInfo3)
    node3.peerManager.addPeer(peerInfo2)

-    let namespace = "test/name/space"
-
-    let res = await node1.wakuRendezvous.batchAdvertise(
-      namespace, 60.seconds, @[peerInfo2.peerId]
-    )
+    let res = await node1.wakuRendezvous.advertiseAll()

    assert res.isOk(), $res.error
+    # Rendezvous Request API requires dialing first
+    let connOpt =
+      await node3.peerManager.dialPeer(peerInfo2.peerId, WakuRendezVousCodec)
+    require:
+      connOpt.isSome

-    let response =
-      await node3.wakuRendezvous.batchRequest(namespace, 1, @[peerInfo2.peerId])
-    assert response.isOk(), $response.error
-    let records = response.get()
+    var records: seq[WakuPeerRecord]
+    try:
+      records = await rendezvous.request[WakuPeerRecord](
+        node3.wakuRendezvous,
+        Opt.some(computeMixNamespace(clusterId)),
+        Opt.some(1),
+        Opt.some(@[peerInfo2.peerId]),
+      )
+    except CatchableError as e:
+      assert false, "Request failed with exception: " & e.msg

    check:
      records.len == 1
      records[0].peerId == peerInfo1.peerId
+      #records[0].mixPubKey == $node1.wakuMix.pubKey
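The rendezvous test moves from the Result-returning batchAdvertise/batchRequest pair to advertiseAll plus a generic rendezvous.request[WakuPeerRecord] call that can raise, which is why the test now needs try/except. A hedged sketch, not part of the diff, of folding that call back into a Result for callers that prefer the old style:

proc requestPeerRecords(
    rdv: WakuRendezVous, namespace: string, limit: int, peers: seq[PeerId]
): Future[Result[seq[WakuPeerRecord], string]] {.async.} =
  try:
    let records = await rendezvous.request[WakuPeerRecord](
      rdv, Opt.some(namespace), Opt.some(limit), Opt.some(peers)
    )
    return ok(records)
  except CatchableError as e:
    return err("rendezvous request failed: " & e.msg)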
@@ -426,7 +426,6 @@ suite "Waku Discovery v5":
    confBuilder.withNodeKey(libp2p_keys.PrivateKey.random(Secp256k1, myRng[])[])
    confBuilder.discv5Conf.withEnabled(true)
    confBuilder.discv5Conf.withUdpPort(9000.Port)
-
    let conf = confBuilder.build().valueOr:
      raiseAssert error

@@ -468,6 +467,9 @@ suite "Waku Discovery v5":
    # leave some time for discv5 to act
    await sleepAsync(chronos.seconds(10))

+    # Connect peers via peer manager to ensure identify happens
+    discard await waku0.node.peerManager.connectPeer(waku1.node.switch.peerInfo)
+
    var r = waku0.node.peerManager.selectPeer(WakuPeerExchangeCodec)
    assert r.isSome(), "could not retrieve peer mounting WakuPeerExchangeCodec"

@@ -480,7 +482,7 @@ suite "Waku Discovery v5":
    r = waku2.node.peerManager.selectPeer(WakuPeerExchangeCodec)
    assert r.isSome(), "could not retrieve peer mounting WakuPeerExchangeCodec"

-    r = waku2.node.peerManager.selectPeer(RendezVousCodec)
+    r = waku2.node.peerManager.selectPeer(WakuRendezVousCodec)
    assert r.isSome(), "could not retrieve peer mounting RendezVousCodec"

  asyncTest "Discv5 bootstrap nodes should be added to the peer store":
@@ -37,7 +37,7 @@ suite "Rate limited push service":

      handlerFuture = newFuture[(string, WakuMessage)]()
      let requestRes =
-        await client.publish(some(DefaultPubsubTopic), message, peer = serverPeerId)
+        await client.publish(some(DefaultPubsubTopic), message, serverPeerId)

      check await handlerFuture.withTimeout(50.millis)

@@ -66,7 +66,7 @@ suite "Rate limited push service":
      var endTime = Moment.now()
      var elapsed: Duration = (endTime - startTime)
      await sleepAsync(tokenPeriod - elapsed + firstWaitExtend)
-      firstWaitEXtend = 100.millis
+      firstWaitExtend = 100.millis

    ## Cleanup
    await allFutures(clientSwitch.stop(), serverSwitch.stop())
@@ -99,7 +99,7 @@ suite "Rate limited push service":
    let message = fakeWakuMessage()
    handlerFuture = newFuture[(string, WakuMessage)]()
    let requestRes =
-      await client.publish(some(DefaultPubsubTopic), message, peer = serverPeerId)
+      await client.publish(some(DefaultPubsubTopic), message, serverPeerId)
    discard await handlerFuture.withTimeout(10.millis)

    check:
@@ -114,7 +114,7 @@ suite "Rate limited push service":
    let message = fakeWakuMessage()
    handlerFuture = newFuture[(string, WakuMessage)]()
    let requestRes =
-      await client.publish(some(DefaultPubsubTopic), message, peer = serverPeerId)
+      await client.publish(some(DefaultPubsubTopic), message, serverPeerId)
    discard await handlerFuture.withTimeout(10.millis)

    check:
@@ -142,9 +142,13 @@ suite "Waku Peer Exchange":
        newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0))
      node2 =
        newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0))
+      node3 =
+        newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0))
+      node4 =
+        newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0))

    # Start and mount peer exchange
-    await allFutures([node1.start(), node2.start()])
+    await allFutures([node1.start(), node2.start(), node3.start(), node4.start()])
    await allFutures([node1.mountPeerExchange(), node2.mountPeerExchangeClient()])

    # Create connection
@@ -154,18 +158,15 @@ suite "Waku Peer Exchange":
    require:
      connOpt.isSome

-    # Create some enr and add to peer exchange (simulating disv5)
-    var enr1, enr2 = enr.Record()
-    check enr1.fromUri(
-      "enr:-Iu4QGNuTvNRulF3A4Kb9YHiIXLr0z_CpvWkWjWKU-o95zUPR_In02AWek4nsSk7G_-YDcaT4bDRPzt5JIWvFqkXSNcBgmlkgnY0gmlwhE0WsGeJc2VjcDI1NmsxoQKp9VzU2FAh7fwOwSpg1M_Ekz4zzl0Fpbg6po2ZwgVwQYN0Y3CC6mCFd2FrdTIB"
-    )
-    check enr2.fromUri(
-      "enr:-Iu4QGJllOWlviPIh_SGR-VVm55nhnBIU5L-s3ran7ARz_4oDdtJPtUs3Bc5aqZHCiPQX6qzNYF2ARHER0JPX97TFbEBgmlkgnY0gmlwhE0WsGeJc2VjcDI1NmsxoQP3ULycvday4EkvtVu0VqbBdmOkbfVLJx8fPe0lE_dRkIN0Y3CC6mCFd2FrdTIB"
-    )
+    # Simulate node1 discovering node3 via Discv5
+    var info3 = node3.peerInfo.toRemotePeerInfo()
+    info3.enr = some(node3.enr)
+    node1.peerManager.addPeer(info3, PeerOrigin.Discv5)

-    # Mock that we have discovered these enrs
-    node1.wakuPeerExchange.enrCache.add(enr1)
-    node1.wakuPeerExchange.enrCache.add(enr2)
+    # Simulate node1 discovering node4 via Discv5
+    var info4 = node4.peerInfo.toRemotePeerInfo()
+    info4.enr = some(node4.enr)
+    node1.peerManager.addPeer(info4, PeerOrigin.Discv5)

    # Request 2 peer from px. Test all request variants
    let response1 = await node2.wakuPeerExchangeClient.request(2)
@@ -185,12 +186,12 @@ suite "Waku Peer Exchange":
      response3.get().peerInfos.len == 2

      # Since it can return duplicates test that at least one of the enrs is in the response
-      response1.get().peerInfos.anyIt(it.enr == enr1.raw) or
-        response1.get().peerInfos.anyIt(it.enr == enr2.raw)
-      response2.get().peerInfos.anyIt(it.enr == enr1.raw) or
-        response2.get().peerInfos.anyIt(it.enr == enr2.raw)
-      response3.get().peerInfos.anyIt(it.enr == enr1.raw) or
-        response3.get().peerInfos.anyIt(it.enr == enr2.raw)
+      response1.get().peerInfos.anyIt(it.enr == node3.enr.raw) or
+        response1.get().peerInfos.anyIt(it.enr == node4.enr.raw)
+      response2.get().peerInfos.anyIt(it.enr == node3.enr.raw) or
+        response2.get().peerInfos.anyIt(it.enr == node4.enr.raw)
+      response3.get().peerInfos.anyIt(it.enr == node3.enr.raw) or
+        response3.get().peerInfos.anyIt(it.enr == node4.enr.raw)

  asyncTest "Request fails gracefully":
    let
@@ -265,8 +266,8 @@ suite "Waku Peer Exchange":
    peerInfo2.origin = PeerOrigin.Discv5

    check:
-      not poolFilter(cluster, peerInfo1)
-      poolFilter(cluster, peerInfo2)
+      poolFilter(cluster, peerInfo1).isErr()
+      poolFilter(cluster, peerInfo2).isOk()

  asyncTest "Request 0 peers, with 1 peer in PeerExchange":
    # Given two valid nodes with PeerExchange
@@ -275,9 +276,11 @@ suite "Waku Peer Exchange":
        newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0))
      node2 =
        newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0))
+      node3 =
+        newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0))

    # Start and mount peer exchange
-    await allFutures([node1.start(), node2.start()])
+    await allFutures([node1.start(), node2.start(), node3.start()])
    await allFutures([node1.mountPeerExchange(), node2.mountPeerExchangeClient()])

    # Connect the nodes
@@ -286,12 +289,10 @@ suite "Waku Peer Exchange":
    )
    assert dialResponse.isSome

-    # Mock that we have discovered one enr
-    var record = enr.Record()
-    check record.fromUri(
-      "enr:-Iu4QGNuTvNRulF3A4Kb9YHiIXLr0z_CpvWkWjWKU-o95zUPR_In02AWek4nsSk7G_-YDcaT4bDRPzt5JIWvFqkXSNcBgmlkgnY0gmlwhE0WsGeJc2VjcDI1NmsxoQKp9VzU2FAh7fwOwSpg1M_Ekz4zzl0Fpbg6po2ZwgVwQYN0Y3CC6mCFd2FrdTIB"
-    )
-    node1.wakuPeerExchange.enrCache.add(record)
+    # Simulate node1 discovering node3 via Discv5
+    var info3 = node3.peerInfo.toRemotePeerInfo()
+    info3.enr = some(node3.enr)
+    node1.peerManager.addPeer(info3, PeerOrigin.Discv5)

    # When requesting 0 peers
    let response = await node2.wakuPeerExchangeClient.request(0)
@@ -312,13 +313,6 @@ suite "Waku Peer Exchange":
    await allFutures([node1.start(), node2.start()])
    await allFutures([node1.mountPeerExchangeClient(), node2.mountPeerExchange()])

-    # Mock that we have discovered one enr
-    var record = enr.Record()
-    check record.fromUri(
-      "enr:-Iu4QGNuTvNRulF3A4Kb9YHiIXLr0z_CpvWkWjWKU-o95zUPR_In02AWek4nsSk7G_-YDcaT4bDRPzt5JIWvFqkXSNcBgmlkgnY0gmlwhE0WsGeJc2VjcDI1NmsxoQKp9VzU2FAh7fwOwSpg1M_Ekz4zzl0Fpbg6po2ZwgVwQYN0Y3CC6mCFd2FrdTIB"
-    )
-    node2.wakuPeerExchange.enrCache.add(record)
-
    # When making any request with an invalid peer info
    var remotePeerInfo2 = node2.peerInfo.toRemotePeerInfo()
    remotePeerInfo2.peerId.data.add(255.byte)
@@ -362,17 +356,17 @@ suite "Waku Peer Exchange":
        newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0))
      node2 =
        newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0))
+      node3 =
+        newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0))

    # Start and mount peer exchange
-    await allFutures([node1.start(), node2.start()])
+    await allFutures([node1.start(), node2.start(), node3.start()])
    await allFutures([node1.mountPeerExchange(), node2.mountPeerExchange()])

-    # Mock that we have discovered these enrs
-    var enr1 = enr.Record()
-    check enr1.fromUri(
-      "enr:-Iu4QGNuTvNRulF3A4Kb9YHiIXLr0z_CpvWkWjWKU-o95zUPR_In02AWek4nsSk7G_-YDcaT4bDRPzt5JIWvFqkXSNcBgmlkgnY0gmlwhE0WsGeJc2VjcDI1NmsxoQKp9VzU2FAh7fwOwSpg1M_Ekz4zzl0Fpbg6po2ZwgVwQYN0Y3CC6mCFd2FrdTIB"
-    )
-    node1.wakuPeerExchange.enrCache.add(enr1)
+    # Simulate node1 discovering node3 via Discv5
+    var info3 = node3.peerInfo.toRemotePeerInfo()
+    info3.enr = some(node3.enr)
+    node1.peerManager.addPeer(info3, PeerOrigin.Discv5)

    # Create connection
    let connOpt = await node2.peerManager.dialPeer(
@@ -396,7 +390,7 @@ suite "Waku Peer Exchange":
    check:
      decodedBuff.get().response.status_code == PeerExchangeResponseStatusCode.SUCCESS
      decodedBuff.get().response.peerInfos.len == 1
-      decodedBuff.get().response.peerInfos[0].enr == enr1.raw
+      decodedBuff.get().response.peerInfos[0].enr == node3.enr.raw

  asyncTest "RateLimit as expected":
    let
@@ -404,9 +398,11 @@ suite "Waku Peer Exchange":
        newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0))
      node2 =
        newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0))
+      node3 =
+        newTestWakuNode(generateSecp256k1Key(), parseIpAddress("0.0.0.0"), Port(0))

    # Start and mount peer exchange
-    await allFutures([node1.start(), node2.start()])
+    await allFutures([node1.start(), node2.start(), node3.start()])
    await allFutures(
      [
        node1.mountPeerExchange(rateLimit = (1, 150.milliseconds)),
@@ -414,6 +410,11 @@ suite "Waku Peer Exchange":
      ]
    )

+    # Simulate node1 discovering node3 via Discv5
+    var info3 = node3.peerInfo.toRemotePeerInfo()
+    info3.enr = some(node3.enr)
+    node1.peerManager.addPeer(info3, PeerOrigin.Discv5)
+
    # Create connection
    let connOpt = await node2.peerManager.dialPeer(
      node1.switch.peerInfo.toRemotePeerInfo(), WakuPeerExchangeCodec
@@ -421,19 +422,6 @@ suite "Waku Peer Exchange":
    require:
      connOpt.isSome

-    # Create some enr and add to peer exchange (simulating disv5)
-    var enr1, enr2 = enr.Record()
-    check enr1.fromUri(
-      "enr:-Iu4QGNuTvNRulF3A4Kb9YHiIXLr0z_CpvWkWjWKU-o95zUPR_In02AWek4nsSk7G_-YDcaT4bDRPzt5JIWvFqkXSNcBgmlkgnY0gmlwhE0WsGeJc2VjcDI1NmsxoQKp9VzU2FAh7fwOwSpg1M_Ekz4zzl0Fpbg6po2ZwgVwQYN0Y3CC6mCFd2FrdTIB"
-    )
-    check enr2.fromUri(
-      "enr:-Iu4QGJllOWlviPIh_SGR-VVm55nhnBIU5L-s3ran7ARz_4oDdtJPtUs3Bc5aqZHCiPQX6qzNYF2ARHER0JPX97TFbEBgmlkgnY0gmlwhE0WsGeJc2VjcDI1NmsxoQP3ULycvday4EkvtVu0VqbBdmOkbfVLJx8fPe0lE_dRkIN0Y3CC6mCFd2FrdTIB"
-    )
-
-    # Mock that we have discovered these enrs
-    node1.wakuPeerExchange.enrCache.add(enr1)
-    node1.wakuPeerExchange.enrCache.add(enr2)
-
    await sleepAsync(150.milliseconds)

    # Request 2 peer from px. Test all request variants
Binary file not shown.
29
tests/waku_rln_relay/test_rln_contract_deployment.nim
Normal file
@@ -0,0 +1,29 @@
{.used.}

{.push raises: [].}

import std/[options, os], results, testutils/unittests, chronos, web3

import
  waku/[
    waku_rln_relay,
    waku_rln_relay/conversion_utils,
    waku_rln_relay/group_manager/on_chain/group_manager,
  ],
  ./utils_onchain

suite "Token and RLN Contract Deployment":
  test "anvil should dump state to file on exit":
    # git will ignore this file; if the contract has been updated and the state file needs to be regenerated, this file can be renamed to replace the one in the repo (tests/waku_rln_relay/anvil_state/state-deployed-contracts-mint-and-approved.json)
    let testStateFile = some("tests/waku_rln_relay/anvil_state/anvil_state.ignore.json")
    let anvilProc = runAnvil(stateFile = testStateFile, dumpStateOnExit = true)
    let manager = waitFor setupOnchainGroupManager(deployContracts = true)

    stopAnvil(anvilProc)

    check:
      fileExists(testStateFile.get())

    # The test should still pass even if the compression fails
    compressGzipFile(testStateFile.get(), testStateFile.get() & ".gz").isOkOr:
      error "Failed to compress state file", error = error
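compressGzipFile and its counterpart decompressGzipFile are added to tests/waku_rln_relay/utils_onchain.nim (diffed further below); both shell out to gzip/gunzip and return a Result. A minimal round-trip sketch under those signatures:

let state = "tests/waku_rln_relay/anvil_state/anvil_state.ignore.json"
compressGzipFile(state, state & ".gz").isOkOr:
  error "compression failed", error = error
decompressGzipFile(state & ".gz", state).isOkOr:
  error "decompression failed", error = error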
@@ -33,8 +33,8 @@ suite "Onchain group manager":
  var manager {.threadVar.}: OnchainGroupManager

  setup:
-    anvilProc = runAnvil()
-    manager = waitFor setupOnchainGroupManager()
+    anvilProc = runAnvil(stateFile = some(DEFAULT_ANVIL_STATE_PATH))
+    manager = waitFor setupOnchainGroupManager(deployContracts = false)

  teardown:
    stopAnvil(anvilProc)
@@ -27,8 +27,8 @@ suite "Waku rln relay":
  var manager {.threadVar.}: OnchainGroupManager

  setup:
-    anvilProc = runAnvil()
-    manager = waitFor setupOnchainGroupManager()
+    anvilProc = runAnvil(stateFile = some(DEFAULT_ANVIL_STATE_PATH))
+    manager = waitFor setupOnchainGroupManager(deployContracts = false)

  teardown:
    stopAnvil(anvilProc)
@@ -70,53 +70,6 @@ suite "Waku rln relay":

    info "the generated identity credential: ", idCredential

-  test "hash Nim Wrappers":
-    # create an RLN instance
-    let rlnInstance = createRLNInstanceWrapper()
-    require:
-      rlnInstance.isOk()
-
-    # prepare the input
-    let
-      msg = "Hello".toBytes()
-      hashInput = encodeLengthPrefix(msg)
-      hashInputBuffer = toBuffer(hashInput)
-
-    # prepare other inputs to the hash function
-    let outputBuffer = default(Buffer)
-
-    let hashSuccess = sha256(unsafeAddr hashInputBuffer, unsafeAddr outputBuffer, true)
-    require:
-      hashSuccess
-    let outputArr = cast[ptr array[32, byte]](outputBuffer.`ptr`)[]
-
-    check:
-      "1e32b3ab545c07c8b4a7ab1ca4f46bc31e4fdc29ac3b240ef1d54b4017a26e4c" ==
-        outputArr.inHex()
-
-    let
-      hashOutput = cast[ptr array[32, byte]](outputBuffer.`ptr`)[]
-      hashOutputHex = hashOutput.toHex()
-
-    info "hash output", hashOutputHex
-
-  test "sha256 hash utils":
-    # create an RLN instance
-    let rlnInstance = createRLNInstanceWrapper()
-    require:
-      rlnInstance.isOk()
-    let rln = rlnInstance.get()
-
-    # prepare the input
-    let msg = "Hello".toBytes()
-
-    let hashRes = sha256(msg)
-
-    check:
-      hashRes.isOk()
-      "1e32b3ab545c07c8b4a7ab1ca4f46bc31e4fdc29ac3b240ef1d54b4017a26e4c" ==
-        hashRes.get().inHex()
-
  test "poseidon hash utils":
    # create an RLN instance
    let rlnInstance = createRLNInstanceWrapper()
@@ -30,8 +30,8 @@ procSuite "WakuNode - RLN relay":
  var manager {.threadVar.}: OnchainGroupManager

  setup:
-    anvilProc = runAnvil()
-    manager = waitFor setupOnchainGroupManager()
+    anvilProc = runAnvil(stateFile = some(DEFAULT_ANVIL_STATE_PATH))
+    manager = waitFor setupOnchainGroupManager(deployContracts = false)

  teardown:
    stopAnvil(anvilProc)
@ -3,7 +3,7 @@
|
|||||||
{.push raises: [].}
|
{.push raises: [].}
|
||||||
|
|
||||||
import
|
import
|
||||||
std/[options, os, osproc, deques, streams, strutils, tempfiles, strformat],
|
std/[options, os, osproc, streams, strutils, strformat],
|
||||||
results,
|
results,
|
||||||
stew/byteutils,
|
stew/byteutils,
|
||||||
testutils/unittests,
|
testutils/unittests,
|
||||||
@ -14,7 +14,6 @@ import
|
|||||||
web3/conversions,
|
web3/conversions,
|
||||||
web3/eth_api_types,
|
web3/eth_api_types,
|
||||||
json_rpc/rpcclient,
|
json_rpc/rpcclient,
|
||||||
json,
|
|
||||||
libp2p/crypto/crypto,
|
libp2p/crypto/crypto,
|
||||||
eth/keys,
|
eth/keys,
|
||||||
results
|
results
|
||||||
@ -24,25 +23,19 @@ import
|
|||||||
waku_rln_relay,
|
waku_rln_relay,
|
||||||
waku_rln_relay/protocol_types,
|
waku_rln_relay/protocol_types,
|
||||||
waku_rln_relay/constants,
|
waku_rln_relay/constants,
|
||||||
waku_rln_relay/contract,
|
|
||||||
waku_rln_relay/rln,
|
waku_rln_relay/rln,
|
||||||
],
|
],
|
||||||
../testlib/common,
|
../testlib/common
|
||||||
./utils
|
|
||||||
|
|
||||||
const CHAIN_ID* = 1234'u256
|
const CHAIN_ID* = 1234'u256
|
||||||
|
|
||||||
template skip0xPrefix(hexStr: string): int =
|
# Path to the file which Anvil loads at startup to initialize the chain with pre-deployed contracts, an account funded with tokens and approved for spending
|
||||||
## Returns the index of the first meaningful char in `hexStr` by skipping
|
const DEFAULT_ANVIL_STATE_PATH* =
|
||||||
## "0x" prefix
|
"tests/waku_rln_relay/anvil_state/state-deployed-contracts-mint-and-approved.json.gz"
|
||||||
if hexStr.len > 1 and hexStr[0] == '0' and hexStr[1] in {'x', 'X'}: 2 else: 0
|
# The contract address of the TestStableToken used for the RLN Membership registration fee
|
||||||
|
const TOKEN_ADDRESS* = "0x5FbDB2315678afecb367f032d93F642f64180aa3"
|
||||||
func strip0xPrefix(s: string): string =
|
# The contract address used ti interact with the WakuRLNV2 contract via the proxy
|
||||||
let prefixLen = skip0xPrefix(s)
|
const WAKU_RLNV2_PROXY_ADDRESS* = "0x5fc8d32690cc91d4c39d9d3abcbd16989f875707"
|
||||||
if prefixLen != 0:
|
|
||||||
s[prefixLen .. ^1]
|
|
||||||
else:
|
|
||||||
s
|
|
||||||
|
|
||||||
proc generateCredentials*(): IdentityCredential =
|
proc generateCredentials*(): IdentityCredential =
|
||||||
let credRes = membershipKeyGen()
|
let credRes = membershipKeyGen()
|
||||||
@ -82,6 +75,10 @@ proc getForgePath(): string =
|
|||||||
forgePath = joinPath(forgePath, ".foundry/bin/forge")
|
forgePath = joinPath(forgePath, ".foundry/bin/forge")
|
||||||
return $forgePath
|
return $forgePath
|
||||||
|
|
||||||
|
template execForge(cmd: string): tuple[output: string, exitCode: int] =
|
||||||
|
# unset env vars that affect e.g. "forge script" before running forge
|
||||||
|
execCmdEx("unset ETH_FROM ETH_PASSWORD && " & cmd)
|
||||||
|
|
||||||
contract(ERC20Token):
|
contract(ERC20Token):
|
||||||
proc allowance(owner: Address, spender: Address): UInt256 {.view.}
|
proc allowance(owner: Address, spender: Address): UInt256 {.view.}
|
||||||
proc balanceOf(account: Address): UInt256 {.view.}
|
proc balanceOf(account: Address): UInt256 {.view.}
|
||||||
@ -102,7 +99,7 @@ proc sendMintCall(
|
|||||||
recipientAddress: Address,
|
recipientAddress: Address,
|
||||||
amountTokens: UInt256,
|
amountTokens: UInt256,
|
||||||
recipientBalanceBeforeExpectedTokens: Option[UInt256] = none(UInt256),
|
recipientBalanceBeforeExpectedTokens: Option[UInt256] = none(UInt256),
|
||||||
): Future[TxHash] {.async.} =
|
): Future[void] {.async.} =
|
||||||
let doBalanceAssert = recipientBalanceBeforeExpectedTokens.isSome()
|
let doBalanceAssert = recipientBalanceBeforeExpectedTokens.isSome()
|
||||||
|
|
||||||
if doBalanceAssert:
|
if doBalanceAssert:
|
||||||
@ -138,7 +135,7 @@ proc sendMintCall(
|
|||||||
tx.data = Opt.some(byteutils.hexToSeqByte(mintCallData))
|
tx.data = Opt.some(byteutils.hexToSeqByte(mintCallData))
|
||||||
|
|
||||||
trace "Sending mint call"
|
trace "Sending mint call"
|
||||||
let txHash = await web3.send(tx)
|
discard await web3.send(tx)
|
||||||
|
|
||||||
let balanceOfSelector = "0x70a08231"
|
let balanceOfSelector = "0x70a08231"
|
||||||
let balanceCallData = balanceOfSelector & paddedAddress
|
let balanceCallData = balanceOfSelector & paddedAddress
|
||||||
@ -153,8 +150,6 @@ proc sendMintCall(
|
|||||||
assert balanceAfterMint == balanceAfterExpectedTokens,
|
assert balanceAfterMint == balanceAfterExpectedTokens,
|
||||||
fmt"Balance is {balanceAfterMint} after transfer but expected {balanceAfterExpectedTokens}"
|
fmt"Balance is {balanceAfterMint} after transfer but expected {balanceAfterExpectedTokens}"
|
||||||
|
|
||||||
return txHash
|
|
||||||
|
|
||||||
# Check how many tokens a spender (the RLN contract) is allowed to spend on behalf of the owner (account which wishes to register a membership)
|
# Check how many tokens a spender (the RLN contract) is allowed to spend on behalf of the owner (account which wishes to register a membership)
|
||||||
proc checkTokenAllowance(
|
proc checkTokenAllowance(
|
||||||
web3: Web3, tokenAddress: Address, owner: Address, spender: Address
|
web3: Web3, tokenAddress: Address, owner: Address, spender: Address
|
||||||
@ -225,11 +220,14 @@ proc deployTestToken*(
|
|||||||
# Deploy TestToken contract
|
# Deploy TestToken contract
|
||||||
let forgeCmdTestToken =
|
let forgeCmdTestToken =
|
||||||
fmt"""cd {submodulePath} && {forgePath} script test/TestToken.sol --broadcast -vvv --rpc-url http://localhost:8540 --tc TestTokenFactory --private-key {pk} && rm -rf broadcast/*/*/run-1*.json && rm -rf cache/*/*/run-1*.json"""
|
fmt"""cd {submodulePath} && {forgePath} script test/TestToken.sol --broadcast -vvv --rpc-url http://localhost:8540 --tc TestTokenFactory --private-key {pk} && rm -rf broadcast/*/*/run-1*.json && rm -rf cache/*/*/run-1*.json"""
|
||||||
let (outputDeployTestToken, exitCodeDeployTestToken) = execCmdEx(forgeCmdTestToken)
|
let (outputDeployTestToken, exitCodeDeployTestToken) = execForge(forgeCmdTestToken)
|
||||||
trace "Executed forge command to deploy TestToken contract",
|
trace "Executed forge command to deploy TestToken contract",
|
||||||
output = outputDeployTestToken
|
output = outputDeployTestToken
|
||||||
if exitCodeDeployTestToken != 0:
|
if exitCodeDeployTestToken != 0:
|
||||||
return error("Forge command to deploy TestToken contract failed")
|
error "Forge command to deploy TestToken contract failed",
|
||||||
|
error = outputDeployTestToken
|
||||||
|
return
|
||||||
|
err("Forge command to deploy TestToken contract failed: " & outputDeployTestToken)
|
||||||
|
|
||||||
# Parse the command output to find contract address
|
# Parse the command output to find contract address
|
||||||
let testTokenAddress = getContractAddressFromDeployScriptOutput(outputDeployTestToken).valueOr:
|
let testTokenAddress = getContractAddressFromDeployScriptOutput(outputDeployTestToken).valueOr:
|
||||||
@ -351,7 +349,7 @@ proc executeForgeContractDeployScripts*(
|
|||||||
let forgeCmdPriceCalculator =
|
let forgeCmdPriceCalculator =
|
||||||
fmt"""cd {submodulePath} && {forgePath} script script/Deploy.s.sol --broadcast -vvvv --rpc-url http://localhost:8540 --tc DeployPriceCalculator --private-key {privateKey} && rm -rf broadcast/*/*/run-1*.json && rm -rf cache/*/*/run-1*.json"""
|
fmt"""cd {submodulePath} && {forgePath} script script/Deploy.s.sol --broadcast -vvvv --rpc-url http://localhost:8540 --tc DeployPriceCalculator --private-key {privateKey} && rm -rf broadcast/*/*/run-1*.json && rm -rf cache/*/*/run-1*.json"""
|
||||||
let (outputDeployPriceCalculator, exitCodeDeployPriceCalculator) =
|
let (outputDeployPriceCalculator, exitCodeDeployPriceCalculator) =
|
||||||
execCmdEx(forgeCmdPriceCalculator)
|
execForge(forgeCmdPriceCalculator)
|
||||||
trace "Executed forge command to deploy LinearPriceCalculator contract",
|
trace "Executed forge command to deploy LinearPriceCalculator contract",
|
||||||
output = outputDeployPriceCalculator
|
output = outputDeployPriceCalculator
|
||||||
if exitCodeDeployPriceCalculator != 0:
|
if exitCodeDeployPriceCalculator != 0:
|
||||||
@ -368,7 +366,7 @@ proc executeForgeContractDeployScripts*(

   let forgeCmdWakuRln =
     fmt"""cd {submodulePath} && {forgePath} script script/Deploy.s.sol --broadcast -vvvv --rpc-url http://localhost:8540 --tc DeployWakuRlnV2 --private-key {privateKey} && rm -rf broadcast/*/*/run-1*.json && rm -rf cache/*/*/run-1*.json"""
-  let (outputDeployWakuRln, exitCodeDeployWakuRln) = execCmdEx(forgeCmdWakuRln)
+  let (outputDeployWakuRln, exitCodeDeployWakuRln) = execForge(forgeCmdWakuRln)
   trace "Executed forge command to deploy WakuRlnV2 contract",
     output = outputDeployWakuRln
   if exitCodeDeployWakuRln != 0:
@ -388,7 +386,7 @@ proc executeForgeContractDeployScripts*(
   # Deploy Proxy contract
   let forgeCmdProxy =
     fmt"""cd {submodulePath} && {forgePath} script script/Deploy.s.sol --broadcast -vvvv --rpc-url http://localhost:8540 --tc DeployProxy --private-key {privateKey} && rm -rf broadcast/*/*/run-1*.json && rm -rf cache/*/*/run-1*.json"""
-  let (outputDeployProxy, exitCodeDeployProxy) = execCmdEx(forgeCmdProxy)
+  let (outputDeployProxy, exitCodeDeployProxy) = execForge(forgeCmdProxy)
   trace "Executed forge command to deploy proxy contract", output = outputDeployProxy
   if exitCodeDeployProxy != 0:
     error "Forge command to deploy Proxy failed", error = outputDeployProxy
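Editor's note: the three hunks above all swap execCmdEx for an execForge helper. Its definition is not part of the hunks shown here; purely as a hypothetical sketch of the shape these call sites rely on, it would have to return the same (output, exitCode) tuple as execCmdEx:

import std/[osproc, strutils]

# Hypothetical sketch only -- the real execForge is defined elsewhere in this
# changeset. The call sites above expect the execCmdEx-style result tuple.
proc execForge(cmd: string): tuple[output: string, exitCode: int] =
  ## Run a forge shell command and return its combined output and exit code.
  let (output, exitCode) = execCmdEx(cmd)
  (output.strip(), exitCode)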
@ -480,20 +478,64 @@ proc getAnvilPath*(): string =
     anvilPath = joinPath(anvilPath, ".foundry/bin/anvil")
   return $anvilPath

+proc decompressGzipFile*(
+    compressedPath: string, targetPath: string
+): Result[void, string] =
+  ## Decompress a gzipped file using the gunzip command-line utility
+  let cmd = fmt"gunzip -c {compressedPath} > {targetPath}"
+
+  try:
+    let (output, exitCode) = execCmdEx(cmd)
+    if exitCode != 0:
+      return err(
+        "Failed to decompress '" & compressedPath & "' to '" & targetPath & "': " &
+          output
+      )
+  except OSError as e:
+    return err("Failed to execute gunzip command: " & e.msg)
+  except IOError as e:
+    return err("Failed to execute gunzip command: " & e.msg)
+
+  ok()
+
+proc compressGzipFile*(sourcePath: string, targetPath: string): Result[void, string] =
+  ## Compress a file with gzip using the gzip command-line utility
+  let cmd = fmt"gzip -c {sourcePath} > {targetPath}"
+
+  try:
+    let (output, exitCode) = execCmdEx(cmd)
+    if exitCode != 0:
+      return err(
+        "Failed to compress '" & sourcePath & "' to '" & targetPath & "': " & output
+      )
+  except OSError as e:
+    return err("Failed to execute gzip command: " & e.msg)
+  except IOError as e:
+    return err("Failed to execute gzip command: " & e.msg)
+
+  ok()
+
 # Runs Anvil daemon
-proc runAnvil*(port: int = 8540, chainId: string = "1234"): Process =
+proc runAnvil*(
+    port: int = 8540,
+    chainId: string = "1234",
+    stateFile: Option[string] = none(string),
+    dumpStateOnExit: bool = false,
+): Process =
   # Passed options are
   # --port       Port to listen on.
   # --gas-limit  Sets the block gas limit in WEI.
   # --balance    The default account balance, specified in ether.
   # --chain-id   Chain ID of the network.
+  # --load-state Initialize the chain from a previously saved state snapshot (read-only)
+  # --dump-state Dump the state on exit to the given file (write-only)
   # See anvil documentation https://book.getfoundry.sh/reference/anvil/ for more details
   try:
     let anvilPath = getAnvilPath()
     info "Anvil path", anvilPath
-    let runAnvil = startProcess(
-      anvilPath,
-      args = [
+    var args =
+      @[
         "--port",
         $port,
         "--gas-limit",
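Editor's note: a quick illustration of the two gzip helpers added above. The file names are placeholders, and both helpers shell out, so gzip/gunzip must be on PATH:

import std/strformat
import results

# Sketch: round-trip a hypothetical anvil state file through the helpers.
compressGzipFile("anvil_state.json", "anvil_state.json.gz").isOkOr:
  echo "compress failed: ", error
  quit(1)
decompressGzipFile("anvil_state.json.gz", "anvil_state_copy.json").isOkOr:
  echo "decompress failed: ", error
  quit(1)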
@ -502,9 +544,54 @@ proc runAnvil*(port: int = 8540, chainId: string = "1234"): Process =
         "1000000000",
         "--chain-id",
         $chainId,
-      ],
-      options = {poUsePath},
-    )
+      ]
+    # Add state file argument if provided
+    if stateFile.isSome():
+      var statePath = stateFile.get()
+      info "State file parameter provided",
+        statePath = statePath,
+        dumpStateOnExit = dumpStateOnExit,
+        absolutePath = absolutePath(statePath)
+
+      # Check if the file is gzip compressed and handle decompression
+      if statePath.endsWith(".gz"):
+        let decompressedPath = statePath[0 .. ^4] # Remove .gz extension
+        debug "Gzip compressed state file detected",
+          compressedPath = statePath, decompressedPath = decompressedPath
+
+        if not fileExists(decompressedPath):
+          decompressGzipFile(statePath, decompressedPath).isOkOr:
+            error "Failed to decompress state file", error = error
+            return nil
+
+        statePath = decompressedPath
+
+      if dumpStateOnExit:
+        # Ensure the directory exists
+        let stateDir = parentDir(statePath)
+        if not dirExists(stateDir):
+          createDir(stateDir)
+        # Fresh deployment: start clean and dump state on exit
+        args.add("--dump-state")
+        args.add(statePath)
+        debug "Anvil configured to dump state on exit", path = statePath
+      else:
+        # Using cache: only load state, don't overwrite it (preserves clean cached state)
+        if fileExists(statePath):
+          args.add("--load-state")
+          args.add(statePath)
+          debug "Anvil configured to load state file (read-only)", path = statePath
+        else:
+          warn "State file does not exist, anvil will start fresh",
+            path = statePath, absolutePath = absolutePath(statePath)
+    else:
+      info "No state file provided, anvil will start fresh without state persistence"
+
+    info "Starting anvil with arguments", args = args.join(" ")
+
+    let runAnvil =
+      startProcess(anvilPath, args = args, options = {poUsePath, poStdErrToStdOut})
     let anvilPID = runAnvil.processID

     # We read stdout from Anvil to see when daemon is ready
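Editor's note: to make the new parameters concrete, a hedged usage sketch (the path is a placeholder, not from the diff). With dumpStateOnExit = true anvil writes the chain state to the file on graceful exit; with the default false it only loads an existing file read-only:

# Sketch only; "anvil_state.json" is an assumed path.
let seeding = runAnvil(stateFile = some("anvil_state.json"), dumpStateOnExit = true)
# ... deploy contracts, then stopAnvil(seeding) lets anvil dump the state ...
let cached = runAnvil(stateFile = some("anvil_state.json")) # --load-state, read-only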
@ -516,7 +603,13 @@ proc runAnvil*(port: int = 8540, chainId: string = "1234"): Process =
         anvilStartLog.add(cmdline)
         if cmdline.contains("Listening on 127.0.0.1:" & $port):
           break
+      else:
+        error "Anvil daemon exited (closed output)",
+          pid = anvilPID, startLog = anvilStartLog
+        return
     except Exception, CatchableError:
+      warn "Anvil daemon stdout reading error; assuming it started OK",
+        pid = anvilPID, startLog = anvilStartLog, err = getCurrentExceptionMsg()
       break
   info "Anvil daemon is running and ready", pid = anvilPID, startLog = anvilStartLog
   return runAnvil
@ -536,7 +629,14 @@ proc stopAnvil*(runAnvil: Process) {.used.} =
     # Send termination signals
     when not defined(windows):
       discard execCmdEx(fmt"kill -TERM {anvilPID}")
-      discard execCmdEx(fmt"kill -9 {anvilPID}")
+      # Wait for graceful shutdown to allow state dumping
+      sleep(200)
+      # Only force kill if process is still running
+      let checkResult = execCmdEx(fmt"kill -0 {anvilPID} 2>/dev/null")
+      if checkResult.exitCode == 0:
+        info "Anvil process still running after TERM signal, sending KILL",
+          anvilPID = anvilPID
+        discard execCmdEx(fmt"kill -9 {anvilPID}")
     else:
       discard execCmdEx(fmt"taskkill /F /PID {anvilPID}")

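Editor's note on the change above: anvil only writes its --dump-state snapshot on a graceful exit, so the old unconditional kill -9 right after TERM could discard the state the new caching flow depends on. kill -0 sends no signal; its exit code merely reports whether the PID still exists, so the hard kill now happens only when the 200 ms grace period was not enough.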
@ -547,52 +647,100 @@ proc stopAnvil*(runAnvil: Process) {.used.} =
     info "Error stopping Anvil daemon", anvilPID = anvilPID, error = e.msg

 proc setupOnchainGroupManager*(
-    ethClientUrl: string = EthClient, amountEth: UInt256 = 10.u256
+    ethClientUrl: string = EthClient,
+    amountEth: UInt256 = 10.u256,
+    deployContracts: bool = true,
 ): Future[OnchainGroupManager] {.async.} =
+  ## Setup an onchain group manager for testing.
+  ## If deployContracts is false, it assumes the Anvil testnet already has the required contracts deployed; this significantly speeds up test runs.
+  ## To run Anvil with a cached state file containing pre-deployed contracts, see the runAnvil documentation.
+  ##
+  ## To generate/update the cached state file:
+  ## 1. Call runAnvil with stateFile and dumpStateOnExit=true
+  ## 2. Run setupOnchainGroupManager with deployContracts=true to deploy contracts
+  ## 3. The state will be saved to the specified file when anvil exits
+  ## 4. Commit this file to git
+  ##
+  ## To use cached state:
+  ## 1. Call runAnvil with stateFile and dumpStateOnExit=false
+  ## 2. Anvil loads state in read-only mode (won't overwrite the cached file)
+  ## 3. Call setupOnchainGroupManager with deployContracts=false
+  ## 4. Tests run fast using pre-deployed contracts
   let rlnInstanceRes = createRlnInstance()
   check:
     rlnInstanceRes.isOk()

   let rlnInstance = rlnInstanceRes.get()

-  # connect to the eth client
   let web3 = await newWeb3(ethClientUrl)
   let accounts = await web3.provider.eth_accounts()
   web3.defaultAccount = accounts[1]

-  let (privateKey, acc) = createEthAccount(web3)
+  var privateKey: keys.PrivateKey
+  var acc: Address
+  var testTokenAddress: Address
+  var contractAddress: Address

-  # we just need to fund the default account
-  # the send procedure returns a tx hash that we don't use, hence discard
-  discard await sendEthTransfer(
-    web3, web3.defaultAccount, acc, ethToWei(1000.u256), some(0.u256)
-  )
-
-  let testTokenAddress = (await deployTestToken(privateKey, acc, web3)).valueOr:
-    assert false, "Failed to deploy test token contract: " & $error
-    return
-
-  # mint the token from the generated account
-  discard await sendMintCall(
-    web3, web3.defaultAccount, testTokenAddress, acc, ethToWei(1000.u256), some(0.u256)
-  )
-
-  let contractAddress = (await executeForgeContractDeployScripts(privateKey, acc, web3)).valueOr:
-    assert false, "Failed to deploy RLN contract: " & $error
-    return
-
-  # If the generated account wishes to register a membership, it needs to approve the contract to spend its tokens
-  let tokenApprovalResult = await approveTokenAllowanceAndVerify(
-    web3,
-    acc,
-    privateKey,
-    testTokenAddress,
-    contractAddress,
-    ethToWei(200.u256),
-    some(0.u256),
-  )
-
-  assert tokenApprovalResult.isOk, tokenApprovalResult.error()
+  if not deployContracts:
+    info "Using contract addresses from constants"
+    testTokenAddress = Address(hexToByteArray[20](TOKEN_ADDRESS))
+    contractAddress = Address(hexToByteArray[20](WAKU_RLNV2_PROXY_ADDRESS))
+
+    (privateKey, acc) = createEthAccount(web3)
+
+    # Fund the test account
+    discard await sendEthTransfer(web3, web3.defaultAccount, acc, ethToWei(1000.u256))
+
+    # Mint tokens to the test account
+    await sendMintCall(
+      web3, web3.defaultAccount, testTokenAddress, acc, ethToWei(1000.u256)
+    )
+
+    # Approve the contract to spend tokens
+    let tokenApprovalResult = await approveTokenAllowanceAndVerify(
+      web3, acc, privateKey, testTokenAddress, contractAddress, ethToWei(200.u256)
+    )
+    assert tokenApprovalResult.isOk(), tokenApprovalResult.error
+  else:
+    info "Performing Token and RLN contracts deployment"
+    (privateKey, acc) = createEthAccount(web3)
+
+    # fund the default account
+    discard await sendEthTransfer(
+      web3, web3.defaultAccount, acc, ethToWei(1000.u256), some(0.u256)
+    )
+
+    testTokenAddress = (await deployTestToken(privateKey, acc, web3)).valueOr:
+      assert false, "Failed to deploy test token contract: " & $error
+      return
+
+    # mint the token from the generated account
+    await sendMintCall(
+      web3,
+      web3.defaultAccount,
+      testTokenAddress,
+      acc,
+      ethToWei(1000.u256),
+      some(0.u256),
+    )
+
+    contractAddress = (await executeForgeContractDeployScripts(privateKey, acc, web3)).valueOr:
+      assert false, "Failed to deploy RLN contract: " & $error
+      return
+
+    # If the generated account wishes to register a membership, it needs to approve the contract to spend its tokens
+    let tokenApprovalResult = await approveTokenAllowanceAndVerify(
+      web3,
+      acc,
+      privateKey,
+      testTokenAddress,
+      contractAddress,
+      ethToWei(200.u256),
+      some(0.u256),
+    )
+
+    assert tokenApprovalResult.isOk(), tokenApprovalResult.error
+
   let manager = OnchainGroupManager(
     ethClientUrls: @[ethClientUrl],
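Editor's note: condensed from the doc comment above, the two phases of the caching flow would look roughly like this in a test module (DEFAULT_ANVIL_STATE_PATH is taken from the health-suite hunk further down; everything else is a sketch, not code from the diff):

# Phase 1 -- run once to seed the cache, then commit the state file:
let seedAnvil = runAnvil(
  stateFile = some(DEFAULT_ANVIL_STATE_PATH), dumpStateOnExit = true
)
discard waitFor setupOnchainGroupManager(deployContracts = true)
stopAnvil(seedAnvil) # graceful exit triggers --dump-state

# Phase 2 -- every normal test run reuses the pre-deployed contracts:
let anvil = runAnvil(stateFile = some(DEFAULT_ANVIL_STATE_PATH))
let manager = waitFor setupOnchainGroupManager(deployContracts = false)
stopAnvil(anvil)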
@ -65,7 +65,7 @@ suite "Waku v2 Rest API - Admin":
   ): Future[void] {.async, gcsafe.} =
     await sleepAsync(0.milliseconds)

-  let shard = RelayShard(clusterId: clusterId, shardId: 0)
+  let shard = RelayShard(clusterId: clusterId, shardId: 5)
   node1.subscribe((kind: PubsubSub, topic: $shard), simpleHandler).isOkOr:
     assert false, "Failed to subscribe to topic: " & $error
   node2.subscribe((kind: PubsubSub, topic: $shard), simpleHandler).isOkOr:
@ -212,6 +212,18 @@ suite "Waku v2 Rest API - Admin":
     let conn2 = await node1.peerManager.connectPeer(peerInfo2)
     let conn3 = await node1.peerManager.connectPeer(peerInfo3)

+    var count = 0
+    while count < 20:
+      ## Wait ~1s at most for the peer store to update shard info
+      let getRes = await client.getPeers()
+      if getRes.data.allIt(it.shards == @[5.uint16]):
+        break
+
+      count.inc()
+      await sleepAsync(50.milliseconds)
+
+    assert count < 20, "Timeout waiting for shards to be updated in peer store"
+
     # Check successful connections
     check:
       conn2 == true
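Editor's note: the loop added above is a bounded poll; it retries getPeers up to 20 times at 50 ms intervals until every peer reports shard 5. Generalised as a sketch (the helper below is not in the diff):

import chronos

# Hypothetical helper: poll an async condition until it holds or the
# attempt budget is exhausted; returns whether the condition was met.
proc waitUntil(
    cond: proc(): Future[bool] {.gcsafe.},
    attempts = 20,
    interval = 50.milliseconds,
): Future[bool] {.async.} =
  for _ in 0 ..< attempts:
    if await cond():
      return true
    await sleepAsync(interval)
  return false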
@ -41,8 +41,8 @@ suite "Waku v2 REST API - health":
   var manager {.threadVar.}: OnchainGroupManager

   setup:
-    anvilProc = runAnvil()
-    manager = waitFor setupOnchainGroupManager()
+    anvilProc = runAnvil(stateFile = some(DEFAULT_ANVIL_STATE_PATH))
+    manager = waitFor setupOnchainGroupManager(deployContracts = false)

   teardown:
     stopAnvil(anvilProc)
@ -61,7 +61,7 @@ proc init(
     assert false, "Failed to mount relay: " & $error
   (await testSetup.serviceNode.mountRelay()).isOkOr:
     assert false, "Failed to mount relay: " & $error
-  await testSetup.serviceNode.mountLightPush(rateLimit)
+  check (await testSetup.serviceNode.mountLightPush(rateLimit)).isOk()
   testSetup.pushNode.mountLightPushClient()

   testSetup.serviceNode.peerManager.addServicePeer(
@ -61,7 +61,7 @@ proc init(
     assert false, "Failed to mount relay"
   (await testSetup.serviceNode.mountRelay()).isOkOr:
     assert false, "Failed to mount relay"
-  await testSetup.serviceNode.mountLegacyLightPush(rateLimit)
+  check (await testSetup.serviceNode.mountLegacyLightPush(rateLimit)).isOk()
   testSetup.pushNode.mountLegacyLightPushClient()

   testSetup.serviceNode.peerManager.addServicePeer(
Some files were not shown because too many files have changed in this diff.