Merge branch 'master' into feat/subscription_service

Author: Fabiana Cecin, 2026-02-25 14:34:50 -03:00
Commit: bc8a2f61a5 (GPG Key ID: BCAB8A55CB51B6C7; no known key found for this signature in database)
32 changed files with 612 additions and 222 deletions


@@ -21,15 +21,21 @@ All items below are to be completed by the owner of the given release.
 - [ ] Assign release candidate tag to the release branch HEAD (e.g. `v0.X.0-beta-rc.0`, `v0.X.0-beta-rc.1`, ... `v0.X.0-beta-rc.N`).
 - [ ] Generate and edit release notes in CHANGELOG.md.
-- [ ] **Waku test and fleets validation**
-  - [ ] Ensure all the unit tests (specifically logos-delivery-js tests) are green against the release candidate.
-  - [ ] Deploy the release candidate to `waku.test` only through [deploy-waku-test job](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-test/) and wait for it to finish (Jenkins access required; ask the infra team if you don't have it).
-    - After completion, disable [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to master.
-    - Verify the deployed version at https://fleets.waku.org/.
-    - Confirm the container image exists on [Harbor](https://harbor.status.im/harbor/projects/9/repositories/logos-delivery/artifacts-tab).
-  - [ ] Analyze Kibana logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test`.
-    - Most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")`.
-  - [ ] Enable again the `waku.test` fleet to resume auto-deployment of the latest `master` commit.
+- [ ] **Validation of release candidate**
+  - [ ] **Automated testing**
+    - [ ] Ensure all the unit tests (specifically logos-messaging-js tests) are green against the release candidate.
+  - [ ] **Waku fleet testing**
+    - [ ] Deploy the release candidate to `waku.test` through the [deploy-waku-test job](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-test/) and wait for it to finish (Jenkins access required; ask the infra team if you don't have it).
+      - After completion, disable the fleet so that daily CI does not override your release candidate.
+      - Verify at https://fleets.waku.org/ that the fleet is locked to the release candidate image.
+      - Confirm the container image exists on [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab).
+    - [ ] Search [Kibana logs](https://kibana.infra.status.im/app/discover) from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test`.
+      - Set the time range to "Last 30 days" (or since the last release).
+      - The most relevant search queries are `(fleet: "waku.test" AND message: "SIGSEGV")`, `(fleet: "waku.test" AND message: "exception")`, and `(fleet: "waku.test" AND message: "error")`.
+      - Document any crashes or errors found.
+    - [ ] If `waku.test` validation is successful, deploy to `waku.sandbox` using the [deploy-waku-sandbox job](https://ci.infra.status.im/job/nim-waku/job/deploy-waku-sandbox/).
+    - [ ] Search [Kibana logs](https://kibana.infra.status.im/app/discover) for `waku.sandbox`: `(fleet: "waku.sandbox" AND message: "SIGSEGV")`, `(fleet: "waku.sandbox" AND message: "exception")`, `(fleet: "waku.sandbox" AND message: "error")`. If `waku.test` shows no crashes or errors, `waku.sandbox` most likely will not either.
+    - [ ] Enable the `waku.test` fleet again to resume auto-deployment of the latest `master` commit.
 - [ ] **Proceed with release**
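The Kibana searches in the checklist above share one query shape per fleet. As a small illustrative sketch (the helper name is ours; only the query strings themselves come from the checklist), the full set of Lucene queries can be generated per fleet:

```python
# Build the Lucene query strings used in the Kibana crash/error searches above.
# The query text comes from the release checklist; the helper is illustrative.

KEYWORDS = ("SIGSEGV", "exception", "error")

def crash_queries(fleet: str) -> list[str]:
    """Return one Lucene query per crash/error keyword for the given fleet."""
    return [f'(fleet: "{fleet}" AND message: "{kw}")' for kw in KEYWORDS]

for query in crash_queries("waku.test"):
    print(query)
```

The same helper covers the `waku.sandbox` step, since only the fleet name changes between the two searches.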
@@ -53,4 +59,5 @@ All items below are to be completed by the owner of the given release.
 - [Infra-nim-waku](https://github.com/status-im/infra-nim-waku)
 - [Jenkins](https://ci.infra.status.im/job/nim-waku/)
 - [Fleets](https://fleets.waku.org/)
-- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/logos-delivery/artifacts-tab)
+- [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab)
+- [Kibana](https://kibana.infra.status.im/app/)


@@ -24,33 +24,39 @@ All items below are to be completed by the owner of the given release.
 - [ ] **Validation of release candidate**
   - [ ] **Automated testing**
-    - [ ] Ensure all the unit tests (specifically logos-delivery-js tests) are green against the release candidate.
-    - [ ] Ask Vac-QA and Vac-DST to perform the available tests against the release candidate.
-      - [ ] Vac-DST (an additional report is needed; see [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f))
+    - [ ] Ensure all the unit tests (specifically logos-messaging-js tests) are green against the release candidate.
   - [ ] **Waku fleet testing**
-    - [ ] Deploy the release candidate to `waku.test` and `waku.sandbox` fleets.
-      - Start the [deployment job](https://ci.infra.status.im/job/nim-waku/) for both fleets and wait for it to finish (Jenkins access required; ask the infra team if you don't have it).
-      - After completion, disable [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to `master`.
-      - Verify the deployed version at https://fleets.waku.org/.
+    - [ ] Deploy the release candidate to the `waku.test` fleet.
+      - Start the [deployment job](https://ci.infra.status.im/job/nim-waku/) and wait for it to finish (Jenkins access required; ask the infra team if you don't have it).
+      - After completion, disable the fleet so that daily CI does not override your release candidate.
+      - Verify at https://fleets.waku.org/ that the fleet is locked to the release candidate image.
       - Confirm the container image exists on [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab).
-    - [ ] Search _Kibana_ logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test` and `waku.sandbox`.
-      - Most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")` OR `(fleet: "waku.sandbox" AND message: "SIGSEGV")`.
-    - [ ] Enable again the `waku.test` fleet to resume auto-deployment of the latest `master` commit.
+    - [ ] Search [Kibana logs](https://kibana.infra.status.im/app/discover) from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test`.
+      - Set the time range to "Last 30 days" (or since the last release).
+      - The most relevant search queries are `(fleet: "waku.test" AND message: "SIGSEGV")`, `(fleet: "waku.test" AND message: "exception")`, and `(fleet: "waku.test" AND message: "error")`.
+      - Document any crashes or errors found.
+    - [ ] If `waku.test` validation is successful, deploy to `waku.sandbox` using the same [deployment job](https://ci.infra.status.im/job/nim-waku/).
+    - [ ] Search [Kibana logs](https://kibana.infra.status.im/app/discover) for `waku.sandbox`: `(fleet: "waku.sandbox" AND message: "SIGSEGV")`, `(fleet: "waku.sandbox" AND message: "exception")`, `(fleet: "waku.sandbox" AND message: "error")`. If `waku.test` shows no crashes or errors, `waku.sandbox` most likely will not either.
+    - [ ] Enable the `waku.test` fleet again to resume auto-deployment of the latest `master` commit.
+  - [ ] **QA and DST testing**
+    - [ ] Ask Vac-QA and Vac-DST to run their available tests against the release candidate; share all release candidates with both teams.
+    - [ ] Vac-DST: an additional report is needed ([see this example](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f)). Inform the DST team about the expectations for this RC, e.g. whether we expect higher or lower bandwidth consumption.
   - [ ] **Status fleet testing**
     - [ ] Deploy release candidate to `status.staging`
     - [ ] Perform [sanity check](https://www.notion.so/How-to-test-Nwaku-on-Status-12c6e4b9bf06420ca868bd199129b425) and log results as comments in this issue.
     - [ ] Connect 2 instances to `status.staging` fleet, one in relay mode, the other one in light client.
       - 1:1 Chats with each other
       - Send and receive messages in a community
       - Close one instance, send messages with second instance, reopen first instance and confirm messages sent while offline are retrieved from store
     - [ ] Perform checks based on _end user impact_
     - [ ] Inform other (Waku and Status) CCs to point their instances to `status.staging` for a few days. Ping Status colleagues on their Discord server or in the [Status community](https://status.app/c/G3kAAMSQtb05kog3aGbr3kiaxN4tF5xy4BAGEkkLwILk2z3GcoYlm5hSJXGn7J3laft-tnTwDWmYJ18dP_3bgX96dqr_8E3qKAvxDf3NrrCMUBp4R9EYkQez9XSM4486mXoC3mIln2zc-TNdvjdfL9eHVZ-mGgs=#zQ3shZeEJqTC1xhGUjxuS4rtHSrhJ8vUYp64v6qWkLpvdy9L9) (this is not a blocking point.)
     - [ ] Ask Status-QA to perform sanity checks (as described above) and checks based on _end user impact_; specify the version being tested
     - [ ] Ask Status-QA or infra to run the automated Status e2e tests against `status.staging`
     - [ ] Get other CCs' sign-off: they should comment on this PR, e.g., "Used the app for a week, no problem." If problems are reported, resolve them and create a new RC.
     - [ ] **Get Status-QA sign-off**, ensuring that the `status.test` update will not disturb ongoing activities.
 - [ ] **Proceed with release**
@@ -74,3 +80,4 @@ All items below are to be completed by the owner of the given release.
 - [Jenkins](https://ci.infra.status.im/job/nim-waku/)
 - [Fleets](https://fleets.waku.org/)
 - [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab)
+- [Kibana](https://kibana.infra.status.im/app/)


@@ -30,6 +30,7 @@ import
     protobuf/minprotobuf, # message serialisation/deserialisation from and to protobufs
     nameresolving/dnsresolver,
     protocols/mix/curve25519,
+    protocols/mix/mix_protocol,
   ] # define DNS resolution
 import
   waku/[
@@ -38,6 +39,7 @@ import
     waku_lightpush/rpc,
     waku_enr,
     discovery/waku_dnsdisc,
+    discovery/waku_kademlia,
     waku_node,
     node/waku_metrics,
     node/peer_manager,
@@ -453,14 +455,48 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
   (await node.mountMix(conf.clusterId, mixPrivKey, conf.mixnodes)).isOkOr:
     error "failed to mount waku mix protocol: ", error = $error
     quit(QuitFailure)
-  await node.mountRendezvousClient(conf.clusterId)
+
+  # Setup extended kademlia discovery if bootstrap nodes are provided
+  if conf.kadBootstrapNodes.len > 0:
+    var kadBootstrapPeers: seq[(PeerId, seq[MultiAddress])]
+    for nodeStr in conf.kadBootstrapNodes:
+      let (peerId, ma) = parseFullAddress(nodeStr).valueOr:
+        error "Failed to parse kademlia bootstrap node", node = nodeStr, error = error
+        continue
+      kadBootstrapPeers.add((peerId, @[ma]))
+
+    if kadBootstrapPeers.len > 0:
+      node.wakuKademlia = WakuKademlia.new(
+        node.switch,
+        ExtendedKademliaDiscoveryParams(
+          bootstrapNodes: kadBootstrapPeers,
+          mixPubKey: some(mixPubKey),
+          advertiseMix: false,
+        ),
+        node.peerManager,
+        getMixNodePoolSize = proc(): int {.gcsafe, raises: [].} =
+          if node.wakuMix.isNil():
+            0
+          else:
+            node.getMixNodePoolSize(),
+        isNodeStarted = proc(): bool {.gcsafe, raises: [].} =
+          node.started,
+      ).valueOr:
+        error "failed to setup kademlia discovery", error = error
+        quit(QuitFailure)
+
+  #await node.mountRendezvousClient(conf.clusterId)

   await node.start()
   node.peerManager.start()
+
+  if not node.wakuKademlia.isNil():
+    (await node.wakuKademlia.start(minMixPeers = MinMixNodePoolSize)).isOkOr:
+      error "failed to start kademlia discovery", error = error
+      quit(QuitFailure)
+
   await node.mountLibp2pPing()
-  await node.mountPeerExchangeClient()
+  #await node.mountPeerExchangeClient()
   let pubsubTopic = conf.getPubsubTopic(node, conf.contentTopic)
   echo "pubsub topic is: " & pubsubTopic
   let nick = await readNick(transp)
@@ -601,11 +637,6 @@ proc processInput(rfd: AsyncFD, rng: ref HmacDrbgContext) {.async.} =
     node, pubsubTopic, conf.contentTopic, servicePeerInfo, false
   )
   echo "waiting for mix nodes to be discovered..."
-  while true:
-    if node.getMixNodePoolSize() >= MinMixNodePoolSize:
-      break
-    discard await node.fetchPeerExchangePeers()
-    await sleepAsync(1000)
   while node.getMixNodePoolSize() < MinMixNodePoolSize:
     info "waiting for mix nodes to be discovered",


@@ -203,13 +203,13 @@ type
   fleet* {.
     desc:
       "Select the fleet to connect to. This sets the DNS discovery URL to the selected fleet.",
-    defaultValue: Fleet.test,
+    defaultValue: Fleet.none,
     name: "fleet"
   .}: Fleet

   contentTopic* {.
     desc: "Content topic for chat messages.",
-    defaultValue: "/toy-chat-mix/2/huilong/proto",
+    defaultValue: "/toy-chat/2/baixa-chiado/proto",
     name: "content-topic"
   .}: string
@@ -228,7 +228,14 @@ type
     desc: "WebSocket Secure Support.",
     defaultValue: false,
     name: "websocket-secure-support"
-  .}: bool ## rln-relay configuration
+  .}: bool
+
+  ## Kademlia Discovery config
+  kadBootstrapNodes* {.
+    desc:
+      "Peer multiaddr for kademlia discovery bootstrap node (must include /p2p/<peerID>). Argument may be repeated.",
+    name: "kad-bootstrap-node"
+  .}: seq[string]

 proc parseCmdArg*(T: type MixNodePubInfo, p: string): T =
   let elements = p.split(":")


@@ -20,7 +20,7 @@ For more context, see https://trunkbaseddevelopment.com/branch-for-release/
 - **Full release**: follow the entire [Release process](#release-process--step-by-step).
-- **Beta release**: skip just `6a` and `6c` steps from [Release process](#release-process--step-by-step).
+- **Beta release**: skip just the `6c` and `6d` steps from [Release process](#release-process--step-by-step).
 - Choose the appropriate release process based on the release type:
   - [Full Release](../../.github/ISSUE_TEMPLATE/prepare_full_release.md)
@@ -70,20 +70,26 @@ For more context, see https://trunkbaseddevelopment.com/branch-for-release/
 6a. **Automated testing**
   - Ensure all the unit tests (specifically js-waku tests) are green against the release candidate.
-  - Ask Vac-QA and Vac-DST to run their available tests against the release candidate; share all release candidates with both teams.
-  > We need an additional report like [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f) specifically from the DST team.

 6b. **Waku fleet testing**
-  - Start job on `waku.sandbox` and `waku.test` [Deployment job](https://ci.infra.status.im/job/nim-waku/), wait for completion of the job. If it fails, then debug it.
-  - After completion, disable [deployment job](https://ci.infra.status.im/job/nim-waku/) so that its version is not updated on every merge to `master`.
-  - Verify at https://fleets.waku.org/ that the fleet is locked to the release candidate version.
+  - Start the job on `waku.test` via the [deployment job](https://ci.infra.status.im/job/nim-waku/) and wait for it to complete. If it fails, debug it.
+  - After completion, disable the fleet so that daily CI does not override your release candidate.
+  - Verify at https://fleets.waku.org/ that the fleet is locked to the release candidate image.
   - Check if the image is created at [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab).
-  - Search _Kibana_ logs from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test` and `waku.sandbox`.
-    - Most relevant logs are `(fleet: "waku.test" AND message: "SIGSEGV")` OR `(fleet: "waku.sandbox" AND message: "SIGSEGV")`.
+  - Search [Kibana logs](https://kibana.infra.status.im/app/discover) from the previous month (since the last release was deployed) for possible crashes or errors in `waku.test`.
+    - Set the time range to "Last 30 days" (or since the last release).
+    - The most relevant search queries are `(fleet: "waku.test" AND message: "SIGSEGV")`, `(fleet: "waku.test" AND message: "exception")`, and `(fleet: "waku.test" AND message: "error")`.
+    - Document any crashes or errors found.
+  - If `waku.test` validation is successful, deploy to `waku.sandbox` using the same [deployment job](https://ci.infra.status.im/job/nim-waku/).
+  - Search [Kibana logs](https://kibana.infra.status.im/app/discover) for `waku.sandbox`: `(fleet: "waku.sandbox" AND message: "SIGSEGV")`, `(fleet: "waku.sandbox" AND message: "exception")`, `(fleet: "waku.sandbox" AND message: "error")`. If `waku.test` shows no crashes or errors, `waku.sandbox` most likely will not either.
   - Enable the `waku.test` fleet again to resume auto-deployment of the latest `master` commit.

-6c. **Status fleet testing**
+6c. **QA and DST testing**
+  - Ask Vac-QA and Vac-DST to run their available tests against the release candidate; share all release candidates with both teams.
+  > We need an additional report like [this](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f) specifically from the DST team. Inform the DST team about the expectations for this RC, e.g. whether we expect higher or lower bandwidth consumption.
+
+6d. **Status fleet testing**
   - Deploy release candidate to `status.staging`
   - Perform [sanity check](https://www.notion.so/How-to-test-Nwaku-on-Status-12c6e4b9bf06420ca868bd199129b425) and log results as comments in this issue.
   - Connect 2 instances to `status.staging` fleet, one in relay mode, the other one in light client.
@@ -120,10 +126,10 @@ We also need to merge the release branch back into master as a final step.
 2. Deploy the release image to [Dockerhub](https://hub.docker.com/r/wakuorg/nwaku) by triggering [the manual Jenkins deployment job](https://ci.infra.status.im/job/nim-waku/job/docker-manual/).
    > Ensure the following build parameters are set:
    > - `MAKE_TARGET`: `wakunode2`
-   > - `IMAGE_TAG`: the release tag (e.g. `v0.36.0`)
+   > - `IMAGE_TAG`: the release tag (e.g. `v0.38.0`)
    > - `IMAGE_NAME`: `wakuorg/nwaku`
    > - `NIMFLAGS`: `--colors:off -d:disableMarchNative -d:chronicles_colors:none -d:postgres`
-   > - `GIT_REF` the release tag (e.g. `v0.36.0`)
+   > - `GIT_REF`: the release tag (e.g. `v0.38.0`)
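The parameter list above ties `IMAGE_TAG` and `GIT_REF` to the same release tag. A quick sketch (hypothetical helper, not part of the Jenkins job) that assembles the documented parameter set and makes that invariant explicit:

```python
def docker_manual_params(release_tag: str) -> dict[str, str]:
    """Assemble the documented build parameters for the manual Jenkins job.

    IMAGE_TAG and GIT_REF must both point at the same release tag.
    """
    return {
        "MAKE_TARGET": "wakunode2",
        "IMAGE_TAG": release_tag,
        "IMAGE_NAME": "wakuorg/nwaku",
        "NIMFLAGS": "--colors:off -d:disableMarchNative -d:chronicles_colors:none -d:postgres",
        "GIT_REF": release_tag,
    }

params = docker_manual_params("v0.38.0")
assert params["IMAGE_TAG"] == params["GIT_REF"]
print(f'{params["IMAGE_NAME"]}:{params["IMAGE_TAG"]}')  # wakuorg/nwaku:v0.38.0
```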
### Performing a patch release
@@ -155,3 +161,4 @@ We also need to merge the release branch back into master as a final step.
 - [Jenkins](https://ci.infra.status.im/job/nim-waku/)
 - [Fleets](https://fleets.waku.org/)
 - [Harbor](https://harbor.status.im/harbor/projects/9/repositories/nwaku/artifacts-tab)
+- [Kibana](https://kibana.infra.status.im/app/)


@@ -50,8 +50,8 @@ in stdenv.mkDerivation {
   ];

   # Environment variables required for Android builds
-  ANDROID_SDK_ROOT="${pkgs.androidPkgs.sdk}";
-  ANDROID_NDK_HOME="${pkgs.androidPkgs.ndk}";
+  ANDROID_SDK_ROOT = "${pkgs.androidPkgs.sdk}";
+  ANDROID_NDK_HOME = "${pkgs.androidPkgs.ndk}";
   NIMFLAGS = "-d:disableMarchNative -d:git_revision_override=${revision}";
   XDG_CACHE_HOME = "/tmp";
@@ -61,10 +61,20 @@ in stdenv.mkDerivation {
     "QUICK_AND_DIRTY_NIMBLE=${if quickAndDirty then "1" else "0"}"
     "USE_SYSTEM_NIM=${if useSystemNim then "1" else "0"}"
     "LIBRLN_FILE=${zerokitRln}/lib/librln.${if abidir != null then "so" else "a"}"
+    "POSTGRES=1"
   ];

   configurePhase = ''
     patchShebangs . vendor/nimbus-build-system > /dev/null
+
+    # build_nim.sh guards "rm -rf dist/checksums" with NIX_BUILD_TOP != "/build",
+    # but on macOS the nix sandbox uses /private/tmp/... so the check fails and
+    # dist/checksums (provided via preBuild) gets deleted. Fix the check to skip
+    # the removal whenever NIX_BUILD_TOP is set (i.e. any nix build).
+    substituteInPlace vendor/nimbus-build-system/scripts/build_nim.sh \
+      --replace 'if [[ "''${NIX_BUILD_TOP}" != "/build" ]]; then' \
+        'if [[ -z "''${NIX_BUILD_TOP}" ]]; then'
+
     make nimbus-build-system-paths
     make nimbus-build-system-nimble-dir
   '';


@@ -1,16 +1,17 @@
-log-level = "INFO"
+log-level = "TRACE"
 relay = true
 mix = true
 filter = true
-store = false
+store = true
 lightpush = true
 max-connections = 150
-peer-exchange = true
+peer-exchange = false
 metrics-logging = false
 cluster-id = 2
-discv5-discovery = true
+discv5-discovery = false
 discv5-udp-port = 9000
 discv5-enr-auto-update = true
+enable-kad-discovery = true
 rest = true
 rest-admin = true
 ports-shift = 1
@@ -19,7 +20,9 @@ shard = [0]
 agent-string = "nwaku-mix"
 nodekey = "f98e3fba96c32e8d1967d460f1b79457380e1a895f7971cecc8528abe733781a"
 mixkey = "a87db88246ec0eedda347b9b643864bee3d6933eb15ba41e6d58cb678d813258"
-rendezvous = true
+rendezvous = false
 listen-address = "127.0.0.1"
 nat = "extip:127.0.0.1"
+ext-multiaddr = ["/ip4/127.0.0.1/tcp/60001"]
+ext-multiaddr-only = true
 ip-colocation-limit=0


@@ -1,17 +1,18 @@
-log-level = "INFO"
+log-level = "TRACE"
 relay = true
 mix = true
 filter = true
 store = false
 lightpush = true
 max-connections = 150
-peer-exchange = true
+peer-exchange = false
 metrics-logging = false
 cluster-id = 2
-discv5-discovery = true
+discv5-discovery = false
 discv5-udp-port = 9001
 discv5-enr-auto-update = true
 discv5-bootstrap-node = ["enr:-LG4QBaAbcA921hmu3IrreLqGZ4y3VWCjBCgNN9mpX9vqkkbSrM3HJHZTXnb5iVXgc5pPtDhWLxkB6F3yY25hSwMezkEgmlkgnY0gmlwhH8AAAGKbXVsdGlhZGRyc4oACATAqEQ-BuphgnJzhQACAQAAiXNlY3AyNTZrMaEDpEW1UlUGHRJg6g_zGuCddKWmIUBGZCQX13xGfh9J6KiDdGNwguphg3VkcIIjKYV3YWt1Mg0"]
+kad-bootstrap-node = ["/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o"]
 rest = true
 rest-admin = true
 ports-shift = 2
@@ -20,8 +21,10 @@ shard = [0]
 agent-string = "nwaku-mix"
 nodekey = "09e9d134331953357bd38bbfce8edb377f4b6308b4f3bfbe85c610497053d684"
 mixkey = "c86029e02c05a7e25182974b519d0d52fcbafeca6fe191fbb64857fb05be1a53"
-rendezvous = true
+rendezvous = false
 listen-address = "127.0.0.1"
 nat = "extip:127.0.0.1"
+ext-multiaddr = ["/ip4/127.0.0.1/tcp/60002"]
+ext-multiaddr-only = true
 ip-colocation-limit=0
 #staticnode = ["/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o", "/ip4/127.0.0.1/tcp/60003/p2p/16Uiu2HAmTEDHwAziWUSz6ZE23h5vxG2o4Nn7GazhMor4bVuMXTrA","/ip4/127.0.0.1/tcp/60004/p2p/16Uiu2HAmPwRKZajXtfb1Qsv45VVfRZgK3ENdfmnqzSrVm3BczF6f","/ip4/127.0.0.1/tcp/60005/p2p/16Uiu2HAmRhxmCHBYdXt1RibXrjAUNJbduAhzaTHwFCZT4qWnqZAu"]


@@ -1,17 +1,18 @@
-log-level = "INFO"
+log-level = "TRACE"
 relay = true
 mix = true
 filter = true
 store = false
 lightpush = true
 max-connections = 150
-peer-exchange = true
+peer-exchange = false
 metrics-logging = false
 cluster-id = 2
-discv5-discovery = true
+discv5-discovery = false
 discv5-udp-port = 9002
 discv5-enr-auto-update = true
 discv5-bootstrap-node = ["enr:-LG4QBaAbcA921hmu3IrreLqGZ4y3VWCjBCgNN9mpX9vqkkbSrM3HJHZTXnb5iVXgc5pPtDhWLxkB6F3yY25hSwMezkEgmlkgnY0gmlwhH8AAAGKbXVsdGlhZGRyc4oACATAqEQ-BuphgnJzhQACAQAAiXNlY3AyNTZrMaEDpEW1UlUGHRJg6g_zGuCddKWmIUBGZCQX13xGfh9J6KiDdGNwguphg3VkcIIjKYV3YWt1Mg0"]
+kad-bootstrap-node = ["/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o"]
 rest = false
 rest-admin = false
 ports-shift = 3
@@ -20,8 +21,10 @@ shard = [0]
 agent-string = "nwaku-mix"
 nodekey = "ed54db994682e857d77cd6fb81be697382dc43aa5cd78e16b0ec8098549f860e"
 mixkey = "b858ac16bbb551c4b2973313b1c8c8f7ea469fca03f1608d200bbf58d388ec7f"
-rendezvous = true
+rendezvous = false
 listen-address = "127.0.0.1"
 nat = "extip:127.0.0.1"
+ext-multiaddr = ["/ip4/127.0.0.1/tcp/60003"]
+ext-multiaddr-only = true
 ip-colocation-limit=0
 #staticnode = ["/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o", "/ip4/127.0.0.1/tcp/60002/p2p/16Uiu2HAmLtKaFaSWDohToWhWUZFLtqzYZGPFuXwKrojFVF6az5UF","/ip4/127.0.0.1/tcp/60004/p2p/16Uiu2HAmPwRKZajXtfb1Qsv45VVfRZgK3ENdfmnqzSrVm3BczF6f","/ip4/127.0.0.1/tcp/60005/p2p/16Uiu2HAmRhxmCHBYdXt1RibXrjAUNJbduAhzaTHwFCZT4qWnqZAu"]


@@ -1,17 +1,18 @@
-log-level = "INFO"
+log-level = "TRACE"
 relay = true
 mix = true
 filter = true
 store = false
 lightpush = true
 max-connections = 150
-peer-exchange = true
+peer-exchange = false
 metrics-logging = false
 cluster-id = 2
-discv5-discovery = true
+discv5-discovery = false
 discv5-udp-port = 9003
 discv5-enr-auto-update = true
 discv5-bootstrap-node = ["enr:-LG4QBaAbcA921hmu3IrreLqGZ4y3VWCjBCgNN9mpX9vqkkbSrM3HJHZTXnb5iVXgc5pPtDhWLxkB6F3yY25hSwMezkEgmlkgnY0gmlwhH8AAAGKbXVsdGlhZGRyc4oACATAqEQ-BuphgnJzhQACAQAAiXNlY3AyNTZrMaEDpEW1UlUGHRJg6g_zGuCddKWmIUBGZCQX13xGfh9J6KiDdGNwguphg3VkcIIjKYV3YWt1Mg0"]
+kad-bootstrap-node = ["/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o"]
 rest = false
 rest-admin = false
 ports-shift = 4
@@ -20,8 +21,10 @@ shard = [0]
 agent-string = "nwaku-mix"
 nodekey = "42f96f29f2d6670938b0864aced65a332dcf5774103b4c44ec4d0ea4ef3c47d6"
 mixkey = "d8bd379bb394b0f22dd236d63af9f1a9bc45266beffc3fbbe19e8b6575f2535b"
-rendezvous = true
+rendezvous = false
 listen-address = "127.0.0.1"
 nat = "extip:127.0.0.1"
+ext-multiaddr = ["/ip4/127.0.0.1/tcp/60004"]
+ext-multiaddr-only = true
 ip-colocation-limit=0
 #staticnode = ["/ip4/127.0.0.1/tcp/60002/p2p/16Uiu2HAmLtKaFaSWDohToWhWUZFLtqzYZGPFuXwKrojFVF6az5UF", "/ip4/127.0.0.1/tcp/60003/p2p/16Uiu2HAmTEDHwAziWUSz6ZE23h5vxG2o4Nn7GazhMor4bVuMXTrA","/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o","/ip4/127.0.0.1/tcp/60005/p2p/16Uiu2HAmRhxmCHBYdXt1RibXrjAUNJbduAhzaTHwFCZT4qWnqZAu"]

View File

@@ -1,17 +1,18 @@
-log-level = "INFO"
+log-level = "TRACE"
 relay = true
 mix = true
 filter = true
 store = false
 lightpush = true
 max-connections = 150
-peer-exchange = true
+peer-exchange = false
 metrics-logging = false
 cluster-id = 2
-discv5-discovery = true
+discv5-discovery = false
 discv5-udp-port = 9004
 discv5-enr-auto-update = true
 discv5-bootstrap-node = ["enr:-LG4QBaAbcA921hmu3IrreLqGZ4y3VWCjBCgNN9mpX9vqkkbSrM3HJHZTXnb5iVXgc5pPtDhWLxkB6F3yY25hSwMezkEgmlkgnY0gmlwhH8AAAGKbXVsdGlhZGRyc4oACATAqEQ-BuphgnJzhQACAQAAiXNlY3AyNTZrMaEDpEW1UlUGHRJg6g_zGuCddKWmIUBGZCQX13xGfh9J6KiDdGNwguphg3VkcIIjKYV3YWt1Mg0"]
+kad-bootstrap-node = ["/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o"]
 rest = false
 rest-admin = false
 ports-shift = 5
@@ -20,8 +21,10 @@ shard = [0]
 agent-string = "nwaku-mix"
 nodekey = "3ce887b3c34b7a92dd2868af33941ed1dbec4893b054572cd5078da09dd923d4"
 mixkey = "780fff09e51e98df574e266bf3266ec6a3a1ddfcf7da826a349a29c137009d49"
-rendezvous = true
+rendezvous = false
 listen-address = "127.0.0.1"
 nat = "extip:127.0.0.1"
+ext-multiaddr = ["/ip4/127.0.0.1/tcp/60005"]
+ext-multiaddr-only = true
 ip-colocation-limit=0
 #staticnode = ["/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o", "/ip4/127.0.0.1/tcp/60003/p2p/16Uiu2HAmTEDHwAziWUSz6ZE23h5vxG2o4Nn7GazhMor4bVuMXTrA","/ip4/127.0.0.1/tcp/60004/p2p/16Uiu2HAmPwRKZajXtfb1Qsv45VVfRZgK3ENdfmnqzSrVm3BczF6f","/ip4/127.0.0.1/tcp/60002/p2p/16Uiu2HAmLtKaFaSWDohToWhWUZFLtqzYZGPFuXwKrojFVF6az5UF"]

View File

@@ -1,2 +1,2 @@
-../../build/chat2mix --cluster-id=2 --num-shards-in-network=1 --shard=0 --servicenode="/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o" --log-level=TRACE
+../../build/chat2mix --cluster-id=2 --num-shards-in-network=1 --shard=0 --servicenode="/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o" --log-level=TRACE --kad-bootstrap-node="/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o"
 #--mixnode="/ip4/127.0.0.1/tcp/60002/p2p/16Uiu2HAmLtKaFaSWDohToWhWUZFLtqzYZGPFuXwKrojFVF6az5UF:9231e86da6432502900a84f867004ce78632ab52cd8e30b1ec322cd795710c2a" --mixnode="/ip4/127.0.0.1/tcp/60003/p2p/16Uiu2HAmTEDHwAziWUSz6ZE23h5vxG2o4Nn7GazhMor4bVuMXTrA:275cd6889e1f29ca48e5b9edb800d1a94f49f13d393a0ecf1a07af753506de6c" --mixnode="/ip4/127.0.0.1/tcp/60004/p2p/16Uiu2HAmPwRKZajXtfb1Qsv45VVfRZgK3ENdfmnqzSrVm3BczF6f:e0ed594a8d506681be075e8e23723478388fb182477f7a469309a25e7076fc18" --mixnode="/ip4/127.0.0.1/tcp/60005/p2p/16Uiu2HAmRhxmCHBYdXt1RibXrjAUNJbduAhzaTHwFCZT4qWnqZAu:8fd7a1a7c19b403d231452a9b1ea40eb1cc76f455d918ef8980e7685f9eeeb1f"

View File

@@ -1 +1 @@
-../../build/wakunode2 --config-file="config.toml"
+../../build/wakunode2 --config-file="config.toml" 2>&1 | tee mix_node.log

View File

@@ -1 +1 @@
-../../build/wakunode2 --config-file="config1.toml"
+../../build/wakunode2 --config-file="config1.toml" 2>&1 | tee mix_node1.log

View File

@@ -1 +1 @@
-../../build/wakunode2 --config-file="config2.toml"
+../../build/wakunode2 --config-file="config2.toml" 2>&1 | tee mix_node2.log

View File

@@ -1 +1 @@
-../../build/wakunode2 --config-file="config3.toml"
+../../build/wakunode2 --config-file="config3.toml" 2>&1 | tee mix_node3.log

View File

@@ -1 +1 @@
-../../build/wakunode2 --config-file="config4.toml"
+../../build/wakunode2 --config-file="config4.toml" 2>&1 | tee mix_node4.log

View File

@@ -621,6 +621,20 @@ with the drawback of consuming some more bandwidth.""",
       name: "mixnode"
     .}: seq[MixNodePubInfo]

+    # Kademlia Discovery config
+    enableKadDiscovery* {.
+      desc:
+        "Enable extended kademlia discovery. Can be enabled without bootstrap nodes for the first node in the network.",
+      defaultValue: false,
+      name: "enable-kad-discovery"
+    .}: bool
+
+    kadBootstrapNodes* {.
+      desc:
+        "Peer multiaddr for kademlia discovery bootstrap node (must include /p2p/<peerID>). Argument may be repeated.",
+      name: "kad-bootstrap-node"
+    .}: seq[string]
+
     ## websocket config
     websocketSupport* {.
       desc: "Enable websocket: true|false",
@@ -1057,4 +1071,7 @@ proc toWakuConf*(n: WakuNodeConf): ConfResult[WakuConf] =
   b.rateLimitConf.withRateLimits(n.rateLimits)

+  b.kademliaDiscoveryConf.withEnabled(n.enableKadDiscovery)
+  b.kademliaDiscoveryConf.withBootstrapNodes(n.kadBootstrapNodes)
+
   return b.build()
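Taken together, the two CLI options added above map onto a config entry of the kind used in the local test setups earlier in this diff. A minimal sketch (the bootstrap multiaddr is illustrative, reused from the test configs, and must include the `/p2p/<peerID>` suffix):

```toml
# Sketch: enabling extended Kademlia discovery explicitly.
# The first (seed) node in a network may set enable-kad-discovery = true
# with no bootstrap nodes at all, per the option description above.
enable-kad-discovery = true
kad-bootstrap-node = ["/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o"]
```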

vendor/nim-libp2p (vendored)

@@ -1 +1 @@
-Subproject commit ca48c3718246bb411ff0e354a70cb82d9a28de0d
+Subproject commit ff8d51857b4b79a68468e7bcc27b2026cca02996

vendor/nim-metrics (vendored)

@@ -1 +1 @@
-Subproject commit 11d0cddfb0e711aa2a8c75d1892ae24a64c299fc
+Subproject commit a1296caf3ebb5f30f51a5feae7749a30df2824c2

View File

@@ -24,7 +24,7 @@ requires "nim >= 2.2.4",
   "stew",
   "stint",
   "metrics",
-  "libp2p >= 1.14.3",
+  "libp2p >= 1.15.0",
   "web3",
   "presto",
   "regex",

View File

@@ -0,0 +1,280 @@
{.push raises: [].}

import std/[options, sequtils]
import
  chronos,
  chronicles,
  results,
  stew/byteutils,
  libp2p/[peerid, multiaddress, switch],
  libp2p/extended_peer_record,
  libp2p/crypto/curve25519,
  libp2p/protocols/[kademlia, kad_disco],
  libp2p/protocols/kademlia_discovery/types as kad_types,
  libp2p/protocols/mix/mix_protocol

import waku/waku_core, waku/node/peer_manager

logScope:
  topics = "waku extended kademlia discovery"

const
  DefaultExtendedKademliaDiscoveryInterval* = chronos.seconds(5)
  ExtendedKademliaDiscoveryStartupDelay* = chronos.seconds(5)

type
  MixNodePoolSizeProvider* = proc(): int {.gcsafe, raises: [].}
  NodeStartedProvider* = proc(): bool {.gcsafe, raises: [].}

  ExtendedKademliaDiscoveryParams* = object
    bootstrapNodes*: seq[(PeerId, seq[MultiAddress])]
    mixPubKey*: Option[Curve25519Key]
    advertiseMix*: bool = false

  WakuKademlia* = ref object
    protocol*: KademliaDiscovery
    peerManager: PeerManager
    discoveryLoop: Future[void]
    running*: bool
    getMixNodePoolSize: MixNodePoolSizeProvider
    isNodeStarted: NodeStartedProvider

proc new*(
    T: type WakuKademlia,
    switch: Switch,
    params: ExtendedKademliaDiscoveryParams,
    peerManager: PeerManager,
    getMixNodePoolSize: MixNodePoolSizeProvider = nil,
    isNodeStarted: NodeStartedProvider = nil,
): Result[T, string] =
  if params.bootstrapNodes.len == 0:
    info "creating kademlia discovery as seed node (no bootstrap nodes)"

  let kademlia = KademliaDiscovery.new(
    switch,
    bootstrapNodes = params.bootstrapNodes,
    config = KadDHTConfig.new(
      validator = kad_types.ExtEntryValidator(), selector = kad_types.ExtEntrySelector()
    ),
    codec = ExtendedKademliaDiscoveryCodec,
  )

  try:
    switch.mount(kademlia)
  except CatchableError:
    return err("failed to mount kademlia discovery: " & getCurrentExceptionMsg())

  # Register services BEFORE starting kademlia so they are included in the
  # initial self-signed peer record published to the DHT
  if params.advertiseMix:
    if params.mixPubKey.isSome():
      let alreadyAdvertising = kademlia.startAdvertising(
        ServiceInfo(id: MixProtocolID, data: @(params.mixPubKey.get()))
      )
      if alreadyAdvertising:
        warn "mix service was already being advertised"
      debug "extended kademlia advertising mix service",
        keyHex = byteutils.toHex(params.mixPubKey.get()),
        bootstrapNodes = params.bootstrapNodes.len
    else:
      warn "mix advertising enabled but no key provided"

  info "kademlia discovery created",
    bootstrapNodes = params.bootstrapNodes.len, advertiseMix = params.advertiseMix

  return ok(
    WakuKademlia(
      protocol: kademlia,
      peerManager: peerManager,
      running: false,
      getMixNodePoolSize: getMixNodePoolSize,
      isNodeStarted: isNodeStarted,
    )
  )

proc extractMixPubKey(service: ServiceInfo): Option[Curve25519Key] =
  if service.id != MixProtocolID:
    trace "service is not mix protocol",
      serviceId = service.id, mixProtocolId = MixProtocolID
    return none(Curve25519Key)
  if service.data.len != Curve25519KeySize:
    warn "invalid mix pub key length from kademlia record",
      expected = Curve25519KeySize,
      actual = service.data.len,
      dataHex = byteutils.toHex(service.data)
    return none(Curve25519Key)
  debug "found mix protocol service",
    dataLen = service.data.len, expectedLen = Curve25519KeySize
  let key = intoCurve25519Key(service.data)
  debug "successfully extracted mix pub key", keyHex = byteutils.toHex(key)
  return some(key)

proc remotePeerInfoFrom(record: ExtendedPeerRecord): Option[RemotePeerInfo] =
  debug "processing kademlia record",
    peerId = record.peerId,
    numAddresses = record.addresses.len,
    numServices = record.services.len,
    serviceIds = record.services.mapIt(it.id)
  if record.addresses.len == 0:
    trace "kademlia record missing addresses", peerId = record.peerId
    return none(RemotePeerInfo)
  let addrs = record.addresses.mapIt(it.address)
  if addrs.len == 0:
    trace "kademlia record produced no dialable addresses", peerId = record.peerId
    return none(RemotePeerInfo)
  let protocols = record.services.mapIt(it.id)
  var mixPubKey = none(Curve25519Key)
  for service in record.services:
    debug "checking service",
      peerId = record.peerId, serviceId = service.id, dataLen = service.data.len
    mixPubKey = extractMixPubKey(service)
    if mixPubKey.isSome():
      debug "extracted mix public key from service", peerId = record.peerId
      break
  if record.services.len > 0 and mixPubKey.isNone():
    debug "record has services but no valid mix key",
      peerId = record.peerId, services = record.services.mapIt(it.id)
    return none(RemotePeerInfo)
  return some(
    RemotePeerInfo.init(
      record.peerId,
      addrs = addrs,
      protocols = protocols,
      origin = PeerOrigin.Kademlia,
      mixPubKey = mixPubKey,
    )
  )

proc lookupMixPeers*(
    wk: WakuKademlia
): Future[Result[int, string]] {.async: (raises: []).} =
  ## Lookup mix peers via kademlia and add them to the peer store.
  ## Returns the number of mix peers found and added.
  if wk.protocol.isNil():
    return err("cannot lookup mix peers: kademlia not mounted")
  let mixService = ServiceInfo(id: MixProtocolID, data: @[])
  var records: seq[ExtendedPeerRecord]
  try:
    records = await wk.protocol.lookup(mixService)
  except CatchableError:
    return err("mix peer lookup failed: " & getCurrentExceptionMsg())
  debug "mix peer lookup returned records", numRecords = records.len
  var added = 0
  for record in records:
    let peerOpt = remotePeerInfoFrom(record)
    if peerOpt.isNone():
      continue
    let peerInfo = peerOpt.get()
    if peerInfo.mixPubKey.isNone():
      continue
    wk.peerManager.addPeer(peerInfo, PeerOrigin.Kademlia)
    info "mix peer added via kademlia lookup",
      peerId = $peerInfo.peerId, mixPubKey = byteutils.toHex(peerInfo.mixPubKey.get())
    added.inc()
  info "mix peer lookup complete", found = added
  return ok(added)

proc runDiscoveryLoop(
    wk: WakuKademlia, interval: Duration, minMixPeers: int
) {.async: (raises: []).} =
  info "extended kademlia discovery loop started", interval = interval
  try:
    while true:
      # Wait for node to be started
      if not wk.isNodeStarted.isNil() and not wk.isNodeStarted():
        await sleepAsync(ExtendedKademliaDiscoveryStartupDelay)
        continue
      var records: seq[ExtendedPeerRecord]
      try:
        records = await wk.protocol.randomRecords()
      except CatchableError as e:
        warn "extended kademlia discovery failed", error = e.msg
        await sleepAsync(interval)
        continue
      debug "received random records from kademlia", numRecords = records.len
      var added = 0
      for record in records:
        let peerOpt = remotePeerInfoFrom(record)
        if peerOpt.isNone():
          continue
        let peerInfo = peerOpt.get()
        wk.peerManager.addPeer(peerInfo, PeerOrigin.Kademlia)
        debug "peer added via extended kademlia discovery",
          peerId = $peerInfo.peerId,
          addresses = peerInfo.addrs.mapIt($it),
          protocols = peerInfo.protocols,
          hasMixPubKey = peerInfo.mixPubKey.isSome()
        added.inc()
      if added > 0:
        info "added peers from extended kademlia discovery", count = added
      # Targeted mix peer lookup when pool is low
      if minMixPeers > 0 and not wk.getMixNodePoolSize.isNil() and
          wk.getMixNodePoolSize() < minMixPeers:
        debug "mix node pool below threshold, performing targeted lookup",
          currentPoolSize = wk.getMixNodePoolSize(), threshold = minMixPeers
        let found = (await wk.lookupMixPeers()).valueOr:
          warn "targeted mix peer lookup failed", error = error
          0
        if found > 0:
          info "found mix peers via targeted kademlia lookup", count = found
      await sleepAsync(interval)
  except CancelledError as e:
    debug "extended kademlia discovery loop cancelled", error = e.msg
  except CatchableError as e:
    error "extended kademlia discovery loop failed", error = e.msg

proc start*(
    wk: WakuKademlia,
    interval: Duration = DefaultExtendedKademliaDiscoveryInterval,
    minMixPeers: int = 0,
): Future[Result[void, string]] {.async: (raises: []).} =
  if wk.running:
    return err("already running")
  try:
    await wk.protocol.start()
  except CatchableError as e:
    return err("failed to start kademlia discovery: " & e.msg)
  # Mark the instance as running so stop() can cancel the loop
  wk.running = true
  wk.discoveryLoop = wk.runDiscoveryLoop(interval, minMixPeers)
  info "kademlia discovery started"
  return ok()

proc stop*(wk: WakuKademlia) {.async: (raises: []).} =
  if not wk.running:
    return
  info "Stopping kademlia discovery"
  wk.running = false
  if not wk.discoveryLoop.isNil():
    await wk.discoveryLoop.cancelAndWait()
    wk.discoveryLoop = nil
  if not wk.protocol.isNil():
    await wk.protocol.stop()
  info "Successfully stopped kademlia discovery"

View File

@@ -10,10 +10,12 @@ import
   ./metrics_server_conf_builder,
   ./rate_limit_conf_builder,
   ./rln_relay_conf_builder,
-  ./mix_conf_builder
+  ./mix_conf_builder,
+  ./kademlia_discovery_conf_builder

 export
   waku_conf_builder, filter_service_conf_builder, store_sync_conf_builder,
   store_service_conf_builder, rest_server_conf_builder, dns_discovery_conf_builder,
   discv5_conf_builder, web_socket_conf_builder, metrics_server_conf_builder,
-  rate_limit_conf_builder, rln_relay_conf_builder, mix_conf_builder
+  rate_limit_conf_builder, rln_relay_conf_builder, mix_conf_builder,
+  kademlia_discovery_conf_builder

View File

@@ -0,0 +1,40 @@
import chronicles, std/options, results
import libp2p/[peerid, multiaddress, peerinfo]
import waku/factory/waku_conf

logScope:
  topics = "waku conf builder kademlia discovery"

#######################################
## Kademlia Discovery Config Builder ##
#######################################

type KademliaDiscoveryConfBuilder* = object
  enabled*: bool
  bootstrapNodes*: seq[string]

proc init*(T: type KademliaDiscoveryConfBuilder): KademliaDiscoveryConfBuilder =
  KademliaDiscoveryConfBuilder()

proc withEnabled*(b: var KademliaDiscoveryConfBuilder, enabled: bool) =
  b.enabled = enabled

proc withBootstrapNodes*(
    b: var KademliaDiscoveryConfBuilder, bootstrapNodes: seq[string]
) =
  b.bootstrapNodes = bootstrapNodes

proc build*(
    b: KademliaDiscoveryConfBuilder
): Result[Option[KademliaDiscoveryConf], string] =
  # Kademlia is enabled if explicitly enabled OR if bootstrap nodes are provided
  let enabled = b.enabled or b.bootstrapNodes.len > 0
  if not enabled:
    return ok(none(KademliaDiscoveryConf))
  var parsedNodes: seq[(PeerId, seq[MultiAddress])]
  for nodeStr in b.bootstrapNodes:
    let (peerId, ma) = parseFullAddress(nodeStr).valueOr:
      return err("Failed to parse kademlia bootstrap node: " & error)
    parsedNodes.add((peerId, @[ma]))
  return ok(some(KademliaDiscoveryConf(bootstrapNodes: parsedNodes)))
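As the comment in `build` notes, discovery is also enabled implicitly whenever bootstrap nodes are supplied, with no explicit enable flag. A config sketch of that implicit path (the multiaddr is illustrative, reused from the test configs in this diff):

```toml
# No enable-kad-discovery flag needed: supplying a bootstrap node
# is enough for build() to produce a KademliaDiscoveryConf.
kad-bootstrap-node = ["/ip4/127.0.0.1/tcp/60001/p2p/16Uiu2HAmPiEs2ozjjJF2iN2Pe2FYeMC9w4caRHKYdLdAfjgbWM6o"]
```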

View File

@@ -25,7 +25,8 @@ import
   ./metrics_server_conf_builder,
   ./rate_limit_conf_builder,
   ./rln_relay_conf_builder,
-  ./mix_conf_builder
+  ./mix_conf_builder,
+  ./kademlia_discovery_conf_builder

 logScope:
   topics = "waku conf builder"
@@ -80,6 +81,7 @@ type WakuConfBuilder* = object
   mixConf*: MixConfBuilder
   webSocketConf*: WebSocketConfBuilder
   rateLimitConf*: RateLimitConfBuilder
+  kademliaDiscoveryConf*: KademliaDiscoveryConfBuilder
   # End conf builders
   relay: Option[bool]
   lightPush: Option[bool]
@@ -140,6 +142,7 @@ proc init*(T: type WakuConfBuilder): WakuConfBuilder =
    storeServiceConf: StoreServiceConfBuilder.init(),
    webSocketConf: WebSocketConfBuilder.init(),
    rateLimitConf: RateLimitConfBuilder.init(),
+    kademliaDiscoveryConf: KademliaDiscoveryConfBuilder.init(),
  )

 proc withNetworkConf*(b: var WakuConfBuilder, networkConf: NetworkConf) =
@@ -506,6 +509,9 @@ proc build*(
   let rateLimit = builder.rateLimitConf.build().valueOr:
     return err("Rate limits Conf building failed: " & $error)

+  let kademliaDiscoveryConf = builder.kademliaDiscoveryConf.build().valueOr:
+    return err("Kademlia Discovery Conf building failed: " & $error)
+
   # End - Build sub-configs

   let logLevel =
@@ -628,6 +634,7 @@ proc build*(
     restServerConf: restServerConf,
     dnsDiscoveryConf: dnsDiscoveryConf,
     mixConf: mixConf,
+    kademliaDiscoveryConf: kademliaDiscoveryConf,
     # end confs
     nodeKey: nodeKey,
     clusterId: clusterId,

View File

@@ -6,7 +6,8 @@ import
   libp2p/protocols/pubsub/gossipsub,
   libp2p/protocols/connectivity/relay/relay,
   libp2p/nameresolving/dnsresolver,
-  libp2p/crypto/crypto
+  libp2p/crypto/crypto,
+  libp2p/crypto/curve25519

 import
   ./internal_config,
@@ -32,6 +33,7 @@ import
   ../waku_store_legacy/common as legacy_common,
   ../waku_filter_v2,
   ../waku_peer_exchange,
+  ../discovery/waku_kademlia,
   ../node/peer_manager,
   ../node/peer_manager/peer_store/waku_peer_storage,
   ../node/peer_manager/peer_store/migrations as peer_store_sqlite_migrations,
@@ -165,13 +167,36 @@ proc setupProtocols(
   #mount mix
   if conf.mixConf.isSome():
-    (
-      await node.mountMix(
-        conf.clusterId, conf.mixConf.get().mixKey, conf.mixConf.get().mixnodes
-      )
-    ).isOkOr:
+    let mixConf = conf.mixConf.get()
+    (await node.mountMix(conf.clusterId, mixConf.mixKey, mixConf.mixnodes)).isOkOr:
       return err("failed to mount waku mix protocol: " & $error)

+  # Setup extended kademlia discovery
+  if conf.kademliaDiscoveryConf.isSome():
+    let mixPubKey =
+      if conf.mixConf.isSome():
+        some(conf.mixConf.get().mixPubKey)
+      else:
+        none(Curve25519Key)
+    node.wakuKademlia = WakuKademlia.new(
+      node.switch,
+      ExtendedKademliaDiscoveryParams(
+        bootstrapNodes: conf.kademliaDiscoveryConf.get().bootstrapNodes,
+        mixPubKey: mixPubKey,
+        advertiseMix: conf.mixConf.isSome(),
+      ),
+      node.peerManager,
+      getMixNodePoolSize = proc(): int {.gcsafe, raises: [].} =
+        if node.wakuMix.isNil():
+          0
+        else:
+          node.getMixNodePoolSize(),
+      isNodeStarted = proc(): bool {.gcsafe, raises: [].} =
+        node.started,
+    ).valueOr:
+      return err("failed to setup kademlia discovery: " & error)
+
   if conf.storeServiceConf.isSome():
     let storeServiceConf = conf.storeServiceConf.get()
     if storeServiceConf.supportV2:
@@ -477,6 +502,11 @@ proc startNode*(
   if conf.relay:
     node.peerManager.start()

+  if not node.wakuKademlia.isNil():
+    let minMixPeers = if conf.mixConf.isSome(): 4 else: 0
+    (await node.wakuKademlia.start(minMixPeers = minMixPeers)).isOkOr:
+      return err("failed to start kademlia discovery: " & error)
+
   return ok()

 proc setupNode*(
proc setupNode*( proc setupNode*(

View File

@@ -204,6 +204,9 @@ proc new*(
     else:
       nil

+  # Set the extMultiAddrsOnly flag so the node knows not to replace explicit addresses
+  node.extMultiAddrsOnly = wakuConf.endpointConf.extMultiAddrsOnly
+
   node.setupAppCallbacks(wakuConf, appCallbacks, healthMonitor).isOkOr:
     error "Failed setting up app callbacks", error = error
     return err("Failed setting up app callbacks: " & $error)

View File

@@ -4,6 +4,7 @@ import
   libp2p/crypto/crypto,
   libp2p/multiaddress,
   libp2p/crypto/curve25519,
+  libp2p/peerid,
   secp256k1,
   results

@@ -51,6 +52,10 @@ type MixConf* = ref object
   mixPubKey*: Curve25519Key
   mixnodes*: seq[MixNodePubInfo]

+type KademliaDiscoveryConf* = object
+  bootstrapNodes*: seq[(PeerId, seq[MultiAddress])]
+    ## Bootstrap nodes for extended kademlia discovery.
+
 type StoreServiceConf* {.requiresInit.} = object
   dbMigration*: bool
   dbURl*: string
@@ -109,6 +114,7 @@ type WakuConf* {.requiresInit.} = ref object
   metricsServerConf*: Option[MetricsServerConf]
   webSocketConf*: Option[WebSocketConf]
   mixConf*: Option[MixConf]
+  kademliaDiscoveryConf*: Option[KademliaDiscoveryConf]
   portsShift*: uint16
   dnsAddrsNameServers*: seq[IpAddress]

View File

@@ -43,9 +43,6 @@ type
   # Keeps track of peer shards
   ShardBook* = ref object of PeerBook[seq[uint16]]

-  # Keeps track of Mix protocol public keys of peers
-  MixPubKeyBook* = ref object of PeerBook[Curve25519Key]
-
 proc getPeer*(peerStore: PeerStore, peerId: PeerId): RemotePeerInfo =
   let addresses =
     if peerStore[LastSeenBook][peerId].isSome():
@@ -85,7 +82,7 @@ proc delete*(peerStore: PeerStore, peerId: PeerId) =
 proc peers*(peerStore: PeerStore): seq[RemotePeerInfo] =
   let allKeys = concat(
-    toSeq(peerStore[LastSeenBook].book.keys()),
+    toSeq(peerStore[LastSeenOutboundBook].book.keys()),
     toSeq(peerStore[AddressBook].book.keys()),
     toSeq(peerStore[ProtoBook].book.keys()),
     toSeq(peerStore[KeyBook].book.keys()),

View File

@@ -66,6 +66,7 @@ import
     events/health_events,
     events/peer_events,
   ],
+  waku/discovery/waku_kademlia,
   ./net_config,
   ./peer_manager,
   ./edge_driver,
@@ -143,6 +144,7 @@ type
     wakuRendezvous*: WakuRendezVous
     wakuRendezvousClient*: rendezvous_client.WakuRendezVousClient
     announcedAddresses*: seq[MultiAddress]
+    extMultiAddrsOnly*: bool # When true, skip automatic IP address replacement
     started*: bool # Indicates that node has started listening
     topicSubscriptionQueue*: AsyncEventQueue[SubscriptionEvent]
     rateLimitSettings*: ProtocolRateLimitSettings
@@ -153,6 +155,8 @@ type
     edgeHealthEvent*: AsyncEvent
     edgeHealthLoop: Future[void]
     peerEventListener*: EventWakuPeerListener
+    kademliaDiscoveryLoop*: Future[void]
+    wakuKademlia*: WakuKademlia

 proc deduceRelayShard(
   node: WakuNode,
@@ -308,7 +312,7 @@ proc mountAutoSharding*(
   return ok()

 proc getMixNodePoolSize*(node: WakuNode): int =
-  return node.wakuMix.getNodePoolSize()
+  return node.wakuMix.poolSize()

 proc mountMix*(
   node: WakuNode,
@@ -460,6 +464,11 @@ proc isBindIpWithZeroPort(inputMultiAdd: MultiAddress): bool =
   return false

 proc updateAnnouncedAddrWithPrimaryIpAddr*(node: WakuNode): Result[void, string] =
+  # Skip automatic IP replacement if extMultiAddrsOnly is set
+  # This respects the user's explicitly configured announced addresses
+  if node.extMultiAddrsOnly:
+    return ok()
+
   let peerInfo = node.switch.peerInfo
   var announcedStr = ""
   var listenStr = ""
@@ -710,6 +719,9 @@ proc stop*(node: WakuNode) {.async.} =
       not node.wakuPeerExchangeClient.pxLoopHandle.isNil():
     await node.wakuPeerExchangeClient.pxLoopHandle.cancelAndWait()

+  if not node.wakuKademlia.isNil():
+    await node.wakuKademlia.stop()
+
   if not node.wakuRendezvous.isNil():
     await node.wakuRendezvous.stopWait()

View File

@@ -38,6 +38,7 @@ type
     Static
     PeerExchange
     Dns
+    Kademlia

   PeerDirection* = enum
     UnknownDirection

View File

@ -1,22 +1,23 @@
{.push raises: [].} {.push raises: [].}
import chronicles, std/[options, tables, sequtils], chronos, results, metrics, strutils import chronicles, std/options, chronos, results, metrics
import import
libp2p/crypto/curve25519, libp2p/crypto/curve25519,
libp2p/crypto/crypto,
libp2p/protocols/mix, libp2p/protocols/mix,
libp2p/protocols/mix/mix_node, libp2p/protocols/mix/mix_node,
libp2p/protocols/mix/mix_protocol, libp2p/protocols/mix/mix_protocol,
libp2p/protocols/mix/mix_metrics, libp2p/protocols/mix/mix_metrics,
libp2p/[multiaddress, multicodec, peerid], libp2p/protocols/mix/delay_strategy,
libp2p/[multiaddress, peerid],
eth/common/keys eth/common/keys
import import
../node/peer_manager, waku/node/peer_manager,
../waku_core, waku/waku_core,
../waku_enr, waku/waku_enr,
../node/peer_manager/waku_peer_store, waku/node/peer_manager/waku_peer_store
../common/nimchronos
logScope: logScope:
topics = "waku mix" topics = "waku mix"
@ -27,7 +28,6 @@ type
WakuMix* = ref object of MixProtocol WakuMix* = ref object of MixProtocol
peerManager*: PeerManager peerManager*: PeerManager
clusterId: uint16 clusterId: uint16
nodePoolLoopHandle: Future[void]
pubKey*: Curve25519Key pubKey*: Curve25519Key
WakuMixResult*[T] = Result[T, string] WakuMixResult*[T] = Result[T, string]
@@ -36,106 +36,10 @@ type
    multiAddr*: string
    pubKey*: Curve25519Key

-proc filterMixNodes(cluster: Option[uint16], peer: RemotePeerInfo): bool =
-  # Note that origin based(discv5) filtering is not done intentionally
-  # so that more mix nodes can be discovered.
-  if peer.mixPubKey.isNone():
-    trace "remote peer has no mix Pub Key", peer = $peer
-    return false
-  if cluster.isSome() and peer.enr.isSome() and
-      peer.enr.get().isClusterMismatched(cluster.get()):
-    trace "peer has mismatching cluster", peer = $peer
-    return false
-  return true
-
-proc appendPeerIdToMultiaddr*(multiaddr: MultiAddress, peerId: PeerId): MultiAddress =
-  if multiaddr.contains(multiCodec("p2p")).get():
-    return multiaddr
-  var maddrStr = multiaddr.toString().valueOr:
-    error "Failed to convert multiaddress to string.", err = error
-    return multiaddr
-  maddrStr.add("/p2p/" & $peerId)
-  var cleanAddr = MultiAddress.init(maddrStr).valueOr:
-    error "Failed to convert string to multiaddress.", err = error
-    return multiaddr
-  return cleanAddr
-
-func getIPv4Multiaddr*(maddrs: seq[MultiAddress]): Option[MultiAddress] =
-  for multiaddr in maddrs:
-    trace "checking multiaddr", addr = $multiaddr
-    if multiaddr.contains(multiCodec("ip4")).get():
-      trace "found ipv4 multiaddr", addr = $multiaddr
-      return some(multiaddr)
-  trace "no ipv4 multiaddr found"
-  return none(MultiAddress)
-
-proc populateMixNodePool*(mix: WakuMix) =
-  # populate only peers that i) are reachable ii) share cluster iii) support mix
-  let remotePeers = mix.peerManager.switch.peerStore.peers().filterIt(
-    filterMixNodes(some(mix.clusterId), it)
-  )
-  var mixNodes = initTable[PeerId, MixPubInfo]()
-  for i in 0 ..< min(remotePeers.len, 100):
-    let ipv4addr = getIPv4Multiaddr(remotePeers[i].addrs).valueOr:
-      trace "peer has no ipv4 address", peer = $remotePeers[i]
-      continue
-    let maddrWithPeerId = appendPeerIdToMultiaddr(ipv4addr, remotePeers[i].peerId)
-    trace "remote peer info", info = remotePeers[i]
-    if remotePeers[i].mixPubKey.isNone():
-      trace "peer has no mix Pub Key", remotePeerId = $remotePeers[i]
-      continue
-    let peerMixPubKey = remotePeers[i].mixPubKey.get()
-    var peerPubKey: crypto.PublicKey
-    if not remotePeers[i].peerId.extractPublicKey(peerPubKey):
-      warn "Failed to extract public key from peerId, skipping node",
-        remotePeerId = remotePeers[i].peerId
-      continue
-    if peerPubKey.scheme != PKScheme.Secp256k1:
-      warn "Peer public key is not Secp256k1, skipping node",
-        remotePeerId = remotePeers[i].peerId, scheme = peerPubKey.scheme
-      continue
-    let mixNodePubInfo = MixPubInfo.init(
-      remotePeers[i].peerId,
-      ipv4addr,
-      intoCurve25519Key(peerMixPubKey),
-      peerPubKey.skkey,
-    )
-    trace "adding mix node to pool",
-      remotePeerId = remotePeers[i].peerId, multiAddr = $ipv4addr
-    mixNodes[remotePeers[i].peerId] = mixNodePubInfo
-  # set the mix node pool
-  mix.setNodePool(mixNodes)
-  mix_pool_size.set(len(mixNodes))
-  trace "mix node pool updated", poolSize = mix.getNodePoolSize()
-
-# Once mix protocol starts to use info from PeerStore, then this can be removed.
-proc startMixNodePoolMgr*(mix: WakuMix) {.async.} =
-  info "starting mix node pool manager"
-  # try more aggressively to populate the pool at startup
-  var attempts = 50
-  # TODO: make initial pool size configurable
-  while mix.getNodePoolSize() < 100 and attempts > 0:
-    attempts -= 1
-    mix.populateMixNodePool()
-    await sleepAsync(1.seconds)
-  # TODO: make interval configurable
-  heartbeat "Updating mix node pool", 5.seconds:
-    mix.populateMixNodePool()
-
 proc processBootNodes(
-    bootnodes: seq[MixNodePubInfo], peermgr: PeerManager
-): Table[PeerId, MixPubInfo] =
-  var mixNodes = initTable[PeerId, MixPubInfo]()
+    bootnodes: seq[MixNodePubInfo], peermgr: PeerManager, mix: WakuMix
+) =
+  var count = 0
   for node in bootnodes:
     let pInfo = parsePeerInfo(node.multiAddr).valueOr:
       error "Failed to get peer id from multiaddress: ",
@@ -156,14 +60,15 @@ proc processBootNodes(
      error "Failed to parse multiaddress", multiAddr = node.multiAddr, error = error
      continue

-    mixNodes[peerId] = MixPubInfo.init(peerId, multiAddr, node.pubKey, peerPubKey.skkey)
+    let mixPubInfo = MixPubInfo.init(peerId, multiAddr, node.pubKey, peerPubKey.skkey)
+    mix.nodePool.add(mixPubInfo)
+    count.inc()
    peermgr.addPeer(
      RemotePeerInfo.init(peerId, @[multiAddr], mixPubKey = some(node.pubKey))
    )
-  mix_pool_size.set(len(mixNodes))
-  info "using mix bootstrap nodes ", bootNodes = mixNodes
-  return mixNodes
+  mix_pool_size.set(count)
+  info "using mix bootstrap nodes ", count = count

 proc new*(
    T: type WakuMix,
@@ -183,22 +88,28 @@ proc new*(
  )
  if bootnodes.len < minMixPoolSize:
    warn "publishing with mix won't work until atleast 3 mix nodes in node pool"
-  let initTable = processBootNodes(bootnodes, peermgr)
-  if len(initTable) < minMixPoolSize:
-    warn "publishing with mix won't work until atleast 3 mix nodes in node pool"
  var m = WakuMix(peerManager: peermgr, clusterId: clusterId, pubKey: mixPubKey)
-  procCall MixProtocol(m).init(localMixNodeInfo, initTable, peermgr.switch)
+  procCall MixProtocol(m).init(
+    localMixNodeInfo,
+    peermgr.switch,
+    delayStrategy =
+      ExponentialDelayStrategy.new(meanDelayMs = 50, rng = crypto.newRng()),
+  )
+  processBootNodes(bootnodes, peermgr, m)
+  if m.nodePool.len < minMixPoolSize:
+    warn "publishing with mix won't work until atleast 3 mix nodes in node pool"
  return ok(m)

+proc poolSize*(mix: WakuMix): int =
+  mix.nodePool.len
+
 method start*(mix: WakuMix) =
  info "starting waku mix protocol"
-  mix.nodePoolLoopHandle = mix.startMixNodePoolMgr()

 method stop*(mix: WakuMix) {.async.} =
-  if mix.nodePoolLoopHandle.isNil():
-    return
-  await mix.nodePoolLoopHandle.cancelAndWait()
-  mix.nodePoolLoopHandle = nil
+  discard

 # Mix Protocol