Compare commits

...

24 Commits

Author SHA1 Message Date
Sasha
74ad13ba24
chore: rename repo to comply with logos (#2757)
* chore: rename repo to comply with logos

* up script

* up allure

* fix logos-messaging-allure-js name

* fix logos-messaging-allure-js name

* fix logos-messaging-allure-js name
2026-01-05 23:53:38 +01:00
Fabiana Cecin
9a1e9cecc5
fix: peer cache test failure (#2770)
* fix peer cache test

* simplify test fix

---------

Co-authored-by: Arseniy Klempner <arseniyk@status.im>
2025-12-23 16:51:15 -08:00
Danish Arora
ab237410f9
chore: enable relay when lightpush is used (#2762) 2025-12-23 16:24:36 -08:00
Arseniy Klempner
f2ad23ad43
feat(rln)!: generate contract types, migrate from ethers to viem (#2705)
* feat: use wagmi to generate contract types

* feat: migrate rln from ethers to viem

* fix: remove .gitmodules

* fix: update readme

* fix: refactor to use a single viem client object

* fix: update comments, tsconfig

* feat: remove membership event tracking

* fix: script name in package.json and readme

* fix: only allow linea sepolia

* fix: consolidate viem types, typed window

* fix: use viem to infer type of decoded event

* fix: use js for generate abi script

* feat: generate abi and build rln package as release condition

* fix: check that eth_requestAccounts returns an array

* fix: handle error messages

* fix: use https instead of git for cloning in script

* fix: add warning annotations for contract typings check

* fix: install deps for rln package before building

* fix: use pnpm when installing rln contracts

* fix: use workspace flag to run abi script

* fix: add ref to checkout action

* fix: include pnpm in ci
2025-12-01 17:32:35 -08:00
Hanno Cornelius
788f7e62c5
feat: incorporate sds-r into reliable channels (#2701)
* wip

* feat: integrate sds-r with message channels

* fix: fix implementation guide, remove unrelated claude file

* feat: integrate sds-r within reliable channels SDK

* fix: fix import, export

* fix: fix build errors, simplify parallel operation

* fix: sigh. this file has 9 lives

* fix: simplify more

* fix: disable repair if not part of retrieval strategy

* fix: remove dead code, simplify

* fix: improve repair loop

Co-authored-by: fryorcraken <110212804+fryorcraken@users.noreply.github.com>

* chore: make retrievalStrategy mandatory argument

* chore: add repair multiplier, safer checks

---------

Co-authored-by: fryorcraken <commits@fryorcraken.xyz>
Co-authored-by: fryorcraken <110212804+fryorcraken@users.noreply.github.com>
2025-11-21 15:03:48 +00:00
fryorcraken
e5f51d7df1
feat: Reliable Channel: Status Sync, overflow protection, stop TODOs (#2729)
* feat(sds): messages with lost deps are delivered

This re-enables participation in the SDS protocol: a received message with
missing dependencies becomes part of the causal history, re-enabling
acknowledgements.

* fix(sds): avoid overflow in message history storage

* feat(reliable-channel): Emit a "Synced" Status with message counts

Return a "synced" or "syncing" status on `ReliableChannel.status` that
lets the developer know whether messages are missing and, if so, how many.

* fix: clean up subscriptions, intervals and timeouts when stopping

# Conflicts:
#	packages/sdk/src/reliable_channel/reliable_channel.ts

* chore: extract random timeout

* fix rebase

* revert listener changes

* typo

* Ensuring no inconsistency on missing message

* test: streamline, stop channels

* clear sync status sets when stopping channel

* prevent sync status event spam

* test: improve naming

* try/catch for callback

* encapsulate/simplify reliable channel API

* sanity checks

* test: ensure sync status cleanup
2025-11-16 08:57:12 +11:00
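The "synced"/"syncing" status described in the commit above can be sketched as a discriminated union; the type and field names here are illustrative assumptions, not the actual js-waku API.

```typescript
// Hypothetical shape of the sync status emitted by a reliable channel.
// "SyncStatus", "missingCount" and "computeSyncStatus" are illustrative names.
type SyncStatus =
  | { status: "synced" }
  | { status: "syncing"; missingCount: number };

// Derive the status from the set of message IDs known to be missing.
function computeSyncStatus(missingMessageIds: Set<string>): SyncStatus {
  return missingMessageIds.size === 0
    ? { status: "synced" }
    : { status: "syncing", missingCount: missingMessageIds.size };
}
```

A discriminated union lets callers switch on `status` and only read `missingCount` when it is meaningful.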
fryorcraken
84a6ea69cf
fix: cleanup routines on reliable channel and core protocols (#2733)
* fix: add stop methods to protocols to prevent event listener leaks

* fix: add abort signal support for graceful store query cancellation

* fix: call protocol stop methods in WakuNode.stop()

* fix: improve QueryOnConnect cleanup and abort signal handling

* fix: improve MissingMessageRetriever cleanup with abort signal

* fix: add stopAllRetries method to RetryManager for proper cleanup

* fix: implement comprehensive ReliableChannel stop() with proper cleanup

* fix: add active query tracking to QueryOnConnect and await its stop()

* fix: add stop() to IRelayAPI and IStore interfaces, implement in SDK wrappers

* align with usual naming (isStarted)

* remove unnecessary `await`

* test: `stop()` is now async

* chore: use more concise syntax

---------

Co-authored-by: Levente Kiss <levente.kiss@solarpunk.buzz>
2025-11-13 12:32:15 +11:00
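The abort-signal cleanup the commit above describes follows a common pattern; this is an illustrative sketch only, not the actual js-waku implementation, and all names are hypothetical.

```typescript
// Illustrative pattern: an AbortController lets stop() cancel in-flight work,
// preventing leaked listeners, intervals and pending queries.
class StoppableQueryRunner {
  private readonly abortController = new AbortController();

  public run(query: (signal: AbortSignal) => Promise<void>): Promise<void> {
    return query(this.abortController.signal).catch((err) => {
      // Swallow the expected cancellation on stop(); rethrow anything else.
      if (this.abortController.signal.aborted) return;
      throw err;
    });
  }

  public stop(): void {
    this.abortController.abort(); // fires "abort" on all registered listeners
  }
}
```

Because `abort()` dispatches the "abort" event synchronously, a query that listens on the signal can resolve immediately when the runner stops.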
049e564e89
ci: add missing jenkins lib 2025-11-06 18:02:33 +01:00
Sasha
101ffe8a04
chore: release master (#2663) 2025-11-04 15:56:42 +01:00
Hanno Cornelius
5334a7fcc9
feat: add SDS-Repair (SDS-R) to the SDS implementation (#2698)
* wip

* feat: integrate sds-r with message channels

* feat: add SDS-R events

* fix: fixed buffer handling incoming and outgoing

* fix: more buffer fixes

* fix: remove some magic numbers

* fix: buffer optimisation, backwards compatible senderId

* fix: fix implementation guide, remove unrelated claude file

* fix: further buffer optimisations

* fix: linting errors

* fix: suggestions from code review

Co-authored-by: Sasha <118575614+weboko@users.noreply.github.com>
Co-authored-by: fryorcraken <110212804+fryorcraken@users.noreply.github.com>

* fix: remove implementation guide

* fix: build errors, remove override, improve buffer

* fix: consistent use of MessageId and ParticipantId

* fix: switch to conditionally constructed from conditionally executed

---------

Co-authored-by: fryorcraken <commits@fryorcraken.xyz>
Co-authored-by: Sasha <118575614+weboko@users.noreply.github.com>
Co-authored-by: fryorcraken <110212804+fryorcraken@users.noreply.github.com>
2025-10-28 10:27:06 +00:00
Arseniy Klempner
115cdd28fe
chore: update hardcoded version of nwaku to 0.36.0, remove unused ci job (#2710)
* fix: update hardcoded version of nwaku to 0.36.0

* fix: remove unused/outdated rln-sync-tree job
2025-10-27 17:02:48 -07:00
Arseniy Klempner
0daa81d3d7
fix: run npm audit fix (#2696)
* fix: run npm audit fix

* fix: bump playwright image in CI
2025-10-27 13:47:14 -07:00
e2c9364053
ci: move it to a container
Add nix flake and use it in the pipeline for reliability and reproducibility.

Referenced issue:
* https://github.com/status-im/infra-ci/issues/188
2025-10-23 12:53:04 +02:00
Arseniy Klempner
0df18b2a75
feat: create @waku/run package for local dev env (#2678)
* feat: create @waku/run package for local dev env

* chore: add @waku/run to release please config

* feat: test @waku/run with playwright

* fix: don't run waku/run tests in CI

* fix: cache images so docker-compose can work offline

* feat: set nodekey and staticnode flags for each nwaku node

* fix: use constants for node ids

* chore: set directories for running via npx

* fix: remove .env, support env vars for nwaku ports

* fix: use separate db (same instance) for each node

* feat: add command to test dev env

* chore: use package version in container name

* fix: replace hardcoded WS/REST ports with constants/env vars

* chore: clean up README

* fix: refactor config printing into own function

* fix: add run package to release please manifest

* fix: defer to root folder gitignore/cspell

* fix: update node version and remove tsx

* fix: remove browser tests and express dep

* fix: replace magic values with constants

* fix: move to root .gitignore

* fix: move cspell to root
2025-10-22 21:38:28 -07:00
Sasha
ff9c43038e
chore: use npm token (#2693) 2025-10-22 14:26:13 +02:00
fryorcraken
b8a9d132c1
chore: npm publication (#2688)
* chore: npm publication

Fixing npm publication and warnings

* Upgrade workflow to use trusted publishing

https://docs.npmjs.com/trusted-publishers

* bump node js to 24

To avoid having to reinstall npm in pre-release for npmjs trusted publishers
2025-10-21 16:35:01 +11:00
Sasha
37c6c1e529
chore: expose sdk from waku/react (#2676) 2025-10-09 00:38:54 +02:00
Sasha
ad0bed69ba
feat: add waku/react package and make it compatible with React frameworks (#2656)
* chore: add waku/react package

* fix check

* remove not needed logic from waku/react package

* make it compatible with expo/next

* add to release please

* remove tests
2025-10-08 15:37:49 +02:00
Arseniy Klempner
d803565b30
feat(browser-tests): simplify, refactor, update dockerized browser node (#2623)
* feat(browser-tests): simplify, refactor, update dockerized browser node

* Update packages/browser-tests/web/index.ts

* fix: remove comments and console.logs from tests

* fix: add temporary logging

* fix: debugging static sharding

* fix: replace console with logger

* fix: remove use of any

* fix: log dial error

* fix: replace any with libp2p options

* fix: remove unused logic around sourcing address.env

* fix: uncomment log

* fix: add more logging and fix tests

* feat: add types for test-config

* fix: add types to server.ts

* fix: remove more uses of any

* fix: remove use of any in endpoint handlers
2025-10-07 10:54:19 -07:00
fryorcraken
e92f6a2409
feat!: do not send sync messages with empty history (#2658)
* feat!: do not send sync messages with empty history

A sync message without any history has no value. If there are no messages in the channel, then a sync message does not help.

If there are messages in the channel but this participant is not aware of them, it can mislead other participants into assuming that the channel is empty.

* fix test by adding a message to channel history

* make `pushOutgoingSyncMessage` return true even if no callback passed
2025-10-02 15:17:10 +10:00
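The guard the commit above describes reduces to a simple check; this is a minimal sketch assuming a sync message carries a causal history list, with illustrative names rather than the actual js-waku types.

```typescript
// Hypothetical sync message shape; "causalHistory" is an illustrative field name.
interface SyncMessage {
  causalHistory: string[]; // IDs of previously seen messages
}

// Only send a sync message when it actually carries history: an empty
// history adds no value and may mislead other participants.
function shouldSendSyncMessage(msg: SyncMessage): boolean {
  return msg.causalHistory.length > 0;
}
```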
fryorcraken
c0ecb6abba
fix!: SDS lamport timestamp overflow and keep it to current time (#2664)
* fix!: avoid SDS lamport timestamp overflow

The SDS timestamp is initialized to the current time in milliseconds, which is a 13-digit value (e.g. 1,759,223,090,052).

The maximum value for int32 is 2,147,483,647 (10 digits), which is clearly less than the timestamp.
Maximum value for uint32 is 4,294,967,295 (10 digits), which does not help with ms timestamp.

uint64 maps to BigInt in JavaScript, so it is best avoided unless strictly necessary, as it adds complexity.
max uint64 is 18,446,744,073,709,551,615 (20 digits).

Using seconds instead of milliseconds makes uint32 sufficient until the year 2106.

The lamport timestamp is only initialized to the current time for a new channel. The only scenario is a user joining a channel, believing it is new (having not yet received previous messages), and then starting to send messages. This means there may be an initial timestamp conflict until the logs are consolidated, which the protocol already handles.

* change lamportTimestamp to uint64 in protobuf

* lamport timestamp remains close to current time
2025-10-02 09:07:10 +10:00
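The overflow arithmetic in the commit above can be checked directly in plain JavaScript/TypeScript (this is a back-of-the-envelope demonstration, not js-waku code):

```typescript
// A millisecond timestamp does not fit in uint32, while a second-resolution
// timestamp does until the year 2106.
const MAX_UINT32 = 2 ** 32 - 1; // 4,294,967,295 (10 digits)

const nowMs = Date.now(); // ~1.7e12, 13 digits
const nowSeconds = Math.floor(nowMs / 1000); // 10 digits

const msOverflows = nowMs > MAX_UINT32; // true: ms overflow uint32
const secondsFit = nowSeconds <= MAX_UINT32; // true until 2106-02-07
console.log({ msOverflows, secondsFit });
```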
fryorcraken
593bc45225
feat: reliable channels search up to 30 days to find message (#2657)
* feat: query on connect stops on predicate

* test: query on connect stops at predicate

* feat: reliable channels search up to 30 days to find message

Queries stop once a valid sync or content message is found in the channel.

* fix: protect against decoding exceptions

* stop range queries on messages with a causal history
2025-10-01 21:35:52 +10:00
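The "stop on predicate" behaviour the commit above describes can be sketched over paged store results; the function and parameter names are illustrative assumptions, not the js-waku Store API.

```typescript
// Hypothetical "query until predicate" over pages of store results.
async function queryUntil<T>(
  pages: AsyncIterable<T[]>,
  isValid: (msg: T) => boolean
): Promise<T | undefined> {
  for await (const page of pages) {
    for (const msg of page) {
      // Stop as soon as a valid sync or content message is found,
      // instead of exhausting the full (up to 30-day) time range.
      if (isValid(msg)) return msg;
    }
  }
  return undefined; // nothing valid within the searched window
}
```

Short-circuiting per page keeps the number of store queries proportional to how far back the first valid message is, not to the full search window.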
Arseniy Klempner
bbcfc94879
feat(rln)!: use zerokit for credential generation (#2632)
Co-authored-by: Danish Arora <danisharora099@gmail.com>
2025-09-30 16:49:18 -07:00
Sasha
016a25d578
chore: stop updating TTL for peer store (#2653) 2025-10-01 00:33:26 +02:00
221 changed files with 12209 additions and 12289 deletions


@ -24,9 +24,11 @@
"cipherparams",
"ciphertext",
"circleci",
"circom",
"codecov",
"codegen",
"commitlint",
"cooldown",
"dependabot",
"dialable",
"dingpu",
@ -41,9 +43,7 @@
"Encrypters",
"enr",
"enrs",
"unsubscription",
"enrtree",
"unhandle",
"ephem",
"esnext",
"ethersproject",
@ -55,6 +55,7 @@
"fontsource",
"globby",
"gossipsub",
"hackathons",
"huilong",
"iasked",
"ihave",
@ -62,7 +63,7 @@
"ineed",
"IPAM",
"ipfs",
"cooldown",
"isready",
"iwant",
"jdev",
"jswaku",
@ -103,6 +104,7 @@
"reactjs",
"recid",
"rlnrelay",
"rlnv",
"roadmap",
"sandboxed",
"scanf",
@ -122,14 +124,18 @@
"typedoc",
"undialable",
"unencrypted",
"unhandle",
"unmarshal",
"unmount",
"unmounts",
"unsubscription",
"untracked",
"upgrader",
"vacp",
"varint",
"viem",
"vkey",
"wagmi",
"waku",
"wakuconnect",
"wakunode",
@ -139,6 +145,7 @@
"weboko",
"websockets",
"wifi",
"WTNS",
"xsalsa20",
"zerokit",
"Привет",
@ -163,6 +170,7 @@
"gen",
"proto",
"*.spec.ts",
"*.log",
"CHANGELOG.md"
],
"patterns": [

.github/CODEOWNERS

@ -1 +1 @@
* @waku-org/js-waku
* @logos-messaging/js-waku


@ -11,5 +11,5 @@ jobs:
steps:
- uses: actions/add-to-project@v0.5.0
with:
project-url: https://github.com/orgs/waku-org/projects/2
project-url: https://github.com/orgs/logos-messaging/projects/2
github-token: ${{ secrets.ADD_TO_PROJECT_20240815 }}


@ -15,7 +15,7 @@ on:
type: string
env:
NODE_JS: "22"
NODE_JS: "24"
jobs:
check:
@ -23,7 +23,7 @@ jobs:
steps:
- uses: actions/checkout@v3
with:
repository: waku-org/js-waku
repository: logos-messaging/logos-messaging-js
- uses: actions/setup-node@v3
with:
@ -38,7 +38,7 @@ jobs:
steps:
- uses: actions/checkout@v3
with:
repository: waku-org/js-waku
repository: logos-messaging/logos-messaging-js
- uses: actions/setup-node@v3
with:
node-version: ${{ env.NODE_JS }}
@ -57,13 +57,13 @@ jobs:
browser:
runs-on: ubuntu-latest
container:
image: mcr.microsoft.com/playwright:v1.53.1-jammy
image: mcr.microsoft.com/playwright:v1.56.1-jammy
env:
HOME: "/root"
steps:
- uses: actions/checkout@v3
with:
repository: waku-org/js-waku
repository: logos-messaging/logos-messaging-js
- uses: actions/setup-node@v3
with:
node-version: ${{ env.NODE_JS }}
@ -71,65 +71,18 @@ jobs:
- run: npm run build:esm
- run: npm run test:browser
build_rln_tree:
if: false # This condition disables the job
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
repository: waku-org/js-waku
- uses: actions/setup-node@v3
with:
node-version: ${{ env.NODE_JS }}
- name: Check for existing RLN tree artifact
id: check-artifact
uses: actions/github-script@v6
with:
script: |
const artifact = await github.rest.actions.listWorkflowRunArtifacts({
owner: context.repo.owner,
repo: context.repo.repo,
run_id: context.runId
});
console.log(artifact);
const foundArtifact = artifact.data.artifacts.find(art => art.name === 'rln_tree.tar.gz');
if (foundArtifact) {
core.setOutput('artifact_id', foundArtifact.id);
core.setOutput('artifact_found', 'true');
} else {
core.setOutput('artifact_found', 'false');
}
- name: Download RLN tree artifact
if: steps.check-artifact.outputs.artifact_found == 'true'
uses: actions/download-artifact@v4
with:
name: rln_tree.tar.gz
path: /tmp
- uses: ./.github/actions/npm
- name: Sync rln tree and save artifact
run: |
mkdir -p /tmp/rln_tree.db
npm run build:esm
npm run sync-rln-tree
tar -czf rln_tree.tar.gz -C /tmp/rln_tree.db .
- name: Upload artifact
uses: actions/upload-artifact@v4
with:
name: rln_tree.tar.gz
path: rln_tree.tar.gz
node:
uses: ./.github/workflows/test-node.yml
secrets: inherit
with:
nim_wakunode_image: ${{ inputs.nim_wakunode_image || 'wakuorg/nwaku:v0.35.1' }}
nim_wakunode_image: ${{ inputs.nim_wakunode_image || 'wakuorg/nwaku:v0.36.0' }}
test_type: node
allure_reports: true
node_optional:
uses: ./.github/workflows/test-node.yml
with:
nim_wakunode_image: ${{ inputs.nim_wakunode_image || 'wakuorg/nwaku:v0.35.1' }}
nim_wakunode_image: ${{ inputs.nim_wakunode_image || 'wakuorg/nwaku:v0.36.0' }}
test_type: node-optional
node_with_nwaku_master:
@ -151,7 +104,7 @@ jobs:
- uses: actions/checkout@v3
with:
repository: waku-org/js-waku
repository: logos-messaging/logos-messaging-js
if: ${{ steps.release.outputs.releases_created }}
- uses: actions/setup-node@v3
@ -160,12 +113,44 @@ jobs:
node-version: ${{ env.NODE_JS }}
registry-url: "https://registry.npmjs.org"
- uses: pnpm/action-setup@v4
if: ${{ steps.release.outputs.releases_created }}
with:
version: 9
- run: npm install
if: ${{ steps.release.outputs.releases_created }}
- run: npm run build
if: ${{ steps.release.outputs.releases_created }}
- name: Setup Foundry
if: ${{ steps.release.outputs.releases_created }}
uses: foundry-rs/foundry-toolchain@v1
with:
version: nightly
- name: Generate RLN contract ABIs
id: rln-abi
if: ${{ steps.release.outputs.releases_created }}
run: |
npm run setup:contract-abi -w @waku/rln || {
echo "::warning::Failed to generate contract ABIs, marking @waku/rln as private to skip publishing"
cd packages/rln
node -e "const fs = require('fs'); const pkg = JSON.parse(fs.readFileSync('package.json', 'utf8')); pkg.private = true; fs.writeFileSync('package.json', JSON.stringify(pkg, null, 2));"
echo "failed=true" >> $GITHUB_OUTPUT
}
- name: Rebuild with new ABIs
if: ${{ steps.release.outputs.releases_created && steps.rln-abi.outputs.failed != 'true' }}
run: |
npm install -w packages/rln
npm run build -w @waku/rln || {
echo "::warning::Failed to build @waku/rln, marking as private to skip publishing"
cd packages/rln
node -e "const fs = require('fs'); const pkg = JSON.parse(fs.readFileSync('package.json', 'utf8')); pkg.private = true; fs.writeFileSync('package.json', JSON.stringify(pkg, null, 2));"
}
- run: npm run publish
if: ${{ steps.release.outputs.releases_created }}
env:


@ -12,7 +12,7 @@ jobs:
steps:
- uses: actions/checkout@v3
with:
repository: waku-org/js-waku
repository: logos-messaging/logos-messaging-js
- uses: actions/setup-node@v3
with:


@ -8,9 +8,6 @@ on:
env:
NODE_JS: "22"
EXAMPLE_TEMPLATE: "web-chat"
EXAMPLE_NAME: "example"
EXAMPLE_PORT: "8080"
# Firefox in container fails due to $HOME not being owned by user running commands
# more details https://github.com/microsoft/playwright/issues/6500
HOME: "/root"
@ -20,7 +17,7 @@ jobs:
timeout-minutes: 60
runs-on: ubuntu-latest
container:
image: mcr.microsoft.com/playwright:v1.53.1-jammy
image: mcr.microsoft.com/playwright:v1.56.1-jammy
steps:
- uses: actions/checkout@v3
- uses: actions/setup-node@v3
@ -29,11 +26,8 @@ jobs:
- uses: ./.github/actions/npm
- name: Build browser container
run: npm run build --workspace=@waku/headless-tests
- name: Build browser test environment
run: npm run build --workspace=@waku/browser-tests
- name: Build entire monorepo
run: npm run build
- name: Run Playwright tests
run: npm run test --workspace=@waku/browser-tests


@ -2,7 +2,11 @@ on:
workflow_dispatch:
env:
NODE_JS: "22"
NODE_JS: "24"
permissions:
id-token: write
contents: read
jobs:
pre-release:
@ -10,19 +14,49 @@ jobs:
runs-on: ubuntu-latest
if: github.event_name == 'workflow_dispatch'
steps:
- uses: actions/checkout@v3
with:
repository: waku-org/js-waku
- uses: actions/setup-node@v3
- uses: actions/checkout@v4
with:
repository: logos-messaging/logos-messaging-js
ref: ${{ github.ref }}
- uses: actions/setup-node@v4
with:
node-version: ${{ env.NODE_JS }}
registry-url: "https://registry.npmjs.org"
- uses: pnpm/action-setup@v4
with:
version: 9
- run: npm install
- run: npm run build
- name: Setup Foundry
uses: foundry-rs/foundry-toolchain@v1
with:
version: nightly
- name: Generate RLN contract ABIs
id: rln-abi
run: |
npm run setup:contract-abi -w @waku/rln || {
echo "::warning::Failed to generate contract ABIs, marking @waku/rln as private to skip publishing"
cd packages/rln
node -e "const fs = require('fs'); const pkg = JSON.parse(fs.readFileSync('package.json', 'utf8')); pkg.private = true; fs.writeFileSync('package.json', JSON.stringify(pkg, null, 2));"
echo "failed=true" >> $GITHUB_OUTPUT
}
- name: Rebuild with new ABIs
if: steps.rln-abi.outputs.failed != 'true'
run: |
npm install -w packages/rln
npm run build -w @waku/rln || {
echo "::warning::Failed to build @waku/rln, marking as private to skip publishing"
cd packages/rln
node -e "const fs = require('fs'); const pkg = JSON.parse(fs.readFileSync('package.json', 'utf8')); pkg.private = true; fs.writeFileSync('package.json', JSON.stringify(pkg, null, 2));"
}
- run: npm run publish -- --tag next
env:
NODE_AUTH_TOKEN: ${{ secrets.NPM_JS_WAKU_PUBLISH }}


@ -24,7 +24,7 @@ on:
default: false
env:
NODE_JS: "22"
NODE_JS: "24"
# Ensure test type conditions remain consistent.
WAKU_SERVICE_NODE_PARAMS: ${{ (inputs.test_type == 'go-waku-master') && '--min-relay-peers-to-publish=0' || '' }}
DEBUG: ${{ inputs.debug }}
@ -42,8 +42,8 @@ jobs:
checks: write
steps:
- uses: actions/checkout@v3
with:
repository: waku-org/js-waku
with:
repository: logos-messaging/logos-messaging-js
- name: Remove unwanted software
uses: ./.github/actions/prune-vm
@ -62,14 +62,14 @@ jobs:
- name: Merge allure reports
if: always() && env.ALLURE_REPORTS == 'true'
run: node ci/mergeAllureResults.cjs
run: node ci/mergeAllureResults.cjs
- name: Get allure history
if: always() && env.ALLURE_REPORTS == 'true'
uses: actions/checkout@v3
continue-on-error: true
with:
repository: waku-org/allure-jswaku
repository: logos-messaging/logos-messaging-allure-js
ref: gh-pages
path: gh-pages
token: ${{ env.GITHUB_TOKEN }}
@ -89,7 +89,7 @@ jobs:
uses: peaceiris/actions-gh-pages@v3
with:
personal_token: ${{ env.GITHUB_TOKEN }}
external_repository: waku-org/allure-jswaku
external_repository: logos-messaging/logos-messaging-allure-js
publish_branch: gh-pages
publish_dir: allure-history
@ -125,4 +125,4 @@ jobs:
echo "## Run Information" >> $GITHUB_STEP_SUMMARY
echo "- **NWAKU**: ${{ env.WAKUNODE_IMAGE }}" >> $GITHUB_STEP_SUMMARY
echo "## Test Results" >> $GITHUB_STEP_SUMMARY
echo "Allure report will be available at: https://waku-org.github.io/allure-jswaku/${{ github.run_number }}" >> $GITHUB_STEP_SUMMARY
echo "Allure report will be available at: https://logos-messaging.github.io/logos-messaging-allure-js/${{ github.run_number }}" >> $GITHUB_STEP_SUMMARY


@ -18,7 +18,7 @@ on:
- all
env:
NODE_JS: "22"
NODE_JS: "24"
jobs:
test:
@ -34,8 +34,8 @@ jobs:
if: ${{ github.event.inputs.test_type == 'all' }}
steps:
- uses: actions/checkout@v3
with:
repository: waku-org/js-waku
with:
repository: logos-messaging/logos-messaging-js
- name: Remove unwanted software
uses: ./.github/actions/prune-vm
@ -74,8 +74,8 @@ jobs:
if: ${{ github.event.inputs.test_type != 'all' }}
steps:
- uses: actions/checkout@v3
with:
repository: waku-org/js-waku
with:
repository: logos-messaging/logos-messaging-js
- name: Remove unwanted software
uses: ./.github/actions/prune-vm

.gitignore

@ -17,4 +17,7 @@ packages/discovery/mock_local_storage
.giga
.cursor
.DS_Store
CLAUDE.md
CLAUDE.md
.env
postgres-data/
packages/rln/waku-rlnv2-contract/


@ -1,13 +1,15 @@
{
"packages/utils": "0.0.27",
"packages/proto": "0.0.14",
"packages/proto": "0.0.15",
"packages/interfaces": "0.0.34",
"packages/enr": "0.0.33",
"packages/core": "0.0.39",
"packages/message-encryption": "0.0.37",
"packages/relay": "0.0.22",
"packages/sdk": "0.0.35",
"packages/discovery": "0.0.12",
"packages/sds": "0.0.7",
"packages/rln": "0.1.9"
"packages/core": "0.0.40",
"packages/message-encryption": "0.0.38",
"packages/relay": "0.0.23",
"packages/sdk": "0.0.36",
"packages/discovery": "0.0.13",
"packages/sds": "0.0.8",
"packages/rln": "0.1.10",
"packages/react": "0.0.8",
"packages/run": "0.0.2"
}


@ -1,45 +0,0 @@
FROM node:20-slim
# Install Chrome dependencies
RUN apt-get update && apt-get install -y \
procps \
libglib2.0-0 \
libnss3 \
libnspr4 \
libatk1.0-0 \
libatk-bridge2.0-0 \
libcups2 \
libdrm2 \
libxkbcommon0 \
libxcomposite1 \
libxdamage1 \
libxfixes3 \
libxrandr2 \
libgbm1 \
libasound2 \
libpango-1.0-0 \
libcairo2 \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
# Copy package files
COPY package*.json ./
COPY packages/browser-tests/package.json ./packages/browser-tests/
COPY packages/headless-tests/package.json ./packages/headless-tests/
# Install dependencies and serve
RUN npm install && npm install -g serve
# Copy source files
COPY tsconfig.json ./
COPY packages/ ./packages/
# Build packages
RUN npm run build -w packages/headless-tests && \
npm run build:server -w packages/browser-tests && \
npx playwright install chromium
EXPOSE 3000
CMD ["npm", "run", "start:server", "-w", "packages/browser-tests"]


@ -23,6 +23,15 @@ npm install
npm run doc
```
# Using Nix shell
```shell
git clone https://github.com/waku-org/js-waku.git
cd js-waku
nix develop
npm install
npm run doc
```
## Bugs, Questions & Features
If you encounter any bug or would like to propose new features, feel free to [open an issue](https://github.com/waku-org/js-waku/issues/new/).

ci/Jenkinsfile

@ -1,5 +1,16 @@
#!/usr/bin/env groovy
library 'status-jenkins-lib@v1.9.27'
pipeline {
agent { label 'linux' }
agent {
docker {
label 'linuxcontainer'
image 'harbor.status.im/infra/ci-build-containers:linux-base-1.0.0'
args '--volume=/nix:/nix ' +
'--volume=/etc/nix:/etc/nix ' +
'--user jenkins'
}
}
options {
disableConcurrentBuilds()
@ -21,19 +32,25 @@ pipeline {
stages {
stage('Deps') {
steps {
sh 'npm install'
script {
nix.develop('npm install', pure: true)
}
}
}
stage('Packages') {
steps {
sh 'npm run build'
script {
nix.develop('npm run build', pure: true)
}
}
}
stage('Build') {
steps {
sh 'npm run doc'
script {
nix.develop('npm run doc', pure: true)
}
}
}
@ -41,7 +58,9 @@ pipeline {
when { expression { GIT_BRANCH.endsWith('master') } }
steps {
sshagent(credentials: ['status-im-auto-ssh']) {
sh 'npm run deploy'
script {
nix.develop('npm run deploy', pure: false)
}
}
}
}


@ -13,8 +13,8 @@ const Args = process.argv.slice(2);
const USE_HTTPS = Args[0] && Args[0].toUpperCase() === "HTTPS";
const branch = "gh-pages";
const org = "waku-org";
const repo = "js-waku";
const org = "logos-messaging";
const repo = "logos-messaging-js";
/* use SSH auth by default */
let repoUrl = USE_HTTPS
? `https://github.com/${org}/${repo}.git`

flake.lock (new file, generated)

@ -0,0 +1,26 @@
{
"nodes": {
"nixpkgs": {
"locked": {
"lastModified": 1761016216,
"narHash": "sha256-G/iC4t/9j/52i/nm+0/4ybBmAF4hzR8CNHC75qEhjHo=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "481cf557888e05d3128a76f14c76397b7d7cc869",
"type": "github"
},
"original": {
"id": "nixpkgs",
"ref": "nixos-25.05",
"type": "indirect"
}
},
"root": {
"inputs": {
"nixpkgs": "nixpkgs"
}
}
},
"root": "root",
"version": 7
}

flake.nix (new file)

@ -0,0 +1,33 @@
{
description = "Nix flake development shell.";
inputs = {
nixpkgs.url = "nixpkgs/nixos-25.05";
};
outputs =
{ self, nixpkgs }:
let
supportedSystems = [
"x86_64-linux"
"aarch64-linux"
"x86_64-darwin"
"aarch64-darwin"
];
forEachSystem = nixpkgs.lib.genAttrs supportedSystems;
pkgsFor = forEachSystem (system: import nixpkgs { inherit system; });
in
rec {
formatter = forEachSystem (system: pkgsFor.${system}.nixpkgs-fmt);
devShells = forEachSystem (system: {
default = pkgsFor.${system}.mkShellNoCC {
packages = with pkgsFor.${system}.buildPackages; [
git # 2.44.1
openssh # 9.7p1
nodejs_20 # v20.15.1
];
};
});
};
}

package-lock.json (generated): diff suppressed because it is too large


@ -14,11 +14,12 @@
"packages/rln",
"packages/sdk",
"packages/relay",
"packages/run",
"packages/tests",
"packages/reliability-tests",
"packages/headless-tests",
"packages/browser-tests",
"packages/build-utils"
"packages/build-utils",
"packages/react"
],
"scripts": {
"prepare": "husky",
@ -44,8 +45,7 @@
"doc": "run-s doc:*",
"doc:html": "typedoc --options typedoc.cjs",
"doc:cname": "echo 'js.waku.org' > docs/CNAME",
"publish": "node ./ci/publish.js",
"sync-rln-tree": "node ./packages/tests/src/sync-rln-tree.js"
"publish": "node ./ci/publish.js"
},
"devDependencies": {
"@size-limit/preset-big-lib": "^11.0.2",
@ -78,5 +78,6 @@
"*.{ts,js}": [
"eslint --fix"
]
}
},
"version": ""
}


@ -1,5 +1,4 @@
node_modules
dist
build
.DS_Store
*.log


@ -12,7 +12,7 @@ module.exports = {
plugins: ["import"],
extends: ["eslint:recommended"],
rules: {
"no-console": "off"
"no-unused-vars": ["error", { "argsIgnorePattern": "^_", "ignoreRestSiblings": true }]
},
globals: {
process: true


@ -0,0 +1,72 @@
# syntax=docker/dockerfile:1
# Build stage - install all dependencies and build
FROM node:22-bullseye AS builder
WORKDIR /app
# Copy package.json and temporarily remove workspace dependencies that can't be resolved
COPY package.json package.json.orig
RUN sed '/"@waku\/tests": "\*",/d' package.json.orig > package.json
RUN npm install --no-audit --no-fund
COPY src ./src
COPY types ./types
COPY tsconfig.json ./
COPY web ./web
RUN npm run build
# Production stage - only runtime dependencies
FROM node:22-bullseye
# Install required system deps for Playwright Chromium
RUN apt-get update && apt-get install -y \
wget \
gnupg \
ca-certificates \
fonts-liberation \
libatk-bridge2.0-0 \
libatk1.0-0 \
libatspi2.0-0 \
libcups2 \
libdbus-1-3 \
libdrm2 \
libgtk-3-0 \
libnspr4 \
libnss3 \
libx11-xcb1 \
libxcomposite1 \
libxdamage1 \
libxfixes3 \
libxkbcommon0 \
libxrandr2 \
xdg-utils \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
# Copy package files and install only production dependencies
COPY package.json package.json.orig
RUN sed '/"@waku\/tests": "\*",/d' package.json.orig > package.json
RUN npm install --only=production --no-audit --no-fund
# Copy built application from builder stage
COPY --from=builder /app/dist ./dist
# Install Playwright browsers (Chromium only) at runtime layer
RUN npx playwright install --with-deps chromium
ENV PORT=8080 \
NODE_ENV=production
EXPOSE 8080
# Use a script to handle CLI arguments and environment variables
COPY scripts/docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
CMD ["npm", "run", "start:server"]


@ -1,182 +1,174 @@
# Waku Browser Tests
This project provides a system for testing the Waku SDK in a browser environment.
This package provides a containerized Waku light node simulation server for testing and development. The server runs a headless browser using Playwright and exposes a REST API similar to the nwaku REST API. A Dockerfile is provided so js-waku nodes can be programmatically simulated and "deployed" in any Waku orchestration environment that uses Docker (e.g. [10ksim](https://github.com/vacp2p/10ksim)).
## Architecture
## Quick Start
The system consists of:
1. **Headless Web App**: A simple web application (in the `@waku/headless-tests` package) that loads the Waku SDK and exposes shared API functions.
2. **Express Server**: A server that communicates with the headless app using Playwright.
3. **Shared API**: TypeScript functions shared between the server and web app.
## Setup
1. Install dependencies:
```bash
# Install main dependencies
npm install
# Install headless app dependencies
cd ../headless-tests
npm install
cd ../browser-tests
```
2. Build the application:
### Build and Run
```bash
# Build the application
npm run build
# Start the server (port 8080)
npm run start:server
# Build and run Docker container
npm run docker:build
docker run -p 8080:8080 waku-browser-tests:local
```
This will:
- Build the headless web app using webpack
- Compile the TypeScript server code
## Configuration
## Running
Configure the Waku node using environment variables:
Start the server with:
### Network Configuration
- `WAKU_CLUSTER_ID`: Cluster ID (default: 1)
- `WAKU_SHARD`: Specific shard number - enables static sharding mode (optional)
**Sharding Behavior:**
- **Auto-sharding** (default): Uses `numShardsInCluster: 8` across cluster 1
- **Static sharding**: When `WAKU_SHARD` is set, uses only that specific shard
### Bootstrap Configuration
- `WAKU_ENR_BOOTSTRAP`: Enable ENR bootstrap mode with custom bootstrap peers (comma-separated)
- `WAKU_LIGHTPUSH_NODE`: Preferred lightpush node multiaddr (Docker only)
### ENR Bootstrap Mode
When `WAKU_ENR_BOOTSTRAP` is set:
- Disables default bootstrap (`defaultBootstrap: false`)
- Enables DNS discovery using production ENR trees
- Enables peer exchange and peer cache
- Uses the specified ENR for additional bootstrap peers
```bash
npm run start:server
# Example: ENR bootstrap mode
WAKU_ENR_BOOTSTRAP="enr:-QEnuEBEAyErHEfhiQxAVQoWowGTCuEF9fKZtXSd7H_PymHFhGJA3rGAYDVSHKCyJDGRLBGsloNbS8AZF33IVuefjOO6BIJpZIJ2NIJpcIQS39tkim11bHRpYWRkcnO4lgAvNihub2RlLTAxLmRvLWFtczMud2FrdXYyLnRlc3Quc3RhdHVzaW0ubmV0BgG73gMAODcxbm9kZS0wMS5hYy1jbi1ob25na29uZy1jLndha3V2Mi50ZXN0LnN0YXR1c2ltLm5ldAYBu94DACm9A62t7AQL4Ef5ZYZosRpQTzFVAB8jGjf1TER2wH-0zBOe1-MDBNLeA4lzZWNwMjU2azGhAzfsxbxyCkgCqq8WwYsVWH7YkpMLnU2Bw5xJSimxKav-g3VkcIIjKA" npm run start:server
```
Starting the server will:
1. Serve the headless app on port 8080
2. Start a headless browser to load the app
3. Expose API endpoints to interact with Waku
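The `WAKU_ENR_BOOTSTRAP` value is a comma-separated list of ENRs; a minimal parsing sketch (hypothetical helper, not the server's actual code):

```typescript
// Split WAKU_ENR_BOOTSTRAP into individual ENRs, dropping empty entries
// and anything that is not an ENR record.
function parseEnrBootstrap(value: string | undefined): string[] {
  return (value ?? "")
    .split(",")
    .map((entry) => entry.trim())
    .filter((entry) => entry.startsWith("enr:"));
}
```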
## API Endpoints
The server exposes the following HTTP endpoints:
### Node Management
- `GET /`: Health check; returns server status
- `GET /waku/v1/peer-info`: Get node peer information
- `POST /waku/v1/wait-for-peers`: Wait for peers that support specific protocols
### Messaging
- `POST /lightpush/v3/message`: Send a message via lightpush
### Static Files
- `GET /app/index.html`: Web application entry point
- `GET /app/*`: Static web application files
### Examples
#### Send a Message (Auto-sharding)
```bash
curl -X POST http://localhost:8080/lightpush/v3/message \
  -H "Content-Type: application/json" \
  -d '{
    "pubsubTopic": "",
    "message": {
      "contentTopic": "/test/1/example/proto",
      "payload": "SGVsbG8gV2FrdQ==",
      "version": 1
    }
  }'
```
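The `payload` field carries base64-encoded bytes; for instance, `SGVsbG8gV2FrdQ==` is the string `Hello Waku`. A quick round-trip in Node:

```typescript
// Encode a UTF-8 string for the payload field, and decode one back.
const encoded = Buffer.from("Hello Waku", "utf8").toString("base64");
const decoded = Buffer.from("SGVsbG8gV2FrdQ==", "base64").toString("utf8");
// encoded === "SGVsbG8gV2FrdQ==" and decoded === "Hello Waku"
```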
#### Send a Message (Explicit pubsub topic)
```bash
curl -X POST http://localhost:8080/lightpush/v3/message \
  -H "Content-Type: application/json" \
  -d '{
    "pubsubTopic": "/waku/2/rs/1/4",
    "message": {
      "contentTopic": "/test/1/example/proto",
      "payload": "SGVsbG8gV2FrdQ==",
      "version": 1
    }
  }'
```
#### Wait for Peers
```bash
curl -X POST http://localhost:8080/waku/v1/wait-for-peers \
  -H "Content-Type: application/json" \
  -d '{
    "timeoutMs": 30000,
    "protocols": ["lightpush", "filter"]
  }'
```
#### Get Peer Info
```bash
curl -X GET http://localhost:8080/waku/v1/peer-info
```
## CLI Usage
Run the server with CLI arguments:
```bash
# Custom cluster and shard
node dist/src/server.js --cluster-id=2 --shard=0
```
## Testing
The package includes several test suites:
```bash
# Basic server functionality tests (default)
npm test
# All tests
npm run test:all
# Individual test suites
npm run test:server       # Server-only tests
npm run test:integration  # Docker integration tests
npm run test:e2e          # End-to-end tests
# Docker testing workflow
npm run docker:build
npm run test:integration
```
**Test Types:**
- `server.spec.ts`: tests basic server functionality and static file serving
- `integration.spec.ts`: tests Docker container integration with external services
- `e2e.spec.ts`: full end-to-end tests using nwaku nodes
## Docker Usage
The package includes Docker support for containerized testing:
```bash
# Build image
docker build -t waku-browser-tests:local .
# Run with ENR bootstrap
docker run -p 8080:8080 \
-e WAKU_ENR_BOOTSTRAP="enr:-QEnuE..." \
-e WAKU_CLUSTER_ID="1" \
waku-browser-tests:local
# Run with specific configuration
docker run -p 8080:8080 \
-e WAKU_CLUSTER_ID="2" \
-e WAKU_SHARD="0" \
waku-browser-tests:local
```
## Development
The server automatically:
- Creates a Waku light node on startup
- Configures network settings from environment variables
- Enables appropriate protocols (lightpush, filter)
- Handles peer discovery and connection management
All endpoints are CORS-enabled for cross-origin requests.
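Putting it together, a minimal TypeScript client could drive the server like this (a sketch assuming the server is reachable on `localhost:8080`; the helper name is illustrative):

```typescript
// Build the request body for POST /lightpush/v3/message. An empty
// pubsubTopic requests auto-sharding.
function buildLightpushBody(contentTopic: string, text: string) {
  return {
    pubsubTopic: "",
    message: {
      contentTopic,
      payload: Buffer.from(text, "utf8").toString("base64"),
      version: 1
    }
  };
}

async function run(): Promise<void> {
  const base = "http://localhost:8080";
  // Wait until the node has usable lightpush peers ...
  await fetch(`${base}/waku/v1/wait-for-peers`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ timeoutMs: 30000, protocols: ["lightpush"] })
  });
  // ... then publish a message.
  await fetch(`${base}/lightpush/v3/message`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildLightpushBody("/test/1/example/proto", "Hello Waku"))
  });
}
```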


@@ -5,27 +5,38 @@
"type": "module",
"scripts": {
"start": "npm run start:server",
"start:server": "node ./dist/server.js",
"test": "npx playwright test",
"start:server": "PORT=8080 node ./dist/src/server.js",
"test": "npx playwright test tests/server.spec.ts --reporter=line",
"test:all": "npx playwright test --reporter=line",
"test:server": "npx playwright test tests/server.spec.ts --reporter=line",
"test:integration": "npx playwright test tests/integration.spec.ts --reporter=line",
"test:e2e": "npx playwright test tests/e2e.spec.ts --reporter=line",
"build:server": "tsc -p tsconfig.json",
"build": "npm run build:server"
"build:web": "esbuild web/index.ts --bundle --format=esm --platform=browser --outdir=dist/web && cp web/index.html dist/web/index.html",
"build": "npm-run-all -s build:server build:web",
"docker:build": "docker build -t waku-browser-tests:local . && docker tag waku-browser-tests:local waku-browser-tests:latest"
},
"dependencies": {
"@playwright/test": "^1.51.1",
"@waku/discovery": "^0.0.11",
"@waku/interfaces": "^0.0.33",
"@waku/sdk": "^0.0.34",
"@waku/utils": "0.0.27",
"cors": "^2.8.5",
"dotenv-flow": "^0.4.0",
"express": "^4.21.2",
"filter-obj": "^2.0.2",
"it-first": "^3.0.9"
},
"devDependencies": {
"@types/cors": "^2.8.15",
"@types/express": "^4.17.21",
"@types/node": "^20.10.0",
"@waku/tests": "*",
"axios": "^1.8.4",
"dotenv-flow": "^0.4.0",
"esbuild": "^0.21.5",
"npm-run-all": "^4.1.5",
"serve": "^14.2.3",
"typescript": "5.8.3",
"webpack-cli": "^6.0.1"
},
"dependencies": {
"@playwright/test": "^1.51.1",
"@waku/sdk": "^0.0.30",
"cors": "^2.8.5",
"express": "^4.21.2",
"node-polyfill-webpack-plugin": "^4.1.0"
"testcontainers": "^10.9.0",
"typescript": "5.8.3"
}
}


@@ -1,57 +1,39 @@
// For dynamic import of dotenv-flow
import { defineConfig, devices } from "@playwright/test";
import { Logger } from "@waku/utils";
const log = new Logger("playwright-config");
// Only load dotenv-flow in non-CI environments
if (!process.env.CI) {
// Need to use .js extension for ES modules
// eslint-disable-next-line import/extensions
await import("dotenv-flow/config.js");
try {
await import("dotenv-flow/config.js");
} catch (e) {
log.warn("dotenv-flow not found; skipping env loading");
}
}
const EXAMPLE_PORT = process.env.EXAMPLE_PORT || "8080";
// web-chat specific thingy
const EXAMPLE_TEMPLATE = process.env.EXAMPLE_TEMPLATE || "";
const BASE_URL = `http://127.0.0.1:${EXAMPLE_PORT}/${EXAMPLE_TEMPLATE}`;
const BASE_URL = `http://127.0.0.1:${EXAMPLE_PORT}`;
const TEST_IGNORE = process.env.CI ? ["tests/e2e.spec.ts"] : [];
/**
* See https://playwright.dev/docs/test-configuration.
*/
export default defineConfig({
testDir: "./tests",
/* Run tests in files in parallel */
testIgnore: TEST_IGNORE,
fullyParallel: true,
/* Fail the build on CI if you accidentally left test.only in the source code. */
forbidOnly: !!process.env.CI,
/* Retry on CI only */
retries: process.env.CI ? 2 : 0,
/* Opt out of parallel tests on CI. */
workers: process.env.CI ? 2 : undefined,
/* Reporter to use. See https://playwright.dev/docs/test-reporters */
reporter: "html",
/* Shared settings for all the projects below. See https://playwright.dev/docs/api/class-testoptions. */
use: {
/* Base URL to use in actions like `await page.goto('/')`. */
baseURL: BASE_URL,
/* Collect trace when retrying the failed test. See https://playwright.dev/docs/trace-viewer */
trace: "on-first-retry"
},
/* Configure projects for major browsers */
projects: [
{
name: "chromium",
use: { ...devices["Desktop Chrome"] }
}
],
]
/* Run your local dev server before starting the tests */
webServer: {
url: BASE_URL,
stdout: "pipe",
stderr: "pipe",
command: "npm run start:server",
reuseExistingServer: !process.env.CI,
timeout: 5 * 60 * 1000 // five minutes for bootstrapping an example
}
});


@@ -0,0 +1,54 @@
#!/bin/bash
# Docker entrypoint script for waku-browser-tests
# Handles CLI arguments and converts them to environment variables
# Supports reading discovered addresses from /etc/addrs/addrs.env (10k sim pattern)
echo "docker-entrypoint.sh"
echo "Using address: $addrs1"
# Only set WAKU_LIGHTPUSH_NODE if it's not already set and addrs1 is available
if [ -z "$WAKU_LIGHTPUSH_NODE" ] && [ -n "$addrs1" ]; then
export WAKU_LIGHTPUSH_NODE="$addrs1"
fi
echo "Num Args: $#"
echo "Args: $@"
echo "WAKU_LIGHTPUSH_NODE=$WAKU_LIGHTPUSH_NODE"
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
--cluster-id=*)
export WAKU_CLUSTER_ID="${1#*=}"
echo "Setting WAKU_CLUSTER_ID=${WAKU_CLUSTER_ID}"
shift
;;
--shard=*)
export WAKU_SHARD="${1#*=}"
echo "Setting WAKU_SHARD=${WAKU_SHARD}"
shift
;;
--lightpushnode=*)
export WAKU_LIGHTPUSH_NODE="${1#*=}"
echo "Setting WAKU_LIGHTPUSH_NODE=${WAKU_LIGHTPUSH_NODE}"
shift
;;
--enr-bootstrap=*)
export WAKU_ENR_BOOTSTRAP="${1#*=}"
echo "Setting WAKU_ENR_BOOTSTRAP=${WAKU_ENR_BOOTSTRAP}"
shift
;;
*)
# Unknown argument, notify user and keep it for the main command
echo "Warning: Unknown argument '$1' will be passed to the main command"
break
;;
esac
done
# If no specific command is provided, use the default CMD
if [ $# -eq 0 ]; then
set -- "npm" "run" "start:server"
fi
# Execute the main command
exec "$@"


@@ -1,22 +0,0 @@
/**
* Shared utilities for working with Waku nodes
* This file contains functions used by both browser tests and server
*/
/**
* Type definition for a minimal Waku node interface
* This allows us to use the same code in different contexts
*/
export interface IWakuNode {
libp2p: {
peerId: { toString(): string };
getMultiaddrs(): Array<{ toString(): string }>;
getProtocols(): any;
peerStore: {
all(): Promise<Array<{ id: { toString(): string } }>>;
};
};
lightPush: {
send: (encoder: any, message: { payload: Uint8Array }) => Promise<{ successes: any[] }>;
};
}


@@ -1,36 +0,0 @@
import { IWakuNode } from "./common.js";
/**
* Gets peer information from a Waku node
* Used in both server API endpoints and headless tests
*/
export async function getPeerInfo(waku: IWakuNode): Promise<{
peerId: string;
multiaddrs: string[];
peers: string[];
}> {
const multiaddrs = waku.libp2p.getMultiaddrs();
const peers = await waku.libp2p.peerStore.all();
return {
peerId: waku.libp2p.peerId.toString(),
multiaddrs: multiaddrs.map((addr) => addr.toString()),
peers: peers.map((peer) => peer.id.toString())
};
}
/**
* Gets debug information from a Waku node
* Used in both server API endpoints and tests
*/
export async function getDebugInfo(waku: IWakuNode): Promise<{
listenAddresses: string[];
peerId: string;
protocols: string[];
}> {
return {
listenAddresses: waku.libp2p.getMultiaddrs().map((addr) => addr.toString()),
peerId: waku.libp2p.peerId.toString(),
protocols: Array.from(waku.libp2p.getProtocols())
};
}


@@ -1,16 +0,0 @@
import { createEncoder, LightNode, SDKProtocolResult } from "@waku/sdk";
export async function pushMessage(
waku: LightNode,
contentTopic: string,
payload?: Uint8Array
): Promise<SDKProtocolResult> {
const enc = createEncoder({
contentTopic
});
const result = await waku.lightPush.send(enc, {
payload: payload ?? new Uint8Array()
});
return result;
}


@@ -1,274 +0,0 @@
import {
createDecoder,
createEncoder,
createLightNode,
CreateNodeOptions,
DecodedMessage,
LightNode,
SDKProtocolResult,
SubscribeResult
} from "@waku/sdk";
import { IWakuNode } from "./common.js";
/**
* Gets peer information from a Waku node
*/
export async function getPeerInfo(waku: IWakuNode): Promise<{
peerId: string;
multiaddrs: string[];
peers: string[];
}> {
const multiaddrs = waku.libp2p.getMultiaddrs();
const peers = await waku.libp2p.peerStore.all();
return {
peerId: waku.libp2p.peerId.toString(),
multiaddrs: multiaddrs.map((addr) => addr.toString()),
peers: peers.map((peer) => peer.id.toString())
};
}
/**
* Gets debug information from a Waku node
*/
export async function getDebugInfo(waku: IWakuNode): Promise<{
listenAddresses: string[];
peerId: string;
protocols: string[];
}> {
return {
listenAddresses: waku.libp2p.getMultiaddrs().map((addr) => addr.toString()),
peerId: waku.libp2p.peerId.toString(),
protocols: Array.from(waku.libp2p.getProtocols())
};
}
/**
* Pushes a message to the network
*/
export async function pushMessage(
waku: LightNode,
contentTopic: string,
payload?: Uint8Array,
options?: {
clusterId?: number;
shard?: number;
}
): Promise<SDKProtocolResult> {
if (!waku) {
throw new Error("Waku node not found");
}
const encoder = createEncoder({
contentTopic,
pubsubTopicShardInfo: {
clusterId: options?.clusterId ?? 1,
shard: options?.shard ?? 1
}
});
const result = await waku.lightPush.send(encoder, {
payload: payload ?? new Uint8Array()
});
return result;
}
/**
* Creates and initializes a Waku node
* Checks if a node is already running in window and stops it if it exists
*/
export async function createWakuNode(
options: CreateNodeOptions
): Promise<{ success: boolean; error?: string }> {
// Check if we're in a browser environment and a node already exists
if (typeof window === "undefined") {
return { success: false, error: "No window found" };
}
try {
if ((window as any).waku) {
await (window as any).waku.stop();
}
(window as any).waku = await createLightNode(options);
return { success: true };
} catch (error: any) {
return { success: false, error: error.message };
}
}
export async function startNode(): Promise<{
success: boolean;
error?: string;
}> {
if (typeof window !== "undefined" && (window as any).waku) {
try {
await (window as any).waku.start();
return { success: true };
} catch (error: any) {
// Silently continue if there's an error starting the node
return { success: false, error: error.message };
}
}
return { success: false, error: "Waku node not found in window" };
}
export async function stopNode(): Promise<{
success: boolean;
error?: string;
}> {
if (typeof window !== "undefined" && (window as any).waku) {
await (window as any).waku.stop();
return { success: true };
}
return { success: false, error: "Waku node not found in window" };
}
export async function dialPeers(
waku: LightNode,
peers: string[]
): Promise<{
total: number;
errors: string[];
}> {
const total = peers.length;
const errors: string[] = [];
await Promise.allSettled(
peers.map((peer) =>
waku.dial(peer).catch((error: any) => {
errors.push(error.message);
})
)
);
return { total, errors };
}
export async function subscribe(
waku: LightNode,
contentTopic: string,
options?: {
clusterId?: number;
shard?: number;
},
// eslint-disable-next-line no-unused-vars
callback?: (message: DecodedMessage) => void
): Promise<SubscribeResult> {
const clusterId = options?.clusterId ?? 42;
const shard = options?.shard ?? 0;
console.log(
`Creating decoder for content topic ${contentTopic} with clusterId=${clusterId}, shard=${shard}`
);
const pubsubTopic = `/waku/2/rs/${clusterId}/${shard}`;
let configuredTopics: string[] = [];
try {
const protocols = waku.libp2p.getProtocols();
console.log(`Available protocols: ${Array.from(protocols).join(", ")}`);
const metadataMethod = (waku.libp2p as any)._services?.metadata?.getInfo;
if (metadataMethod) {
const metadata = metadataMethod();
console.log(`Node metadata: ${JSON.stringify(metadata)}`);
if (metadata?.pubsubTopics && Array.isArray(metadata.pubsubTopics)) {
configuredTopics = metadata.pubsubTopics;
console.log(
`Found configured pubsub topics: ${configuredTopics.join(", ")}`
);
}
}
if (
configuredTopics.length > 0 &&
!configuredTopics.includes(pubsubTopic)
) {
console.warn(
`Pubsub topic ${pubsubTopic} is not configured. Configured topics: ${configuredTopics.join(", ")}`
);
for (const topic of configuredTopics) {
const parts = topic.split("/");
if (parts.length === 6 && parts[1] === "waku" && parts[3] === "rs") {
console.log(`Found potential matching pubsub topic: ${topic}`);
// Use the first topic as a fallback if no exact match is found
// This isn't ideal but allows tests to continue
const topicClusterId = parseInt(parts[4]);
const topicShard = parseInt(parts[5]);
if (!isNaN(topicClusterId) && !isNaN(topicShard)) {
console.log(
`Using pubsub topic with clusterId=${topicClusterId}, shard=${topicShard} instead`
);
const decoder = createDecoder(contentTopic, {
clusterId: topicClusterId,
shard: topicShard
});
try {
const subscription = await waku.filter.subscribe(
decoder,
callback ??
((_message) => {
console.log(_message);
})
);
return subscription;
} catch (innerErr: any) {
console.error(
`Error with alternative pubsub topic: ${innerErr.message}`
);
}
}
}
}
}
} catch (err) {
console.error(`Error checking node protocols: ${String(err)}`);
}
const decoder = createDecoder(contentTopic, {
clusterId,
shard
});
try {
const subscription = await waku.filter.subscribe(
decoder,
callback ??
((_message) => {
console.log(_message);
})
);
return subscription;
} catch (err: any) {
if (err.message && err.message.includes("Pubsub topic")) {
console.error(`Pubsub topic error: ${err.message}`);
console.log("Subscription failed, but continuing with empty result");
return {
unsubscribe: async () => {
console.log("No-op unsubscribe from failed subscription");
}
} as unknown as SubscribeResult;
}
throw err;
}
}
export const API = {
getPeerInfo,
getDebugInfo,
pushMessage,
createWakuNode,
startNode,
stopNode,
dialPeers,
subscribe
};


@@ -1,43 +1,63 @@
import { Browser, chromium, Page } from "@playwright/test";
import { Logger } from "@waku/utils";
const log = new Logger("browser-test");
// Global variable to store the browser and page
let browser: Browser | undefined;
let page: Page | undefined;
/**
* Initialize browser and load headless page
*/
export async function initBrowser(): Promise<void> {
browser = await chromium.launch({
headless: true
});
export async function initBrowser(appPort: number): Promise<void> {
try {
const launchArgs = ["--no-sandbox", "--disable-setuid-sandbox"];
if (!browser) {
throw new Error("Failed to initialize browser");
browser = await chromium.launch({
headless: true,
args: launchArgs
});
if (!browser) {
throw new Error("Failed to initialize browser");
}
page = await browser.newPage();
// Forward browser console to server logs
page.on('console', msg => {
const type = msg.type();
const text = msg.text();
log.info(`[Browser Console ${type.toUpperCase()}] ${text}`);
});
page.on('pageerror', error => {
log.error('[Browser Page Error]', error.message);
});
await page.goto(`http://localhost:${appPort}/app/index.html`, {
waitUntil: "networkidle",
});
await page.waitForFunction(
() => {
return window.wakuApi && typeof window.wakuApi.createWakuNode === "function";
},
{ timeout: 30000 }
);
log.info("Browser initialized successfully with wakuApi");
} catch (error) {
log.error("Error initializing browser:", error);
throw error;
}
page = await browser.newPage();
await page.goto("http://localhost:8080");
}
/**
* Get the current page instance
*/
export function getPage(): Page | undefined {
return page;
}
/**
* Set the page instance (for use by server.ts)
*/
export function setPage(pageInstance: Page | undefined): void {
page = pageInstance;
}
/**
* Closes the browser instance
*/
export async function closeBrowser(): Promise<void> {
if (browser) {
await browser.close();


@@ -1,89 +0,0 @@
// Message queue to store received messages by content topic
export interface QueuedMessage {
payload: number[] | undefined;
contentTopic: string;
timestamp: number;
receivedAt: number;
}
export interface MessageQueue {
[contentTopic: string]: QueuedMessage[];
}
// Global message queue storage
const messageQueue: MessageQueue = {};
/**
* Store a message in the queue
*/
export function storeMessage(message: QueuedMessage): void {
const { contentTopic } = message;
if (!messageQueue[contentTopic]) {
messageQueue[contentTopic] = [];
}
messageQueue[contentTopic].push(message);
}
/**
* Get messages for a specific content topic
*/
export function getMessages(
contentTopic: string,
options?: {
startTime?: number;
endTime?: number;
pageSize?: number;
ascending?: boolean;
}
): QueuedMessage[] {
if (!messageQueue[contentTopic]) {
return [];
}
let messages = [...messageQueue[contentTopic]];
// Filter by time if specified
if (options?.startTime || options?.endTime) {
messages = messages.filter((msg) => {
const afterStart = options.startTime
? msg.timestamp >= options.startTime
: true;
const beforeEnd = options.endTime
? msg.timestamp <= options.endTime
: true;
return afterStart && beforeEnd;
});
}
// Sort by timestamp
messages.sort((a, b) => {
return options?.ascending
? a.timestamp - b.timestamp
: b.timestamp - a.timestamp;
});
// Limit result size
if (options?.pageSize && options.pageSize > 0) {
messages = messages.slice(0, options.pageSize);
}
return messages;
}
/**
* Clear all messages from the queue
*/
export function clearQueue(): void {
Object.keys(messageQueue).forEach((topic) => {
delete messageQueue[topic];
});
}
/**
* Get all content topics in the queue
*/
export function getContentTopics(): string[] {
return Object.keys(messageQueue);
}


@@ -1,223 +0,0 @@
import express, { Request, Response, Router } from "express";
import { getPage } from "../browser/index.js";
const router = Router();
router.head("/admin/v1/create-node", (_req: Request, res: Response) => {
res.status(200).end();
});
router.head("/admin/v1/start-node", (_req: Request, res: Response) => {
res.status(200).end();
});
router.head("/admin/v1/stop-node", (_req: Request, res: Response) => {
res.status(200).end();
});
router.post("/admin/v1/create-node", (async (req: Request, res: Response) => {
try {
const {
defaultBootstrap = true,
networkConfig
} = req.body;
// Validate that networkConfig is provided
if (!networkConfig) {
return res.status(400).json({
code: 400,
message: "networkConfig is required"
});
}
// Validate that networkConfig has required properties
if (networkConfig.clusterId === undefined) {
return res.status(400).json({
code: 400,
message: "networkConfig.clusterId is required"
});
}
const page = getPage();
if (!page) {
return res.status(503).json({
code: 503,
message: "Browser not initialized"
});
}
const result = await page.evaluate(
({ defaultBootstrap, networkConfig }) => {
const nodeOptions: any = {
defaultBootstrap,
relay: {
advertise: true,
gossipsubOptions: {
allowPublishToZeroPeers: true
}
},
filter: true,
peers: [],
networkConfig: {
clusterId: networkConfig.clusterId,
shards: networkConfig.shards || [0]
}
};
return window.wakuAPI.createWakuNode(nodeOptions);
},
{ defaultBootstrap, networkConfig }
);
if (result && result.success) {
res.status(200).json({
success: true,
message: "Waku node created successfully"
});
} else {
res.status(500).json({
code: 500,
message: "Failed to create Waku node",
details: result?.error || "Unknown error"
});
}
} catch (error: any) {
res.status(500).json({
code: 500,
message: `Could not create Waku node: ${error.message}`
});
}
}) as express.RequestHandler);
// Start Waku node endpoint
router.post("/admin/v1/start-node", (async (_req: Request, res: Response) => {
try {
const page = getPage();
if (!page) {
return res.status(503).json({
code: 503,
message: "Browser not initialized"
});
}
const result = await page.evaluate(() => {
return window.wakuAPI.startNode
? window.wakuAPI.startNode()
: { error: "startNode function not available" };
});
if (result && !result.error) {
res.status(200).json({
success: true,
message: "Waku node started successfully"
});
} else {
res.status(500).json({
code: 500,
message: "Failed to start Waku node",
details: result?.error || "Unknown error"
});
}
} catch (error: any) {
res.status(500).json({
code: 500,
message: `Could not start Waku node: ${error.message}`
});
}
}) as express.RequestHandler);
// Stop Waku node endpoint
router.post("/admin/v1/stop-node", (async (_req: Request, res: Response) => {
try {
const page = getPage();
if (!page) {
return res.status(503).json({
code: 503,
message: "Browser not initialized"
});
}
const result = await page.evaluate(() => {
return window.wakuAPI.stopNode
? window.wakuAPI.stopNode()
: { error: "stopNode function not available" };
});
if (result && !result.error) {
res.status(200).json({
success: true,
message: "Waku node stopped successfully"
});
} else {
res.status(500).json({
code: 500,
message: "Failed to stop Waku node",
details: result?.error || "Unknown error"
});
}
} catch (error: any) {
res.status(500).json({
code: 500,
message: `Could not stop Waku node: ${error.message}`
});
}
}) as express.RequestHandler);
// Dial to peers endpoint
router.post("/admin/v1/peers", (async (req: Request, res: Response) => {
try {
const { peerMultiaddrs } = req.body;
if (!peerMultiaddrs || !Array.isArray(peerMultiaddrs)) {
return res.status(400).json({
code: 400,
message: "Invalid request. peerMultiaddrs array is required."
});
}
const page = getPage();
if (!page) {
return res.status(503).json({
code: 503,
message: "Browser not initialized"
});
}
const result = await page.evaluate(
({ peerAddrs }) => {
return window.wakuAPI.dialPeers(window.waku, peerAddrs);
},
{ peerAddrs: peerMultiaddrs }
);
if (result) {
res.status(200).json({
peersAdded: peerMultiaddrs.length - (result.errors?.length || 0),
peerErrors:
result.errors?.map((error: string, index: number) => {
return {
peerMultiaddr: peerMultiaddrs[index],
error
};
}) || []
});
} else {
res.status(500).json({
code: 500,
message: "Failed to dial peers"
});
}
} catch (error: any) {
res.status(500).json({
code: 500,
message: `Could not dial peers: ${error.message}`
});
}
}) as express.RequestHandler);
export default router;


@@ -1,51 +0,0 @@
import express, { Request, Response, Router } from "express";
import { getPage } from "../browser/index.js";
const router = Router();
// Get node info endpoint
router.get("/info", (async (_req: Request, res: Response) => {
try {
const page = getPage();
if (!page) {
return res.status(503).json({
code: 503,
message: "Browser not initialized"
});
}
const result = await page.evaluate(() => {
return window.wakuAPI.getPeerInfo(window.waku);
});
res.json(result);
} catch (error: any) {
console.error("Error getting info:", error);
res.status(500).json({ error: error.message });
}
}) as express.RequestHandler);
// Get node debug info endpoint
router.get("/debug/v1/info", (async (_req: Request, res: Response) => {
try {
const page = getPage();
if (!page) {
return res.status(503).json({
code: 503,
message: "Browser not initialized"
});
}
const result = await page.evaluate(() => {
return window.wakuAPI.getDebugInfo(window.waku);
});
res.json(result);
} catch (error: any) {
console.error("Error getting debug info:", error);
res.status(500).json({ error: error.message });
}
}) as express.RequestHandler);
export default router;


@@ -1,131 +0,0 @@
import express, { Request, Response, Router } from "express";
import { getPage } from "../browser/index.js";
const router = Router();
// Legacy push message endpoint
router.post("/push", (async (req: Request, res: Response) => {
try {
const { contentTopic, payload } = req.body;
if (!contentTopic) {
return res.status(400).json({
code: 400,
message: "Invalid request. contentTopic is required."
});
}
const page = getPage();
if (!page) {
return res.status(503).json({
code: 503,
message: "Browser not initialized"
});
}
const result = await page.evaluate(
({ topic, data }) => {
return window.wakuAPI.pushMessage(window.waku, topic, data);
},
{
topic: contentTopic,
data: payload
}
);
if (result) {
res.status(200).json({
messageId:
"0x" +
Buffer.from(contentTopic + Date.now().toString()).toString("hex")
});
} else {
res.status(503).json({
code: 503,
message: "Could not publish message: no suitable peers"
});
}
} catch (error: any) {
if (
error.message.includes("size exceeds") ||
error.message.includes("stream reset")
) {
res.status(503).json({
code: 503,
message:
"Could not publish message: message size exceeds gossipsub max message size"
});
} else {
res.status(500).json({
code: 500,
message: `Could not publish message: ${error.message}`
});
}
}
}) as express.RequestHandler);
// Waku REST API compatible push endpoint
router.post("/lightpush/v1/message", (async (req: Request, res: Response) => {
try {
const { message } = req.body;
if (!message || !message.contentTopic) {
return res.status(400).json({
code: 400,
message: "Invalid request. contentTopic is required."
});
}
const page = getPage();
if (!page) {
return res.status(503).json({
code: 503,
message: "Browser not initialized"
});
}
const result = await page.evaluate(
({ contentTopic, payload }) => {
return window.wakuAPI.pushMessage(window.waku, contentTopic, payload);
},
{
contentTopic: message.contentTopic,
payload: message.payload
}
);
if (result) {
res.status(200).json({
messageId:
"0x" +
Buffer.from(message.contentTopic + Date.now().toString()).toString(
"hex"
)
});
} else {
res.status(503).json({
code: 503,
message: "Could not publish message: no suitable peers"
});
}
} catch (error: any) {
if (
error.message.includes("size exceeds") ||
error.message.includes("stream reset")
) {
res.status(503).json({
code: 503,
message:
"Could not publish message: message size exceeds gossipsub max message size"
});
} else {
res.status(500).json({
code: 500,
message: `Could not publish message: ${error.message}`
});
}
}
}) as express.RequestHandler);
export default router;
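Both push endpoints above synthesize a `messageId` by hex-encoding the content topic concatenated with the current timestamp. A minimal standalone sketch of that construction (the helper name is illustrative, not part of the source):

```typescript
// Illustrative helper mirroring the messageId construction used above:
// hex-encode "<contentTopic><epoch-millis>" and prefix with "0x".
function makeMessageId(contentTopic: string, now: number = Date.now()): string {
  return "0x" + Buffer.from(contentTopic + now.toString()).toString("hex");
}

const id = makeMessageId("/waku/2/test/proto", 1700000000000);
console.log(id.startsWith("0x")); // true
```

Note that this is a timestamp-derived identifier rather than a content hash, so two pushes to the same topic within the same millisecond would produce identical IDs.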

@@ -0,0 +1,87 @@
import { Router } from "express";
import { Logger } from "@waku/utils";
import {
createEndpointHandler,
validators,
errorHandlers,
} from "../utils/endpoint-handler.js";
interface LightPushResult {
successes: string[];
failures: Array<{ error: string; peerId?: string }>;
}
const log = new Logger("routes:waku");
const router = Router();
const corsEndpoints = [
"/waku/v1/wait-for-peers",
"/waku/v1/peer-info",
"/lightpush/v3/message",
];
corsEndpoints.forEach((endpoint) => {
router.head(endpoint, (_req, res) => {
res.status(200).end();
});
});
router.post(
"/waku/v1/wait-for-peers",
createEndpointHandler({
methodName: "waitForPeers",
validateInput: (body: unknown) => {
const bodyObj = body as { timeoutMs?: number; protocols?: string[] };
return [
bodyObj.timeoutMs || 10000,
bodyObj.protocols || ["lightpush", "filter"],
];
},
transformResult: () => ({
success: true,
message: "Successfully connected to peers",
}),
}),
);
router.get(
"/waku/v1/peer-info",
createEndpointHandler({
methodName: "getPeerInfo",
validateInput: validators.noInput,
}),
);
router.post(
"/lightpush/v3/message",
createEndpointHandler({
methodName: "pushMessageV3",
validateInput: (body: unknown): [string, string, string] => {
const validatedRequest = validators.requireLightpushV3(body);
return [
validatedRequest.message.contentTopic,
validatedRequest.message.payload,
validatedRequest.pubsubTopic,
];
},
handleError: errorHandlers.lightpushError,
transformResult: (result: unknown) => {
const lightPushResult = result as LightPushResult;
if (lightPushResult && lightPushResult.successes && lightPushResult.successes.length > 0) {
log.info("[Server] Message successfully sent via v3 lightpush!");
return {
success: true,
result: lightPushResult,
};
} else {
return {
success: false,
error: "Could not publish message: no suitable peers",
};
}
},
}),
);
export default router;
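For reference, a request body matching what the `/lightpush/v3/message` validator accepts might look like this (the topic values are illustrative; `pubsubTopic` is optional and the payload must be base64-encoded):

```typescript
// Example request body for POST /lightpush/v3/message, matching the
// validator's expectations: pubsubTopic optional, payload base64-encoded,
// version optional (defaults to 1).
const body = {
  pubsubTopic: "/waku/2/rs/1/0", // illustrative pubsub topic
  message: {
    contentTopic: "/example/1/chat/proto",
    payload: Buffer.from("hello waku").toString("base64"),
    version: 1,
  },
};
console.log(body.message.payload); // "aGVsbG8gd2FrdQ=="
```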

@@ -1,507 +1,244 @@
import { ChildProcess, exec } from "child_process";
import * as net from "net";
import { dirname, join } from "path";
import { fileURLToPath } from "url";
import * as path from "path";
import { chromium } from "@playwright/test";
import cors from "cors";
import express, { Request, Response } from "express";
import { Logger } from "@waku/utils";
import adminRouter from "./routes/admin.js";
import { setPage, getPage, closeBrowser } from "./browser/index.js";
import wakuRouter from "./routes/waku.js";
import { initBrowser, getPage, closeBrowser } from "./browser/index.js";
import {
DEFAULT_CLUSTER_ID,
DEFAULT_NUM_SHARDS,
Protocols,
AutoSharding,
StaticSharding,
} from "@waku/interfaces";
import { CreateNodeOptions } from "@waku/sdk";
import type { WindowNetworkConfig } from "../types/global.js";
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
interface NodeError extends Error {
code?: string;
}
const log = new Logger("server");
const app = express();
app.use(cors());
app.use(express.json());
app.use(adminRouter);
let headlessServerProcess: ChildProcess | undefined;
import * as fs from "fs";
interface MessageQueue {
[contentTopic: string]: Array<{
payload: number[] | undefined;
contentTopic: string;
timestamp: number;
receivedAt: number;
}>;
}
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
const distRoot = path.resolve(__dirname, "..");
const webDir = path.resolve(distRoot, "web");
const messageQueue: MessageQueue = {};
async function startHeadlessServer(): Promise<void> {
return new Promise((resolve, reject) => {
try {
headlessServerProcess = exec(
`serve ${join(__dirname, "../../headless-tests")} -p 8080 -s`,
(error) => {
if (error) {
console.error(`Error starting serve: ${error}`);
return;
}
}
);
setTimeout(resolve, 2000);
} catch (error) {
console.error("Failed to start headless server:", error);
reject(error);
}
});
}
async function initBrowser(): Promise<void> {
app.get("/app/index.html", (_req: Request, res: Response) => {
try {
const browser = await chromium.launch({
headless: true
});
const htmlPath = path.join(webDir, "index.html");
let htmlContent = fs.readFileSync(htmlPath, "utf8");
if (!browser) {
throw new Error("Failed to initialize browser");
const networkConfig: WindowNetworkConfig = {};
if (process.env.WAKU_CLUSTER_ID) {
networkConfig.clusterId = parseInt(process.env.WAKU_CLUSTER_ID, 10);
}
if (process.env.WAKU_SHARD) {
networkConfig.shards = [parseInt(process.env.WAKU_SHARD, 10)];
log.info("Using static shard:", networkConfig.shards);
}
const page = await browser.newPage();
const lightpushNode = process.env.WAKU_LIGHTPUSH_NODE || null;
const enrBootstrap = process.env.WAKU_ENR_BOOTSTRAP || null;
try {
await checkServerAvailability("http://localhost:8080", 3);
await page.goto("http://localhost:8080");
} catch (error) {
console.error(
"Error loading headless app, continuing without it:",
error
);
await page.setContent(`
<html>
<head><title>Waku Test Environment</title></head>
<body>
<h1>Waku Test Environment (No headless app available)</h1>
<script>
window.waku = {};
window.wakuAPI = {
getPeerInfo: () => ({ peerId: "mock-peer-id", multiaddrs: [], peers: [] }),
getDebugInfo: () => ({ listenAddresses: [], peerId: "mock-peer-id", protocols: [] }),
pushMessage: () => ({ successes: [], failures: [{ error: "No headless app available" }] }),
dialPeers: () => ({ total: 0, errors: ["No headless app available"] }),
createWakuNode: () => ({ success: true, message: "Mock node created" }),
startNode: () => ({ success: true }),
stopNode: () => ({ success: true }),
subscribe: () => ({ unsubscribe: async () => {} })
};
</script>
</body>
</html>
`);
}
log.info("Network config on server start, pre headless:", networkConfig);
setPage(page);
const configScript = ` <script>
window.__WAKU_NETWORK_CONFIG = ${JSON.stringify(networkConfig)};
window.__WAKU_LIGHTPUSH_NODE = ${JSON.stringify(lightpushNode)};
window.__WAKU_ENR_BOOTSTRAP = ${JSON.stringify(enrBootstrap)};
</script>`;
const originalPattern =
' <script type="module" src="./index.js"></script>';
const replacement = `${configScript}\n <script type="module" src="./index.js"></script>`;
htmlContent = htmlContent.replace(originalPattern, replacement);
res.setHeader("Content-Type", "text/html");
res.send(htmlContent);
} catch (error) {
console.error("Error initializing browser:", error);
throw error;
log.error("Error serving dynamic index.html:", error);
res.status(500).send("Error loading page");
}
}
});
async function checkServerAvailability(
url: string,
retries = 3
): Promise<boolean> {
for (let i = 0; i < retries; i++) {
try {
const response = await fetch(url, { method: "HEAD" });
if (response.ok) return true;
} catch (e) {
await new Promise((resolve) => setTimeout(resolve, 1000));
}
}
throw new Error(`Server at ${url} not available after ${retries} retries`);
}
app.use("/app", express.static(webDir, { index: false }));
async function findAvailablePort(
startPort: number,
maxAttempts = 10
): Promise<number> {
for (let attempt = 0; attempt < maxAttempts; attempt++) {
const port = startPort + attempt;
try {
// Try to create a server on the port
await new Promise<void>((resolve, reject) => {
const server = net
.createServer()
.once("error", (err: any) => {
reject(err);
})
.once("listening", () => {
// If we can listen, the port is available
server.close();
resolve();
})
.listen(port);
});
app.use(wakuRouter);
// If we get here, the port is available
return port;
} catch (err) {
// Port is not available, continue to next port
}
}
// If we tried all ports and none are available, throw an error
throw new Error(
`Unable to find an available port after ${maxAttempts} attempts`
);
}
async function startServer(port: number = 3000): Promise<void> {
try {
await startHeadlessServer();
await initBrowser();
await startAPI(port);
} catch (error: any) {
console.error("Error starting server:", error);
}
}
async function startAPI(requestedPort: number): Promise<void> {
async function startAPI(requestedPort: number): Promise<number> {
try {
app.get("/", (_req: Request, res: Response) => {
res.json({ status: "Waku simulation server is running" });
});
app.get("/info", (async (_req: Request, res: Response) => {
try {
const result = await getPage()?.evaluate(() => {
return window.wakuAPI.getPeerInfo(window.waku);
});
res.json(result);
} catch (error: any) {
console.error("Error getting info:", error);
res.status(500).json({ error: error.message });
}
}) as express.RequestHandler);
app.get("/debug/v1/info", (async (_req: Request, res: Response) => {
try {
const result = await getPage()?.evaluate(() => {
return window.wakuAPI.getDebugInfo(window.waku);
});
res.json(result);
} catch (error: any) {
console.error("Error getting debug info:", error);
res.status(500).json({ error: error.message });
}
}) as express.RequestHandler);
app.post("/lightpush/v1/message", (async (req: Request, res: Response) => {
try {
const { message } = req.body;
if (!message || !message.contentTopic) {
return res.status(400).json({
code: 400,
message: "Invalid request. contentTopic is required."
});
}
const result = await getPage()?.evaluate(
({ contentTopic, payload }) => {
return window.wakuAPI.pushMessage(
window.waku,
contentTopic,
payload
);
},
{
contentTopic: message.contentTopic,
payload: message.payload
}
);
if (result) {
res.status(200).json({
messageId:
"0x" +
Buffer.from(
message.contentTopic + Date.now().toString()
).toString("hex")
});
} else {
res.status(503).json({
code: 503,
message: "Could not publish message: no suitable peers"
});
}
} catch (error: any) {
if (
error.message.includes("size exceeds") ||
error.message.includes("stream reset")
) {
res.status(503).json({
code: 503,
message:
"Could not publish message: message size exceeds gossipsub max message size"
});
} else {
res.status(500).json({
code: 500,
message: `Could not publish message: ${error.message}`
});
}
}
}) as express.RequestHandler);
app.get("/filter/v2/messages/:contentTopic", (async (
req: Request,
res: Response
) => {
try {
const { contentTopic } = req.params;
const { clusterId, shard } = req.query;
const options = {
clusterId: clusterId ? parseInt(clusterId as string, 10) : 42, // Default to match node creation
shard: shard ? parseInt(shard as string, 10) : 0 // Default to match node creation
};
// Set up SSE (Server-Sent Events)
res.setHeader("Content-Type", "text/event-stream");
res.setHeader("Cache-Control", "no-cache");
res.setHeader("Connection", "keep-alive");
// Function to send SSE
const sendSSE = (data: any): void => {
res.write(`data: ${JSON.stringify(data)}\n\n`);
};
// Subscribe to messages
await getPage()?.evaluate(
({ contentTopic, options }) => {
// Message handler that will send messages back to the client
const callback = (message: any): void => {
// Post message to the browser context
window.postMessage(
{
type: "WAKU_MESSAGE",
payload: {
payload: message.payload
? Array.from(message.payload)
: undefined,
contentTopic: message.contentTopic,
timestamp: message.timestamp
}
},
"*"
);
};
return window.wakuAPI.subscribe(
window.waku,
contentTopic,
options,
callback
);
},
{ contentTopic, options }
);
// Set up event listener for messages from the page
await getPage()?.exposeFunction("sendMessageToServer", (message: any) => {
// Send the message as SSE
sendSSE(message);
const topic = message.contentTopic;
if (!messageQueue[topic]) {
messageQueue[topic] = [];
}
messageQueue[topic].push({
...message,
receivedAt: Date.now()
});
if (messageQueue[topic].length > 1000) {
messageQueue[topic].shift();
}
});
// Add event listener in the browser context to forward messages to the server
await getPage()?.evaluate(() => {
window.addEventListener("message", (event) => {
if (event.data.type === "WAKU_MESSAGE") {
(window as any).sendMessageToServer(event.data.payload);
}
});
});
req.on("close", () => {
});
} catch (error: any) {
console.error("Error in filter subscription:", error);
res.write(`data: ${JSON.stringify({ error: error.message })}\n\n`);
res.end();
}
}) as express.RequestHandler);
app.get("/filter/v1/messages/:contentTopic", (async (
req: Request,
res: Response
) => {
try {
const { contentTopic } = req.params;
const {
pageSize = "20",
startTime,
endTime,
ascending = "false"
} = req.query;
if (!messageQueue[contentTopic]) {
return res.status(200).json({ messages: [] });
}
const limit = parseInt(pageSize as string, 10);
const isAscending = (ascending as string).toLowerCase() === "true";
const timeStart = startTime ? parseInt(startTime as string, 10) : 0;
const timeEnd = endTime ? parseInt(endTime as string, 10) : Date.now();
const filteredMessages = messageQueue[contentTopic]
.filter((msg) => {
const msgTime = msg.timestamp || msg.receivedAt;
return msgTime >= timeStart && msgTime <= timeEnd;
})
.sort((a, b) => {
const timeA = a.timestamp || a.receivedAt;
const timeB = b.timestamp || b.receivedAt;
return isAscending ? timeA - timeB : timeB - timeA;
})
.slice(0, limit);
// Format response to match Waku REST API format
const response = {
messages: filteredMessages.map((msg) => ({
payload: msg.payload
? Buffer.from(msg.payload).toString("base64")
: "",
contentTopic: msg.contentTopic,
timestamp: msg.timestamp,
version: 0 // Default version
}))
};
res.status(200).json(response);
} catch (error: any) {
console.error("Error retrieving messages:", error);
res.status(500).json({
code: 500,
message: `Failed to retrieve messages: ${error.message}`
});
}
}) as express.RequestHandler);
// Helper endpoint for executing functions (useful for testing)
app.post("/execute", (async (req: Request, res: Response) => {
try {
const { functionName, params = [] } = req.body;
if (functionName === "simulateMessages") {
const [contentTopic, messages] = params;
if (!messageQueue[contentTopic]) {
messageQueue[contentTopic] = [];
}
// Add messages to the queue
for (const msg of messages) {
messageQueue[contentTopic].push({
...msg,
contentTopic,
receivedAt: Date.now()
});
}
return res.status(200).json({
success: true,
messagesAdded: messages.length
});
}
const result = await getPage()?.evaluate(
({ fnName, fnParams }) => {
if (!window.wakuAPI[fnName]) {
return { error: `Function ${fnName} not found` };
}
return window.wakuAPI[fnName](...fnParams);
},
{ fnName: functionName, fnParams: params }
);
res.status(200).json(result);
} catch (error: any) {
console.error(
`Error executing function ${req.body.functionName}:`,
error
);
res.status(500).json({
error: error.message
});
}
}) as express.RequestHandler);
let actualPort: number;
try {
actualPort = await findAvailablePort(requestedPort);
} catch (error) {
console.error("Failed to find an available port:", error);
throw error;
}
app
.listen(actualPort, () => {
.listen(requestedPort, () => {
log.info(`API server running on http://localhost:${requestedPort}`);
})
.on("error", (error: any) => {
.on("error", (error: NodeError) => {
if (error.code === "EADDRINUSE") {
console.error(
`Port ${actualPort} is already in use. Please close the application using this port and try again.`
log.error(
`Port ${requestedPort} is already in use. Please close the application using this port and try again.`,
);
} else {
console.error("Error starting server:", error);
log.error("Error starting server:", error);
}
throw error;
});
return Promise.resolve();
} catch (error: any) {
console.error("Error starting server:", error);
return Promise.reject(error);
return requestedPort;
} catch (error) {
log.error("Error starting server:", error);
throw error;
}
}
process.on("SIGINT", (async () => {
await closeBrowser();
async function startServer(port: number = 3000): Promise<void> {
try {
const actualPort = await startAPI(port);
await initBrowser(actualPort);
if (headlessServerProcess && headlessServerProcess.pid) {
try {
process.kill(headlessServerProcess.pid);
log.info("Auto-starting node with CLI configuration...");
const hasEnrBootstrap = Boolean(process.env.WAKU_ENR_BOOTSTRAP);
const networkConfig: AutoSharding | StaticSharding = process.env.WAKU_SHARD
? ({
clusterId: process.env.WAKU_CLUSTER_ID
? parseInt(process.env.WAKU_CLUSTER_ID, 10)
: DEFAULT_CLUSTER_ID,
shards: [parseInt(process.env.WAKU_SHARD, 10)],
} as StaticSharding)
: ({
clusterId: process.env.WAKU_CLUSTER_ID
? parseInt(process.env.WAKU_CLUSTER_ID, 10)
: DEFAULT_CLUSTER_ID,
numShardsInCluster: DEFAULT_NUM_SHARDS,
} as AutoSharding);
const createOptions: CreateNodeOptions = {
defaultBootstrap: false,
...(hasEnrBootstrap && {
discovery: {
dns: true,
peerExchange: true,
peerCache: true,
},
}),
networkConfig,
};
log.info(
`Bootstrap mode: ${hasEnrBootstrap ? "ENR-only (defaultBootstrap=false)" : "default bootstrap (defaultBootstrap=true)"}`,
);
if (hasEnrBootstrap) {
log.info(`ENR bootstrap peers: ${process.env.WAKU_ENR_BOOTSTRAP}`);
}
log.info(
`Network config: ${JSON.stringify(networkConfig)}`,
);
await getPage()?.evaluate((config) => {
return window.wakuApi.createWakuNode(config);
}, createOptions);
await getPage()?.evaluate(() => window.wakuApi.startNode());
try {
await getPage()?.evaluate(() =>
window.wakuApi.waitForPeers?.(5000, [Protocols.LightPush]),
);
log.info("Auto-start completed with bootstrap peers");
} catch (peerError) {
log.info(
"Auto-start completed (no bootstrap peers found - may be expected with test ENRs)",
);
}
} catch (e) {
// Process already stopped
log.warn("Auto-start failed:", e);
}
} catch (error) {
log.error("Error starting server:", error);
}
}
process.on("uncaughtException", (error) => {
log.error("Uncaught Exception:", error);
if (process.env.NODE_ENV !== "production") {
process.exit(1);
}
});
process.on("unhandledRejection", (reason, promise) => {
log.error("Unhandled Rejection at:", promise, "reason:", reason);
if (process.env.NODE_ENV !== "production") {
process.exit(1);
}
});
const gracefulShutdown = async (signal: string) => {
log.info(`Received ${signal}, gracefully shutting down...`);
try {
await closeBrowser();
} catch (e) {
log.warn("Error closing browser:", e);
}
process.exit(0);
};
process.on("SIGINT", () => gracefulShutdown("SIGINT"));
process.on("SIGTERM", () => gracefulShutdown("SIGTERM"));
function parseCliArgs() {
const args = process.argv.slice(2);
let clusterId: number | undefined;
let shard: number | undefined;
for (const arg of args) {
if (arg.startsWith("--cluster-id=")) {
clusterId = parseInt(arg.split("=")[1], 10);
if (isNaN(clusterId)) {
log.error("Invalid cluster-id value. Must be a number.");
process.exit(1);
}
} else if (arg.startsWith("--shard=")) {
shard = parseInt(arg.split("=")[1], 10);
if (isNaN(shard)) {
log.error("Invalid shard value. Must be a number.");
process.exit(1);
}
}
}
process.exit(0);
}) as any);
return { clusterId, shard };
}
const isMainModule = process.argv[1] === fileURLToPath(import.meta.url);
if (isMainModule) {
const port = process.env.PORT ? parseInt(process.env.PORT, 10) : 3000;
const cliArgs = parseCliArgs();
if (cliArgs.clusterId !== undefined) {
process.env.WAKU_CLUSTER_ID = cliArgs.clusterId.toString();
log.info(`Using CLI cluster ID: ${cliArgs.clusterId}`);
}
if (cliArgs.shard !== undefined) {
process.env.WAKU_SHARD = cliArgs.shard.toString();
log.info(`Using CLI shard: ${cliArgs.shard}`);
}
void startServer(port);
}
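The flag parsing above can be sketched without the `process.exit` error paths as follows (simplified for illustration; the real `parseCliArgs` also rejects non-numeric values):

```typescript
// Simplified sketch of the CLI parsing above: extract numeric
// --cluster-id= and --shard= flags from an argv-style array.
function parseFlags(args: string[]): { clusterId?: number; shard?: number } {
  const out: { clusterId?: number; shard?: number } = {};
  for (const arg of args) {
    if (arg.startsWith("--cluster-id=")) {
      out.clusterId = parseInt(arg.split("=")[1], 10);
    } else if (arg.startsWith("--shard=")) {
      out.shard = parseInt(arg.split("=")[1], 10);
    }
  }
  return out;
}

const flags = parseFlags(["--cluster-id=42", "--shard=3"]);
console.log(flags); // { clusterId: 42, shard: 3 }
```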

@@ -1,8 +0,0 @@
import { readFileSync } from "fs";
import { dirname } from "path";
import { fileURLToPath } from "url";
const __filename = fileURLToPath(import.meta.url);
export const __dirname = dirname(__filename);
export const readJSON = (path) => JSON.parse(readFileSync(path, "utf-8"));

@@ -0,0 +1,197 @@
import { Request, Response } from "express";
import { Logger } from "@waku/utils";
import { getPage } from "../browser/index.js";
import type { ITestBrowser } from "../../types/global.js";
const log = new Logger("endpoint-handler");
export interface LightpushV3Request {
pubsubTopic: string;
message: {
payload: string;
contentTopic: string;
version: number;
};
}
export interface LightpushV3Response {
success?: boolean;
error?: string;
result?: {
successes: string[];
failures: Array<{
error: string;
peerId?: string;
}>;
};
}
export interface EndpointConfig<TInput = unknown, TOutput = unknown> {
methodName: string;
validateInput?: (_requestBody: unknown) => TInput;
transformResult?: (_sdkResult: unknown) => TOutput;
handleError?: (_caughtError: Error) => { code: number; message: string };
preCheck?: () => Promise<void> | void;
logResult?: boolean;
}
export function createEndpointHandler<TInput = unknown, TOutput = unknown>(
config: EndpointConfig<TInput, TOutput>,
) {
return async (req: Request, res: Response) => {
try {
let input: TInput;
try {
input = config.validateInput
? config.validateInput(req.body)
: req.body;
} catch (validationError) {
return res.status(400).json({
code: 400,
message: `Invalid input: ${validationError instanceof Error ? validationError.message : String(validationError)}`,
});
}
if (config.preCheck) {
try {
await config.preCheck();
} catch (checkError) {
return res.status(503).json({
code: 503,
message: checkError instanceof Error ? checkError.message : String(checkError),
});
}
}
const page = getPage();
if (!page) {
return res.status(503).json({
code: 503,
message: "Browser not initialized",
});
}
const result = await page.evaluate(
({ methodName, params }) => {
const testWindow = window as ITestBrowser;
if (!testWindow.wakuApi) {
throw new Error("window.wakuApi is not available");
}
const wakuApi = testWindow.wakuApi as unknown as Record<string, unknown>;
const method = wakuApi[methodName];
if (typeof method !== "function") {
throw new Error(`window.wakuApi.${methodName} is not a function`);
}
if (params === null || params === undefined) {
return method.call(testWindow.wakuApi);
} else if (Array.isArray(params)) {
return method.apply(testWindow.wakuApi, params);
} else {
return method.call(testWindow.wakuApi, params);
}
},
{ methodName: config.methodName, params: input },
);
if (config.logResult !== false) {
log.info(
`[${config.methodName}] Result:`,
JSON.stringify(result, null, 2),
);
}
const finalResult = config.transformResult
? config.transformResult(result)
: result;
res.status(200).json(finalResult);
} catch (error) {
if (config.handleError) {
const errorResponse = config.handleError(error as Error);
return res.status(errorResponse.code).json({
code: errorResponse.code,
message: errorResponse.message,
});
}
log.error(`[${config.methodName}] Error:`, error);
res.status(500).json({
code: 500,
message: `Could not execute ${config.methodName}: ${error instanceof Error ? error.message : String(error)}`,
});
}
};
}
export const validators = {
requireLightpushV3: (body: unknown): LightpushV3Request => {
// Type guard to check if body is an object
if (!body || typeof body !== "object") {
throw new Error("Request body must be an object");
}
const bodyObj = body as Record<string, unknown>;
if (
bodyObj.pubsubTopic !== undefined &&
typeof bodyObj.pubsubTopic !== "string"
) {
throw new Error("pubsubTopic must be a string if provided");
}
if (!bodyObj.message || typeof bodyObj.message !== "object") {
throw new Error("message is required and must be an object");
}
const message = bodyObj.message as Record<string, unknown>;
if (
!message.contentTopic ||
typeof message.contentTopic !== "string"
) {
throw new Error("message.contentTopic is required and must be a string");
}
if (!message.payload || typeof message.payload !== "string") {
throw new Error(
"message.payload is required and must be a string (base64 encoded)",
);
}
if (
message.version !== undefined &&
typeof message.version !== "number"
) {
throw new Error("message.version must be a number if provided");
}
return {
pubsubTopic: (bodyObj.pubsubTopic as string) || "",
message: {
payload: message.payload as string,
contentTopic: message.contentTopic as string,
version: (message.version as number) || 1,
},
};
},
noInput: () => null,
};
export const errorHandlers = {
lightpushError: (error: Error) => {
if (
error.message.includes("size exceeds") ||
error.message.includes("stream reset")
) {
return {
code: 503,
message:
"Could not publish message: message size exceeds gossipsub max message size",
};
}
return {
code: 500,
message: `Could not publish message: ${error.message}`,
};
},
};
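The 503-versus-500 split in `errorHandlers.lightpushError` can be isolated as a small predicate (a simplified re-implementation for illustration, not the exported API):

```typescript
// Simplified re-implementation of the error classification above:
// oversized-message and stream-reset failures map to 503 (transient,
// peer-side condition); everything else maps to 500.
function classifyLightpushError(error: Error): number {
  return error.message.includes("size exceeds") ||
    error.message.includes("stream reset")
    ? 503
    : 500;
}

console.log(classifyLightpushError(new Error("message size exceeds limit"))); // 503
console.log(classifyLightpushError(new Error("no suitable peers"))); // 500
```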

@@ -0,0 +1,117 @@
import { test, expect } from "@playwright/test";
import axios from "axios";
import { StartedTestContainer } from "testcontainers";
import { DefaultTestRoutingInfo } from "@waku/tests";
import {
startBrowserTestsContainer,
stopContainer
} from "./utils/container-helpers.js";
import {
createTwoNodeNetwork,
getDockerAccessibleMultiaddr,
stopNwakuNodes,
TwoNodeNetwork
} from "./utils/nwaku-helpers.js";
import {
ENV_BUILDERS,
TEST_CONFIG,
ASSERTIONS
} from "./utils/test-config.js";
test.describe.configure({ mode: "serial" });
let container: StartedTestContainer;
let nwakuNodes: TwoNodeNetwork;
let baseUrl: string;
test.beforeAll(async () => {
nwakuNodes = await createTwoNodeNetwork();
const lightPushPeerAddr = await getDockerAccessibleMultiaddr(nwakuNodes.nodes[0]);
const result = await startBrowserTestsContainer({
environment: {
...ENV_BUILDERS.withLocalLightPush(lightPushPeerAddr),
DEBUG: "waku:*",
WAKU_LIGHTPUSH_NODE: lightPushPeerAddr,
},
networkMode: "waku",
});
container = result.container;
baseUrl = result.baseUrl;
});
test.afterAll(async () => {
await Promise.all([
stopContainer(container),
stopNwakuNodes(nwakuNodes?.nodes || []),
]);
});
test("WakuHeadless can discover nwaku peer and use it for light push", async () => {
test.setTimeout(TEST_CONFIG.DEFAULT_TEST_TIMEOUT);
const contentTopic = TEST_CONFIG.DEFAULT_CONTENT_TOPIC;
const testMessage = TEST_CONFIG.DEFAULT_TEST_MESSAGE;
await new Promise((r) => setTimeout(r, TEST_CONFIG.WAKU_INIT_DELAY));
const healthResponse = await axios.get(`${baseUrl}/`, { timeout: 5000 });
ASSERTIONS.serverHealth(healthResponse);
try {
await axios.post(`${baseUrl}/waku/v1/wait-for-peers`, {
timeoutMs: 10000,
protocols: ["lightpush"],
}, { timeout: 15000 });
} catch {
// Ignore errors
}
const peerInfoResponse = await axios.get(`${baseUrl}/waku/v1/peer-info`);
ASSERTIONS.peerInfo(peerInfoResponse);
const routingInfo = DefaultTestRoutingInfo;
const subscriptionResults = await Promise.all([
nwakuNodes.nodes[0].ensureSubscriptions([routingInfo.pubsubTopic]),
nwakuNodes.nodes[1].ensureSubscriptions([routingInfo.pubsubTopic])
]);
expect(subscriptionResults[0]).toBe(true);
expect(subscriptionResults[1]).toBe(true);
await new Promise((r) => setTimeout(r, TEST_CONFIG.SUBSCRIPTION_DELAY));
const base64Payload = btoa(testMessage);
const pushResponse = await axios.post(`${baseUrl}/lightpush/v3/message`, {
pubsubTopic: routingInfo.pubsubTopic,
message: {
contentTopic,
payload: base64Payload,
version: 1,
},
});
ASSERTIONS.lightPushV3Success(pushResponse);
await new Promise((r) => setTimeout(r, TEST_CONFIG.MESSAGE_PROPAGATION_DELAY));
const [node1Messages, node2Messages] = await Promise.all([
nwakuNodes.nodes[0].messages(contentTopic),
nwakuNodes.nodes[1].messages(contentTopic)
]);
const totalMessages = node1Messages.length + node2Messages.length;
expect(totalMessages).toBeGreaterThanOrEqual(1);
const receivedMessages = [...node1Messages, ...node2Messages];
expect(receivedMessages.length).toBeGreaterThan(0);
const receivedMessage = receivedMessages[0];
ASSERTIONS.messageContent(receivedMessage, testMessage, contentTopic);
});
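The test above base64-encodes the payload with `btoa`; in Node, `Buffer` produces the same encoding for ASCII input. A quick sketch (the sample string here is illustrative, not the actual `DEFAULT_TEST_MESSAGE`):

```typescript
// Base64-encode an ASCII payload the way the lightpush v3 endpoint
// expects it; for ASCII input this matches what btoa() would return.
const msg = "test message"; // illustrative, not the real TEST_CONFIG value
const viaBuffer = Buffer.from(msg, "utf8").toString("base64");
console.log(viaBuffer); // "dGVzdCBtZXNzYWdl"
```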

@@ -1,136 +0,0 @@
import { expect, test } from "@playwright/test";
import { LightNode } from "@waku/sdk";
import { API } from "../src/api/shared.js";
import { NETWORK_CONFIG, ACTIVE_PEERS } from "./test-config.js";
// Define the window interface for TypeScript
declare global {
// eslint-disable-next-line no-unused-vars
interface Window {
waku: LightNode;
wakuAPI: typeof API;
}
}
test.describe("waku", () => {
test.beforeEach(async ({ page }) => {
await page.goto("");
await page.waitForTimeout(5000);
// Create and initialize a fresh Waku node for each test
const setupResult = await page.evaluate(async (config) => {
try {
await window.wakuAPI.createWakuNode({
...config.defaultNodeConfig,
networkConfig: config.networkConfig
});
await window.wakuAPI.startNode();
return { success: true };
} catch (error) {
console.error("Failed to initialize Waku node:", error);
return { success: false, error: String(error) };
}
}, NETWORK_CONFIG);
expect(setupResult.success).toBe(true);
});
test("can get peer id", async ({ page }) => {
const peerId = await page.evaluate(() => {
return window.waku.libp2p.peerId.toString();
});
expect(peerId).toBeDefined();
console.log("Peer ID:", peerId);
});
test("can get info", async ({ page }) => {
const info = await page.evaluate(() => {
return window.wakuAPI.getPeerInfo(window.waku);
});
expect(info).toBeDefined();
expect(info.peerId).toBeDefined();
expect(info.multiaddrs).toBeDefined();
expect(info.peers).toBeDefined();
console.log("Info:", info);
});
test("can get debug info", async ({ page }) => {
const debug = await page.evaluate(() => {
return window.wakuAPI.getDebugInfo(window.waku);
});
expect(debug).toBeDefined();
expect(debug.listenAddresses).toBeDefined();
expect(debug.peerId).toBeDefined();
expect(debug.protocols).toBeDefined();
console.log("Debug:", debug);
});
test("can dial peers", async ({ page }) => {
const result = await page.evaluate((peerAddrs) => {
return window.wakuAPI.dialPeers(window.waku, peerAddrs);
}, ACTIVE_PEERS);
expect(result).toBeDefined();
expect(result.total).toBe(ACTIVE_PEERS.length);
expect(result.errors.length >= result.total).toBe(false);
console.log("Dial result:", result);
});
test("can push a message", async ({ page }) => {
// First dial to peers
await page.evaluate((peersToDial) => {
return window.wakuAPI.dialPeers(window.waku, peersToDial);
}, ACTIVE_PEERS);
// Create a test message
const contentTopic = NETWORK_CONFIG.testMessage.contentTopic;
const payload = new TextEncoder().encode(NETWORK_CONFIG.testMessage.payload);
const arrayPayload = Array.from(payload);
// Push the message
const result = await page.evaluate(
({ topic, data }) => {
return window.wakuAPI.pushMessage(
window.waku,
topic,
new Uint8Array(data)
);
},
{ topic: contentTopic, data: arrayPayload }
);
expect(result).toBeDefined();
console.log("Push result:", result);
});
test("can recreate Waku node", async ({ page }) => {
// Get the current node's peer ID
const initialPeerId = await page.evaluate(() => {
return window.waku.libp2p.peerId.toString();
});
// Create a new node with different parameters
const result = await page.evaluate(() => {
return window.wakuAPI.createWakuNode({
defaultBootstrap: true // Different from beforeEach
});
});
expect(result.success).toBe(true);
// Start the new node
await page.evaluate(() => window.wakuAPI.startNode());
// Get the new peer ID
const newPeerId = await page.evaluate(() => {
return window.waku.libp2p.peerId.toString();
});
expect(newPeerId).not.toBe(initialPeerId);
console.log("Initial:", initialPeerId, "New:", newPeerId);
});
});

@@ -0,0 +1,134 @@
import { test, expect } from "@playwright/test";
import axios from "axios";
import { StartedTestContainer } from "testcontainers";
import {
createLightNode,
LightNode,
Protocols,
IDecodedMessage,
} from "@waku/sdk";
import { DEFAULT_CLUSTER_ID, DEFAULT_NUM_SHARDS } from "@waku/interfaces";
import { startBrowserTestsContainer, stopContainer } from "./utils/container-helpers.js";
import { ENV_BUILDERS, TEST_CONFIG } from "./utils/test-config.js";
test.describe.configure({ mode: "serial" });
let container: StartedTestContainer;
let baseUrl: string;
let wakuNode: LightNode;
test.beforeAll(async () => {
const result = await startBrowserTestsContainer({
environment: {
...ENV_BUILDERS.withProductionEnr(),
DEBUG: "waku:*",
},
});
container = result.container;
baseUrl = result.baseUrl;
});
test.afterAll(async () => {
if (wakuNode) {
try {
await wakuNode.stop();
} catch {
// Ignore errors
}
}
await stopContainer(container);
});
test("cross-network message delivery: SDK light node receives server lightpush", async () => {
test.setTimeout(TEST_CONFIG.DEFAULT_TEST_TIMEOUT);
const contentTopic = TEST_CONFIG.DEFAULT_CONTENT_TOPIC;
const testMessage = TEST_CONFIG.DEFAULT_TEST_MESSAGE;
wakuNode = await createLightNode({
defaultBootstrap: true,
discovery: {
dns: true,
peerExchange: true,
peerCache: true,
},
networkConfig: {
clusterId: DEFAULT_CLUSTER_ID,
numShardsInCluster: DEFAULT_NUM_SHARDS,
},
libp2p: {
filterMultiaddrs: false,
},
});
await wakuNode.start();
await wakuNode.waitForPeers(
[Protocols.Filter, Protocols.LightPush],
30000,
);
const messages: IDecodedMessage[] = [];
const decoder = wakuNode.createDecoder({ contentTopic });
if (
!(await wakuNode.filter.subscribe([decoder], (message) => {
messages.push(message);
}))
) {
throw new Error("Failed to subscribe to Filter");
}
await new Promise((r) => setTimeout(r, 2000));
const messagePromise = new Promise<void>((resolve) => {
const originalLength = messages.length;
const checkForMessage = () => {
if (messages.length > originalLength) {
resolve();
} else {
setTimeout(checkForMessage, 100);
}
};
checkForMessage();
});
await axios.post(`${baseUrl}/waku/v1/wait-for-peers`, {
timeoutMs: 30000, // Increased timeout
protocols: ["lightpush", "filter"],
});
await new Promise((r) => setTimeout(r, 10000));
const base64Payload = btoa(testMessage);
const pushResponse = await axios.post(`${baseUrl}/lightpush/v3/message`, {
pubsubTopic: decoder.pubsubTopic,
message: {
contentTopic,
payload: base64Payload,
version: 1,
},
});
expect(pushResponse.status).toBe(200);
expect(pushResponse.data.success).toBe(true);
await Promise.race([
messagePromise,
new Promise((_, reject) =>
setTimeout(() => {
reject(new Error("Timeout waiting for message"));
}, 45000),
),
]);
expect(messages).toHaveLength(1);
const receivedMessage = messages[0];
expect(receivedMessage.contentTopic).toBe(contentTopic);
const receivedPayload = new TextDecoder().decode(receivedMessage.payload);
expect(receivedPayload).toBe(testMessage);
});
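The REST push in the test above sends the payload as base64 (`btoa(testMessage)`), while the SDK-side subscription decodes raw bytes with `TextDecoder`. A minimal sketch of that round trip, assuming the server decodes base64 into bytes before pushing; the helper names here are illustrative, not part of the package:

```typescript
// Encode the way the test does before POSTing to /lightpush/v3/message
function encodePayloadForRest(text: string): string {
  return btoa(text); // fine for ASCII test messages
}

// What the server is assumed to do: base64 string -> raw bytes
function base64ToBytes(b64: string): Uint8Array {
  const bin = atob(b64);
  return Uint8Array.from(bin, (c) => c.charCodeAt(0));
}

// What the receiving SDK callback does with message.payload
function decodeReceivedPayload(payload: Uint8Array): string {
  return new TextDecoder().decode(payload);
}

const sent = "Hello, Waku!";
const received = decodeReceivedPayload(base64ToBytes(encodePayloadForRest(sent)));
console.log(received === sent); // true
```

This is why the final `expect(receivedPayload).toBe(testMessage)` holds: the base64 encode/decode pair is lossless for the ASCII test message.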


@ -1,722 +1,82 @@
import { test, expect } from "@playwright/test";
import axios from "axios";
import { spawn, ChildProcess } from "child_process";
import { fileURLToPath } from "url";
import { dirname, join } from "path";
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
// Force tests to run sequentially to avoid port conflicts
test.describe.configure({ mode: "serial" });
test.describe("Server Tests", () => {
let serverProcess: ChildProcess;
let baseUrl = "http://localhost:3000";
test.beforeAll(async () => {
const serverPath = join(__dirname, "..", "dist", "src", "server.js");
serverProcess = spawn("node", [serverPath], {
stdio: "pipe",
env: { ...process.env, PORT: "3000" }
});
serverProcess.stdout?.on("data", (_data: Buffer) => {
});
serverProcess.stderr?.on("data", (_data: Buffer) => {
});
await new Promise((resolve) => setTimeout(resolve, 3000));
let serverReady = false;
for (let i = 0; i < 30; i++) {
try {
const res = await axios.get(`${baseUrl}/`, { timeout: 2000 });
if (res.status === 200) {
serverReady = true;
break;
}
} catch {
// Ignore errors
}
await new Promise((r) => setTimeout(r, 1000));
}
expect(serverReady).toBe(true);
});
test.afterAll(async () => {
if (serverProcess) {
serverProcess.kill("SIGTERM");
await new Promise((resolve) => setTimeout(resolve, 1000));
}
});
test("server health endpoint", async () => {
const res = await axios.get(`${baseUrl}/`);
expect(res.status).toBe(200);
expect(res.data.status).toBe("Waku simulation server is running");
});
test("static files are served", async () => {
const htmlRes = await axios.get(`${baseUrl}/app/index.html`);
expect(htmlRes.status).toBe(200);
expect(htmlRes.data).toContain("Waku Test Environment");
const jsRes = await axios.get(`${baseUrl}/app/index.js`);
expect(jsRes.status).toBe(200);
expect(jsRes.data).toContain("WakuHeadless");
});
test("Waku node auto-started", async () => {
try {
const infoRes = await axios.get(`${baseUrl}/waku/v1/peer-info`);
expect(infoRes.status).toBe(200);
expect(infoRes.data.peerId).toBeDefined();
expect(infoRes.data.multiaddrs).toBeDefined();
} catch (error: any) {
expect(error.response?.status).toBe(400);
}
});
});


@ -1,35 +0,0 @@
export const NETWORK_CONFIG = {
"waku.sandbox": {
peers: [
"/dns4/node-01.do-ams3.waku.sandbox.status.im/tcp/30303/p2p/16Uiu2HAmNaeL4p3WEYzC9mgXBmBWSgWjPHRvatZTXnp8Jgv3iKsb",
"/dns4/node-01.gc-us-central1-a.waku.sandbox.status.im/tcp/30303/p2p/16Uiu2HAmRv1iQ3NoMMcjbtRmKxPuYBbF9nLYz2SDv9MTN8WhGuUU",
"/dns4/node-01.ac-cn-hongkong-c.waku.sandbox.status.im/tcp/30303/p2p/16Uiu2HAmQYiojgZ8APsh9wqbWNyCstVhnp9gbeNrxSEQnLJchC92"
]
},
"waku.test": {
peers: [
"/dns4/node-01.do-ams3.waku.test.status.im/tcp/8000/wss/p2p/16Uiu2HAkykgaECHswi3YKJ5dMLbq2kPVCo89fcyTd38UcQD6ej5W",
"/dns4/node-01.gc-us-central1-a.waku.test.status.im/tcp/8000/wss/p2p/16Uiu2HAmDCp8XJ9z1ev18zuv8NHekAsjNyezAvmMfFEJkiharitG",
"/dns4/node-01.ac-cn-hongkong-c.waku.test.status.im/tcp/8000/wss/p2p/16Uiu2HAkzHaTP5JsUwfR9NR8Rj9HC24puS6ocaU8wze4QrXr9iXp"
]
},
networkConfig: {
clusterId: 1,
shards: [0]
},
// Default node configuration
defaultNodeConfig: {
defaultBootstrap: false
},
// Test message configuration
testMessage: {
contentTopic: "/test/1/message/proto",
payload: "Hello, Waku!"
}
};
export const ACTIVE_PEERS = NETWORK_CONFIG["waku.test"].peers;


@ -0,0 +1,128 @@
import axios from "axios";
import { GenericContainer, StartedTestContainer } from "testcontainers";
import { Logger } from "@waku/utils";
const log = new Logger("container-helpers");
export interface ContainerSetupOptions {
environment?: Record<string, string>;
networkMode?: string;
timeout?: number;
maxAttempts?: number;
}
export interface ContainerSetupResult {
container: StartedTestContainer;
baseUrl: string;
}
/**
* Starts a waku-browser-tests Docker container with proper health checking.
* Follows patterns from @waku/tests package for retry logic and cleanup.
*/
export async function startBrowserTestsContainer(
options: ContainerSetupOptions = {}
): Promise<ContainerSetupResult> {
const {
environment = {},
networkMode = "bridge",
timeout = 2000,
maxAttempts = 60
} = options;
log.info("Starting waku-browser-tests container...");
let generic = new GenericContainer("waku-browser-tests:local")
.withExposedPorts(8080)
.withNetworkMode(networkMode);
// Apply environment variables
for (const [key, value] of Object.entries(environment)) {
generic = generic.withEnvironment({ [key]: value });
}
const container = await generic.start();
// Set up container logging - stream all output from the start
const logs = await container.logs();
logs.on("data", (b) => process.stdout.write("[container] " + b.toString()));
logs.on("error", (err) => log.error("[container log error]", err));
// Give container time to initialize
await new Promise((r) => setTimeout(r, 5000));
const mappedPort = container.getMappedPort(8080);
const baseUrl = `http://127.0.0.1:${mappedPort}`;
// Wait for server readiness with retry logic (following waku/tests patterns)
const serverReady = await waitForServerReady(baseUrl, maxAttempts, timeout);
if (!serverReady) {
await logFinalContainerState(container);
throw new Error("Container failed to become ready");
}
log.info("✅ Browser tests container ready");
await new Promise((r) => setTimeout(r, 500)); // Final settling time
return { container, baseUrl };
}
/**
* Waits for server to become ready with exponential backoff and detailed logging.
* Follows retry patterns from @waku/tests ServiceNode.
*/
async function waitForServerReady(
baseUrl: string,
maxAttempts: number,
timeout: number
): Promise<boolean> {
for (let i = 0; i < maxAttempts; i++) {
try {
const res = await axios.get(`${baseUrl}/`, { timeout });
if (res.status === 200) {
log.info(`Server is ready after ${i + 1} attempts`);
return true;
}
} catch (error: any) {
if (i % 10 === 0) {
log.info(
`Attempt ${i + 1}/${maxAttempts} failed:`,
error.code || error.message
);
}
}
await new Promise((r) => setTimeout(r, 1000));
}
return false;
}
/**
* Logs final container state for debugging, following waku/tests error handling patterns.
*/
async function logFinalContainerState(container: StartedTestContainer): Promise<void> {
try {
const finalLogs = await container.logs({ tail: 50 });
log.info("=== Final Container Logs ===");
finalLogs.on("data", (b) => log.info(b.toString()));
await new Promise((r) => setTimeout(r, 1000));
} catch (logError) {
log.error("Failed to get container logs:", logError);
}
}
/**
* Gracefully stops containers with retry logic, following teardown patterns from waku/tests.
*/
export async function stopContainer(container: StartedTestContainer): Promise<void> {
if (!container) return;
log.info("Stopping container gracefully...");
try {
await container.stop({ timeout: 10000 });
log.info("Container stopped successfully");
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
log.warn(
"Container stop had issues (expected):",
message
);
}
}
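The retry loop inside `waitForServerReady` generalizes to a small polling helper. A sketch of the same pattern with a pluggable probe; the names `pollUntilReady`, `probe`, and `delayMs` are illustrative, not part of this package:

```typescript
// Retry an async readiness probe until it succeeds or attempts run out,
// treating thrown errors the same as "not ready yet".
async function pollUntilReady(
  probe: () => Promise<boolean>,
  maxAttempts: number,
  delayMs: number
): Promise<boolean> {
  for (let i = 0; i < maxAttempts; i++) {
    try {
      if (await probe()) {
        return true;
      }
    } catch {
      // Probe failure counts as "not ready yet"
    }
    await new Promise((r) => setTimeout(r, delayMs));
  }
  return false;
}

// Example: a probe that becomes ready on its third call
async function demo(): Promise<void> {
  let calls = 0;
  const ready = await pollUntilReady(async () => ++calls >= 3, 10, 1);
  console.log(ready, calls); // true 3
}
void demo();
```

`waitForServerReady` above is this pattern specialized to an axios GET against the container's mapped port, with a per-request timeout.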


@ -0,0 +1,8 @@
/**
* Shared test utilities for browser-tests package.
* Follows patterns established in @waku/tests package.
*/
export * from "./container-helpers.js";
export * from "./nwaku-helpers.js";
export * from "./test-config.js";


@ -0,0 +1,141 @@
import { ServiceNode } from "@waku/tests";
import { DefaultTestRoutingInfo } from "@waku/tests";
import { Logger } from "@waku/utils";
const log = new Logger("nwaku-helpers");
export interface TwoNodeNetwork {
nodes: ServiceNode[];
}
/**
* Creates a two-node nwaku network following waku/tests patterns.
* Node 1: Relay + Light Push (service provider)
* Node 2: Relay only (network peer)
*/
export async function createTwoNodeNetwork(): Promise<TwoNodeNetwork> {
log.info("Creating nwaku node 1 (Relay + Light Push)...");
const lightPushNode = new ServiceNode(
"lightpush-node-" + Math.random().toString(36).substring(7),
);
const lightPushArgs = {
relay: true,
lightpush: true,
filter: false,
store: false,
clusterId: DefaultTestRoutingInfo.clusterId,
numShardsInNetwork: DefaultTestRoutingInfo.networkConfig.numShardsInCluster,
contentTopic: [DefaultTestRoutingInfo.contentTopic],
};
await lightPushNode.start(lightPushArgs, { retries: 3 });
log.info("Creating nwaku node 2 (Relay only)...");
const relayNode = new ServiceNode(
"relay-node-" + Math.random().toString(36).substring(7),
);
// Connect second node to first node (following ServiceNodesFleet pattern)
const firstNodeAddr = await lightPushNode.getExternalMultiaddr();
const relayArgs = {
relay: true,
lightpush: false,
filter: false,
store: false,
staticnode: firstNodeAddr,
clusterId: DefaultTestRoutingInfo.clusterId,
numShardsInNetwork: DefaultTestRoutingInfo.networkConfig.numShardsInCluster,
contentTopic: [DefaultTestRoutingInfo.contentTopic],
};
await relayNode.start(relayArgs, { retries: 3 });
// Wait for network formation (following waku/tests timing patterns)
log.info("Waiting for nwaku network formation...");
await new Promise((r) => setTimeout(r, 5000));
// Verify connectivity (optional, for debugging)
await verifyNetworkFormation([lightPushNode, relayNode]);
return {
nodes: [lightPushNode, relayNode],
};
}
/**
* Verifies that nwaku nodes have formed connections.
* Follows error handling patterns from waku/tests.
*/
async function verifyNetworkFormation(nodes: ServiceNode[]): Promise<void> {
try {
const peerCounts = await Promise.all(
nodes.map(async (node, index) => {
const peers = await node.peers();
log.info(`Node ${index + 1} has ${peers.length} peer(s)`);
return peers.length;
}),
);
if (peerCounts.every((count) => count === 0)) {
log.warn("⚠️ Nodes may not be properly connected yet");
}
} catch (error) {
log.warn("Could not verify peer connections:", error);
}
}
/**
* Extracts Docker-accessible multiaddr from nwaku node.
* Returns multiaddr using container's internal IP for Docker network communication.
*/
export async function getDockerAccessibleMultiaddr(
node: ServiceNode,
): Promise<string> {
// Get multiaddr with localhost and extract components
const localhostMultiaddr = await node.getMultiaddrWithId();
const peerId = await node.getPeerId();
// Extract port from multiaddr string
const multiaddrStr = localhostMultiaddr.toString();
const portMatch = multiaddrStr.match(/\/tcp\/(\d+)/);
const port = portMatch ? portMatch[1] : null;
if (!port) {
throw new Error("Could not extract port from multiaddr: " + multiaddrStr);
}
// Get Docker container IP (accessing internal field)
// Note: This accesses an internal implementation detail of ServiceNode
const nodeWithDocker = node as ServiceNode & {
docker?: { containerIp?: string };
};
const containerIp = nodeWithDocker.docker?.containerIp;
if (!containerIp) {
throw new Error("Could not get container IP from node");
}
// Build Docker network accessible multiaddr
const dockerMultiaddr = `/ip4/${containerIp}/tcp/${port}/ws/p2p/${peerId}`;
log.info("Original multiaddr:", multiaddrStr);
log.info("Docker accessible multiaddr:", dockerMultiaddr);
return dockerMultiaddr;
}
/**
* Stops nwaku nodes with retry logic, following teardown patterns from waku/tests.
*/
export async function stopNwakuNodes(nodes: ServiceNode[]): Promise<void> {
if (!nodes || nodes.length === 0) return;
log.info("Stopping nwaku nodes...");
try {
await Promise.all(nodes.map((node) => node.stop()));
log.info("Nwaku nodes stopped successfully");
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
log.warn("Nwaku nodes stop had issues:", message);
}
}
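The multiaddr rewrite performed by `getDockerAccessibleMultiaddr` above is pure string manipulation; a minimal standalone sketch (hypothetical addresses and peer ID, no `ServiceNode` dependency) looks like:

```typescript
// Sketch of the rewrite in getDockerAccessibleMultiaddr: swap the localhost IP
// for the container-internal IP while keeping the TCP port and peer ID.
function toDockerMultiaddr(
  localhostMultiaddr: string,
  containerIp: string,
  peerId: string,
): string {
  const portMatch = localhostMultiaddr.match(/\/tcp\/(\d+)/);
  if (!portMatch) {
    throw new Error(`Could not extract port from multiaddr: ${localhostMultiaddr}`);
  }
  return `/ip4/${containerIp}/tcp/${portMatch[1]}/ws/p2p/${peerId}`;
}

// Hypothetical example values:
const addr = toDockerMultiaddr(
  "/ip4/127.0.0.1/tcp/60001/ws/p2p/16Uiu2HAmExample",
  "172.18.0.2",
  "16Uiu2HAmExample",
);
// addr === "/ip4/172.18.0.2/tcp/60001/ws/p2p/16Uiu2HAmExample"
```

The real helper additionally resolves the container IP from the node's internal Docker metadata, which this sketch takes as a parameter.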


@ -0,0 +1,127 @@
import { expect } from "@playwright/test";
import { DefaultTestRoutingInfo } from "@waku/tests";
import { AxiosResponse } from "axios";
/**
* Response type definitions for API endpoints
*/
interface ServerHealthResponse {
status: string;
}
interface PeerInfoResponse {
peerId: string;
multiaddrs: string[];
peers: string[];
}
interface LightPushV3Result {
successes: string[];
failures: Array<{ error: string; peerId?: string }>;
}
interface LightPushV3Response {
success: boolean;
result: LightPushV3Result;
error?: string;
}
interface MessageResponse {
contentTopic: string;
payload: string;
version: number;
timestamp?: bigint | number;
}
/**
* Common test configuration constants following waku/tests patterns.
*/
export const TEST_CONFIG = {
// Test timeouts (following waku/tests timeout patterns)
DEFAULT_TEST_TIMEOUT: 120000, // 2 minutes
CONTAINER_READY_TIMEOUT: 60000, // 1 minute
NETWORK_FORMATION_DELAY: 5000, // 5 seconds
SUBSCRIPTION_DELAY: 3000, // 3 seconds
MESSAGE_PROPAGATION_DELAY: 5000, // 5 seconds
WAKU_INIT_DELAY: 8000, // 8 seconds
// Network configuration
DEFAULT_CLUSTER_ID: DefaultTestRoutingInfo.clusterId.toString(),
DEFAULT_CONTENT_TOPIC: "/test/1/browser-tests/proto",
// Test messages
DEFAULT_TEST_MESSAGE: "Hello from browser tests",
} as const;
/**
* Environment variable builders for different test scenarios.
*/
export const ENV_BUILDERS = {
/**
* Environment for production ENR bootstrap (integration test pattern).
*/
withProductionEnr: () => ({
WAKU_ENR_BOOTSTRAP: "enr:-QEnuEBEAyErHEfhiQxAVQoWowGTCuEF9fKZtXSd7H_PymHFhGJA3rGAYDVSHKCyJDGRLBGsloNbS8AZF33IVuefjOO6BIJpZIJ2NIJpcIQS39tkim11bHRpYWRkcnO4lgAvNihub2RlLTAxLmRvLWFtczMud2FrdXYyLnRlc3Quc3RhdHVzaW0ubmV0BgG73gMAODcxbm9kZS0wMS5hYy1jbi1ob25na29uZy1jLndha3V2Mi50ZXN0LnN0YXR1c2ltLm5ldAYBu94DACm9A62t7AQL4Ef5ZYZosRpQTzFVAB8jGjf1TER2wH-0zBOe1-MDBNLeA4lzZWNwMjU2azGhAzfsxbxyCkgCqq8WwYsVWH7YkpMLnU2Bw5xJSimxKav-g3VkcIIjKA",
WAKU_CLUSTER_ID: "1",
}),
/**
* Environment for local nwaku node connection (e2e test pattern).
*/
withLocalLightPush: (lightpushMultiaddr: string) => ({
WAKU_LIGHTPUSH_NODE: lightpushMultiaddr,
WAKU_CLUSTER_ID: TEST_CONFIG.DEFAULT_CLUSTER_ID,
}),
};
/**
* Test assertion helpers following waku/tests verification patterns.
*/
export const ASSERTIONS = {
/**
* Verifies server health response structure.
*/
serverHealth: (response: AxiosResponse<ServerHealthResponse>) => {
expect(response.status).toBe(200);
expect(response.data.status).toBe("Waku simulation server is running");
},
/**
* Verifies peer info response structure.
*/
peerInfo: (response: AxiosResponse<PeerInfoResponse>) => {
expect(response.status).toBe(200);
expect(response.data.peerId).toBeDefined();
expect(typeof response.data.peerId).toBe("string");
},
/**
* Verifies lightpush response structure (v3 format).
*/
lightPushV3Success: (response: AxiosResponse<LightPushV3Response>) => {
expect(response.status).toBe(200);
expect(response.data).toHaveProperty('success', true);
expect(response.data).toHaveProperty('result');
expect(response.data.result).toHaveProperty('successes');
expect(Array.isArray(response.data.result.successes)).toBe(true);
expect(response.data.result.successes.length).toBeGreaterThan(0);
},
/**
* Verifies message content and structure.
*/
messageContent: (message: MessageResponse, expectedContent: string, expectedTopic: string) => {
expect(message).toHaveProperty('contentTopic', expectedTopic);
expect(message).toHaveProperty('payload');
expect(typeof message.payload).toBe('string');
const receivedPayload = Buffer.from(message.payload, 'base64').toString();
expect(receivedPayload).toBe(expectedContent);
// Optional fields
expect(message).toHaveProperty('version');
if (message.timestamp) {
expect(['bigint', 'number']).toContain(typeof message.timestamp);
}
},
};
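The `messageContent` assertion above compares a base64-decoded payload against the expected string; the round trip it relies on can be shown in isolation (hypothetical payload, Node's `Buffer` API):

```typescript
// Encode a message the way a Waku payload arrives over the API (base64),
// then decode it the same way messageContent does before comparing.
const expectedContent = "Hello from browser tests";
const payloadBase64 = Buffer.from(expectedContent).toString("base64");
const receivedPayload = Buffer.from(payloadBase64, "base64").toString();
// receivedPayload === expectedContent
```

Because the server base64-encodes raw payload bytes, asserting on the decoded string rather than the raw field keeps the test independent of the transport encoding.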


@ -15,5 +15,5 @@
"typeRoots": ["./node_modules/@types", "./types"]
},
"include": ["src/server.ts", "types/**/*.d.ts"],
"exclude": ["node_modules", "dist"]
"exclude": ["node_modules", "dist", "web"]
}


@ -1,27 +1,19 @@
import { LightNode } from "@waku/sdk";
import { IWakuNode } from "../src/api/common.js";
import {
createWakuNode,
dialPeers,
getDebugInfo,
getPeerInfo,
pushMessage,
subscribe
} from "../src/api/shared.js";
import type { WakuHeadless } from "../web/index.js";
export interface WindowNetworkConfig {
clusterId?: number;
shards?: number[];
}
export interface ITestBrowser extends Window {
wakuApi: WakuHeadless;
__WAKU_NETWORK_CONFIG?: WindowNetworkConfig;
__WAKU_LIGHTPUSH_NODE?: string | null;
__WAKU_ENR_BOOTSTRAP?: string | null;
}
// Define types for the Waku node and window
declare global {
// eslint-disable-next-line no-unused-vars
interface Window {
waku: IWakuNode & LightNode;
wakuAPI: {
getPeerInfo: typeof getPeerInfo;
getDebugInfo: typeof getDebugInfo;
pushMessage: typeof pushMessage;
dialPeers: typeof dialPeers;
createWakuNode: typeof createWakuNode;
subscribe: typeof subscribe;
[key: string]: any;
};
wakuApi: WakuHeadless;
}
}


@ -1,7 +1,9 @@
declare module "serve" {
import type { Server } from "http";
function serve(
folder: string,
options: { port: number; single: boolean; listen: boolean }
): any;
options: { port: number; single: boolean; listen: boolean },
): Promise<Server>;
export default serve;
}


@ -0,0 +1,14 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>Waku Test Environment</title>
</head>
<body>
<h1>Waku Test Environment</h1>
<script type="module" src="./index.js"></script>
</body>
</html>


@ -0,0 +1,431 @@
import {
createLightNode,
LightNode,
Protocols,
NetworkConfig,
CreateNodeOptions,
} from "@waku/sdk";
import {
AutoSharding,
DEFAULT_CLUSTER_ID,
DEFAULT_NUM_SHARDS,
ShardId,
StaticSharding,
ShardInfo,
CreateLibp2pOptions,
IEncoder,
ILightPush,
SDKProtocolResult,
Failure,
} from "@waku/interfaces";
import { bootstrap } from "@libp2p/bootstrap";
import { EnrDecoder, TransportProtocol } from "@waku/enr";
import type { Multiaddr } from "@multiformats/multiaddr";
import type { ITestBrowser } from "../types/global.js";
import { Logger, StaticShardingRoutingInfo } from "@waku/utils";
import type { PeerId } from "@libp2p/interface";
const log = new Logger("waku-headless");
export interface SerializableSDKProtocolResult {
successes: string[];
failures: Array<{
error: string;
peerId?: string;
}>;
myPeerId?: string;
}
function makeSerializable(result: SDKProtocolResult): SerializableSDKProtocolResult {
return {
...result,
successes: result.successes.map((peerId: PeerId) => peerId.toString()),
failures: result.failures.map((failure: Failure) => ({
error: failure.error || failure.toString(),
peerId: failure.peerId ? failure.peerId.toString() : undefined,
})),
};
}
async function convertEnrToMultiaddrs(enrString: string): Promise<string[]> {
try {
const enr = await EnrDecoder.fromString(enrString);
const allMultiaddrs = enr.getAllLocationMultiaddrs();
const multiaddrs: string[] = [];
for (const multiaddr of allMultiaddrs) {
const maStr = multiaddr.toString();
multiaddrs.push(maStr);
}
if (multiaddrs.length === 0) {
const tcpMultiaddr = enr.getFullMultiaddr(TransportProtocol.TCP);
if (tcpMultiaddr) {
const tcpStr = tcpMultiaddr.toString();
multiaddrs.push(tcpStr);
}
const udpMultiaddr = enr.getFullMultiaddr(TransportProtocol.UDP);
if (udpMultiaddr) {
const udpStr = udpMultiaddr.toString();
multiaddrs.push(udpStr);
}
}
return multiaddrs;
} catch (error) {
return [];
}
}
export class WakuHeadless {
waku: LightNode | null;
networkConfig: NetworkConfig;
lightpushNode: string | null;
enrBootstrap: string | null;
constructor(
networkConfig?: Partial<NetworkConfig>,
lightpushNode?: string | null,
enrBootstrap?: string | null,
) {
this.waku = null;
this.networkConfig = this.buildNetworkConfig(networkConfig);
log.info("Network config on construction:", this.networkConfig);
this.lightpushNode = lightpushNode || null;
this.enrBootstrap = enrBootstrap || null;
if (this.lightpushNode) {
log.info(`Configured preferred lightpush node: ${this.lightpushNode}`);
}
if (this.enrBootstrap) {
log.info(`Configured ENR bootstrap: ${this.enrBootstrap}`);
}
}
private shouldUseCustomBootstrap(options: CreateNodeOptions): boolean {
const hasEnr = Boolean(this.enrBootstrap);
const isDefaultBootstrap = Boolean(options.defaultBootstrap);
return hasEnr && !isDefaultBootstrap;
}
private async getBootstrapMultiaddrs(): Promise<string[]> {
if (!this.enrBootstrap) {
return [];
}
const enrList = this.enrBootstrap.split(",").map((enr) => enr.trim());
const allMultiaddrs: string[] = [];
for (const enr of enrList) {
const multiaddrs = await convertEnrToMultiaddrs(enr);
if (multiaddrs.length > 0) {
allMultiaddrs.push(...multiaddrs);
}
}
return allMultiaddrs;
}
private buildNetworkConfig(
providedConfig?: Partial<NetworkConfig> | Partial<ShardInfo>,
): NetworkConfig {
const clusterId = providedConfig?.clusterId ?? DEFAULT_CLUSTER_ID;
const staticShards = (providedConfig as Partial<ShardInfo>)?.shards;
if (
staticShards &&
Array.isArray(staticShards) &&
staticShards.length > 0
) {
log.info("Using static sharding with shards:", staticShards);
return {
clusterId,
} as StaticSharding;
}
const numShardsInCluster =
(providedConfig as Partial<AutoSharding>)?.numShardsInCluster ?? DEFAULT_NUM_SHARDS;
log.info(
"Using auto sharding with num shards in cluster:",
numShardsInCluster,
);
return {
clusterId,
numShardsInCluster,
} as AutoSharding;
}
private async send(
lightPush: ILightPush,
encoder: IEncoder,
payload: Uint8Array,
): Promise<SDKProtocolResult> {
return lightPush.send(encoder, {
payload,
timestamp: new Date(),
});
}
async pushMessageV3(
contentTopic: string,
payload: string,
pubsubTopic: string,
): Promise<SerializableSDKProtocolResult> {
if (!this.waku) {
throw new Error("Waku node not started");
}
log.info(
"Pushing message via v3 lightpush:",
contentTopic,
payload,
pubsubTopic,
);
log.info("Waku node:", this.waku);
log.info("Network config:", this.networkConfig);
let processedPayload: Uint8Array;
try {
const binaryString = atob(payload);
const bytes = new Uint8Array(binaryString.length);
for (let i = 0; i < binaryString.length; i++) {
bytes[i] = binaryString.charCodeAt(i);
}
processedPayload = bytes;
} catch (e) {
processedPayload = new TextEncoder().encode(payload);
}
try {
const lightPush = this.waku.lightPush;
if (!lightPush) {
throw new Error("Lightpush service not available");
}
let shardId: ShardId | undefined;
if (pubsubTopic) {
const staticShardingRoutingInfo =
StaticShardingRoutingInfo.fromPubsubTopic(
pubsubTopic,
this.networkConfig as StaticSharding,
);
shardId = staticShardingRoutingInfo?.shardId;
}
const encoder = this.waku.createEncoder({
contentTopic,
shardId,
});
log.info("Encoder:", encoder);
log.info("Pubsub topic:", pubsubTopic);
log.info("Encoder pubsub topic:", encoder.pubsubTopic);
if (pubsubTopic && pubsubTopic !== encoder.pubsubTopic) {
log.warn(
`Explicit pubsubTopic ${pubsubTopic} provided, but auto-sharding determined ${encoder.pubsubTopic}. Using auto-sharding.`,
);
}
let result;
if (this.lightpushNode) {
try {
const preferredPeerId = this.getPeerIdFromMultiaddr(
this.lightpushNode,
);
if (preferredPeerId) {
result = await this.send(lightPush, encoder, processedPayload);
log.info("✅ Message sent via preferred lightpush node");
} else {
throw new Error(
"Could not extract peer ID from preferred node address",
);
}
} catch (error) {
log.error(
"Couldn't send message via preferred lightpush node:",
error,
);
result = await this.send(lightPush, encoder, processedPayload);
}
} else {
result = await this.send(lightPush, encoder, processedPayload);
}
const serializableResult = makeSerializable(result);
return serializableResult;
} catch (error) {
log.error("Error sending message via v3 lightpush:", error);
throw new Error(
`Failed to send v3 message: ${error instanceof Error ? error.message : String(error)}`,
);
}
}
async waitForPeers(
timeoutMs: number = 10000,
protocols: Protocols[] = [Protocols.LightPush, Protocols.Filter],
) {
if (!this.waku) {
throw new Error("Waku node not started");
}
const startTime = Date.now();
try {
await this.waku.waitForPeers(protocols, timeoutMs);
const elapsed = Date.now() - startTime;
const peers = this.waku.libp2p.getPeers();
return {
success: true,
peersFound: peers.length,
protocolsRequested: protocols,
timeElapsed: elapsed,
};
} catch (error) {
const elapsed = Date.now() - startTime;
log.error(`Failed to find peers after ${elapsed}ms:`, error);
throw error;
}
}
async createWakuNode(options: CreateNodeOptions) {
try {
if (this.waku) {
await this.waku.stop();
}
} catch (e) {
log.warn("ignore previous waku stop error");
}
let libp2pConfig: CreateLibp2pOptions = {
...options.libp2p,
filterMultiaddrs: false,
};
if (this.enrBootstrap) {
const multiaddrs = await this.getBootstrapMultiaddrs();
if (multiaddrs.length > 0) {
libp2pConfig.peerDiscovery = [
bootstrap({ list: multiaddrs }),
...(options.libp2p?.peerDiscovery || []),
];
}
}
const createOptions = {
...options,
networkConfig: this.networkConfig,
libp2p: libp2pConfig,
};
this.waku = await createLightNode(createOptions);
return { success: true };
}
async startNode() {
if (!this.waku) {
throw new Error("Waku node not created");
}
await this.waku.start();
if (this.lightpushNode) {
await this.dialPreferredLightpushNode();
}
return { success: true };
}
private async dialPreferredLightpushNode() {
if (!this.waku || !this.lightpushNode) {
log.info("Skipping dial: waku or lightpushNode not set");
return;
}
try {
log.info("Attempting to dial preferred lightpush node:", this.lightpushNode);
await this.waku.dial(this.lightpushNode);
log.info("Successfully dialed preferred lightpush node:", this.lightpushNode);
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
log.error(
"Failed to dial preferred lightpush node:",
this.lightpushNode,
message
);
}
}
private getPeerIdFromMultiaddr(multiaddr: string): string | null {
const parts = multiaddr.split("/");
const p2pIndex = parts.indexOf("p2p");
return p2pIndex !== -1 && p2pIndex + 1 < parts.length
? parts[p2pIndex + 1]
: null;
}
async stopNode() {
if (!this.waku) {
throw new Error("Waku node not created");
}
await this.waku.stop();
return { success: true };
}
getPeerInfo() {
if (!this.waku) {
throw new Error("Waku node not started");
}
const addrs = this.waku.libp2p.getMultiaddrs();
return {
peerId: this.waku.libp2p.peerId.toString(),
multiaddrs: addrs.map((a: Multiaddr) => a.toString()),
peers: [],
};
}
}
(() => {
try {
log.info("Initializing WakuHeadless...");
const testWindow = window as ITestBrowser;
const globalNetworkConfig = testWindow.__WAKU_NETWORK_CONFIG;
const globalLightpushNode = testWindow.__WAKU_LIGHTPUSH_NODE;
const globalEnrBootstrap = testWindow.__WAKU_ENR_BOOTSTRAP;
log.info("Global config from window:", {
networkConfig: globalNetworkConfig,
lightpushNode: globalLightpushNode,
enrBootstrap: globalEnrBootstrap
});
const instance = new WakuHeadless(
globalNetworkConfig,
globalLightpushNode,
globalEnrBootstrap,
);
testWindow.wakuApi = instance;
log.info("WakuHeadless initialized successfully:", !!testWindow.wakuApi);
} catch (error) {
log.error("Error initializing WakuHeadless:", error);
const testWindow = window as ITestBrowser;
// Create a stub wakuApi that will reject all method calls
testWindow.wakuApi = {
waku: null,
networkConfig: { clusterId: 0, numShardsInCluster: 0 },
lightpushNode: null,
enrBootstrap: null,
error,
createWakuNode: () => Promise.reject(new Error("WakuHeadless failed to initialize")),
startNode: () => Promise.reject(new Error("WakuHeadless failed to initialize")),
stopNode: () => Promise.reject(new Error("WakuHeadless failed to initialize")),
pushMessageV3: () => Promise.reject(new Error("WakuHeadless failed to initialize")),
waitForPeers: () => Promise.reject(new Error("WakuHeadless failed to initialize")),
getPeerInfo: () => { throw new Error("WakuHeadless failed to initialize"); },
} as unknown as WakuHeadless;
}
})();
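`pushMessageV3` above first tries to interpret the payload as base64 and falls back to plain UTF-8 encoding when decoding fails; that decision can be sketched in isolation (global `atob`/`TextEncoder`, hypothetical inputs):

```typescript
// Mirror of the payload handling in pushMessageV3: try forgiving base64 first,
// fall back to encoding the raw string as UTF-8 bytes if decoding throws.
function toPayloadBytes(payload: string): Uint8Array {
  try {
    const binaryString = atob(payload);
    const bytes = new Uint8Array(binaryString.length);
    for (let i = 0; i < binaryString.length; i++) {
      bytes[i] = binaryString.charCodeAt(i);
    }
    return bytes;
  } catch {
    return new TextEncoder().encode(payload);
  }
}
```

For example, `toPayloadBytes("aGVsbG8=")` yields the bytes of `"hello"`, while a string that is not valid base64 is encoded as-is. Note the caveat with this pattern: a short plain-text payload that happens to be valid base64 will be decoded rather than passed through.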


@ -1,7 +1,7 @@
{
"name": "@waku/build-utils",
"version": "1.0.0",
"description": "Build utilities for js-waku",
"description": "Build utilities for logos-messaging-js",
"main": "index.js",
"module": "index.js",
"type": "module",
@ -14,12 +14,12 @@
},
"repository": {
"type": "git",
"url": "git+https://github.com/waku-org/js-waku.git"
"url": "git+https://github.com/logos-messaging/logos-messaging-js.git"
},
"author": "Waku Team",
"license": "MIT OR Apache-2.0",
"bugs": {
"url": "https://github.com/waku-org/js-waku/issues"
"url": "https://github.com/logos-messaging/logos-messaging-js/issues"
},
"homepage": "https://github.com/waku-org/js-waku#readme"
"homepage": "https://github.com/logos-messaging/logos-messaging-js#readme"
}


@ -5,6 +5,15 @@ All notable changes to this project will be documented in this file.
The file is maintained by [Release Please](https://github.com/googleapis/release-please) based on [Conventional Commits](https://www.conventionalcommits.org) specification,
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.0.40](https://github.com/waku-org/js-waku/compare/core-v0.0.39...core-v0.0.40) (2025-10-31)
### Dependencies
* The following workspace dependencies were updated
* dependencies
* @waku/proto bumped from 0.0.14 to 0.0.15
## [0.0.39](https://github.com/waku-org/js-waku/compare/core-v0.0.38...core-v0.0.39) (2025-09-20)


@ -1,6 +1,6 @@
{
"name": "@waku/core",
"version": "0.0.39",
"version": "0.0.40",
"description": "TypeScript implementation of the Waku v2 protocol",
"types": "./dist/index.d.ts",
"module": "./dist/index.js",
@ -25,17 +25,18 @@
}
},
"type": "module",
"homepage": "https://github.com/waku-org/js-waku/tree/master/packages/core#readme",
"homepage": "https://github.com/logos-messaging/logos-messaging-js/tree/master/packages/core#readme",
"repository": {
"type": "git",
"url": "https://github.com/waku-org/js-waku.git"
"url": "git+https://github.com/logos-messaging/logos-messaging-js.git"
},
"bugs": {
"url": "https://github.com/waku-org/js-waku/issues"
"url": "https://github.com/logos-messaging/logos-messaging-js/issues"
},
"license": "MIT OR Apache-2.0",
"keywords": [
"waku",
"logos-messaging",
"decentralised",
"communication",
"web3",
@ -67,7 +68,7 @@
"@waku/enr": "^0.0.33",
"@waku/interfaces": "0.0.34",
"@libp2p/ping": "2.0.35",
"@waku/proto": "0.0.14",
"@waku/proto": "0.0.15",
"@waku/utils": "0.0.27",
"debug": "^4.3.4",
"@noble/hashes": "^1.3.2",


@ -9,7 +9,6 @@ import {
WakuEvent
} from "@waku/interfaces";
import { Logger } from "@waku/utils";
import { numberToBytes } from "@waku/utils/bytes";
import { Dialer } from "./dialer.js";
import { NetworkMonitor } from "./network_monitor.js";
@ -80,7 +79,7 @@ export class ConnectionLimiter implements IConnectionLimiter {
/**
* NOTE: Event is not being emitted on closing nor losing a connection.
* @see https://github.com/libp2p/js-libp2p/issues/939
* @see https://github.com/status-im/js-waku/issues/252
* @see https://github.com/logos-messaging/logos-messaging-js/issues/252
*
* >This event will be triggered anytime we are disconnected from another peer,
* >regardless of the circumstances of that disconnection.
@ -125,7 +124,6 @@ export class ConnectionLimiter implements IConnectionLimiter {
private async maintainConnections(): Promise<void> {
await this.maintainConnectionsCount();
await this.maintainBootstrapConnections();
await this.maintainTTLConnectedPeers();
}
private async onDisconnectedEvent(): Promise<void> {
@ -215,28 +213,6 @@ export class ConnectionLimiter implements IConnectionLimiter {
}
}
private async maintainTTLConnectedPeers(): Promise<void> {
log.info(`Maintaining TTL connected peers`);
const promises = this.libp2p.getConnections().map(async (c) => {
try {
await this.libp2p.peerStore.merge(c.remotePeer, {
metadata: {
ttl: numberToBytes(Date.now())
}
});
log.info(`TTL updated for connected peer ${c.remotePeer.toString()}`);
} catch (error) {
log.error(
`Unexpected error while maintaining TTL connected peer`,
error
);
}
});
await Promise.all(promises);
}
private async dialPeersFromStore(): Promise<void> {
log.info(`Dialing peers from store`);
@ -268,6 +244,9 @@ export class ConnectionLimiter implements IConnectionLimiter {
private async getPrioritizedPeers(): Promise<Peer[]> {
const allPeers = await this.libp2p.peerStore.all();
const allConnections = this.libp2p.getConnections();
const allConnectionsSet = new Set(
allConnections.map((c) => c.remotePeer.toString())
);
log.info(
`Found ${allPeers.length} peers in store, and found ${allConnections.length} connections`
@ -275,7 +254,7 @@ export class ConnectionLimiter implements IConnectionLimiter {
const notConnectedPeers = allPeers.filter(
(p) =>
!allConnections.some((c) => c.remotePeer.equals(p.id)) &&
!allConnectionsSet.has(p.id.toString()) &&
isAddressesSupported(
this.libp2p,
p.addresses.map((a) => a.multiaddr)


@ -61,6 +61,7 @@ export class FilterCore {
}
public async stop(): Promise<void> {
this.streamManager.stop();
try {
await this.libp2p.unhandle(FilterCodecs.PUSH);
} catch (e) {


@ -33,6 +33,11 @@ export class LightPushCore {
this.streamManager = new StreamManager(CODECS.v3, libp2p.components);
}
public stop(): void {
this.streamManager.stop();
this.streamManagerV2.stop();
}
public async send(
encoder: IEncoder,
message: IMessage,


@ -35,6 +35,10 @@ export class StoreCore {
this.streamManager = new StreamManager(StoreCodec, libp2p.components);
}
public stop(): void {
this.streamManager.stop();
}
public get maxTimeLimit(): number {
return MAX_TIME_RANGE;
}
@ -68,6 +72,11 @@ export class StoreCore {
let currentCursor = queryOpts.paginationCursor;
while (true) {
if (queryOpts.abortSignal?.aborted) {
log.info("Store query aborted by signal");
break;
}
const storeQueryRequest = StoreQueryRequest.create({
...queryOpts,
paginationCursor: currentCursor
@ -89,13 +98,22 @@ export class StoreCore {
break;
}
const res = await pipe(
[storeQueryRequest.encode()],
lp.encode,
stream,
lp.decode,
async (source) => await all(source)
);
let res;
try {
res = await pipe(
[storeQueryRequest.encode()],
lp.encode,
stream,
lp.decode,
async (source) => await all(source)
);
} catch (error) {
if (error instanceof Error && error.name === "AbortError") {
log.info(`Store query aborted for peer ${peerId.toString()}`);
break;
}
throw error;
}
const bytes = new Uint8ArrayList();
res.forEach((chunk) => {
@ -122,6 +140,11 @@ export class StoreCore {
`${storeQueryResponse.messages.length} messages retrieved from store`
);
if (queryOpts.abortSignal?.aborted) {
log.info("Store query aborted by signal before processing messages");
break;
}
const decodedMessages = storeQueryResponse.messages.map((protoMsg) => {
if (!protoMsg.message) {
return Promise.resolve(undefined);


@ -27,6 +27,10 @@ describe("StreamManager", () => {
} as any as Libp2pComponents);
});
afterEach(() => {
sinon.restore();
});
it("should return usable stream attached to connection", async () => {
for (const writeStatus of ["ready", "writing"]) {
const con1 = createMockConnection();


@ -23,6 +23,15 @@ export class StreamManager {
);
}
public stop(): void {
this.libp2p.events.removeEventListener(
"peer:update",
this.handlePeerUpdateStreamPool
);
this.streamPool.clear();
this.ongoingCreation.clear();
}
public async getStream(peerId: PeerId): Promise<Stream | undefined> {
try {
const peerIdStr = peerId.toString();


@ -1,5 +1,15 @@
# Changelog
## [0.0.13](https://github.com/waku-org/js-waku/compare/discovery-v0.0.12...discovery-v0.0.13) (2025-10-31)
### Dependencies
* The following workspace dependencies were updated
* dependencies
* @waku/core bumped from 0.0.39 to 0.0.40
* @waku/proto bumped from ^0.0.14 to ^0.0.15
## [0.0.12](https://github.com/waku-org/js-waku/compare/discovery-v0.0.11...discovery-v0.0.12) (2025-09-20)


@ -1,6 +1,6 @@
{
"name": "@waku/discovery",
"version": "0.0.12",
"version": "0.0.13",
"description": "Contains various discovery mechanisms: DNS Discovery (EIP-1459), Peer Exchange, Local Peer Cache Discovery.",
"types": "./dist/index.d.ts",
"module": "./dist/index.js",
@ -12,17 +12,18 @@
},
"type": "module",
"author": "Waku Team",
"homepage": "https://github.com/waku-org/js-waku/tree/master/packages/discovery#readme",
"homepage": "https://github.com/logos-messaging/logos-messaging-js/tree/master/packages/discovery#readme",
"repository": {
"type": "git",
"url": "https://github.com/waku-org/js-waku.git"
"url": "git+https://github.com/logos-messaging/logos-messaging-js.git"
},
"bugs": {
"url": "https://github.com/waku-org/js-waku/issues"
"url": "https://github.com/logos-messaging/logos-messaging-js/issues"
},
"license": "MIT OR Apache-2.0",
"keywords": [
"waku",
"logos-messaging",
"decentralized",
"secure",
"communication",
@ -51,10 +52,10 @@
"node": ">=22"
},
"dependencies": {
"@waku/core": "0.0.39",
"@waku/core": "0.0.40",
"@waku/enr": "0.0.33",
"@waku/interfaces": "0.0.34",
"@waku/proto": "^0.0.14",
"@waku/proto": "^0.0.15",
"@waku/utils": "0.0.27",
"debug": "^4.3.4",
"dns-over-http-resolver": "^3.0.8",


@ -12,17 +12,18 @@
},
"type": "module",
"author": "Waku Team",
"homepage": "https://github.com/waku-org/js-waku/tree/master/packages/enr#readme",
"homepage": "https://github.com/logos-messaging/logos-messaging-js/tree/master/packages/enr#readme",
"repository": {
"type": "git",
"url": "https://github.com/waku-org/js-waku.git"
"url": "git+https://github.com/logos-messaging/logos-messaging-js.git"
},
"bugs": {
"url": "https://github.com/waku-org/js-waku/issues"
"url": "https://github.com/logos-messaging/logos-messaging-js/issues"
},
"license": "MIT OR Apache-2.0",
"keywords": [
"waku",
"logos-messaging",
"decentralized",
"secure",
"communication",


@ -1,34 +0,0 @@
module.exports = {
root: true,
env: {
browser: true,
node: true,
es2021: true
},
plugins: ["import"],
extends: ["eslint:recommended"],
parserOptions: {
ecmaVersion: 2022,
sourceType: "module"
},
rules: {
// Disable rules that might cause issues with this package
"no-undef": "off"
},
ignorePatterns: [
"node_modules",
"build",
"coverage"
],
overrides: [
{
files: ["*.spec.ts", "**/test_utils/*.ts", "*.js", "*.cjs"],
rules: {
"@typescript-eslint/no-non-null-assertion": "off",
"@typescript-eslint/no-explicit-any": "off",
"no-console": "off",
"import/no-extraneous-dependencies": ["error", { "devDependencies": true }]
}
}
]
};


@ -1,23 +0,0 @@
# Waku Headless Tests
This package contains a minimal browser application used for testing the Waku SDK in a browser environment. It is used by the browser-tests package to run end-to-end tests on the SDK.
## Usage
### Build the app
```bash
npm run build
```
### Start the app
```bash
npm start
```
This will start a server on port 8080 by default.
## Integration with browser-tests
This package is designed to be used with the browser-tests package to run end-to-end tests on the SDK. It exposes the Waku API via a global object in the browser.

Binary file not shown (before: 4.2 KiB).

Binary file not shown (before: 14 KiB).

@ -1,50 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta content="width=device-width, initial-scale=1.0" name="viewport" />
<title>Headless</title>
<link rel="stylesheet" href="./style.css" />
<link rel="apple-touch-icon" href="./favicon.png" />
<link rel="manifest" href="./manifest.json" />
<link rel="icon" href="./favicon.ico" />
</head>
<body>
<div id="state"></div>
<div class="content">
<div class="header">
<h3>Status: <span id="status"></span></h3>
<details>
<summary>Peer's information</summary>
<h4>Content topic</h4>
<p id="contentTopic"></p>
<h4>Local Peer Id</h4>
<p id="localPeerId"></p>
<h4>Remote Peer Id</h4>
<p id="remotePeerId"></p>
</details>
</div>
<div id="messages"></div>
<div class="footer">
<div class="inputArea">
<input type="text" id="nickText" placeholder="Nickname" />
<textarea id="messageText" placeholder="Message"></textarea>
</div>
<div class="controls">
<button id="send">Send</button>
<button id="exit">Exit chat</button>
</div>
</div>
</div>
<script src="./build/bundle.js"></script>
</body>
</html>


@ -1,14 +0,0 @@
/* eslint-disable */
import { API } from "../browser-tests/src/api/shared.ts";
runApp().catch((err) => {
console.error(err);
});
async function runApp() {
if (typeof window !== "undefined") {
// Expose shared API functions for browser communication
window.wakuAPI = API;
window.subscriptions = [];
}
}


@ -1,19 +0,0 @@
{
"name": "Light Chat",
"description": "Send messages between several users (or just one) using light client targeted protocols.",
"icons": [
{
"src": "favicon.ico",
"sizes": "64x64 32x32 24x24 16x16",
"type": "image/x-icon"
},
{
"src": "favicon.png",
"type": "image/png",
"sizes": "192x192"
}
],
"display": "standalone",
"theme_color": "#ffffff",
"background_color": "#ffffff"
}


@ -1,27 +0,0 @@
{
"name": "@waku/headless-tests",
"version": "0.1.0",
"private": true,
"homepage": "/headless",
"type": "module",
"devDependencies": {
"@babel/core": "^7.24.0",
"@babel/preset-env": "^7.24.0",
"@babel/preset-typescript": "^7.23.3",
"babel-loader": "^9.1.3",
"filter-obj": "^2.0.2",
"it-first": "^3.0.9",
"node-polyfill-webpack-plugin": "^2.0.1",
"serve": "^14.1.2",
"webpack": "^5.99.5",
"webpack-cli": "^5.1.4"
},
"dependencies": {
"@waku/sdk": "^0.0.30"
},
"scripts": {
"start": "serve .",
"build": "webpack",
"format": "eslint --fix webpack.config.js"
}
}


@ -1,153 +0,0 @@
* {
margin: 0;
padding: 0;
word-wrap: break-word;
box-sizing: border-box;
}
html, body {
width: 100%;
height: 100%;
max-width: 100%;
max-height: 100%;
}
html {
font-size: 16px;
overflow: hidden;
}
body {
display: flex;
align-items: center;
padding: 10px;
justify-content: center;
}
details {
margin-bottom: 15px;
}
details p {
margin-bottom: 10px;
}
summary {
cursor: pointer;
max-width: 100%;
margin-bottom: 5px;
}
span {
font-weight: 300;
}
input, textarea {
line-height: 1rem;
padding: 5px;
}
textarea {
min-height: 3rem;
}
h3 {
margin-bottom: 5px;
}
.content {
width: 800px;
min-width: 300px;
max-width: 800px;
height: 100%;
display: flex;
flex-direction: column;
align-content: space-between;
}
#messages {
overflow-y: scroll;
overflow-x: hidden;
}
.message + .message {
margin-top: 15px;
}
.message :first-child {
font-weight: bold;
}
.message p + p {
margin-top: 5px;
}
.message span {
font-size: 0.8rem;
}
.inputArea {
display: flex;
gap: 10px;
flex-direction: column;
margin-top: 20px;
}
.controls {
margin-top: 10px;
display: flex;
gap: 10px;
}
.controls button {
flex-grow: 1;
cursor: pointer;
padding: 10px;
}
#send {
background-color: #32d1a0;
border: none;
color: white;
}
#send:hover {
background-color: #3abd96;
}
#send:active {
background-color: #3ba183;
}
#exit {
color: white;
border: none;
background-color: #ff3a31;
}
#exit:hover {
background-color: #e4423a;
}
#exit:active {
background-color: #c84740;
}
.success {
color: #3ba183;
}
.progress {
color: #9ea13b;
}
.terminated {
color: black;
}
.error {
color: #c84740;
}
.footer {
display: flex;
width: 100%;
flex-direction: column;
align-self: flex-end;
}


@ -1,14 +0,0 @@
{
"compilerOptions": {
"target": "es2020",
"module": "commonjs",
"allowJs": true,
"checkJs": false,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true
},
"include": [
"**/*.js"
]
}


@ -1,47 +0,0 @@
/* eslint-disable */
/**
* This webpack configuration file uses ES Module syntax.
*/
import path from 'path';
import { fileURLToPath } from 'url';
import NodePolyfillPlugin from 'node-polyfill-webpack-plugin';
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
export default {
entry: "./index.js",
output: {
filename: "bundle.js",
path: path.resolve(__dirname, "build")
},
mode: "production",
target: "web",
plugins: [new NodePolyfillPlugin()],
resolve: {
extensions: [".js", ".ts", ".tsx", ".jsx"],
fallback: {
fs: false,
net: false,
tls: false
},
alias: {
// Create an alias to easily import from src
"@src": path.resolve(__dirname, "../src")
}
},
module: {
rules: [
{
test: /\.(js|ts|tsx)$/,
exclude: /node_modules/,
use: {
loader: "babel-loader",
options: {
presets: ["@babel/preset-env", "@babel/preset-typescript"]
}
}
}
]
}
};


@ -12,17 +12,18 @@
},
"type": "module",
"author": "Waku Team",
"homepage": "https://github.com/waku-org/js-waku/tree/master/packages/interfaces#readme",
"homepage": "https://github.com/logos-messaging/logos-messaging-js/tree/master/packages/interfaces#readme",
"repository": {
"type": "git",
"url": "https://github.com/waku-org/js-waku.git"
"url": "git+https://github.com/logos-messaging/logos-messaging-js.git"
},
"bugs": {
"url": "https://github.com/waku-org/js-waku/issues"
"url": "https://github.com/logos-messaging/logos-messaging-js/issues"
},
"license": "MIT OR Apache-2.0",
"keywords": [
"waku",
"logos-messaging",
"decentralized",
"secure",
"communication",


@ -16,6 +16,7 @@ export interface IRelayAPI {
readonly pubsubTopics: Set<PubsubTopic>;
readonly gossipSub: GossipSub;
start: () => Promise<void>;
stop: () => Promise<void>;
waitForPeers: () => Promise<void>;
getMeshPeers: (topic?: TopicStr) => PeerIdStr[];
}


@ -88,11 +88,18 @@ export type QueryRequestParams = {
* Only use if you know what you are doing.
*/
peerId?: PeerId;
/**
* An optional AbortSignal to cancel the query.
* When the signal is aborted, the query will stop processing and return early.
*/
abortSignal?: AbortSignal;
};
export type IStore = {
readonly multicodec: string;
stop(): void;
createCursor(message: IDecodedMessage): StoreCursor;
queryGenerator: <T extends IDecodedMessage>(
decoders: IDecoder<T>[],

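The `abortSignal` option added to `QueryRequestParams` above lets callers cancel an in-flight store query. A minimal self-contained sketch of the pattern, not the actual js-waku implementation (the page fetcher and generator names here are illustrative):

```typescript
// Hypothetical page fetcher standing in for a Store node round-trip.
async function fetchPage(cursor: number): Promise<number[]> {
  return cursor < 3 ? [cursor * 2, cursor * 2 + 1] : [];
}

// Sketch: a paginated query generator that stops processing and returns
// early once the signal aborts, mirroring the documented contract.
async function* queryGenerator(
  abortSignal?: AbortSignal
): AsyncGenerator<number[]> {
  let cursor = 0;
  for (;;) {
    if (abortSignal?.aborted) return; // cancelled by the caller
    const page = await fetchPage(cursor);
    if (page.length === 0) return; // no more pages
    yield page;
    cursor += 1;
  }
}

async function main(): Promise<number[]> {
  const controller = new AbortController();
  const received: number[] = [];
  for await (const page of queryGenerator(controller.signal)) {
    received.push(...page);
    controller.abort(); // cancel after the first page
  }
  return received;
}
```

Aborting between pages means at most one extra page is fetched after cancellation, which keeps the generator simple while still returning promptly.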

@ -101,6 +101,16 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
* @waku/interfaces bumped from 0.0.27 to 0.0.28
* @waku/utils bumped from 0.0.20 to 0.0.21
## [0.0.38](https://github.com/waku-org/js-waku/compare/message-encryption-v0.0.37...message-encryption-v0.0.38) (2025-10-31)
### Dependencies
* The following workspace dependencies were updated
* dependencies
* @waku/core bumped from 0.0.39 to 0.0.40
* @waku/proto bumped from 0.0.14 to 0.0.15
## [0.0.37](https://github.com/waku-org/js-waku/compare/message-encryption-v0.0.36...message-encryption-v0.0.37) (2025-09-20)


@ -1,6 +1,6 @@
{
"name": "@waku/message-encryption",
"version": "0.0.37",
"version": "0.0.38",
"description": "Waku Message Payload Encryption",
"types": "./dist/index.d.ts",
"module": "./dist/index.js",
@ -33,17 +33,18 @@
},
"type": "module",
"author": "Waku Team",
"homepage": "https://github.com/waku-org/js-waku/tree/master/packages/message-encryption#readme",
"homepage": "https://github.com/logos-messaging/logos-messaging-js/tree/master/packages/message-encryption#readme",
"repository": {
"type": "git",
"url": "https://github.com/waku-org/js-waku.git"
"url": "git+https://github.com/logos-messaging/logos-messaging-js.git"
},
"bugs": {
"url": "https://github.com/waku-org/js-waku/issues"
"url": "https://github.com/logos-messaging/logos-messaging-js/issues"
},
"license": "MIT OR Apache-2.0",
"keywords": [
"waku",
"logos-messaging",
"decentralized",
"secure",
"communication",
@ -76,9 +77,9 @@
},
"dependencies": {
"@noble/secp256k1": "^1.7.1",
"@waku/core": "0.0.39",
"@waku/core": "0.0.40",
"@waku/interfaces": "0.0.34",
"@waku/proto": "0.0.14",
"@waku/proto": "0.0.15",
"@waku/utils": "0.0.27",
"debug": "^4.3.4",
"js-sha3": "^0.9.2",


@ -1,5 +1,21 @@
# Changelog
## [0.0.15](https://github.com/waku-org/js-waku/compare/proto-v0.0.14...proto-v0.0.15) (2025-10-31)
### ⚠ BREAKING CHANGES
* SDS lamport timestamp overflow and keep it to current time ([#2664](https://github.com/waku-org/js-waku/issues/2664))
### Features
* Add SDS-Repair (SDS-R) to the SDS implementation ([#2698](https://github.com/waku-org/js-waku/issues/2698)) ([5334a7f](https://github.com/waku-org/js-waku/commit/5334a7fcc91544d33294beaad9b45e641ecf404d))
### Bug Fixes
* SDS lamport timestamp overflow and keep it to current time ([#2664](https://github.com/waku-org/js-waku/issues/2664)) ([c0ecb6a](https://github.com/waku-org/js-waku/commit/c0ecb6abbaae0544f352b89293f59f274600a916))
## [0.0.14](https://github.com/waku-org/js-waku/compare/proto-v0.0.13...proto-v0.0.14) (2025-09-20)


@ -1,6 +1,6 @@
{
"name": "@waku/proto",
"version": "0.0.14",
"version": "0.0.15",
"description": "Protobuf definitions for Waku",
"types": "./dist/index.d.ts",
"module": "./dist/index.js",
@ -12,17 +12,18 @@
},
"type": "module",
"author": "Waku Team",
"homepage": "https://github.com/waku-org/js-waku/tree/master/packages/proto#readme",
"homepage": "https://github.com/logos-messaging/logos-messaging-js/tree/master/packages/proto#readme",
"repository": {
"type": "git",
"url": "https://github.com/waku-org/js-waku.git"
"url": "git+https://github.com/logos-messaging/logos-messaging-js.git"
},
"bugs": {
"url": "https://github.com/waku-org/js-waku/issues"
"url": "https://github.com/logos-messaging/logos-messaging-js/issues"
},
"license": "MIT OR Apache-2.0",
"keywords": [
"waku",
"logos-messaging",
"decentralized",
"secure",
"communication",


@ -13,6 +13,7 @@ import type { Uint8ArrayList } from 'uint8arraylist'
export interface HistoryEntry {
messageId: string
retrievalHint?: Uint8Array
senderId?: string
}
export namespace HistoryEntry {
@ -35,6 +36,11 @@ export namespace HistoryEntry {
w.bytes(obj.retrievalHint)
}
if (obj.senderId != null) {
w.uint32(26)
w.string(obj.senderId)
}
if (opts.lengthDelimited !== false) {
w.ldelim()
}
@ -57,6 +63,10 @@ export namespace HistoryEntry {
obj.retrievalHint = reader.bytes()
break
}
case 3: {
obj.senderId = reader.string()
break
}
default: {
reader.skipType(tag & 7)
break
@ -84,9 +94,10 @@ export interface SdsMessage {
senderId: string
messageId: string
channelId: string
lamportTimestamp?: number
lamportTimestamp?: bigint
causalHistory: HistoryEntry[]
bloomFilter?: Uint8Array
repairRequest: HistoryEntry[]
content?: Uint8Array
}
@ -117,7 +128,7 @@ export namespace SdsMessage {
if (obj.lamportTimestamp != null) {
w.uint32(80)
w.int32(obj.lamportTimestamp)
w.uint64(obj.lamportTimestamp)
}
if (obj.causalHistory != null) {
@ -132,6 +143,13 @@ export namespace SdsMessage {
w.bytes(obj.bloomFilter)
}
if (obj.repairRequest != null) {
for (const value of obj.repairRequest) {
w.uint32(106)
HistoryEntry.codec().encode(value, w)
}
}
if (obj.content != null) {
w.uint32(162)
w.bytes(obj.content)
@ -145,7 +163,8 @@ export namespace SdsMessage {
senderId: '',
messageId: '',
channelId: '',
causalHistory: []
causalHistory: [],
repairRequest: []
}
const end = length == null ? reader.len : reader.pos + length
@ -167,7 +186,7 @@ export namespace SdsMessage {
break
}
case 10: {
obj.lamportTimestamp = reader.int32()
obj.lamportTimestamp = reader.uint64()
break
}
case 11: {
@ -184,6 +203,16 @@ export namespace SdsMessage {
obj.bloomFilter = reader.bytes()
break
}
case 13: {
if (opts.limits?.repairRequest != null && obj.repairRequest.length === opts.limits.repairRequest) {
throw new MaxLengthError('Decode error - map field "repairRequest" had too many elements')
}
obj.repairRequest.push(HistoryEntry.codec().decode(reader, reader.uint32(), {
limits: opts.limits?.repairRequest$
}))
break
}
case 20: {
obj.content = reader.bytes()
break


@ -3,14 +3,19 @@ syntax = "proto3";
message HistoryEntry {
string message_id = 1; // Unique identifier of the SDS message, as defined in `Message`
optional bytes retrieval_hint = 2; // Optional information to help remote parties retrieve this SDS message; For example, A Waku deterministic message hash or routing payload hash
optional string sender_id = 3; // Participant ID of original message sender. Only populated if using optional SDS Repair extension
}
message SdsMessage {
string sender_id = 1; // Participant ID of the message sender
string message_id = 2; // Unique identifier of the message
string channel_id = 3; // Identifier of the channel to which the message belongs
optional int32 lamport_timestamp = 10; // Logical timestamp for causal ordering in channel
optional uint64 lamport_timestamp = 10; // Logical timestamp for causal ordering in channel
repeated HistoryEntry causal_history = 11; // List of preceding message IDs that this message causally depends on. Generally 2 or 3 message IDs are included.
optional bytes bloom_filter = 12; // Bloom filter representing received message IDs in channel
repeated HistoryEntry repair_request = 13; // Capped list of history entries missing from sender's causal history. Only populated if using the optional SDS Repair extension.
optional bytes content = 20; // Actual content of the message
}

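The schema change above widens `lamport_timestamp` from `int32` to `uint64`, which the generated code surfaces as `bigint` (`reader.uint64()` / `w.uint64(...)`). A minimal sketch of why that matters, assuming hypothetical helper names that are not part of the SDS API:

```typescript
// Lamport clock kept as bigint, matching the uint64 wire type.
// These helpers are illustrative, not the SDS implementation.
function tick(local: bigint): bigint {
  return local + 1n; // increment before sending a message
}

function merge(local: bigint, received: bigint): bigint {
  // On receive: clock = max(local, received) + 1
  return (local > received ? local : received) + 1n;
}

// A signed 32-bit counter would overflow past 2_147_483_647;
// bigint arithmetic stays exact.
const past32bit = 2_147_483_647n;
const next = tick(past32bit); // still exact, no wrap-around
```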

@ -0,0 +1,26 @@
const config = {
extension: ['ts', 'tsx'],
spec: 'src/**/*.spec.{ts,tsx}',
require: ['ts-node/register', 'isomorphic-fetch'],
loader: 'ts-node/esm',
'node-option': [
'experimental-specifier-resolution=node',
'loader=ts-node/esm'
],
exit: true,
retries: 4
};
if (process.env.CI) {
console.log("Running tests in parallel");
config.parallel = true;
config.jobs = 6;
console.log("Activating allure reporting");
config.reporter = 'mocha-multi-reporters';
config.reporterOptions = {
configFile: '.mocha.reporters.json'
};
} else {
console.log("Running tests serially. To enable parallel execution update mocha config");
}
module.exports = config;


@ -0,0 +1,25 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.0.8](https://github.com/waku-org/js-waku/compare/react-v0.0.7...react-v0.0.8) (2025-10-31)
### Features
* Add waku/react package and make it compatible with React frameworks ([#2656](https://github.com/waku-org/js-waku/issues/2656)) ([ad0bed6](https://github.com/waku-org/js-waku/commit/ad0bed69ba30456b6823e0260b58bdc31d53ff22))
### Dependencies
* The following workspace dependencies were updated
* dependencies
* @waku/sdk bumped from 0.0.35 to 0.0.36
## [0.0.7] - Unreleased
### Added
- Initial release with React hooks and components for js-waku

packages/react/package.json

@ -0,0 +1,100 @@
{
"name": "@waku/react",
"version": "0.0.8",
"description": "React hooks and components to use logos-messaging-js",
"type": "module",
"main": "dist/index.cjs.js",
"module": "dist/index.esm.mjs",
"umd:main": "dist/index.umd.js",
"unpkg": "dist/index.umd.js",
"source": "src/index.ts",
"types": "dist/index.d.ts",
"sideEffects": false,
"exports": {
".": {
"types": "./dist/index.d.ts",
"import": "./dist/index.esm.mjs",
"require": "./dist/index.cjs.js"
}
},
"author": "Waku Team",
"homepage": "https://github.com/logos-messaging/logos-messaging-js/tree/master/packages/react#readme",
"repository": {
"type": "git",
"url": "git+https://github.com/logos-messaging/logos-messaging-js.git"
},
"bugs": {
"url": "https://github.com/logos-messaging/logos-messaging-js/issues"
},
"license": "MIT OR Apache-2.0",
"keywords": [
"waku",
"logos-messaging",
"decentralized",
"secure",
"communication",
"web3",
"ethereum",
"dapps",
"privacy",
"react"
],
"scripts": {
"build": "rollup --config rollup.config.js",
"fix": "run-s fix:*",
"fix:lint": "eslint src *.js --fix",
"check": "run-s check:*",
"check:tsc": "tsc -p tsconfig.dev.json",
"check:lint": "eslint src *.js",
"check:spelling": "cspell \"{README.md,src/**/*.ts,src/**/*.tsx}\"",
"watch:build": "tsc -p tsconfig.json -w",
"prepublish": "npm run build",
"reset-hard": "git clean -dfx -e .idea && git reset --hard && npm i && npm run build"
},
"engines": {
"node": ">=22"
},
"dependencies": {
"@waku/interfaces": "0.0.34",
"@waku/sdk": "0.0.36",
"@waku/utils": "0.0.27"
},
"devDependencies": {
"@rollup/plugin-commonjs": "^25.0.7",
"@rollup/plugin-json": "^6.0.0",
"@rollup/plugin-node-resolve": "^15.2.3",
"@types/chai": "^4.3.11",
"@types/mocha": "^10.0.9",
"@types/react": "^18.0.28",
"@typescript-eslint/eslint-plugin": "^7.1.1",
"@typescript-eslint/parser": "^7.1.1",
"@waku/build-utils": "*",
"chai": "^5.1.1",
"cspell": "^8.6.1",
"eslint": "^8.34.0",
"eslint-config-prettier": "^8.6.0",
"eslint-plugin-prettier": "^4.2.1",
"eslint-plugin-react": "^7.32.2",
"eslint-plugin-react-hooks": "^4.6.0",
"eslint-plugin-simple-import-sort": "^10.0.0",
"mocha": "^10.7.3",
"npm-run-all": "^4.1.5",
"rollup": "^4.12.0",
"@rollup/plugin-terser": "^0.4.4",
"rollup-plugin-typescript2": "^0.36.0"
},
"peerDependencies": {
"react": "^16.8.0 || ^17 || ^18 || ^19"
},
"files": [
"dist",
"bundle",
"src/**/*.ts",
"src/**/*.tsx",
"!**/*.spec.*",
"!**/*.json",
"CHANGELOG.md",
"LICENSE",
"README.md"
]
}


@ -0,0 +1,99 @@
import commonjs from "@rollup/plugin-commonjs";
import json from "@rollup/plugin-json";
import { nodeResolve } from "@rollup/plugin-node-resolve";
import terser from "@rollup/plugin-terser";
import typescript from "rollup-plugin-typescript2";
const input = "src/index.ts";
const external = [
"react",
"react-dom",
"@waku/interfaces",
"@waku/sdk",
"@waku/utils"
];
export default [
{
input,
output: {
file: "dist/index.esm.mjs",
format: "esm",
sourcemap: true
},
external,
plugins: [
nodeResolve({
browser: true,
preferBuiltins: false
}),
commonjs(),
json(),
typescript({
tsconfig: "./tsconfig.json",
useTsconfigDeclarationDir: true
})
]
},
{
input,
output: {
file: "dist/index.cjs.js",
format: "cjs",
sourcemap: true,
exports: "named"
},
external,
plugins: [
nodeResolve({
browser: true,
preferBuiltins: false
}),
commonjs(),
json(),
typescript({
tsconfig: "./tsconfig.json",
tsconfigOverride: {
compilerOptions: {
declaration: false
}
}
})
]
},
{
input,
output: {
file: "dist/index.umd.js",
format: "umd",
name: "WakuReact",
sourcemap: true,
globals: {
react: "React",
"react-dom": "ReactDOM",
"@waku/interfaces": "WakuInterfaces",
"@waku/sdk": "WakuSDK",
"@waku/utils": "WakuUtils"
}
},
external,
plugins: [
nodeResolve({
browser: true,
preferBuiltins: false
}),
commonjs(),
json(),
typescript({
tsconfig: "./tsconfig.json",
tsconfigOverride: {
compilerOptions: {
declaration: false
}
}
}),
terser()
]
}
];


@ -0,0 +1,57 @@
"use client";
import type { CreateNodeOptions, IWaku, LightNode } from "@waku/interfaces";
import * as React from "react";
import type { CreateNodeResult, ReactChildrenProps } from "./types.js";
import { useCreateLightNode } from "./useCreateWaku.js";
type WakuContextType<T extends IWaku> = CreateNodeResult<T>;
const WakuContext = React.createContext<WakuContextType<IWaku>>({
node: undefined,
isLoading: false,
error: undefined
});
/**
* Hook to retrieve Waku node from Context. By default generic Waku type will be used.
* @example
* const { node, isLoading, error } = useWaku<LightNode>();
* @example
* const { node, isLoading, error } = useWaku<RelayNode>();
* @example
* const { node, isLoading, error } = useWaku<FullNode>();
* @example
* const { node, isLoading, error } = useWaku();
* @returns WakuContext
*/
export const useWaku = (): WakuContextType<LightNode> =>
React.useContext(WakuContext) as WakuContextType<LightNode>;
type ProviderProps<T> = ReactChildrenProps & { options: T };
/**
* Provider for creating Waku node based on options passed.
* @example
* const App = (props) => (
* <WakuProvider options={{...}}>
* <Component />
* </WakuProvider>
* );
* const Component = (props) => {
* const { node, isLoading, error } = useWaku();
* ...
* };
* @param {Object} props - options to create a node and other React props
* @param {CreateNodeOptions} props.options - optional options for creating Light Node
* @returns React Light Node provider component
*/
export const WakuProvider = (
props: ProviderProps<CreateNodeOptions>
): React.ReactElement => {
const result = useCreateLightNode(props.options);
return (
<WakuContext.Provider value={result}>{props.children}</WakuContext.Provider>
);
};


@ -0,0 +1,4 @@
export * from "@waku/sdk";
export type { CreateNodeOptions } from "./types.js";
export { WakuProvider, useWaku } from "./WakuProvider.js";


@ -0,0 +1,16 @@
import type { IWaku } from "@waku/interfaces";
import type * as React from "react";
export type { CreateNodeOptions, AutoSharding } from "@waku/interfaces";
type HookState = {
isLoading: boolean;
error: undefined | string;
};
export type CreateNodeResult<T extends IWaku> = HookState & {
node: undefined | T;
};
export type ReactChildrenProps = {
children?: React.ReactNode;
};


@ -0,0 +1,63 @@
"use client";
import type { CreateNodeOptions, IWaku, LightNode } from "@waku/interfaces";
import { createLightNode } from "@waku/sdk";
import * as React from "react";
import type { CreateNodeResult } from "./types.js";
type NodeFactory<N, T = Record<string, never>> = (options?: T) => Promise<N>;
type CreateNodeParams<N extends IWaku, T = Record<string, never>> = T & {
factory: NodeFactory<N, T>;
};
const useCreateNode = <N extends IWaku, T = Record<string, never>>(
params: CreateNodeParams<N, T>
): CreateNodeResult<N> => {
const { factory, ...options } = params;
const [node, setNode] = React.useState<N | undefined>(undefined);
const [isLoading, setLoading] = React.useState<boolean>(true);
const [error, setError] = React.useState<undefined | string>(undefined);
React.useEffect(() => {
let cancelled = false;
setLoading(true);
factory(options as T)
.then(async (node) => {
if (cancelled) {
return;
}
await node.start();
await node.waitForPeers();
setNode(node);
setLoading(false);
})
.catch((err) => {
setLoading(false);
setError(`Failed at creating node: ${err?.message || "no message"}`);
});
return () => {
cancelled = true;
};
}, []);
return {
node,
error,
isLoading
};
};
export const useCreateLightNode = (
params?: CreateNodeOptions
): CreateNodeResult<LightNode> => {
return useCreateNode<LightNode, CreateNodeOptions>({
...params,
factory: createLightNode
});
};


@ -0,0 +1,6 @@
{
"extends": "../../tsconfig.dev",
"compilerOptions": {
"jsx": "react"
}
}


@ -0,0 +1,12 @@
{
"extends": "../../tsconfig",
"compilerOptions": {
"outDir": "dist/",
"rootDir": "src",
"tsBuildInfoFile": "dist/.tsbuildinfo",
"declarationDir": "dist",
"jsx": "react"
},
"include": ["src", "*.js"],
"exclude": ["src/**/*.spec.ts", "src/**/*.spec.tsx", "src/test_utils"]
}


@ -0,0 +1,4 @@
{
"extends": ["../../typedoc.base.json"],
"entryPoints": ["src/index.ts"]
}


@ -25,6 +25,17 @@
* @waku/interfaces bumped from 0.0.16 to 0.0.17
* @waku/utils bumped from 0.0.9 to 0.0.10
## [0.0.23](https://github.com/waku-org/js-waku/compare/relay-v0.0.22...relay-v0.0.23) (2025-10-31)
### Dependencies
* The following workspace dependencies were updated
* dependencies
* @waku/core bumped from 0.0.39 to 0.0.40
* @waku/sdk bumped from 0.0.35 to 0.0.36
* @waku/proto bumped from 0.0.14 to 0.0.15
## [0.0.22](https://github.com/waku-org/js-waku/compare/relay-v0.0.21...relay-v0.0.22) (2025-09-20)


@ -1,6 +1,6 @@
{
"name": "@waku/relay",
"version": "0.0.22",
"version": "0.0.23",
"description": "Relay Protocol for Waku",
"types": "./dist/index.d.ts",
"module": "./dist/index.js",
@ -11,17 +11,18 @@
}
},
"type": "module",
"homepage": "https://github.com/waku-org/js-waku/tree/master/packages/relay#readme",
"homepage": "https://github.com/logos-messaging/logos-messaging-js/tree/master/packages/relay#readme",
"repository": {
"type": "git",
"url": "https://github.com/waku-org/js-waku.git"
"url": "git+https://github.com/logos-messaging/logos-messaging-js.git"
},
"bugs": {
"url": "https://github.com/waku-org/js-waku/issues"
"url": "https://github.com/logos-messaging/logos-messaging-js/issues"
},
"license": "MIT OR Apache-2.0",
"keywords": [
"waku",
"logos-messaging",
"decentralised",
"communication",
"web3",
@ -51,10 +52,10 @@
"dependencies": {
"@chainsafe/libp2p-gossipsub": "14.1.1",
"@noble/hashes": "^1.3.2",
"@waku/core": "0.0.39",
"@waku/sdk": "0.0.35",
"@waku/core": "0.0.40",
"@waku/sdk": "0.0.36",
"@waku/interfaces": "0.0.34",
"@waku/proto": "0.0.14",
"@waku/proto": "0.0.15",
"@waku/utils": "0.0.27",
"chai": "^4.3.10",
"debug": "^4.3.4",


@ -67,6 +67,10 @@ export class Relay implements IRelay {
* Observers under key `""` are always called.
*/
private observers: Map<PubsubTopic, Map<ContentTopic, Set<unknown>>>;
private messageEventHandlers: Map<
PubsubTopic,
(event: CustomEvent<GossipsubMessage>) => void
> = new Map();
public constructor(params: RelayConstructorParams) {
if (!this.isRelayPubsub(params.libp2p.services.pubsub)) {
@ -105,6 +109,19 @@ export class Relay implements IRelay {
this.subscribeToAllTopics();
}
public async stop(): Promise<void> {
for (const pubsubTopic of this.pubsubTopics) {
const handler = this.messageEventHandlers.get(pubsubTopic);
if (handler) {
this.gossipSub.removeEventListener("gossipsub:message", handler);
}
this.gossipSub.topicValidators.delete(pubsubTopic);
this.gossipSub.unsubscribe(pubsubTopic);
}
this.messageEventHandlers.clear();
this.observers.clear();
}
/**
* Wait for at least one peer with the given protocol to be connected and in the gossipsub
* mesh for all pubsubTopics.
@ -299,17 +316,17 @@ export class Relay implements IRelay {
* @override
*/
private gossipSubSubscribe(pubsubTopic: string): void {
this.gossipSub.addEventListener(
"gossipsub:message",
(event: CustomEvent<GossipsubMessage>) => {
if (event.detail.msg.topic !== pubsubTopic) return;
const handler = (event: CustomEvent<GossipsubMessage>): void => {
if (event.detail.msg.topic !== pubsubTopic) return;
this.processIncomingMessage(
event.detail.msg.topic,
event.detail.msg.data
).catch((e) => log.error("Failed to process incoming message", e));
}
);
this.processIncomingMessage(
event.detail.msg.topic,
event.detail.msg.data
).catch((e) => log.error("Failed to process incoming message", e));
};
this.messageEventHandlers.set(pubsubTopic, handler);
this.gossipSub.addEventListener("gossipsub:message", handler);
this.gossipSub.topicValidators.set(pubsubTopic, messageValidator);
this.gossipSub.subscribe(pubsubTopic);


@ -13,17 +13,18 @@
},
"type": "module",
"author": "Waku Team",
"homepage": "https://github.com/waku-org/js-waku/tree/master/packages/reliability-tests#readme",
"homepage": "https://github.com/logos-messaging/logos-messaging-js/tree/master/packages/reliability-tests#readme",
"repository": {
"type": "git",
"url": "https://github.com/waku-org/js-waku.git"
"url": "git+https://github.com/logos-messaging/logos-messaging-js.git"
},
"bugs": {
"url": "https://github.com/waku-org/js-waku/issues"
"url": "https://github.com/logos-messaging/logos-messaging-js/issues"
},
"license": "MIT OR Apache-2.0",
"keywords": [
"waku",
"logos-messaging",
"decentralized",
"secure",
"communication",
