From 65419d85aab18a00ead480fa566c83e7a4f48834 Mon Sep 17 00:00:00 2001
From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
Date: Mon, 19 Aug 2024 12:49:19 +0200
Subject: [PATCH 01/24] run-docker clarify title (#206)
---
docs/guides/nwaku/run-docker.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/docs/guides/nwaku/run-docker.md b/docs/guides/nwaku/run-docker.md
index c68b948..635d54b 100644
--- a/docs/guides/nwaku/run-docker.md
+++ b/docs/guides/nwaku/run-docker.md
@@ -17,6 +17,8 @@ We recommend running a `nwaku` node with at least 2GB of RAM, especially if `WSS
The Nwaku Docker images are available on the Docker Hub public registry under the [statusteam/nim-waku](https://hub.docker.com/r/statusteam/nim-waku) repository. Please visit [statusteam/nim-waku/tags](https://hub.docker.com/r/statusteam/nim-waku/tags) for images of specific releases.
+## Build Docker image
+
You can also build the Docker image locally:
```shell
From 6ddf3464cba5b03253cf25163274b0fd39b04166 Mon Sep 17 00:00:00 2001
From: chair <29414216+chair28980@users.noreply.github.com>
Date: Mon, 2 Sep 2024 02:46:28 -0700
Subject: [PATCH 02/24] Update add-action-project.yml (#214)
---
.github/workflows/add-action-project.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/.github/workflows/add-action-project.yml b/.github/workflows/add-action-project.yml
index 4f50380..ad3b8b1 100644
--- a/.github/workflows/add-action-project.yml
+++ b/.github/workflows/add-action-project.yml
@@ -14,4 +14,4 @@ jobs:
- uses: actions/add-to-project@v0.5.0
with:
project-url: https://github.com/orgs/waku-org/projects/2
- github-token: ${{ secrets.ADD_TO_PROJECT_PAT }}
+ github-token: ${{ secrets.ADD_TO_PROJECT_20240815 }}
From 40f0380fc7c3f5a1f421de4e798c8c2775b3a08a Mon Sep 17 00:00:00 2001
From: Ivan FB <128452529+Ivansete-status@users.noreply.github.com>
Date: Fri, 6 Sep 2024 20:50:11 +0200
Subject: [PATCH 03/24] extending faq adding how to run a node point (#210)
---
docs/learn/faq.md | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/docs/learn/faq.md b/docs/learn/faq.md
index 51f5428..36f1ac0 100644
--- a/docs/learn/faq.md
+++ b/docs/learn/faq.md
@@ -33,3 +33,9 @@ import { AccordionItem } from '@site/src/components/mdx'
Rate Limiting Nullifier is a zero-knowledge (ZK) protocol enabling spam protection in a decentralized network while preserving privacy. Each message must be accompanied by a ZK proof, which Relay nodes verify to ensure the publishers do not send more messages than they are allowed. The ZK proof does not leak any private information about message publishers - it only proves they are members of a set of users allowed to publish a certain number of messages per given time frame.
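
The bookkeeping that RLN enforces can be pictured with a toy sketch: at most N messages per publisher per epoch. This is purely illustrative; real RLN enforces the limit with zero-knowledge proofs and nullifiers rather than a plain counter, and the class below is not part of any Waku library:

```typescript
// Toy illustration of RLN-style rate limiting: at most `limit` messages per
// publisher per epoch. Real RLN replaces this counter with ZK proofs.
class RateLimiter {
  private counts = new Map<string, number>();

  constructor(private limit: number) {}

  allow(publisher: string, epoch: number): boolean {
    const key = `${publisher}:${epoch}`;
    const used = this.counts.get(key) ?? 0;
    if (used >= this.limit) return false; // over the per-epoch allowance
    this.counts.set(key, used + 1);
    return true;
  }
}

const limiter = new RateLimiter(1); // one message per epoch
console.log(limiter.allow("alice", 42)); // true
console.log(limiter.allow("alice", 42)); // false (same epoch, limit reached)
console.log(limiter.allow("alice", 43)); // true (new epoch)
```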
+
+
+ Follow the README instructions at nwaku-compose.
+
+
+
From 583a4105b6d50af897dbcb9f89f633c165eedb5d Mon Sep 17 00:00:00 2001
From: Danish Arora <35004822+danisharora099@users.noreply.github.com>
Date: Wed, 11 Sep 2024 11:13:08 +0530
Subject: [PATCH 04/24] feat: js-waku migration docs (0.0.26->0.0.27) (#213)
* feat: setup migrations for js-waku and nwaku
* feat: add migrations for 0.026->0.027
---
.../js-waku/migration_v0.026_0.027.md | 201 ++++++++++++++++++
1 file changed, 201 insertions(+)
create mode 100644 docs/migrations/js-waku/migration_v0.026_0.027.md
diff --git a/docs/migrations/js-waku/migration_v0.026_0.027.md b/docs/migrations/js-waku/migration_v0.026_0.027.md
new file mode 100644
index 0000000..34f44f8
--- /dev/null
+++ b/docs/migrations/js-waku/migration_v0.026_0.027.md
@@ -0,0 +1,201 @@
+# Migrating to Waku v0.027
+
+A migration guide for refactoring your application code from Waku v0.026 to v0.027.
+
+## Table of Contents
+
+- [Migrating to Waku v0.027](#migrating-to-waku-v0027)
+ - [Table of Contents](#table-of-contents)
+ - [Network Configuration](#network-configuration)
+ - [Default Network Configuration](#default-network-configuration)
+ - [Static Sharding](#static-sharding)
+ - [Auto Sharding](#auto-sharding)
+ - [Pubsub Topic Configuration](#pubsub-topic-configuration)
+ - [Removed APIs](#removed-apis)
+ - [Type Changes](#type-changes)
+ - [Internal/Private Utility Function Changes](#internalprivate-utility-function-changes)
+
+## Network Configuration
+
+The way to configure network settings for a Waku node has been simplified. The new `NetworkConfig` type only allows for Static Sharding or Auto Sharding.
+
+### Default Network Configuration
+
+If no network configuration is provided when creating a Light Node, The Waku Network configuration will be used by default.
+
+**Before**
+```typescript
+import { createLightNode } from "@waku/sdk";
+
+const waku = await createLightNode();
+// This would use the default pubsub topic, which was `/waku/2/default-waku/proto`
+```
+
+**After**
+```typescript
+import { createLightNode } from "@waku/sdk";
+
+const waku = await createLightNode();
+// This will now use The Waku Network configuration by default:
+// { clusterId: 1, shards: [0,1,2,3,4,5,6,7] }
+```
+
+### Static Sharding
+
+**Before**
+```typescript
+import { createLightNode } from "@waku/sdk";
+
+const waku = await createLightNode({
+ shardInfo: {
+ clusterId: 1,
+ shards: [0, 1, 2, 3]
+ }
+});
+```
+
+**After**
+```typescript
+import { createLightNode } from "@waku/sdk";
+
+const waku = await createLightNode({
+ networkConfig: {
+ clusterId: 1,
+ shards: [0, 1, 2, 3]
+ }
+});
+```
+
+
+### Auto Sharding
+
+**Before**
+```typescript
+import { createLightNode } from "@waku/sdk";
+
+const waku = await createLightNode({
+ shardInfo: {
+ clusterId: 1,
+ contentTopics: ["/my-app/1/notifications/proto"]
+ }
+});
+```
+
+**After**
+```typescript
+import { createLightNode } from "@waku/sdk";
+
+const waku = await createLightNode({
+ networkConfig: {
+ clusterId: 1,
+ contentTopics: ["/my-app/1/notifications/proto"]
+ }
+});
+```
+
+## Pubsub Topic Configuration
+
+Named pubsub topics are no longer supported. You must use either Static Sharding or Auto Sharding to configure pubsub topics.
+
+**Before**
+```typescript
+import { createLightNode } from "@waku/sdk";
+
+const waku = await createLightNode({
+ pubsubTopics: ["/waku/2/default-waku/proto"]
+});
+```
+
+**After**
+
+Use Static Sharding:
+```typescript
+import { createLightNode } from "@waku/sdk";
+
+const waku = await createLightNode({
+ networkConfig: {
+ clusterId: 1,
+ shards: [0, 1, 2, 3, 4, 5, 6, 7]
+ }
+});
+```
+
+Or use Auto Sharding:
+```typescript
+import { createLightNode } from "@waku/sdk";
+
+const waku = await createLightNode({
+ networkConfig: {
+ clusterId: 1,
+ contentTopics: ["/your-app/1/default/proto"]
+ }
+});
+```
+
+## Removed APIs
+
+The following APIs have been removed:
+
+- `ApplicationInfo` type: use `string` values for application and version in `NetworkConfig` instead.
+- `shardInfo` option in `createLightNode`: Use `networkConfig` instead.
+- `pubsubTopics` option in `createLightNode`: Use `networkConfig` with Static Sharding or Auto Sharding instead.
+
+If you were using `ApplicationInfo` before, you should now use `ContentTopicInfo` (Auto Sharding) and specify your application and version in the content topic `string`.
+
+**Before**
+```typescript
+import { createLightNode } from "@waku/sdk";
+
+const waku = await createLightNode({
+ shardInfo: {
+ clusterId: 1,
+ application: "my-app",
+ version: "1"
+ }
+});
+```
+
+**After**
+```typescript
+import { createLightNode } from "@waku/sdk";
+
+const waku = await createLightNode({
+ networkConfig: {
+ clusterId: 1,
+ contentTopics: ["/my-app/1/default/proto"]
+ }
+});
+```
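
Since a content topic encodes the application name and version directly in its path (`/{application}/{version}/{topic-name}/{encoding}`), the old `ApplicationInfo` fields map onto the string mechanically. The helper below is our own illustration, not part of `@waku/sdk`:

```typescript
// Build a content topic string from the old ApplicationInfo-style fields
// (illustrative helper; format: /{application}/{version}/{topic-name}/{encoding})
function toContentTopic(
  application: string,
  version: string,
  topicName = "default",
  encoding = "proto"
): string {
  return `/${application}/${version}/${topicName}/${encoding}`;
}

console.log(toContentTopic("my-app", "1")); // "/my-app/1/default/proto"
```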
+
+## Type Changes
+
+- `ShardingParams` has been removed. Use `NetworkConfig` instead.
+- `NetworkConfig` is now defined as `StaticSharding` | `AutoSharding`.
+- `StaticSharding` is equivalent to the previous `ShardInfo`.
+- `AutoSharding` is equivalent to the previous `ContentTopicInfo`.
+
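Taken together, these changes can be pictured as plain TypeScript types. The following is an illustrative sketch inferred from the examples in this guide, not the exact `@waku/sdk` declarations:

```typescript
// Illustrative sketch of the new config types (field names inferred from the
// examples in this guide; not the exact @waku/sdk declarations)
type StaticSharding = {
  clusterId: number;
  shards: number[];
};

type AutoSharding = {
  clusterId: number;
  contentTopics: string[];
};

type NetworkConfig = StaticSharding | AutoSharding;

// Example values matching the migration examples above
const staticConfig: StaticSharding = { clusterId: 1, shards: [0, 1, 2, 3] };
const autoConfig: AutoSharding = {
  clusterId: 1,
  contentTopics: ["/my-app/1/notifications/proto"],
};
const configs: NetworkConfig[] = [staticConfig, autoConfig];
console.log(configs.length); // 2
```
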
+## Internal/Private Utility Function Changes
+
+Several utility functions have been updated or added:
+
+- `ensureShardingConfigured` has been removed. Use `derivePubsubTopicsFromNetworkConfig` instead.
+- New function `derivePubsubTopicsFromNetworkConfig` has been added to derive pubsub topics from the network configuration.
+- `shardInfoToPubsubTopics` now accepts `Partial<NetworkConfig>` instead of `Partial<ShardingParams>`.
+- New function `pubsubTopicsToShardInfo` has been added to convert pubsub topics to a ShardInfo object.
+
+If you were using any of these utility functions directly, you'll need to update your code accordingly.
+
+**Before**
+```typescript
+import { ensureShardingConfigured } from "@waku/utils";
+
+const result = ensureShardingConfigured(shardInfo);
+```
+
+**After**
+```typescript
+import { derivePubsubTopicsFromNetworkConfig } from "@waku/utils";
+
+const pubsubTopics = derivePubsubTopicsFromNetworkConfig(networkConfig);
+```
+Note: The default `NetworkConfig` for The Waku Network is now `{ clusterId: 1, shards: [0,1,2,3,4,5,6,7] }`.
From 8ca783537b367379821b4a4ed39eb4f905d80e3a Mon Sep 17 00:00:00 2001
From: gabrielmer <101006718+gabrielmer@users.noreply.github.com>
Date: Mon, 16 Sep 2024 12:29:04 +0300
Subject: [PATCH 05/24] feat: add migration instructions (#211)
---
docs/guides/nwaku/upgrade-instructions.md | 32 +++++++++++++++++++++++
sidebars.js | 1 +
2 files changed, 33 insertions(+)
create mode 100644 docs/guides/nwaku/upgrade-instructions.md
diff --git a/docs/guides/nwaku/upgrade-instructions.md b/docs/guides/nwaku/upgrade-instructions.md
new file mode 100644
index 0000000..3a28fda
--- /dev/null
+++ b/docs/guides/nwaku/upgrade-instructions.md
@@ -0,0 +1,32 @@
+---
+title: Upgrade Instructions
+hide_table_of_contents: true
+sidebar_label: Upgrade Instructions
+---
+
+import { AccordionItem } from '@site/src/components/mdx'
+
+If you are running an old version of Nwaku and want to upgrade your node, please follow the migration instructions below for each target release newer than your current version, in ascending order.
+
+For example, if you are targeting v0.32.0 and currently running v0.30.0, follow the instructions for v0.31.0 and then those for v0.32.0.
+
+## Target Releases
+
+
+
+The `--protected-topic` CLI config was deprecated and replaced by the new `--protected-shard` configuration. Instead of configuring `topic:public_key`, you now need to configure `shard:public_key`.
+
+For example, if you used to run your node with `--protected-topic="waku/2/rs/3/4:your_public_key"`, you need to replace this configuration with `--protected-shard="4:your_public_key"`.
+
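The shard number is simply the last path segment of the old protected topic, so the conversion can be sketched in a few lines (the helper name is ours, for illustration only):

```typescript
// Derive the new --protected-shard value from an old --protected-topic value
// (illustrative helper; the shard is the last path segment of the topic)
function protectedTopicToShard(oldValue: string): string {
  const [topic, publicKey] = oldValue.split(":");
  const shard = topic.split("/").pop();
  return `${shard}:${publicKey}`;
}

console.log(protectedTopicToShard("waku/2/rs/3/4:your_public_key"));
// "4:your_public_key"
```
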
+
+
+
+Named sharding was deprecated in this version. This means that pubsub topics will only be supported if they comply with the static sharding format: `/waku/2/rs/<CLUSTER_ID>/<SHARD_ID>`
+
+In order to migrate your existing application, you need to:
+
+1. Make sure that your clients are sending messages to pubsub topics in the required format. Check that in your interactions with Nwaku's REST API or when using `js-waku`, the configured pubsub topics follow the static sharding format defined above.
+2. When running a node with the `--pubsub-topic` CLI flag, the values provided should comply with the static sharding format.
+3. If your application relies on nodes or clients that may not be updated immediately, keep your node on an older version while subscribing to both the current pubsub topic and the new pubsub topic that will comply with the static sharding format. In that case, you can keep backward compatibility for a migration period.
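
As a quick sanity check, compliance with the static sharding format can be verified with a few lines of code (the helper below is our own illustration, not part of Nwaku):

```typescript
// Check whether a pubsub topic complies with the static sharding format
// /waku/2/rs/<CLUSTER_ID>/<SHARD_ID> (illustrative helper, not part of nwaku)
function isStaticShardingTopic(topic: string): boolean {
  return /^\/waku\/2\/rs\/\d+\/\d+$/.test(topic);
}

console.log(isStaticShardingTopic("/waku/2/rs/1/4")); // true
console.log(isStaticShardingTopic("/waku/2/default-waku/proto")); // false (named topic)
```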
+
+
diff --git a/sidebars.js b/sidebars.js
index fcb3b0b..040d1fe 100644
--- a/sidebars.js
+++ b/sidebars.js
@@ -22,6 +22,7 @@ const sidebars = {
"guides/nwaku/config-options",
"guides/nwaku/configure-nwaku",
"guides/nwaku/faq",
+ "guides/nwaku/upgrade-instructions",
{
type: "html",
value:
From 35c06b3d42b0b84c1d231077186486579775a08c Mon Sep 17 00:00:00 2001
From: gabrielmer <101006718+gabrielmer@users.noreply.github.com>
Date: Fri, 20 Sep 2024 12:38:31 +0300
Subject: [PATCH 06/24] chore: update config instructions for v0.33.0 (#218)
---
docs/guides/nwaku/config-options.md | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/docs/guides/nwaku/config-options.md b/docs/guides/nwaku/config-options.md
index 654e915..819119c 100644
--- a/docs/guides/nwaku/config-options.md
+++ b/docs/guides/nwaku/config-options.md
@@ -66,9 +66,11 @@ Here are the available node configuration options, along with their default valu
| `rln-relay-bandwidth-threshold` | `0 # to maintain backwards compatibility` | Message rate in bytes/sec after which verification of proofs should happen |
| `staticnode` | | Peer multiaddr to directly connect with. Argument may be repeated |
| `keep-alive` | `false` | Enable keep-alive for idle connections: true\|false |
-| `topic` | `["/waku/2/default-waku/proto"]` | Default topic to subscribe to. Argument may be repeated. Deprecated! Please use `pubsub-topic` and/or `content-topic` instead |
-| `pubsub-topic` | | Default pubsub topic to subscribe to. Argument may be repeated |
+| `pubsub-topic` | | Default pubsub topic to subscribe to. Argument may be repeated. **Deprecated!** Please use `shard` and/or `content-topic` instead |
+| `shard` | | Shard to subscribe to. Argument may be repeated |
| `content-topic` | | Default content topic to subscribe to. Argument may be repeated |
+| `reliability` | `false` | Enable experimental reliability protocol: true\|false |
+
## Store and message store config
@@ -107,7 +109,6 @@ Here are the available node configuration options, along with their default valu
| `rest-port` | `8645` | Listening port of the REST HTTP server |
| `rest-relay-cache-capacity` | `30` | Capacity of the Relay REST API message cache |
| `rest-admin` | `false` | Enable access to REST HTTP Admin API: true\|false |
-| `rest-private` | `false` | Enable access to REST HTTP Private API: true\|false |
| `rest-allow-origin` | | Allow cross-origin requests from the specified origin. When using the REST API in a browser, specify the origin host to get a valid response from the node REST HTTP server. This option may be repeated and can contain wildcards (?,\*) for defining URLs and ports such as `localhost:*`, `127.0.0.1:8080`, or allow any website with `*` |
## Metrics config
From edb6b637fe056fbb1d45c59f19009db9c804f83a Mon Sep 17 00:00:00 2001
From: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com>
Date: Mon, 23 Sep 2024 14:07:51 +0200
Subject: [PATCH 07/24] chore: DOS protection of non-relay req/resp protocols
new cli argument description (#216)
* DOS protection of non-relay req/resp protocols has a new cli argument, now described officially.
---
docs/guides/nwaku/config-options.md | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/docs/guides/nwaku/config-options.md b/docs/guides/nwaku/config-options.md
index 819119c..dcad83c 100644
--- a/docs/guides/nwaku/config-options.md
+++ b/docs/guides/nwaku/config-options.md
@@ -157,6 +157,12 @@ Here are the available node configuration options, along with their default valu
| `websocket-secure-key-path` | | Secure websocket key path: '/path/to/key.txt' |
| `websocket-secure-cert-path` | | Secure websocket Certificate path: '/path/to/cert.txt' |
+## Non-relay request-response protocol DoS protection configuration
+
+| Name | Default Value | Description |
+| ---------------------------- | ------------- | ------------------------------------------------------ |
+| `rate-limit` | | Repeatable option. Each occurrence describes a rate-limit configuration for a particular protocol, in the form `protocol:volume/period`. If no protocol is given, the setting is taken as the default for protocols that are not set explicitly, e.g. `80/2s`. Supported protocols are: `lightpush`\|`filter`\|`px`\|`store`\|`storev2`\|`storev3`. `volume` must be an integer representing the number of requests allowed over the period. `period` must be an integer with one of the units `h`\|`m`\|`s`\|`ms`. `storev2` and `storev3` take precedence over `store`, which can easily set both store protocols at once. If the same protocol is configured multiple times, the last setting takes effect. If this option is not set, unlimited requests are allowed. `filter` takes a slightly different approach: it has a default setting applied unless overridden, and its rate limit is applied per subscriber peer, not globally, which must be considered when changing the setting. Examples: `100/1s` - default for all protocols if not set otherwise; `lightpush:0/0s` - the lightpush protocol is not rate limited; `store:130/1500ms` - both store-v3 and store-v2 apply 130 requests per each 1500 ms separately; `px:10/1h` - Peer Exchange serves only 10 requests every hour; `filter:8/5m` - allows 8 subscribe/unsubscribe/ping requests from each subscriber within every 5 minutes. |
+
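A `rate-limit` setting value can be parsed along these lines. This is an illustrative sketch of the `protocol:volume/period` shape described above, not Nwaku's actual parser:

```typescript
// Parse a rate-limit setting of the form [protocol:]volume/period
// (illustrative sketch; not nwaku's actual parser)
interface RateLimitSetting {
  protocol?: string; // absent means "default for protocols not set explicitly"
  volume: number;    // requests allowed per period
  period: string;    // integer with unit h|m|s|ms, e.g. "1500ms"
}

function parseRateLimit(setting: string): RateLimitSetting {
  const colon = setting.indexOf(":");
  const protocol = colon === -1 ? undefined : setting.slice(0, colon);
  const rest = colon === -1 ? setting : setting.slice(colon + 1);
  const match = rest.match(/^(\d+)\/(\d+(?:ms|h|m|s))$/);
  if (!match) throw new Error(`invalid rate-limit setting: ${setting}`);
  return { protocol, volume: Number(match[1]), period: match[2] };
}

console.log(parseRateLimit("store:130/1500ms"));
// { protocol: "store", volume: 130, period: "1500ms" }
console.log(parseRateLimit("100/1s"));
// { protocol: undefined, volume: 100, period: "1s" }
```
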
:::tip
To configure your node using the provided configuration options, have a look at the [Node Configuration Methods](/guides/nwaku/config-methods) guide.
:::
From dbedf1fbd2f98dbeaa64da7c22902104ad615e8c Mon Sep 17 00:00:00 2001
From: Hanno Cornelius <68783915+jm-clius@users.noreply.github.com>
Date: Mon, 30 Sep 2024 06:01:45 +0100
Subject: [PATCH 08/24] docs: add links to published research papers.md (#220)
---
docs/research/index.md | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/docs/research/index.md b/docs/research/index.md
index 7bc6af8..fabbcfb 100644
--- a/docs/research/index.md
+++ b/docs/research/index.md
@@ -7,3 +7,9 @@ sidebar_position: 1
**Research and Studies**: Protocol simulations and theoretical analysis to support the design of Waku protocols. The protocol definitions are on the [Waku RFCs](https://rfc.vac.dev/waku) website.
**Benchmarks**: Results of implementations and engineering-related benchmarks for Waku clients.
+
+Waku also has the following published research papers:
+- [**WAKU-RLN-RELAY: Privacy-Preserving Peer-to-Peer Economic Spam Protection**](https://arxiv.org/abs/2207.00117)
+- [**Message Latency in Waku Relay with Rate Limiting Nullifiers**](https://eprint.iacr.org/2024/1073)
+- [**Waku: A Family of Modular P2P Protocols For Secure & Censorship-Resistant Communication**](https://arxiv.org/abs/2207.00038)
+- [**The Waku Network as Infrastructure for dApps**](https://ieeexplore.ieee.org/document/10646404)
From ed094329bd075722a5fce6c2f810c05f4452d7f9 Mon Sep 17 00:00:00 2001
From: Sasha <118575614+weboko@users.noreply.github.com>
Date: Mon, 30 Sep 2024 17:13:47 +0200
Subject: [PATCH 09/24] chore: update docs to show new property (#209)
---
docs/guides/js-waku/configure-discovery.md | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/docs/guides/js-waku/configure-discovery.md b/docs/guides/js-waku/configure-discovery.md
index 05e98c6..841a502 100644
--- a/docs/guides/js-waku/configure-discovery.md
+++ b/docs/guides/js-waku/configure-discovery.md
@@ -64,14 +64,11 @@ const peers = [
const node = await createLightNode();
-// In case nodes are using `ws` protocol - additional configuration is needed:
+// In case nodes are using an IP address and/or the `ws` protocol, additional configuration is needed:
/*
-import { webSockets } from "@libp2p/websockets";
-import { all as filterAll } from "@libp2p/websockets/filters";
-
const node = await createLightNode({
libp2p: {
- transports: [webSockets({ filter: filterAll })],
+ filterMultiaddrs: false,
},
});
*/
From c5d18384556cbabe3cc8394d0f387dd1e536ab0d Mon Sep 17 00:00:00 2001
From: Alex Williamson
Date: Tue, 1 Oct 2024 04:41:27 +0200
Subject: [PATCH 10/24] Fixed grammatical errors in documentation (#208)
Co-authored-by: Sasha <118575614+weboko@users.noreply.github.com>
---
docs/learn/faq.md | 2 +-
docs/learn/research.md | 2 +-
docs/learn/security-features.md | 2 +-
docs/learn/waku-network.md | 2 +-
docs/privacy-policy.md | 2 +-
docs/research/research-and-studies/maximum-bandwidth.md | 4 ++--
6 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/docs/learn/faq.md b/docs/learn/faq.md
index 36f1ac0..025ff42 100644
--- a/docs/learn/faq.md
+++ b/docs/learn/faq.md
@@ -27,7 +27,7 @@ import { AccordionItem } from '@site/src/components/mdx'
- Waku focuses on short, ephemeral, real-time time messages, while IPFS focuses on large, long-term data storage. Although there's an overlap between the two technologies, Waku does not currently support large data for privacy reasons.
+ Waku focuses on short, ephemeral, real-time messages, while IPFS focuses on large, long-term data storage. Although there's an overlap between the two technologies, Waku does not currently support large data for privacy reasons.
diff --git a/docs/learn/research.md b/docs/learn/research.md
index add260f..84aec04 100644
--- a/docs/learn/research.md
+++ b/docs/learn/research.md
@@ -13,7 +13,7 @@ We have prepared a PoC implementation of this method in JS: .
+Denial of service signifies the case where an adversarial peer exhausts another node's service capacity (e.g., by making a large number of requests) and makes it unavailable to the rest of the system. R&D on DoS attack mitigation can be tracked from here: .
In a nutshell, peers have to pay for the service they obtain from each other. In addition to incentivising the service provider, accounting also makes DoS attacks costly for malicious peers. The accounting model can be used in `Store` and `Filter` to protect against DoS attacks.
diff --git a/docs/learn/security-features.md b/docs/learn/security-features.md
index 268fa7e..55ecc6a 100644
--- a/docs/learn/security-features.md
+++ b/docs/learn/security-features.md
@@ -5,7 +5,7 @@ hide_table_of_contents: true
Waku's protocol layers offer different services and security considerations, shaping the overall security of Waku. We document the security models in the [RFCs of the protocols](https://rfc.vac.dev/), aiming to provide transparent and open-source references. This empowers Waku users to understand each protocol's security guarantees and limitations.
-Some of the Waku's security features include the following:
+Some of Waku's security features include the following:
## [Pseudonymity](https://rfc.vac.dev/waku/standards/core/10/waku2/#pseudonymity)
diff --git a/docs/learn/waku-network.md b/docs/learn/waku-network.md
index 8fc9c57..478ebeb 100644
--- a/docs/learn/waku-network.md
+++ b/docs/learn/waku-network.md
@@ -11,7 +11,7 @@ The Waku Network is a shared p2p messaging network that is open-access, useful f
4. Services for resource-restricted nodes, including historical message storage and retrieval, filtering, etc.
:::tip
-If you want to learn more about the Waku Network, the [The Waku Network: Technical Overview](https://blog.waku.org/2024-waku-network-tech-overview) article provides an in-depth look under the hood.
+If you want to learn more about the Waku Network, [The Waku Network: Technical Overview](https://blog.waku.org/2024-waku-network-tech-overview) article provides an in-depth look under the hood.
:::
## Why join the Waku network?
diff --git a/docs/privacy-policy.md b/docs/privacy-policy.md
index 1f8091e..eca7233 100644
--- a/docs/privacy-policy.md
+++ b/docs/privacy-policy.md
@@ -27,7 +27,7 @@ Whenever we refer to “Logos”, “we” or other similar references, we are r
### 2) We limit the collection and processing of personal data from your use of the Website
-We aim to limit the collection and collection and processing of personal data from users of the Website. We only collect and process certain personal data for specific purposes and where we have the legal basis to do so under applicable privacy legislation. We will not collect or process any personal data that we don’t need and where we do store any personal data, we will only store it for the least amount of time needed for the indicated purpose.
+We aim to limit the collection and processing of personal data from users of the Website. We only collect and process certain personal data for specific purposes and where we have the legal basis to do so under applicable privacy legislation. We will not collect or process any personal data that we don’t need and where we do store any personal data, we will only store it for the least amount of time needed for the indicated purpose.
In this regard, we collect and process the following personal data from your use of the Website:
diff --git a/docs/research/research-and-studies/maximum-bandwidth.md b/docs/research/research-and-studies/maximum-bandwidth.md
index 1d7d600..c01b273 100644
--- a/docs/research/research-and-studies/maximum-bandwidth.md
+++ b/docs/research/research-and-studies/maximum-bandwidth.md
@@ -58,7 +58,7 @@ The **trade-off is clear**:
So it's about where to draw this line.
Points to take into account:
-- **Relay contributes to bandwidth the most**: Relay is the protocol that mostly contributes to bandwidth usage, and it can't choose to allocate fewer bandwidth resources like other protocols (eg `store` can choose to provide less resources and it will work). In other words, the network sets the relay bandwidth requirements, and if the node can't meet them, it just wont work.
+- **Relay contributes to bandwidth the most**: Relay is the protocol that mostly contributes to bandwidth usage, and it can't choose to allocate fewer bandwidth resources like other protocols (eg `store` can choose to provide less resources and it will work). In other words, the network sets the relay bandwidth requirements, and if the node can't meet them, it just won't work.
- **Upload and download bandwidth are the same**: Due to how gossipsub works, and hence `relay`, the bandwidth consumption is symmetric, meaning that upload and download bandwidth is the same. This is because of `D` and the reciprocity of the connections, meaning that one node upload is another download.
- **Nodes not meeting requirements can use light clients**. Note that nodes not meeting the bandwidth requirements can still use waku, but they will have to use light protocols, which are a great alternative, especially on mobile, but with some drawbacks (trust assumptions, less reliability, etc)
- **Waku can't take all the bandwidth:** Waku is meant to be used in conjunction with other services, so it shouldn't consume all the existing bandwidth. If Waku consumes `x Mbps` and someone's bandwidth is `x Mbps`, the UX won't be good.
@@ -80,4 +80,4 @@ Coming up with a number:
**Conclusion:** Limit to `10 Mbps` each waku shard. How? Not trivial, see https://github.com/waku-org/research/issues/22#issuecomment-1727795042
-*Note:* This number is not set in stone and is subject to modifications, but it will most likely stay in the same order of magnitude if changed.
\ No newline at end of file
+*Note:* This number is not set in stone and is subject to modifications, but it will most likely stay in the same order of magnitude if changed.
From 7e81fdc6eca7fced8cd70de9e5eb43622da32ae5 Mon Sep 17 00:00:00 2001
From: LordGhostX <47832826+LordGhostX@users.noreply.github.com>
Date: Tue, 1 Oct 2024 07:53:34 +0100
Subject: [PATCH 11/24] chore(js-waku)!: update API for NetworkConfig (#193)
* intro
* update js-waku docs
* update shard instructions
Co-authored-by: Sasha <118575614+weboko@users.noreply.github.com>
* remove chat references in message structure
* add info on contentTopics parameter
* chore: update guide for v0.027
* chore: prioritize autosharding over static sharding
* chore: update cspell for autosharding
---------
Co-authored-by: Sasha <118575614+weboko@users.noreply.github.com>
Co-authored-by: Danish Arora
---
.cspell.json | 1 +
docs/guides/js-waku/light-send-receive.md | 93 +++++++++++++++++------
docs/guides/js-waku/use-waku-react.md | 12 +--
3 files changed, 78 insertions(+), 28 deletions(-)
diff --git a/.cspell.json b/.cspell.json
index 038c3ad..10e618d 100644
--- a/.cspell.json
+++ b/.cspell.json
@@ -17,6 +17,7 @@
"enrtree",
"Discv5",
"gossipsub",
+ "autosharding",
"lightpush",
"pubtopic1",
"proto",
diff --git a/docs/guides/js-waku/light-send-receive.md b/docs/guides/js-waku/light-send-receive.md
index c14585d..809a767 100644
--- a/docs/guides/js-waku/light-send-receive.md
+++ b/docs/guides/js-waku/light-send-receive.md
@@ -24,6 +24,34 @@ await node.start();
When the `defaultBootstrap` parameter is set to `true`, your node will be bootstrapped using the [default bootstrap method](/guides/js-waku/configure-discovery#default-bootstrap-method). Have a look at the [Bootstrap Nodes and Discover Peers](/guides/js-waku/configure-discovery) guide to learn more methods to bootstrap nodes.
:::
+A node needs to know how to route messages. By default, it will use The Waku Network configuration (`{ clusterId: 1, shards: [0,1,2,3,4,5,6,7] }`). For most applications, it's recommended to use autosharding:
+
+```js
+// Create node with auto sharding (recommended)
+const node = await createLightNode({
+ defaultBootstrap: true,
+ networkConfig: {
+ clusterId: 1,
+ contentTopics: ["/my-app/1/notifications/proto"],
+ },
+});
+```
+
+### Alternative network configuration
+
+If your project requires a specific network configuration, you can use static sharding:
+
+```js
+// Create node with static sharding
+const node = await createLightNode({
+ defaultBootstrap: true,
+ networkConfig: {
+ clusterId: 1,
+ shards: [0, 1, 2, 3],
+ },
+});
+```
+
## Connect to remote peers
Use the `waitForRemotePeer()` function to wait for the node to connect with peers on the Waku Network:
@@ -41,15 +69,12 @@ The `protocols` parameter allows you to specify the [protocols](/learn/concepts/
import { waitForRemotePeer, Protocols } from "@waku/sdk";
// Wait for peer connections with specific protocols
-await waitForRemotePeer(node, [
- Protocols.LightPush,
- Protocols.Filter,
-]);
+await waitForRemotePeer(node, [Protocols.LightPush, Protocols.Filter]);
```
## Choose a content topic
-[Choose a content topic](/learn/concepts/content-topics) for your application and create a message `encoder` and `decoder`:
+Choose a [content topic](/learn/concepts/content-topics) for your application and create a message `encoder` and `decoder`:
```js
import { createEncoder, createDecoder } from "@waku/sdk";
@@ -66,11 +91,25 @@ The `ephemeral` parameter allows you to specify whether messages should **NOT**
```js
const encoder = createEncoder({
- contentTopic: contentTopic, // message content topic
- ephemeral: true, // allows messages NOT be stored on the network
+ contentTopic: contentTopic, // message content topic
+ ephemeral: true, // allows messages NOT be stored on the network
});
```
+The `pubsubTopicShardInfo` parameter allows you to set a different network configuration for your `encoder` and `decoder`:
+
+```js
+// Create the network config
+const networkConfig = { clusterId: 3, shards: [1, 2] };
+
+// Create encoder and decoder with custom network config
+const encoder = createEncoder({
+ contentTopic: contentTopic,
+ pubsubTopicShardInfo: networkConfig,
+});
+const decoder = createDecoder(contentTopic, networkConfig);
+```
+
:::info
In this example, users send and receive messages on a shared content topic. However, real applications may have users broadcasting messages while others listen or only have 1:1 exchanges. Waku supports all these use cases.
:::
@@ -83,10 +122,10 @@ Create your application's message structure using [Protobuf's valid message](htt
import protobuf from "protobufjs";
// Create a message structure using Protobuf
-const ChatMessage = new protobuf.Type("ChatMessage")
- .add(new protobuf.Field("timestamp", 1, "uint64"))
- .add(new protobuf.Field("sender", 2, "string"))
- .add(new protobuf.Field("message", 3, "string"));
+const DataPacket = new protobuf.Type("DataPacket")
+ .add(new protobuf.Field("timestamp", 1, "uint64"))
+ .add(new protobuf.Field("sender", 2, "string"))
+ .add(new protobuf.Field("message", 3, "string"));
```
:::info
@@ -99,18 +138,18 @@ To send messages over the Waku Network using the `Light Push` protocol, create a
```js
// Create a new message object
-const protoMessage = ChatMessage.create({
- timestamp: Date.now(),
- sender: "Alice",
- message: "Hello, World!",
+const protoMessage = DataPacket.create({
+ timestamp: Date.now(),
+ sender: "Alice",
+ message: "Hello, World!",
});
// Serialise the message using Protobuf
-const serialisedMessage = ChatMessage.encode(protoMessage).finish();
+const serialisedMessage = DataPacket.encode(protoMessage).finish();
// Send the message using Light Push
await node.lightPush.send(encoder, {
- payload: serialisedMessage,
+ payload: serialisedMessage,
});
```
@@ -121,11 +160,11 @@ To receive messages using the `Filter` protocol, create a callback function for
```js
// Create the callback function
const callback = (wakuMessage) => {
- // Check if there is a payload on the message
- if (!wakuMessage.payload) return;
- // Render the messageObj as desired in your application
- const messageObj = ChatMessage.decode(wakuMessage.payload);
- console.log(messageObj);
+ // Check if there is a payload on the message
+ if (!wakuMessage.payload) return;
+ // Render the messageObj as desired in your application
+ const messageObj = DataPacket.decode(wakuMessage.payload);
+ console.log(messageObj);
};
// Create a Filter subscription
@@ -140,6 +179,16 @@ if (error) {
await subscription.subscribe([decoder], callback);
```
+The `pubsubTopicShardInfo` parameter lets you specify a custom network configuration for your `Filter` subscription:
+
+```js
+// Create the network config
+const networkConfig = { clusterId: 3, shards: [1, 2] };
+
+// Create Filter subscription with custom network config
+const subscription = await node.filter.createSubscription(networkConfig);
+```
+
You can use the `subscription.unsubscribe()` function to stop receiving messages from a content topic:
```js
diff --git a/docs/guides/js-waku/use-waku-react.md b/docs/guides/js-waku/use-waku-react.md
index 163e14e..974af9a 100644
--- a/docs/guides/js-waku/use-waku-react.md
+++ b/docs/guides/js-waku/use-waku-react.md
@@ -118,7 +118,7 @@ function App() {
const decoder = createDecoder(contentTopic);
// Create a message structure using Protobuf
- const ChatMessage = new protobuf.Type("ChatMessage")
+ const DataPacket = new protobuf.Type("DataPacket")
.add(new protobuf.Field("timestamp", 1, "uint64"))
.add(new protobuf.Field("message", 2, "string"));
@@ -223,13 +223,13 @@ function App() {
// Create a new message object
const timestamp = Date.now();
- const protoMessage = ChatMessage.create({
+ const protoMessage = DataPacket.create({
timestamp: timestamp,
message: inputMessage
});
// Serialise the message and push to the network
- const payload = ChatMessage.encode(protoMessage).finish();
+ const payload = DataPacket.encode(protoMessage).finish();
const { recipients, errors } = await push({ payload, timestamp });
// Check for errors
@@ -258,7 +258,7 @@ function App() {
useEffect(() => {
setMessages(filterMessages.map((wakuMessage) => {
if (!wakuMessage.payload) return;
- return ChatMessage.decode(wakuMessage.payload);
+ return DataPacket.decode(wakuMessage.payload);
}));
}, [filterMessages]);
}
@@ -283,7 +283,7 @@ function App() {
const allMessages = storeMessages.concat(filterMessages);
setMessages(allMessages.map((wakuMessage) => {
if (!wakuMessage.payload) return;
- return ChatMessage.decode(wakuMessage.payload);
+ return DataPacket.decode(wakuMessage.payload);
}));
}, [filterMessages, storeMessages]);
}
@@ -295,4 +295,4 @@ To explore the available Store query options, have a look at the [Retrieve Messa
:::tip
You have successfully integrated `@waku/sdk` into a React application using the `@waku/react` package. Have a look at the [web-chat](https://github.com/waku-org/js-waku-examples/tree/master/examples/web-chat) example for a working demo and the [Building a Tic-Tac-Toe Game with Waku](https://blog.waku.org/tictactoe-tutorial) tutorial to learn more.
-:::
\ No newline at end of file
+:::
From c9389f03c38d6d486ed194d240de7b404c815ede Mon Sep 17 00:00:00 2001
From: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com>
Date: Wed, 2 Oct 2024 13:48:57 +0200
Subject: [PATCH 12/24] fix: Addressing comments on the description of DOS
protection configuration (#222)
* Addressing comments on the description of DOS protection configuration
* Incorporate review suggestions, added full notion of argument examples
---
docs/guides/nwaku/config-options.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/docs/guides/nwaku/config-options.md b/docs/guides/nwaku/config-options.md
index dcad83c..0b3154b 100644
--- a/docs/guides/nwaku/config-options.md
+++ b/docs/guides/nwaku/config-options.md
@@ -157,11 +157,11 @@ Here are the available node configuration options, along with their default valu
| `websocket-secure-key-path` | | Secure websocket key path: '/path/to/key.txt' |
| `websocket-secure-cert-path` | | Secure websocket Certificate path: '/path/to/cert.txt' |
-## Non relay, request-response protocol DOS protection configuration
+## Non-relay, request-response protocol DOS protection configuration
| Name | Default Value | Description |
| ---------------------------- | ------------- | ------------------------------------------------------ |
-| `rate-limit` | | This is a repeatable option. Each one of them can describe spefic rate limit configuration for a particular protocol.
\:volume/period\
- if protocol is not given, settings will be taken as default for un-set protocols. Ex: `80/2s`
-Supported protocols are: `lightpush`\|`filter`\|`px`\|`store`\|`storev2`\|`storev3`
-volume must be an integer value, representing number of requests over the period of time allowed.
-period\ must be an integer with defined unit as one of `h`\|`m`\|`s`\|`ms`
- `storev2` and `storev3` takes precedence over `store` which can easy set both store protocols at once.
- In case of multiple set of the same protocol limit, last one will take place.
- if config is not set it means unlimited requests are allowed.
-filter has a bit different approach. It has a default setting applied if not overridden. Rate limit setting for filter will be applied per subscriber-peers, not globally - it must be considered when changing the setting.
Examples:
- `100/1s` - default for all protocols if not set otherwise.
-`lightpush:0/0s` - lightpush protocol will be not rate limited.
-`store:130/1500ms` - both store-v3 and store-v2 will apply 130 request per each 1500ms separately.
-`px:10/1h` PeerExchange will serve only 10 requests in every hour.
-`filter:8/5m` - will allow 8 subs/unsubs/ping requests for each subscribers within every 5 min. |
+| `rate-limit` | | This is a repeatable option. Each occurrence describes a rate limit configuration for a particular protocol.
Formatted as: `protocol:volume/period`
- if the protocol is not given, the setting is taken as the default for protocols without their own limit. Ex: `80/2s`
- Supported protocols are: `lightpush`\|`filter`\|`px`\|`store`\|`storev2`\|`storev3`
- `volume` must be an integer representing the number of requests allowed over the period.
- `period` must be an integer with a unit of `h`\|`m`\|`s`\|`ms`
- `storev2` and `storev3` take precedence over `store`, which is a shorthand for setting both store protocols at once.
- If the same protocol limit is set multiple times, the last one takes effect.
- If not set (the default), unlimited requests are allowed.
- `filter` works slightly differently: it has a default setting applied unless overridden, and its rate limit applies per subscriber peer, not globally - keep this in mind when changing the setting.
Examples:
`--rate-limit="100/1s"` - default for all protocols if not set otherwise.
`--rate-limit="lightpush:0/0s"` - the lightpush protocol will not be rate-limited.
`--rate-limit="store:130/1500ms"` - both store-v3 and store-v2 will each allow 130 requests per 1500 ms.
`--rate-limit="px:10/1h"` - PeerExchange will serve only 10 requests every hour.
`--rate-limit="filter:8/5m"` - allows 8 subscribe/unsubscribe/ping requests per subscriber within every 5 minutes. |
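As a sketch of combining these (the flag values come from the examples above; `./build/wakunode2` assumes a local build of nwaku):

```shell
# Apply a general default plus per-protocol overrides;
# the option is repeatable and the last setting for a protocol wins
./build/wakunode2 \
  --rate-limit="100/1s" \
  --rate-limit="lightpush:0/0s" \
  --rate-limit="store:130/1500ms" \
  --rate-limit="px:10/1h" \
  --rate-limit="filter:8/5m"
```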
:::tip
To configure your node using the provided configuration options, have a look at the [Node Configuration Methods](/guides/nwaku/config-methods) guide.
From 1b2affe15ea4371ed3a03969cef375bc064d3455 Mon Sep 17 00:00:00 2001
From: gabrielmer <101006718+gabrielmer@users.noreply.github.com>
Date: Thu, 3 Oct 2024 10:34:21 +0300
Subject: [PATCH 13/24] Adding num-shards-in-network config option to
documentation (#224)
---
docs/guides/nwaku/config-options.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/docs/guides/nwaku/config-options.md b/docs/guides/nwaku/config-options.md
index 0b3154b..9b599b9 100644
--- a/docs/guides/nwaku/config-options.md
+++ b/docs/guides/nwaku/config-options.md
@@ -68,6 +68,7 @@ Here are the available node configuration options, along with their default valu
| `keep-alive` | `false` | Enable keep-alive for idle connections: true\|false |
| `pubsub-topic` | | Default pubsub topic to subscribe to. Argument may be repeated. **Deprecated!** Please use `shard` and/or `content-topic` instead |
| `shard` | | Shard to subscribe to. Argument may be repeated |
+| `num-shards-in-network` | | Number of shards in the network. Used to map content topics to shards when using autosharding |
| `content-topic` | | Default content topic to subscribe to. Argument may be repeated |
| `reliability` | `false` | Enable experimental reliability protocol true\|false |
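As an illustrative sketch (flag names are taken from the table above; the binary path and content topic value are assumptions), autosharding with an explicit shard count could be configured as:

```shell
# Tell the node how many shards the network uses so autosharding
# can map content topics onto shards deterministically
./build/wakunode2 \
  --num-shards-in-network=8 \
  --content-topic="/my-app/1/chat/proto"
```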
From 2b0258ade2e763ae8107bb10ab2af1f2919108fc Mon Sep 17 00:00:00 2001
From: Sasha <118575614+weboko@users.noreply.github.com>
Date: Fri, 11 Oct 2024 12:05:07 +0200
Subject: [PATCH 14/24] feat: introduce new waitForPeers API (#226)
---
docs/guides/js-waku/configure-discovery.md | 4 ++--
docs/guides/js-waku/light-send-receive.md | 10 ++++------
docs/guides/js-waku/store-retrieve-messages.md | 6 +++---
3 files changed, 9 insertions(+), 11 deletions(-)
diff --git a/docs/guides/js-waku/configure-discovery.md b/docs/guides/js-waku/configure-discovery.md
index 841a502..66af809 100644
--- a/docs/guides/js-waku/configure-discovery.md
+++ b/docs/guides/js-waku/configure-discovery.md
@@ -191,10 +191,10 @@ const node = await createLightNode({
You can retrieve the array of peers connected to a node using the `libp2p.getPeers()` function within the `@waku/sdk` package:
```js
-import { createLightNode, waitForRemotePeer } from "@waku/sdk";
+import { createLightNode } from "@waku/sdk";
const node = await createLightNode({ defaultBootstrap: true });
-await waitForRemotePeer(node);
+await node.waitForPeers();
// Retrieve array of peers connected to the node
console.log(node.libp2p.getPeers());
diff --git a/docs/guides/js-waku/light-send-receive.md b/docs/guides/js-waku/light-send-receive.md
index 809a767..fd62604 100644
--- a/docs/guides/js-waku/light-send-receive.md
+++ b/docs/guides/js-waku/light-send-receive.md
@@ -54,22 +54,20 @@ const node = await createLightNode({
## Connect to remote peers
-Use the `waitForRemotePeer()` function to wait for the node to connect with peers on the Waku Network:
+Use the `node.waitForPeers()` function to wait for the node to connect with peers on the Waku Network:
```js
-import { waitForRemotePeer } from "@waku/sdk";
-
// Wait for a successful peer connection
-await waitForRemotePeer(node);
+await node.waitForPeers();
```
The `protocols` parameter allows you to specify the [protocols](/learn/concepts/protocols) that the remote peers should have enabled:
```js
-import { waitForRemotePeer, Protocols } from "@waku/sdk";
+import { Protocols } from "@waku/sdk";
// Wait for peer connections with specific protocols
-await waitForRemotePeer(node, [Protocols.LightPush, Protocols.Filter]);
+await node.waitForPeers([Protocols.LightPush, Protocols.Filter]);
```
## Choose a content topic
diff --git a/docs/guides/js-waku/store-retrieve-messages.md b/docs/guides/js-waku/store-retrieve-messages.md
index 293a7f1..c09d8c9 100644
--- a/docs/guides/js-waku/store-retrieve-messages.md
+++ b/docs/guides/js-waku/store-retrieve-messages.md
@@ -19,13 +19,13 @@ await node.start();
## Connect to store peers
-Use the `waitForRemotePeer()` function to wait for the node to connect with Store peers:
+Use the `node.waitForPeers()` method to wait for the node to connect with Store peers:
```js
-import { waitForRemotePeer, Protocols } from "@waku/sdk";
+import { Protocols } from "@waku/sdk";
// Wait for a successful peer connection
-await waitForRemotePeer(node, [Protocols.Store]);
+await node.waitForPeers([Protocols.Store]);
```
## Choose a content topic
From a87a3be8ca3efd08fe955ceb6da5bf88b8846d49 Mon Sep 17 00:00:00 2001
From: fryorcraken <110212804+fryorcraken@users.noreply.github.com>
Date: Mon, 21 Oct 2024 14:08:47 +1100
Subject: [PATCH 15/24] Update research docs (#227)
---
docs/research/benchmarks/postgres-adoption.md | 4 +-
.../research-and-studies/capped-bandwidth.md | 73 +++++++------------
.../research-and-studies/maximum-bandwidth.md | 4 +-
3 files changed, 32 insertions(+), 49 deletions(-)
diff --git a/docs/research/benchmarks/postgres-adoption.md b/docs/research/benchmarks/postgres-adoption.md
index 5c6147d..98e3396 100644
--- a/docs/research/benchmarks/postgres-adoption.md
+++ b/docs/research/benchmarks/postgres-adoption.md
@@ -109,7 +109,7 @@ Notice that the two `nwaku` nodes run the very same version, which is compiled l
#### Comparing archive SQLite & Postgres performance in [nwaku-b6dd6899](https://github.com/waku-org/nwaku/tree/b6dd6899030ee628813dfd60ad1ad024345e7b41)
-The next results were obtained by running the docker-compose-manual-binaries.yml from [test-waku-query-c078075](https://github.com/waku-org/test-waku-query/tree/c07807597faa781ae6c8c32eefdf48ecac03a7ba) in the sandbox machine (metal-01.he-eu-hel1.wakudev.misc.statusim.net.)
+The next results were obtained by running the docker-compose-manual-binaries.yml from [test-waku-query-c078075](https://github.com/waku-org/test-waku-query/tree/c07807597faa781ae6c8c32eefdf48ecac03a7ba) in the sandbox machine (metal-01.he-eu-hel1.wakudev.misc.status.im.)
**Scenario 1**
@@ -155,7 +155,7 @@ In this case, the performance is similar regarding the timings. The store rate i
This nwaku commit is after a few **Postgres** optimizations were applied.
-The next results were obtained by running the docker-compose-manual-binaries.yml from [test-waku-query-c078075](https://github.com/waku-org/test-waku-query/tree/c07807597faa781ae6c8c32eefdf48ecac03a7ba) in the sandbox machine (metal-01.he-eu-hel1.wakudev.misc.statusim.net.)
+The next results were obtained by running the docker-compose-manual-binaries.yml from [test-waku-query-c078075](https://github.com/waku-org/test-waku-query/tree/c07807597faa781ae6c8c32eefdf48ecac03a7ba) in the sandbox machine (metal-01.he-eu-hel1.wakudev.misc.status.im.)
**Scenario 1**
diff --git a/docs/research/research-and-studies/capped-bandwidth.md b/docs/research/research-and-studies/capped-bandwidth.md
index 5b755a1..007099c 100644
--- a/docs/research/research-and-studies/capped-bandwidth.md
+++ b/docs/research/research-and-studies/capped-bandwidth.md
@@ -2,63 +2,46 @@
title: Capped Bandwidth in Waku
---
-This issue explains i) why The Waku Network requires a capped bandwidth per shard and ii) how to solve it by rate limiting with RLN by daily requests (instead of every x seconds), which would require RLN v2, or some modifications in the current circuits to work. It also explains why the current rate limiting RLN approach (limit 1 message every x seconds) is not practical to solve this problem.
+This post explains i) why The Waku Network requires a capped bandwidth per shard and ii) how to achieve it by rate limiting with RLN v2.
## Problem
-First of all, lets begin with the terminology. We have talked in the past about "predictable" bandwidth, but a better name would be "capped" bandwidth. This is because it is totally fine that the waku traffic is not predictable, as long as its capped. And it has to be capped because otherwise no one will be able to run a node.
+First of all, let's begin with the terminology. We have talked in the past about "predictable" bandwidth, but a better name would be "capped" bandwidth. This is because it is totally fine that the waku traffic is not predictable, as long as it is capped. And it has to be capped because otherwise, no one will be able to run a node.
-Since we aim that everyone is able to run a full waku node (at least subscribed to a single shard) its of paramount importance that the bandwidth requirements (up/down) are i) reasonable to run with a residential internet connection in every country and ii) limited to an upper value, aka capped. If the required bandwidth to stay up to date with a topic is higher than what the node has available, then it will start losing messages and won't be able to stay up to date with the topic messages. And not to mention the problems this will cause to other services and applications being used by the user.
+Since we aim for everyone to be able to run a full waku node (at least subscribed to a single shard), it is of paramount importance that the bandwidth requirements (up/down) are i) reasonable to run with a residential internet connection in every country and ii) limited to an upper value, aka capped. If the bandwidth required to stay up to date with a topic is higher than what the node has available, it will start losing messages and won't be able to keep up with the topic's messages. Not to mention the problems this will cause to other services and applications being used by the user.
-The main problem is that one can't just chose the bandwidth it allocates to `relay`. One could set the maximum bandwidth willing to allocate to `store` but this is not how `relay` works. The required bandwidth is not set by the node, but by the network. If a pubsub topic `a` has a traffic of 50 Mbps (which is the sum of all messages being sent multiplied by its size, times the D_out degree), then if a node wants to stay up to date in that topic, and relay traffic in it, then it will require 50 Mbps. There is no thing such as "partially contribute" to the topic (with eg 25Mbps) because then you will be losing messages, becoming an unreliable peer. The network sets the pace.
+The main problem is that one can't just choose the bandwidth it allocates to `relay`. One could set the maximum bandwidth willing to allocate to `store`, but this is not how `relay` works. The required bandwidth is not set by the node, but by the network. If a pubsub topic `a` has a traffic of 50 Mbps (which is the sum of all messages being sent multiplied by their size, times the D_out degree), then a node that wants to stay up to date on that topic, and relay traffic in it, will require 50 Mbps. There is no such thing as "partially contributing" to the topic (with eg 25 Mbps) because then you will be losing messages, becoming an unreliable peer that may eventually be disconnected. The network sets the pace.
-So waku needs an upper boundary on the in/out bandwidth (mbps) it consumes. Just like apps have requirements on cpu and memory, we should set a requirement on bandwidth, and then guarantee that if you have that bandwidth, you will be able to run a node without any problem. And this is the tricky part.
+So waku needs an upper boundary on the in/out bandwidth (mbps) it consumes. Just like apps have requirements on cpu and memory, we should set a requirement on bandwidth, and then guarantee that if you have that bandwidth, you will be able to run a node without any problem. And this is the tricky part. This metric is Waku's constraint, similar to the gas-per-block limit in blockchains.
-## Current approach
+## Previous Work
-With the recent productisation effort of RLN, we have part of the problem solved, but not entirely. RLN offers an improvement, since now have a pseudo-identity (RLN membership) that can be used to rate limit users, enforcing a limit on how often it can send a message (eg 1 message every 10 seconds). We assume of course, that getting said RLN membership requires to pay something, or put something at stake, so that it can't be sibyl attacked.
+Quick summary of the evolution to solve this problem:
+* Waku started with no rate-limiting mechanism. The network was subject to DoS attacks.
+* RLN v1 was introduced, which allowed rate-limiting in a privacy-preserving and anonymous way. The rate limit can be configured to 1 message every `y` seconds. However, this didn't offer much granularity: a low `y` would allow too many messages and a high `y` would make the protocol unusable (impossible to send two messages in a row).
+* RLN v2 was introduced, which allows rate-limiting each user to `x` messages every `y` seconds. This offers the granularity we need. It is the current solution deployed in The Waku Network.
-Rate limiting with RLN so that each entity just sends 1 message every x seconds indeed solves the spam problem but it doesn't per se cap the traffic. In order to cap the traffic, we would first need to cap the amount of memberships we allow. Lets see an example:
-- We limit to 10.000 RLN memberships
-- Each ones is rate limited to send 1 message/10 seconds
-- Message size of 50 kBytes
+## Current Solution (RLN v2)
-Having this, the worst case bandwidth that we can theoretically have, would be if all of the memberships publish messages at the same time, with the maximum size, continuously. That is `10.000 messages/sec * 50 kBytes = 500 MBytes/second`. This would be a burst every 10 seconds, but enough to leave out the majority of the nodes. Of course this assumption is not realistic as most likely not everyone will continuously send messages at the same time and the size will vary. But in theory this could happen.
+The current solution to this problem is the usage of RLN v2, which allows rate-limiting each user to `x` messages every `y` seconds. On top of this, the introduction of [WAKU2-RLN-CONTRACT](https://github.com/waku-org/specs/blob/master/standards/core/rln-contract.md) enforces a maximum number of messages that can be sent to the network per `epoch`. This is achieved by limiting the number of memberships that can be registered. The current values are:
+* `R_{max}`: 160000 msgs/epoch
+* `r_{max}`: 600 msgs/epoch
+* `r_{min}`: 20 msgs/epoch
-A naive (and not practical) way of fixing this, would be to design the network for this worst case. So if we want to cap the maximum bandwidth to 5 MBytes/s then we would have different options on the maximum i) amount of RLN memberships and ii) maximum message size:
-- `1.000` RLN memberships, `5` kBytes message size: `1000 * 5 = 5 MBytes/s`
-- `10.000` RLN memberships, `500` Bytes message size: `10000 * 0.5 = 5 MBytes/s`
+In other words, the contract limits the number of memberships that can be registered: from `266` (if all choose the maximum rate limit) to `8000` (if all choose the minimum), depending on which rate limit users choose.
-In both cases we cap the traffic, however, if we design The Waku Network like this, it will be massively underutilized. As an alternative, the approach we should follow is to rely on statistics, and assume that i) not everyone will be using the network at the same time and ii) message size will vary. So while its impossible to guarantee any capped bandwidth, we should be able to guarantee that with 95 or 99% confidence the bandwidth will stay around a given value, with a maximum variance.
+On the other hand [64/WAKU2-NETWORK](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/64/network.md) states that:
+* `rlnEpochSizeSec`: 600. Meaning the epoch size is 600 seconds.
+* `maxMessageSize`: 150KB. Meaning the maximum message size that is allowed. Note: recommended average of 4KB.
-The current RLN approach of rate limiting 1 message every x seconds is not very practical. The current RLN limitations are enforced on 1 message every x time (called `epoch`). So we currently can allow 1 msg per second or 1 msg per 10 seconds by just modifying the `epoch` size. But this has some drawbacks. Unfortunately, neither of the options are viable for waku:
-1. A small `epoch` size (eg `1 seconds`) would allow a membership to publish `24*3600/1=86400` messages a day, which would be too much. In exchange, this allows a user to publish messages right after the other, since it just have to wait 1 second between messages. Problem is that having an rln membership being able to publish this amount of messages, is a bit of a liability for waku, and hinders scalability.
-2. A high `epoch` size (eg `240 seconds`) would allow a membership to publish `24*3600/240=360` messages a day, which is a more reasonable limit, but this won't allow a user to publish two messages one right after the other, meaning that if you publish a message, you have to way 240 seconds to publish the next one. Not practical, a no go.
+Putting this all together and assuming:
+* Messages are sent uniformly distributed over time.
+* All users fully consume their rate limit.
-But what if we widen the window size, and allow multiple messages within that window?
+We can expect the following message rate and bandwidth for the whole network:
+* A traffic of `266 msg/second` on average (`160000/600`)
+* A traffic of `6 MBps` on average (266 * 4KB * 6), where `4KB` is the average message size and `6` is the average gossipsub D-out degree.
-## Solution
-
-In order to fix this, we need bigger windows sizes, to smooth out particular bursts. Its fine that a user publishes 20 messages in one second, as long as in a wider window it doesn't publish more than, lets say 500. This wider window could be a day. So we could say that a membership can publish `250 msg/day`. With this we solve i) and ii) from the previous section.
-
-Some quick napkin math on how this can scale:
-- 10.000 RLN memberships
-- Each RLN membership allow to publish 250 msg/day
-- Message size of 5 kBytes
-
-Assuming a completely random distribution:
-- 10.000 * 250 = 2 500 000 messages will be published a day (at max)
-- A day has 86 400 seconds. So with a random distribution we can say that 30 msg/sec (at max)
-- 30 msg/sec * 5 kBytes/msg = 150 kBytes/sec (at max)
-- Assuming D_out=8: 150 kBytes/sec * 8 = 1.2 MBytes/sec (9.6 Mbits/sec)
-
-So while its still not possible to guarantee 100% the maximum bandwidth, if we rate limit per day we can have better guarantees. Looking at these numbers, considering a single shard, it would be feasible to serve 10.000 users considering a usage of 250 msg/day.
-
-TODO: Analysis on 95%/99% interval confidence on bandwidth given a random distribution.
-
-## TLDR
-
-- Waku should guarantee a capped bandwidth so that everyone can run a node.
-- The guarantee is a "statistical guarantee", since there is no way of enforcing a strict limit.
-- Current RLN approach is to rate limit 1 message every x seconds. A better approach would be x messages every day, which helps achieving such bandwidth limit.
-- To follow up: Variable RLN memberships. Eg. allow to chose tier 1 (100msg/day) tier 2 (200msg/day) etc.
\ No newline at end of file
+And assuming a uniform distribution of traffic among 8 shards:
+* `33 msg/second` per shard.
+* `0.75 MBps` per shard.
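The napkin math above can be sanity-checked with a short script (all values come from the figures in this section; note the document rounds the `6.4 MBps` result to `6 MBps` and derives `0.75 MBps` per shard from that rounded figure):

```javascript
// Sanity check of the network-wide rate and bandwidth figures above.
const R_max = 160000;       // msgs/epoch allowed network-wide
const r_max = 600;          // max msgs/epoch per membership
const r_min = 20;           // min msgs/epoch per membership
const epochSizeSec = 600;   // rlnEpochSizeSec
const avgMsgSizeKB = 4;     // recommended average message size
const dOut = 6;             // average gossipsub D-out degree
const numShards = 8;

// Membership bounds: 266 (all at r_max) to 8000 (all at r_min)
const minMemberships = Math.floor(R_max / r_max);
const maxMemberships = R_max / r_min;

// Network-wide message rate and relayed bandwidth
const msgsPerSecond = R_max / epochSizeSec;                         // ~266.7 msg/s
const bandwidthMBps = (msgsPerSecond * avgMsgSizeKB * dOut) / 1000; // 6.4 MBps

// Per-shard figures under a uniform traffic distribution
const perShardMsgs = msgsPerSecond / numShards;  // ~33 msg/s
const perShardMBps = bandwidthMBps / numShards;  // ~0.8 MBps

console.log({ minMemberships, maxMemberships, msgsPerSecond, bandwidthMBps, perShardMsgs, perShardMBps });
```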
diff --git a/docs/research/research-and-studies/maximum-bandwidth.md b/docs/research/research-and-studies/maximum-bandwidth.md
index c01b273..1d7d600 100644
--- a/docs/research/research-and-studies/maximum-bandwidth.md
+++ b/docs/research/research-and-studies/maximum-bandwidth.md
@@ -58,7 +58,7 @@ The **trade-off is clear**:
So it's about where to draw this line.
Points to take into account:
-- **Relay contributes to bandwidth the most**: Relay is the protocol that mostly contributes to bandwidth usage, and it can't choose to allocate fewer bandwidth resources like other protocols (eg `store` can choose to provide less resources and it will work). In other words, the network sets the relay bandwidth requirements, and if the node can't meet them, it just won't work.
+- **Relay contributes to bandwidth the most**: Relay is the protocol that contributes most to bandwidth usage, and it can't choose to allocate fewer bandwidth resources like other protocols (eg `store` can choose to provide fewer resources and it will still work). In other words, the network sets the relay bandwidth requirements, and if the node can't meet them, it just won't work.
- **Upload and download bandwidth are the same**: Due to how gossipsub works, and hence `relay`, the bandwidth consumption is symmetric, meaning that upload and download bandwidth are the same. This is because of `D` and the reciprocity of the connections: one node's upload is another's download.
- **Nodes not meeting requirements can use light clients**. Note that nodes not meeting the bandwidth requirements can still use waku, but they will have to use light protocols, which are a great alternative, especially on mobile, but with some drawbacks (trust assumptions, less reliability, etc)
- **Waku can't take all the bandwidth:** Waku is meant to be used in conjunction with other services, so it shouldn't consume all the existing bandwidth. If Waku consumes `x Mbps` and someone's bandwidth is `x Mbps`, the UX won't be good.
@@ -80,4 +80,4 @@ Coming up with a number:
**Conclusion:** Limit to `10 Mbps` each waku shard. How? Not trivial, see https://github.com/waku-org/research/issues/22#issuecomment-1727795042
-*Note:* This number is not set in stone and is subject to modifications, but it will most likely stay in the same order of magnitude if changed.
+*Note:* This number is not set in stone and is subject to modifications, but it will most likely stay in the same order of magnitude if changed.
\ No newline at end of file
From 579f454efdb7d532d42bf1996b1d9a7db258277f Mon Sep 17 00:00:00 2001
From: Prem Chaitanya Prathi
Date: Tue, 29 Oct 2024 09:09:25 +0530
Subject: [PATCH 16/24] fix: broken links to blog.waku.org (#228)
---
docs/guides/getting-started.md | 4 ++--
docs/guides/js-waku/use-waku-react.md | 2 +-
docs/learn/waku-network.md | 2 +-
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/docs/guides/getting-started.md b/docs/guides/getting-started.md
index 797039e..6fc00fa 100644
--- a/docs/guides/getting-started.md
+++ b/docs/guides/getting-started.md
@@ -48,8 +48,8 @@ Looking for what to build with Waku? Discover a collection of sample ideas and u
## Case studies
## Getting started
diff --git a/docs/guides/js-waku/use-waku-react.md b/docs/guides/js-waku/use-waku-react.md
index 974af9a..46340aa 100644
--- a/docs/guides/js-waku/use-waku-react.md
+++ b/docs/guides/js-waku/use-waku-react.md
@@ -294,5 +294,5 @@ To explore the available Store query options, have a look at the [Retrieve Messa
:::
:::tip
-You have successfully integrated `@waku/sdk` into a React application using the `@waku/react` package. Have a look at the [web-chat](https://github.com/waku-org/js-waku-examples/tree/master/examples/web-chat) example for a working demo and the [Building a Tic-Tac-Toe Game with Waku](https://blog.waku.org/tictactoe-tutorial) tutorial to learn more.
+You have successfully integrated `@waku/sdk` into a React application using the `@waku/react` package. Have a look at the [web-chat](https://github.com/waku-org/js-waku-examples/tree/master/examples/web-chat) example for a working demo and the [Building a Tic-Tac-Toe Game with Waku](https://blog.waku.org/2024-01-22-tictactoe-tutorial/) tutorial to learn more.
:::
diff --git a/docs/learn/waku-network.md b/docs/learn/waku-network.md
index 478ebeb..a09d198 100644
--- a/docs/learn/waku-network.md
+++ b/docs/learn/waku-network.md
@@ -11,7 +11,7 @@ The Waku Network is a shared p2p messaging network that is open-access, useful f
4. Services for resource-restricted nodes, including historical message storage and retrieval, filtering, etc.
:::tip
-If you want to learn more about the Waku Network, [The Waku Network: Technical Overview](https://blog.waku.org/2024-waku-network-tech-overview) article provides an in-depth look under the hood.
+If you want to learn more about the Waku Network, [The Waku Network: Technical Overview](https://blog.waku.org/2024-03-26-waku-network-tech-overview/) article provides an in-depth look under the hood.
:::
## Why join the Waku network?
From 0313cf287a655c92308a8a966aedbdff1119f4ec Mon Sep 17 00:00:00 2001
From: gabrielmer <101006718+gabrielmer@users.noreply.github.com>
Date: Thu, 31 Oct 2024 15:43:04 +0200
Subject: [PATCH 17/24] updating protected topics in favor of protected shards
(#229)
---
docs/guides/nwaku/config-options.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/guides/nwaku/config-options.md b/docs/guides/nwaku/config-options.md
index 9b599b9..65b0b31 100644
--- a/docs/guides/nwaku/config-options.md
+++ b/docs/guides/nwaku/config-options.md
@@ -10,7 +10,7 @@ Here are the available node configuration options, along with their default valu
| Name | Default Value | Description |
| ----------------- | --------------------------- | --------------------------------------------------------------------------------------------------- |
| `config-file` | | Loads configuration from a TOML file (cmd-line parameters take precedence) |
-| `protected-topic` | `newSeq[ProtectedTopic](0)` | Topics and its public key to be used for message validation, topic:pubkey. Argument may be repeated |
+| `protected-shard` | `newSeq[ProtectedShard](0)` | Shards and their public keys to be used for message validation, `shard:pubkey`. Argument may be repeated |
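+
+For illustration, a hypothetical invocation (placeholder public keys) protecting shards `0` and `1` with the repeatable `shard:pubkey` format might look like:
+
+```shell
+# Protect shards 0 and 1 for message validation (placeholder public keys)
+./build/wakunode2 \
+  --protected-shard=0:0x045eaa... \
+  --protected-shard=1:0x04d0f2...
+```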
## Log config
From 3f0010159d79ca9b163b9c247de1e639aef269a7 Mon Sep 17 00:00:00 2001
From: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com>
Date: Thu, 30 Jan 2025 14:35:27 +0100
Subject: [PATCH 18/24] Add which utility as pre-req for fedora build guidance
(#230)
Add which utility as pre-req for fedora build guidance
---
docs/guides/nwaku/build-source.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/guides/nwaku/build-source.md b/docs/guides/nwaku/build-source.md
index 9dff155..282df70 100644
--- a/docs/guides/nwaku/build-source.md
+++ b/docs/guides/nwaku/build-source.md
@@ -32,7 +32,7 @@ source "$HOME/.cargo/env"
```shell
-sudo dnf install @development-tools git libpq-devel
+sudo dnf install @development-tools git libpq-devel which
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
From a21ed2e02c4701df676bba65a693a9a7ef7a3dd8 Mon Sep 17 00:00:00 2001
From: Danish Arora <35004822+danisharora099@users.noreply.github.com>
Date: Fri, 31 Jan 2025 17:41:09 +0530
Subject: [PATCH 19/24] chore: add docs to use a specific store node (#231)
---
docs/guides/js-waku/store-retrieve-messages.md | 13 +++++++++++++
docs/guides/js-waku/use-waku-react.md | 13 +++++++++++++
2 files changed, 26 insertions(+)
diff --git a/docs/guides/js-waku/store-retrieve-messages.md b/docs/guides/js-waku/store-retrieve-messages.md
index c09d8c9..61d664c 100644
--- a/docs/guides/js-waku/store-retrieve-messages.md
+++ b/docs/guides/js-waku/store-retrieve-messages.md
@@ -28,6 +28,19 @@ import { Protocols } from "@waku/sdk";
await node.waitForPeers([Protocols.Store]);
```
+You can also specify a dedicated Store peer to use for queries when creating the node. This is particularly useful when running your own Store node or when you want to use a specific Store node in the network:
+
+```js
+const node = await createLightNode({
+ defaultBootstrap: true,
+ store: {
+ peer: "/ip4/1.2.3.4/tcp/1234/p2p/16Uiu2HAm..." // multiaddr or PeerId of your Store node
+ }
+});
+```
+
+If the specified Store peer is not available, the node will fall back to using random Store peers in the network.
+
## Choose a content topic
[Choose a content topic](/learn/concepts/content-topics) for filtering the messages to retrieve and create a message `decoder`:
diff --git a/docs/guides/js-waku/use-waku-react.md b/docs/guides/js-waku/use-waku-react.md
index 46340aa..b612f5a 100644
--- a/docs/guides/js-waku/use-waku-react.md
+++ b/docs/guides/js-waku/use-waku-react.md
@@ -289,6 +289,19 @@ function App() {
}
```
+You can also configure a specific Store peer when creating the node, which is useful when running your own Store node or using a specific node in the network:
+
+```js
+const node = await createLightNode({
+ defaultBootstrap: true,
+ store: {
+ peer: "/ip4/1.2.3.4/tcp/1234/p2p/16Uiu2HAm..." // multiaddr or PeerId of your Store node
+ }
+});
+```
+
+If the specified Store peer is not available, the node will fall back to using random Store peers in the network.
+
:::info
To explore the available Store query options, have a look at the [Retrieve Messages Using Store Protocol](/guides/js-waku/store-retrieve-messages#store-query-options) guide.
:::
From 1593add1cbd43e3f2fe21bba9724db7e5bd9fb32 Mon Sep 17 00:00:00 2001
From: Tanya S <120410716+stubbsta@users.noreply.github.com>
Date: Tue, 18 Mar 2025 19:00:20 +0200
Subject: [PATCH 20/24] fetch new content - test-report-page (#235)
---
.../benchmarks/test-results-summary.md | 90 +++++++++++++++++++
1 file changed, 90 insertions(+)
create mode 100644 docs/research/benchmarks/test-results-summary.md
diff --git a/docs/research/benchmarks/test-results-summary.md b/docs/research/benchmarks/test-results-summary.md
new file mode 100644
index 0000000..b5786bf
--- /dev/null
+++ b/docs/research/benchmarks/test-results-summary.md
@@ -0,0 +1,90 @@
+---
+title: Performance Benchmarks and Test Reports
+---
+
+
+## Introduction
+This page summarises key performance metrics for nwaku and provides links to detailed test reports.
+
+> ## TL;DR
+>
+> - Average Waku bandwidth usage: ~**10 KB/s** (excluding discv5 discovery) for a 1KB message size and a message injection rate of 1 msg/s.
+>   Confirmed for topologies of up to 2000 Relay nodes.
+> - Average time for a message to propagate to 100% of nodes: **0.4s** for topologies of up to 2000 Relay nodes.
+> - Average per-node bandwidth usage of the discv5 protocol: **8 KB/s** for incoming traffic and **7.4 KB/s** for outgoing traffic,
+>   in a network with 100 continuously online nodes.
+> - Future improvements: a messaging API is currently in development to streamline interactions with the Waku protocol suite.
+>   Once completed, it will enable benchmarking at the messaging API level, allowing applications to more easily compare their
+>   own performance results.
+
+
+## Insights
+
+### Relay Bandwidth Usage: nwaku v0.34.0
+The table below shows the average per-node `libp2p` bandwidth usage in a 1000-node Relay network with 1KB messages at varying injection rates.
+
+
+| Message Injection Rate | Average libp2p incoming bandwidth (KB/s) | Average libp2p outgoing bandwidth (KB/s) |
+|------------------------|------------------------------------------|------------------------------------------|
+| 1 msg/s | ~10.1 | ~10.3 |
+| 1 msg/10s | ~1.8 | ~1.9 |
+
+### Message Propagation Latency: nwaku v0.34.0-rc1
+The message propagation latency is measured as the total time for a message to reach all nodes.
+We compare the latency in different network configurations for the following simulation parameters:
+- Total messages published: 600
+- Message size: 1KB
+- Message injection rate: 1 msg/s
+
+The different network configurations tested are:
+- Relay Config: 1000 nodes with relay enabled
+- Mixed Config: 210 nodes, consisting of bootstrap nodes, filter clients and servers, lightpush clients and servers, and store nodes
+- Non-persistent Relay Config: 500 persistent relay nodes, 10 store nodes and 100 non-persistent relay nodes
+
+Click on a specific config to see the detailed test report.
+
+| Config | Average Message Propagation Latency (s) | Max Message Propagation Latency (s)|
+|------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|------------------------------------|
+| [Relay](https://www.notion.so/Waku-regression-testing-v0-34-1618f96fb65c803bb7bad6ecd6bafff9) (1000 nodes) | 0.05 | 1.6 |
+| [Mixed](https://www.notion.so/Mixed-environment-analysis-1688f96fb65c809eb235c59b97d6e15b) (210 nodes) | 0.0125 | 0.007 |
+| [Non-persistent Relay](https://www.notion.so/High-Churn-Relay-Store-Reliability-16c8f96fb65c8008bacaf5e86881160c) (510 nodes)| 0.0125 | 0.25 |
+
+### Discv5 Bandwidth Usage: nwaku v0.34.0
+The table below shows the average bandwidth usage of discv5 for a network of 100 nodes, with no message injection and with an injection rate of 1 msg/s.
+The measurements are based on a stable network where all nodes have already connected to peers to form a healthy mesh.
+
+|Message size |Average discv5 incoming bandwidth (KB/s)|Average discv5 outgoing bandwidth (KB/s)|
+|-------------------- |----------------------------------------|----------------------------------------|
+| no message injection| 7.88 | 6.70 |
+| 1KB | 8.04 | 7.40 |
+| 10KB | 8.03 | 7.45 |
+
+## Testing
+### DST
+The VAC DST team performs regression testing on all new **nwaku** releases, comparing performance with previous versions.
+They simulate large Waku networks with a variety of network and protocol configurations that are representative of real-world usage.
+
+**Test Reports**: [DST Reports](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f)
+
+
+### QA
+The VAC QA team performs interoperability tests for **nwaku** and **go-waku** using the latest main branch builds.
+These tests run daily and verify protocol functionality by targeting specific features of each protocol.
+
+**Test Reports**: [QA Reports](https://discord.com/channels/1110799176264056863/1196933819614363678)
+
+### nwaku
+The **nwaku** team follows a structured release procedure for all release candidates.
+This involves deploying RCs to the `status.staging` fleet for validation and performing sanity checks.
+
+**Release Process**: [nwaku Release Procedure](https://github.com/waku-org/nwaku/blob/master/.github/ISSUE_TEMPLATE/prepare_release.md)
+
+
+### Research
+The Waku Research team conducts a variety of benchmarking, performance testing, proof-of-concept validation, and debugging efforts.
+They also maintain a Waku simulator designed for small-scale, single-purpose, on-demand testing.
+
+
+**Test Reports**: [Waku Research Reports](https://www.notion.so/Miscellaneous-2c02516248db4a28ba8cb2797a40d1bb)
+
+**Waku Simulator**: [Waku Simulator Book](https://waku-org.github.io/waku-simulator/)
From 3e26f35f4b8388c8684090eb2237e0a7e2b87f07 Mon Sep 17 00:00:00 2001
From: Sergei Tikhomirov
Date: Tue, 20 May 2025 07:07:21 +0200
Subject: [PATCH 21/24] fix: insert missing bracket (#236)
---
docs/guides/nwaku/find-node-address.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/guides/nwaku/find-node-address.md b/docs/guides/nwaku/find-node-address.md
index 6ff102a..931fd3a 100644
--- a/docs/guides/nwaku/find-node-address.md
+++ b/docs/guides/nwaku/find-node-address.md
@@ -56,5 +56,5 @@ enr:-IO4QDxToTg86pPCK2KvMeVCXC2ADVZWrxXSvNZeaoa0JhShbM5qed69RQz1s1mWEEqJ3aoklo_7
```
:::tip Congratulations!
-You have successfully found the listening and discoverable addresses for your `nwaku` node. Have a look at the Configure Peer Discovery](/guides/nwaku/configure-discovery) guide to learn how to discover and connect with peers in the network.
+You have successfully found the listening and discoverable addresses for your `nwaku` node. Have a look at the [Configure Peer Discovery](/guides/nwaku/configure-discovery) guide to learn how to discover and connect with peers in the network.
:::
From f83a184421fc1b5ee43af3d7bd506337c19952da Mon Sep 17 00:00:00 2001
From: Alex Williamson
Date: Tue, 20 May 2025 10:12:09 +0500
Subject: [PATCH 22/24] find-node-address.md (#234)
Co-authored-by: fryorcraken <110212804+fryorcraken@users.noreply.github.com>
From bc618877debc2b68951816f5b3d009e42271dac3 Mon Sep 17 00:00:00 2001
From: fryorcraken <110212804+fryorcraken@users.noreply.github.com>
Date: Tue, 20 May 2025 15:24:43 +1000
Subject: [PATCH 23/24] Add git branching instructions (#238)
* docs: fix wakudev hostname
* Add git branching instructions
---------
Co-authored-by: Anton Iakimov
---
README.md | 8 ++++++++
docs/research/benchmarks/postgres-adoption.md | 6 +++---
2 files changed, 11 insertions(+), 3 deletions(-)
diff --git a/README.md b/README.md
index 5a601c7..dc3c6f6 100644
--- a/README.md
+++ b/README.md
@@ -94,3 +94,11 @@ yarn clear
The hosting is done using [Caddy server with Git plugin for handling GitHub webhooks](https://github.com/status-im/infra-misc/blob/master/ansible/roles/caddy-git).
Information about deployed build can be also found in `/build.json` available on the website.
+
+## Change Process
+
+1. Create a new working branch from develop: `git checkout develop; git checkout -b my-changes`.
+2. Make your changes, push them to the origin, and open a pull request against the `develop` branch.
+3. After approval, merge the pull request and verify the changes on the staging server (e.g., https://dev.vac.dev).
+4. When ready to promote changes to the live website, rebase the `master` branch on the staging changes: `git checkout master; git pull origin master; git rebase origin/develop; git push`.
+
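+The steps above can be sketched as a shell session (assuming `origin` points at this repository):
+
+```shell
+# 1. Branch off develop
+git checkout develop
+git checkout -b my-changes
+
+# 2. Push the branch and open a pull request against develop
+git push -u origin my-changes
+
+# 4. Once verified on staging, promote the changes to the live website
+git checkout master
+git pull origin master
+git rebase origin/develop
+git push
+```
+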
diff --git a/docs/research/benchmarks/postgres-adoption.md b/docs/research/benchmarks/postgres-adoption.md
index 98e3396..415ecc6 100644
--- a/docs/research/benchmarks/postgres-adoption.md
+++ b/docs/research/benchmarks/postgres-adoption.md
@@ -109,7 +109,7 @@ Notice that the two `nwaku` nodes run the very same version, which is compiled l
#### Comparing archive SQLite & Postgres performance in [nwaku-b6dd6899](https://github.com/waku-org/nwaku/tree/b6dd6899030ee628813dfd60ad1ad024345e7b41)
-The next results were obtained by running the docker-compose-manual-binaries.yml from [test-waku-query-c078075](https://github.com/waku-org/test-waku-query/tree/c07807597faa781ae6c8c32eefdf48ecac03a7ba) in the sandbox machine (metal-01.he-eu-hel1.wakudev.misc.status.im.)
+The next results were obtained by running the docker-compose-manual-binaries.yml from [test-waku-query-c078075](https://github.com/waku-org/test-waku-query/tree/c07807597faa781ae6c8c32eefdf48ecac03a7ba) in the sandbox machine (metal-01.he-eu-hel1.misc.wakudev.status.im.)
**Scenario 1**
@@ -155,7 +155,7 @@ In this case, the performance is similar regarding the timings. The store rate i
This nwaku commit is after a few **Postgres** optimizations were applied.
-The next results were obtained by running the docker-compose-manual-binaries.yml from [test-waku-query-c078075](https://github.com/waku-org/test-waku-query/tree/c07807597faa781ae6c8c32eefdf48ecac03a7ba) in the sandbox machine (metal-01.he-eu-hel1.wakudev.misc.status.im.)
+The next results were obtained by running the docker-compose-manual-binaries.yml from [test-waku-query-c078075](https://github.com/waku-org/test-waku-query/tree/c07807597faa781ae6c8c32eefdf48ecac03a7ba) in the sandbox machine (metal-01.he-eu-hel1.misc.wakudev.status.im.)
**Scenario 1**
@@ -217,7 +217,7 @@ The `db-postgres-hammer` is aimed to stress the database from the `select` point
#### Results
-The following results were obtained by using the sandbox machine (metal-01.he-eu-hel1.wakudev.misc) and running nim-waku nodes from https://github.com/waku-org/nwaku/tree/b452ed865466a33b7f5b87fa937a8471b28e466e and using the `test-waku-query` project from https://github.com/waku-org/test-waku-query/tree/fef29cea182cc744c7940abc6c96d38a68739356
+The following results were obtained by using the sandbox machine (metal-01.he-eu-hel1.misc.wakudev) and running nim-waku nodes from https://github.com/waku-org/nwaku/tree/b452ed865466a33b7f5b87fa937a8471b28e466e and using the `test-waku-query` project from https://github.com/waku-org/test-waku-query/tree/fef29cea182cc744c7940abc6c96d38a68739356
The following shows the results
From 27ebcd762cef6f305bed87910ae5692060619edd Mon Sep 17 00:00:00 2001
From: Prem Chaitanya Prathi
Date: Tue, 20 May 2025 11:01:12 +0530
Subject: [PATCH 24/24] fix: point nwaku to proper docker repo and image
version (#217)
* fix: point nwaku to proper docker repo and image version
* chore: add help command in docker
---------
Co-authored-by: fryorcraken <110212804+fryorcraken@users.noreply.github.com>
---
.cspell.json | 2 +-
docs/guides/nwaku/config-methods.md | 6 +++---
docs/guides/nwaku/run-docker.md | 14 ++++++++++++--
docs/research/benchmarks/postgres-adoption.md | 2 +-
4 files changed, 17 insertions(+), 7 deletions(-)
diff --git a/.cspell.json b/.cspell.json
index 10e618d..eed5e20 100644
--- a/.cspell.json
+++ b/.cspell.json
@@ -40,7 +40,7 @@
"autoplay",
"classwide",
"devel",
- "statusteam",
+ "wakuorg",
"myaddr",
"extip",
"staticnode",
diff --git a/docs/guides/nwaku/config-methods.md b/docs/guides/nwaku/config-methods.md
index f66ba98..c9969ae 100644
--- a/docs/guides/nwaku/config-methods.md
+++ b/docs/guides/nwaku/config-methods.md
@@ -25,7 +25,7 @@ Node configuration is primarily done using command line options, which override
When running your node with Docker, provide the command line options after the image name in this format:
```shell
-docker run statusteam/nim-waku --tcp-port=65000
+docker run wakuorg/nwaku --tcp-port=65000
```
## Environment variables
@@ -41,7 +41,7 @@ WAKUNODE2_TCP_PORT=65000 ./build/wakunode2
When running your node with Docker, start the node using the `-e` command option:
```shell
-docker run -e "WAKUNODE2_TCP_PORT=65000" statusteam/nim-waku
+docker run -e "WAKUNODE2_TCP_PORT=65000" wakuorg/nwaku
```
:::info
@@ -72,7 +72,7 @@ You can also specify the configuration file via environment variables:
WAKUNODE2_CONFIG_FILE=[TOML CONFIGURATION FILE] ./build/wakunode2
# Using environment variables with Docker
-docker run -e "WAKUNODE2_CONFIG_FILE=[TOML CONFIGURATION FILE]" statusteam/nim-waku
+docker run -e "WAKUNODE2_CONFIG_FILE=[TOML CONFIGURATION FILE]" wakuorg/nwaku
```
:::info
diff --git a/docs/guides/nwaku/run-docker.md b/docs/guides/nwaku/run-docker.md
index 635d54b..15a8efa 100644
--- a/docs/guides/nwaku/run-docker.md
+++ b/docs/guides/nwaku/run-docker.md
@@ -15,7 +15,7 @@ We recommend running a `nwaku` node with at least 2GB of RAM, especially if `WSS
## Get Docker image
-The Nwaku Docker images are available on the Docker Hub public registry under the [statusteam/nim-waku](https://hub.docker.com/r/statusteam/nim-waku) repository. Please visit [statusteam/nim-waku/tags](https://hub.docker.com/r/statusteam/nim-waku/tags) for images of specific releases.
+The Nwaku Docker images are available on the Docker Hub public registry under the [wakuorg/nwaku](https://hub.docker.com/r/wakuorg/nwaku) repository. Please visit [wakuorg/nwaku/tags](https://hub.docker.com/r/wakuorg/nwaku/tags) for images of specific releases.
## Build Docker image
@@ -45,7 +45,7 @@ docker run [OPTIONS] [IMAGE] [ARG...]
Run `nwaku` using the most typical configuration:
```shell
-docker run -i -t -p 60000:60000 -p 9000:9000/udp statusteam/nim-waku:v0.20.0 \
+docker run -i -t -p 60000:60000 -p 9000:9000/udp wakuorg/nwaku:v0.32.0 \
--dns-discovery=true \
--dns-discovery-url=enrtree://AIRVQ5DDA4FFWLRBCHJWUWOO6X6S4ZTZ5B667LQ6AJU6PEYDLRD5O@sandbox.waku.nodes.status.im \
--discv5-discovery=true \
@@ -58,6 +58,16 @@ To find your public IP, use:
dig TXT +short o-o.myaddr.l.google.com @ns1.google.com | awk -F'"' '{ print $2}'
```
+For more detailed information about all configuration options, run:
+
+```shell
+docker run -t wakuorg/nwaku:v0.32.0 --help
+```
+
+:::info
+Note that running a node in The Waku Network (`--cluster-id=1`) requires a specific set of configurations. In this case, we recommend running the node with Docker Compose.
+:::
+
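+For reference, a minimal sketch of the Docker Compose approach, assuming the [nwaku-compose](https://github.com/waku-org/nwaku-compose) repository (see its README for the required environment variables):
+
+```shell
+git clone https://github.com/waku-org/nwaku-compose.git
+cd nwaku-compose
+# Configure the required environment variables first (see the repository README)
+docker compose up -d
+```
+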
:::info
We recommend using explicit port mappings (`-p`) when exposing ports accessible from outside the host (listening and discovery ports, API servers).
:::
diff --git a/docs/research/benchmarks/postgres-adoption.md b/docs/research/benchmarks/postgres-adoption.md
index 415ecc6..f593f08 100644
--- a/docs/research/benchmarks/postgres-adoption.md
+++ b/docs/research/benchmarks/postgres-adoption.md
@@ -78,7 +78,7 @@ In this case, we are comparing *Store* performance by means of Rest service.
- node_c: one _nwaku_ node with *REST* enabled and acting as a *Store client* for node_a.
- node_d: one _nwaku_ node with *REST* enabled and acting as a *Store client* for node_b.
- With _jmeter_, 10 users make *REST* *Store* requests concurrently to each of the “rest” nodes (node_c and node_d.)
-- All _nwaku_ nodes running statusteam/nim-waku:v0.19.0
+- All _nwaku_ nodes running wakuorg/nwaku:v0.32.0
[This](https://github.com/waku-org/test-waku-query/blob/master/docker/jmeter/http_store_requests.jmx) is the _jmeter_ project used.