mirror of https://github.com/logos-messaging/docs.waku.org.git
synced 2026-01-05 22:33:06 +00:00

Merge branch 'develop' into fix/nwaku-ref

commit 437614e0f8
@@ -17,6 +17,7 @@
   "enrtree",
   "Discv5",
   "gossipsub",
+  "autosharding",
   "lightpush",
   "pubtopic1",
   "proto",
@@ -94,3 +94,11 @@ yarn clear

The hosting is done using [Caddy server with Git plugin for handling GitHub webhooks](https://github.com/status-im/infra-misc/blob/master/ansible/roles/caddy-git).

+ Information about the deployed build can also be found in `/build.json`, available on the website.
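For a quick way to inspect that file, something like the following works (a sketch; the docs.waku.org URL is an assumption, substitute the deployment you are checking):

```js
// Fetch and print the deployed build metadata exposed at /build.json
const response = await fetch("https://docs.waku.org/build.json");
console.log(await response.json());
```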
+ ## Change Process
+
+ 1. Create a new working branch from develop: `git checkout develop; git checkout -b my-changes`.
+ 2. Make your changes, push them to the origin, and open a Pull Request against the develop branch.
+ 3. After approval, merge the pull request, and verify the changes on the staging server (e.g., https://dev.vac.dev).
+ 4. When ready to promote changes to the live website, rebase the master branch on the staging changes: `git checkout master; git pull origin master; git rebase origin/develop; git push`.
@@ -48,8 +48,8 @@ Looking for what to build with Waku? Discover a collection of sample ideas and u

## Case studies

<div class="case-study-container">
- <a href="https://blog.waku.org/thegraph-waku-case-study/" target="_blank" rel="noopener noreferrer"><img src="/img/graph-use-case.jpeg" /></a>
- <a href="https://blog.waku.org/railgun-waku-case-study/" target="_blank" rel="noopener noreferrer"><img src="/img/railgun-use-case.jpeg" /></a>
+ <a href="https://blog.waku.org/2024-05-13-the-graph-case-study/" target="_blank" rel="noopener noreferrer"><img src="/img/graph-use-case.jpeg" /></a>
+ <a href="https://blog.waku.org/2024-04-26-railgun-case-study/" target="_blank" rel="noopener noreferrer"><img src="/img/railgun-use-case.jpeg" /></a>
</div>

## Getting started
@@ -64,14 +64,11 @@ const peers = [

const node = await createLightNode();

- // In case nodes are using `ws` protocol - additional configuration is needed:
+ // In case nodes are using IP address and / or `ws` protocol - additional configuration is needed:
/*
import { webSockets } from "@libp2p/websockets";
import { all as filterAll } from "@libp2p/websockets/filters";

const node = await createLightNode({
  libp2p: {
    transports: [webSockets({ filter: filterAll })],
    filterMultiaddrs: false,
  },
});
*/
@@ -194,10 +191,10 @@ const node = await createLightNode({

You can retrieve the array of peers connected to a node using the `libp2p.getPeers()` function within the `@waku/sdk` package:

```js
- import { createLightNode, waitForRemotePeer } from "@waku/sdk";
+ import { createLightNode } from "@waku/sdk";

const node = await createLightNode({ defaultBootstrap: true });
- await waitForRemotePeer(node);
+ await node.waitForPeers();

// Retrieve array of peers connected to the node
console.log(node.libp2p.getPeers());
```
@@ -24,32 +24,55 @@ await node.start();

When the `defaultBootstrap` parameter is set to `true`, your node will be bootstrapped using the [default bootstrap method](/guides/js-waku/configure-discovery#default-bootstrap-method). Have a look at the [Bootstrap Nodes and Discover Peers](/guides/js-waku/configure-discovery) guide to learn more methods to bootstrap nodes.
:::

- ## Connect to remote peers
-
- Use the `waitForRemotePeer()` function to wait for the node to connect with peers on the Waku Network:
+ A node needs to know how to route messages. By default, it will use The Waku Network configuration (`{ clusterId: 1, shards: [0,1,2,3,4,5,6,7] }`). For most applications, it's recommended to use autosharding:

```js
- import { waitForRemotePeer } from "@waku/sdk";
+ // Create node with auto sharding (recommended)
+ const node = await createLightNode({
+   defaultBootstrap: true,
+   networkConfig: {
+     clusterId: 1,
+     contentTopics: ["/my-app/1/notifications/proto"],
+   },
+ });
+ ```
+
+ ### Alternative network configuration
+
+ If your project requires a specific network configuration, you can use static sharding:
+
+ ```js
+ // Create node with static sharding
+ const node = await createLightNode({
+   defaultBootstrap: true,
+   networkConfig: {
+     clusterId: 1,
+     shards: [0, 1, 2, 3],
+   },
+ });
+ ```
+
+ ## Connect to remote peers
+
+ Use the `node.waitForPeers()` function to wait for the node to connect with peers on the Waku Network:
+
+ ```js
// Wait for a successful peer connection
- await waitForRemotePeer(node);
+ await node.waitForPeers();
```

The `protocols` parameter allows you to specify the [protocols](/learn/concepts/protocols) that the remote peers should have enabled:

```js
- import { waitForRemotePeer, Protocols } from "@waku/sdk";
+ import { Protocols } from "@waku/sdk";

// Wait for peer connections with specific protocols
- await waitForRemotePeer(node, [
-   Protocols.LightPush,
-   Protocols.Filter,
- ]);
+ await node.waitForPeers([Protocols.LightPush, Protocols.Filter]);
```

## Choose a content topic

- [Choose a content topic](/learn/concepts/content-topics) for your application and create a message `encoder` and `decoder`:
+ Choose a [content topic](/learn/concepts/content-topics) for your application and create a message `encoder` and `decoder`:

```js
import { createEncoder, createDecoder } from "@waku/sdk";
// …
```
@@ -71,6 +94,20 @@ const encoder = createEncoder({
});
```

+ The `pubsubTopicShardInfo` parameter allows you to configure a different network configuration for your `encoder` and `decoder`:
+
+ ```js
+ // Create the network config
+ const networkConfig = { clusterId: 3, shards: [1, 2] };
+
+ // Create encoder and decoder with custom network config
+ const encoder = createEncoder({
+   contentTopic: contentTopic,
+   pubsubTopicShardInfo: networkConfig,
+ });
+ const decoder = createDecoder(contentTopic, networkConfig);
+ ```

:::info
In this example, users send and receive messages on a shared content topic. However, real applications may have users broadcasting messages while others listen or only have 1:1 exchanges. Waku supports all these use cases.
:::
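For instance, here is a minimal sketch of a 1:1 exchange pattern built on per-user content topics; the topic scheme and the `alice`/`bob` identifiers are illustrative assumptions, not part of this guide:

```js
import { createEncoder, createDecoder } from "@waku/sdk";

// Hypothetical per-user "inbox" topics
const myId = "alice";
const recipientId = "bob";

// Encode outgoing messages onto the recipient's inbox topic
const outboundEncoder = createEncoder({
  contentTopic: `/my-app/1/inbox-${recipientId}/proto`,
});

// Decode incoming messages from your own inbox topic
const inboundDecoder = createDecoder(`/my-app/1/inbox-${myId}/proto`);
```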
@@ -83,7 +120,7 @@ Create your application's message structure using [Protobuf's valid message](htt

import protobuf from "protobufjs";

// Create a message structure using Protobuf
- const ChatMessage = new protobuf.Type("ChatMessage")
+ const DataPacket = new protobuf.Type("DataPacket")
  .add(new protobuf.Field("timestamp", 1, "uint64"))
  .add(new protobuf.Field("sender", 2, "string"))
  .add(new protobuf.Field("message", 3, "string"));
@@ -99,14 +136,14 @@ To send messages over the Waku Network using the `Light Push` protocol, create a

```js
// Create a new message object
- const protoMessage = ChatMessage.create({
+ const protoMessage = DataPacket.create({
  timestamp: Date.now(),
  sender: "Alice",
  message: "Hello, World!",
});

// Serialise the message using Protobuf
- const serialisedMessage = ChatMessage.encode(protoMessage).finish();
+ const serialisedMessage = DataPacket.encode(protoMessage).finish();

// Send the message using Light Push
await node.lightPush.send(encoder, {
  payload: serialisedMessage,
});
```
@@ -124,7 +161,7 @@ const callback = (wakuMessage) => {
  // Check if there is a payload on the message
  if (!wakuMessage.payload) return;
  // Render the messageObj as desired in your application
- const messageObj = ChatMessage.decode(wakuMessage.payload);
+ const messageObj = DataPacket.decode(wakuMessage.payload);
  console.log(messageObj);
};
@@ -140,6 +177,16 @@ if (error) {
await subscription.subscribe([decoder], callback);
```

+ The `pubsubTopicShardInfo` parameter allows you to configure a different network configuration for your `Filter` subscription:
+
+ ```js
+ // Create the network config
+ const networkConfig = { clusterId: 3, shards: [1, 2] };
+
+ // Create Filter subscription with custom network config
+ const subscription = await node.filter.createSubscription(networkConfig);
+ ```

You can use the `subscription.unsubscribe()` function to stop receiving messages from a content topic:

```js
// Stop receiving messages for the subscribed content topic
await subscription.unsubscribe([contentTopic]);
```
@@ -19,15 +19,28 @@ await node.start();

## Connect to store peers

- Use the `waitForRemotePeer()` function to wait for the node to connect with Store peers:
+ Use the `node.waitForPeers()` method to wait for the node to connect with Store peers:

```js
- import { waitForRemotePeer, Protocols } from "@waku/sdk";
+ import { Protocols } from "@waku/sdk";

// Wait for a successful peer connection
- await waitForRemotePeer(node, [Protocols.Store]);
+ await node.waitForPeers([Protocols.Store]);
```

+ You can also specify a dedicated Store peer to use for queries when creating the node. This is particularly useful when running your own Store node or when you want to use a specific Store node in the network:
+
+ ```js
+ const node = await createLightNode({
+   defaultBootstrap: true,
+   store: {
+     peer: "/ip4/1.2.3.4/tcp/1234/p2p/16Uiu2HAm..." // multiaddr or PeerId of your Store node
+   }
+ });
+ ```
+
+ If the specified Store peer is not available, the node will fall back to using random Store peers in the network.

## Choose a content topic

[Choose a content topic](/learn/concepts/content-topics) for filtering the messages to retrieve and create a message `decoder`:
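A minimal sketch (the content topic value here is an illustrative assumption):

```js
import { createDecoder } from "@waku/sdk";

// Content topic whose message history will be queried
const contentTopic = "/store-guide/1/chat/proto";
const decoder = createDecoder(contentTopic);
```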
@@ -118,7 +118,7 @@ function App() {
  const decoder = createDecoder(contentTopic);

  // Create a message structure using Protobuf
- const ChatMessage = new protobuf.Type("ChatMessage")
+ const DataPacket = new protobuf.Type("DataPacket")
    .add(new protobuf.Field("timestamp", 1, "uint64"))
    .add(new protobuf.Field("message", 2, "string"));
@@ -223,13 +223,13 @@ function App() {

  // Create a new message object
  const timestamp = Date.now();
- const protoMessage = ChatMessage.create({
+ const protoMessage = DataPacket.create({
    timestamp: timestamp,
    message: inputMessage
  });

  // Serialise the message and push to the network
- const payload = ChatMessage.encode(protoMessage).finish();
+ const payload = DataPacket.encode(protoMessage).finish();
  const { recipients, errors } = await push({ payload, timestamp });

  // Check for errors
@@ -258,7 +258,7 @@ function App() {
  useEffect(() => {
    setMessages(filterMessages.map((wakuMessage) => {
      if (!wakuMessage.payload) return;
-     return ChatMessage.decode(wakuMessage.payload);
+     return DataPacket.decode(wakuMessage.payload);
    }));
  }, [filterMessages]);
}
@@ -283,16 +283,29 @@ function App() {
    const allMessages = storeMessages.concat(filterMessages);
    setMessages(allMessages.map((wakuMessage) => {
      if (!wakuMessage.payload) return;
-     return ChatMessage.decode(wakuMessage.payload);
+     return DataPacket.decode(wakuMessage.payload);
    }));
  }, [filterMessages, storeMessages]);
}
```

+ You can also configure a specific Store peer when creating the node, which is useful when running your own Store node or using a specific node in the network:
+
+ ```js
+ const node = await createLightNode({
+   defaultBootstrap: true,
+   store: {
+     peer: "/ip4/1.2.3.4/tcp/1234/p2p/16Uiu2HAm..." // multiaddr or PeerId of your Store node
+   }
+ });
+ ```
+
+ If the specified Store peer is not available, the node will fall back to using random Store peers in the network.

:::info
To explore the available Store query options, have a look at the [Retrieve Messages Using Store Protocol](/guides/js-waku/store-retrieve-messages#store-query-options) guide.
:::

:::tip
- You have successfully integrated `@waku/sdk` into a React application using the `@waku/react` package. Have a look at the [web-chat](https://github.com/waku-org/js-waku-examples/tree/master/examples/web-chat) example for a working demo and the [Building a Tic-Tac-Toe Game with Waku](https://blog.waku.org/tictactoe-tutorial) tutorial to learn more.
+ You have successfully integrated `@waku/sdk` into a React application using the `@waku/react` package. Have a look at the [web-chat](https://github.com/waku-org/js-waku-examples/tree/master/examples/web-chat) example for a working demo and the [Building a Tic-Tac-Toe Game with Waku](https://blog.waku.org/2024-01-22-tictactoe-tutorial/) tutorial to learn more.
:::
@@ -32,7 +32,7 @@ source "$HOME/.cargo/env"

<TabItem value="fedora" label="Fedora">

```shell
- sudo dnf install @development-tools git libpq-devel
+ sudo dnf install @development-tools git libpq-devel which
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
@@ -10,7 +10,7 @@ Here are the available node configuration options, along with their default valu

| Name | Default Value | Description |
| ----------------- | --------------------------- | --------------------------------------------------------------------------------------------------- |
| `config-file` | | Loads configuration from a TOML file (cmd-line parameters take precedence) |
- | `protected-topic` | `newSeq[ProtectedTopic](0)` | Topics and its public key to be used for message validation, topic:pubkey. Argument may be repeated |
+ | `protected-shard` | `newSeq[ProtectedShard](0)` | Shards and their public keys to be used for message validation, formatted as shard:pubkey. Argument may be repeated |

## Log config
@@ -68,6 +68,7 @@ Here are the available node configuration options, along with their default valu
| `keep-alive` | `false` | Enable keep-alive for idle connections: true\|false |
| `pubsub-topic` | | Default pubsub topic to subscribe to. Argument may be repeated. **Deprecated!** Please use `shard` and/or `content-topic` instead |
| `shard` | | Shard to subscribe to. Argument may be repeated |
+ | `num-shards-in-network` | | Number of shards in the network. Used to map content topics to shards when using autosharding |
| `content-topic` | | Default content topic to subscribe to. Argument may be repeated |
| `reliability` | `false` | Enable experimental reliability protocol: true\|false |
@@ -157,11 +158,11 @@ Here are the available node configuration options, along with their default valu
| `websocket-secure-key-path` | | Secure websocket key path: '/path/to/key.txt' |
| `websocket-secure-cert-path` | | Secure websocket Certificate path: '/path/to/cert.txt' |

- ## Non relay, request-response protocol DOS protection configuration
+ ## Non-relay, request-response protocol DOS protection configuration

| Name | Default Value | Description |
| ---------------------------- | ------------- | ------------------------------------------------------ |
- | `rate-limit` | | This is a repeatable option. Each one of them can describe spefic rate limit configuration for a particular protocol.<br />\<protocol\>:volume/period\<time-unit\><br />- if protocol is not given, settings will be taken as default for un-set protocols. Ex: `80/2s`<br />-Supported protocols are: `lightpush`\|`filter`\|`px`\|`store`\|`storev2`\|`storev3`<br />-volume must be an integer value, representing number of requests over the period of time allowed.<br />-period\<time-unit\> must be an integer with defined unit as one of `h`\|`m`\|`s`\|`ms`<br />- `storev2` and `storev3` takes precedence over `store` which can easy set both store protocols at once.<br />- In case of multiple set of the same protocol limit, last one will take place.<br />- if config is not set it means unlimited requests are allowed.<br />-filter has a bit different approach. It has a default setting applied if not overridden. Rate limit setting for filter will be applied per subscriber-peers, not globally - it must be considered when changing the setting.<br /><br />Examples:<br />- `100/1s` - default for all protocols if not set otherwise.<br />-`lightpush:0/0s` - lightpush protocol will be not rate limited.<br />-`store:130/1500ms` - both store-v3 and store-v2 will apply 130 request per each 1500ms separately.<br />-`px:10/1h` PeerExchange will serve only 10 requests in every hour.<br />-`filter:8/5m` - will allow 8 subs/unsubs/ping requests for each subscribers within every 5 min. |
+ | `rate-limit` | | This is a repeatable option. Each occurrence describes a rate limit configuration for a particular protocol.<br />Formatted as: `<protocol>:volume/period<time-unit>`<br />- If no protocol is given, the setting is taken as the default for protocols without their own setting. Ex: `80/2s`<br />- Supported protocols: `lightpush`\|`filter`\|`px`\|`store`\|`storev2`\|`storev3`<br />- volume must be an integer value, representing the number of requests allowed over the period.<br />- period\<time-unit\> must be an integer with a unit of `h`\|`m`\|`s`\|`ms`<br />- `storev2` and `storev3` take precedence over `store`, which can easily set both store protocols at once.<br />- If the same protocol limit is set multiple times, the last one takes effect.<br />- If not set (the default), unlimited requests are allowed.<br />- filter takes a slightly different approach: it has a default setting applied if not overridden, and its rate limit is applied per subscriber-peer, not globally - keep this in mind when changing the setting.<br /><br />Examples:<br />`--rate-limit="100/1s"` - default for all protocols if not set otherwise.<br />`--rate-limit="lightpush:0/0s"` - the lightpush protocol will not be rate-limited.<br />`--rate-limit="store:130/1500ms"` - both store-v3 and store-v2 will each allow 130 requests per 1500ms.<br />`--rate-limit="px:10/1h"` - PeerExchange will serve only 10 requests every hour.<br />`--rate-limit="filter:8/5m"` - allows 8 subscribe/unsubscribe/ping requests per subscriber within every 5 minutes. |

:::tip
To configure your node using the provided configuration options, have a look at the [Node Configuration Methods](/guides/nwaku/config-methods) guide.
@@ -56,5 +56,5 @@ enr:-IO4QDxToTg86pPCK2KvMeVCXC2ADVZWrxXSvNZeaoa0JhShbM5qed69RQz1s1mWEEqJ3aoklo_7
```

:::tip Congratulations!
- You have successfully found the listening and discoverable addresses for your `nwaku` node. Have a look at the Configure Peer Discovery](/guides/nwaku/configure-discovery) guide to learn how to discover and connect with peers in the network.
+ You have successfully found the listening and discoverable addresses for your `nwaku` node. Have a look at the [Configure Peer Discovery](/guides/nwaku/configure-discovery) guide to learn how to discover and connect with peers in the network.
:::
@@ -27,7 +27,7 @@ import { AccordionItem } from '@site/src/components/mdx'
</AccordionItem>

<AccordionItem title="How does Waku differ from IPFS?">
- Waku focuses on short, ephemeral, real-time time messages, while IPFS focuses on large, long-term data storage. Although there's an overlap between the two technologies, Waku does not currently support large data for privacy reasons.
+ Waku focuses on short, ephemeral, real-time messages, while IPFS focuses on large, long-term data storage. Although there's an overlap between the two technologies, Waku does not currently support large data for privacy reasons.
</AccordionItem>

<AccordionItem title="What are Rate Limiting Nullifiers (RLN)?">
@@ -13,7 +13,7 @@ We have prepared a PoC implementation of this method in JS: <https://examples.wa

## Prevention of denial of service (DoS) and node incentivisation

- Denial of service signifies the case where an adversarial peer exhausts another node's service capacity (e.g., by making a large number of requests) and makes it unavailable to the rest of the system. RnD on DoS attack mitigation can tracked from here: <https://github.com/vacp2p/research/issues/148>.
+ Denial of service signifies the case where an adversarial peer exhausts another node's service capacity (e.g., by making a large number of requests) and makes it unavailable to the rest of the system. RnD on DoS attack mitigation can be tracked from here: <https://github.com/vacp2p/research/issues/148>.

In a nutshell, peers have to pay for the service they obtain from each other. In addition to incentivising the service provider, accounting also makes DoS attacks costly for malicious peers. The accounting model can be used in `Store` and `Filter` to protect against DoS attacks.
@@ -5,7 +5,7 @@ hide_table_of_contents: true

Waku's protocol layers offer different services and security considerations, shaping the overall security of Waku. We document the security models in the [RFCs of the protocols](https://rfc.vac.dev/), aiming to provide transparent and open-source references. This empowers Waku users to understand each protocol's security guarantees and limitations.

- Some of the Waku's security features include the following:
+ Some of Waku's security features include the following:

## [Pseudonymity](https://rfc.vac.dev/waku/standards/core/10/waku2/#pseudonymity)
@@ -11,7 +11,7 @@ The Waku Network is a shared p2p messaging network that is open-access, useful f
4. Services for resource-restricted nodes, including historical message storage and retrieval, filtering, etc.

:::tip
- If you want to learn more about the Waku Network, the [The Waku Network: Technical Overview](https://blog.waku.org/2024-waku-network-tech-overview) article provides an in-depth look under the hood.
+ If you want to learn more about the Waku Network, [The Waku Network: Technical Overview](https://blog.waku.org/2024-03-26-waku-network-tech-overview/) article provides an in-depth look under the hood.
:::

## Why join the Waku network?
@@ -27,7 +27,7 @@ Whenever we refer to “Logos”, “we” or other similar references, we are r

### 2) We limit the collection and processing of personal data from your use of the Website

- We aim to limit the collection and collection and processing of personal data from users of the Website. We only collect and process certain personal data for specific purposes and where we have the legal basis to do so under applicable privacy legislation. We will not collect or process any personal data that we don’t need and where we do store any personal data, we will only store it for the least amount of time needed for the indicated purpose.
+ We aim to limit the collection and processing of personal data from users of the Website. We only collect and process certain personal data for specific purposes and where we have the legal basis to do so under applicable privacy legislation. We will not collect or process any personal data that we don’t need and where we do store any personal data, we will only store it for the least amount of time needed for the indicated purpose.

In this regard, we collect and process the following personal data from your use of the Website:
@@ -109,7 +109,7 @@ Notice that the two `nwaku` nodes run the very same version, which is compiled l

#### Comparing archive SQLite & Postgres performance in [nwaku-b6dd6899](https://github.com/waku-org/nwaku/tree/b6dd6899030ee628813dfd60ad1ad024345e7b41)

- The next results were obtained by running the docker-compose-manual-binaries.yml from [test-waku-query-c078075](https://github.com/waku-org/test-waku-query/tree/c07807597faa781ae6c8c32eefdf48ecac03a7ba) in the sandbox machine (metal-01.he-eu-hel1.wakudev.misc.statusim.net.)
+ The next results were obtained by running the docker-compose-manual-binaries.yml from [test-waku-query-c078075](https://github.com/waku-org/test-waku-query/tree/c07807597faa781ae6c8c32eefdf48ecac03a7ba) in the sandbox machine (metal-01.he-eu-hel1.misc.wakudev.status.im.)

**Scenario 1**
@@ -155,7 +155,7 @@ In this case, the performance is similar regarding the timings. The store rate i

This nwaku commit is after a few **Postgres** optimizations were applied.

- The next results were obtained by running the docker-compose-manual-binaries.yml from [test-waku-query-c078075](https://github.com/waku-org/test-waku-query/tree/c07807597faa781ae6c8c32eefdf48ecac03a7ba) in the sandbox machine (metal-01.he-eu-hel1.wakudev.misc.statusim.net.)
+ The next results were obtained by running the docker-compose-manual-binaries.yml from [test-waku-query-c078075](https://github.com/waku-org/test-waku-query/tree/c07807597faa781ae6c8c32eefdf48ecac03a7ba) in the sandbox machine (metal-01.he-eu-hel1.misc.wakudev.status.im.)

**Scenario 1**
@@ -217,7 +217,7 @@ The `db-postgres-hammer` is aimed to stress the database from the `select` point

#### Results

- The following results were obtained by using the sandbox machine (metal-01.he-eu-hel1.wakudev.misc) and running nim-waku nodes from https://github.com/waku-org/nwaku/tree/b452ed865466a33b7f5b87fa937a8471b28e466e and using the `test-waku-query` project from https://github.com/waku-org/test-waku-query/tree/fef29cea182cc744c7940abc6c96d38a68739356
+ The following results were obtained by using the sandbox machine (metal-01.he-eu-hel1.misc.wakudev) and running nim-waku nodes from https://github.com/waku-org/nwaku/tree/b452ed865466a33b7f5b87fa937a8471b28e466e and using the `test-waku-query` project from https://github.com/waku-org/test-waku-query/tree/fef29cea182cc744c7940abc6c96d38a68739356

The following shows the results:
docs/research/benchmarks/test-results-summary.md (new file, 90 lines)
@@ -0,0 +1,90 @@
---
title: Performance Benchmarks and Test Reports
---

## Introduction
This page summarises key performance metrics for nwaku and provides links to detailed test reports.

> ## TL;DR
>
> - Average Waku bandwidth usage: ~**10 KB/s** (excluding discv5 discovery traffic) for a 1KB message size and a message injection rate of 1 msg/s. Confirmed for topologies of up to 2000 Relay nodes.
> - Average time for a message to propagate to 100% of nodes: **0.4s** for topologies of up to 2000 Relay nodes.
> - Average per-node bandwidth usage of the discv5 protocol: **8 KB/s** for incoming traffic and **7.4 KB/s** for outgoing traffic, in a network with 100 continuously online nodes.
> - Future improvements: A messaging API is currently in development to streamline interactions with the Waku protocol suite. Once completed, it will enable benchmarking at the messaging API level, allowing applications to more easily compare their own performance results.

## Insights

### Relay Bandwidth Usage: nwaku v0.34.0
The average per-node `libp2p` bandwidth usage in a 1000-node Relay network with 1KB messages at varying injection rates.

| Message Injection Rate | Average libp2p incoming bandwidth (KB/s) | Average libp2p outgoing bandwidth (KB/s) |
|------------------------|------------------------------------------|------------------------------------------|
| 1 msg/s | ~10.1 | ~10.3 |
| 1 msg/10s | ~1.8 | ~1.9 |

### Message Propagation Latency: nwaku v0.34.0-rc1
The message propagation latency is measured as the total time for a message to reach all nodes.
We compare the latency in different network configurations for the following simulation parameters:
- Total messages published: 600
- Message size: 1KB
- Message injection rate: 1 msg/s

The different network configurations tested are:
- Relay Config: 1000 nodes with relay enabled
- Mixed Config: 210 nodes, consisting of bootstrap nodes, filter clients and servers, lightpush clients and servers, and store nodes
- Non-persistent Relay Config: 500 persistent relay nodes, 10 store nodes and 100 non-persistent relay nodes

Click on a specific config to see the detailed test report.

| Config | Average Message Propagation Latency (s) | Max Message Propagation Latency (s) |
|--------|-----------------------------------------|-------------------------------------|
| [Relay](https://www.notion.so/Waku-regression-testing-v0-34-1618f96fb65c803bb7bad6ecd6bafff9) (1000 nodes) | 0.05 | 1.6 |
| [Mixed](https://www.notion.so/Mixed-environment-analysis-1688f96fb65c809eb235c59b97d6e15b) (210 nodes) | 0.0125 | 0.007 |
| [Non-persistent Relay](https://www.notion.so/High-Churn-Relay-Store-Reliability-16c8f96fb65c8008bacaf5e86881160c) (510 nodes) | 0.0125 | 0.25 |

### Discv5 Bandwidth Usage: nwaku v0.34.0
The average bandwidth usage of discv5 for a network of 100 nodes and a message injection rate of 0 or 1 msg/s.
The measurements are based on a stable network where all nodes have already connected to peers to form a healthy mesh.

| Message size | Average discv5 incoming bandwidth (KB/s) | Average discv5 outgoing bandwidth (KB/s) |
|----------------------|------------------------------------------|------------------------------------------|
| no message injection | 7.88 | 6.70 |
| 1KB | 8.04 | 7.40 |
| 10KB | 8.03 | 7.45 |

## Testing
### DST
The VAC DST team performs regression testing on all new **nwaku** releases, comparing performance with previous versions.
They simulate large Waku networks with a variety of network and protocol configurations that are representative of real-world usage.

**Test Reports**: [DST Reports](https://www.notion.so/DST-Reports-1228f96fb65c80729cd1d98a7496fe6f)

### QA
The VAC QA team performs interoperability tests for **nwaku** and **go-waku** using the latest main branch builds.
These tests run daily and verify protocol functionality by targeting specific features of each protocol.

**Test Reports**: [QA Reports](https://discord.com/channels/1110799176264056863/1196933819614363678)

### nwaku
The **nwaku** team follows a structured release procedure for all release candidates.
This involves deploying RCs to the `status.staging` fleet for validation and performing sanity checks.

**Release Process**: [nwaku Release Procedure](https://github.com/waku-org/nwaku/blob/master/.github/ISSUE_TEMPLATE/prepare_release.md)

### Research
The Waku Research team conducts a variety of benchmarking, performance testing, proof-of-concept validations and debugging efforts.
They also maintain a Waku simulator designed for small-scale, single-purpose, on-demand testing.

**Test Reports**: [Waku Research Reports](https://www.notion.so/Miscellaneous-2c02516248db4a28ba8cb2797a40d1bb)

**Waku Simulator**: [Waku Simulator Book](https://waku-org.github.io/waku-simulator/)
@@ -2,63 +2,46 @@
title: Capped Bandwidth in Waku
---

- This issue explains i) why The Waku Network requires a capped bandwidth per shard and ii) how to solve it by rate limiting with RLN by daily requests (instead of every x seconds), which would require RLN v2, or some modifications in the current circuits to work. It also explains why the current rate limiting RLN approach (limit 1 message every x seconds) is not practical to solve this problem.
+ This post explains i) why The Waku Network requires a capped bandwidth per shard and ii) how to achieve it by rate limiting with RLN v2.

## Problem

- First of all, lets begin with the terminology. We have talked in the past about "predictable" bandwidth, but a better name would be "capped" bandwidth. This is because it is totally fine that the waku traffic is not predictable, as long as its capped. And it has to be capped because otherwise no one will be able to run a node.
+ First of all, let's begin with the terminology. We have talked in the past about "predictable" bandwidth, but a better name would be "capped" bandwidth. This is because it is totally fine that the waku traffic is not predictable, as long as it is capped. And it has to be capped because otherwise, no one will be able to run a node.

- Since we aim that everyone is able to run a full waku node (at least subscribed to a single shard) its of paramount importance that the bandwidth requirements (up/down) are i) reasonable to run with a residential internet connection in every country and ii) limited to an upper value, aka capped. If the required bandwidth to stay up to date with a topic is higher than what the node has available, then it will start losing messages and won't be able to stay up to date with the topic messages. And not to mention the problems this will cause to other services and applications being used by the user.
+ Since we aim for everyone to be able to run a full waku node (at least subscribed to a single shard), it is of paramount importance that the bandwidth requirements (up/down) are i) reasonable to run with a residential internet connection in every country and ii) limited to an upper value, aka capped. If the required bandwidth to stay up to date with a topic is higher than what the node has available, then it will start losing messages and won't be able to stay up to date with the topic messages. Not to mention the problems this will cause to other services and applications being used by the user.

- The main problem is that one can't just chose the bandwidth it allocates to `relay`. One could set the maximum bandwidth willing to allocate to `store` but this is not how `relay` works. The required bandwidth is not set by the node, but by the network. If a pubsub topic `a` has a traffic of 50 Mbps (which is the sum of all messages being sent multiplied by its size, times the D_out degree), then if a node wants to stay up to date in that topic, and relay traffic in it, then it will require 50 Mbps. There is no thing such as "partially contribute" to the topic (with eg 25Mbps) because then you will be losing messages, becoming an unreliable peer. The network sets the pace.
+ The main problem is that a node can't just choose the bandwidth it allocates to `relay`. One could set the maximum bandwidth willing to allocate to `store`, but this is not how `relay` works. The required bandwidth is not set by the node, but by the network. If a pubsub topic `a` has a traffic of 50 Mbps (which is the sum of all messages being sent multiplied by their size, times the D_out degree), then a node that wants to stay up to date in that topic, and relay traffic in it, will require 50 Mbps. There is no such thing as "partially contributing" to the topic (with eg 25 Mbps), because then you will be losing messages, becoming an unreliable peer and potentially being disconnected. The network sets the pace.

- So waku needs an upper boundary on the in/out bandwidth (mbps) it consumes. Just like apps have requirements on cpu and memory, we should set a requirement on bandwidth, and then guarantee that if you have that bandwidth, you will be able to run a node without any problem. And this is the tricky part.
+ So waku needs an upper boundary on the in/out bandwidth (mbps) it consumes. Just like apps have requirements on cpu and memory, we should set a requirement on bandwidth, and then guarantee that if you have that bandwidth, you will be able to run a node without any problem. And this is the tricky part. This metric is Waku's constraint, similar to the gas-per-block limit in blockchains.
- ## Current approach
+ ## Previous Work

- With the recent productisation effort of RLN, we have part of the problem solved, but not entirely. RLN offers an improvement, since now have a pseudo-identity (RLN membership) that can be used to rate limit users, enforcing a limit on how often it can send a message (eg 1 message every 10 seconds). We assume of course, that getting said RLN membership requires to pay something, or put something at stake, so that it can't be sibyl attacked.
+ A quick summary of the evolution to solve this problem:
+ * Waku started with no rate-limiting mechanism. The network was subject to DoS attacks.
+ * RLN v1 was introduced, which allowed rate-limiting in a privacy-preserving and anonymous way. The rate limit can be configured to 1 message every `y` seconds. However, this didn't offer much granularity. A low `y` would allow too many messages and a high `y` would make the protocol unusable (impossible to send two messages in a row).
+ * RLN v2 was introduced, which allows rate-limiting each user to `x` messages every `y` seconds. This offers the granularity we need. It is the current solution deployed in The Waku Network.

- Rate limiting with RLN so that each entity just sends 1 message every x seconds indeed solves the spam problem but it doesn't per se cap the traffic. In order to cap the traffic, we would first need to cap the amount of memberships we allow. Lets see an example:
- - We limit to 10.000 RLN memberships
- - Each ones is rate limited to send 1 message/10 seconds
- - Message size of 50 kBytes
+ ## Current Solution (RLN v2)

- Having this, the worst case bandwidth that we can theoretically have, would be if all of the memberships publish messages at the same time, with the maximum size, continuously. That is `10.000 messages/sec * 50 kBytes = 500 MBytes/second`. This would be a burst every 10 seconds, but enough to leave out the majority of the nodes. Of course this assumption is not realistic as most likely not everyone will continuously send messages at the same time and the size will vary. But in theory this could happen.
+ The current solution to this problem is the usage of RLN v2, which allows rate-limiting `x` messages every `y` seconds. On top of this, the introduction of [WAKU2-RLN-CONTRACT](https://github.com/waku-org/specs/blob/master/standards/core/rln-contract.md) enforces a maximum number of messages that can be sent to the network per `epoch`. This is achieved by limiting the number of memberships that can be registered. The current values are:
+ * `R_{max}`: 160000 msgs/epoch
+ * `r_{max}`: 600 msgs/epoch
+ * `r_{min}`: 20 msgs/epoch
- A naive (and not practical) way of fixing this, would be to design the network for this worst case. So if we want to cap the maximum bandwidth to 5 MBytes/s then we would have different options on the maximum i) amount of RLN memberships and ii) maximum message size:
- - `1.000` RLN memberships, `5` kBytes message size: `1000 * 5 = 5 MBytes/s`
- - `10.000` RLN memberships, `500` Bytes message size: `10000 * 0.5 = 5 MBytes/s`
+ In other words, the contract limits the number of memberships that can be registered to between `266` and `8000`, depending on which rate limits users choose (`160000/600 ≈ 266` memberships if everyone registers the maximum rate, `160000/20 = 8000` if everyone registers the minimum).

- In both cases we cap the traffic, however, if we design The Waku Network like this, it will be massively underutilized. As an alternative, the approach we should follow is to rely on statistics, and assume that i) not everyone will be using the network at the same time and ii) message size will vary. So while its impossible to guarantee any capped bandwidth, we should be able to guarantee that with 95 or 99% confidence the bandwidth will stay around a given value, with a maximum variance.
+ On the other hand, [64/WAKU2-NETWORK](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/64/network.md) states that:
+ * `rlnEpochSizeSec`: 600. Meaning the epoch size is 600 seconds.
+ * `maxMessageSize`: 150KB. Meaning the maximum message size that is allowed. Note: recommended average of 4KB.

- The current RLN approach of rate limiting 1 message every x seconds is not very practical. The current RLN limitations are enforced on 1 message every x time (called `epoch`). So we currently can allow 1 msg per second or 1 msg per 10 seconds by just modifying the `epoch` size. But this has some drawbacks. Unfortunately, neither of the options are viable for waku:
- 1. A small `epoch` size (eg `1 seconds`) would allow a membership to publish `24*3600/1=86400` messages a day, which would be too much. In exchange, this allows a user to publish messages right after the other, since it just have to wait 1 second between messages. Problem is that having an rln membership being able to publish this amount of messages, is a bit of a liability for waku, and hinders scalability.
- 2. A high `epoch` size (eg `240 seconds`) would allow a membership to publish `24*3600/240=360` messages a day, which is a more reasonable limit, but this won't allow a user to publish two messages one right after the other, meaning that if you publish a message, you have to way 240 seconds to publish the next one. Not practical, a no go.
+ Putting this all together and assuming that:
+ * messages are sent uniformly distributed over time, and
+ * every user fully consumes their rate limit,

- But what if we widen the window size, and allow multiple messages within that window?
+ we can expect the following message rate and bandwidth for the whole network:
+ * A traffic of `266 msg/second` on average (`160000/600`)
+ * A traffic of `6 MBps` on average (`266 * 4KB * 6`), where `4KB` is the average message size and `6` is the average gossipsub D-out degree.
- ## Solution

- In order to fix this, we need bigger windows sizes, to smooth out particular bursts. Its fine that a user publishes 20 messages in one second, as long as in a wider window it doesn't publish more than, lets say 500. This wider window could be a day. So we could say that a membership can publish `250 msg/day`. With this we solve i) and ii) from the previous section.

- Some quick napkin math on how this can scale:
- - 10.000 RLN memberships
- - Each RLN membership allow to publish 250 msg/day
- - Message size of 5 kBytes

- Assuming a completely random distribution:
- - 10.000 * 250 = 2 500 000 messages will be published a day (at max)
- - A day has 86 400 seconds. So with a random distribution we can say that 30 msg/sec (at max)
- - 30 msg/sec * 5 kBytes/msg = 150 kBytes/sec (at max)
- - Assuming D_out=8: 150 kBytes/sec * 8 = 1.2 MBytes/sec (9.6 Mbits/sec)

- So while its still not possible to guarantee 100% the maximum bandwidth, if we rate limit per day we can have better guarantees. Looking at these numbers, considering a single shard, it would be feasible to serve 10.000 users considering a usage of 250 msg/day.

- TODO: Analysis on 95%/99% interval confidence on bandwidth given a random distribution.

- ## TLDR

- - Waku should guarantee a capped bandwidth so that everyone can run a node.
- - The guarantee is a "statistical guarantee", since there is no way of enforcing a strict limit.
- - Current RLN approach is to rate limit 1 message every x seconds. A better approach would be x messages every day, which helps achieving such bandwidth limit.
- - To follow up: Variable RLN memberships. Eg. allow to chose tier 1 (100msg/day) tier 2 (200msg/day) etc.
+ And assuming a uniform distribution of traffic among 8 shards:
+ * `33 msg/second` per shard.
+ * `0.75 MBps` per shard.
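As a sanity check on these figures, a short script reproducing the arithmetic (a sketch; the constants are the contract and RFC values quoted above, and the 4KB average size and D-out of 6 are the stated assumptions):

```js
// Network-wide RLN budget and epoch size
const R_MAX = 160000;   // msgs/epoch (WAKU2-RLN-CONTRACT)
const EPOCH_SEC = 600;  // rlnEpochSizeSec (64/WAKU2-NETWORK)
const AVG_MSG_KB = 4;   // recommended average message size
const D_OUT = 6;        // average gossipsub D-out degree
const SHARDS = 8;

const msgPerSec = R_MAX / EPOCH_SEC;                  // ≈ 266 msg/s network-wide
const mBps = (msgPerSec * AVG_MSG_KB * D_OUT) / 1000; // ≈ 6.4 MBps network-wide
console.log(msgPerSec.toFixed(0), mBps.toFixed(2));

// Per shard, assuming traffic spreads uniformly over the 8 shards:
console.log((msgPerSec / SHARDS).toFixed(1), (mBps / SHARDS).toFixed(2)); // ≈ 33.3 msg/s, ≈ 0.80 MBps
```

(The post's `0.75 MBps` per-shard figure comes from dividing the rounded `6 MBps` by 8.)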