The [original plan](https://vac.dev/waku-v2-plan) for Waku v2 suggested theoretical improvements in resource usage over Waku v1,
mainly as a result of the improved amplification factors provided by GossipSub.
In turn, [Waku v1 proposed improvements](https://vac.dev/fixing-whisper-with-waku) over its predecessor, Whisper.
Given that Waku v2 is aimed at resource restricted environments,
we are specifically interested in its scalability and resource usage characteristics.
However, the theoretical performance improvements of Waku v2 over Waku v1
have never been properly benchmarked and tested.
Although we're working towards a full performance evaluation of Waku v2,
such an evaluation would require significant planning and resources
to simulate "real world" conditions faithfully and measure bandwidth and resource usage across different network connections,
robustness against attacks/losses, message latencies, etc.
(There already exists a fairly comprehensive [evaluation of GossipSub v1.1](https://research.protocol.ai/publications/gossipsub-v1.1-evaluation-report/vyzovitis2020.pdf),
on which [`11/WAKU2-RELAY`](https://rfc.vac.dev/spec/11/) is based.)
As a starting point,
this post contains a limited and local comparison of the _bandwidth_ profile (only) between Waku v1 and Waku v2.
It reuses and adapts existing network simulations for [Waku v1](https://github.com/status-im/nim-waku/blob/master/waku/v1/node/quicksim.nim) and [Waku v2](https://github.com/status-im/nim-waku/blob/master/waku/v2/node/quicksim2.nim)
and compares bandwidth usage for similar message propagation scenarios.
Message routing in Waku v1 is flood-based:
every peer forwards every new incoming message to all its connected peers (except the one it received the message from).
This necessarily leads to unnecessary duplication (quantified by the _amplification factor_),
wasting bandwidth and resources.
What's more, we expect this effect to worsen the larger the network becomes,
as each _connection_ will receive a copy of each message,
rather than a single copy per peer.
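To make the cost of flooding concrete, here is a back-of-envelope sketch in Python (with purely hypothetical network sizes and connection counts) comparing the number of transmissions a single flooded message causes with the minimum needed to reach every peer:

```python
# Back-of-envelope estimate of what it costs to flood one message through a
# network. All sizes and connection counts below are hypothetical.

def flood_transmissions(num_peers: int, avg_connections: int) -> int:
    """Under flood routing, every peer forwards each new message on all of its
    connections except the one it arrived on, so roughly every connection in
    the network carries a copy of every message."""
    num_connections = num_peers * avg_connections // 2
    return num_connections

def minimum_transmissions(num_peers: int) -> int:
    """A message only *needs* to reach each of the other peers once."""
    return num_peers - 1

for peers, degree in [(10, 9), (100, 30), (1000, 50)]:
    flood = flood_transmissions(peers, degree)
    needed = minimum_transmissions(peers)
    print(f"{peers:>4} peers, {degree:>2} connections each: "
          f"~{flood} transmissions vs {needed} needed "
          f"(amplification ~{flood / needed:.1f}x)")
```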
Message routing in Waku v2 follows the `libp2p` _GossipSub_ protocol,
which lowers amplification factors by only sending full message contents to a subset of connected peers.
As a Waku v2 network grows, each peer will limit its number of full-message ("mesh") peerings -
`libp2p` suggests a maximum of `12` such connections per peer.
This allows much better scalability than a flood-routed network.
From time to time, a Waku v2 peer will send metadata about the messages it has seen to other peers ("gossip" peers).
See [this explainer](https://hackmd.io/@vac/main/%2FYYlZYBCURFyO_ZG1EiteWg#11WAKU2-RELAY-gossipsub) for a more detailed discussion.
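A similarly rough sketch contrasts the two approaches for full-message traffic: under flooding the number of full copies a peer receives grows with its connection count, while under GossipSub it is capped by the mesh size (the `12` mentioned above). Gossip metadata overhead is ignored in this sketch.

```python
# Rough contrast between flood routing and GossipSub-style mesh routing for
# full-message traffic. MESH_CAP is the suggested maximum number of
# full-message ("mesh") peerings mentioned above; gossip metadata overhead
# is ignored here.

MESH_CAP = 12

def flood_copies_per_peer(connections: int) -> int:
    # Under flooding, a peer receives one full copy per connection.
    return connections

def gossipsub_copies_per_peer(connections: int) -> int:
    # Under GossipSub, full copies only arrive over mesh peerings; the other
    # connections carry only lightweight "I have seen message X" gossip.
    return min(connections, MESH_CAP)

for connections in (9, 25, 50, 100):
    print(f"{connections:>3} connections: "
          f"flood ~{flood_copies_per_peer(connections)} full copies per message, "
          f"GossipSub ~{gossipsub_copies_per_peer(connections)} full copies per message")
```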
## Methodology
The results below contain only some scenarios that provide an interesting contrast between Waku v1 and Waku v2.
For example, [star network topologies](https://en.wikipedia.org/wiki/Star_network) do not show a substantial difference between Waku v1 and Waku v2.
This is because each peer relies on a single connection to the central node for every message,
leaving barely any routing decisions to make:
each connection receives a copy of every message for both Waku v1 and Waku v2.
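As a rough illustration (with hypothetical network sizes), every message in a star crosses each spoke exactly once, regardless of routing protocol, so flooding and GossipSub cost the same:

```python
# Per-message transmission count in a star topology: a central hub plus
# `num_leaves` leaf peers. Hypothetical sizes, purely illustrative.

def star_transmissions(num_leaves: int) -> int:
    """A message published by a leaf travels up its own spoke once and is then
    forwarded down every other spoke once: one transmission per spoke.
    With only one connection per leaf there are no routing choices to make,
    so flood routing and mesh routing behave identically."""
    num_spokes = num_leaves  # one connection between each leaf and the hub
    return num_spokes

for leaves in (10, 100, 1000):
    print(f"star with {leaves} leaves: "
          f"{star_transmissions(leaves)} transmissions per message "
          f"(same for Waku v1 flooding and Waku v2 GossipSub)")
```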
Hybrid topologies similarly show a difference between Waku v1 and Waku v2 only for network segments with [mesh-like connections](https://en.wikipedia.org/wiki/Mesh_networking),
where routing decisions need to be made.
For this reason, the following approach applies to all iterations:
1. Traffic in each network is **generated from 10 nodes** (randomly selected) and published in a round-robin fashion to **10 topics** (content topics for Waku v2).
   In practice, we found no significant difference in _average_ bandwidth usage when tweaking these two parameters (the number of traffic-generating nodes and the number of topics).
2. After running each iteration, we **verify that messages propagated to all peers** (comparing the number of published messages to the metrics logged by each peer). A sketch of this publish-and-verify loop is shown below.
For Waku v1, nodes are configured as "full" nodes (i.e. with a full bloom filter),
while Waku v2 nodes are `relay` nodes, all subscribing and publishing to the same PubSub topic.
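The actual scenario drivers are the linked nim-waku simulations; the Python sketch below, with hypothetical `publish` and `delivered_count` helpers standing in for the real simulation plumbing, only illustrates the publish-and-verify loop described above.

```python
import itertools
import random

# Illustrative sketch of the traffic pattern and delivery check described
# above. `publish` and `delivered_count` are hypothetical stand-ins for the
# real nim-waku simulation plumbing.

NUM_TRAFFIC_NODES = 10
NUM_TOPICS = 10
NUM_MESSAGES = 1000  # hypothetical number of messages per iteration

def run_iteration(all_peers, publish, delivered_count):
    topics = [f"topic-{i}" for i in range(NUM_TOPICS)]
    senders = random.sample(all_peers, NUM_TRAFFIC_NODES)  # randomly selected
    topic_cycle = itertools.cycle(topics)
    sender_cycle = itertools.cycle(senders)

    # Publish round-robin: each message goes to the next topic in turn.
    for _ in range(NUM_MESSAGES):
        publish(next(sender_cycle), next(topic_cycle), b"payload")

    # Verify propagation: every peer should report having seen every message.
    for peer in all_peers:
        assert delivered_count(peer) == NUM_MESSAGES, f"{peer} missed messages"
```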
## Network size comparison
### Iteration 1: 10 nodes
Let's start with a small network of 10 nodes only and see how Waku v1 bandwidth usage compares to that of Waku v2.
At this small scale we don't expect to see improved bandwidth usage in Waku v2 over Waku v1,
since all connections, for both Waku v1 and Waku v2, will be full-message connections.
The number of connections is low enough that Waku v2 nodes will likely GRAFT all connections to full-message peerings,
essentially flooding every message on every connection in a similar fashion to Waku v1.
If this expectation is confirmed, it helps validate our methodology,
showing that it gives roughly equivalent results for Waku v1 and Waku v2 networks.
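A simple way to see why all connections become full-message at this scale (assuming, for illustration, full connectivity between the 10 nodes):

```python
# Why a 10-node network behaves like flooding: if the 10 nodes are all
# connected to each other (an assumption for illustration), every peer has
# 9 connections, which is below the suggested cap of 12 full-message peerings.
# GossipSub can therefore GRAFT every connection into the mesh, and each
# message still travels over every connection, just as with Waku v1 flooding.

NUM_NODES = 10
MESH_CAP = 12  # suggested maximum full-message peerings per peer

connections_per_peer = NUM_NODES - 1
print(f"connections per peer: {connections_per_peer}, mesh cap: {MESH_CAP}")
print("all connections can become full-message:", connections_per_peer <= MESH_CAP)
```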
- As the network continues to scale, both in absolute terms (number of nodes) and in network traffic (message rates), the disparity between Waku v2 and Waku v1 becomes even larger.
## Future work
Now that we've confirmed that Waku v2's bandwidth improvements over its predecessor match theory,
we can proceed to a more in-depth characterisation of Waku v2's resource usage.
- What proportion of Waku v2's bandwidth usage is used to propagate _payload_ versus bandwidth spent on _control_ messaging to maintain the mesh?
- To what extent is message latency (time until a message is delivered to its destination) affected by network size and message rate?
- How _reliable_ is message delivery in Waku v2 for different network sizes and message rates?
- What are the resource usage profiles of other Waku v2 protocols (e.g. [`12/WAKU2-FILTER`](https://rfc.vac.dev/spec/12/) and [`19/WAKU2-LIGHTPUSH`](https://rfc.vac.dev/spec/19/))?
Our aim is to get ever closer to a "real world" understanding of Waku v2's performance characteristics,
identify and fix vulnerabilities,
and continually improve the efficiency of our suite of protocols.
## References
- [Evaluation of GossipSub v1.1](https://research.protocol.ai/publications/gossipsub-v1.1-evaluation-report/vyzovitis2020.pdf)
- [Fixing Whisper with Waku](https://vac.dev/fixing-whisper-with-waku)
- [GossipSub vs flood routing](https://hackmd.io/@vac/main/%2FYYlZYBCURFyO_ZG1EiteWg#11WAKU2-RELAY-gossipsub)