<li>Editor: Daniel Kaiser <a href="mailto:danielkaiser@status.im">danielkaiser@status.im</a></li>
</ul>
<h1 id="abstract">Abstract</h1>
<p>This document describes how to scale <a href="/spec/56/">56/STATUS-COMMUNITIES</a> as well as <a href="/spec/55/">55/STATUS-1TO1-CHAT</a>
using existing Waku v2 protocols and components.
It also adds a few new aspects for which more sophisticated components have not yet been researched and evaluated.</p>
<blockquote>
<p><em>Note:</em> (Parts of) this RFC will be deprecated in the future as we continue research to scale specific components
in a way that aligns better with our principles of decentralization and protecting anonymity.
This document describes scaling at the current stage of research and shows that it is practically feasible.
Practical feasibility is a core goal for us.
We believe in incremental improvement: a working decentralized scaling solution with trade-offs is better than a fully centralized solution.</p>
<p><ahref="/spec/56/">56/STATUS-COMMUNITIES</a> as well as <ahref="/spec/55/">55/STATUS-1TO1-CHAT</a> use Waku v2 protocols.
Both use Waku content topics (see <ahref="/spec/23/">23/WAKU2-TOPICS</a>) for content based filtering.</p>
<p>Waku v2 currently has scaling limitations in two dimensions:</p>
<ol>
<li>
<p>Messages that are part of a specific content topic have to be disseminated in a single mesh network (i.e. pubsub topic).
This limits scaling the number of messages disseminated in a specific content topic,
and by extension, the number of active nodes that are part of this content topic.</p>
</li>
<li>
<p>Scaling a large set of content topics requires distributing these over several mesh networks (which this document refers to as pubsub topic shards).</p>
</li>
</ol>
<p>This document focuses on the second scaling dimension.
With the scaling solutions discussed in this document,
each content topic can have a large set of active users, but still has to fit into a single pubsub mesh.</p>
<blockquote>
<p><em>Note:</em> While it is possible to use the same content topic name on several shards,
each node that is interested in this content topic has to be subscribed to all respective shards, which does not scale.
Splitting content topics in a more sophisticated and efficient way will be part of a future document.</p>
</blockquote>
<h1id="relay-shards">
Relay Shards
<aclass="anchor"href="#relay-shards">#</a>
</h1>
<p>Sharding the <a href="/spec/11/">Waku Relay</a> network is an integral part of scaling the Status app.</p>
<p><ahref="/spec/51/">51/WAKU2-RELAY-SHARDING</a> specifies shards clusters, which are sets of <code>1024</code> shards (separate pubsub mesh networks).
Content topics specified by application protocols can be distributed over these shards.
The Status app protocols are assigned to shard cluster <code>16</code>,
as defined in <ahref="/spec/52/">52/WAKU2-RELAY-STATIC-SHARD-ALLOC</a>.</p>
<p><ahref="/spec/51/">51/WAKU2-RELAY-SHARDING</a> specifies three sharding methods.
This document uses <em>static sharding</em>, which leaves the distribution of content topics to application protocols,
but takes care of shard discovery.</p>
<p>The 1024 shards within the main Status shard cluster are allocated as follows.</p>
<h2id="shard-allocation">
Shard Allocation
<aclass="anchor"href="#shard-allocation">#</a>
</h2>
<table>
<thead>
<tr>
<th>shard index</th>
<th>usage</th>
</tr>
</thead>
<tbody>
<tr>
<td>0 - 15</td>
<td>reserved</td>
</tr>
<tr>
<td>16 - 127</td>
<td>specific (large) communities</td>
</tr>
<tr>
<td>128 - 767</td>
<td>communities</td>
</tr>
<tr>
<td>768 - 895</td>
<td>1:1 chat</td>
</tr>
<tr>
<td>896 - 1023</td>
<td>media and control messages</td>
</tr>
</tbody>
</table>
<p>Shard indices are mapped to pubsub topic names as follows (specified in <a href="/spec/51/">51/WAKU2-RELAY-SHARDING</a>).</p>
<p>An example for the shard with index <code>18</code> in the Status shard cluster:</p>
<p><code>/waku/2/rs/16/18</code></p>
<p>In other words, the mesh network with the pubsub topic name <code>/waku/2/rs/16/18</code> carries messages associated with shard <code>18</code> in the Status shard cluster.</p>
<p>Pairing this pubsub topic with a content topic, for example <code>"status/xyz"</code>, means: connect to the <code>"status/xyz"</code> content topic on shard <code>18</code> within the Status shard cluster.</p>
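<p>For illustration, a minimal Go sketch of this naming scheme (the function name is chosen here and is not part of any Waku library API):</p>
<pre><code>package main

import "fmt"

// staticShardPubsubTopic builds the pubsub topic name for a static shard,
// following the naming scheme of 51/WAKU2-RELAY-SHARDING: /waku/2/rs/&lt;cluster&gt;/&lt;shard&gt;.
func staticShardPubsubTopic(clusterIndex, shardIndex uint16) string {
	return fmt.Sprintf("/waku/2/rs/%d/%d", clusterIndex, shardIndex)
}

func main() {
	// Shard 18 in the Status shard cluster (cluster index 16).
	fmt.Println(staticShardPubsubTopic(16, 18)) // prints /waku/2/rs/16/18
}
</code></pre>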
<h2id="status-communities">
Status Communities
<aclass="anchor"href="#status-communities">#</a>
</h2>
<p>In order to associate a community with a shard,
the community description protobuf is extended by the field
<pre><code>message CommunityDescription {
  // The Lamport timestamp of the message
  uint64 clock = 1;
  // A mapping of members in the community to their roles
  map&lt;string,CommunityMember&gt; members = 2;
  // The permissions of the Community
  CommunityPermissions permissions = 3;
  // The metadata of the Community
  ChatIdentity identity = 5;
  // A mapping of chats to their details
  map&lt;string,CommunityChat&gt; chats = 6;
  // A list of banned members
  repeated string ban_list = 7;
  // A mapping of categories to their details
  map&lt;string,CommunityCategory&gt; categories = 8;
  // The admin settings of the Community
  CommunityAdminSettings admin_settings = 10;
  // If the community is encrypted
  bool encrypted = 13;
  // The list of tags
  repeated string tags = 14;
  // index of the community's shard within the Status shard cluster
  uint32 shard_index = 15;
}
</code></pre>
<blockquote>
<p><em>Note</em>: Currently, the Status app has allocated shard cluster <code>16</code> in <a href="/spec/52/">52/WAKU2-RELAY-STATIC-SHARD-ALLOC</a>.
The Status app could allocate more shard clusters, for instance to establish a test net.
We could add the shard cluster index to the community description as well.
The recommendation for now is to keep it as a configuration option of the Status app.</p>
</blockquote>
<blockquote>
<p><em>Note</em>: Once this RFC moves forward, the new community description protobuf fields should be mentioned in <a href="https://rfc.vac.dev/spec/56/">56/STATUS-COMMUNITIES</a>.</p>
</blockquote>
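<p>As a non-normative illustration in Go (the function name is chosen here and is not part of any specification), a client could sanity-check a received <code>shard_index</code> against the community ranges of the shard allocation table above:</p>
<pre><code>package main

import "fmt"

// validCommunityShard checks whether a community's shard_index falls within the
// ranges this document allocates for communities in the Status shard cluster:
// 16 - 127 for specific (large) communities and 128 - 767 for owner-mapped communities.
func validCommunityShard(shardIndex uint32) bool {
	return shardIndex >= 16 && shardIndex <= 767
}

func main() {
	for _, idx := range []uint32{16, 500, 900} {
		fmt.Printf("shard %d valid for communities: %v\n", idx, validCommunityShard(idx))
	}
}
</code></pre>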
<p>Status communities can be mapped to shards in two ways: static, and owner-based.</p>
<h3id="static-mapping">
Static Mapping
<aclass="anchor"href="#static-mapping">#</a>
</h3>
<p>With static mapping, communities are assigned a specific shard index within the Status shard cluster.
This mapping is similar in nature to the shard cluster allocation in <a href="/spec/52/">52/WAKU2-RELAY-STATIC-SHARD-ALLOC</a>.
Shard indices allocated in this way are in the range <code>16 - 127</code>.
The Status CC community uses index <code>16</code> (not to be confused with shard cluster index <code>16</code>, which identifies the Status shard cluster).</p>
<h3id="owner-mapping">
Owner Mapping
<aclass="anchor"href="#owner-mapping">#</a>
</h3>
<blockquote>
<p><em>Note</em>: This way of mapping will be specified post-MVP.</p>
</blockquote>
<p>Community owners can choose to map their communities to any shard within the index range <code>128 - 767</code>.</p>
<h2id="11-chat">
1:1 Chat
<aclass="anchor"href="#11-chat">#</a>
</h2>
<p><ahref="/spec/55">55/STATUS-1TO1-CHAT</a> uses partitioned topics to map 1:1 chats to a set of 5000 content topics.
This document extends this mapping to 8192 content topics that are, in turn, mapped to 128 shards in the index range of <code>768 - 895</code>.</p>
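<p>The sketch below (Go, non-normative) illustrates this extended mapping. Interpreting the recipient's public key as an integer follows the partitioned-topic scheme of 55/STATUS-1TO1-CHAT; folding the <code>8192</code> partitions onto the <code>128</code> shards with a simple modulo is an assumption made here for illustration only.</p>
<pre><code>package main

import (
	"fmt"
	"math/big"
)

const (
	numPartitions  = 8192 // number of 1:1 chat content topic partitions (extended from 5000)
	numChatShards  = 128  // shards reserved for 1:1 chat, index range 768 - 895
	firstChatShard = 768
)

// chatPartition maps a recipient's public key to one of the 8192 partitions,
// analogous to the partitioned-topic scheme of 55/STATUS-1TO1-CHAT
// (the key bytes are interpreted as a big-endian integer).
func chatPartition(publicKey []byte) uint64 {
	k := new(big.Int).SetBytes(publicKey)
	return new(big.Int).Mod(k, big.NewInt(numPartitions)).Uint64()
}

// chatShard folds a partition onto a shard in the 1:1 chat index range.
// The modulo fold used here is an assumption for illustration;
// the normative mapping is defined by the Status app protocols.
func chatShard(partition uint64) uint64 {
	return firstChatShard + partition%numChatShards
}

func main() {
	pubKey := []byte{0x04, 0xab, 0xcd, 0xef} // placeholder public key bytes
	p := chatPartition(pubKey)
	fmt.Printf("partition %d -> shard %d (pubsub topic /waku/2/rs/16/%d)\n", p, chatShard(p), chatShard(p))
}
</code></pre>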
<h2 id="control-message-shards">Control Message Shards</h2>
<p>Waku messages are typically relayed in larger mesh networks comprised of nodes with varying resource profiles (see <a href="/spec/30/">30/ADAPTIVE-NODES</a>).
To maximise scaling, relaying of specific message types can be dedicated to shards on which only infrastructure nodes with very strong resource profiles relay messages.
This comes as a trade-off to decentralization.</p>
<p>To get the maximum scaling for selected large communities in the Status scaling MVP,
specific control messages that cause significant load (at high user counts) SHOULD be moved to a separate control message shard.
These control messages comprise:</p>
<ul>
<li>community description</li>
<li>membership update</li>
<li>backup</li>
<li>community request to join response</li>
<li>sync profile picture</li>
</ul>
<p>The relay functionality of control message shards SHOULD be provided by infrastructure nodes.
Desktop clients should use light protocols as the default for control message shards.
Strong Desktop clients MAY opt in to support the relay network.</p>
<p>Each large community (in the index range <code>16 - 127</code>) can get a dedicated control message shard (in the index range <code>896 - 1023</code>) if deemed necessary.
The Status CC community uses shard <code>896</code> as its control message shard.
This comes with trade-offs to decentralization and anonymity (see <em>Security Considerations</em> section).</p>
<h2id="media-shards">
Media Shards
<aclass="anchor"href="#media-shards">#</a>
</h2>
<p>Similar to control messages, media-heavy communities should use separate media shards (in the index range <code>896 - 1023</code>) for disseminating messages with large media data.
The Status CC community uses shard <code>897</code> as its media shard.</p>
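<p>For illustration, the sketch below (in Go, names chosen for illustration) collects the shards the Status CC community uses under this allocation (community shard <code>16</code>, control message shard <code>896</code>, media shard <code>897</code>) and prints the corresponding pubsub topics in the Status shard cluster.</p>
<pre><code>package main

import "fmt"

// statusCCShards lists the shards the Status CC community uses under the
// allocation described in this document, all within the Status shard cluster (16).
var statusCCShards = map[string]uint16{
	"community": 16,
	"control":   896,
	"media":     897,
}

func main() {
	for usage, shard := range statusCCShards {
		fmt.Printf("%-9s /waku/2/rs/16/%d\n", usage, shard)
	}
}
</code></pre>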
<p>Large communities MAY choose to rely mainly on infrastructure nodes for <em>all</em> message transfers (not limited to control and media messages).
Desktop clients of such communities should use light protocols as the default.
Strong Desktop clients MAY opt in to support the relay network.</p>
<blockquote>
<p><em>Note</em>: This is not planned for the MVP.</p>
</blockquote>
<h1id="light-protocols">
Light Protocols
<aclass="anchor"href="#light-protocols">#</a>
</h1>
<p>Light protocols may be used to save bandwidth,
at the (global) cost of not contributing to the network.
Using light protocols is RECOMMENDED for resource-restricted nodes,
e.g. browsers,
and devices that (temporarily) have a low bandwidth connection or a connection with usage-based billing.</p>
<p>Light protocols comprise:</p>
<ul>
<li><a href="/spec/19/">19/WAKU2-LIGHTPUSH</a> for sending messages</li>
<li><a href="/spec/12/">12/WAKU2-FILTER</a> for requesting messages with specific attributes</li>
<li><a href="/spec/34">34/WAKU2-PEER-EXCHANGE</a> for discovering peers</li>
</ul>
<h1id="waku-archive">
Waku Archive
<aclass="anchor"href="#waku-archive">#</a>
</h1>
<p>Archive nodes are Waku nodes that offer the Waku archive service via the Waku store protocol (<a href="/spec/13/">13/WAKU2-STORE</a>).
They are part of a set of shards and store all messages disseminated in these shards.
Nodes can request historical messages via <a href="/spec/13/">13/WAKU2-STORE</a>.</p>
<p>The store service is not limited to a Status fleet.
Anybody can run a Waku Archive node in the Status shards.</p>
<blockquote>
<p><em>Note</em>: There is no specification for discovering archive nodes associated with specific shards yet.
Nodes expect archive nodes to store all messages, regardless of shard association.</p>
</blockquote>
<p>The recommendation for the allocation of archive nodes to shards is similar to the
allocation of infrastructure nodes to shards described above.
In fact, the archive service can be offered by infrastructure nodes.</p>
<h1id="discovery">
Discovery
<aclass="anchor"href="#discovery">#</a>
</h1>
<p>Shard discovery is covered by <a href="/spec/51/">51/WAKU2-RELAY-SHARDING</a>.
This allows the Status app to abstract from the discovery process and simply address shards by their index.</p>
<p>To make nodes behind restrictive NATs discoverable,
this document suggests using <a href="https://github.com/libp2p/specs/blob/master/rendezvous/README.md">libp2p rendezvous</a>.
Nodes can check whether they are behind a restrictive NAT using the <a href="https://github.com/libp2p/specs/blob/master/autonat/README.md">libp2p AutoNAT protocol</a>.</p>
<blockquote>
<p><em>Note:</em> The following will move into <a href="/spec/51/">51/WAKU2-RELAY-SHARDING</a> or <a href="/spec/33/">33/WAKU2-DISCV5</a>:
Nodes behind restrictive NATs SHOULD NOT announce their publicly unreachable address via <a href="/spec/33/">33/WAKU2-DISCV5</a> discovery.</p>
</blockquote>
<p>It is RECOMMENDED that nodes that are part of the relay network also act as rendezvous points.
This includes accepting register queries from peers, as well as answering rendezvous discover queries.
Nodes MAY opt out of the rendezvous functionality.</p>
<p>To allow nodes to initiate connections to peers behind restrictive NATs (after discovery via rendezvous),
it is RECOMMENDED that nodes that are part of the Waku relay network also offer
<a href="https://github.com/libp2p/specs/blob/master/relay/circuit-v2.md">libp2p circuit relay</a> functionality.</p>
<p>To minimize the load on circuit-relay nodes, nodes SHOULD</p>
<ol>
<li>make use of the <a href="https://github.com/libp2p/specs/blob/6634ca7abb2f955645243d48d1cd2fd02a8e8880/relay/circuit-v2.md#reservation">limiting</a>
functionality offered by the libp2p circuit relay protocols, and</li>
<li>use <a href="https://github.com/libp2p/specs/blob/master/relay/DCUtR.md">DCUtR</a> to upgrade to a direct connection.</li>
</ol>
<p>Nodes that do not announce themselves at all and only plan to use light protocols
MAY use rendezvous discovery instead of, or alongside, <a href="/specs/34">34/WAKU2-PEER-EXCHANGE</a>.
For these nodes, rendezvous and <a href="/specs/34">34/WAKU2-PEER-EXCHANGE</a> offer the same functionality,
but return node sets sampled in different ways.
Using both can help increase connectivity.</p>
<p>Nodes that are not behind restrictive NATs MAY register at rendezvous points, too;
this helps increase discoverability, and by extension, connectivity.
Such nodes SHOULD NOT, however, register at circuit relays.</p>
<p>Registering shard 2 in the Status shard cluster (with shard cluster index 16, see <a href="/spec/52/">52/WAKU2-RELAY-STATIC-SHARD-ALLOC</a>),