| title | name | category | status | tags | editor | contributors |
|---|---|---|---|---|---|---|
| RELIABLE-CHANNEL-API | Reliable Channel API | Standards Track | raw | | Franck Royer <franck@status.im> | |
Table of contents
- Table of contents
- Abstract
- Motivation
- Syntax
- API design
- Architecture
- The Reliable Channel API
- Implementation Suggestions
- Security/Privacy Considerations
- Copyright
Abstract
This document specifies an Application Programming Interface (API) for Reliable Channel, a high-level abstraction that provides eventual message consistency guarantees for all participants in a channel, as well as message segmentation and rate limit management when using an underlying rate limited delivery protocol with message size restrictions such as WAKU2.
The Reliable Channel is built on top of:
- WAKU-API for Waku protocol integration
- SDS (Scalable Data Sync) for causal ordering and acknowledgments
- Message segmentation for handling large payloads
- Rate limit management for WAKU2-RLN-RELAY compliance
The Reliable Channel API ensures that:
- All messages sent in a channel are eventually received by all participants
- Senders are notified when messages are acknowledged by other participants
- Missing messages are automatically detected and retrieved
- Message delivery is retried until acknowledged or maximum retry attempts are reached
- Messages are causally ordered using Lamport timestamps
- Large messages can be segmented to fit transport constraints
- Messages are queued or dropped when the underlying routing transport enforces a rate limit and that limit is being reached
Motivation
While protocols like SDS provide the mechanisms for achieving reliability (causal ordering, acknowledgments, missing message detection), and WAKU-API provides the transport layer, there is a need for an opinionated, high-level API that makes these capabilities accessible and easy to use.
The Reliable Channel API provides this accessibility by:
- Simplifying integration: Wraps SDS, Waku protocols, and other components (segmentation, rate limiting) behind a single, cohesive interface
- Providing sane defaults: Pre-configures SDS parameters, retry strategies, and sync intervals for common use cases
- Event-driven model: Exposes message lifecycle through intuitive events rather than requiring manual polling of SDS state
- Automatic task scheduling: Handles the periodic execution of SDS tasks (sync, buffer sweeps) internally
- Abstracting complexity: Hides the details of:
- SDS message wrapping/unwrapping
- Store queries for missing messages
- Message segmentation for large payloads
- Rate limit compliance when using WAKU2-RLN-RELAY
The goal is to enable application developers to achieve end-to-end reliability with minimal configuration and without deep knowledge of the underlying protocols. This follows the same philosophy as WAKU-API: providing an opinionated, accessible interface to powerful but complex underlying mechanisms.
Syntax
The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC2119.
API design
IDL
This specification uses the same custom Interface Definition Language (IDL) in YAML as defined in WAKU-API.
Primitive types and general guidelines
The primitive types and general guidelines are the same as defined in WAKU-API.
Architecture
The Reliable Channel is a layered architecture that combines multiple components:
```
┌─────────────────────────────────────────┐
│ Reliable Channel API │ ← Application-facing event-driven API
├─────────────────────────────────────────┤
│ Message Segmentation │ ← Large message splitting/reassembly
├─────────────────────────────────────────┤
│ Rate Limit Manager │ ← WAKU2-RLN-RELAY compliance & pacing
├─────────────────────────────────────────┤
│ SDS (Scalable Data Sync) │ ← Causal ordering & acknowledgments
├─────────────────────────────────────────┤
│ WAKU-API (LightPush/Filter/Store) │ ← Message transport layer
└─────────────────────────────────────────┘
```
SDS Integration
The Reliable Channel wraps the SDS MessageChannel, which provides:
- Causal ordering: Using Lamport timestamps to establish message order
- Acknowledgments: Via causal history (definitive) and bloom filters (probabilistic)
- Missing message detection: By tracking gaps in causal history
- Buffering: For unacknowledged outgoing messages and incoming messages with unmet dependencies
The Reliable Channel handles the integration between SDS and Waku protocols:
- Wrapping user payloads in SDS messages before encoding
- Unwrapping SDS messages after decoding (extracting the `content` field)
- Subscribing to messages via the Waku node's unified subscribe API
- Receiving messages via the node's message emitter (content-topic based)
- Scheduling SDS periodic tasks (sync, buffer sweeps, process tasks)
- Mapping SDS events to user-facing events
- Computing retrieval hints (Waku message hashes) for SDS messages
Message Segmentation
For messages exceeding safe payload limits:
- Messages SHOULD be split into segments of approximately 100 KB
- The 100 KB limit accounts for overhead from:
- Encryption (message-encryption layer)
- WAKU2-RLN-RELAY proof
- SDS metadata (causal history, bloom filter, Lamport timestamp)
- Protocol buffers encoding
- This ensures the final encoded Waku message stays well below the 150 KB routing layer limit
- Each segment SHOULD be tracked independently through SDS
- Segments SHOULD be reassembled before delivery to the application
- Partial message state SHOULD be managed to handle segment loss
- Segment order MUST be preserved during reassembly
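The splitting rule above can be sketched as follows. This is a minimal illustration, not the normative segmentation algorithm (which is TODO below); the `Segment` shape and `segmentPayload` name are hypothetical.

```typescript
// Hypothetical sketch: split a user payload into ~100 KB segments,
// leaving headroom for encryption, SDS metadata, RLN proof, and
// protobuf overhead below the 150 KB routing layer limit.
const SEGMENT_SIZE = 100 * 1024;

interface Segment {
  chunkIndex: number; // zero-based, 0 to totalChunks-1
  totalChunks: number;
  data: Uint8Array;
}

function segmentPayload(payload: Uint8Array): Segment[] {
  const totalChunks = Math.max(1, Math.ceil(payload.length / SEGMENT_SIZE));
  const segments: Segment[] = [];
  for (let i = 0; i < totalChunks; i++) {
    segments.push({
      chunkIndex: i,
      totalChunks,
      data: payload.subarray(i * SEGMENT_SIZE, (i + 1) * SEGMENT_SIZE),
    });
  }
  return segments;
}
```

A 250 KB payload yields three segments of 100 KB, 100 KB, and 50 KB, each tracked independently through SDS.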
TODO: refer to message segmentation spec
Rate Limit Management
When using WAKU2-RLN-RELAY:
- Regular messages and segments exceeding the rate limit SHOULD be queued
- Ephemeral messages SHOULD be dropped (not sent nor queued) when the rate limit is approached or reached
- "Approached" threshold SHOULD be configurable (e.g., drop ephemerals at 90% capacity)
- Rate limit errors SHOULD be surfaced through events
- When segmentation is enabled, each segment counts toward the rate limit independently
- Messages coming from a retry mechanism SHOULD be queued when the rate limit is approached
TODO: refer to rate limit manager spec
Note: at a later stage, we may prefer for the Waku node to expose rate limit information (total rate limit, usage so far) instead of having it configurable on the Reliable Channel.
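The queue-or-drop rules above can be summarized in a small decision function. This is a non-normative sketch; the `decide` name, slot-based accounting, and the 90% default threshold (taken from the example above) are illustrative assumptions.

```typescript
// Hypothetical sketch of the rate limit decision, assuming the rate limit
// manager tracks message slots used in the current epoch.
type SendDecision = "send" | "queue" | "drop";

function decide(
  isEphemeral: boolean,
  usedSlots: number,
  totalSlots: number,
  ephemeralDropThreshold = 0.9 // drop ephemerals at 90% capacity
): SendDecision {
  const utilization = usedSlots / totalSlots;
  if (isEphemeral) {
    // Ephemeral messages are dropped, never queued, once the limit is approached.
    return utilization >= ephemeralDropThreshold ? "drop" : "send";
  }
  // Regular messages and segments are queued once the limit is reached.
  return usedSlots >= totalSlots ? "queue" : "send";
}
```

Note that each segment of a segmented message passes through this decision independently.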
The Reliable Channel API
```yaml
api_version: "0.0.1"
library_name: "waku-reliable-channel"
description: "Reliable Channel: an event-driven API for eventual message consistency in Waku channels."
```
Create Reliable Channel
Type definitions
```yaml
types:
  ReliableChannel:
    type: object
    description: "A Reliable Channel instance that provides eventual consistency guarantees."
  ReliableChannelOptions:
    type: object
    fields:
      sync_min_interval_ms:
        type: uint
        default: 30000
        description: "The minimum interval between 2 sync messages in the channel (in milliseconds). This is a shared responsibility between channel participants. Set to 0 to disable automatic sync messages."
      retry_interval_ms:
        type: uint
        default: 30000
        description: "How long to wait before re-sending a message that has not been acknowledged (in milliseconds)."
      max_retry_attempts:
        type: uint
        default: 10
        description: "How many times to attempt resending messages that were not acknowledged."
      retrieve_frequency_ms:
        type: uint
        default: 10000
        description: "How often store queries are done to retrieve missing messages (in milliseconds)."
      sweep_in_buf_interval_ms:
        type: uint
        default: 5000
        description: "How often the SDS message channel incoming buffer is swept (in milliseconds)."
      query_on_connect:
        type: bool
        default: true
        description: "Whether to automatically do a store query after connection to store nodes."
      auto_start:
        type: bool
        default: true
        description: "Whether to automatically start the message channel."
      process_task_min_elapse_ms:
        type: uint
        default: 1000
        description: "The minimum elapsed time between calls to the underlying channel process task for incoming messages. This prevents overload when processing many messages."
      causal_history_size:
        type: uint
        description: "The number of recent messages to include in causal history. Passed to the underlying SDS MessageChannel."
      bloom_filter_size:
        type: uint
        description: "The size of the bloom filter for probabilistic acknowledgments. Passed to the underlying SDS MessageChannel."
  ChannelId:
    type: string
    description: "An identifier for the channel. All participants of the channel MUST use the same id."
  SenderId:
    type: string
    description: "An identifier for the sender. SHOULD be unique per participant and persisted between sessions to ensure acknowledgements are only valid when originating from different senders."
  MessageId:
    type: string
    description: "A unique identifier for a logical message, derived from the message payload before segmentation."
  MessagePayload:
    type: array<byte>
    description: "The unwrapped message content (user payload extracted from the SDS message)."
  ChunkInfo:
    type: object
    description: "Information about message segmentation for tracking partial progress."
    fields:
      chunk_index:
        type: uint
        description: "Zero-based index of the current chunk (0 to total_chunks-1)."
      total_chunks:
        type: uint
        description: "Total number of chunks for this message."
```
Function definitions
```yaml
functions:
  createReliableChannel:
    description: "Create a new Reliable Channel instance. All participants in the channel MUST be able to decrypt messages and MUST subscribe to the same content topic(s)."
    parameters:
      - name: waku_node
        type: WakuNode
        description: "The Waku node instance to use for sending and receiving messages."
      - name: channel_id
        type: ChannelId
        description: "An identifier for the channel. All participants MUST use the same id."
      - name: sender_id
        type: SenderId
        description: "An identifier for this sender. SHOULD be unique and persisted between sessions."
      - name: content_topic
        type: ContentTopic
        description: "The content topic to use for the channel."
      - name: options
        type: ReliableChannelOptions
        description: "Configuration options for the Reliable Channel."
    returns:
      type: result<ReliableChannel, error>
```
Extended definitions
Default configuration values: See the SDS Implementation Suggestions section for recommended default values for ReliableChannelOptions.
channel_id and sender_id:
The channel_id MUST be the same for all participants in a channel.
The sender_id SHOULD be unique for each participant and SHOULD be persisted between sessions to ensure proper acknowledgment tracking.
content_topic:
A Reliable Channel uses a unique content topic. This ensures that all messages are retrievable.
options.auto_start:
If set to true (default), the Reliable Channel SHOULD automatically call start() during creation.
If set to false, the application MUST call start() before the channel will process messages.
options.query_on_connect:
If set to true (default) and the Waku node has store capability,
the Reliable Channel SHOULD automatically query the store for missing messages when connecting to store nodes.
This helps ensure message consistency when participants come online after being offline.
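The defaults in the IDL above can be applied with a small merge helper. This is a non-normative sketch; the `withDefaults` name is a hypothetical convenience, and `causal_history_size`/`bloom_filter_size` are left optional since the IDL specifies no defaults for them.

```typescript
// Hypothetical sketch: merging caller-supplied options with the defaults
// defined in the ReliableChannelOptions IDL.
interface ReliableChannelOptions {
  sync_min_interval_ms: number;
  retry_interval_ms: number;
  max_retry_attempts: number;
  retrieve_frequency_ms: number;
  sweep_in_buf_interval_ms: number;
  query_on_connect: boolean;
  auto_start: boolean;
  process_task_min_elapse_ms: number;
  causal_history_size?: number; // no default; passed through to SDS
  bloom_filter_size?: number;   // no default; passed through to SDS
}

function withDefaults(
  opts: Partial<ReliableChannelOptions> = {}
): ReliableChannelOptions {
  return {
    sync_min_interval_ms: 30000,
    retry_interval_ms: 30000,
    max_retry_attempts: 10,
    retrieve_frequency_ms: 10000,
    sweep_in_buf_interval_ms: 5000,
    query_on_connect: true,
    auto_start: true,
    process_task_min_elapse_ms: 1000,
    ...opts,
  };
}
```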
Send messages
Function definitions
```yaml
functions:
  send:
    description: "Send a message in the channel. The message will be retried if not acknowledged by other participants."
    parameters:
      - name: message_payload
        type: array<byte>
        description: "The message content to send (before SDS wrapping)."
    returns:
      type: MessageId
      description: "A unique identifier for the message, used to track events."
  sendEphemeral:
    description: "Send an ephemeral message in the channel. Ephemeral messages are not tracked for acknowledgment, not included in causal history, and are dropped when rate limits are approached or reached."
    parameters:
      - name: message_payload
        type: array<byte>
        description: "The message content to send (before SDS wrapping)."
    returns:
      type: MessageId
      description: "A unique identifier for the message, used to track events (limited to the `message-sent` or `ephemeral-message-dropped` events)."
```
Event handling
The Reliable Channel uses an event-driven model to notify applications about message lifecycle events.
Type definitions
```yaml
types:
  ReliableChannelEvents:
    type: object
    description: "Events emitted by the Reliable Channel."
    events:
      sending-message:
        type: event<SendingMessageEvent>
        description: "Emitted when a message chunk is being sent over the wire. MAY be emitted multiple times if the retry mechanism kicks in. For segmented messages, emitted once per chunk."
      message-sent:
        type: event<MessageSentEvent>
        description: "Emitted when a message chunk has been sent over the wire but has not been acknowledged yet. MAY be emitted multiple times if the retry mechanism kicks in. For segmented messages, emitted once per chunk."
      message-possibly-acknowledged:
        type: event<PossibleAcknowledgment>
        description: "Emitted when a bloom filter indicates a message chunk was possibly received by another party. This is probabilistic. For segmented messages, emitted per chunk."
      message-acknowledged:
        type: event<MessageAcknowledgedEvent>
        description: "Emitted when a message chunk was fully acknowledged by other members of the channel (present in their causal history). For segmented messages, emitted per chunk as each is acknowledged."
      sending-message-irrecoverable-error:
        type: event<MessageError>
        description: "Emitted when a message chunk could not be sent due to a non-recoverable error. For segmented messages, emitted per chunk that fails."
      message-received:
        type: event<MessagePayload>
        description: "Emitted when a new complete message has been received and reassembled from another participant. The payload is the unwrapped user content (SDS content field). Only emitted once all chunks are received."
      irretrievable-message:
        type: event<HistoryEntry>
        description: "Emitted when the channel is aware of a missing message but failed to retrieve it successfully."
      ephemeral-message-dropped:
        type: event<EphemeralDropReason>
        description: "Emitted when an ephemeral message was dropped due to rate limit constraints."
  SendingMessageEvent:
    type: object
    fields:
      message_id:
        type: MessageId
        description: "The logical message ID."
      chunk_info:
        type: ChunkInfo
        description: "Information about which chunk is being sent. For non-segmented messages, chunk_index=0 and total_chunks=1."
  MessageSentEvent:
    type: object
    fields:
      message_id:
        type: MessageId
        description: "The logical message ID."
      chunk_info:
        type: ChunkInfo
        description: "Information about which chunk was sent. For non-segmented messages, chunk_index=0 and total_chunks=1."
  MessageAcknowledgedEvent:
    type: object
    fields:
      message_id:
        type: MessageId
        description: "The logical message ID."
      chunks_acknowledged:
        type: array<uint>
        description: "Array of chunk indices that have been acknowledged so far (e.g., [0, 2, 4] means chunks 0, 2, and 4 are acknowledged). Use `.length` to get the total count. For non-segmented messages, this is [0]."
      total_chunks:
        type: uint
        description: "Total number of chunks for this message. For non-segmented messages, this is 1."
  PossibleAcknowledgment:
    type: object
    fields:
      message_id:
        type: MessageId
        description: "The logical message ID that was possibly acknowledged."
      chunk_info:
        type: ChunkInfo
        description: "Information about which chunk was possibly acknowledged."
      possible_ack_count:
        type: uint
        description: "The number of possible acknowledgments detected for this chunk."
  MessageError:
    type: object
    fields:
      message_id:
        type: MessageId
        description: "The logical message ID that encountered an error."
      chunk_info:
        type: ChunkInfo
        description: "Information about which chunk encountered the error. For non-segmented messages, chunk_index=0 and total_chunks=1."
      error:
        type: error
        description: "The error that occurred."
  HistoryEntry:
    type: object
    description: "An entry in the message history that could not be retrieved."
  EphemeralDropReason:
    type: object
    fields:
      reason:
        type: string
        description: "The reason the ephemeral message was dropped (e.g., 'rate_limit_approached', 'rate_limit_reached')."
      rate_limit_utilization:
        type: uint
        description: "Percentage of rate limit consumed (0-100)."
```
Extended definitions
Event lifecycle:
For each regular message sent via send(), the following event sequence is expected:
Non-segmented messages (payload ≤ 100 KB):
1. `sending-message`: Emitted once with `chunk_info.chunk_index=0, total_chunks=1`
2. `message-sent`: Emitted once with `chunk_info.chunk_index=0, total_chunks=1`
3. One of:
   - `message-possibly-acknowledged`: (Optional, probabilistic) with chunk info
   - `message-acknowledged`: Emitted once with chunk info when acknowledged
   - `sending-message-irrecoverable-error`: Emitted if an unrecoverable error occurs
Segmented messages (payload > 100 KB):
1. `sending-message`: Emitted once per chunk (e.g., 5 times for a 5-chunk message)
   - Each emission includes `chunk_info` showing which chunk (0/5, 1/5, 2/5, 3/5, 4/5)
2. `message-sent`: Emitted once per chunk with corresponding `chunk_info`
3. `message-acknowledged`: Emitted each time a new chunk is acknowledged
   - The `chunks_acknowledged` array grows as chunks are acknowledged
   - Example progression for a 5-chunk message:
     - First ack: `chunks_acknowledged=[0], total_chunks=5`
     - Second ack: `chunks_acknowledged=[0, 1], total_chunks=5`
     - Third ack: `chunks_acknowledged=[0, 1, 3], total_chunks=5` (chunk 2 still pending)
     - Fourth ack: `chunks_acknowledged=[0, 1, 2, 3], total_chunks=5` (chunk 2 now received)
4. All events share the same `message_id`
5. The application can:
   - Check overall progress: `chunks_acknowledged.length / total_chunks` (e.g., 3/5 = 60%)
   - Track specific chunks: `chunks_acknowledged.includes(2)` to see if chunk 2 is done
   - Display which chunks remain: chunks not in the array
Events 1-2 MAY be emitted multiple times per chunk if the retry mechanism is activated.
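The progress bookkeeping above can be sketched directly from the `MessageAcknowledgedEvent` fields. The helper names are illustrative, not part of the API.

```typescript
// Hypothetical sketch: deriving progress from the cumulative
// chunks_acknowledged field of message-acknowledged events.
interface MessageAcknowledgedEvent {
  message_id: string;
  chunks_acknowledged: number[]; // cumulative, e.g. [0, 1, 3]
  total_chunks: number;
}

function progressPercent(ev: MessageAcknowledgedEvent): number {
  return Math.round((ev.chunks_acknowledged.length / ev.total_chunks) * 100);
}

function pendingChunks(ev: MessageAcknowledgedEvent): number[] {
  const acked = new Set(ev.chunks_acknowledged);
  // Chunks not yet present in the cumulative array remain pending.
  return Array.from({ length: ev.total_chunks }, (_, i) => i).filter(
    (i) => !acked.has(i)
  );
}
```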
For received messages:
- `message-received`: Emitted only once after all chunks are received and reassembled
- The payload is the complete reassembled message
- No chunk information is provided (reassembly is transparent to the receiver)
For ephemeral messages sent via sendEphemeral():
- Ephemeral messages are NEVER segmented (if too large, they are rejected)
- `message-sent`: Emitted once with `chunk_info.chunk_index=0, total_chunks=1`
- `ephemeral-message-dropped`: Emitted if the message is dropped due to rate limit constraints
- `sending-message-irrecoverable-error`: Emitted if encoding, sending, or size check fails
Irrecoverable errors:
The following errors are considered irrecoverable and will trigger sending-message-irrecoverable-error:
- Encoding failed
- Empty payload
- Message size too large
- WAKU2-RLN-RELAY proof generation failed
When an irrecoverable error occurs, the retry mechanism SHOULD NOT attempt to resend the message.
Channel lifecycle
Function definitions
```yaml
functions:
  start:
    description: "Start the Reliable Channel. Sets up event listeners, begins the sync loop, starts missing message retrieval, and subscribes to messages via the Waku node."
    returns:
      type: void
  stop:
    description: "Stop the Reliable Channel. Stops the sync loop and missing message retrieval, and clears intervals."
    returns:
      type: void
  isStarted:
    description: "Check if the Reliable Channel is currently started."
    returns:
      type: bool
      description: "True if the channel is started, false otherwise."
```
Implementation Suggestions
This section provides practical implementation guidance based on the js-waku implementation.
SDS MessageChannel integration
The Reliable Channel MUST use the SDS MessageChannel as its core reliability mechanism.
Reference: js-waku/packages/sds/src/message_channel/message_channel.ts:1
Key integration points:
1. Message wrapping: User payloads MUST be wrapped in an SDS `ContentMessage` before sending:
   `User payload → SDS ContentMessage (encode) → Waku Message → Network`
2. Message unwrapping: Received Waku messages MUST be unwrapped to extract user payloads:
   `Network → Waku Message → SDS Message (decode) → User payload (content field)`
   - The Reliable Channel receives raw `Uint8Array` payloads from the node's message emitter
   - SDS decoding extracts the message structure
   - Only the `content` field is emitted to the application via the `message-received` event
3. SDS configuration: Implementations SHOULD configure the SDS MessageChannel with:
   - `causalHistorySize`: Number of recent message IDs in causal history (default: 200)
   - `bloomFilterSize`: Bloom filter capacity for probabilistic ACKs (default: 10,000 messages)
   - `bloomFilterErrorRate`: False positive rate (default: 0.001)
4. Subscription and reception:
   - Call `node.subscribe([contentTopic])` to subscribe via the Waku node's unified API
   - Listen to `node.messageEmitter` on the content topic for incoming messages
   - Process raw `Uint8Array` payloads through SDS decoding
   - Extract and emit the `content` field to the application
5. Retrieval hint computation:
   - Compute the Waku message hash from the SDS-wrapped payload before sending
   - Include the hash in the SDS `HistoryEntry` as `retrievalHint`
   - Use the hash for targeted store queries when retrieving missing messages
6. Task scheduling: Implementations MUST periodically call SDS methods:
   - `processTasks()`: Process queued send/receive operations
   - `sweepIncomingBuffer()`: Deliver messages with met dependencies
   - `sweepOutgoingBuffer()`: Identify messages for retry
7. Event mapping: SDS events SHOULD be mapped to Reliable Channel events:
   - `OutMessageSent` → `message-sent`
   - `OutMessageAcknowledged` → `message-acknowledged`
   - `OutMessagePossiblyAcknowledged` → `message-possibly-acknowledged`
   - `InMessageReceived` → (internal, triggers sync restart)
   - `InMessageMissing` → missing message retrieval trigger
Default SDS configuration values (from js-waku):
- Bloom filter capacity: 10,000 messages
- Bloom filter error rate: 0.001 (0.1% false positive rate)
- Causal history size: 200 message IDs (≈12.8 KB overhead per message)
- Possible ACKs threshold: 2 bloom filter hits before considering acknowledged
Message retries
The retry mechanism SHOULD use a simple fixed-interval retry strategy:
- When a message is sent, start a retry timer
- Every `retry_interval_ms`, attempt to resend the message
- Stop retrying when:
  - The message is acknowledged (via causal history)
  - `max_retry_attempts` is reached
  - An irrecoverable error occurs
Reference: js-waku/packages/sdk/src/reliable_channel/retry_manager.ts:1
Implementation notes:
- Retry intervals SHOULD be configurable (default: 30 seconds)
- Maximum retry attempts SHOULD be configurable (default: 10 attempts)
- Future implementations MAY implement exponential back-off strategies
- Implementations SHOULD NOT retry sending if the node is known to be offline
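The stop conditions above reduce to a small predicate that a retry timer can consult before each attempt. This is a non-normative sketch; `RetryState` and `shouldRetry` are hypothetical names.

```typescript
// Hypothetical sketch of the fixed-interval retry bookkeeping: decide
// whether a message should be resent when its retry timer fires.
interface RetryState {
  attempts: number;       // resend attempts made so far
  acknowledged: boolean;  // seen in another participant's causal history
  irrecoverable: boolean; // hit an irrecoverable error (e.g. encoding failed)
  nodeOnline: boolean;    // implementations SHOULD NOT retry while offline
}

function shouldRetry(state: RetryState, maxRetryAttempts = 10): boolean {
  return (
    state.nodeOnline &&
    !state.acknowledged &&
    !state.irrecoverable &&
    state.attempts < maxRetryAttempts
  );
}
```

A timer firing every `retry_interval_ms` (default 30 s) would call `shouldRetry` and either resend and increment `attempts`, or cancel itself.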
Missing message retrieval
The Reliable Channel SHOULD implement automatic detection and retrieval of missing messages:
- Detection: When processing incoming messages, the SDS layer detects gaps in causal history
- Tracking: Missing messages are tracked with their message IDs and retrieval hints (Waku message hashes)
- Retrieval: Periodic store queries retrieve missing messages using retrieval hints
- Processing: Retrieved messages are processed through the normal message pipeline
Reference: js-waku/packages/sdk/src/reliable_channel/missing_message_retriever.ts:1
Implementation notes:
- Missing message checks SHOULD run periodically (default: every 10 seconds)
- Store queries SHOULD use message hashes for targeted retrieval
- Retrieved messages SHOULD be removed from the missing messages list once received
- If a message cannot be retrieved, implementations MAY emit an `irretrievable-message` event
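The detection/tracking/retrieval loop above can be sketched as a small tracker keyed by message ID. This is a non-normative illustration; the class name, `maxAttempts` bound, and `nextQuery` shape are assumptions, not part of the js-waku implementation.

```typescript
// Hypothetical sketch: track missing messages with their retrieval hints
// (Waku message hashes), drop them once retrieved, and mark them
// irretrievable after too many failed store queries.
interface MissingMessage {
  messageId: string;
  retrievalHint: string; // Waku message hash for targeted store queries
  attempts: number;
}

class MissingMessageTracker {
  private missing = new Map<string, MissingMessage>();
  constructor(private maxAttempts = 3) {}

  track(messageId: string, retrievalHint: string): void {
    if (!this.missing.has(messageId)) {
      this.missing.set(messageId, { messageId, retrievalHint, attempts: 0 });
    }
  }

  markRetrieved(messageId: string): void {
    this.missing.delete(messageId);
  }

  // Returns hints for the next periodic store query; entries exceeding
  // maxAttempts are returned as irretrievable (caller emits the event).
  nextQuery(): { hints: string[]; irretrievable: string[] } {
    const hints: string[] = [];
    const irretrievable: string[] = [];
    for (const m of this.missing.values()) {
      m.attempts++;
      if (m.attempts > this.maxAttempts) {
        irretrievable.push(m.messageId);
        this.missing.delete(m.messageId);
      } else {
        hints.push(m.retrievalHint);
      }
    }
    return { hints, irretrievable };
  }
}
```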
Synchronization messages
Sync messages are empty messages that carry only causal history and bloom filter information. They serve to:
- Acknowledge received messages without sending new content
- Keep the channel active
- Allow participants to learn about each other's message state
Implementation notes:
- Sync messages SHOULD be sent periodically with randomized delays
- The sync interval is a shared responsibility: `sync_interval = random() * sync_min_interval_ms`
- When a content message is received, sync SHOULD be scheduled sooner (multiplier: 0.5)
- When a content message is sent, sync SHOULD be rescheduled at normal interval (multiplier: 1.0)
- When a sync message is received, sync SHOULD be rescheduled at normal interval
- After failing to send a sync, retry SHOULD use a longer interval (multiplier: 2.0)
Reference: js-waku/packages/sdk/src/reliable_channel/reliable_channel.ts:515-529
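The scheduling rule above can be written as a one-line computation. This is a minimal sketch; `nextSyncDelayMs` is a hypothetical name, and the injectable `rand` parameter exists only to make the formula testable.

```typescript
// Hypothetical sketch of the randomized sync scheduling:
// sync_interval = random() * sync_min_interval_ms, scaled by a
// multiplier (0.5 after receiving content, 1.0 after sending,
// 2.0 after a failed sync send).
function nextSyncDelayMs(
  syncMinIntervalMs: number,
  multiplier: number,
  rand: () => number = Math.random
): number {
  return rand() * syncMinIntervalMs * multiplier;
}
```

The randomization spreads sync responsibility across participants so that no single participant consistently sends the channel's sync messages.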
Query on connect
When enabled, the Reliable Channel SHOULD automatically query the store when connecting to store nodes. This helps participants catch up on missed messages after being offline.
Implementation notes:
- Query on connect SHOULD be triggered when:
- A store node becomes available
- The node reconnects after being offline (health status changes)
- A configurable time threshold has elapsed since the last query (default: 5 minutes)
- Queries SHOULD stop when finding a message with causal history from the same channel
- Queries SHOULD continue if all retrieved messages are from different channels
Reference: js-waku/packages/sdk/src/reliable_channel/reliable_channel.ts:183-196
Performance considerations
To avoid overload when processing many messages:
- Throttled processing: Don't call the SDS process task more frequently than `process_task_min_elapse_ms` (default: 1 second)
- Batched sweeping: Sweep the incoming buffer at regular intervals rather than per-message (default: every 5 seconds)
- Lazy task execution: Queue process tasks with a minimum elapsed time between executions
Reference: js-waku/packages/sdk/src/reliable_channel/reliable_channel.ts:460-473
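The throttling rule can be sketched as a wrapper that skips invocations arriving within the minimum elapse window. This is a non-normative illustration; `makeThrottled` is a hypothetical helper, and the injectable `now` parameter exists only for testability.

```typescript
// Hypothetical sketch: run the SDS process task at most once per
// process_task_min_elapse_ms; returns whether the task actually ran.
function makeThrottled(
  task: () => void,
  minElapseMs: number,
  now: () => number = Date.now
): () => boolean {
  let last = -Infinity;
  return () => {
    if (now() - last < minElapseMs) return false; // skipped: too soon
    last = now();
    task();
    return true;
  };
}
```

A fuller implementation would also queue a trailing run so that the last skipped invocation is not lost (lazy task execution).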
Ephemeral messages
Ephemeral messages are defined in the SDS specification as short-lived messages for which no synchronization or reliability is required.
Implementation notes:
- SDS integration:
  - Send ephemeral messages with `lamport_timestamp`, `causal_history`, and `bloom_filter` unset
  - Do NOT add to the unacknowledged outgoing buffer after broadcast
  - Do NOT include in causal history or bloom filters
  - Do NOT add to the local log
- Rate limit awareness:
  - Before sending an ephemeral message, check rate limit utilization
  - If utilization >= threshold (default: 90%), drop the message and emit `ephemeral-message-dropped`
  - This ensures reliable messages are never blocked by ephemeral traffic
- Use cases:
  - Typing indicators
  - Presence updates
  - Real-time status updates
  - Other transient UI state that doesn't require guaranteed delivery
- Receiving ephemeral messages:
  - Deliver immediately without buffering for causal dependencies
  - Emit the `message-received` event (same as regular messages)
  - Do NOT add to the local log or acknowledge
Reference: SDS specification section on Ephemeral Messages
Message segmentation
To handle large messages while accounting for protocol overhead, implementations SHOULD:
- Segment size calculation:
  - Target segment size: ~100 KB (102,400 bytes) of user payload
  - This accounts for overhead that will be added:
    - Encryption: variable depending on encryption scheme (e.g., ~48 bytes for ECIES)
    - SDS metadata: ~12.8 KB with default causal history (200 × 64 bytes)
    - WAKU2-RLN-RELAY proof: ~128 bytes
    - Protobuf encoding overhead: a few hundred bytes
  - The final Waku message stays well under the 150 KB routing layer limit
- Message ID and chunk tracking:
  - Compute a single logical `message_id` from the complete user payload (before segmentation)
  - All chunks of the same message share this `message_id`
  - Each chunk has its own SDS message ID for tracking in causal history
  - Chunk info (`chunk_index`, `total_chunks`) is included in all events
- Segmentation strategy:
  - Split large user payloads into ~100 KB chunks
  - Wrap each chunk in a separate SDS `ContentMessage`
  - Include segmentation metadata in each SDS message (chunk index, total chunks, logical message ID)
  - Each chunk is sent and tracked independently through SDS
- Event emission:
  - `sending-message`: Emitted once per chunk with `chunk_info` indicating which chunk
  - `message-sent`: Emitted once per chunk with `chunk_info` indicating which chunk
  - `message-acknowledged`: Emitted each time a new chunk is acknowledged
    - `chunks_acknowledged` contains ALL acknowledged chunks so far (cumulative)
    - Example progression: `[0]` → `[0, 1]` → `[0, 1, 3]` → `[0, 1, 2, 3]` → `[0, 1, 2, 3, 4]`
    - The application can show `${chunks_acknowledged.length}/${total_chunks}` chunks acknowledged (e.g., 3/5 = 60%)
    - Or track individual chunks: `!chunks_acknowledged.includes(2)` to show "chunk 2 pending"
  - All events for the same logical message share the same `message_id`
- Reassembly (receiving):
  - Buffer received chunks keyed by logical message ID
  - Track which chunks have been received (by chunk index)
  - Verify the chunk count matches the expected total from metadata
  - Reassemble in sequence order when all chunks are present
  - Emit `message-received` only once, with the complete payload
  - Handle partial message timeout (mark as irretrievable after a threshold)
- Interaction with retries and rate limits:
  - Each chunk is retried independently if not acknowledged
  - When using WAKU2-RLN-RELAY, each chunk consumes one message slot per epoch
  - Large messages may take multiple epochs to send completely
  - Partial acknowledgments allow applications to show progress to users
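The receive-side reassembly steps can be sketched as a buffer keyed by logical message ID. This is a non-normative illustration (timeout handling is omitted); the `Chunk` shape and `ReassemblyBuffer` name are hypothetical.

```typescript
// Hypothetical sketch: buffer chunks per logical message ID and emit the
// complete payload once all chunks are present, regardless of arrival order.
interface Chunk {
  messageId: string;    // logical message ID shared by all chunks
  chunkIndex: number;   // zero-based position in the original payload
  totalChunks: number;  // expected total from segmentation metadata
  data: Uint8Array;
}

class ReassemblyBuffer {
  private partial = new Map<string, Map<number, Uint8Array>>();

  // Returns the reassembled payload once all chunks arrived, else null.
  addChunk(chunk: Chunk): Uint8Array | null {
    const chunks =
      this.partial.get(chunk.messageId) ?? new Map<number, Uint8Array>();
    this.partial.set(chunk.messageId, chunks);
    chunks.set(chunk.chunkIndex, chunk.data);
    if (chunks.size < chunk.totalChunks) return null;

    // All chunks present: concatenate in index order.
    const parts: Uint8Array[] = [];
    for (let i = 0; i < chunk.totalChunks; i++) parts.push(chunks.get(i)!);
    const total = parts.reduce((n, p) => n + p.length, 0);
    const out = new Uint8Array(total);
    let offset = 0;
    for (const p of parts) {
      out.set(p, offset);
      offset += p.length;
    }
    this.partial.delete(chunk.messageId); // free partial state
    return out;
  }
}
```

A production implementation would additionally expire entries in `partial` after a timeout and surface them as irretrievable.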
Security/Privacy Considerations
- Encryption: All participants in a Reliable Channel MUST be able to decrypt messages. Implementations SHOULD use the same encryption layer (encoder/decoder) for all messages.
- Sender identity: The `sender_id` is used to differentiate acknowledgments. Implementations SHOULD ensure that acknowledgments are only considered valid when they originate from a different sender.
- Channel isolation: Messages in different channels are isolated. A participant SHOULD only process messages that match their channel ID.
- Message ordering: While the Reliable Channel ensures eventual consistency, it does not guarantee strict message ordering across participants.
- Resource exhaustion: Implementations SHOULD implement limits on:
  - the number of missing messages tracked
  - the number of active retry attempts
  - the frequency of store queries
- Privacy: Store queries reveal message interest patterns. Implementations MAY consider privacy-preserving retrieval strategies in the future.
Copyright
Copyright and related rights waived via CC0.