* feat(sds): messages with lost deps are delivered
This re-enables participation in the SDS protocol: a received message with missing dependencies still becomes part of the causal history, re-enabling acknowledgements.
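A minimal sketch of this behaviour; the shapes below (`Message`, `localHistory`, `deliver`) are hypothetical stand-ins, not the actual SDS internals:

```typescript
// Hypothetical shapes; the real SDS implementation differs.
interface Message {
  messageId: string;
  causalHistory: string[]; // ids of messages this one depends on
}

const localHistory = new Set<string>();

function deliver(message: Message): void {
  const missing = message.causalHistory.filter((id) => !localHistory.has(id));
  if (missing.length > 0) {
    // Previously the message was held back here; now it is still delivered
    // and recorded, so other participants' acknowledgements can cover it.
    console.warn(`delivering with ${missing.length} missing dependencies`);
  }
  // The message becomes part of the local causal history either way,
  // re-enabling acknowledgements.
  localHistory.add(message.messageId);
}
```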
* fix(sds): avoid overflow in message history storage
* feat(reliable-channel): Emit a "Synced" Status with message counts
Return a "synced" or "syncing" status on `ReliableChannel.status` that
let the developer know whether messages are missing, and if so, how many.
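A hedged usage sketch; the status shape below (`state`, `missingCount`) is an assumption about the API, not its published signature:

```typescript
// Assumed status shape: "synced" when nothing is missing,
// "syncing" with a count of missing messages otherwise.
type ChannelStatus =
  | { state: "synced" }
  | { state: "syncing"; missingCount: number };

function reportStatus(status: ChannelStatus): string {
  return status.state === "synced"
    ? "all messages received"
    : `waiting on ${status.missingCount} message(s)`;
}

console.log(reportStatus({ state: "syncing", missingCount: 3 }));
```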
* fix: clean up subscriptions, intervals and timeouts when stopping
* chore: extract random timeout
* fix rebase
* revert listener changes
* typo
* Ensuring no inconsistency on missing message
* test: streamline, stop channels
* clear sync status sets when stopping channel
* prevent sync status event spam
* test: improve naming
* try/catch for callback
* encapsulate/simplify reliable channel API
* sanity checks
* test: ensure sync status cleanup
* fix: add stop methods to protocols to prevent event listener leaks
* fix: add abort signal support for graceful store query cancellation
* fix: call protocol stop methods in WakuNode.stop()
* fix: improve QueryOnConnect cleanup and abort signal handling
* fix: improve MissingMessageRetriever cleanup with abort signal
* fix: add stopAllRetries method to RetryManager for proper cleanup
* fix: implement comprehensive ReliableChannel stop() with proper cleanup
* fix: add active query tracking to QueryOnConnect and await its stop()
* fix: add stop() to IRelayAPI and IStore interfaces, implement in SDK wrappers
* align with usual naming (isStarted)
* remove unnecessary `await`
* test: `stop()` is now async
* chore: use more concise syntax
---------
Co-authored-by: Levente Kiss <levente.kiss@solarpunk.buzz>
* chore: npm publication
Fixing npm publication and warnings
* Upgrade workflow to use trusted publishing
https://docs.npmjs.com/trusted-publishers
* bump node js to 24
To avoid having to reinstall npm in pre-release for npmjs trusted publishers
* feat!: do not send sync messages with empty history
A sync message without any history has no value. If there are no messages in the channel, then a sync message does not help.
If there are messages in the channel but this participant is not aware of them, an empty sync message can confuse other participants into assuming that the channel is empty.
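A sketch of the guard this implies; `buildSyncMessage` and `localHistory` are hypothetical names, not the actual implementation:

```typescript
// Hypothetical shape illustrating the rule: only send a sync message
// when there is causal history to share.
interface SyncMessage {
  causalHistory: string[];
}

function buildSyncMessage(localHistory: string[]): SyncMessage | undefined {
  if (localHistory.length === 0) {
    // Nothing to acknowledge: an empty sync message carries no value and
    // could mislead peers into thinking the channel is empty.
    return undefined;
  }
  return { causalHistory: [...localHistory] };
}
```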
* fix test by adding a message to channel history
* make `pushOutgoingSyncMessage` return true even if no callback passed
* fix!: avoid SDS lamport timestamp overflow
The SDS timestamp is initialized to the current time in milliseconds, which is a 13-digit value (e.g. 1,759,223,090,052).
The maximum value for int32 is 2,147,483,647 (10 digits), which is clearly less than the timestamp.
The maximum value for uint32 is 4,294,967,295 (10 digits), which does not help with a millisecond timestamp either.
uint64 maps to BigInt in JavaScript, so it is best avoided unless strictly necessary, as it adds complexity.
The maximum uint64 is 18,446,744,073,709,551,615 (20 digits).
Using seconds instead of milliseconds would allow uint32 to remain valid until the year 2106.
The lamport timestamp is only initialized to the current time for a new channel. The only scenario is a user who joins a channel, believes it is new (has not received previous messages) and starts sending messages. There may then be an initial timestamp conflict until the logs are consolidated, which the protocol already handles.
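The arithmetic above, restated as a small illustrative sketch (`initialLamportTimestamp` is a hypothetical helper, not the actual implementation):

```typescript
// Milliseconds since epoch (~1.76e12) exceed both int32 (2,147,483,647)
// and uint32 (4,294,967,295). Seconds since epoch (~1.76e9) fit in uint32
// until roughly the year 2106 (2^32 seconds after 1970).
const UINT32_MAX = 4_294_967_295;

function initialLamportTimestamp(): number {
  const seconds = Math.floor(Date.now() / 1000);
  if (seconds > UINT32_MAX) {
    throw new Error("seconds-based timestamp no longer fits in uint32");
  }
  return seconds;
}

console.log(Date.now() > UINT32_MAX); // true: milliseconds do not fit
console.log(initialLamportTimestamp() <= UINT32_MAX); // true until ~2106
```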
* change lamportTimestamp to uint64 in protobuf
* lamport timestamp remains close to current time
* feat: query on connect stops on predicate
* test: query on connect stops at predicate
* feat: reliable channels search up to 30 days to find a message
Queries stop once a valid sync or content message is found in the channel.
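A hedged sketch of the stopping predicate; the types and the page-walking logic below are illustrative assumptions (pages are assumed to be ordered newest to oldest):

```typescript
// Illustrative only: walk back through store query pages, stopping as soon
// as a valid sync or content message for the channel is found, and never
// going further back than 30 days.
const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

interface DecodedMessage {
  timestamp: number;
  isSyncOrContent: boolean;
}

async function searchForChannelMessage(
  pages: AsyncIterable<DecodedMessage[]>
): Promise<DecodedMessage | undefined> {
  const cutoff = Date.now() - THIRTY_DAYS_MS;
  for await (const page of pages) {
    for (const message of page) {
      if (message.timestamp < cutoff) return undefined; // searched far enough back
      if (message.isSyncOrContent) return message; // predicate met: stop querying
    }
  }
  return undefined;
}
```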
* fix: protect against decoding exceptions
* stop range queries on messages with a causal history
* SDS: pushOutgoingMessage is actually sync
* SDS: ensure that `ContentMessage` class is stored in local history with `valueOf` method
* feat: introduce reliable channels
An easy-to-use Scalable Data Sync (SDS, e2e reliability) wrapper that includes (see the sketch below):
- store queries upon connection to store nodes
- store queries to retrieve missing messages
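A hedged usage sketch; the `ReliableChannelLike` shape and event names are assumptions, not the published API (only `send` returning the message id and `stop()` being async are taken from the changes listed here):

```typescript
// Hypothetical shape: the real ReliableChannel API lives in the SDK
// and may differ in names, events and options.
interface ReliableChannelLike {
  send(payload: Uint8Array): string; // returns the message id
  addEventListener(
    event: "message-received" | "sync-status",
    listener: (detail: unknown) => void
  ): void;
  stop(): Promise<void>;
}

async function run(channel: ReliableChannelLike): Promise<void> {
  channel.addEventListener("message-received", (detail) => {
    console.log("incoming", detail);
  });
  const messageId = channel.send(new TextEncoder().encode("hello"));
  console.log("sent", messageId);
  await channel.stop(); // clean up subscriptions, intervals and timeouts
}
```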
* remove `channel` prefix
* attempt to improve performance when processing a lot of incoming messages
* test: split test file
* use index.ts for re-export only.
* improve if condition
* use getter for isStarted
* waku node already auto-start
* rename send
* fix lightPush.send type post rebase
* test: remove extra console.log
* SDS: emit messages as missing as soon as they are received
* make the elapse time for task processing configurable
* typo
* use string instead of enum for event types
* ReliableChannel.send returns the message id
* SDS: export `MessageId`
* SDS: attach retrieval hints to incoming messages
* sds: ensure items are ordered by timestamp
* test: sds: avoid using "as any" as it bypasses type checks
* test: filter: avoid using "as any" as it bypasses type checks
* test: fix tests without introducing proxy
* feat: query on connect
Perform store time-range queries upon connecting to a store node.
Some heuristics are applied to ensure the store queries are not too frequent.
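One example of such a throttling heuristic, purely illustrative (the actual heuristics and thresholds are not spelled out here):

```typescript
// Illustrative only: skip a new query if one ran recently for this peer.
const MIN_QUERY_INTERVAL_MS = 60_000; // assumed threshold

const lastQueryAt = new Map<string, number>();

function shouldQueryOnConnect(peerId: string, now = Date.now()): boolean {
  const last = lastQueryAt.get(peerId);
  if (last !== undefined && now - last < MIN_QUERY_INTERVAL_MS) {
    return false; // queried too recently; avoid hammering the store node
  }
  lastQueryAt.set(peerId, now);
  return true;
}
```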
* make `maybeQuery` private
* query-on-connect: use index.ts only for re-export
* query-on-connect: update doc
* store connect evt: use enum instead of free strings for Waku event types
* store connect evt: more accurate enum name
* store connect evt: add store connect event on peer manager
* store connect evt: simplify logic statements
* store connect evt: test store connect
* store connect evt: export event types
* test: use enum
* Shorter name for waku events
* update local peer discovery, make it configurable for cache
* move to separate file
* up tests, remove local storage from tests
* pass local peer cache options
* add e2e tests
* add additional e2e tests for local cache
* rename local-peer-cache into peer-cache
* update tests, ci
* prevent filtering ws addresses
Concepts were being mixed up between the global network configuration (static vs auto sharding), which needs to be the same for all nodes in the network, individual node configuration (e.g. a relay node subscribing to a given shard), and the routing characteristics of a specific message (e.g. pubsub topic, shard).
This blocked proper configuration of nwaku post 0.36.0, because we now need to be deliberate about whether nwaku nodes run with auto or static sharding.
It also involved various back-and-forth conversions between shards, pubsub topics, etc.
With this change, we tidy up the network configuration and make it explicit whether it is static or auto sharded.
We also introduce the concept of routing info, which is specific to a message and tied to the overall network configuration.
Routing info abstracts pubsub topic, shard, and autosharding needs, which should make it easier to tidy up the pubsub concept at a later stage.
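A hedged sketch of that separation, using hypothetical type names rather than the actual interfaces (only the `/waku/2/rs/<cluster>/<shard>` pubsub topic format is standard; the shard selection below is a placeholder):

```typescript
// Hypothetical types illustrating the separation of concerns:
// network-wide sharding mode vs per-message routing.
type NetworkConfig =
  | { kind: "static"; clusterId: number; shards: number[] }
  | { kind: "auto"; clusterId: number; numShardsInCluster: number };

interface RoutingInfo {
  pubsubTopic: string;
  shard: number;
}

function routingInfoFor(
  config: NetworkConfig,
  contentTopic: string
): RoutingInfo {
  const shard =
    config.kind === "static"
      ? config.shards[0] ?? 0 // illustrative: pick a configured shard
      : hashToShard(contentTopic, config.numShardsInCluster);
  return { pubsubTopic: `/waku/2/rs/${config.clusterId}/${shard}`, shard };
}

function hashToShard(contentTopic: string, numShards: number): number {
  // Placeholder hash, not the real autosharding function.
  let h = 0;
  for (const c of contentTopic) h = (h * 31 + c.charCodeAt(0)) >>> 0;
  return h % numShards;
}
```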
For an edge node, there is no such thing as a "pubsub topic configuration". An edge node should be able to operate on any possible shard; it is a per-protocol matter (e.g. sending a message with light push).
A relay node does subscribe to shards, but even then the metadata protocol does not need to advertise them, as this is already handled by gossipsub.
Only service nodes should advertise their shards via the metadata protocol, which is out of scope for js-waku.
* add FF for auto recovery
* implement connection locking, connection maintenance, auto recovery, bootstrap connections maintenance and fix bootstrap peers dropping
* add ut for peer manager changes
* implement UT for Connection Limiter
* increase connection maintenance interval
* update e2e test
* implement new peer manager, use in lightPush, improve retry manager and fix retry bug
* fix unsubscribe issue
* remove not needed usage of pubsub, use peer manager in store sdk
* chore: remove deprecated filter implementation
* update tests
* update next filter for new peer manager
* skip IReceiver test, remove unused utility
* remove comment
* fix typo
* remove old connection based peer manager
* update types, export, and edge case for light push
* add retry manager tests
* add new peer manager tests
* refactor tests
* use peer manager events in filter and check for pubsub topic as well
* update test names
* address comments
* unskip Filter e2e test
* address more comments, remove duplication
* skip CI test
* update after merge
* move to peer:identify and peer:disconnect events, improve mapping in filter subscriptions
* update tests
* add logs and change peer manager time lock to 10s
* feat: implement shard retrieval for store and improve set store peers usage
* remove log
* remove only, improve condition
* implement smarter way to retrieve peers
* up tests
* update mock
* address nits, add target to eslint, revert to es2022
* remove IBaseProtocol
* fix references, interfaces and integration
* fix ci
* up mock
* up lock
* add mock for local storage
* add missing prop, fix tests
* up lock
* create new filter api
* implement await on main methods on new Filter
* add info logs in new filter
* add logs to subscription impl
* remove lint supress
* add unit tests
* introduce E2E tests
* update e2e tests and add case for testing filter recovery after nwaku nodes replacement
* add new test cases for max limits and enable decoders as array on new filter
* fix edge case testing, correct test cases
* skip test
* update error message
* up text
* up text
* fix lint
* implement unsubscribeAll
* add jsdoc to new filter
* add cspell
* implement TTL set for message history