There is now a single global PeerStore structure (instead of one for libp2p, one for waku, etc.).
Users can easily create custom books for new metadata types (sketched below).
Also add a pruning system to remove dead peers.
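A minimal sketch of the book-based design, assuming illustrative names (`PeerBook`, `addressBook`, `protoBook` and `prune` here are stand-ins, not the actual API):

```nim
import std/[tables, sets]

type
  PeerId = string                        # stand-in for libp2p's PeerId type

  PeerBook[T] = object                   # one book per kind of metadata
    book: Table[PeerId, T]

  PeerStore = ref object                 # single global store holding all books
    addressBook: PeerBook[seq[string]]
    protoBook: PeerBook[HashSet[string]]

proc `[]=`[T](b: var PeerBook[T], peer: PeerId, value: T) =
  b.book[peer] = value

proc `[]`[T](b: PeerBook[T], peer: PeerId): T =
  b.book.getOrDefault(peer)

proc prune(s: PeerStore, dead: openArray[PeerId]) =
  # drop all metadata for peers considered dead
  for peer in dead:
    s.addressBook.book.del(peer)
    s.protoBook.book.del(peer)
```

Adding a custom book is then just instantiating `PeerBook[T]` for a new metadata type.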
* gossipsub: adding duplicate arrival metrics
Adding counters for deduplicated (first-seen) received messages and for
duplicates recognized by the seen cache. Note that duplicates that are
not recognized (i.e. that arrive after seenTTL has expired) are not
counted as duplicates here; they show up as regular received messages.
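In sketch form, using nim-metrics with hypothetical metric names (the real exported names may differ):

```nim
import std/sets
import metrics

declareCounter gossipsub_received, "first-seen (deduplicated) messages received"
declareCounter gossipsub_duplicates, "duplicates recognized via the seen cache"

proc countArrival(msgId: string, seen: var HashSet[string]) =
  if msgId in seen:
    gossipsub_duplicates.inc()
  else:
    # a late duplicate whose first copy already expired from the seen
    # cache (seenTTL) lands here and is counted as a regular message
    seen.incl(msgId)
    gossipsub_received.inc()
```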
* gossipsub: adding mcache (the message cache used to answer IWANT requests) stats
It is generally assumed that IWANT messages arrive while mcache still
has the message. These stats verify that assumption (sketched below).
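Roughly the kind of accounting meant, as a sketch with hypothetical names (not the actual mcache API):

```nim
import std/tables
import metrics

declareCounter mcache_iwant_hits, "IWANT ids still present in mcache"
declareCounter mcache_iwant_misses, "IWANT ids already rotated out of mcache"

proc handleIWant(mcache: Table[string, seq[byte]],
                 wanted: seq[string]): seq[seq[byte]] =
  for id in wanted:
    if id in mcache:
      mcache_iwant_hits.inc()      # the assumption holds: we can respond
      result.add mcache[id]
    else:
      mcache_iwant_misses.inc()    # message already aged out of the cache
```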
* libp2p: adding internal TX queuing stats
Messages are queued in TX before getting written to the stream, but we
have no statistics about these queues. This patch adds some queue-length
and queuing-time statistics (sketched below).
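A sketch of the kind of instrumentation meant, with hypothetical names: track queue length on enqueue, and time-in-queue on dequeue.

```nim
import std/[deques, monotimes, times]
import metrics

declareGauge tx_queue_len, "messages currently queued for writing"
declareCounter tx_queue_ms, "total milliseconds messages spent queued"

type QueuedMsg = object
  data: seq[byte]
  enqueued: MonoTime

var txQueue = initDeque[QueuedMsg]()

proc enqueue(data: seq[byte]) =
  txQueue.addLast QueuedMsg(data: data, enqueued: getMonoTime())
  tx_queue_len.set(txQueue.len.int64)

proc dequeue(): seq[byte] =
  let msg = txQueue.popFirst()
  tx_queue_len.set(txQueue.len.int64)
  tx_queue_ms.inc(inMilliseconds(getMonoTime() - msg.enqueued))
  msg.data
```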
* adding Grafana libp2p dashboard
Adding a Grafana dashboard with the newly exposed metrics.
* enable libp2p_mplex_metrics in nimble test
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
* Signed envelopes and routing records
* Send signed peer record as part of identify (#649)
* Add SPR from identify to new peer book (#657)
* Send & receive gossipsub PX
* Add Signed Payload
Co-authored-by: Hanno Cornelius <68783915+jm-clius@users.noreply.github.com>
* adding `{.raises: [Defect].}` annotations across the codebase
* use unittest2
* add windows deps caching
* update mingw link
* die on failed peerinfo initialization
* use result.expect instead of get
* use expect more consistently and rework inits
* use expect more consistently
* throw on missing public key
* remove unused closure annotation
* merge master
* add floodPublish test
* test delivery via control IWANT/IHAVE mechanics
* fix issues in control, and add testing
* fix possible backoff issue with the prune routine overriding it
In `async` functions, a closure environment is created for variables
that cross an await boundary. This closure environment is kept in memory
for the lifetime of the associated future, which means that although
_some_ variables are no longer used, they still take up memory for a
long time.
In Nimbus, message validation is processed in batches, so the future of
an incoming gossip message stays around for quite a while. This leads to
memory consumption peaks of 100-200 MB when there are many attestations
in the pipeline.
To avoid excessive memory usage, it's generally better to move non-async
code into plain procs so that the variables therein can be released
earlier. This includes the many hidden variables introduced by macro and
template expansion (e.g. chronicles, which does expensive exception
handling). See the sketch below.
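A simplified illustration of the pattern (all names here are made up):

```nim
import chronos

type Ticket = object
  size: int

proc heavyDecode(msg: seq[byte]): seq[byte] =
  msg                                  # stand-in for an expensive decode

proc submit(t: Ticket): Future[void] {.async.} =
  await sleepAsync(10.milliseconds)    # stand-in for batched validation

# Problematic: `buf` is used after the await, so it is captured in the
# future's closure environment and held until the future is collected.
proc processCaptured(msg: seq[byte]) {.async.} =
  let buf = heavyDecode(msg)           # large temporary
  await submit(Ticket(size: buf.len))
  debugEcho buf.len                    # keeps `buf` alive across the await

# Better: keep large temporaries inside a plain proc so they can be
# released when it returns, before the await suspends the caller.
proc preprocess(msg: seq[byte]): Ticket =
  let buf = heavyDecode(msg)
  Ticket(size: buf.len)                # only the small ticket escapes

proc processReleased(msg: seq[byte]) {.async.} =
  await submit(preprocess(msg))
```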
* move seen table salt to floodsub and use it there as well
* shorten seen table salt to the size of the hash (sketched below)
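The idea in sketch form (not the actual implementation): mix a per-node random salt into the key stored in the seen table so attackers cannot precompute colliding message ids; a salt longer than the stored hash would add no protection, hence the shortening.

```nim
import std/[sysrand, hashes]

# per-node random salt, truncated to (roughly) the size of the stored hash
let seenSalt = urandom(8)

proc seenKey(msgId: openArray[byte]): Hash =
  var h: Hash = 0
  for b in seenSalt: h = h !& b.int
  for b in msgId: h = h !& b.int
  !$h
```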
* avoid unnecessary memory allocations and copies in a few places
* factor out message scoring
* avoid re-encoding the outgoing message for every peer (sketched below)
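In sketch form (illustrative types, not the actual API): encode the outgoing RPC once and fan the same bytes out to every peer.

```nim
type Peer = object
  id: int

proc send(p: Peer, bytes: seq[byte]) =
  discard                              # stand-in for a stream write

proc encodeRpc(payload: string): seq[byte] =
  # stand-in for protobuf encoding
  for c in payload:
    result.add byte(c)

proc broadcast(peers: openArray[Peer], payload: string) =
  let encoded = encodeRpc(payload)     # encode exactly once...
  for p in peers:
    p.send(encoded)                    # ...and reuse the bytes per peer
```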
* keep checking validators until a reject (in case there are both reject
and ignore results; see the sketch below)
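In sketch form: with several validators attached to a topic, a single reject must win over an ignore, so the verdicts are folded instead of stopping at the first non-accept:

```nim
type ValidationResult = enum
  Accept, Ignore, Reject

proc combine(results: openArray[ValidationResult]): ValidationResult =
  result = Accept
  for r in results:
    if r == Reject:
      return Reject       # a single reject is final
    elif r == Ignore:
      result = Ignore     # remember, but keep looking for a reject
```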
* `readOnce` avoids `readExactly` overhead for single-byte reads
* genericAssign -> assign2
* properly propagate initiator information for gossipsub
* Fix PubSubPeer lifetime management
* restore old behavior
* fix tests
* clamp the backoff time value received from peers (sketched below)
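A sketch of the fix, assuming an illustrative cap: never take the backoff a peer sends in PRUNE at face value.

```nim
import std/times

const MaxBackoff = initDuration(minutes = 10)   # illustrative cap

proc clampBackoff(received: Duration): Duration =
  # an adversarial peer could otherwise lock us out of a topic for an
  # arbitrarily long time by sending a huge backoff value
  min(received, MaxBackoff)
```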
* fix member name collisions
* internal test fixes
* better names, and explain the importance of transport direction
* fixes
* Refactor gossipsub into multiple modules
* split up gossipsub further
* move more mesh-related logic into behavior
* fix internal tests
* fix PubSubPeer.outbound flag, make it more reliable
* use discard rather than _