Compare commits

...

31 Commits

Author SHA1 Message Date
Prem Chaitanya Prathi
fc0a0a9bb7
Merge branch 'libp2p:master' into master 2025-08-22 10:47:47 +05:30
Dat Duong
ab876fc71c
fix: Select ctx.Done() when preprocessing to avoid blocking on cancel (#635)
Close #636 

The PR updates the send logic to use a select with ctx.Done() and
t.p.ctx.Done(), ensuring the operation terminates gracefully. A test
case reproducing the issue is included in the PR for verification.
2025-08-21 17:01:05 -07:00
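A minimal sketch of the pattern this fix describes, with illustrative names (`trySend`, `callerCtx`, `instanceCtx` are not the library's): a send that selects on both contexts can never block forever once either is cancelled.

```go
package sketch

import "context"

// trySend is a minimal sketch: pushing rpc onto ch aborts as soon as either
// the caller's context or the long-lived instance context is cancelled, so
// the operation terminates gracefully instead of blocking on cancel.
func trySend[T any](callerCtx, instanceCtx context.Context, ch chan<- T, rpc T) error {
	select {
	case ch <- rpc:
		return nil
	case <-callerCtx.Done():
		return callerCtx.Err()
	case <-instanceCtx.Done():
		return instanceCtx.Err()
	}
}
```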
web3-bot
ee9c8434f9
ci: uci/update-go (#638)
This PR was created automatically by the @web3-bot as a part of the
[Unified CI](https://github.com/ipdxco/unified-github-workflows)
project.
2025-08-21 10:13:04 +02:00
Marco Munizaga
abb8f8a2cd
Release v0.14.2 (#629)
Includes #627
2025-07-03 13:44:54 -07:00
Marco Munizaga
bc7e2e619d
Skip 32-bit tests in CI (#628)
This is roughly a third of our CI time, and, as far as I know, running
32-bit tests has never caught an issue.

Also, I'm unaware of anyone using this library on a 32-bit x86 system. I
believe the last 32-bit x86 CPU released was the [Pentium
4](https://en.wikipedia.org/wiki/List_of_Intel_Pentium_4_processors)
close to 20 years ago.
2025-07-03 12:04:07 -07:00
Marco Munizaga
631e47b133
Fix test races and enable race tests in CI (#626)
closes #624
2025-07-03 11:37:53 -07:00
Marco Munizaga
e38c340f93
Fix race when calling Preprocess and msg ID generator (#627)
Closes #624
2025-07-03 11:10:37 -07:00
Marco Munizaga
ae65ce484e
Release v0.14.1 (#623)
Includes #622
2025-06-25 16:15:26 -07:00
Marco Munizaga
fedbccc0c6
fix(BatchPublishing): Make topic.AddToBatch threadsafe (#622)
topic.Publish is already thread safe. topic.AddToBatch should strive to
follow similar semantics.

Looking at how this would integrate with Prysm, they use separate
goroutines per message they'd like to batch.
2025-06-25 12:38:21 -07:00
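A minimal sketch of what thread-safe batch addition can look like, assuming an illustrative `MessageBatch` shape (the library's real type and fields differ):

```go
package sketch

import "sync"

// MessageBatch here is illustrative. The mutex makes concurrent AddToBatch
// calls from separate goroutines (the Prysm usage pattern noted above) safe.
type MessageBatch struct {
	mu   sync.Mutex
	msgs [][]byte
}

func (b *MessageBatch) AddToBatch(msg []byte) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.msgs = append(b.msgs, msg)
}
```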
Marco Munizaga
3f89e4331c
Release v0.14.0 (#614)
This release contains a couple of fixes and the new Batch Publishing
feature.

- #607 Batch Publishing. Useful if you are publishing a group of related
messages at once
- #612 Send IDONTWANT before initial publish. Useful when many nodes may
publish the same message at once.
- #609 Avoid sending an extra "IDONTWANT" to the peer that just sent you
a message.
- #615 10x faster rpc splitting.
2025-05-29 15:58:51 -07:00
Marco Munizaga
c405ca8028
refactor: 10x faster RPC splitting (#615)
Builds on #582. 10x faster than current master. 0 allocs.

The basic logic is the same as the old version, except we return an
`iter.Seq[RPC]` and yield `RPC` types instead of a slice of `*RPC`. This
lets us avoid allocations for heap pointers.

Please review @algorandskiy, and let me know if this improves your use
case.
2025-05-28 22:32:25 -07:00
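A sketch of the shape described, with stand-in types: returning an `iter.Seq[RPC]` that yields values means no per-chunk pointer needs to escape to the heap.

```go
package sketch

import "iter"

// RPC stands in for the real type.
type RPC struct{ payload []byte }

// split yields size-limited chunks as values through iter.Seq[RPC].
// limit must be positive. Because callers receive values rather than a
// slice of *RPC, nothing per chunk has to escape to the heap.
func split(data []byte, limit int) iter.Seq[RPC] {
	return func(yield func(RPC) bool) {
		for len(data) > 0 {
			n := min(limit, len(data))
			if !yield(RPC{payload: data[:n]}) {
				return
			}
			data = data[n:]
		}
	}
}
```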
Marco Munizaga
38ad16a687
test: Fix flaky TestMessageBatchPublish (#616)
Messages were being dropped in the validation queue if the machine was
not fast enough.
2025-05-27 09:04:26 -07:00
Marco Munizaga
9e5145fb29
Send IDONTWANT before first publish (#612)
See #610 

We previously sent IDONTWANT only when forwarding. This has us send
IDONTWANT on our initial publish as well. Helps in the case that one or
more peers may also publish the same thing at around the same time (see
#610 for a longer explanation) and prevents "boomerang" duplicates where
a peer sends you back the message you sent before you get a chance to
send it to them.

This also serves as a hint to a peer that you are about to send them a
certain message.
2025-05-19 17:02:21 -07:00
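A conceptual sketch of the ordering change (illustrative types, not the library's internals): the cheap IDONTWANT control message reaches mesh peers before the payload does.

```go
package sketch

type peer interface {
	SendIDontWant(msgID string)
	SendMessage(data []byte)
}

// publish sketches the ordering: announce the message ID first, so a peer
// that receives the message from elsewhere in the meantime won't echo it
// back, then send the payload itself.
func publish(mesh []peer, msgID string, data []byte) {
	for _, p := range mesh {
		p.SendIDontWant(msgID) // cheap control message goes out first
	}
	for _, p := range mesh {
		p.SendMessage(data) // then the full payload
	}
}
```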
Marco Munizaga
0c5ee7bbfe
feat(gossipsub): Add MessageBatch (#607)
to support batch publishing messages

Replaces #602.

Batch publishing lets the system know there are multiple related
messages to be published so it can prioritize sending different messages
before sending copies of messages. For example, with the default API,
when you publish two messages A and B, under the hood A gets sent to D=8
peers first, before B gets sent out. With this MessageBatch API we can
now send one copy of A _and then_ one copy of B before sending
additional copies of either.

When a node has bandwidth constraints relative to the messages it is
publishing, this improves dissemination time.

For more context see this post:
https://ethresear.ch/t/improving-das-performance-with-gossipsub-batch-publishing/21713
2025-05-08 10:23:02 -07:00
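A conceptual sketch of the scheduling difference (illustrative code, not the library's API): with a batch, the router can interleave first copies of every message before any duplicates.

```go
package sketch

type peer interface{ Send(msg []byte) }

// sendBatch sketches the interleaving: the outer loop walks peers and the
// inner loop walks the batch, so one copy of each message goes out before
// any message is sent a second time.
func sendBatch(peers []peer, msgs [][]byte) {
	for _, p := range peers {
		for _, m := range msgs {
			p.Send(m)
		}
	}
}
```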
Marco Munizaga
50ccc5ca90
fix(IDONTWANT)!: Do not IDONTWANT your sender (#609)
We were sending IDONTWANT to the sender of the received message. This is
pointless, as the sender should not repeat a message it already sent.
The sender could also have tracked that it had sent this peer the
message (we don't do this currently, and it's probably not necessary).

@ppopth
2025-04-30 10:58:50 +03:00
web3-bot
95a070affb
ci: uci/copy-templates (#604)
This PR was created automatically by the @web3-bot as a part of the
[Unified CI](https://github.com/ipdxco/unified-github-workflows)
project.
2025-03-28 19:05:04 +01:00
Hlib Kanunnikov
68726389f2
feat: WithValidatorData publishing option (#603)
Micro-optimization that helps prevent deserialization of published
messages in local subscriptions
2025-03-13 12:59:39 +02:00
Aayush Rajasekaran
f486808fbe
feat: avoid repeated checksum calculations (#599)
We don't need to recalculate the checksum for each peer.
2025-02-24 00:39:38 +02:00
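A sketch of the hoisting this describes, with hypothetical names and crc32 standing in for whatever checksum the code actually uses:

```go
package sketch

import "hash/crc32"

type peer interface{ Send(sum uint32, data []byte) }

// broadcast computes the checksum once, outside the peer loop, and reuses
// it for every peer instead of recalculating per recipient.
func broadcast(peers []peer, data []byte) {
	sum := crc32.ChecksumIEEE(data) // computed once
	for _, p := range peers {
		p.Send(sum, data)
	}
}
```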
Andrew Gillis
b50197ee8b
Upgrade go-libp2p to v0.39.1 (#598) 2025-02-18 05:41:46 -08:00
web3-bot
bfcc7c4889
ci: uci/update-go (#595)
This PR was created automatically by the @web3-bot as a part of the
[Unified CI](https://github.com/ipdxco/unified-github-workflows)
project.
2025-02-16 22:31:20 +01:00
Marco Munizaga
9b90c72ced
Release v0.13.0 (#593) 2025-02-06 20:19:16 -05:00
Pop Chunhapanya
bf5b583843
Allow cancelling IWANT using IDONTWANT (#591)
As specified in the Gossipsub v1.2 spec, we should allow cancelling
IWANT by IDONTWANT.

That is, if an IDONTWANT has already arrived, we should not process the
IWANT.

However, due to the code structure, we can cancel IWANT only in
handleIWant.


https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.2.md#cancelling-iwant
2024-12-30 22:25:26 +02:00
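A sketch of the rule in handleIWant terms (illustrative structure only): an IWANT for an ID the peer has already IDONTWANT'ed is dropped instead of queued.

```go
package sketch

// filterIWant sketches the cancellation: IDs covered by a prior IDONTWANT
// from the requesting peer are skipped rather than scheduled for sending.
func filterIWant(wanted []string, idontwant map[string]bool) []string {
	var toSend []string
	for _, id := range wanted {
		if idontwant[id] {
			continue // cancelled by a prior IDONTWANT
		}
		toSend = append(toSend, id)
	}
	return toSend
}
```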
Nishant Das
0936035d5f
Improve IDONTWANT Flood Protection (#590)
In this PR we add a new config parameter called `MaxIDontWantLength`,
which is used much like `MaxIHaveLength` is used for `IHAVE` messages.
This parameter is currently set to `10`.

The main purpose is to bring how IDONTWANT messages are handled in line
with how IHAVE messages are handled. We add the relevant changes to the
`handleIDontWant` method, along with a new regression test for this
check.
2024-12-28 13:49:13 +02:00
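A sketch of the cap (the parameter name and value come from the PR text; the surrounding structure is illustrative):

```go
package sketch

const MaxIDontWantLength = 10 // value stated in the PR

// handleIDontWant sketches the flood protection: only the first
// MaxIDontWantLength message IDs of a control message are honored,
// mirroring how MaxIHaveLength bounds IHAVE handling.
func handleIDontWant(ids []string, seen map[string]bool) {
	if len(ids) > MaxIDontWantLength {
		ids = ids[:MaxIDontWantLength]
	}
	for _, id := range ids {
		seen[id] = true
	}
}
```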
Nishant Das
3536508a9d
Fix the Router's Ability to Prune the Mesh Periodically (#589)
When a new peer wants to graft us into their mesh, we check our current
mesh size to determine whether we can add any more new peers to it. This
is done to prevent our mesh size from being greater than `Dhi` and
prevent mesh takeover attacks here:


c06df2f9a3/gossipsub.go (L943)

During every heartbeat we check our mesh size and if it is **greater**
than `Dhi` then we will prune our mesh back down to `D`.

c06df2f9a3/gossipsub.go (L1608)

However, if you look closely at both lines, there is a problematic end
result: we only stop grafting new peers into our mesh once our current
mesh size is **greater than or equal to** `Dhi`, but we only prune peers
if the current mesh size is strictly greater than `Dhi`.

This leaves the mesh in a state of stasis at `Dhi`. Rather than float
between `D` and `Dhi`, the mesh stagnates at `Dhi`, which effectively
raises the node's target degree from `D` to `Dhi`. This has been
observed on Ethereum mainnet by recording mesh interactions and message
fulfillment from those peers.

This PR fixes it by adding an equality check to the conditional so that
the mesh can be periodically pruned. The PR also adds a regression test for
this particular case.
2024-12-26 20:02:14 +02:00
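A sketch of the off-by-one and its fix (illustrative code; `D` and `Dhi` as in the gossipsub parameters):

```go
package sketch

// pruneIfNeeded sketches the heartbeat check after the fix: pruning now
// triggers at len(mesh) >= dhi rather than only at len(mesh) > dhi, which
// matches the graft-side check and lets the mesh float between D and Dhi
// instead of parking at Dhi.
func pruneIfNeeded(mesh []string, d, dhi int) []string {
	if len(mesh) >= dhi { // the added equality is the fix
		return mesh[:d] // prune back down to D (peer selection elided)
	}
	return mesh
}
```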
Yahya Hassanzadeh, Ph.D.
c06df2f9a3
Add Function to Enable Application Layer to Send Direct Control Messages (#562)
### PR Description

This PR addresses https://github.com/libp2p/go-libp2p-pubsub/issues/561
by adding a new `SendControl` function to the `GossipSubRouter`. This
allows the application layer to send direct control messages to peers,
facilitating finer-grained testing.
2024-10-18 23:28:24 +03:00
Pavel Zbitskiy
f71345c1ec
Do not format expensive debug messages in non-debug levels in doDropRPC (#580)
In high-load scenarios when the consumer is slow, `doDropRPC` is called
often and makes unnecessary extra allocations formatting the `log.Debug`
message.

Fixed by checking log level before running expensive formatting.

Before:
```
BenchmarkAllocDoDropRPC-10    	13684732	        76.28 ns/op	     144 B/op	       3 allocs/op
```

After:
```
BenchmarkAllocDoDropRPC-10    	28140273	        42.88 ns/op	     112 B/op	       1 allocs/op
```
2024-09-25 09:33:35 +03:00
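A sketch of the guard; `DebugEnabled` is a hypothetical level check, not the actual go-log API:

```go
package sketch

import "fmt"

type logger interface {
	DebugEnabled() bool // hypothetical: reports whether debug logs are on
	Debug(msg string)
}

// dropRPC sketches the fix: the fmt.Sprintf runs only when debug logging
// is enabled, so the hot path skips the formatting allocations entirely.
func dropRPC(log logger, peerID string) {
	if log.DebugEnabled() {
		log.Debug(fmt.Sprintf("dropping RPC to %s: queue full", peerID))
	}
}
```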
Andrew Gillis
4c13974188
Update go-libp2p to latest (#578)
- Update to go-libp2p v0.36.3
2024-09-09 08:42:16 -07:00
web3-bot
b8a6a868ad
ci: uci/update-go (#577)
This PR was created automatically by the @web3-bot as a part of the
[Unified CI](https://github.com/ipdxco/unified-github-workflows)
project.
2024-08-26 16:58:30 +02:00
richΛrd
b0f2429ca2
Merge branch 'libp2p:master' into master 2024-08-23 10:33:42 -04:00
sukun
1aeb6ebc6a
chore: upgrade go-libp2p (#575)
Co-authored-by: Steven Allen <steven@stebalien.com>
2024-08-16 19:57:24 +03:00
Pop Chunhapanya
b421b3ab05
GossipSub v1.2: IDONTWANT control message and priority queue. (#553)
## GossipSub v1.2 implementation

Specification: libp2p/specs#548

### Work Summary

Sending IDONTWANT

- Implement a smart queue
- Add priorities to the smart queue
- Put IDONTWANT packets into the smart priority queue as soon as the node gets the packets

Handling IDONTWANT

- Use a map to remember the message IDs whose IDONTWANT packets have been received
- Implement max_idontwant_messages (ignore the IDONTWANT packets if the max is reached)
- Clear the message IDs from the cache after 3 heartbeats
- Hash the message IDs before putting them into the cache

More requested features

- Add a feature test to not send IDONTWANT if the other side doesn't support it

### Commit Summary

* Replace sending channel with the smart rpcQueue

Since we want to implement a priority queue later, we need to replace
the normal sending channels with the new smart structures first.

* Implement UrgentPush in the smart rpcQueue

UrgentPush allows you to push an rpc packet to the front of the queue so
that it will be popped out fast.

* Add IDONTWANT to rpc.proto and trace.proto

* Send IDONTWANT right before validation step

Most importantly, this commit adds a new method called PreValidation to
the interface PubSubRouter, which will be called right before validating
the gossipsub message.

In GossipSubRouter, PreValidation will send the IDONTWANT control
messages to all the mesh peers of the topics of the received messages.

* Test GossipSub IDONTWANT sending

* Send IDONTWANT only for large messages

* Handle IDONTWANT control messages

When receiving IDONTWANTs, the host should remember the message ids
contained in IDONTWANTs using a hash map.

When receiving messages with those ids, it shouldn't forward them to the
peers who already sent the IDONTWANTs.

When the maximum number of IDONTWANTs is reached for any particular
peer, the host should ignore any excessive IDONTWANTs from that peer.

* Clear expired message IDs from the IDONTWANT cache

If the message IDs received from IDONTWANTs are older than 3
heartbeats, they should be removed from the IDONTWANT cache.

* Keep the hashes of IDONTWANT message ids instead

Rather than keeping the raw message IDs, keep their hashes instead to
save memory and protect against memory DoS attacks.

* Increase GossipSubMaxIHaveMessages to 1000

* fixup! Clear expired message IDs from the IDONTWANT cache

* Do not send IDONTWANT if the receiver doesn't support it

* fixup! Replace sending channel with the smart rpcQueue

* Do not use pointers in rpcQueue

* Simplify rpcQueue by using only one mutex

* Check ctx error in rpc sending worker

Co-authored-by: Steven Allen <steven@stebalien.com>

* fixup! Simplify rpcQueue by using only one mutex

* fixup! Keep the hashes of IDONTWANT message ids instead

* Use AfterFunc instead of implementing our own

* Fix misc lint errors

* fixup! Fix misc lint errors

* Revert "Increase GossipSubMaxIHaveMessages to 1000"

This reverts commit 6fabcdd068a5f5238c5280a3460af9c3998418ec.

* Increase GossipSubMaxIDontWantMessages to 1000

* fixup! Handle IDONTWANT control messages

* Skip TestGossipsubConnTagMessageDeliveries

* Skip FuzzAppendOrMergeRPC

* Revert "Skip FuzzAppendOrMergeRPC"

This reverts commit f141e13234de0960d139339acb636a1afea9e219.

* fixup! Send IDONTWANT only for large messages

* fixup! fixup! Keep the hashes of IDONTWANT message ids instead

* fixup! Implement UrgentPush in the smart rpcQueue

* fixup! Use AfterFunc instead of implementing our own

---------

Co-authored-by: Steven Allen <steven@stebalien.com>
2024-08-16 18:16:35 +03:00
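A sketch tying the IDONTWANT-handling pieces together (illustrative structure, not the library's code): received message IDs are hashed before caching, and entries are cleared after three heartbeats.

```go
package sketch

import "crypto/sha256"

const idontwantTTLHeartbeats = 3 // from the commit summary

type idontwantCache struct {
	entries map[[32]byte]int // hash of message ID -> heartbeat when recorded
}

func newIDontWantCache() *idontwantCache {
	return &idontwantCache{entries: make(map[[32]byte]int)}
}

// add hashes the raw message ID before storing it, bounding per-entry
// memory and blunting memory-DoS via attacker-chosen long IDs.
func (c *idontwantCache) add(msgID string, heartbeat int) {
	c.entries[sha256.Sum256([]byte(msgID))] = heartbeat
}

// sweep drops entries recorded three or more heartbeats ago.
func (c *idontwantCache) sweep(heartbeat int) {
	for h, seen := range c.entries {
		if heartbeat-seen >= idontwantTTLHeartbeats {
			delete(c.entries, h)
		}
	}
}
```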
38 changed files with 3680 additions and 717 deletions

.github/workflows/generated-pr.yml (new file)

@ -0,0 +1,14 @@
name: Close Generated PRs
on:
schedule:
- cron: '0 0 * * *'
workflow_dispatch:
permissions:
issues: write
pull-requests: write
jobs:
stale:
uses: ipdxco/unified-github-workflows/.github/workflows/reusable-generated-pr.yml@v1


@ -1,4 +1,4 @@
{
"skipOSes": ["windows", "macos"],
"skipRace": true
"skip32bit": true
}


@ -1,8 +1,9 @@
name: Close and mark stale issue
name: Close Stale Issues
on:
schedule:
- cron: '0 0 * * *'
workflow_dispatch:
permissions:
issues: write
@ -10,4 +11,4 @@ permissions:
jobs:
stale:
uses: pl-strflt/.github/.github/workflows/reusable-stale-issue.yml@v0.3
uses: ipdxco/unified-github-workflows/.github/workflows/reusable-stale-issue.yml@v1


@ -96,11 +96,17 @@ func TestBackoff_Clean(t *testing.T) {
if err != nil {
t.Fatalf("unexpected error post update: %s", err)
}
b.mu.Lock()
b.info[id].lastTried = time.Now().Add(-TimeToLive) // enforces expiry
b.mu.Unlock()
}
if len(b.info) != size {
t.Fatalf("info map size mismatch, expected: %d, got: %d", size, len(b.info))
b.mu.Lock()
infoLen := len(b.info)
b.mu.Unlock()
if infoLen != size {
t.Fatalf("info map size mismatch, expected: %d, got: %d", size, infoLen)
}
// waits for a cleanup loop to kick-in

comm.go

@ -114,7 +114,7 @@ func (p *PubSub) notifyPeerDead(pid peer.ID) {
}
}
func (p *PubSub) handleNewPeer(ctx context.Context, pid peer.ID, outgoing <-chan *RPC) {
func (p *PubSub) handleNewPeer(ctx context.Context, pid peer.ID, outgoing *rpcQueue) {
s, err := p.host.NewStream(p.ctx, pid, p.rt.Protocols()...)
if err != nil {
log.Debug("opening new stream to peer: ", err, pid)
@ -135,7 +135,7 @@ func (p *PubSub) handleNewPeer(ctx context.Context, pid peer.ID, outgoing <-chan
}
}
func (p *PubSub) handleNewPeerWithBackoff(ctx context.Context, pid peer.ID, backoff time.Duration, outgoing <-chan *RPC) {
func (p *PubSub) handleNewPeerWithBackoff(ctx context.Context, pid peer.ID, backoff time.Duration, outgoing *rpcQueue) {
select {
case <-time.After(backoff):
p.handleNewPeer(ctx, pid, outgoing)
@ -156,7 +156,7 @@ func (p *PubSub) handlePeerDead(s network.Stream) {
p.notifyPeerDead(pid)
}
func (p *PubSub) handleSendingMessages(ctx context.Context, s network.Stream, outgoing <-chan *RPC) {
func (p *PubSub) handleSendingMessages(ctx context.Context, s network.Stream, outgoing *rpcQueue) {
writeRpc := func(rpc *RPC) error {
size := uint64(rpc.Size())
@ -174,20 +174,17 @@ func (p *PubSub) handleSendingMessages(ctx context.Context, s network.Stream, ou
}
defer s.Close()
for {
select {
case rpc, ok := <-outgoing:
if !ok {
return
}
for ctx.Err() == nil {
rpc, err := outgoing.Pop(ctx)
if err != nil {
log.Debugf("popping message from the queue to send to %s: %s", s.Conn().RemotePeer(), err)
return
}
err := writeRpc(rpc)
if err != nil {
s.Reset()
log.Debugf("writing message to %s: %s", s.Conn().RemotePeer(), err)
return
}
case <-ctx.Done():
err = writeRpc(rpc)
if err != nil {
s.Reset()
log.Debugf("writing message to %s: %s", s.Conn().RemotePeer(), err)
return
}
}
@ -209,15 +206,17 @@ func rpcWithControl(msgs []*pb.Message,
ihave []*pb.ControlIHave,
iwant []*pb.ControlIWant,
graft []*pb.ControlGraft,
prune []*pb.ControlPrune) *RPC {
prune []*pb.ControlPrune,
idontwant []*pb.ControlIDontWant) *RPC {
return &RPC{
RPC: pb.RPC{
Publish: msgs,
Control: &pb.ControlMessage{
Ihave: ihave,
Iwant: iwant,
Graft: graft,
Prune: prune,
Ihave: ihave,
Iwant: iwant,
Graft: graft,
Prune: prune,
Idontwant: idontwant,
},
},
}


@ -5,10 +5,11 @@ package compat_pb
import (
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
io "io"
math "math"
math_bits "math/bits"
proto "github.com/gogo/protobuf/proto"
)
// Reference imports to suppress errors if they are not otherwise used.


@ -71,6 +71,8 @@ func (fs *FloodSubRouter) AcceptFrom(peer.ID) AcceptStatus {
return AcceptAll
}
func (fs *FloodSubRouter) Preprocess(from peer.ID, msgs []*Message) {}
func (fs *FloodSubRouter) HandleRPC(rpc *RPC) {}
func (fs *FloodSubRouter) Publish(msg *Message) {
@ -83,19 +85,19 @@ func (fs *FloodSubRouter) Publish(msg *Message) {
continue
}
mch, ok := fs.p.peers[pid]
q, ok := fs.p.peers[pid]
if !ok {
continue
}
select {
case mch <- out:
fs.tracer.SendRPC(out, pid)
default:
err := q.Push(out, false)
if err != nil {
log.Infof("dropping message to peer %s: queue full", pid)
fs.tracer.DropRPC(out, pid)
// Drop it. The peer is too slow.
continue
}
fs.tracer.SendRPC(out, pid)
}
}


@ -42,8 +42,7 @@ func checkMessageRouting(t *testing.T, topic string, pubs []*PubSub, subs []*Sub
}
func connect(t *testing.T, a, b host.Host) {
pinfo := a.Peerstore().PeerInfo(a.ID())
err := b.Connect(context.Background(), pinfo)
err := b.Connect(context.Background(), peer.AddrInfo{ID: a.ID(), Addrs: a.Addrs()})
if err != nil {
t.Fatal(err)
}
@ -269,8 +268,11 @@ func TestReconnects(t *testing.T) {
t.Fatal("timed out waiting for B chan to be closed")
}
nSubs := len(psubs[2].mySubs["cats"])
if nSubs > 0 {
nSubs := make(chan int)
psubs[2].eval <- func() {
nSubs <- len(psubs[2].mySubs["cats"])
}
if <-nSubs > 0 {
t.Fatal(`B should have 0 subscribers for channel "cats", has`, nSubs)
}
@ -867,9 +869,14 @@ func TestImproperlySignedMessageRejected(t *testing.T) {
t.Fatal(err)
}
var adversaryMessages []*Message
adversaryMessagesCh := make(chan []*Message)
adversaryContext, adversaryCancel := context.WithCancel(ctx)
go func(ctx context.Context) {
var adversaryMessages []*Message
defer func() {
adversaryMessagesCh <- adversaryMessages
}()
for {
select {
case <-ctx.Done():
@ -886,6 +893,7 @@ func TestImproperlySignedMessageRejected(t *testing.T) {
<-time.After(1 * time.Second)
adversaryCancel()
adversaryMessages := <-adversaryMessagesCh
// Ensure the adversary successfully publishes the incorrectly signed
// message. If the adversary "sees" this, we successfully got through
@ -896,9 +904,13 @@ func TestImproperlySignedMessageRejected(t *testing.T) {
// the honest peer's validation process will drop the message;
// next will never furnish the incorrect message.
var honestPeerMessages []*Message
honestPeerMessagesCh := make(chan []*Message)
honestPeerContext, honestPeerCancel := context.WithCancel(ctx)
go func(ctx context.Context) {
var honestPeerMessages []*Message
defer func() {
honestPeerMessagesCh <- honestPeerMessages
}()
for {
select {
case <-ctx.Done():
@ -916,6 +928,7 @@ func TestImproperlySignedMessageRejected(t *testing.T) {
<-time.After(1 * time.Second)
honestPeerCancel()
honestPeerMessages := <-honestPeerMessagesCh
if len(honestPeerMessages) != 1 {
t.Fatalf("got %d messages, expected 1", len(honestPeerMessages))
}

go.mod

@ -1,111 +1,119 @@
module github.com/libp2p/go-libp2p-pubsub
go 1.21
go 1.24
require (
github.com/benbjohnson/clock v1.3.5
github.com/gogo/protobuf v1.3.2
github.com/ipfs/go-log/v2 v2.5.1
github.com/libp2p/go-buffer-pool v0.1.0
github.com/libp2p/go-libp2p v0.34.0
github.com/libp2p/go-libp2p v0.39.1
github.com/libp2p/go-libp2p-testing v0.12.0
github.com/libp2p/go-msgio v0.3.0
github.com/multiformats/go-multiaddr v0.12.4
github.com/multiformats/go-multiaddr v0.14.0
github.com/multiformats/go-varint v0.0.7
go.uber.org/zap v1.27.0
)
require (
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/containerd/cgroups v1.1.0 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.3.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/elastic/gosigar v0.14.2 // indirect
github.com/elastic/gosigar v0.14.3 // indirect
github.com/flynn/noise v1.1.0 // indirect
github.com/francoispqt/gojay v1.2.13 // indirect
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 // indirect
github.com/go-task/slim-sprig/v3 v3.0.0 // indirect
github.com/godbus/dbus/v5 v5.1.0 // indirect
github.com/google/gopacket v1.1.19 // indirect
github.com/google/pprof v0.0.0-20240207164012-fb44976bdcd5 // indirect
github.com/google/uuid v1.4.0 // indirect
github.com/gorilla/websocket v1.5.1 // indirect
github.com/google/pprof v0.0.0-20250202011525-fc3143867406 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/gorilla/websocket v1.5.3 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect
github.com/huin/goupnp v1.3.0 // indirect
github.com/ipfs/go-cid v0.4.1 // indirect
github.com/ipfs/go-cid v0.5.0 // indirect
github.com/jackpal/go-nat-pmp v1.0.2 // indirect
github.com/jbenet/go-temp-err-catcher v0.1.0 // indirect
github.com/klauspost/compress v1.17.8 // indirect
github.com/klauspost/cpuid/v2 v2.2.7 // indirect
github.com/koron/go-ssdp v0.0.4 // indirect
github.com/libp2p/go-flow-metrics v0.1.0 // indirect
github.com/klauspost/compress v1.17.11 // indirect
github.com/klauspost/cpuid/v2 v2.2.9 // indirect
github.com/koron/go-ssdp v0.0.5 // indirect
github.com/libp2p/go-flow-metrics v0.2.0 // indirect
github.com/libp2p/go-libp2p-asn-util v0.4.1 // indirect
github.com/libp2p/go-nat v0.2.0 // indirect
github.com/libp2p/go-netroute v0.2.1 // indirect
github.com/libp2p/go-netroute v0.2.2 // indirect
github.com/libp2p/go-reuseport v0.4.0 // indirect
github.com/libp2p/go-yamux/v4 v4.0.1 // indirect
github.com/libp2p/go-yamux/v4 v4.0.2 // indirect
github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/miekg/dns v1.1.58 // indirect
github.com/miekg/dns v1.1.63 // indirect
github.com/mikioh/tcpinfo v0.0.0-20190314235526-30a79bb1804b // indirect
github.com/mikioh/tcpopt v0.0.0-20190314235656-172688c1accc // indirect
github.com/minio/sha256-simd v1.0.1 // indirect
github.com/mr-tron/base58 v1.2.0 // indirect
github.com/multiformats/go-base32 v0.1.0 // indirect
github.com/multiformats/go-base36 v0.2.0 // indirect
github.com/multiformats/go-multiaddr-dns v0.3.1 // indirect
github.com/multiformats/go-multiaddr-dns v0.4.1 // indirect
github.com/multiformats/go-multiaddr-fmt v0.1.0 // indirect
github.com/multiformats/go-multibase v0.2.0 // indirect
github.com/multiformats/go-multicodec v0.9.0 // indirect
github.com/multiformats/go-multihash v0.2.3 // indirect
github.com/multiformats/go-multistream v0.5.0 // indirect
github.com/onsi/ginkgo/v2 v2.15.0 // indirect
github.com/multiformats/go-multistream v0.6.0 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/onsi/ginkgo/v2 v2.22.2 // indirect
github.com/opencontainers/runtime-spec v1.2.0 // indirect
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 // indirect
github.com/pion/datachannel v1.5.6 // indirect
github.com/pion/dtls/v2 v2.2.11 // indirect
github.com/pion/ice/v2 v2.3.24 // indirect
github.com/pion/interceptor v0.1.29 // indirect
github.com/pion/logging v0.2.2 // indirect
github.com/pion/datachannel v1.5.10 // indirect
github.com/pion/dtls/v2 v2.2.12 // indirect
github.com/pion/dtls/v3 v3.0.4 // indirect
github.com/pion/ice/v2 v2.3.37 // indirect
github.com/pion/ice/v4 v4.0.6 // indirect
github.com/pion/interceptor v0.1.37 // indirect
github.com/pion/logging v0.2.3 // indirect
github.com/pion/mdns v0.0.12 // indirect
github.com/pion/mdns/v2 v2.0.7 // indirect
github.com/pion/randutil v0.1.0 // indirect
github.com/pion/rtcp v1.2.14 // indirect
github.com/pion/rtp v1.8.6 // indirect
github.com/pion/sctp v1.8.16 // indirect
github.com/pion/sdp/v3 v3.0.9 // indirect
github.com/pion/srtp/v2 v2.0.18 // indirect
github.com/pion/rtcp v1.2.15 // indirect
github.com/pion/rtp v1.8.11 // indirect
github.com/pion/sctp v1.8.35 // indirect
github.com/pion/sdp/v3 v3.0.10 // indirect
github.com/pion/srtp/v3 v3.0.4 // indirect
github.com/pion/stun v0.6.1 // indirect
github.com/pion/transport/v2 v2.2.5 // indirect
github.com/pion/stun/v3 v3.0.0 // indirect
github.com/pion/transport/v2 v2.2.10 // indirect
github.com/pion/transport/v3 v3.0.7 // indirect
github.com/pion/turn/v2 v2.1.6 // indirect
github.com/pion/webrtc/v3 v3.2.40 // indirect
github.com/pion/turn/v4 v4.0.0 // indirect
github.com/pion/webrtc/v4 v4.0.8 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/client_golang v1.19.1 // indirect
github.com/prometheus/client_golang v1.20.5 // indirect
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/common v0.48.0 // indirect
github.com/prometheus/procfs v0.12.0 // indirect
github.com/quic-go/qpack v0.4.0 // indirect
github.com/quic-go/quic-go v0.44.0 // indirect
github.com/quic-go/webtransport-go v0.8.0 // indirect
github.com/prometheus/common v0.62.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/quic-go/qpack v0.5.1 // indirect
github.com/quic-go/quic-go v0.49.0 // indirect
github.com/quic-go/webtransport-go v0.8.1-0.20241018022711-4ac2c9250e66 // indirect
github.com/raulk/go-watchdog v1.3.0 // indirect
github.com/spaolacci/murmur3 v1.1.0 // indirect
github.com/stretchr/testify v1.9.0 // indirect
go.uber.org/dig v1.17.1 // indirect
go.uber.org/fx v1.21.1 // indirect
go.uber.org/mock v0.4.0 // indirect
github.com/stretchr/testify v1.10.0 // indirect
github.com/wlynxg/anet v0.0.5 // indirect
go.uber.org/dig v1.18.0 // indirect
go.uber.org/fx v1.23.0 // indirect
go.uber.org/mock v0.5.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.0 // indirect
golang.org/x/crypto v0.23.0 // indirect
golang.org/x/exp v0.0.0-20240506185415-9bf2ced13842 // indirect
golang.org/x/mod v0.17.0 // indirect
golang.org/x/net v0.25.0 // indirect
golang.org/x/sync v0.7.0 // indirect
golang.org/x/sys v0.20.0 // indirect
golang.org/x/text v0.15.0 // indirect
golang.org/x/tools v0.21.0 // indirect
google.golang.org/protobuf v1.34.1 // indirect
golang.org/x/crypto v0.32.0 // indirect
golang.org/x/exp v0.0.0-20250128182459-e0ece0dbea4c // indirect
golang.org/x/mod v0.23.0 // indirect
golang.org/x/net v0.34.0 // indirect
golang.org/x/sync v0.11.0 // indirect
golang.org/x/sys v0.30.0 // indirect
golang.org/x/text v0.22.0 // indirect
golang.org/x/tools v0.29.0 // indirect
google.golang.org/protobuf v1.36.4 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
lukechampine.com/blake3 v1.2.1 // indirect
lukechampine.com/blake3 v1.3.0 // indirect
)

go.sum

@ -18,8 +18,8 @@ github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bradfitz/go-smtpd v0.0.0-20170404230938-deb6d6237625/go.mod h1:HYsPBTaaSFSlLx/70C2HPIMNZpVV8+vt/A+FMnYP11g=
github.com/buger/jsonparser v0.0.0-20181115193947-bf1c66bbce23/go.mod h1:bbYlZJ7hK1yFx9hf58LP0zeX7UjIGs20ufpu3evjr+s=
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cilium/ebpf v0.2.0/go.mod h1:To2CFviqOWL/M0gIMsvSMlqe7em/l1ALkX1PyjrX2Qs=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/containerd/cgroups v0.0.0-20201119153540-4cbc285b3327/go.mod h1:ZJeTFisyysqgcCdecO57Dj79RfL0LNeGiFUqLYQRYLE=
@ -45,8 +45,8 @@ github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/elastic/gosigar v0.12.0/go.mod h1:iXRIGg2tLnu7LBdpqzyQfGDEidKCfWcCMS0WKyPWoMs=
github.com/elastic/gosigar v0.14.2 h1:Dg80n8cr90OZ7x+bAax/QjoW/XqTI11RmA79ZwIm9/4=
github.com/elastic/gosigar v0.14.2/go.mod h1:iXRIGg2tLnu7LBdpqzyQfGDEidKCfWcCMS0WKyPWoMs=
github.com/elastic/gosigar v0.14.3 h1:xwkKwPia+hSfg9GqrCUKYdId102m9qTJIIr7egmK/uo=
github.com/elastic/gosigar v0.14.3/go.mod h1:iXRIGg2tLnu7LBdpqzyQfGDEidKCfWcCMS0WKyPWoMs=
github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc=
github.com/flynn/noise v1.1.0 h1:KjPQoQCEFdZDiP03phOvGi11+SVVhBG2wOWAorLsstg=
github.com/flynn/noise v1.1.0/go.mod h1:xbMo+0i6+IGbYdJhF31t2eR1BIU0CYc12+BNAKwUTag=
@ -56,10 +56,10 @@ github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMo
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/gliderlabs/ssh v0.1.1/go.mod h1:U7qILu1NlMHj9FlMhZLlkCdDnU1DBEAqr0aevW3Awn0=
github.com/go-errors/errors v1.0.1/go.mod h1:f4zRHt4oKfwPJE5k8C9vpYG+aDHdBFUsgrm6/TyX73Q=
github.com/go-logr/logr v1.3.0 h1:2y3SDp0ZXuc6/cjLSZ+Q3ir+QB9T/iG5yYRXqsagWSY=
github.com/go-logr/logr v1.3.0/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 h1:tfuBGBXKqDEevZMzYi5KSi8KkcZtzBcTgAUUtapy0OI=
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572/go.mod h1:9Pwr4B2jHnOSGXyyzV8ROjYa2ojvAY6HCGYYfMoC3Ls=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
github.com/godbus/dbus/v5 v5.0.3/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.1.0 h1:4KLkAxT3aOY8Li4FRJe/KvhoNFFxo0m6fNuFUO8QJUk=
@ -74,8 +74,6 @@ github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfb
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
@ -87,24 +85,24 @@ github.com/google/gopacket v1.1.19 h1:ves8RnFZPGiFnTS0uPQStjwru6uO6h+nlr9j6fL7kF
github.com/google/gopacket v1.1.19/go.mod h1:iJ8V8n6KS+z2U1A8pUwu8bW5SyEMkXJB8Yo/Vo+TKTo=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20240207164012-fb44976bdcd5 h1:E/LAvt58di64hlYjx7AsNS6C/ysHWYo+2qPCZKTQhRo=
github.com/google/pprof v0.0.0-20240207164012-fb44976bdcd5/go.mod h1:czg5+yv1E0ZGTi6S6vVK1mke0fV+FaUhNGcd6VRS9Ik=
github.com/google/pprof v0.0.0-20250202011525-fc3143867406 h1:wlQI2cYY0BsWmmPPAnxfQ8SDW0S3Jasn+4B8kXFxprg=
github.com/google/pprof v0.0.0-20250202011525-fc3143867406/go.mod h1:vavhavw2zAxS5dIdcRluK6cSGGPlZynqzFM8NdvU144=
github.com/google/uuid v1.3.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.4.0 h1:MtMxsa51/r9yyhkyLsVeVt0B+BGQZzpQiTQ4eHZ8bc4=
github.com/google/uuid v1.4.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go v2.0.0+incompatible/go.mod h1:SFVmujtThgffbyetf+mdk2eWhX2bMyUtNHzFKcPA9HY=
github.com/googleapis/gax-go/v2 v2.0.3/go.mod h1:LLvjysVCY1JZeum8Z6l8qUty8fiNwE08qbEPm1M08qg=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/websocket v1.5.1 h1:gmztn0JnHVt9JZquRuzLw3g4wouNVzKL15iLr/zn/QY=
github.com/gorilla/websocket v1.5.1/go.mod h1:x3kM2JMyaluk02fnUJpQuwD2dCS5NDG2ZHL0uE0tcaY=
github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg=
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/grpc-gateway v1.5.0/go.mod h1:RSKVYQBd5MCa4OVpNdGskqpgL2+G+NZTnrVHpWWfpdw=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/huin/goupnp v1.3.0 h1:UvLUlWDNpoUdYzb2TCn+MuTWtcjXKSza2n6CBdQ0xXc=
github.com/huin/goupnp v1.3.0/go.mod h1:gnGPsThkYa7bFi/KWmEysQRf48l2dvR5bxr2OFckNX8=
github.com/ipfs/go-cid v0.4.1 h1:A/T3qGvxi4kpKWWcPC/PgbvDA2bjVLO7n4UeVwnbs/s=
github.com/ipfs/go-cid v0.4.1/go.mod h1:uQHwDeX4c6CtyrFwdqyhpNcxVewur1M7l7fNU7LKwZk=
github.com/ipfs/go-cid v0.5.0 h1:goEKKhaGm0ul11IHA7I6p1GmKz8kEYniqFopaB5Otwg=
github.com/ipfs/go-cid v0.5.0/go.mod h1:0L7vmeNXpQpUS9vt+yEARkJ8rOg43DF3iPgn4GIN0mk=
github.com/ipfs/go-log/v2 v2.5.1 h1:1XdUzF7048prq4aBjDQQ4SL5RxftpRGdXhNRwKSAlcY=
github.com/ipfs/go-log/v2 v2.5.1/go.mod h1:prSpmC1Gpllc9UYWxDiZDreBYw7zp4Iqp1kOLU9U5UI=
github.com/jackpal/go-nat-pmp v1.0.2 h1:KzKSgb7qkJvOUTqYl9/Hg/me3pWgBmERKrTGD7BdWus=
@ -117,12 +115,12 @@ github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.17.8 h1:YcnTYrq7MikUT7k0Yb5eceMmALQPYBW/Xltxn0NAMnU=
github.com/klauspost/compress v1.17.8/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
github.com/klauspost/cpuid/v2 v2.2.7 h1:ZWSB3igEs+d0qvnxR/ZBzXVmxkgt8DdzP6m9pfuVLDM=
github.com/klauspost/cpuid/v2 v2.2.7/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws=
github.com/koron/go-ssdp v0.0.4 h1:1IDwrghSKYM7yLf7XCzbByg2sJ/JcNOZRXS2jczTwz0=
github.com/koron/go-ssdp v0.0.4/go.mod h1:oDXq+E5IL5q0U8uSBcoAXzTzInwy5lEgC91HoKtbmZk=
github.com/klauspost/compress v1.17.11 h1:In6xLpyWOi1+C7tXUUWv2ot1QvBjxevKAaI6IXrJmUc=
github.com/klauspost/compress v1.17.11/go.mod h1:pMDklpSncoRMuLFrf1W9Ss9KT+0rH90U12bZKk7uwG0=
github.com/klauspost/cpuid/v2 v2.2.9 h1:66ze0taIn2H33fBvCkXuv9BmCwDfafmiIVpKV9kKGuY=
github.com/klauspost/cpuid/v2 v2.2.9/go.mod h1:rqkxqrZ1EhYM9G+hXH7YdowN5R5RGN6NK4QwQ3WMXF8=
github.com/koron/go-ssdp v0.0.5 h1:E1iSMxIs4WqxTbIBLtmNBeOOC+1sCIXQeqTWVnpmwhk=
github.com/koron/go-ssdp v0.0.5/go.mod h1:Qm59B7hpKpDqfyRNWRNr00jGwLdXjDyZh6y7rH6VS0w=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
@ -134,10 +132,10 @@ github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/libp2p/go-buffer-pool v0.1.0 h1:oK4mSFcQz7cTQIfqbe4MIj9gLW+mnanjyFtc6cdF0Y8=
github.com/libp2p/go-buffer-pool v0.1.0/go.mod h1:N+vh8gMqimBzdKkSMVuydVDq+UV5QTWy5HSiZacSbPg=
github.com/libp2p/go-flow-metrics v0.1.0 h1:0iPhMI8PskQwzh57jB9WxIuIOQ0r+15PChFGkx3Q3WM=
github.com/libp2p/go-flow-metrics v0.1.0/go.mod h1:4Xi8MX8wj5aWNDAZttg6UPmc0ZrnFNsMtpsYUClFtro=
github.com/libp2p/go-libp2p v0.34.0 h1:J+SL3DMz+zPz06OHSRt42GKA5n5hmwgY1l7ckLUz3+c=
github.com/libp2p/go-libp2p v0.34.0/go.mod h1:snyJQix4ET6Tj+LeI0VPjjxTtdWpeOhYt5lEY0KirkQ=
github.com/libp2p/go-flow-metrics v0.2.0 h1:EIZzjmeOE6c8Dav0sNv35vhZxATIXWZg6j/C08XmmDw=
github.com/libp2p/go-flow-metrics v0.2.0/go.mod h1:st3qqfu8+pMfh+9Mzqb2GTiwrAGjIPszEjZmtksN8Jc=
github.com/libp2p/go-libp2p v0.39.1 h1:1Ur6rPCf3GR+g8jkrnaQaM0ha2IGespsnNlCqJLLALE=
github.com/libp2p/go-libp2p v0.39.1/go.mod h1:3zicI8Lp7Isun+Afo/JOACUbbJqqR2owK6RQWFsVAbI=
github.com/libp2p/go-libp2p-asn-util v0.4.1 h1:xqL7++IKD9TBFMgnLPZR6/6iYhawHKHl950SO9L6n94=
github.com/libp2p/go-libp2p-asn-util v0.4.1/go.mod h1:d/NI6XZ9qxw67b4e+NgpQexCIiFYJjErASrYW4PFDN8=
github.com/libp2p/go-libp2p-testing v0.12.0 h1:EPvBb4kKMWO29qP4mZGyhVzUyR25dvfUIK5WDu6iPUA=
@ -146,12 +144,12 @@ github.com/libp2p/go-msgio v0.3.0 h1:mf3Z8B1xcFN314sWX+2vOTShIE0Mmn2TXn3YCUQGNj0
github.com/libp2p/go-msgio v0.3.0/go.mod h1:nyRM819GmVaF9LX3l03RMh10QdOroF++NBbxAb0mmDM=
github.com/libp2p/go-nat v0.2.0 h1:Tyz+bUFAYqGyJ/ppPPymMGbIgNRH+WqC5QrT5fKrrGk=
github.com/libp2p/go-nat v0.2.0/go.mod h1:3MJr+GRpRkyT65EpVPBstXLvOlAPzUVlG6Pwg9ohLJk=
github.com/libp2p/go-netroute v0.2.1 h1:V8kVrpD8GK0Riv15/7VN6RbUQ3URNZVosw7H2v9tksU=
github.com/libp2p/go-netroute v0.2.1/go.mod h1:hraioZr0fhBjG0ZRXJJ6Zj2IVEVNx6tDTFQfSmcq7mQ=
github.com/libp2p/go-netroute v0.2.2 h1:Dejd8cQ47Qx2kRABg6lPwknU7+nBnFRpko45/fFPuZ8=
github.com/libp2p/go-netroute v0.2.2/go.mod h1:Rntq6jUAH0l9Gg17w5bFGhcC9a+vk4KNXs6s7IljKYE=
github.com/libp2p/go-reuseport v0.4.0 h1:nR5KU7hD0WxXCJbmw7r2rhRYruNRl2koHw8fQscQm2s=
github.com/libp2p/go-reuseport v0.4.0/go.mod h1:ZtI03j/wO5hZVDFo2jKywN6bYKWLOy8Se6DrI2E1cLU=
github.com/libp2p/go-yamux/v4 v4.0.1 h1:FfDR4S1wj6Bw2Pqbc8Uz7pCxeRBPbwsBbEdfwiCypkQ=
github.com/libp2p/go-yamux/v4 v4.0.1/go.mod h1:NWjl8ZTLOGlozrXSOZ/HlfG++39iKNnM5wwmtQP1YB4=
github.com/libp2p/go-yamux/v4 v4.0.2 h1:nrLh89LN/LEiqcFiqdKDRHjGstN300C1269K/EX0CPU=
github.com/libp2p/go-yamux/v4 v4.0.2/go.mod h1:C808cCRgOs1iBwY4S71T5oxgMxgLmqUw56qh4AeBW2o=
github.com/lunixbochs/vtclean v1.0.0/go.mod h1:pHhQNgMf3btfWnGBVipUOjRYhoOsdGqdm/+2c2E2WMI=
github.com/mailru/easyjson v0.0.0-20190312143242-1de009706dbe/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd h1:br0buuQ854V8u83wA0rVZ8ttrq5CpaPZdvrK0LP2lOk=
@ -161,9 +159,8 @@ github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWE
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/microcosm-cc/bluemonday v1.0.1/go.mod h1:hsXNsILzKxV+sX77C5b8FSuKF00vh2OMYv+xgHpAMF4=
github.com/miekg/dns v1.1.41/go.mod h1:p6aan82bvRIyn+zDIv9xYNUpwa73JcSh9BKwknJysuI=
github.com/miekg/dns v1.1.58 h1:ca2Hdkz+cDg/7eNF6V56jjzuZ4aCAE+DbVkILdQWG/4=
github.com/miekg/dns v1.1.58/go.mod h1:Ypv+3b/KadlvW9vJfXOTf300O4UqaHFzFCuHz+rPkBY=
github.com/miekg/dns v1.1.63 h1:8M5aAw6OMZfFXTT7K5V0Eu5YiiL8l7nUAkyN6C9YwaY=
github.com/miekg/dns v1.1.63/go.mod h1:6NGHfjhpmr5lt3XPLuyfDJi5AXbNIPM9PY6H6sF1Nfs=
github.com/mikioh/tcp v0.0.0-20190314235350-803a9b46060c h1:bzE/A84HN25pxAuk9Eej1Kz9OUelF97nAc82bDquQI8=
github.com/mikioh/tcp v0.0.0-20190314235350-803a9b46060c/go.mod h1:0SQS9kMwD2VsyFEB++InYyBJroV/FRmBgcydeSUcJms=
github.com/mikioh/tcpinfo v0.0.0-20190314235526-30a79bb1804b h1:z78hV3sbSMAUoyUMM0I83AUIT6Hu17AWfgjzIbtrYFc=
@ -184,11 +181,10 @@ github.com/multiformats/go-base32 v0.1.0/go.mod h1:Kj3tFY6zNr+ABYMqeUNeGvkIC/UYg
github.com/multiformats/go-base36 v0.2.0 h1:lFsAbNOGeKtuKozrtBsAkSVhv1p9D0/qedU9rQyccr0=
github.com/multiformats/go-base36 v0.2.0/go.mod h1:qvnKE++v+2MWCfePClUEjE78Z7P2a1UV0xHgWc0hkp4=
github.com/multiformats/go-multiaddr v0.1.1/go.mod h1:aMKBKNEYmzmDmxfX88/vz+J5IU55txyt0p4aiWVohjo=
github.com/multiformats/go-multiaddr v0.2.0/go.mod h1:0nO36NvPpyV4QzvTLi/lafl2y95ncPj0vFwVF6k6wJ4=
github.com/multiformats/go-multiaddr v0.12.4 h1:rrKqpY9h+n80EwhhC/kkcunCZZ7URIF8yN1WEUt2Hvc=
github.com/multiformats/go-multiaddr v0.12.4/go.mod h1:sBXrNzucqkFJhvKOiwwLyqamGa/P5EIXNPLovyhQCII=
github.com/multiformats/go-multiaddr-dns v0.3.1 h1:QgQgR+LQVt3NPTjbrLLpsaT2ufAA2y0Mkk+QRVJbW3A=
github.com/multiformats/go-multiaddr-dns v0.3.1/go.mod h1:G/245BRQ6FJGmryJCrOuTdB37AMA5AMOVuO6NY3JwTk=
github.com/multiformats/go-multiaddr v0.14.0 h1:bfrHrJhrRuh/NXH5mCnemjpbGjzRw/b+tJFOD41g2tU=
github.com/multiformats/go-multiaddr v0.14.0/go.mod h1:6EkVAxtznq2yC3QT5CM1UTAwG0GTP3EWAIcjHuzQ+r4=
github.com/multiformats/go-multiaddr-dns v0.4.1 h1:whi/uCLbDS3mSEUMb1MsoT4uzUeZB0N32yzufqS0i5M=
github.com/multiformats/go-multiaddr-dns v0.4.1/go.mod h1:7hfthtB4E4pQwirrz+J0CcDUfbWzTqEzVyYKKIKpgkc=
github.com/multiformats/go-multiaddr-fmt v0.1.0 h1:WLEFClPycPkp4fnIzoFoV9FVd49/eQsuaL3/CWe167E=
github.com/multiformats/go-multiaddr-fmt v0.1.0/go.mod h1:hGtDIW4PU4BqJ50gW2quDuPVjyWNZxToGUh/HwTZYJo=
github.com/multiformats/go-multibase v0.2.0 h1:isdYCVLvksgWlMW9OZRYJEa9pZETFivncJHmHnnd87g=
@ -198,90 +194,97 @@ github.com/multiformats/go-multicodec v0.9.0/go.mod h1:L3QTQvMIaVBkXOXXtVmYE+LI1
github.com/multiformats/go-multihash v0.0.8/go.mod h1:YSLudS+Pi8NHE7o6tb3D8vrpKa63epEDmG8nTduyAew=
github.com/multiformats/go-multihash v0.2.3 h1:7Lyc8XfX/IY2jWb/gI7JP+o7JEq9hOa7BFvVU9RSh+U=
github.com/multiformats/go-multihash v0.2.3/go.mod h1:dXgKXCXjBzdscBLk9JkjINiEsCKRVch90MdaGiKsvSM=
github.com/multiformats/go-multistream v0.5.0 h1:5htLSLl7lvJk3xx3qT/8Zm9J4K8vEOf/QGkvOGQAyiE=
github.com/multiformats/go-multistream v0.5.0/go.mod h1:n6tMZiwiP2wUsR8DgfDWw1dydlEqV3l6N3/GBsX6ILA=
github.com/multiformats/go-varint v0.0.1/go.mod h1:3Ls8CIEsrijN6+B7PbrXRPxHRPuXSrVKRY101jdMZYE=
github.com/multiformats/go-multistream v0.6.0 h1:ZaHKbsL404720283o4c/IHQXiS6gb8qAN5EIJ4PN5EA=
github.com/multiformats/go-multistream v0.6.0/go.mod h1:MOyoG5otO24cHIg8kf9QW2/NozURlkP/rvi2FQJyCPg=
github.com/multiformats/go-varint v0.0.7 h1:sWSGR+f/eu5ABZA2ZpYKBILXTTs9JWpdEM/nEGOHFS8=
github.com/multiformats/go-varint v0.0.7/go.mod h1:r8PUYw/fD/SjBCiKOoDlGF6QawOELpZAu9eioSos/OU=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/neelance/astrewrite v0.0.0-20160511093645-99348263ae86/go.mod h1:kHJEU3ofeGjhHklVoIGuVj85JJwZ6kWPaJwCIxgnFmo=
github.com/neelance/sourcemap v0.0.0-20151028013722-8c68805598ab/go.mod h1:Qr6/a/Q4r9LP1IltGz7tA7iOK1WonHEYhu1HRBA7ZiM=
github.com/onsi/ginkgo/v2 v2.15.0 h1:79HwNRBAZHOEwrczrgSOPy+eFTTlIGELKy5as+ClttY=
github.com/onsi/ginkgo/v2 v2.15.0/go.mod h1:HlxMHtYF57y6Dpf+mc5529KKmSq9h2FpCF+/ZkwUxKM=
github.com/onsi/gomega v1.30.0 h1:hvMK7xYz4D3HapigLTeGdId/NcfQx1VHMJc60ew99+8=
github.com/onsi/gomega v1.30.0/go.mod h1:9sxs+SwGrKI0+PWe4Fxa9tFQQBG5xSsSbMXOI8PPpoQ=
github.com/onsi/ginkgo/v2 v2.22.2 h1:/3X8Panh8/WwhU/3Ssa6rCKqPLuAkVY2I0RoyDLySlU=
github.com/onsi/ginkgo/v2 v2.22.2/go.mod h1:oeMosUL+8LtarXBHu/c0bx2D/K9zyQ6uX3cTyztHwsk=
github.com/onsi/gomega v1.36.2 h1:koNYke6TVk6ZmnyHrCXba/T/MoLBXFjeC1PtvYgw0A8=
github.com/onsi/gomega v1.36.2/go.mod h1:DdwyADRjrc825LhMEkD76cHR5+pUnjhUN8GlHlRPHzY=
github.com/opencontainers/runtime-spec v1.0.2/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-spec v1.2.0 h1:z97+pHb3uELt/yiAWD691HNHQIF07bE7dzrbT927iTk=
github.com/opencontainers/runtime-spec v1.2.0/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/openzipkin/zipkin-go v0.1.1/go.mod h1:NtoC/o8u3JlF1lSlyPNswIbeQH9bJTmOf0Erfk+hxe8=
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 h1:onHthvaw9LFnH4t2DcNVpwGmV9E1BkGknEliJkfwQj0=
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58/go.mod h1:DXv8WO4yhMYhSNPKjeNKa5WY9YCIEBRbNzFFPJbWO6Y=
github.com/pion/datachannel v1.5.6 h1:1IxKJntfSlYkpUj8LlYRSWpYiTTC02nUrOE8T3DqGeg=
github.com/pion/datachannel v1.5.6/go.mod h1:1eKT6Q85pRnr2mHiWHxJwO50SfZRtWHTsNIVb/NfGW4=
github.com/pion/datachannel v1.5.10 h1:ly0Q26K1i6ZkGf42W7D4hQYR90pZwzFOjTq5AuCKk4o=
github.com/pion/datachannel v1.5.10/go.mod h1:p/jJfC9arb29W7WrxyKbepTU20CFgyx5oLo8Rs4Py/M=
github.com/pion/dtls/v2 v2.2.7/go.mod h1:8WiMkebSHFD0T+dIU+UeBaoV7kDhOW5oDCzZ7WZ/F9s=
github.com/pion/dtls/v2 v2.2.11 h1:9U/dpCYl1ySttROPWJgqWKEylUdT0fXp/xst6JwY5Ks=
github.com/pion/dtls/v2 v2.2.11/go.mod h1:d9SYc9fch0CqK90mRk1dC7AkzzpwJj6u2GU3u+9pqFE=
github.com/pion/ice/v2 v2.3.24 h1:RYgzhH/u5lH0XO+ABatVKCtRd+4U1GEaCXSMjNr13tI=
github.com/pion/ice/v2 v2.3.24/go.mod h1:KXJJcZK7E8WzrBEYnV4UtqEZsGeWfHxsNqhVcVvgjxw=
github.com/pion/interceptor v0.1.29 h1:39fsnlP1U8gw2JzOFWdfCU82vHvhW9o0rZnZF56wF+M=
github.com/pion/interceptor v0.1.29/go.mod h1:ri+LGNjRUc5xUNtDEPzfdkmSqISixVTBF/z/Zms/6T4=
github.com/pion/logging v0.2.2 h1:M9+AIj/+pxNsDfAT64+MAVgJO0rsyLnoJKCqf//DoeY=
github.com/pion/dtls/v2 v2.2.12 h1:KP7H5/c1EiVAAKUmXyCzPiQe5+bCJrpOeKg/L05dunk=
github.com/pion/dtls/v2 v2.2.12/go.mod h1:d9SYc9fch0CqK90mRk1dC7AkzzpwJj6u2GU3u+9pqFE=
github.com/pion/dtls/v3 v3.0.4 h1:44CZekewMzfrn9pmGrj5BNnTMDCFwr+6sLH+cCuLM7U=
github.com/pion/dtls/v3 v3.0.4/go.mod h1:R373CsjxWqNPf6MEkfdy3aSe9niZvL/JaKlGeFphtMg=
github.com/pion/ice/v2 v2.3.37 h1:ObIdaNDu1rCo7hObhs34YSBcO7fjslJMZV0ux+uZWh0=
github.com/pion/ice/v2 v2.3.37/go.mod h1:mBF7lnigdqgtB+YHkaY/Y6s6tsyRyo4u4rPGRuOjUBQ=
github.com/pion/ice/v4 v4.0.6 h1:jmM9HwI9lfetQV/39uD0nY4y++XZNPhvzIPCb8EwxUM=
github.com/pion/ice/v4 v4.0.6/go.mod h1:y3M18aPhIxLlcO/4dn9X8LzLLSma84cx6emMSu14FGw=
github.com/pion/interceptor v0.1.37 h1:aRA8Zpab/wE7/c0O3fh1PqY0AJI3fCSEM5lRWJVorwI=
github.com/pion/interceptor v0.1.37/go.mod h1:JzxbJ4umVTlZAf+/utHzNesY8tmRkM2lVmkS82TTj8Y=
github.com/pion/logging v0.2.2/go.mod h1:k0/tDVsRCX2Mb2ZEmTqNa7CWsQPc+YYCB7Q+5pahoms=
github.com/pion/logging v0.2.3 h1:gHuf0zpoh1GW67Nr6Gj4cv5Z9ZscU7g/EaoC/Ke/igI=
github.com/pion/logging v0.2.3/go.mod h1:z8YfknkquMe1csOrxK5kc+5/ZPAzMxbKLX5aXpbpC90=
github.com/pion/mdns v0.0.12 h1:CiMYlY+O0azojWDmxdNr7ADGrnZ+V6Ilfner+6mSVK8=
github.com/pion/mdns v0.0.12/go.mod h1:VExJjv8to/6Wqm1FXK+Ii/Z9tsVk/F5sD/N70cnYFbk=
github.com/pion/mdns/v2 v2.0.7 h1:c9kM8ewCgjslaAmicYMFQIde2H9/lrZpjBkN8VwoVtM=
github.com/pion/mdns/v2 v2.0.7/go.mod h1:vAdSYNAT0Jy3Ru0zl2YiW3Rm/fJCwIeM0nToenfOJKA=
github.com/pion/randutil v0.1.0 h1:CFG1UdESneORglEsnimhUjf33Rwjubwj6xfiOXBa3mA=
github.com/pion/randutil v0.1.0/go.mod h1:XcJrSMMbbMRhASFVOlj/5hQial/Y8oH/HVo7TBZq+j8=
github.com/pion/rtcp v1.2.12/go.mod h1:sn6qjxvnwyAkkPzPULIbVqSKI5Dv54Rv7VG0kNxh9L4=
github.com/pion/rtcp v1.2.14 h1:KCkGV3vJ+4DAJmvP0vaQShsb0xkRfWkO540Gy102KyE=
github.com/pion/rtcp v1.2.14/go.mod h1:sn6qjxvnwyAkkPzPULIbVqSKI5Dv54Rv7VG0kNxh9L4=
github.com/pion/rtp v1.8.3/go.mod h1:pBGHaFt/yW7bf1jjWAoUjpSNoDnw98KTMg+jWWvziqU=
github.com/pion/rtp v1.8.6 h1:MTmn/b0aWWsAzux2AmP8WGllusBVw4NPYPVFFd7jUPw=
github.com/pion/rtp v1.8.6/go.mod h1:pBGHaFt/yW7bf1jjWAoUjpSNoDnw98KTMg+jWWvziqU=
github.com/pion/sctp v1.8.13/go.mod h1:YKSgO/bO/6aOMP9LCie1DuD7m+GamiK2yIiPM6vH+GA=
github.com/pion/sctp v1.8.16 h1:PKrMs+o9EMLRvFfXq59WFsC+V8mN1wnKzqrv+3D/gYY=
github.com/pion/sctp v1.8.16/go.mod h1:P6PbDVA++OJMrVNg2AL3XtYHV4uD6dvfyOovCgMs0PE=
github.com/pion/sdp/v3 v3.0.9 h1:pX++dCHoHUwq43kuwf3PyJfHlwIj4hXA7Vrifiq0IJY=
github.com/pion/sdp/v3 v3.0.9/go.mod h1:B5xmvENq5IXJimIO4zfp6LAe1fD9N+kFv+V/1lOdz8M=
github.com/pion/srtp/v2 v2.0.18 h1:vKpAXfawO9RtTRKZJbG4y0v1b11NZxQnxRl85kGuUlo=
github.com/pion/srtp/v2 v2.0.18/go.mod h1:0KJQjA99A6/a0DOVTu1PhDSw0CXF2jTkqOoMg3ODqdA=
github.com/pion/rtcp v1.2.15 h1:LZQi2JbdipLOj4eBjK4wlVoQWfrZbh3Q6eHtWtJBZBo=
github.com/pion/rtcp v1.2.15/go.mod h1:jlGuAjHMEXwMUHK78RgX0UmEJFV4zUKOFHR7OP+D3D0=
github.com/pion/rtp v1.8.11 h1:17xjnY5WO5hgO6SD3/NTIUPvSFw/PbLsIJyz1r1yNIk=
github.com/pion/rtp v1.8.11/go.mod h1:8uMBJj32Pa1wwx8Fuv/AsFhn8jsgw+3rUC2PfoBZ8p4=
github.com/pion/sctp v1.8.35 h1:qwtKvNK1Wc5tHMIYgTDJhfZk7vATGVHhXbUDfHbYwzA=
github.com/pion/sctp v1.8.35/go.mod h1:EcXP8zCYVTRy3W9xtOF7wJm1L1aXfKRQzaM33SjQlzg=
github.com/pion/sdp/v3 v3.0.10 h1:6MChLE/1xYB+CjumMw+gZ9ufp2DPApuVSnDT8t5MIgA=
github.com/pion/sdp/v3 v3.0.10/go.mod h1:88GMahN5xnScv1hIMTqLdu/cOcUkj6a9ytbncwMCq2E=
github.com/pion/srtp/v3 v3.0.4 h1:2Z6vDVxzrX3UHEgrUyIGM4rRouoC7v+NiF1IHtp9B5M=
github.com/pion/srtp/v3 v3.0.4/go.mod h1:1Jx3FwDoxpRaTh1oRV8A/6G1BnFL+QI82eK4ms8EEJQ=
github.com/pion/stun v0.6.1 h1:8lp6YejULeHBF8NmV8e2787BogQhduZugh5PdhDyyN4=
github.com/pion/stun v0.6.1/go.mod h1:/hO7APkX4hZKu/D0f2lHzNyvdkTGtIy3NDmLR7kSz/8=
github.com/pion/stun/v3 v3.0.0 h1:4h1gwhWLWuZWOJIJR9s2ferRO+W3zA/b6ijOI6mKzUw=
github.com/pion/stun/v3 v3.0.0/go.mod h1:HvCN8txt8mwi4FBvS3EmDghW6aQJ24T+y+1TKjB5jyU=
github.com/pion/transport/v2 v2.2.1/go.mod h1:cXXWavvCnFF6McHTft3DWS9iic2Mftcz1Aq29pGcU5g=
github.com/pion/transport/v2 v2.2.2/go.mod h1:OJg3ojoBJopjEeECq2yJdXH9YVrUJ1uQ++NjXLOUorc=
github.com/pion/transport/v2 v2.2.3/go.mod h1:q2U/tf9FEfnSBGSW6w5Qp5PFWRLRj3NjLhCCgpRK4p0=
github.com/pion/transport/v2 v2.2.4/go.mod h1:q2U/tf9FEfnSBGSW6w5Qp5PFWRLRj3NjLhCCgpRK4p0=
github.com/pion/transport/v2 v2.2.5 h1:iyi25i/21gQck4hfRhomF6SktmUQjRsRW4WJdhfc3Kc=
github.com/pion/transport/v2 v2.2.5/go.mod h1:q2U/tf9FEfnSBGSW6w5Qp5PFWRLRj3NjLhCCgpRK4p0=
github.com/pion/transport/v2 v2.2.10 h1:ucLBLE8nuxiHfvkFKnkDQRYWYfp8ejf4YBOPfaQpw6Q=
github.com/pion/transport/v2 v2.2.10/go.mod h1:sq1kSLWs+cHW9E+2fJP95QudkzbK7wscs8yYgQToO5E=
github.com/pion/transport/v3 v3.0.1/go.mod h1:UY7kiITrlMv7/IKgd5eTUcaahZx5oUN3l9SzK5f5xE0=
github.com/pion/transport/v3 v3.0.2 h1:r+40RJR25S9w3jbA6/5uEPTzcdn7ncyU44RWCbHkLg4=
github.com/pion/transport/v3 v3.0.2/go.mod h1:nIToODoOlb5If2jF9y2Igfx3PFYWfuXi37m0IlWa/D0=
github.com/pion/transport/v3 v3.0.7 h1:iRbMH05BzSNwhILHoBoAPxoB9xQgOaJk+591KC9P1o0=
github.com/pion/transport/v3 v3.0.7/go.mod h1:YleKiTZ4vqNxVwh77Z0zytYi7rXHl7j6uPLGhhz9rwo=
github.com/pion/turn/v2 v2.1.3/go.mod h1:huEpByKKHix2/b9kmTAM3YoX6MKP+/D//0ClgUYR2fY=
github.com/pion/turn/v2 v2.1.6 h1:Xr2niVsiPTB0FPtt+yAWKFUkU1eotQbGgpTIld4x1Gc=
github.com/pion/turn/v2 v2.1.6/go.mod h1:huEpByKKHix2/b9kmTAM3YoX6MKP+/D//0ClgUYR2fY=
github.com/pion/webrtc/v3 v3.2.40 h1:Wtfi6AZMQg+624cvCXUuSmrKWepSB7zfgYDOYqsSOVU=
github.com/pion/webrtc/v3 v3.2.40/go.mod h1:M1RAe3TNTD1tzyvqHrbVODfwdPGSXOUo/OgpoGGJqFY=
github.com/pion/turn/v4 v4.0.0 h1:qxplo3Rxa9Yg1xXDxxH8xaqcyGUtbHYw4QSCvmFWvhM=
github.com/pion/turn/v4 v4.0.0/go.mod h1:MuPDkm15nYSklKpN8vWJ9W2M0PlyQZqYt1McGuxG7mA=
github.com/pion/webrtc/v4 v4.0.8 h1:T1ZmnT9qxIJIt4d8XoiMOBrTClGHDDXNg9e/fh018Qc=
github.com/pion/webrtc/v4 v4.0.8/go.mod h1:HHBeUVBAC+j4ZFnYhovEFStF02Arb1EyD4G7e7HBTJw=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.8.0/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v1.19.1 h1:wZWJDwK+NameRJuPGDhlnFgx8e8HN3XHQeLaYJFJBOE=
github.com/prometheus/client_golang v1.19.1/go.mod h1:mP78NwGzrVks5S2H6ab8+ZZGJLZUq1hoULYBAYBw1Ho=
github.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+bR9r+8l63Y=
github.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.0.0-20180801064454-c7de2306084e/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.48.0 h1:QO8U2CdOzSn1BBsmXJXduaaW+dY/5QLjfB8svtSzKKE=
github.com/prometheus/common v0.48.0/go.mod h1:0/KsvlIEfPQCQ5I2iNSAWKPZziNCvRs5EC6ILDTlAPc=
github.com/prometheus/common v0.62.0 h1:xasJaQlnWAeyHdUBeGjXmutelfJHWMRr+Fg4QszZ2Io=
github.com/prometheus/common v0.62.0/go.mod h1:vyBcEuLSvWos9B1+CyL7JZ2up+uFzXhkqml0W5zIY1I=
github.com/prometheus/procfs v0.0.0-20180725123919-05ee40e3a273/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.12.0 h1:jluTpSng7V9hY0O2R9DzzJHYb2xULk9VTR1V1R/k6Bo=
github.com/prometheus/procfs v0.12.0/go.mod h1:pcuDEFsWDnvcgNzo4EEweacyhjeA9Zk3cnaOZAZEfOo=
github.com/quic-go/qpack v0.4.0 h1:Cr9BXA1sQS2SmDUWjSofMPNKmvF6IiIfDRmgU0w1ZCo=
github.com/quic-go/qpack v0.4.0/go.mod h1:UZVnYIfi5GRk+zI9UMaCPsmZ2xKJP7XBUvVyT1Knj9A=
github.com/quic-go/quic-go v0.44.0 h1:So5wOr7jyO4vzL2sd8/pD9Kesciv91zSk8BoFngItQ0=
github.com/quic-go/quic-go v0.44.0/go.mod h1:z4cx/9Ny9UtGITIPzmPTXh1ULfOyWh4qGQlpnPcWmek=
github.com/quic-go/webtransport-go v0.8.0 h1:HxSrwun11U+LlmwpgM1kEqIqH90IT4N8auv/cD7QFJg=
github.com/quic-go/webtransport-go v0.8.0/go.mod h1:N99tjprW432Ut5ONql/aUhSLT0YVSlwHohQsuac9WaM=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/quic-go/qpack v0.5.1 h1:giqksBPnT/HDtZ6VhtFKgoLOWmlyo9Ei6u9PqzIMbhI=
github.com/quic-go/qpack v0.5.1/go.mod h1:+PC4XFrEskIVkcLzpEkbLqq1uCoxPhQuvK5rH1ZgaEg=
github.com/quic-go/quic-go v0.49.0 h1:w5iJHXwHxs1QxyBv1EHKuC50GX5to8mJAxvtnttJp94=
github.com/quic-go/quic-go v0.49.0/go.mod h1:s2wDnmCdooUQBmQfpUSTCYBl1/D4FcqbULMMkASvR6s=
github.com/quic-go/webtransport-go v0.8.1-0.20241018022711-4ac2c9250e66 h1:4WFk6u3sOT6pLa1kQ50ZVdm8BQFgJNA117cepZxtLIg=
github.com/quic-go/webtransport-go v0.8.1-0.20241018022711-4ac2c9250e66/go.mod h1:Vp72IJajgeOL6ddqrAhmp7IM9zbTcgkQxD/YdxrVwMw=
github.com/raulk/go-watchdog v1.3.0 h1:oUmdlHxdkXRJlwfG0O9omj8ukerm8MEQavSiDTEtBsk=
github.com/raulk/go-watchdog v1.3.0/go.mod h1:fIvOnLbF0b0ZwkB9YU4mOW9Did//4vPZtDqv66NfsMU=
github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
@ -320,37 +323,38 @@ github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.3/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/tarm/serial v0.0.0-20180830185346-98f6abe2eb07/go.mod h1:kDXzergiv9cbyO7IOYJZWg1U88JhDg3PB6klq9Hg2pA=
github.com/urfave/cli v1.22.2/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/viant/assertly v0.4.8/go.mod h1:aGifi++jvCrUaklKEKT0BU95igDNaqkvz+49uaYMPRU=
github.com/viant/toolbox v0.24.0/go.mod h1:OxMCG57V0PXuIP2HNQrtJf2CjqdmbrOx5EkMILuUhzM=
github.com/wlynxg/anet v0.0.3/go.mod h1:eay5PRQr7fIVAMbTbchTnO9gG65Hg/uYGdc7mguHxoA=
github.com/wlynxg/anet v0.0.5 h1:J3VJGi1gvo0JwZ/P1/Yc/8p63SoW98B5dHkYDmpgvvU=
github.com/wlynxg/anet v0.0.5/go.mod h1:eay5PRQr7fIVAMbTbchTnO9gG65Hg/uYGdc7mguHxoA=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
go.opencensus.io v0.18.0/go.mod h1:vKdFvxhtzZ9onBp9VKHK8z/sRpBMnKAsufL7wlDrCOA=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/dig v1.17.1 h1:Tga8Lz8PcYNsWsyHMZ1Vm0OQOUaJNDyvPImgbAu9YSc=
go.uber.org/dig v1.17.1/go.mod h1:Us0rSJiThwCv2GteUN0Q7OKvU7n5J4dxZ9JKUXozFdE=
go.uber.org/fx v1.21.1 h1:RqBh3cYdzZS0uqwVeEjOX2p73dddLpym315myy/Bpb0=
go.uber.org/fx v1.21.1/go.mod h1:HT2M7d7RHo+ebKGh9NRcrsrHHfpZ60nW3QRubMRfv48=
go.uber.org/dig v1.18.0 h1:imUL1UiY0Mg4bqbFfsRQO5G4CGRBec/ZujWTvSVp3pw=
go.uber.org/dig v1.18.0/go.mod h1:Us0rSJiThwCv2GteUN0Q7OKvU7n5J4dxZ9JKUXozFdE=
go.uber.org/fx v1.23.0 h1:lIr/gYWQGfTwGcSXWXu4vP5Ws6iqnNEIY+F/aFzCKTg=
go.uber.org/fx v1.23.0/go.mod h1:o/D9n+2mLP6v1EG+qsdT1O8wKopYAsqZasju97SDFCU=
go.uber.org/goleak v1.1.11-0.20210813005559-691160354723/go.mod h1:cwTWslyiVhfpKIDGSZEM2HlOvcqm+tG4zioyIeLoqMQ=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/mock v0.4.0 h1:VcM4ZOtdbR4f6VXfiOpwpVJDL6lCReaZ6mw31wqh7KU=
go.uber.org/mock v0.4.0/go.mod h1:a6FSlNadKUHUa9IP5Vyt1zh4fC7uAwxMutEAscFbkZc=
go.uber.org/mock v0.5.0 h1:KAMbZvZPyBPWgD14IrIQ38QCyjwpvVVV6K/bHl1IwQU=
go.uber.org/mock v0.5.0/go.mod h1:ge71pBPLYDk7QIi1LupWxdAykm7KIEFchiOqd6z7qMM=
go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
@ -369,16 +373,13 @@ golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPh
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.8.0/go.mod h1:mRqEX+O9/h5TFCrQhkgjo2yKi0yYA+9ecGkdQoHrywE=
golang.org/x/crypto v0.11.0/go.mod h1:xgJhtzW8F9jGdVFWZESrid1U1bjeNy4zgy5cRr/CIio=
golang.org/x/crypto v0.12.0/go.mod h1:NF0Gs7EO5K4qLn+Ylc+fih8BSTeIjAP05siRnAh98yw=
golang.org/x/crypto v0.18.0/go.mod h1:R0j02AL6hcrfOiy9T4ZYp/rcWeMxM3L6QYxlOuEG1mg=
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOMs=
golang.org/x/crypto v0.23.0 h1:dIJU/v2J8Mdglj/8rJ6UUOM3Zc9zLZxVZwwxMooUSAI=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/crypto v0.32.0 h1:euUpcYgM8WcP71gNpTqQCn6rC2t6ULUPiOzfWaXVVfc=
golang.org/x/crypto v0.32.0/go.mod h1:ZnnJkOaASj8g0AjIduWNlq2NRxL0PlBrbKVyZ6V/Ugc=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20240506185415-9bf2ced13842 h1:vr/HnozRka3pE4EsMEg1lgkXJkTFJCVUX+S/ZT6wYzM=
golang.org/x/exp v0.0.0-20240506185415-9bf2ced13842/go.mod h1:XtvwrStGgqGPLc4cjQfWqZHG1YFdYs6swckp8vpsjnc=
golang.org/x/exp v0.0.0-20250128182459-e0ece0dbea4c h1:KL/ZBHXgKGVmuZBZ01Lt57yE5ws8ZPSkkihmEyq7FXc=
golang.org/x/exp v0.0.0-20250128182459-e0ece0dbea4c/go.mod h1:tujkw807nyEEAamNbDrEGzRav+ilXA7PCRAd6xsmwiU=
golang.org/x/lint v0.0.0-20180702182130-06c8688daad7/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
@ -390,8 +391,8 @@ golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.17.0 h1:zY54UmvipHiNd+pm+m0x9KhZ9hl1/7QNMyxXbc6ICqA=
golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.23.0 h1:Zb7khfcRGKk+kqfxFaP5tZqCnDZMjC5VtUBs87Hr6QM=
golang.org/x/mod v0.23.0/go.mod h1:6SkKJ3Xj0I0BrPOZoBy3bdMptDDU9oJrpohJ3eWZ1fY=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@ -412,13 +413,10 @@ golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.9.0/go.mod h1:d48xBJpPfHeWQsugry2m+kC02ZBRGRgulfHnEXEuWns=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.13.0/go.mod h1:zEVYFnQC7m/vmpQFELhcD1EWkZlX69l4oqgmer6hfKA=
golang.org/x/net v0.14.0/go.mod h1:PpSgVXXLK0OxS0F31C1/tv6XNguvCrnXIDrFMspZIUI=
golang.org/x/net v0.20.0/go.mod h1:z8BVo6PvndSri0LbOE3hAn0apkU+1YvI6E70E9jsnvY=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.22.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
golang.org/x/net v0.25.0 h1:d/OCCoBEUq33pjydKrGQhw7IlUPI2Oylr+8qLx49kac=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/net v0.34.0 h1:Mb7Mrk043xzHgnRM88suvJFwzVrRfHEHJEl5/71CKw0=
golang.org/x/net v0.34.0/go.mod h1:di0qlW3YNM5oh6GqDGQr92MyTozJPmybPK4Ev/Gm31k=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20181017192945-9dcd33a902f4/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20181203162652-d668ce993890/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
@ -434,8 +432,8 @@ golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.7.0 h1:YsImfSBoP9QPYL0xyKJPq0gcaJdG3rInoqxTWbfQu9M=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.11.0 h1:GGz8+XQP4FvTTrjZPzNKTMFtSXH80RAzG+5ghFPgK9w=
golang.org/x/sync v0.11.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20180810173357-98c5dad5d1a0/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@ -448,7 +446,6 @@ golang.org/x/sys v0.0.0-20200124204421-9fbb57f87de9/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200602225109-6fdc65e7d980/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210303074136-134d130e1a04/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@ -460,34 +457,27 @@ golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.7.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.9.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.10.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.20.0 h1:Od9JTbYCk261bKm4M/mw7AklTlFYIa0bIp9BgSm1S8Y=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.30.0 h1:QjkSwP/36a20jFYWkSue1YwXzLmsV5Gfq7Eiy72C1uc=
golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.7.0/go.mod h1:P32HKFT3hSsZrRxla30E9HqToFYAQPCMs/zFMBUFqPY=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.10.0/go.mod h1:lpqdcUyK/oCiQxvxVrppt5ggO2KCZ5QblwqPnfZ6d5o=
golang.org/x/term v0.11.0/go.mod h1:zC9APTIj3jG3FdV/Ons+XE1riIZXG4aZ4GTHiPZJPIU=
golang.org/x/term v0.16.0/go.mod h1:yn7UURbUtPyrVJPGPq404EukNFxcm/foM+bV/bfcDsY=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.18.0/go.mod h1:ILwASektA3OnRv7amZ1xhE/KTR+u50pbXfZ03+6Nx58=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.11.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.12.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.15.0 h1:h1V/4gjBv8v9cjcR6+AR5+/cIYK5N/WAgiv4xlsEtAk=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.22.0 h1:bofq7m3/HAFvbF51jz3Q9wLg3jkvSPuiZu/pD1XwgtM=
golang.org/x/text v0.22.0/go.mod h1:YRoo4H8PVmsu+E3Ou7cqLVH8oXWIHVoX0jqUWALQhfY=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk=
@ -506,8 +496,8 @@ golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4f
golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.21.0 h1:qc0xYgIbsSDt9EyWz05J5wfa7LOVW0YTLOXrqdLAWIw=
golang.org/x/tools v0.21.0/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=
golang.org/x/tools v0.29.0 h1:Xx0h3TtM9rzQpQuR4dKLrdglAmCEN5Oi+P74JdhdzXE=
golang.org/x/tools v0.29.0/go.mod h1:KMQVMRsVxU6nHCFXrBPhDB8XncLNLM0lIy/F14RP588=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@ -528,8 +518,8 @@ google.golang.org/grpc v1.14.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmE
google.golang.org/grpc v1.16.0/go.mod h1:0JHn/cJsOMiMfNA9+DeHDlAU7KAAB5GDlYFpa9MZMio=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/protobuf v1.34.1 h1:9ddQBjfCyZPOHPUiPxpYESBLc+T8P3E+Vo4IbKZgFWg=
google.golang.org/protobuf v1.34.1/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
google.golang.org/protobuf v1.36.4 h1:6A3ZDJHn/eNqc1i+IdefRzy/9PokBTPvcqMySR7NNIM=
google.golang.org/protobuf v1.36.4/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@ -547,7 +537,7 @@ grpc.go4.org v0.0.0-20170609214715-11d0a25b4919/go.mod h1:77eQGdRu53HpSqPFJFmuJd
honnef.co/go/tools v0.0.0-20180728063816-88497007e858/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
lukechampine.com/blake3 v1.2.1 h1:YuqqRuaqsGV71BV/nm9xlI0MKUv4QC54jQnBChWbGnI=
lukechampine.com/blake3 v1.2.1/go.mod h1:0OFRp7fBtAylGVCO40o87sbupkyIGgbpv1+M1k1LM6k=
lukechampine.com/blake3 v1.3.0 h1:sJ3XhFINmHSrYCgl958hscfIa3bw8x4DqMP3u1YvoYE=
lukechampine.com/blake3 v1.3.0/go.mod h1:0OFRp7fBtAylGVCO40o87sbupkyIGgbpv1+M1k1LM6k=
sourcegraph.com/sourcegraph/go-diff v0.5.0/go.mod h1:kuch7UrkMzY0X+p9CRK03kfuPQ2zzQcaEFbx8wA8rck=
sourcegraph.com/sqs/pbtypes v0.0.0-20180604144634-d3ebe8f20ae4/go.mod h1:ketZ/q3QxT9HOBeFhu6RdvsftgpsbFHBF5Cas6cDKZ0=


@ -2,8 +2,10 @@ package pubsub
import (
"context"
"crypto/sha256"
"fmt"
"io"
"iter"
"math/rand"
"sort"
"time"
@ -18,17 +20,25 @@ import (
"github.com/libp2p/go-libp2p/core/protocol"
"github.com/libp2p/go-libp2p/core/record"
"github.com/libp2p/go-libp2p/p2p/host/peerstore/pstoremem"
"go.uber.org/zap/zapcore"
)
const (
// GossipSubID_v10 is the protocol ID for version 1.0.0 of the GossipSub protocol.
// It is advertised along with GossipSubID_v11 for backwards compatibility.
// It is advertised along with GossipSubID_v11 and GossipSubID_v12 for backwards compatibility.
GossipSubID_v10 = protocol.ID("/meshsub/1.0.0")
// GossipSubID_v11 is the protocol ID for version 1.1.0 of the GossipSub protocol.
// It is advertised along with GossipSubID_v12 for backwards compatibility.
// See the spec for details about how v1.1.0 compares to v1.0.0:
// https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md
GossipSubID_v11 = protocol.ID("/meshsub/1.1.0")
// GossipSubID_v12 is the protocol ID for version 1.2.0 of the GossipSub protocol.
// See the spec for details about how v1.2.0 compares to v1.1.0:
// https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.2.md
GossipSubID_v12 = protocol.ID("/meshsub/1.2.0")
)
// Defines the default gossipsub parameters.
@ -59,9 +69,18 @@ var (
GossipSubGraftFloodThreshold = 10 * time.Second
GossipSubMaxIHaveLength = 5000
GossipSubMaxIHaveMessages = 10
GossipSubMaxIDontWantLength = 10
GossipSubMaxIDontWantMessages = 1000
GossipSubIWantFollowupTime = 3 * time.Second
GossipSubIDontWantMessageThreshold = 1024 // 1KB
GossipSubIDontWantMessageTTL = 3 // 3 heartbeats
)
type checksum struct {
payload [32]byte
length uint8
}
// GossipSubParams defines all the gossipsub specific parameters.
type GossipSubParams struct {
// overlay parameters.
@ -201,10 +220,25 @@ type GossipSubParams struct {
// MaxIHaveMessages is the maximum number of IHAVE messages to accept from a peer within a heartbeat.
MaxIHaveMessages int
// MaxIDontWantLength is the maximum number of message ids to include in an IDONTWANT message. It also
// caps the number of IDONTWANT ids we will accept from a peer, to protect against IDONTWANT floods.
// Raise this value if your system expects more IDONTWANT ids per heartbeat than the default allows.
MaxIDontWantLength int
// MaxIDontWantMessages is the maximum number of IDONTWANT messages to accept from a peer within a heartbeat.
MaxIDontWantMessages int
// Time to wait for a message requested through IWANT following an IHAVE advertisement.
// If the message is not received within this window, a broken promise is declared and
// the router may apply behavioural penalties.
IWantFollowupTime time.Duration
// IDONTWANT is only sent for messages larger than this threshold. The threshold should be greater
// than D_high * the size of a message id; otherwise an attacker can mount an amplification attack
// by sending small messages while the receiver replies with larger IDONTWANT messages.
IDontWantMessageThreshold int
// An IDONTWANT entry is cleared once it is older than this TTL, measured in heartbeats.
IDontWantMessageTTL int
}
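The new IDONTWANT fields are configured through the same WithGossipSubParams option as the existing parameters. A minimal sketch, using only APIs visible in this diff (DefaultGossipSubParams, WithGossipSubParams, NewGossipSub); the values, and the ctx/h placeholders, are illustrative:

params := DefaultGossipSubParams()
params.IDontWantMessageThreshold = 4096 // illustrative: only advertise IDONTWANT for messages of 4 KB or more
params.IDontWantMessageTTL = 5          // illustrative: remember unwanted ids for 5 heartbeats
ps, err := NewGossipSub(ctx, h, WithGossipSubParams(params))
if err != nil {
	panic(err) // illustrative error handling
}
_ = ps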
// NewGossipSub returns a new PubSub object using the default GossipSubRouter as the router.
@ -223,23 +257,25 @@ func NewGossipSubWithRouter(ctx context.Context, h host.Host, rt PubSubRouter, o
func DefaultGossipSubRouter(h host.Host) *GossipSubRouter {
params := DefaultGossipSubParams()
return &GossipSubRouter{
peers: make(map[peer.ID]protocol.ID),
mesh: make(map[string]map[peer.ID]struct{}),
fanout: make(map[string]map[peer.ID]struct{}),
lastpub: make(map[string]int64),
gossip: make(map[peer.ID][]*pb.ControlIHave),
control: make(map[peer.ID]*pb.ControlMessage),
backoff: make(map[string]map[peer.ID]time.Time),
peerhave: make(map[peer.ID]int),
iasked: make(map[peer.ID]int),
outbound: make(map[peer.ID]bool),
connect: make(chan connectInfo, params.MaxPendingConnections),
cab: pstoremem.NewAddrBook(),
mcache: NewMessageCache(params.HistoryGossip, params.HistoryLength),
protos: GossipSubDefaultProtocols,
feature: GossipSubDefaultFeatures,
tagTracer: newTagTracer(h.ConnManager()),
params: params,
peers: make(map[peer.ID]protocol.ID),
mesh: make(map[string]map[peer.ID]struct{}),
fanout: make(map[string]map[peer.ID]struct{}),
lastpub: make(map[string]int64),
gossip: make(map[peer.ID][]*pb.ControlIHave),
control: make(map[peer.ID]*pb.ControlMessage),
backoff: make(map[string]map[peer.ID]time.Time),
peerhave: make(map[peer.ID]int),
peerdontwant: make(map[peer.ID]int),
unwanted: make(map[peer.ID]map[checksum]int),
iasked: make(map[peer.ID]int),
outbound: make(map[peer.ID]bool),
connect: make(chan connectInfo, params.MaxPendingConnections),
cab: pstoremem.NewAddrBook(),
mcache: NewMessageCache(params.HistoryGossip, params.HistoryLength),
protos: GossipSubDefaultProtocols,
feature: GossipSubDefaultFeatures,
tagTracer: newTagTracer(h.ConnManager()),
params: params,
}
}
@ -273,7 +309,11 @@ func DefaultGossipSubParams() GossipSubParams {
GraftFloodThreshold: GossipSubGraftFloodThreshold,
MaxIHaveLength: GossipSubMaxIHaveLength,
MaxIHaveMessages: GossipSubMaxIHaveMessages,
MaxIDontWantLength: GossipSubMaxIDontWantLength,
MaxIDontWantMessages: GossipSubMaxIDontWantMessages,
IWantFollowupTime: GossipSubIWantFollowupTime,
IDontWantMessageThreshold: GossipSubIDontWantMessageThreshold,
IDontWantMessageTTL: GossipSubIDontWantMessageTTL,
SlowHeartbeatWarning: 0.1,
}
}
@ -422,20 +462,22 @@ func WithGossipSubParams(cfg GossipSubParams) Option {
// is the fanout map. Fanout peer lists are expired if we don't publish any
// messages to their topic for GossipSubFanoutTTL.
type GossipSubRouter struct {
p *PubSub
peers map[peer.ID]protocol.ID // peer protocols
direct map[peer.ID]struct{} // direct peers
mesh map[string]map[peer.ID]struct{} // topic meshes
fanout map[string]map[peer.ID]struct{} // topic fanout
lastpub map[string]int64 // last publish time for fanout topics
gossip map[peer.ID][]*pb.ControlIHave // pending gossip
control map[peer.ID]*pb.ControlMessage // pending control messages
peerhave map[peer.ID]int // number of IHAVEs received from peer in the last heartbeat
iasked map[peer.ID]int // number of messages we have asked from peer in the last heartbeat
outbound map[peer.ID]bool // connection direction cache, marks peers with outbound connections
backoff map[string]map[peer.ID]time.Time // prune backoff
connect chan connectInfo // px connection requests
cab peerstore.AddrBook
p *PubSub
peers map[peer.ID]protocol.ID // peer protocols
direct map[peer.ID]struct{} // direct peers
mesh map[string]map[peer.ID]struct{} // topic meshes
fanout map[string]map[peer.ID]struct{} // topic fanout
lastpub map[string]int64 // last publish time for fanout topics
gossip map[peer.ID][]*pb.ControlIHave // pending gossip
control map[peer.ID]*pb.ControlMessage // pending control messages
peerhave map[peer.ID]int // number of IHAVEs received from peer in the last heartbeat
peerdontwant map[peer.ID]int // number of IDONTWANTs received from peer in the last heartbeat
unwanted map[peer.ID]map[checksum]int // TTL of the message ids peers don't want
iasked map[peer.ID]int // number of messages we have asked from peer in the last heartbeat
outbound map[peer.ID]bool // connection direction cache, marks peers with outbound connections
backoff map[string]map[peer.ID]time.Time // prune backoff
connect chan connectInfo // px connection requests
cab peerstore.AddrBook
protos []protocol.ID
feature GossipSubFeatureTest
@ -481,6 +523,8 @@ type GossipSubRouter struct {
heartbeatTicks uint64
}
var _ BatchPublisher = &GossipSubRouter{}
type connectInfo struct {
p peer.ID
spr *record.Envelope
@ -663,6 +707,40 @@ func (gs *GossipSubRouter) AcceptFrom(p peer.ID) AcceptStatus {
return gs.gate.AcceptFrom(p)
}
// Preprocess sends IDONTWANT control messages to all mesh peers. They are
// sent right before validation because they should reach the peers as soon
// as possible.
func (gs *GossipSubRouter) Preprocess(from peer.ID, msgs []*Message) {
tmids := make(map[string][]string)
for _, msg := range msgs {
if len(msg.GetData()) < gs.params.IDontWantMessageThreshold {
continue
}
topic := msg.GetTopic()
tmids[topic] = append(tmids[topic], gs.p.idGen.ID(msg))
}
for topic, mids := range tmids {
if len(mids) == 0 {
continue
}
// shuffle the message ids received in the RPC envelope
shuffleStrings(mids)
// send IDONTWANT to all the mesh peers
for p := range gs.mesh[topic] {
if p == from {
// We don't send IDONTWANT to the peer that sent us the messages
continue
}
// send only to peers that support IDONTWANT
if gs.feature(GossipSubFeatureIdontwant, gs.peers[p]) {
idontwant := []*pb.ControlIDontWant{{MessageIDs: mids}}
out := rpcWithControl(nil, nil, nil, nil, nil, idontwant)
gs.sendRPC(p, out, true)
}
}
}
}
func (gs *GossipSubRouter) HandleRPC(rpc *RPC) {
ctl := rpc.GetControl()
if ctl == nil {
@ -673,13 +751,14 @@ func (gs *GossipSubRouter) HandleRPC(rpc *RPC) {
ihave := gs.handleIWant(rpc.from, ctl)
prune := gs.handleGraft(rpc.from, ctl)
gs.handlePrune(rpc.from, ctl)
gs.handleIDontWant(rpc.from, ctl)
if len(iwant) == 0 && len(ihave) == 0 && len(prune) == 0 {
return
}
out := rpcWithControl(ihave, nil, iwant, nil, prune)
gs.sendRPC(rpc.from, out)
out := rpcWithControl(ihave, nil, iwant, nil, prune, nil)
gs.sendRPC(rpc.from, out, false)
}
func (gs *GossipSubRouter) handleIHave(p peer.ID, ctl *pb.ControlMessage) []*pb.ControlIWant {
@ -767,6 +846,11 @@ func (gs *GossipSubRouter) handleIWant(p peer.ID, ctl *pb.ControlMessage) []*pb.
ihave := make(map[string]*pb.Message)
for _, iwant := range ctl.GetIwant() {
for _, mid := range iwant.GetMessageIDs() {
// If the peer has already sent an IDONTWANT for this id, don't send it the message
if _, ok := gs.unwanted[p][computeChecksum(mid)]; ok {
continue
}
msg, count, ok := gs.mcache.GetForPeer(mid, p)
if !ok {
continue
@ -931,6 +1015,35 @@ func (gs *GossipSubRouter) handlePrune(p peer.ID, ctl *pb.ControlMessage) {
}
}
func (gs *GossipSubRouter) handleIDontWant(p peer.ID, ctl *pb.ControlMessage) {
if gs.unwanted[p] == nil {
gs.unwanted[p] = make(map[checksum]int)
}
// IDONTWANT flood protection
if gs.peerdontwant[p] >= gs.params.MaxIDontWantMessages {
log.Debugf("IDONWANT: peer %s has advertised too many times (%d) within this heartbeat interval; ignoring", p, gs.peerdontwant[p])
return
}
gs.peerdontwant[p]++
totalUnwantedIds := 0
// Remember all the unwanted message ids
mainIDWLoop:
for _, idontwant := range ctl.GetIdontwant() {
for _, mid := range idontwant.GetMessageIDs() {
// IDONTWANT flood protection
if totalUnwantedIds >= gs.params.MaxIDontWantLength {
log.Debugf("IDONWANT: peer %s has advertised too many ids (%d) within this message; ignoring", p, totalUnwantedIds)
break mainIDWLoop
}
totalUnwantedIds++
gs.unwanted[p][computeChecksum(mid)] = gs.params.IDontWantMessageTTL
}
}
}
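Taken together, these two caps bound the state an IDONTWANT flood can pin per peer: with the defaults above, at most MaxIDontWantMessages * MaxIDontWantLength = 1000 * 10 = 10,000 new entries per heartbeat, each a fixed 33-byte checksum key (32-byte payload plus a length byte) and an int TTL, i.e. roughly a few hundred kilobytes per misbehaving peer until clearIDontWantCounters expires them.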
func (gs *GossipSubRouter) addBackoff(p peer.ID, topic string, isUnsubscribe bool) {
backoff := gs.params.PruneBackoff
if isUnsubscribe {
@ -1033,75 +1146,105 @@ func (gs *GossipSubRouter) connector() {
}
}
func (gs *GossipSubRouter) Publish(msg *Message) {
gs.mcache.Put(msg)
from := msg.ReceivedFrom
topic := msg.GetTopic()
tosend := make(map[peer.ID]struct{})
// any peers in the topic?
tmap, ok := gs.p.topics[topic]
if !ok {
return
func (gs *GossipSubRouter) PublishBatch(messages []*Message, opts *BatchPublishOptions) {
strategy := opts.Strategy
for _, msg := range messages {
msgID := gs.p.idGen.ID(msg)
for p, rpc := range gs.rpcs(msg) {
strategy.AddRPC(p, msgID, rpc)
}
}
if gs.floodPublish && from == gs.p.host.ID() {
for p := range tmap {
_, direct := gs.direct[p]
if direct || gs.score.Score(p) >= gs.publishThreshold {
tosend[p] = struct{}{}
}
}
} else {
// direct peers
for p := range gs.direct {
_, inTopic := tmap[p]
if inTopic {
tosend[p] = struct{}{}
}
}
for p, rpc := range strategy.All() {
gs.sendRPC(p, rpc, false)
}
}
// floodsub peers
for p := range tmap {
if !gs.feature(GossipSubFeatureMesh, gs.peers[p]) && gs.score.Score(p) >= gs.publishThreshold {
tosend[p] = struct{}{}
}
}
func (gs *GossipSubRouter) Publish(msg *Message) {
for p, rpc := range gs.rpcs(msg) {
gs.sendRPC(p, rpc, false)
}
}
// gossipsub peers
gmap, ok := gs.mesh[topic]
func (gs *GossipSubRouter) rpcs(msg *Message) iter.Seq2[peer.ID, *RPC] {
return func(yield func(peer.ID, *RPC) bool) {
gs.mcache.Put(msg)
from := msg.ReceivedFrom
topic := msg.GetTopic()
tosend := make(map[peer.ID]struct{})
// any peers in the topic?
tmap, ok := gs.p.topics[topic]
if !ok {
// we are not in the mesh for topic, use fanout peers
gmap, ok = gs.fanout[topic]
if !ok || len(gmap) == 0 {
// we don't have any, pick some with score above the publish threshold
peers := gs.getPeers(topic, gs.params.D, func(p peer.ID) bool {
_, direct := gs.direct[p]
return !direct && gs.score.Score(p) >= gs.publishThreshold
})
return
}
if len(peers) > 0 {
gmap = peerListToMap(peers)
gs.fanout[topic] = gmap
if gs.floodPublish && from == gs.p.host.ID() {
for p := range tmap {
_, direct := gs.direct[p]
if direct || gs.score.Score(p) >= gs.publishThreshold {
tosend[p] = struct{}{}
}
}
gs.lastpub[topic] = time.Now().UnixNano()
} else {
// direct peers
for p := range gs.direct {
_, inTopic := tmap[p]
if inTopic {
tosend[p] = struct{}{}
}
}
// floodsub peers
for p := range tmap {
if !gs.feature(GossipSubFeatureMesh, gs.peers[p]) && gs.score.Score(p) >= gs.publishThreshold {
tosend[p] = struct{}{}
}
}
// gossipsub peers
gmap, ok := gs.mesh[topic]
if !ok {
// we are not in the mesh for topic, use fanout peers
gmap, ok = gs.fanout[topic]
if !ok || len(gmap) == 0 {
// we don't have any, pick some with score above the publish threshold
peers := gs.getPeers(topic, gs.params.D, func(p peer.ID) bool {
_, direct := gs.direct[p]
return !direct && gs.score.Score(p) >= gs.publishThreshold
})
if len(peers) > 0 {
gmap = peerListToMap(peers)
gs.fanout[topic] = gmap
}
}
gs.lastpub[topic] = time.Now().UnixNano()
}
csum := computeChecksum(gs.p.idGen.ID(msg))
for p := range gmap {
// Check whether the peer has already sent us an IDONTWANT for this
// message; if so, don't send it the message.
if _, ok := gs.unwanted[p][csum]; ok {
continue
}
tosend[p] = struct{}{}
}
}
for p := range gmap {
tosend[p] = struct{}{}
}
}
out := rpcWithMessages(msg.Message)
for pid := range tosend {
if pid == from || pid == peer.ID(msg.GetFrom()) {
continue
}
out := rpcWithMessages(msg.Message)
for pid := range tosend {
if pid == from || pid == peer.ID(msg.GetFrom()) {
continue
if !yield(pid, out) {
return
}
}
gs.sendRPC(pid, out)
}
}
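rpcs returns a Go 1.23 range-over-func iterator: an iter.Seq2 is simply a function that drives a yield callback, which is why Publish and PublishBatch can consume it with a plain for-range loop and no intermediate slice is allocated. A self-contained sketch of the pattern, with illustrative names:

package main

import (
	"fmt"
	"iter"
)

// pairs mirrors the shape of rpcs(): it calls yield once per element and
// stops early when the consumer breaks out of the range loop.
func pairs() iter.Seq2[string, int] {
	return func(yield func(string, int) bool) {
		for i, s := range []string{"a", "b", "c"} {
			if !yield(s, i) {
				return
			}
		}
	}
}

func main() {
	for k, v := range pairs() {
		fmt.Println(k, v) // a 0, b 1, c 2
	}
}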
@ -1186,17 +1329,17 @@ func (gs *GossipSubRouter) Leave(topic string) {
func (gs *GossipSubRouter) sendGraft(p peer.ID, topic string) {
graft := []*pb.ControlGraft{{TopicID: &topic}}
out := rpcWithControl(nil, nil, nil, graft, nil)
gs.sendRPC(p, out)
out := rpcWithControl(nil, nil, nil, graft, nil, nil)
gs.sendRPC(p, out, false)
}
func (gs *GossipSubRouter) sendPrune(p peer.ID, topic string, isUnsubscribe bool) {
prune := []*pb.ControlPrune{gs.makePrune(p, topic, gs.doPX, isUnsubscribe)}
out := rpcWithControl(nil, nil, nil, nil, prune)
gs.sendRPC(p, out)
out := rpcWithControl(nil, nil, nil, nil, prune, nil)
gs.sendRPC(p, out, false)
}
func (gs *GossipSubRouter) sendRPC(p peer.ID, out *RPC) {
func (gs *GossipSubRouter) sendRPC(p peer.ID, out *RPC, urgent bool) {
// do we own the RPC?
own := false
@ -1220,31 +1363,32 @@ func (gs *GossipSubRouter) sendRPC(p peer.ID, out *RPC) {
delete(gs.gossip, p)
}
mch, ok := gs.p.peers[p]
q, ok := gs.p.peers[p]
if !ok {
return
}
// If we're below the max message size, go ahead and send
if out.Size() < gs.p.maxMessageSize {
gs.doSendRPC(out, p, mch)
gs.doSendRPC(out, p, q, urgent)
return
}
// Potentially split the RPC into multiple RPCs that are below the max message size
outRPCs := appendOrMergeRPC(nil, gs.p.maxMessageSize, *out)
for _, rpc := range outRPCs {
for rpc := range out.split(gs.p.maxMessageSize) {
if rpc.Size() > gs.p.maxMessageSize {
// This should only happen if a single message/control is above the maxMessageSize.
gs.doDropRPC(out, p, fmt.Sprintf("Dropping oversized RPC. Size: %d, limit: %d. (Over by %d bytes)", rpc.Size(), gs.p.maxMessageSize, rpc.Size()-gs.p.maxMessageSize))
continue
}
gs.doSendRPC(rpc, p, mch)
gs.doSendRPC(&rpc, p, q, urgent)
}
}
func (gs *GossipSubRouter) doDropRPC(rpc *RPC, p peer.ID, reason string) {
log.Debugf("dropping message to peer %s: %s", p, reason)
if log.Level() <= zapcore.DebugLevel {
log.Debugf("dropping message to peer %s: %s", p, reason)
}
gs.tracer.DropRPC(rpc, p)
// push control messages that need to be retried
ctl := rpc.GetControl()
@ -1253,144 +1397,18 @@ func (gs *GossipSubRouter) doDropRPC(rpc *RPC, p peer.ID, reason string) {
}
}
func (gs *GossipSubRouter) doSendRPC(rpc *RPC, p peer.ID, mch chan *RPC) {
select {
case mch <- rpc:
gs.tracer.SendRPC(rpc, p)
default:
func (gs *GossipSubRouter) doSendRPC(rpc *RPC, p peer.ID, q *rpcQueue, urgent bool) {
var err error
if urgent {
err = q.UrgentPush(rpc, false)
} else {
err = q.Push(rpc, false)
}
if err != nil {
gs.doDropRPC(rpc, p, "queue full")
return
}
}
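Only Preprocess passes urgent=true (for IDONTWANT); every other call site in this diff passes false. Assuming the rpcQueue semantics suggested by the method names, where UrgentPush enqueues at the front while Push appends, this lets an IDONTWANT overtake already-queued gossip so it can reach the peer before the message it refers to.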
// appendOrMergeRPC appends the given RPCs to the slice, merging them if possible.
// If any elem is too large to fit in a single RPC, it will be split into multiple RPCs.
// If an RPC is too large and can't be split further (e.g. Message data is
// bigger than the RPC limit), then it will be returned as an oversized RPC.
// The caller should filter out oversized RPCs.
func appendOrMergeRPC(slice []*RPC, limit int, elems ...RPC) []*RPC {
if len(elems) == 0 {
return slice
}
if len(slice) == 0 && len(elems) == 1 && elems[0].Size() < limit {
// Fast path: no merging needed and only one element
return append(slice, &elems[0])
}
out := slice
if len(out) == 0 {
out = append(out, &RPC{RPC: pb.RPC{}})
out[0].from = elems[0].from
}
for _, elem := range elems {
lastRPC := out[len(out)-1]
// Merge/Append publish messages
// TODO: Never merge messages. The current behavior is the same as the
// old behavior. In the future let's not merge messages. Since,
// it may increase message latency.
for _, msg := range elem.GetPublish() {
if lastRPC.Publish = append(lastRPC.Publish, msg); lastRPC.Size() > limit {
lastRPC.Publish = lastRPC.Publish[:len(lastRPC.Publish)-1]
lastRPC = &RPC{RPC: pb.RPC{}, from: elem.from}
lastRPC.Publish = append(lastRPC.Publish, msg)
out = append(out, lastRPC)
}
}
// Merge/Append Subscriptions
for _, sub := range elem.GetSubscriptions() {
if lastRPC.Subscriptions = append(lastRPC.Subscriptions, sub); lastRPC.Size() > limit {
lastRPC.Subscriptions = lastRPC.Subscriptions[:len(lastRPC.Subscriptions)-1]
lastRPC = &RPC{RPC: pb.RPC{}, from: elem.from}
lastRPC.Subscriptions = append(lastRPC.Subscriptions, sub)
out = append(out, lastRPC)
}
}
// Merge/Append Control messages
if ctl := elem.GetControl(); ctl != nil {
if lastRPC.Control == nil {
lastRPC.Control = &pb.ControlMessage{}
if lastRPC.Size() > limit {
lastRPC.Control = nil
lastRPC = &RPC{RPC: pb.RPC{Control: &pb.ControlMessage{}}, from: elem.from}
out = append(out, lastRPC)
}
}
for _, graft := range ctl.GetGraft() {
if lastRPC.Control.Graft = append(lastRPC.Control.Graft, graft); lastRPC.Size() > limit {
lastRPC.Control.Graft = lastRPC.Control.Graft[:len(lastRPC.Control.Graft)-1]
lastRPC = &RPC{RPC: pb.RPC{Control: &pb.ControlMessage{}}, from: elem.from}
lastRPC.Control.Graft = append(lastRPC.Control.Graft, graft)
out = append(out, lastRPC)
}
}
for _, prune := range ctl.GetPrune() {
if lastRPC.Control.Prune = append(lastRPC.Control.Prune, prune); lastRPC.Size() > limit {
lastRPC.Control.Prune = lastRPC.Control.Prune[:len(lastRPC.Control.Prune)-1]
lastRPC = &RPC{RPC: pb.RPC{Control: &pb.ControlMessage{}}, from: elem.from}
lastRPC.Control.Prune = append(lastRPC.Control.Prune, prune)
out = append(out, lastRPC)
}
}
for _, iwant := range ctl.GetIwant() {
if len(lastRPC.Control.Iwant) == 0 {
// Initialize with a single IWANT.
// For IWANTs we don't need more than a single one,
// since there are no topic IDs here.
newIWant := &pb.ControlIWant{}
if lastRPC.Control.Iwant = append(lastRPC.Control.Iwant, newIWant); lastRPC.Size() > limit {
lastRPC.Control.Iwant = lastRPC.Control.Iwant[:len(lastRPC.Control.Iwant)-1]
lastRPC = &RPC{RPC: pb.RPC{Control: &pb.ControlMessage{
Iwant: []*pb.ControlIWant{newIWant},
}}, from: elem.from}
out = append(out, lastRPC)
}
}
for _, msgID := range iwant.GetMessageIDs() {
if lastRPC.Control.Iwant[0].MessageIDs = append(lastRPC.Control.Iwant[0].MessageIDs, msgID); lastRPC.Size() > limit {
lastRPC.Control.Iwant[0].MessageIDs = lastRPC.Control.Iwant[0].MessageIDs[:len(lastRPC.Control.Iwant[0].MessageIDs)-1]
lastRPC = &RPC{RPC: pb.RPC{Control: &pb.ControlMessage{
Iwant: []*pb.ControlIWant{{MessageIDs: []string{msgID}}},
}}, from: elem.from}
out = append(out, lastRPC)
}
}
}
for _, ihave := range ctl.GetIhave() {
if len(lastRPC.Control.Ihave) == 0 ||
lastRPC.Control.Ihave[len(lastRPC.Control.Ihave)-1].TopicID != ihave.TopicID {
// Start a new IHAVE if we are referencing a new topic ID
newIhave := &pb.ControlIHave{TopicID: ihave.TopicID}
if lastRPC.Control.Ihave = append(lastRPC.Control.Ihave, newIhave); lastRPC.Size() > limit {
lastRPC.Control.Ihave = lastRPC.Control.Ihave[:len(lastRPC.Control.Ihave)-1]
lastRPC = &RPC{RPC: pb.RPC{Control: &pb.ControlMessage{
Ihave: []*pb.ControlIHave{newIhave},
}}, from: elem.from}
out = append(out, lastRPC)
}
}
for _, msgID := range ihave.GetMessageIDs() {
lastIHave := lastRPC.Control.Ihave[len(lastRPC.Control.Ihave)-1]
if lastIHave.MessageIDs = append(lastIHave.MessageIDs, msgID); lastRPC.Size() > limit {
lastIHave.MessageIDs = lastIHave.MessageIDs[:len(lastIHave.MessageIDs)-1]
lastRPC = &RPC{RPC: pb.RPC{Control: &pb.ControlMessage{
Ihave: []*pb.ControlIHave{{TopicID: ihave.TopicID, MessageIDs: []string{msgID}}},
}}, from: elem.from}
out = append(out, lastRPC)
}
}
}
}
}
return out
gs.tracer.SendRPC(rpc, p)
}
func (gs *GossipSubRouter) heartbeatTimer() {
@ -1441,6 +1459,9 @@ func (gs *GossipSubRouter) heartbeat() {
// clean up iasked counters
gs.clearIHaveCounters()
// clean up IDONTWANT counters
gs.clearIDontWantCounters()
// apply IWANT request penalties
gs.applyIwantPenalties()
@ -1503,7 +1524,7 @@ func (gs *GossipSubRouter) heartbeat() {
}
// do we have too many peers?
if len(peers) > gs.params.Dhi {
if len(peers) >= gs.params.Dhi {
plst := peerMapToList(peers)
// sort by score (but shuffle first for the case we don't use the score)
@ -1693,6 +1714,23 @@ func (gs *GossipSubRouter) clearIHaveCounters() {
}
}
func (gs *GossipSubRouter) clearIDontWantCounters() {
if len(gs.peerdontwant) > 0 {
// throw away the old map and make a new one
gs.peerdontwant = make(map[peer.ID]int)
}
// decrement the TTL of each IDONTWANT entry and delete it from the cache when it reaches zero
for _, mids := range gs.unwanted {
for mid := range mids {
mids[mid]--
if mids[mid] == 0 {
delete(mids, mid)
}
}
}
}
func (gs *GossipSubRouter) applyIwantPenalties() {
for p, count := range gs.gossipTracer.GetBrokenPromises() {
log.Infof("peer %s didn't follow up in %d IWANT requests; adding penalty", p, count)
@ -1767,8 +1805,8 @@ func (gs *GossipSubRouter) sendGraftPrune(tograft, toprune map[peer.ID][]string,
}
}
out := rpcWithControl(nil, nil, nil, graft, prune)
gs.sendRPC(p, out)
out := rpcWithControl(nil, nil, nil, graft, prune, nil)
gs.sendRPC(p, out, false)
}
for p, topics := range toprune {
@ -1777,8 +1815,8 @@ func (gs *GossipSubRouter) sendGraftPrune(tograft, toprune map[peer.ID][]string,
prune = append(prune, gs.makePrune(p, topic, gs.doPX && !noPX[p], false))
}
out := rpcWithControl(nil, nil, nil, nil, prune)
gs.sendRPC(p, out)
out := rpcWithControl(nil, nil, nil, nil, prune, nil)
gs.sendRPC(p, out, false)
}
}
@ -1844,15 +1882,15 @@ func (gs *GossipSubRouter) flush() {
// send gossip first, which will also piggyback pending control
for p, ihave := range gs.gossip {
delete(gs.gossip, p)
out := rpcWithControl(nil, ihave, nil, nil, nil)
gs.sendRPC(p, out)
out := rpcWithControl(nil, ihave, nil, nil, nil, nil)
gs.sendRPC(p, out, false)
}
// send the remaining control messages that weren't merged with gossip
for p, ctl := range gs.control {
delete(gs.control, p)
out := rpcWithControl(nil, nil, nil, ctl.Graft, ctl.Prune)
gs.sendRPC(p, out)
out := rpcWithControl(nil, nil, nil, ctl.Graft, ctl.Prune, nil)
gs.sendRPC(p, out, false)
}
}
@ -1873,9 +1911,10 @@ func (gs *GossipSubRouter) piggybackGossip(p peer.ID, out *RPC, ihave []*pb.Cont
}
func (gs *GossipSubRouter) pushControl(p peer.ID, ctl *pb.ControlMessage) {
// remove IHAVE/IWANT from control message, gossip is not retried
// remove IHAVE/IWANT/IDONTWANT from control message, gossip is not retried
ctl.Ihave = nil
ctl.Iwant = nil
ctl.Idontwant = nil
if ctl.Graft != nil || ctl.Prune != nil {
gs.control[p] = ctl
}
@ -2017,6 +2056,23 @@ func (gs *GossipSubRouter) WithDefaultTagTracer() Option {
return WithRawTracer(gs.tagTracer)
}
// SendControl dispatches the given set of control messages to the given peer.
// The control messages are sent as a single RPC, together with the given
// (optional) messages, on which the control messages are piggybacked.
func (gs *GossipSubRouter) SendControl(p peer.ID, ctl *pb.ControlMessage, msgs ...*pb.Message) {
out := rpcWithControl(msgs, ctl.Ihave, ctl.Iwant, ctl.Graft, ctl.Prune, ctl.Idontwant)
gs.sendRPC(p, out, false)
}
func peerListToMap(peers []peer.ID) map[peer.ID]struct{} {
pmap := make(map[peer.ID]struct{})
for _, p := range peers {
@ -2053,3 +2109,13 @@ func shuffleStrings(lst []string) {
lst[i], lst[j] = lst[j], lst[i]
}
}
func computeChecksum(mid string) checksum {
var cs checksum
if len(mid) > 32 || len(mid) == 0 {
cs.payload = sha256.Sum256([]byte(mid))
} else {
cs.length = uint8(copy(cs.payload[:], mid))
}
return cs
}
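computeChecksum turns variable-length message ids into a fixed-size, comparable map key: ids of 1 to 32 bytes are copied verbatim with their length recorded, while longer (or empty) ids are hashed with SHA-256 and keep length 0, so a raw 32-byte id (length 32) can never collide with a hashed one (length 0). A small sketch of both branches (the strings import is an assumption for the example):

short := computeChecksum("msgid-1")              // raw copy: cs.length == 7
long := computeChecksum(strings.Repeat("x", 40)) // over 32 bytes: cs.payload = sha256(id), cs.length == 0
_, _ = short, long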


@ -15,6 +15,7 @@ import (
)
func TestGossipsubConnTagMessageDeliveries(t *testing.T) {
t.Skip("flaky test disabled")
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@ -97,7 +98,7 @@ func TestGossipsubConnTagMessageDeliveries(t *testing.T) {
connectAll(t, honestHosts)
for _, h := range honestHosts {
if len(h.Network().Conns()) != nHonest-1 {
if len(h.Network().Peers()) != nHonest-1 {
t.Errorf("expected to have conns to all honest peers, have %d", len(h.Network().Conns()))
}
}
@ -147,8 +148,8 @@ func TestGossipsubConnTagMessageDeliveries(t *testing.T) {
for _, h := range honestHosts {
nHonestConns := 0
nDishonestConns := 0
for _, conn := range h.Network().Conns() {
if _, ok := honestPeers[conn.RemotePeer()]; !ok {
for _, p := range h.Network().Peers() {
if _, ok := honestPeers[p]; !ok {
nDishonestConns++
} else {
nHonestConns++


@ -18,18 +18,22 @@ const (
GossipSubFeatureMesh = iota
// Protocol supports Peer eXchange on prune -- gossipsub-v1.1 compatible
GossipSubFeaturePX
// Protocol supports IDONTWANT -- gossipsub-v1.2 compatible
GossipSubFeatureIdontwant
)
// GossipSubDefaultProtocols is the default gossipsub router protocol list
var GossipSubDefaultProtocols = []protocol.ID{GossipSubID_v11, GossipSubID_v10, FloodSubID}
var GossipSubDefaultProtocols = []protocol.ID{GossipSubID_v12, GossipSubID_v11, GossipSubID_v10, FloodSubID}
// GossipSubDefaultFeatures is the feature test function for the default gossipsub protocols
func GossipSubDefaultFeatures(feat GossipSubFeature, proto protocol.ID) bool {
switch feat {
case GossipSubFeatureMesh:
return proto == GossipSubID_v11 || proto == GossipSubID_v10
return proto == GossipSubID_v12 || proto == GossipSubID_v11 || proto == GossipSubID_v10
case GossipSubFeaturePX:
return proto == GossipSubID_v11
return proto == GossipSubID_v12 || proto == GossipSubID_v11
case GossipSubFeatureIdontwant:
return proto == GossipSubID_v12
default:
return false
}
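A hedged sketch of extending this feature test for a custom protocol, assuming the package's WithGossipSubProtocols option and using a hypothetical /myapp/meshsub/1.2.0 id: map the custom id to its v1.2 equivalent, then delegate to GossipSubDefaultFeatures.

myProto := protocol.ID("/myapp/meshsub/1.2.0") // hypothetical custom protocol id
protos := []protocol.ID{myProto, GossipSubID_v12, GossipSubID_v11, GossipSubID_v10, FloodSubID}
features := func(feat GossipSubFeature, proto protocol.ID) bool {
	if proto == myProto {
		proto = GossipSubID_v12 // treat the custom protocol as v1.2-equivalent
	}
	return GossipSubDefaultFeatures(feat, proto)
}
ps, err := NewGossipSub(ctx, h, WithGossipSubProtocols(protos, features))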


@ -21,6 +21,9 @@ func TestDefaultGossipSubFeatures(t *testing.T) {
if !GossipSubDefaultFeatures(GossipSubFeatureMesh, GossipSubID_v11) {
t.Fatal("gossipsub-v1.1 should support Mesh")
}
if !GossipSubDefaultFeatures(GossipSubFeatureMesh, GossipSubID_v12) {
t.Fatal("gossipsub-v1.2 should support Mesh")
}
if GossipSubDefaultFeatures(GossipSubFeaturePX, FloodSubID) {
t.Fatal("floodsub should not support PX")
@ -28,9 +31,25 @@ func TestDefaultGossipSubFeatures(t *testing.T) {
if GossipSubDefaultFeatures(GossipSubFeaturePX, GossipSubID_v10) {
t.Fatal("gossipsub-v1.0 should not support PX")
}
if !GossipSubDefaultFeatures(GossipSubFeatureMesh, GossipSubID_v11) {
if !GossipSubDefaultFeatures(GossipSubFeaturePX, GossipSubID_v11) {
t.Fatal("gossipsub-v1.1 should support PX")
}
if !GossipSubDefaultFeatures(GossipSubFeaturePX, GossipSubID_v12) {
t.Fatal("gossipsub-v1.2 should support PX")
}
if GossipSubDefaultFeatures(GossipSubFeatureIdontwant, FloodSubID) {
t.Fatal("floodsub should not support IDONTWANT")
}
if GossipSubDefaultFeatures(GossipSubFeatureIdontwant, GossipSubID_v10) {
t.Fatal("gossipsub-v1.0 should not support IDONTWANT")
}
if GossipSubDefaultFeatures(GossipSubFeatureIdontwant, GossipSubID_v11) {
t.Fatal("gossipsub-v1.1 should not support IDONTWANT")
}
if !GossipSubDefaultFeatures(GossipSubFeatureIdontwant, GossipSubID_v12) {
t.Fatal("gossipsub-v1.2 should support IDONTWANT")
}
}
func TestGossipSubCustomProtocols(t *testing.T) {


@ -3,6 +3,8 @@ package pubsub
import (
"context"
"crypto/rand"
"encoding/base64"
"fmt"
"strconv"
"sync"
"testing"
@ -121,7 +123,7 @@ func TestGossipsubAttackSpamIWANT(t *testing.T) {
// being spammy)
iwantlst := []string{DefaultMsgIdFn(msg)}
iwant := []*pb.ControlIWant{{MessageIDs: iwantlst}}
orpc := rpcWithControl(nil, nil, iwant, nil, nil)
orpc := rpcWithControl(nil, nil, iwant, nil, nil, nil)
writeMsg(&orpc.RPC)
}
})
@ -208,7 +210,7 @@ func TestGossipsubAttackSpamIHAVE(t *testing.T) {
for i := 0; i < 3*GossipSubMaxIHaveLength; i++ {
ihavelst := []string{"someid" + strconv.Itoa(i)}
ihave := []*pb.ControlIHave{{TopicID: sub.Topicid, MessageIDs: ihavelst}}
orpc := rpcWithControl(nil, ihave, nil, nil, nil)
orpc := rpcWithControl(nil, ihave, nil, nil, nil, nil)
writeMsg(&orpc.RPC)
}
@ -238,7 +240,7 @@ func TestGossipsubAttackSpamIHAVE(t *testing.T) {
for i := 0; i < 3*GossipSubMaxIHaveLength; i++ {
ihavelst := []string{"someid" + strconv.Itoa(i+100)}
ihave := []*pb.ControlIHave{{TopicID: sub.Topicid, MessageIDs: ihavelst}}
orpc := rpcWithControl(nil, ihave, nil, nil, nil)
orpc := rpcWithControl(nil, ihave, nil, nil, nil, nil)
writeMsg(&orpc.RPC)
}
@ -765,11 +767,191 @@ func TestGossipsubAttackInvalidMessageSpam(t *testing.T) {
<-ctx.Done()
}
// Test that when Gossipsub receives too many IDONTWANT messages from a peer
func TestGossipsubAttackSpamIDONTWANT(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
hosts := getDefaultHosts(t, 3)
msgID := func(pmsg *pb.Message) string {
// silly content-based test message ID: just use the data as a whole
return base64.URLEncoding.EncodeToString(pmsg.Data)
}
psubs := make([]*PubSub, 2)
psubs[0] = getGossipsub(ctx, hosts[0], WithMessageIdFn(msgID))
psubs[1] = getGossipsub(ctx, hosts[1], WithMessageIdFn(msgID))
topic := "foobar"
for _, ps := range psubs {
_, err := ps.Subscribe(topic)
if err != nil {
t.Fatal(err)
}
}
// Wait a bit after the last message before checking the result
msgWaitMax := time.Second + GossipSubHeartbeatInterval
msgTimer := time.NewTimer(msgWaitMax)
// Checks we received some messages
var expMid string
var actMids []string
var mu sync.Mutex
checkMsgs := func() {
mu.Lock()
defer mu.Unlock()
if len(actMids) == 0 {
t.Fatalf("Expected some messages when the maximum number of IDONTWANTs is reached")
}
if actMids[0] != expMid {
t.Fatalf("The expected message is incorrect")
}
if len(actMids) > 1 {
t.Fatalf("The spam prevention should be reset after the heartbeat")
}
}
// Wait for the timer to expire
go func() {
select {
case <-msgTimer.C:
checkMsgs()
cancel()
return
case <-ctx.Done():
checkMsgs()
}
}()
newMockGS(ctx, t, hosts[2], func(writeMsg func(*pb.RPC), irpc *pb.RPC) {
mu.Lock()
defer mu.Unlock()
// Record every published message the host receives
for _, msg := range irpc.GetPublish() {
actMids = append(actMids, msgID(msg))
}
// When the middle peer connects it will send us its subscriptions
for _, sub := range irpc.GetSubscriptions() {
if sub.GetSubscribe() {
// Reply by subscribing to the topic and grafting to the middle peer
writeMsg(&pb.RPC{
Subscriptions: []*pb.RPC_SubOpts{{Subscribe: sub.Subscribe, Topicid: sub.Topicid}},
Control: &pb.ControlMessage{Graft: []*pb.ControlGraft{{TopicID: sub.Topicid}}},
})
go func() {
// Wait for a short interval to make sure the middle peer
// received and processed the subscribe + graft
time.Sleep(100 * time.Millisecond)
// Generate a message and send IDONTWANT to the middle peer
data := make([]byte, 16)
var mid string
for i := 0; i < 1+GossipSubMaxIDontWantMessages; i++ {
rand.Read(data)
mid = msgID(&pb.Message{Data: data})
writeMsg(&pb.RPC{
Control: &pb.ControlMessage{Idontwant: []*pb.ControlIDontWant{{MessageIDs: []string{mid}}}},
})
}
// The host should receive this message id because the maximum was reached
expMid = mid
// Wait for a short interval to make sure the middle peer
// received and processed the IDONTWANTs
time.Sleep(100 * time.Millisecond)
// Publish the message from the first peer
if err := psubs[0].Publish(topic, data); err != nil {
t.Error(err)
return // cannot call t.Fatal in a non-test goroutine
}
// Wait for the next heartbeat so that the prevention will be reset
select {
case <-ctx.Done():
return
case <-time.After(GossipSubHeartbeatInterval):
}
// Test IDONTWANT again to see that it now works again
rand.Read(data)
mid = msgID(&pb.Message{Data: data})
writeMsg(&pb.RPC{
Control: &pb.ControlMessage{Idontwant: []*pb.ControlIDontWant{{MessageIDs: []string{mid}}}},
})
time.Sleep(100 * time.Millisecond)
if err := psubs[0].Publish(topic, data); err != nil {
t.Error(err)
return // cannot call t.Fatal in a non-test goroutine
}
}()
}
}
})
connect(t, hosts[0], hosts[1])
connect(t, hosts[1], hosts[2])
<-ctx.Done()
}
func TestGossipsubHandleIDontwantSpam(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
hosts := getDefaultHosts(t, 2)
msgID := func(pmsg *pb.Message) string {
// silly content-based test message ID: just use the data as a whole
return base64.URLEncoding.EncodeToString(pmsg.Data)
}
psubs := make([]*PubSub, 2)
psubs[0] = getGossipsub(ctx, hosts[0], WithMessageIdFn(msgID))
psubs[1] = getGossipsub(ctx, hosts[1], WithMessageIdFn(msgID))
connect(t, hosts[0], hosts[1])
topic := "foobar"
for _, ps := range psubs {
_, err := ps.Subscribe(topic)
if err != nil {
t.Fatal(err)
}
}
exceededIDWLength := GossipSubMaxIDontWantLength + 1
var idwIds []string
for i := 0; i < exceededIDWLength; i++ {
idwIds = append(idwIds, fmt.Sprintf("idontwant-%d", i))
}
rPid := hosts[1].ID()
ctrlMessage := &pb.ControlMessage{Idontwant: []*pb.ControlIDontWant{{MessageIDs: idwIds}}}
grt := psubs[0].rt.(*GossipSubRouter)
grt.handleIDontWant(rPid, ctrlMessage)
if grt.peerdontwant[rPid] != 1 {
t.Errorf("Wanted message count of %d but received %d", 1, grt.peerdontwant[rPid])
}
mid := fmt.Sprintf("idontwant-%d", GossipSubMaxIDontWantLength-1)
if _, ok := grt.unwanted[rPid][computeChecksum(mid)]; !ok {
t.Errorf("Desired message id was not stored in the unwanted map: %s", mid)
}
mid = fmt.Sprintf("idontwant-%d", GossipSubMaxIDontWantLength)
if _, ok := grt.unwanted[rPid][computeChecksum(mid)]; ok {
t.Errorf("Unwanted message id was stored in the unwanted map: %s", mid)
}
}
type mockGSOnRead func(writeMsg func(*pb.RPC), irpc *pb.RPC)
func newMockGS(ctx context.Context, t *testing.T, attacker host.Host, onReadMsg mockGSOnRead) {
newMockGSWithVersion(ctx, t, attacker, protocol.ID("/meshsub/1.2.0"), onReadMsg)
}
func newMockGSWithVersion(ctx context.Context, t *testing.T, attacker host.Host, gossipSubID protocol.ID, onReadMsg mockGSOnRead) {
// Listen on the gossipsub protocol
const gossipSubID = protocol.ID("/meshsub/1.0.0")
const maxMessageSize = 1024 * 1024
attacker.SetStreamHandler(gossipSubID, func(stream network.Stream) {
// When an incoming stream is opened, set up an outgoing stream

File diff suppressed because it is too large

messagebatch.go Normal file

@ -0,0 +1,78 @@
package pubsub
import (
"iter"
"sync"
"github.com/libp2p/go-libp2p/core/peer"
)
// MessageBatch allows a user to batch related messages and publish them at
// once. This lets the Scheduler define an order for the outgoing RPCs, which
// helps bandwidth-constrained peers.
type MessageBatch struct {
mu sync.Mutex
messages []*Message
}
func (mb *MessageBatch) add(msg *Message) {
mb.mu.Lock()
defer mb.mu.Unlock()
mb.messages = append(mb.messages, msg)
}
func (mb *MessageBatch) take() []*Message {
mb.mu.Lock()
defer mb.mu.Unlock()
messages := mb.messages
mb.messages = nil
return messages
}
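A minimal internal-flow sketch (msg1 and msg2 are placeholders): add may be called from multiple goroutines, since both methods are mutex-guarded, and take drains the batch in one step, leaving it empty and reusable.

var mb MessageBatch
mb.add(msg1) // safe to call concurrently
mb.add(msg2)
batch := mb.take() // returns [msg1, msg2]; mb is empty again
_ = batch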
type messageBatchAndPublishOptions struct {
messages []*Message
opts *BatchPublishOptions
}
// RPCScheduler schedules outgoing RPCs.
type RPCScheduler interface {
// AddRPC adds an RPC to the scheduler.
AddRPC(peer peer.ID, msgID string, rpc *RPC)
// All returns an ordered iterator of RPCs.
All() iter.Seq2[peer.ID, *RPC]
}
type pendingRPC struct {
peer peer.ID
rpc *RPC
}
// RoundRobinMessageIDScheduler schedules outgoing RPCs in round-robin order of message IDs.
type RoundRobinMessageIDScheduler struct {
rpcs map[string][]pendingRPC
}
func (s *RoundRobinMessageIDScheduler) AddRPC(peer peer.ID, msgID string, rpc *RPC) {
if s.rpcs == nil {
s.rpcs = make(map[string][]pendingRPC)
}
s.rpcs[msgID] = append(s.rpcs[msgID], pendingRPC{peer: peer, rpc: rpc})
}
func (s *RoundRobinMessageIDScheduler) All() iter.Seq2[peer.ID, *RPC] {
return func(yield func(peer.ID, *RPC) bool) {
for len(s.rpcs) > 0 {
for msgID, rpcs := range s.rpcs {
if len(rpcs) == 0 {
delete(s.rpcs, msgID)
continue
}
if !yield(rpcs[0].peer, rpcs[0].rpc) {
return
}
s.rpcs[msgID] = rpcs[1:]
}
}
}
}
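A usage sketch, with placeholder peers, ids, and a stand-in send function: AddRPC groups pending RPCs by message id, and All yields one RPC per id per pass, so no single message's fan-out monopolizes the send order. Because Go map iteration is randomized, the rotation across ids is round-robin but not deterministic. This is the shape of Strategy that PublishBatch consumes via BatchPublishOptions.

var s RoundRobinMessageIDScheduler
s.AddRPC(peerA, "m1", rpc1)
s.AddRPC(peerB, "m1", rpc2)
s.AddRPC(peerA, "m2", rpc3)
for p, rpc := range s.All() {
	// first pass: one RPC for "m1" and one for "m2" (in map order);
	// second pass: the remaining RPC for "m1"
	send(p, rpc)
}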


@ -9,6 +9,7 @@ import (
// msgIDGenerator handles computing IDs for msgs
// It allows setting custom generators(MsgIdFunction) per topic
type msgIDGenerator struct {
sync.Mutex
Default MsgIdFunction
topicGensLk sync.RWMutex
@ -31,6 +32,8 @@ func (m *msgIDGenerator) Set(topic string, gen MsgIdFunction) {
// ID computes ID for the msg or short-circuits with the cached value.
func (m *msgIDGenerator) ID(msg *Message) string {
m.Lock()
defer m.Unlock()
if msg.ID != "" {
return msg.ID
}


@ -5,10 +5,11 @@ package pubsub_pb
import (
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
io "io"
math "math"
math_bits "math/bits"
proto "github.com/gogo/protobuf/proto"
)
// Reference imports to suppress errors if they are not otherwise used.
@ -228,13 +229,14 @@ func (m *Message) GetKey() []byte {
}
type ControlMessage struct {
Ihave []*ControlIHave `protobuf:"bytes,1,rep,name=ihave" json:"ihave,omitempty"`
Iwant []*ControlIWant `protobuf:"bytes,2,rep,name=iwant" json:"iwant,omitempty"`
Graft []*ControlGraft `protobuf:"bytes,3,rep,name=graft" json:"graft,omitempty"`
Prune []*ControlPrune `protobuf:"bytes,4,rep,name=prune" json:"prune,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
Ihave []*ControlIHave `protobuf:"bytes,1,rep,name=ihave" json:"ihave,omitempty"`
Iwant []*ControlIWant `protobuf:"bytes,2,rep,name=iwant" json:"iwant,omitempty"`
Graft []*ControlGraft `protobuf:"bytes,3,rep,name=graft" json:"graft,omitempty"`
Prune []*ControlPrune `protobuf:"bytes,4,rep,name=prune" json:"prune,omitempty"`
Idontwant []*ControlIDontWant `protobuf:"bytes,5,rep,name=idontwant" json:"idontwant,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *ControlMessage) Reset() { *m = ControlMessage{} }
@ -298,6 +300,13 @@ func (m *ControlMessage) GetPrune() []*ControlPrune {
return nil
}
func (m *ControlMessage) GetIdontwant() []*ControlIDontWant {
if m != nil {
return m.Idontwant
}
return nil
}
type ControlIHave struct {
TopicID *string `protobuf:"bytes,1,opt,name=topicID" json:"topicID,omitempty"`
// implementors from other languages should use bytes here - go protobuf emits invalid utf8 strings
@ -512,6 +521,54 @@ func (m *ControlPrune) GetBackoff() uint64 {
return 0
}
type ControlIDontWant struct {
// implementors from other languages should use bytes here - go protobuf emits invalid utf8 strings
MessageIDs []string `protobuf:"bytes,1,rep,name=messageIDs" json:"messageIDs,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *ControlIDontWant) Reset() { *m = ControlIDontWant{} }
func (m *ControlIDontWant) String() string { return proto.CompactTextString(m) }
func (*ControlIDontWant) ProtoMessage() {}
func (*ControlIDontWant) Descriptor() ([]byte, []int) {
return fileDescriptor_77a6da22d6a3feb1, []int{7}
}
func (m *ControlIDontWant) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *ControlIDontWant) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_ControlIDontWant.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *ControlIDontWant) XXX_Merge(src proto.Message) {
xxx_messageInfo_ControlIDontWant.Merge(m, src)
}
func (m *ControlIDontWant) XXX_Size() int {
return m.Size()
}
func (m *ControlIDontWant) XXX_DiscardUnknown() {
xxx_messageInfo_ControlIDontWant.DiscardUnknown(m)
}
var xxx_messageInfo_ControlIDontWant proto.InternalMessageInfo
func (m *ControlIDontWant) GetMessageIDs() []string {
if m != nil {
return m.MessageIDs
}
return nil
}
type PeerInfo struct {
PeerID []byte `protobuf:"bytes,1,opt,name=peerID" json:"peerID,omitempty"`
SignedPeerRecord []byte `protobuf:"bytes,2,opt,name=signedPeerRecord" json:"signedPeerRecord,omitempty"`
@ -524,7 +581,7 @@ func (m *PeerInfo) Reset() { *m = PeerInfo{} }
func (m *PeerInfo) String() string { return proto.CompactTextString(m) }
func (*PeerInfo) ProtoMessage() {}
func (*PeerInfo) Descriptor() ([]byte, []int) {
return fileDescriptor_77a6da22d6a3feb1, []int{7}
return fileDescriptor_77a6da22d6a3feb1, []int{8}
}
func (m *PeerInfo) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@ -576,43 +633,46 @@ func init() {
proto.RegisterType((*ControlIWant)(nil), "pubsub.pb.ControlIWant")
proto.RegisterType((*ControlGraft)(nil), "pubsub.pb.ControlGraft")
proto.RegisterType((*ControlPrune)(nil), "pubsub.pb.ControlPrune")
proto.RegisterType((*ControlIDontWant)(nil), "pubsub.pb.ControlIDontWant")
proto.RegisterType((*PeerInfo)(nil), "pubsub.pb.PeerInfo")
}
func init() { proto.RegisterFile("rpc.proto", fileDescriptor_77a6da22d6a3feb1) }
var fileDescriptor_77a6da22d6a3feb1 = []byte{
// 480 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x7c, 0x92, 0xc1, 0x8e, 0xd3, 0x3c,
0x10, 0xc7, 0xe5, 0x6d, 0xbb, 0xd9, 0xcc, 0xe6, 0xfb, 0xb4, 0x32, 0x68, 0x31, 0x08, 0x55, 0x55,
0x4e, 0x01, 0x41, 0x0e, 0xcb, 0x95, 0x0b, 0xb4, 0x12, 0x9b, 0x03, 0x50, 0x99, 0x03, 0x67, 0x27,
0x75, 0xba, 0xd1, 0x6e, 0x63, 0x63, 0x3b, 0x8b, 0x78, 0x08, 0xde, 0x8b, 0x03, 0x07, 0x1e, 0x01,
0xf5, 0xc6, 0x5b, 0x20, 0x3b, 0x4e, 0x9a, 0xa5, 0x94, 0x9b, 0xe7, 0xef, 0xdf, 0xcc, 0xfc, 0x3d,
0x1e, 0x08, 0x95, 0x2c, 0x52, 0xa9, 0x84, 0x11, 0x38, 0x94, 0x4d, 0xae, 0x9b, 0x3c, 0x95, 0x79,
0xfc, 0x0b, 0xc1, 0x88, 0x2e, 0xe7, 0xf8, 0x25, 0xfc, 0xa7, 0x9b, 0x5c, 0x17, 0xaa, 0x92, 0xa6,
0x12, 0xb5, 0x26, 0x68, 0x36, 0x4a, 0x4e, 0x2f, 0xce, 0xd3, 0x1e, 0x4d, 0xe9, 0x72, 0x9e, 0x7e,
0x68, 0xf2, 0xf7, 0xd2, 0x68, 0x7a, 0x17, 0xc6, 0xcf, 0x20, 0x90, 0x4d, 0x7e, 0x53, 0xe9, 0x2b,
0x72, 0xe4, 0xf2, 0xf0, 0x20, 0xef, 0x2d, 0xd7, 0x9a, 0xad, 0x39, 0xed, 0x10, 0xfc, 0x02, 0x82,
0x42, 0xd4, 0x46, 0x89, 0x1b, 0x32, 0x9a, 0xa1, 0xe4, 0xf4, 0xe2, 0xe1, 0x80, 0x9e, 0xb7, 0x37,
0x7d, 0x92, 0x27, 0x1f, 0xbd, 0x82, 0xc0, 0x37, 0xc7, 0x8f, 0x21, 0xf4, 0xed, 0x73, 0x4e, 0xd0,
0x0c, 0x25, 0x27, 0x74, 0x27, 0x60, 0x02, 0x81, 0x11, 0xb2, 0x2a, 0xaa, 0x15, 0x39, 0x9a, 0xa1,
0x24, 0xa4, 0x5d, 0x18, 0x7f, 0x45, 0x10, 0xf8, 0xba, 0x18, 0xc3, 0xb8, 0x54, 0x62, 0xe3, 0xd2,
0x23, 0xea, 0xce, 0x56, 0x5b, 0x31, 0xc3, 0x5c, 0x5a, 0x44, 0xdd, 0x19, 0xdf, 0x87, 0x89, 0xe6,
0x9f, 0x6a, 0xe1, 0x9c, 0x46, 0xb4, 0x0d, 0xac, 0xea, 0x8a, 0x92, 0xb1, 0xeb, 0xd0, 0x06, 0xce,
0x57, 0xb5, 0xae, 0x99, 0x69, 0x14, 0x27, 0x13, 0xc7, 0xef, 0x04, 0x7c, 0x06, 0xa3, 0x6b, 0xfe,
0x85, 0x1c, 0x3b, 0xdd, 0x1e, 0xe3, 0xef, 0x08, 0xfe, 0xbf, 0xfb, 0x5c, 0xfc, 0x1c, 0x26, 0xd5,
0x15, 0xbb, 0xe5, 0x7e, 0xfc, 0x0f, 0xf6, 0x07, 0x93, 0x5d, 0xb2, 0x5b, 0x4e, 0x5b, 0xca, 0xe1,
0x9f, 0x59, 0x6d, 0xfc, 0xd4, 0xff, 0x86, 0x7f, 0x64, 0xb5, 0xa1, 0x2d, 0x65, 0xf1, 0xb5, 0x62,
0xa5, 0x21, 0xa3, 0x43, 0xf8, 0x1b, 0x7b, 0x4d, 0x5b, 0xca, 0xe2, 0x52, 0x35, 0x35, 0x27, 0xe3,
0x43, 0xf8, 0xd2, 0x5e, 0xd3, 0x96, 0x8a, 0x2f, 0x21, 0x1a, 0x7a, 0xec, 0x3f, 0x22, 0x5b, 0xb8,
0x29, 0x77, 0x1f, 0x91, 0x2d, 0xf0, 0x14, 0x60, 0xd3, 0x3e, 0x38, 0x5b, 0x68, 0xe7, 0x3d, 0xa4,
0x03, 0x25, 0x4e, 0x77, 0x95, 0xac, 0xfd, 0x3f, 0x78, 0xb4, 0xc7, 0x27, 0x3d, 0xef, 0xfc, 0x1f,
0xee, 0x1c, 0x6f, 0x7a, 0xd2, 0x59, 0xff, 0x87, 0xc7, 0x27, 0x30, 0x91, 0x9c, 0x2b, 0xed, 0x47,
0x7b, 0x6f, 0xf0, 0xf8, 0x25, 0xe7, 0x2a, 0xab, 0x4b, 0x41, 0x5b, 0xc2, 0x16, 0xc9, 0x59, 0x71,
0x2d, 0xca, 0xd2, 0x6d, 0xc9, 0x98, 0x76, 0x61, 0xfc, 0x0e, 0x4e, 0x3a, 0x18, 0x9f, 0xc3, 0xb1,
0xc5, 0x7d, 0xa7, 0x88, 0xfa, 0x08, 0x3f, 0x85, 0x33, 0xbb, 0x24, 0x7c, 0x65, 0x49, 0xca, 0x0b,
0xa1, 0x56, 0x7e, 0x03, 0xf7, 0xf4, 0xd7, 0xd1, 0xb7, 0xed, 0x14, 0xfd, 0xd8, 0x4e, 0xd1, 0xcf,
0xed, 0x14, 0xfd, 0x0e, 0x00, 0x00, 0xff, 0xff, 0xb2, 0xf8, 0xc4, 0x6e, 0xd2, 0x03, 0x00, 0x00,
// 511 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x84, 0x92, 0xcd, 0x6e, 0x13, 0x31,
0x10, 0xc7, 0xe5, 0x7c, 0x34, 0xdd, 0xe9, 0x82, 0x22, 0x83, 0x8a, 0xf9, 0x50, 0x14, 0xed, 0x29,
0x20, 0xd8, 0x43, 0x38, 0x21, 0x71, 0x81, 0x44, 0xa2, 0x39, 0x00, 0x91, 0x39, 0x70, 0xde, 0xdd,
0x38, 0xe9, 0xaa, 0x8d, 0x6d, 0x6c, 0x6f, 0x11, 0x4f, 0xc0, 0x89, 0xf7, 0xe2, 0xc8, 0x23, 0xa0,
0xdc, 0x78, 0x0b, 0xe4, 0x59, 0xe7, 0xa3, 0x4d, 0x03, 0x37, 0xcf, 0xf8, 0x37, 0xfe, 0xff, 0x67,
0xc6, 0x10, 0x19, 0x5d, 0xa4, 0xda, 0x28, 0xa7, 0x68, 0xa4, 0xab, 0xdc, 0x56, 0x79, 0xaa, 0xf3,
0xe4, 0x0f, 0x81, 0x26, 0x9f, 0x8e, 0xe8, 0x6b, 0xb8, 0x63, 0xab, 0xdc, 0x16, 0xa6, 0xd4, 0xae,
0x54, 0xd2, 0x32, 0xd2, 0x6f, 0x0e, 0x4e, 0x86, 0xa7, 0xe9, 0x06, 0x4d, 0xf9, 0x74, 0x94, 0x7e,
0xaa, 0xf2, 0x8f, 0xda, 0x59, 0x7e, 0x1d, 0xa6, 0xcf, 0xa1, 0xa3, 0xab, 0xfc, 0xb2, 0xb4, 0xe7,
0xac, 0x81, 0x75, 0x74, 0xa7, 0xee, 0xbd, 0xb0, 0x36, 0x5b, 0x08, 0xbe, 0x46, 0xe8, 0x4b, 0xe8,
0x14, 0x4a, 0x3a, 0xa3, 0x2e, 0x59, 0xb3, 0x4f, 0x06, 0x27, 0xc3, 0x87, 0x3b, 0xf4, 0xa8, 0xbe,
0xd9, 0x14, 0x05, 0xf2, 0xd1, 0x1b, 0xe8, 0x04, 0x71, 0xfa, 0x04, 0xa2, 0x20, 0x9f, 0x0b, 0x46,
0xfa, 0x64, 0x70, 0xcc, 0xb7, 0x09, 0xca, 0xa0, 0xe3, 0x94, 0x2e, 0x8b, 0x72, 0xc6, 0x1a, 0x7d,
0x32, 0x88, 0xf8, 0x3a, 0x4c, 0x7e, 0x10, 0xe8, 0x84, 0x77, 0x29, 0x85, 0xd6, 0xdc, 0xa8, 0x25,
0x96, 0xc7, 0x1c, 0xcf, 0x3e, 0x37, 0xcb, 0x5c, 0x86, 0x65, 0x31, 0xc7, 0x33, 0xbd, 0x0f, 0x6d,
0x2b, 0xbe, 0x48, 0x85, 0x4e, 0x63, 0x5e, 0x07, 0x3e, 0x8b, 0x8f, 0xb2, 0x16, 0x2a, 0xd4, 0x01,
0xfa, 0x2a, 0x17, 0x32, 0x73, 0x95, 0x11, 0xac, 0x8d, 0xfc, 0x36, 0x41, 0xbb, 0xd0, 0xbc, 0x10,
0xdf, 0xd8, 0x11, 0xe6, 0xfd, 0x31, 0xf9, 0xde, 0x80, 0xbb, 0xd7, 0xdb, 0xa5, 0x2f, 0xa0, 0x5d,
0x9e, 0x67, 0x57, 0x22, 0x8c, 0xff, 0xc1, 0xfe, 0x60, 0x26, 0x67, 0xd9, 0x95, 0xe0, 0x35, 0x85,
0xf8, 0xd7, 0x4c, 0xba, 0x30, 0xf5, 0xdb, 0xf0, 0xcf, 0x99, 0x74, 0xbc, 0xa6, 0x3c, 0xbe, 0x30,
0xd9, 0xdc, 0xb1, 0xe6, 0x21, 0xfc, 0x9d, 0xbf, 0xe6, 0x35, 0xe5, 0x71, 0x6d, 0x2a, 0x29, 0x58,
0xeb, 0x10, 0x3e, 0xf5, 0xd7, 0xbc, 0xa6, 0xe8, 0x2b, 0x88, 0xca, 0x99, 0x92, 0x0e, 0x0d, 0xb5,
0xb1, 0xe4, 0xf1, 0x2d, 0x86, 0xc6, 0x4a, 0x3a, 0x34, 0xb5, 0xa5, 0x93, 0x33, 0x88, 0x77, 0xdb,
0xdb, 0xec, 0x70, 0x32, 0xc6, 0x05, 0xad, 0x77, 0x38, 0x19, 0xd3, 0x1e, 0xc0, 0xb2, 0x9e, 0xd5,
0x64, 0x6c, 0xb1, 0xed, 0x88, 0xef, 0x64, 0x92, 0x74, 0xfb, 0x92, 0x17, 0xb9, 0xc1, 0x93, 0x3d,
0x7e, 0xb0, 0xe1, 0xb1, 0xf5, 0xc3, 0xca, 0xc9, 0x72, 0x43, 0x62, 0xd7, 0xff, 0xf0, 0xf8, 0x14,
0xda, 0x5a, 0x08, 0x63, 0xc3, 0x56, 0xee, 0xed, 0x0c, 0x61, 0x2a, 0x84, 0x99, 0xc8, 0xb9, 0xe2,
0x35, 0xe1, 0x1f, 0xc9, 0xb3, 0xe2, 0x42, 0xcd, 0xe7, 0xf8, 0xc1, 0x5a, 0x7c, 0x1d, 0x26, 0x43,
0xe8, 0xde, 0x9c, 0xd8, 0x7f, 0x9b, 0xf9, 0x00, 0xc7, 0x6b, 0x01, 0x7a, 0x0a, 0x47, 0x5e, 0x22,
0xb8, 0x8b, 0x79, 0x88, 0xe8, 0x33, 0xe8, 0xfa, 0x3f, 0x29, 0x66, 0x9e, 0xe4, 0xa2, 0x50, 0x66,
0x16, 0x3e, 0xfc, 0x5e, 0xfe, 0x6d, 0xfc, 0x73, 0xd5, 0x23, 0xbf, 0x56, 0x3d, 0xf2, 0x7b, 0xd5,
0x23, 0x7f, 0x03, 0x00, 0x00, 0xff, 0xff, 0xba, 0x73, 0x8e, 0xbf, 0x41, 0x04, 0x00, 0x00,
}
func (m *RPC) Marshal() (dAtA []byte, err error) {
@ -819,6 +879,20 @@ func (m *ControlMessage) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i -= len(m.XXX_unrecognized)
copy(dAtA[i:], m.XXX_unrecognized)
}
if len(m.Idontwant) > 0 {
for iNdEx := len(m.Idontwant) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Idontwant[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintRpc(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0x2a
}
}
if len(m.Prune) > 0 {
for iNdEx := len(m.Prune) - 1; iNdEx >= 0; iNdEx-- {
{
@ -1044,6 +1118,42 @@ func (m *ControlPrune) MarshalToSizedBuffer(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
func (m *ControlIDontWant) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *ControlIDontWant) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *ControlIDontWant) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if m.XXX_unrecognized != nil {
i -= len(m.XXX_unrecognized)
copy(dAtA[i:], m.XXX_unrecognized)
}
if len(m.MessageIDs) > 0 {
for iNdEx := len(m.MessageIDs) - 1; iNdEx >= 0; iNdEx-- {
i -= len(m.MessageIDs[iNdEx])
copy(dAtA[i:], m.MessageIDs[iNdEx])
i = encodeVarintRpc(dAtA, i, uint64(len(m.MessageIDs[iNdEx])))
i--
dAtA[i] = 0xa
}
}
return len(dAtA) - i, nil
}
func (m *PeerInfo) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
@ -1209,6 +1319,12 @@ func (m *ControlMessage) Size() (n int) {
n += 1 + l + sovRpc(uint64(l))
}
}
if len(m.Idontwant) > 0 {
for _, e := range m.Idontwant {
l = e.Size()
n += 1 + l + sovRpc(uint64(l))
}
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
@ -1296,6 +1412,24 @@ func (m *ControlPrune) Size() (n int) {
return n
}
func (m *ControlIDontWant) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if len(m.MessageIDs) > 0 {
for _, s := range m.MessageIDs {
l = len(s)
n += 1 + l + sovRpc(uint64(l))
}
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
func (m *PeerInfo) Size() (n int) {
if m == nil {
return 0
@ -2001,6 +2135,40 @@ func (m *ControlMessage) Unmarshal(dAtA []byte) error {
return err
}
iNdEx = postIndex
case 5:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Idontwant", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowRpc
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthRpc
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthRpc
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Idontwant = append(m.Idontwant, &ControlIDontWant{})
if err := m.Idontwant[len(m.Idontwant)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipRpc(dAtA[iNdEx:])
@ -2444,6 +2612,89 @@ func (m *ControlPrune) Unmarshal(dAtA []byte) error {
}
return nil
}
func (m *ControlIDontWant) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowRpc
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: ControlIDontWant: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: ControlIDontWant: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field MessageIDs", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowRpc
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthRpc
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthRpc
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.MessageIDs = append(m.MessageIDs, string(dAtA[iNdEx:postIndex]))
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipRpc(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthRpc
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
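As a sanity check on the generated codec above, a minimal round-trip sketch (the pb alias is this repo's protobuf package):

package main

import (
	"fmt"

	pb "github.com/libp2p/go-libp2p-pubsub/pb"
)

func main() {
	// Encode a ControlIDontWant and decode it back with the generated methods.
	in := &pb.ControlIDontWant{MessageIDs: []string{"msgid-1", "msgid-2"}}
	raw, err := in.Marshal()
	if err != nil {
		panic(err)
	}
	var out pb.ControlIDontWant
	if err := out.Unmarshal(raw); err != nil {
		panic(err)
	}
	fmt.Println(out.GetMessageIDs()) // [msgid-1 msgid-2]
}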
func (m *PeerInfo) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0


@ -28,6 +28,7 @@ message ControlMessage {
repeated ControlIWant iwant = 2;
repeated ControlGraft graft = 3;
repeated ControlPrune prune = 4;
repeated ControlIDontWant idontwant = 5;
}
message ControlIHave {
@ -51,7 +52,12 @@ message ControlPrune {
optional uint64 backoff = 3;
}
message ControlIDontWant {
// implementors from other languages should use bytes here - go protobuf emits invalid utf8 strings
repeated string messageIDs = 1;
}
message PeerInfo {
optional bytes peerID = 1;
optional bytes signedPeerRecord = 2;
}
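The new idontwant field is number 5 with wire type 2 (length-delimited), which is why the generated marshallers above write the tag byte 0x2a; a one-line check:

package main

import "fmt"

func main() {
	// Protobuf tag byte: fieldNumber << 3 | wireType.
	fieldNum, wireType := 5, 2
	fmt.Printf("0x%x\n", fieldNum<<3|wireType) // 0x2a
}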


@ -5,10 +5,11 @@ package pubsub_pb
import (
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
io "io"
math "math"
math_bits "math/bits"
proto "github.com/gogo/protobuf/proto"
)
// Reference imports to suppress errors if they are not otherwise used.
@ -1159,13 +1160,14 @@ func (m *TraceEvent_SubMeta) GetTopic() string {
}
type TraceEvent_ControlMeta struct {
Ihave []*TraceEvent_ControlIHaveMeta `protobuf:"bytes,1,rep,name=ihave" json:"ihave,omitempty"`
Iwant []*TraceEvent_ControlIWantMeta `protobuf:"bytes,2,rep,name=iwant" json:"iwant,omitempty"`
Graft []*TraceEvent_ControlGraftMeta `protobuf:"bytes,3,rep,name=graft" json:"graft,omitempty"`
Prune []*TraceEvent_ControlPruneMeta `protobuf:"bytes,4,rep,name=prune" json:"prune,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
Ihave []*TraceEvent_ControlIHaveMeta `protobuf:"bytes,1,rep,name=ihave" json:"ihave,omitempty"`
Iwant []*TraceEvent_ControlIWantMeta `protobuf:"bytes,2,rep,name=iwant" json:"iwant,omitempty"`
Graft []*TraceEvent_ControlGraftMeta `protobuf:"bytes,3,rep,name=graft" json:"graft,omitempty"`
Prune []*TraceEvent_ControlPruneMeta `protobuf:"bytes,4,rep,name=prune" json:"prune,omitempty"`
Idontwant []*TraceEvent_ControlIDontWantMeta `protobuf:"bytes,5,rep,name=idontwant" json:"idontwant,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *TraceEvent_ControlMeta) Reset() { *m = TraceEvent_ControlMeta{} }
@ -1229,6 +1231,13 @@ func (m *TraceEvent_ControlMeta) GetPrune() []*TraceEvent_ControlPruneMeta {
return nil
}
func (m *TraceEvent_ControlMeta) GetIdontwant() []*TraceEvent_ControlIDontWantMeta {
if m != nil {
return m.Idontwant
}
return nil
}
type TraceEvent_ControlIHaveMeta struct {
Topic *string `protobuf:"bytes,1,opt,name=topic" json:"topic,omitempty"`
MessageIDs [][]byte `protobuf:"bytes,2,rep,name=messageIDs" json:"messageIDs,omitempty"`
@ -1433,6 +1442,53 @@ func (m *TraceEvent_ControlPruneMeta) GetPeers() [][]byte {
return nil
}
type TraceEvent_ControlIDontWantMeta struct {
MessageIDs [][]byte `protobuf:"bytes,1,rep,name=messageIDs" json:"messageIDs,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *TraceEvent_ControlIDontWantMeta) Reset() { *m = TraceEvent_ControlIDontWantMeta{} }
func (m *TraceEvent_ControlIDontWantMeta) String() string { return proto.CompactTextString(m) }
func (*TraceEvent_ControlIDontWantMeta) ProtoMessage() {}
func (*TraceEvent_ControlIDontWantMeta) Descriptor() ([]byte, []int) {
return fileDescriptor_0571941a1d628a80, []int{0, 21}
}
func (m *TraceEvent_ControlIDontWantMeta) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *TraceEvent_ControlIDontWantMeta) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_TraceEvent_ControlIDontWantMeta.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *TraceEvent_ControlIDontWantMeta) XXX_Merge(src proto.Message) {
xxx_messageInfo_TraceEvent_ControlIDontWantMeta.Merge(m, src)
}
func (m *TraceEvent_ControlIDontWantMeta) XXX_Size() int {
return m.Size()
}
func (m *TraceEvent_ControlIDontWantMeta) XXX_DiscardUnknown() {
xxx_messageInfo_TraceEvent_ControlIDontWantMeta.DiscardUnknown(m)
}
var xxx_messageInfo_TraceEvent_ControlIDontWantMeta proto.InternalMessageInfo
func (m *TraceEvent_ControlIDontWantMeta) GetMessageIDs() [][]byte {
if m != nil {
return m.MessageIDs
}
return nil
}
type TraceEventBatch struct {
Batch []*TraceEvent `protobuf:"bytes,1,rep,name=batch" json:"batch,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
@ -1504,76 +1560,79 @@ func init() {
proto.RegisterType((*TraceEvent_ControlIWantMeta)(nil), "pubsub.pb.TraceEvent.ControlIWantMeta")
proto.RegisterType((*TraceEvent_ControlGraftMeta)(nil), "pubsub.pb.TraceEvent.ControlGraftMeta")
proto.RegisterType((*TraceEvent_ControlPruneMeta)(nil), "pubsub.pb.TraceEvent.ControlPruneMeta")
proto.RegisterType((*TraceEvent_ControlIDontWantMeta)(nil), "pubsub.pb.TraceEvent.ControlIDontWantMeta")
proto.RegisterType((*TraceEventBatch)(nil), "pubsub.pb.TraceEventBatch")
}
func init() { proto.RegisterFile("trace.proto", fileDescriptor_0571941a1d628a80) }
var fileDescriptor_0571941a1d628a80 = []byte{
// 999 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xac, 0x96, 0x51, 0x6f, 0xda, 0x56,
0x14, 0xc7, 0xe7, 0x00, 0x01, 0x0e, 0x84, 0x78, 0x77, 0x6d, 0x65, 0xb1, 0x36, 0x62, 0x59, 0x55,
0x21, 0x4d, 0x42, 0x6a, 0xa4, 0xa9, 0x0f, 0x6b, 0xab, 0x11, 0xec, 0x26, 0x44, 0x24, 0xb1, 0x0e,
0x24, 0x7b, 0xcc, 0x0c, 0xdc, 0x35, 0x8e, 0xc0, 0xb6, 0xec, 0x0b, 0x53, 0x9f, 0xf6, 0xb4, 0xef,
0xd6, 0xb7, 0xed, 0x23, 0x54, 0xf9, 0x24, 0xd3, 0xbd, 0xd7, 0x36, 0x36, 0xd8, 0xb4, 0x8b, 0xfa,
0xe6, 0x73, 0xf3, 0xff, 0x9d, 0x7b, 0xce, 0xbd, 0xe7, 0x7f, 0x03, 0xd4, 0x98, 0x6f, 0x4d, 0x68,
0xc7, 0xf3, 0x5d, 0xe6, 0x92, 0xaa, 0xb7, 0x18, 0x07, 0x8b, 0x71, 0xc7, 0x1b, 0x1f, 0x7e, 0x7a,
0x02, 0x30, 0xe2, 0x7f, 0x32, 0x96, 0xd4, 0x61, 0xa4, 0x03, 0x45, 0xf6, 0xc1, 0xa3, 0x9a, 0xd2,
0x52, 0xda, 0x8d, 0xa3, 0x66, 0x27, 0x16, 0x76, 0x56, 0xa2, 0xce, 0xe8, 0x83, 0x47, 0x51, 0xe8,
0xc8, 0x13, 0xd8, 0xf5, 0x28, 0xf5, 0xfb, 0xba, 0xb6, 0xd3, 0x52, 0xda, 0x75, 0x0c, 0x23, 0xf2,
0x14, 0xaa, 0xcc, 0x9e, 0xd3, 0x80, 0x59, 0x73, 0x4f, 0x2b, 0xb4, 0x94, 0x76, 0x01, 0x57, 0x0b,
0x64, 0x00, 0x0d, 0x6f, 0x31, 0x9e, 0xd9, 0xc1, 0xed, 0x39, 0x0d, 0x02, 0xeb, 0x3d, 0xd5, 0x8a,
0x2d, 0xa5, 0x5d, 0x3b, 0x7a, 0x9e, 0xbd, 0x9f, 0x99, 0xd2, 0xe2, 0x1a, 0x4b, 0xfa, 0xb0, 0xe7,
0xd3, 0x3b, 0x3a, 0x61, 0x51, 0xb2, 0x92, 0x48, 0xf6, 0x63, 0x76, 0x32, 0x4c, 0x4a, 0x31, 0x4d,
0x12, 0x04, 0x75, 0xba, 0xf0, 0x66, 0xf6, 0xc4, 0x62, 0x34, 0xca, 0xb6, 0x2b, 0xb2, 0xbd, 0xc8,
0xce, 0xa6, 0xaf, 0xa9, 0x71, 0x83, 0xe7, 0xcd, 0x4e, 0xe9, 0xcc, 0x5e, 0x52, 0x3f, 0xca, 0x58,
0xde, 0xd6, 0xac, 0x9e, 0xd2, 0xe2, 0x1a, 0x4b, 0x5e, 0x41, 0xd9, 0x9a, 0x4e, 0x4d, 0x4a, 0x7d,
0xad, 0x22, 0xd2, 0x3c, 0xcb, 0x4e, 0xd3, 0x95, 0x22, 0x8c, 0xd4, 0xe4, 0x57, 0x00, 0x9f, 0xce,
0xdd, 0x25, 0x15, 0x6c, 0x55, 0xb0, 0xad, 0xbc, 0x23, 0x8a, 0x74, 0x98, 0x60, 0xf8, 0xd6, 0x3e,
0x9d, 0x2c, 0xd1, 0xec, 0x69, 0xb0, 0x6d, 0x6b, 0x94, 0x22, 0x8c, 0xd4, 0x1c, 0x0c, 0xa8, 0x33,
0xe5, 0x60, 0x6d, 0x1b, 0x38, 0x94, 0x22, 0x8c, 0xd4, 0x1c, 0x9c, 0xfa, 0xae, 0xc7, 0xc1, 0xfa,
0x36, 0x50, 0x97, 0x22, 0x8c, 0xd4, 0x7c, 0x8c, 0xef, 0x5c, 0xdb, 0xd1, 0xf6, 0x04, 0x95, 0x33,
0xc6, 0x67, 0xae, 0xed, 0xa0, 0xd0, 0x91, 0x97, 0x50, 0x9a, 0x51, 0x6b, 0x49, 0xb5, 0x86, 0x00,
0xbe, 0xcf, 0x06, 0x06, 0x5c, 0x82, 0x52, 0xc9, 0x91, 0xf7, 0xbe, 0xf5, 0x07, 0xd3, 0xf6, 0xb7,
0x21, 0x27, 0x5c, 0x82, 0x52, 0xc9, 0x11, 0xcf, 0x5f, 0x38, 0x54, 0x53, 0xb7, 0x21, 0x26, 0x97,
0xa0, 0x54, 0x36, 0x75, 0x68, 0xa4, 0xa7, 0x9f, 0x3b, 0x6b, 0x2e, 0x3f, 0xfb, 0xba, 0xb0, 0x69,
0x1d, 0x57, 0x0b, 0xe4, 0x11, 0x94, 0x98, 0xeb, 0xd9, 0x13, 0x61, 0xc7, 0x2a, 0xca, 0xa0, 0xf9,
0x17, 0xec, 0xa5, 0xc6, 0xfe, 0x33, 0x49, 0x0e, 0xa1, 0xee, 0xd3, 0x09, 0xb5, 0x97, 0x74, 0xfa,
0xce, 0x77, 0xe7, 0xa1, 0xb5, 0x53, 0x6b, 0xdc, 0xf8, 0x3e, 0xb5, 0x02, 0xd7, 0x11, 0xee, 0xae,
0x62, 0x18, 0xad, 0x0a, 0x28, 0x26, 0x0b, 0xb8, 0x03, 0x75, 0xdd, 0x29, 0x5f, 0xa1, 0x86, 0x78,
0xaf, 0x42, 0x72, 0xaf, 0x5b, 0x68, 0xa4, 0x3d, 0xf4, 0x90, 0x23, 0xdb, 0xd8, 0xbf, 0xb0, 0xb9,
0x7f, 0xf3, 0x15, 0x94, 0x43, 0x9b, 0x25, 0xde, 0x41, 0x25, 0xf5, 0x0e, 0x3e, 0xe2, 0x57, 0xee,
0x32, 0x37, 0x4a, 0x2e, 0x82, 0xe6, 0x73, 0x80, 0x95, 0xc7, 0xf2, 0xd8, 0xe6, 0xef, 0x50, 0x0e,
0xad, 0xb4, 0x51, 0x8d, 0x92, 0x71, 0x1a, 0x2f, 0xa1, 0x38, 0xa7, 0xcc, 0x12, 0x3b, 0xe5, 0x7b,
0xd3, 0xec, 0x9d, 0x53, 0x66, 0xa1, 0x90, 0x36, 0x47, 0x50, 0x0e, 0x3d, 0xc7, 0x8b, 0xe0, 0xae,
0x1b, 0xb9, 0x51, 0x11, 0x32, 0x7a, 0x60, 0xd6, 0xd0, 0x90, 0x5f, 0x33, 0xeb, 0x53, 0x28, 0x72,
0xc3, 0xae, 0xae, 0x4b, 0x49, 0x5e, 0xfa, 0x33, 0x28, 0x09, 0x77, 0xe6, 0x18, 0xe0, 0x67, 0x28,
0x09, 0x27, 0x6e, 0xbb, 0xa7, 0x6c, 0x4c, 0xb8, 0xf1, 0x7f, 0x62, 0x1f, 0x15, 0x28, 0x87, 0xc5,
0x93, 0x37, 0x50, 0x09, 0x47, 0x2d, 0xd0, 0x94, 0x56, 0xa1, 0x5d, 0x3b, 0xfa, 0x21, 0xbb, 0xdb,
0x70, 0x58, 0x45, 0xc7, 0x31, 0x42, 0xba, 0x50, 0x0f, 0x16, 0xe3, 0x60, 0xe2, 0xdb, 0x1e, 0xb3,
0x5d, 0x47, 0xdb, 0x11, 0x29, 0xf2, 0xde, 0xcf, 0xc5, 0x58, 0xe0, 0x29, 0x84, 0xfc, 0x02, 0xe5,
0x89, 0xeb, 0x30, 0xdf, 0x9d, 0x89, 0x21, 0xce, 0x2d, 0xa0, 0x27, 0x45, 0x22, 0x43, 0x44, 0x34,
0xbb, 0x50, 0x4b, 0x14, 0xf6, 0xa0, 0xc7, 0xe7, 0x0d, 0x94, 0xc3, 0xc2, 0x38, 0x1e, 0x96, 0x36,
0x96, 0x3f, 0x31, 0x2a, 0xb8, 0x5a, 0xc8, 0xc1, 0xff, 0xde, 0x81, 0x5a, 0xa2, 0x34, 0xf2, 0x1a,
0x4a, 0xf6, 0x2d, 0x7f, 0xaa, 0xe5, 0x69, 0xbe, 0xd8, 0xda, 0x4c, 0xff, 0xd4, 0x5a, 0xca, 0x23,
0x95, 0x90, 0xa0, 0xff, 0xb4, 0x1c, 0x16, 0x1e, 0xe4, 0x67, 0xe8, 0xdf, 0x2c, 0x87, 0x85, 0x34,
0x87, 0x38, 0x2d, 0xdf, 0xfc, 0xc2, 0x17, 0xd0, 0x62, 0xe0, 0x24, 0x2d, 0x9f, 0xff, 0xd7, 0xd1,
0xf3, 0x5f, 0xfc, 0x02, 0x5a, 0xcc, 0x9d, 0xa4, 0xe5, 0x7f, 0x82, 0x53, 0x50, 0xd7, 0x9b, 0xca,
0xf6, 0x02, 0x39, 0x00, 0x88, 0xef, 0x24, 0x10, 0x8d, 0xd6, 0x31, 0xb1, 0xd2, 0x3c, 0x5a, 0x65,
0x8a, 0x1a, 0x5c, 0x63, 0x94, 0x0d, 0xa6, 0x1d, 0x33, 0x71, 0x5b, 0x39, 0x4e, 0x7c, 0x1b, 0x2b,
0xe3, 0x16, 0x72, 0xea, 0xe4, 0x6f, 0x23, 0xa5, 0x7e, 0x54, 0xa2, 0x0c, 0x0e, 0xff, 0x51, 0xa0,
0xc8, 0x7f, 0x60, 0x92, 0xef, 0x60, 0xdf, 0xbc, 0x3a, 0x1e, 0xf4, 0x87, 0xa7, 0x37, 0xe7, 0xc6,
0x70, 0xd8, 0x3d, 0x31, 0xd4, 0x6f, 0x08, 0x81, 0x06, 0x1a, 0x67, 0x46, 0x6f, 0x14, 0xaf, 0x29,
0xe4, 0x31, 0x7c, 0xab, 0x5f, 0x99, 0x83, 0x7e, 0xaf, 0x3b, 0x32, 0xe2, 0xe5, 0x1d, 0xce, 0xeb,
0xc6, 0xa0, 0x7f, 0x6d, 0x60, 0xbc, 0x58, 0x20, 0x75, 0xa8, 0x74, 0x75, 0xfd, 0xc6, 0x34, 0x0c,
0x54, 0x8b, 0x64, 0x1f, 0x6a, 0x68, 0x9c, 0x5f, 0x5e, 0x1b, 0x72, 0xa1, 0xc4, 0xff, 0x8c, 0x46,
0xef, 0xfa, 0x06, 0xcd, 0x9e, 0xba, 0xcb, 0xa3, 0xa1, 0x71, 0xa1, 0x8b, 0xa8, 0xcc, 0x23, 0x1d,
0x2f, 0x4d, 0x11, 0x55, 0x48, 0x05, 0x8a, 0x67, 0x97, 0xfd, 0x0b, 0xb5, 0x4a, 0xaa, 0x50, 0x1a,
0x18, 0xdd, 0x6b, 0x43, 0x05, 0xfe, 0x79, 0x82, 0xdd, 0x77, 0x23, 0xb5, 0xc6, 0x3f, 0x4d, 0xbc,
0xba, 0x30, 0xd4, 0xfa, 0xe1, 0x5b, 0xd8, 0x5f, 0xdd, 0xef, 0xb1, 0xc5, 0x26, 0xb7, 0xe4, 0x27,
0x28, 0x8d, 0xf9, 0x47, 0x38, 0xc4, 0x8f, 0x33, 0x47, 0x01, 0xa5, 0xe6, 0xb8, 0xfe, 0xf1, 0xfe,
0x40, 0xf9, 0xf7, 0xfe, 0x40, 0xf9, 0x74, 0x7f, 0xa0, 0xfc, 0x17, 0x00, 0x00, 0xff, 0xff, 0xdb,
0x3a, 0x1c, 0xe4, 0xc9, 0x0b, 0x00, 0x00,
// 1027 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xac, 0x96, 0xdf, 0x6e, 0xe2, 0x46,
0x14, 0xc6, 0xeb, 0x80, 0x03, 0x1c, 0x08, 0x71, 0xa7, 0xd9, 0xd6, 0x72, 0x77, 0x23, 0x9a, 0xae,
0x56, 0xa8, 0x95, 0x90, 0x36, 0x52, 0xbb, 0x17, 0xdd, 0x5d, 0x95, 0x60, 0x6f, 0x42, 0x44, 0x12,
0x6b, 0x20, 0xe9, 0x65, 0x6a, 0x60, 0xba, 0x71, 0x04, 0xb6, 0x65, 0x0f, 0x54, 0x7b, 0xd5, 0xd7,
0xdb, 0xbb, 0xed, 0x23, 0x54, 0x79, 0x92, 0x6a, 0x66, 0xfc, 0x07, 0x83, 0xed, 0xec, 0x46, 0xb9,
0xf3, 0x19, 0xbe, 0xdf, 0x99, 0x33, 0x67, 0xce, 0x37, 0x02, 0xea, 0xd4, 0xb7, 0x26, 0xa4, 0xe3,
0xf9, 0x2e, 0x75, 0x51, 0xcd, 0x5b, 0x8c, 0x83, 0xc5, 0xb8, 0xe3, 0x8d, 0x0f, 0xee, 0xbe, 0x03,
0x18, 0xb1, 0x9f, 0x8c, 0x25, 0x71, 0x28, 0xea, 0x40, 0x99, 0x7e, 0xf0, 0x88, 0x2a, 0xb5, 0xa4,
0x76, 0xf3, 0x50, 0xeb, 0xc4, 0xc2, 0x4e, 0x22, 0xea, 0x8c, 0x3e, 0x78, 0x04, 0x73, 0x1d, 0xfa,
0x16, 0xb6, 0x3d, 0x42, 0xfc, 0xbe, 0xae, 0x6e, 0xb5, 0xa4, 0x76, 0x03, 0x87, 0x11, 0x7a, 0x0a,
0x35, 0x6a, 0xcf, 0x49, 0x40, 0xad, 0xb9, 0xa7, 0x96, 0x5a, 0x52, 0xbb, 0x84, 0x93, 0x05, 0x34,
0x80, 0xa6, 0xb7, 0x18, 0xcf, 0xec, 0xe0, 0xe6, 0x8c, 0x04, 0x81, 0xf5, 0x9e, 0xa8, 0xe5, 0x96,
0xd4, 0xae, 0x1f, 0x3e, 0xcf, 0xde, 0xcf, 0x4c, 0x69, 0xf1, 0x1a, 0x8b, 0xfa, 0xb0, 0xe3, 0x93,
0x5b, 0x32, 0xa1, 0x51, 0x32, 0x99, 0x27, 0xfb, 0x31, 0x3b, 0x19, 0x5e, 0x95, 0xe2, 0x34, 0x89,
0x30, 0x28, 0xd3, 0x85, 0x37, 0xb3, 0x27, 0x16, 0x25, 0x51, 0xb6, 0x6d, 0x9e, 0xed, 0x45, 0x76,
0x36, 0x7d, 0x4d, 0x8d, 0x37, 0x78, 0x76, 0xd8, 0x29, 0x99, 0xd9, 0x4b, 0xe2, 0x47, 0x19, 0x2b,
0x45, 0x87, 0xd5, 0x53, 0x5a, 0xbc, 0xc6, 0xa2, 0x57, 0x50, 0xb1, 0xa6, 0x53, 0x93, 0x10, 0x5f,
0xad, 0xf2, 0x34, 0xcf, 0xb2, 0xd3, 0x74, 0x85, 0x08, 0x47, 0x6a, 0xf4, 0x3b, 0x80, 0x4f, 0xe6,
0xee, 0x92, 0x70, 0xb6, 0xc6, 0xd9, 0x56, 0x5e, 0x8b, 0x22, 0x1d, 0x5e, 0x61, 0xd8, 0xd6, 0x3e,
0x99, 0x2c, 0xb1, 0xd9, 0x53, 0xa1, 0x68, 0x6b, 0x2c, 0x44, 0x38, 0x52, 0x33, 0x30, 0x20, 0xce,
0x94, 0x81, 0xf5, 0x22, 0x70, 0x28, 0x44, 0x38, 0x52, 0x33, 0x70, 0xea, 0xbb, 0x1e, 0x03, 0x1b,
0x45, 0xa0, 0x2e, 0x44, 0x38, 0x52, 0xb3, 0x31, 0xbe, 0x75, 0x6d, 0x47, 0xdd, 0xe1, 0x54, 0xce,
0x18, 0x9f, 0xba, 0xb6, 0x83, 0xb9, 0x0e, 0xbd, 0x04, 0x79, 0x46, 0xac, 0x25, 0x51, 0x9b, 0x1c,
0xf8, 0x3e, 0x1b, 0x18, 0x30, 0x09, 0x16, 0x4a, 0x86, 0xbc, 0xf7, 0xad, 0xbf, 0xa8, 0xba, 0x5b,
0x84, 0x1c, 0x33, 0x09, 0x16, 0x4a, 0x86, 0x78, 0xfe, 0xc2, 0x21, 0xaa, 0x52, 0x84, 0x98, 0x4c,
0x82, 0x85, 0x52, 0xd3, 0xa1, 0x99, 0x9e, 0x7e, 0xe6, 0xac, 0xb9, 0xf8, 0xec, 0xeb, 0xdc, 0xa6,
0x0d, 0x9c, 0x2c, 0xa0, 0x3d, 0x90, 0xa9, 0xeb, 0xd9, 0x13, 0x6e, 0xc7, 0x1a, 0x16, 0x81, 0xf6,
0x0f, 0xec, 0xa4, 0xc6, 0xfe, 0x9e, 0x24, 0x07, 0xd0, 0xf0, 0xc9, 0x84, 0xd8, 0x4b, 0x32, 0x7d,
0xe7, 0xbb, 0xf3, 0xd0, 0xda, 0xa9, 0x35, 0x66, 0x7c, 0x9f, 0x58, 0x81, 0xeb, 0x70, 0x77, 0xd7,
0x70, 0x18, 0x25, 0x05, 0x94, 0x57, 0x0b, 0xb8, 0x05, 0x65, 0xdd, 0x29, 0x8f, 0x50, 0x43, 0xbc,
0x57, 0x69, 0x75, 0xaf, 0x1b, 0x68, 0xa6, 0x3d, 0xf4, 0x90, 0x96, 0x6d, 0xec, 0x5f, 0xda, 0xdc,
0x5f, 0x7b, 0x05, 0x95, 0xd0, 0x66, 0x2b, 0xef, 0xa0, 0x94, 0x7a, 0x07, 0xf7, 0xd8, 0x95, 0xbb,
0xd4, 0x8d, 0x92, 0xf3, 0x40, 0x7b, 0x0e, 0x90, 0x78, 0x2c, 0x8f, 0xd5, 0xfe, 0x84, 0x4a, 0x68,
0xa5, 0x8d, 0x6a, 0xa4, 0x8c, 0x6e, 0xbc, 0x84, 0xf2, 0x9c, 0x50, 0x8b, 0xef, 0x94, 0xef, 0x4d,
0xb3, 0x77, 0x46, 0xa8, 0x85, 0xb9, 0x54, 0x1b, 0x41, 0x25, 0xf4, 0x1c, 0x2b, 0x82, 0xb9, 0x6e,
0xe4, 0x46, 0x45, 0x88, 0xe8, 0x81, 0x59, 0x43, 0x43, 0x3e, 0x66, 0xd6, 0xa7, 0x50, 0x66, 0x86,
0x4d, 0xae, 0x4b, 0x5a, 0xbd, 0xf4, 0x67, 0x20, 0x73, 0x77, 0xe6, 0x18, 0xe0, 0x17, 0x90, 0xb9,
0x13, 0x8b, 0xee, 0x29, 0x1b, 0xe3, 0x6e, 0xfc, 0x42, 0xec, 0xa3, 0x04, 0x95, 0xb0, 0x78, 0xf4,
0x06, 0xaa, 0xe1, 0xa8, 0x05, 0xaa, 0xd4, 0x2a, 0xb5, 0xeb, 0x87, 0x3f, 0x64, 0x9f, 0x36, 0x1c,
0x56, 0x7e, 0xe2, 0x18, 0x41, 0x5d, 0x68, 0x04, 0x8b, 0x71, 0x30, 0xf1, 0x6d, 0x8f, 0xda, 0xae,
0xa3, 0x6e, 0xf1, 0x14, 0x79, 0xef, 0xe7, 0x62, 0xcc, 0xf1, 0x14, 0x82, 0x7e, 0x83, 0xca, 0xc4,
0x75, 0xa8, 0xef, 0xce, 0xf8, 0x10, 0xe7, 0x16, 0xd0, 0x13, 0x22, 0x9e, 0x21, 0x22, 0xb4, 0x2e,
0xd4, 0x57, 0x0a, 0x7b, 0xd0, 0xe3, 0xf3, 0x06, 0x2a, 0x61, 0x61, 0x0c, 0x0f, 0x4b, 0x1b, 0x8b,
0xbf, 0x18, 0x55, 0x9c, 0x2c, 0xe4, 0xe0, 0x9f, 0xb6, 0xa0, 0xbe, 0x52, 0x1a, 0x7a, 0x0d, 0xb2,
0x7d, 0xc3, 0x9e, 0x6a, 0xd1, 0xcd, 0x17, 0x85, 0x87, 0xe9, 0x9f, 0x58, 0x4b, 0xd1, 0x52, 0x01,
0x71, 0xfa, 0x6f, 0xcb, 0xa1, 0x61, 0x23, 0xef, 0xa1, 0xff, 0xb0, 0x1c, 0x1a, 0xd2, 0x0c, 0x62,
0xb4, 0x78, 0xf3, 0x4b, 0x9f, 0x41, 0xf3, 0x81, 0x13, 0xb4, 0x78, 0xfe, 0x5f, 0x47, 0xcf, 0x7f,
0xf9, 0x33, 0x68, 0x3e, 0x77, 0x82, 0xe6, 0x10, 0x3a, 0x81, 0x9a, 0x3d, 0x75, 0x1d, 0xca, 0xab,
0x97, 0x79, 0x86, 0x9f, 0x8a, 0xab, 0xd7, 0x5d, 0x87, 0xc6, 0x27, 0x48, 0x60, 0xed, 0x04, 0x94,
0xf5, 0xf6, 0x64, 0xbb, 0x0a, 0xed, 0x03, 0xc4, 0xb7, 0x1b, 0xf0, 0x96, 0x35, 0xf0, 0xca, 0x8a,
0x76, 0x98, 0x64, 0x8a, 0x36, 0x5a, 0x63, 0xa4, 0x0d, 0xa6, 0x1d, 0x33, 0x71, 0x83, 0x72, 0x3c,
0xfd, 0x36, 0x56, 0xc6, 0xcd, 0xc8, 0xa9, 0x93, 0xbd, 0xb2, 0x84, 0xf8, 0x51, 0x89, 0x22, 0xd0,
0x7e, 0x85, 0xbd, 0xac, 0x56, 0xdc, 0x57, 0xe1, 0xc1, 0x27, 0x09, 0xca, 0xec, 0x2f, 0x2e, 0xfa,
0x06, 0x76, 0xcd, 0xcb, 0xa3, 0x41, 0x7f, 0x78, 0x72, 0x7d, 0x66, 0x0c, 0x87, 0xdd, 0x63, 0x43,
0xf9, 0x0a, 0x21, 0x68, 0x62, 0xe3, 0xd4, 0xe8, 0x8d, 0xe2, 0x35, 0x09, 0x3d, 0x81, 0xaf, 0xf5,
0x4b, 0x73, 0xd0, 0xef, 0x75, 0x47, 0x46, 0xbc, 0xbc, 0xc5, 0x78, 0xdd, 0x18, 0xf4, 0xaf, 0x0c,
0x1c, 0x2f, 0x96, 0x50, 0x03, 0xaa, 0x5d, 0x5d, 0xbf, 0x36, 0x0d, 0x03, 0x2b, 0x65, 0xb4, 0x0b,
0x75, 0x6c, 0x9c, 0x5d, 0x5c, 0x19, 0x62, 0x41, 0x66, 0x3f, 0x63, 0xa3, 0x77, 0x75, 0x8d, 0xcd,
0x9e, 0xb2, 0xcd, 0xa2, 0xa1, 0x71, 0xae, 0xf3, 0xa8, 0xc2, 0x22, 0x1d, 0x5f, 0x98, 0x3c, 0xaa,
0xa2, 0x2a, 0x94, 0x4f, 0x2f, 0xfa, 0xe7, 0x4a, 0x0d, 0xd5, 0x40, 0x1e, 0x18, 0xdd, 0x2b, 0x43,
0x01, 0xf6, 0x79, 0x8c, 0xbb, 0xef, 0x46, 0x4a, 0x9d, 0x7d, 0x9a, 0xf8, 0xf2, 0xdc, 0x50, 0x1a,
0x07, 0x6f, 0x61, 0x37, 0x99, 0x8f, 0x23, 0x8b, 0x4e, 0x6e, 0xd0, 0xcf, 0x20, 0x8f, 0xd9, 0x47,
0x68, 0xa3, 0x27, 0x99, 0xa3, 0x84, 0x85, 0xe6, 0xa8, 0xf1, 0xf1, 0x6e, 0x5f, 0xfa, 0xf7, 0x6e,
0x5f, 0xfa, 0xef, 0x6e, 0x5f, 0xfa, 0x3f, 0x00, 0x00, 0xff, 0xff, 0x17, 0x7f, 0xbd, 0x0d, 0x4b,
0x0c, 0x00, 0x00,
}
func (m *TraceEvent) Marshal() (dAtA []byte, err error) {
@ -2509,6 +2568,20 @@ func (m *TraceEvent_ControlMeta) MarshalToSizedBuffer(dAtA []byte) (int, error)
i -= len(m.XXX_unrecognized)
copy(dAtA[i:], m.XXX_unrecognized)
}
if len(m.Idontwant) > 0 {
for iNdEx := len(m.Idontwant) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Idontwant[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintTrace(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0x2a
}
}
if len(m.Prune) > 0 {
for iNdEx := len(m.Prune) - 1; iNdEx >= 0; iNdEx-- {
{
@ -2724,6 +2797,42 @@ func (m *TraceEvent_ControlPruneMeta) MarshalToSizedBuffer(dAtA []byte) (int, er
return len(dAtA) - i, nil
}
func (m *TraceEvent_ControlIDontWantMeta) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *TraceEvent_ControlIDontWantMeta) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *TraceEvent_ControlIDontWantMeta) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if m.XXX_unrecognized != nil {
i -= len(m.XXX_unrecognized)
copy(dAtA[i:], m.XXX_unrecognized)
}
if len(m.MessageIDs) > 0 {
for iNdEx := len(m.MessageIDs) - 1; iNdEx >= 0; iNdEx-- {
i -= len(m.MessageIDs[iNdEx])
copy(dAtA[i:], m.MessageIDs[iNdEx])
i = encodeVarintTrace(dAtA, i, uint64(len(m.MessageIDs[iNdEx])))
i--
dAtA[i] = 0xa
}
}
return len(dAtA) - i, nil
}
func (m *TraceEventBatch) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
@ -3211,6 +3320,12 @@ func (m *TraceEvent_ControlMeta) Size() (n int) {
n += 1 + l + sovTrace(uint64(l))
}
}
if len(m.Idontwant) > 0 {
for _, e := range m.Idontwant {
l = e.Size()
n += 1 + l + sovTrace(uint64(l))
}
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
@ -3295,6 +3410,24 @@ func (m *TraceEvent_ControlPruneMeta) Size() (n int) {
return n
}
func (m *TraceEvent_ControlIDontWantMeta) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if len(m.MessageIDs) > 0 {
for _, b := range m.MessageIDs {
l = len(b)
n += 1 + l + sovTrace(uint64(l))
}
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
func (m *TraceEventBatch) Size() (n int) {
if m == nil {
return 0
@ -6032,6 +6165,40 @@ func (m *TraceEvent_ControlMeta) Unmarshal(dAtA []byte) error {
return err
}
iNdEx = postIndex
case 5:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Idontwant", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTrace
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthTrace
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTrace
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Idontwant = append(m.Idontwant, &TraceEvent_ControlIDontWantMeta{})
if err := m.Idontwant[len(m.Idontwant)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipTrace(dAtA[iNdEx:])
@ -6453,6 +6620,89 @@ func (m *TraceEvent_ControlPruneMeta) Unmarshal(dAtA []byte) error {
}
return nil
}
func (m *TraceEvent_ControlIDontWantMeta) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTrace
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: ControlIDontWantMeta: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: ControlIDontWantMeta: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field MessageIDs", wireType)
}
var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTrace
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if byteLen < 0 {
return ErrInvalidLengthTrace
}
postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthTrace
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.MessageIDs = append(m.MessageIDs, make([]byte, postIndex-iNdEx))
copy(m.MessageIDs[len(m.MessageIDs)-1], dAtA[iNdEx:postIndex])
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipTrace(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTrace
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func (m *TraceEventBatch) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0


@ -124,6 +124,7 @@ message TraceEvent {
repeated ControlIWantMeta iwant = 2;
repeated ControlGraftMeta graft = 3;
repeated ControlPruneMeta prune = 4;
repeated ControlIDontWantMeta idontwant = 5;
}
message ControlIHaveMeta {
@ -143,6 +144,10 @@ message TraceEvent {
optional string topic = 1;
repeated bytes peers = 2;
}
message ControlIDontWantMeta {
repeated bytes messageIDs = 1;
}
}
message TraceEventBatch {

pubsub.go

@ -5,6 +5,8 @@ import (
"encoding/binary"
"errors"
"fmt"
"iter"
"math/bits"
"math/rand"
"sync"
"sync/atomic"
@ -137,6 +139,9 @@ type PubSub struct {
// sendMsg handles messages that have been validated
sendMsg chan *Message
// sendMessageBatch publishes a batch of messages
sendMessageBatch chan messageBatchAndPublishOptions
// addVal handles validator registration requests
addVal chan *addValReq
@ -150,7 +155,7 @@ type PubSub struct {
blacklist Blacklist
blacklistPeer chan peer.ID
peers map[peer.ID]chan *RPC
peers map[peer.ID]*rpcQueue
inboundStreamsMx sync.Mutex
inboundStreams map[peer.ID]network.Stream
@ -199,11 +204,14 @@ type PubSubRouter interface {
// EnoughPeers returns whether the router needs more peers before it's ready to publish new records.
// Suggested (if greater than 0) is a suggested number of peers that the router should need.
EnoughPeers(topic string, suggested int) bool
// AcceptFrom is invoked on any incoming message before pushing it to the validation pipeline
// AcceptFrom is invoked on any RPC envelope before pushing it to the validation pipeline
// or processing control information.
// Allows routers with internal scoring to vet peers before committing any processing resources
// to the message and implement an effective graylist and react to validation queue overload.
AcceptFrom(peer.ID) AcceptStatus
// Preprocess is invoked on messages in the RPC envelope right before pushing it to
// the validation pipeline
Preprocess(from peer.ID, msgs []*Message)
// HandleRPC is invoked to process control messages in the RPC envelope.
// It is invoked after subscriptions and payload messages have been processed.
HandleRPC(*RPC)
@ -217,6 +225,10 @@ type PubSubRouter interface {
Leave(topic string)
}
type BatchPublisher interface {
PublishBatch(messages []*Message, opts *BatchPublishOptions)
}
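A router opts into batch publishing by implementing this interface. A hypothetical minimal implementation (not from this PR; gossipsub's real one schedules messages across peers) could simply publish each message in turn:

// naiveBatcher is illustrative only; it delegates to the router's normal
// Publish path, one message at a time.
type naiveBatcher struct{ rt PubSubRouter }

func (b *naiveBatcher) PublishBatch(messages []*Message, opts *BatchPublishOptions) {
	for _, msg := range messages {
		b.rt.Publish(msg)
	}
}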
type AcceptStatus int
const (
@ -248,6 +260,202 @@ type RPC struct {
from peer.ID
}
// split splits the given RPC. If a sub RPC is too large and can't be split
// further (e.g. Message data is bigger than the RPC limit), then it will be
// returned as an oversized RPC. The caller should filter out oversized RPCs.
func (rpc *RPC) split(limit int) iter.Seq[RPC] {
return func(yield func(RPC) bool) {
nextRPC := RPC{from: rpc.from}
{
nextRPCSize := 0
messagesInNextRPC := 0
messageSlice := rpc.Publish
// Merge/Append publish messages. This pattern is optimized compared to
// the patterns for other fields because this is the common cause for
// splitting a message.
for _, msg := range rpc.Publish {
// We know the message field number is <15 so this is safe.
incrementalSize := pbFieldNumberLT15Size + sizeOfEmbeddedMsg(msg.Size())
if nextRPCSize+incrementalSize > limit {
// The message doesn't fit. Let's set the messages that did fit
// into this RPC, yield it, then make a new one
nextRPC.Publish = messageSlice[:messagesInNextRPC]
messageSlice = messageSlice[messagesInNextRPC:]
if !yield(nextRPC) {
return
}
nextRPC = RPC{from: rpc.from}
nextRPCSize = 0
messagesInNextRPC = 0
}
messagesInNextRPC++
nextRPCSize += incrementalSize
}
if nextRPCSize > 0 {
// yield the message here for simplicity. We aren't optimally
// packing this RPC, but we avoid successively calling .Size()
// on the messages for the next parts.
nextRPC.Publish = messageSlice[:messagesInNextRPC]
if !yield(nextRPC) {
return
}
nextRPC = RPC{from: rpc.from}
}
}
// Fast path check. It's possible the original RPC is now small enough
// without the messages to publish
nextRPC = *rpc
nextRPC.Publish = nil
if s := nextRPC.Size(); s < limit {
if s != 0 {
yield(nextRPC)
}
return
}
// We have to split the RPC into multiple parts
nextRPC = RPC{from: rpc.from}
// Merge/Append Subscriptions
for _, sub := range rpc.Subscriptions {
if nextRPC.Subscriptions = append(nextRPC.Subscriptions, sub); nextRPC.Size() > limit {
nextRPC.Subscriptions = nextRPC.Subscriptions[:len(nextRPC.Subscriptions)-1]
if !yield(nextRPC) {
return
}
nextRPC = RPC{from: rpc.from}
nextRPC.Subscriptions = append(nextRPC.Subscriptions, sub)
}
}
// Merge/Append Control messages
if ctl := rpc.Control; ctl != nil {
if nextRPC.Control == nil {
nextRPC.Control = &pb.ControlMessage{}
if nextRPC.Size() > limit {
nextRPC.Control = nil
if !yield(nextRPC) {
return
}
nextRPC = RPC{RPC: pb.RPC{Control: &pb.ControlMessage{}}, from: rpc.from}
}
}
for _, graft := range ctl.GetGraft() {
if nextRPC.Control.Graft = append(nextRPC.Control.Graft, graft); nextRPC.Size() > limit {
nextRPC.Control.Graft = nextRPC.Control.Graft[:len(nextRPC.Control.Graft)-1]
if !yield(nextRPC) {
return
}
nextRPC = RPC{RPC: pb.RPC{Control: &pb.ControlMessage{}}, from: rpc.from}
nextRPC.Control.Graft = append(nextRPC.Control.Graft, graft)
}
}
for _, prune := range ctl.GetPrune() {
if nextRPC.Control.Prune = append(nextRPC.Control.Prune, prune); nextRPC.Size() > limit {
nextRPC.Control.Prune = nextRPC.Control.Prune[:len(nextRPC.Control.Prune)-1]
if !yield(nextRPC) {
return
}
nextRPC = RPC{RPC: pb.RPC{Control: &pb.ControlMessage{}}, from: rpc.from}
nextRPC.Control.Prune = append(nextRPC.Control.Prune, prune)
}
}
for _, iwant := range ctl.GetIwant() {
if len(nextRPC.Control.Iwant) == 0 {
// Initialize with a single IWANT.
// For IWANTs we don't need more than a single one,
// since there are no topic IDs here.
newIWant := &pb.ControlIWant{}
if nextRPC.Control.Iwant = append(nextRPC.Control.Iwant, newIWant); nextRPC.Size() > limit {
nextRPC.Control.Iwant = nextRPC.Control.Iwant[:len(nextRPC.Control.Iwant)-1]
if !yield(nextRPC) {
return
}
nextRPC = RPC{RPC: pb.RPC{Control: &pb.ControlMessage{
Iwant: []*pb.ControlIWant{newIWant},
}}, from: rpc.from}
}
}
for _, msgID := range iwant.GetMessageIDs() {
if nextRPC.Control.Iwant[0].MessageIDs = append(nextRPC.Control.Iwant[0].MessageIDs, msgID); nextRPC.Size() > limit {
nextRPC.Control.Iwant[0].MessageIDs = nextRPC.Control.Iwant[0].MessageIDs[:len(nextRPC.Control.Iwant[0].MessageIDs)-1]
if !yield(nextRPC) {
return
}
nextRPC = RPC{RPC: pb.RPC{Control: &pb.ControlMessage{
Iwant: []*pb.ControlIWant{{MessageIDs: []string{msgID}}},
}}, from: rpc.from}
}
}
}
for _, ihave := range ctl.GetIhave() {
if len(nextRPC.Control.Ihave) == 0 ||
nextRPC.Control.Ihave[len(nextRPC.Control.Ihave)-1].TopicID != ihave.TopicID {
// Start a new IHAVE if we are referencing a new topic ID
newIhave := &pb.ControlIHave{TopicID: ihave.TopicID}
if nextRPC.Control.Ihave = append(nextRPC.Control.Ihave, newIhave); nextRPC.Size() > limit {
nextRPC.Control.Ihave = nextRPC.Control.Ihave[:len(nextRPC.Control.Ihave)-1]
if !yield(nextRPC) {
return
}
nextRPC = RPC{RPC: pb.RPC{Control: &pb.ControlMessage{
Ihave: []*pb.ControlIHave{newIhave},
}}, from: rpc.from}
}
}
for _, msgID := range ihave.GetMessageIDs() {
lastIHave := nextRPC.Control.Ihave[len(nextRPC.Control.Ihave)-1]
if lastIHave.MessageIDs = append(lastIHave.MessageIDs, msgID); nextRPC.Size() > limit {
lastIHave.MessageIDs = lastIHave.MessageIDs[:len(lastIHave.MessageIDs)-1]
if !yield(nextRPC) {
return
}
nextRPC = RPC{RPC: pb.RPC{Control: &pb.ControlMessage{
Ihave: []*pb.ControlIHave{{TopicID: ihave.TopicID, MessageIDs: []string{msgID}}},
}}, from: rpc.from}
}
}
}
}
if nextRPC.Size() > 0 {
if !yield(nextRPC) {
return
}
}
}
}
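A hedged usage sketch for the iterator (the send callback and limit value are placeholders, not part of this PR):

// sendSplit consumes the iter.Seq returned by split, filtering out
// oversized sub-RPCs as the doc comment on split requires.
func sendSplit(rpc *RPC, limit int, send func(RPC)) {
	for sub := range rpc.split(limit) {
		if sub.Size() > limit {
			// a single element was too big to split further; drop it
			continue
		}
		send(sub)
	}
}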
// pbFieldNumberLT15Size is the number of bytes required to encode a protobuf
// field number less than or equal to 15 along with its wire type. This is 1
// byte because the protobuf encoding of field numbers is a varint encoding of:
// fieldNumber << 3 | wireType
// Refer to https://protobuf.dev/programming-guides/encoding/#structure
// for more details on the encoding of messages. You may also reference the
// concrete implementation of pb.RPC.Size()
const pbFieldNumberLT15Size = 1
func sovRpc(x uint64) (n int) {
return (bits.Len64(x|1) + 6) / 7
}
func sizeOfEmbeddedMsg(
msgSize int,
) int {
prefixSize := sovRpc(uint64(msgSize))
return prefixSize + msgSize
}
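To make the size arithmetic concrete, a self-contained check: a 300-byte embedded message costs one tag byte plus a two-byte varint length prefix plus the payload:

package main

import (
	"fmt"
	"math/bits"
)

func main() {
	// Varint width of the length prefix, mirroring sovRpc above.
	sov := func(x uint64) int { return (bits.Len64(x|1) + 6) / 7 }
	msgSize := 300
	// 1 tag byte (field number <= 15) + 2 prefix bytes + 300 payload bytes.
	fmt.Println(1 + sov(uint64(msgSize)) + msgSize) // 303
}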
type Option func(*PubSub) error
// NewPubSub returns a new PubSub management object.
@ -282,6 +490,7 @@ func NewPubSub(ctx context.Context, h host.Host, rt PubSubRouter, opts ...Option
rmTopic: make(chan *rmTopicReq),
getTopics: make(chan *topicReq),
sendMsg: make(chan *Message, 32),
sendMessageBatch: make(chan messageBatchAndPublishOptions, 1),
addVal: make(chan *addValReq),
rmVal: make(chan *rmValReq),
eval: make(chan func()),
@ -289,7 +498,7 @@ func NewPubSub(ctx context.Context, h host.Host, rt PubSubRouter, opts ...Option
mySubs: make(map[string]map[*Subscription]struct{}),
myRelays: make(map[string]int),
topics: make(map[string]map[peer.ID]struct{}),
peers: make(map[peer.ID]chan *RPC),
peers: make(map[peer.ID]*rpcQueue),
inboundStreams: make(map[peer.ID]network.Stream),
blacklist: NewMapBlacklist(),
blacklistPeer: make(chan peer.ID),
@ -563,8 +772,8 @@ func WithAppSpecificRpcInspector(inspector func(peer.ID, *RPC) error) Option {
func (p *PubSub) processLoop(ctx context.Context) {
defer func() {
// Clean up go routines.
for _, ch := range p.peers {
close(ch)
for _, queue := range p.peers {
queue.Close()
}
p.peers = nil
p.topics = nil
@ -579,7 +788,7 @@ func (p *PubSub) processLoop(ctx context.Context) {
case s := <-p.newPeerStream:
pid := s.Conn().RemotePeer()
ch, ok := p.peers[pid]
q, ok := p.peers[pid]
if !ok {
log.Warn("new stream for unknown peer: ", pid)
s.Reset()
@ -588,7 +797,7 @@ func (p *PubSub) processLoop(ctx context.Context) {
if p.blacklist.Contains(pid) {
log.Warn("closing stream for blacklisted peer: ", pid)
close(ch)
q.Close()
delete(p.peers, pid)
s.Reset()
continue
@ -650,6 +859,9 @@ func (p *PubSub) processLoop(ctx context.Context) {
case msg := <-p.sendMsg:
p.publishMessage(msg)
case batchAndOpts := <-p.sendMessageBatch:
p.publishMessageBatch(batchAndOpts)
case req := <-p.addVal:
p.val.AddValidator(req)
@ -663,9 +875,9 @@ func (p *PubSub) processLoop(ctx context.Context) {
log.Infof("Blacklisting peer %s", pid)
p.blacklist.Add(pid)
ch, ok := p.peers[pid]
q, ok := p.peers[pid]
if ok {
close(ch)
q.Close()
delete(p.peers, pid)
for t, tmap := range p.topics {
if _, ok := tmap[pid]; ok {
@ -712,10 +924,10 @@ func (p *PubSub) handlePendingPeers() {
continue
}
messages := make(chan *RPC, p.peerOutboundQueueSize)
messages <- p.getHelloPacket()
go p.handleNewPeer(p.ctx, pid, messages)
p.peers[pid] = messages
rpcQueue := newRpcQueue(p.peerOutboundQueueSize)
rpcQueue.Push(p.getHelloPacket(), true)
go p.handleNewPeer(p.ctx, pid, rpcQueue)
p.peers[pid] = rpcQueue
}
}
@ -732,12 +944,12 @@ func (p *PubSub) handleDeadPeers() {
p.peerDeadPrioLk.Unlock()
for pid := range deadPeers {
ch, ok := p.peers[pid]
q, ok := p.peers[pid]
if !ok {
continue
}
close(ch)
q.Close()
delete(p.peers, pid)
for t, tmap := range p.topics {
@ -759,10 +971,10 @@ func (p *PubSub) handleDeadPeers() {
// still connected, must be a duplicate connection being closed.
// we respawn the writer as we need to ensure there is a stream active
log.Debugf("peer declared dead but still connected; respawning writer: %s", pid)
messages := make(chan *RPC, p.peerOutboundQueueSize)
messages <- p.getHelloPacket()
p.peers[pid] = messages
go p.handleNewPeerWithBackoff(p.ctx, pid, backoffDelay, messages)
rpcQueue := newRpcQueue(p.peerOutboundQueueSize)
rpcQueue.Push(p.getHelloPacket(), true)
p.peers[pid] = rpcQueue
go p.handleNewPeerWithBackoff(p.ctx, pid, backoffDelay, rpcQueue)
}
}
}
@ -926,14 +1138,14 @@ func (p *PubSub) announce(topic string, sub bool) {
out := rpcWithSubs(subopt)
for pid, peer := range p.peers {
select {
case peer <- out:
p.tracer.SendRPC(out, pid)
default:
err := peer.Push(out, false)
if err != nil {
log.Infof("Can't send announce message to peer %s: queue full; scheduling retry", pid)
p.tracer.DropRPC(out, pid)
go p.announceRetry(pid, topic, sub)
continue
}
p.tracer.SendRPC(out, pid)
}
}
@ -969,14 +1181,14 @@ func (p *PubSub) doAnnounceRetry(pid peer.ID, topic string, sub bool) {
}
out := rpcWithSubs(subopt)
select {
case peer <- out:
p.tracer.SendRPC(out, pid)
default:
err := peer.Push(out, false)
if err != nil {
log.Infof("Can't send announce message to peer %s: queue full; scheduling retry", pid)
p.tracer.DropRPC(out, pid)
go p.announceRetry(pid, topic, sub)
return
}
p.tracer.SendRPC(out, pid)
}
// notifySubs sends a given message to all corresponding subscribers.
@ -1102,13 +1314,21 @@ func (p *PubSub) handleIncomingRPC(rpc *RPC) {
p.tracer.ThrottlePeer(rpc.from)
case AcceptAll:
var toPush []*Message
for _, pmsg := range rpc.GetPublish() {
if !(p.subscribedToMsg(pmsg) || p.canRelayMsg(pmsg)) {
log.Debug("received message in topic we didn't subscribe to; ignoring message")
continue
}
p.pushMsg(&Message{pmsg, "", rpc.from, nil, false})
msg := &Message{pmsg, "", rpc.from, nil, false}
if p.shouldPush(msg) {
toPush = append(toPush, msg)
}
}
p.rt.Preprocess(rpc.from, toPush)
for _, msg := range toPush {
p.pushMsg(msg)
}
}
@ -1125,27 +1345,28 @@ func DefaultPeerFilter(pid peer.ID, topic string) bool {
return true
}
// pushMsg pushes a message performing validation as necessary
func (p *PubSub) pushMsg(msg *Message) {
// shouldPush filters a message before validating and pushing it
// It returns true if the message can be further validated and pushed
func (p *PubSub) shouldPush(msg *Message) bool {
src := msg.ReceivedFrom
// reject messages from blacklisted peers
if p.blacklist.Contains(src) {
log.Debugf("dropping message from blacklisted peer %s", src)
p.tracer.RejectMessage(msg, RejectBlacklstedPeer)
return
return false
}
// even if they are forwarded by good peers
if p.blacklist.Contains(msg.GetFrom()) {
log.Debugf("dropping message from blacklisted source %s", src)
p.tracer.RejectMessage(msg, RejectBlacklistedSource)
return
return false
}
err := p.checkSigningPolicy(msg)
if err != nil {
log.Debugf("dropping message from %s: %s", src, err)
return
return false
}
// reject messages claiming to be from ourselves but not locally published
@ -1153,16 +1374,24 @@ func (p *PubSub) pushMsg(msg *Message) {
if peer.ID(msg.GetFrom()) == self && src != self {
log.Debugf("dropping message claiming to be from self but forwarded from %s", src)
p.tracer.RejectMessage(msg, RejectSelfOrigin)
return
return false
}
// have we already seen and validated this message?
id := p.idGen.ID(msg)
if p.seenMessage(id) {
p.tracer.DuplicateMessage(msg)
return
return false
}
return true
}
// pushMsg pushes a message performing validation as necessary
func (p *PubSub) pushMsg(msg *Message) {
src := msg.ReceivedFrom
id := p.idGen.ID(msg)
if !p.val.Push(src, msg) {
return
}
@ -1212,6 +1441,15 @@ func (p *PubSub) publishMessage(msg *Message) {
}
}
func (p *PubSub) publishMessageBatch(batchAndOpts messageBatchAndPublishOptions) {
for _, msg := range batchAndOpts.messages {
p.tracer.DeliverMessage(msg)
p.notifySubs(msg)
}
// We type checked when pushing the batch to the channel
p.rt.(BatchPublisher).PublishBatch(batchAndOpts.messages, batchAndOpts.opts)
}
type addTopicReq struct {
topic *Topic
resp chan *Topic
@ -1349,6 +1587,37 @@ func (p *PubSub) Publish(topic string, data []byte, opts ...PubOpt) error {
return t.Publish(context.TODO(), data, opts...)
}
// PublishBatch publishes a batch of messages. This only works for routers that
// implement the BatchPublisher interface.
//
// Users should make sure there is enough space in the Peer's outbound queue to
// ensure messages are not dropped. WithPeerOutboundQueueSize should be set to
// at least the expected number of batched messages per peer plus some slack to
// account for gossip messages.
//
// The default publish strategy is RoundRobinMessageIDScheduler.
func (p *PubSub) PublishBatch(batch *MessageBatch, opts ...BatchPubOpt) error {
if _, ok := p.rt.(BatchPublisher); !ok {
return fmt.Errorf("pubsub router is not a BatchPublisher")
}
publishOptions := &BatchPublishOptions{}
for _, o := range opts {
err := o(publishOptions)
if err != nil {
return err
}
}
setDefaultBatchPublishOptions(publishOptions)
p.sendMessageBatch <- messageBatchAndPublishOptions{
messages: batch.take(),
opts: publishOptions,
}
return nil
}
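A hedged caller-side sketch; AddToBatch comes from the Topic API added in this PR series (#607, #622), and the exact signature used here is an assumption:

func publishTwo(ctx context.Context, ps *PubSub, t *Topic) error {
	// Accumulate related messages, then hand them off in one batch.
	var batch MessageBatch
	for _, data := range [][]byte{[]byte("a"), []byte("b")} {
		if err := t.AddToBatch(ctx, &batch, data); err != nil {
			return err
		}
	}
	return ps.PublishBatch(&batch)
}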
func (p *PubSub) nextSeqno() []byte {
seqno := make([]byte, 8)
counter := atomic.AddUint64(&p.counter, 1)


@ -40,7 +40,9 @@ func TestPubSubRemovesBlacklistedPeer(t *testing.T) {
// Bad peer is blacklisted after it has connected.
// Calling p.BlacklistPeer directly does the right thing but we should also clean
// up the peer if it has been added to the blacklist by another means.
bl.Add(hosts[0].ID())
withRouter(psubs1, func(r PubSubRouter) {
bl.Add(hosts[0].ID())
})
_, err := psubs0.Subscribe("test")
if err != nil {


@ -94,6 +94,8 @@ func (rs *RandomSubRouter) AcceptFrom(peer.ID) AcceptStatus {
return AcceptAll
}
func (rs *RandomSubRouter) Preprocess(from peer.ID, msgs []*Message) {}
func (rs *RandomSubRouter) HandleRPC(rpc *RPC) {}
func (rs *RandomSubRouter) Publish(msg *Message) {
@ -144,18 +146,18 @@ func (rs *RandomSubRouter) Publish(msg *Message) {
out := rpcWithMessages(msg.Message)
for p := range tosend {
mch, ok := rs.p.peers[p]
q, ok := rs.p.peers[p]
if !ok {
continue
}
select {
case mch <- out:
rs.tracer.SendRPC(out, p)
default:
err := q.Push(out, false)
if err != nil {
log.Infof("dropping message to peer %s: queue full", p)
rs.tracer.DropRPC(out, p)
continue
}
rs.tracer.SendRPC(out, p)
}
}

rpc_queue.go (new file)

@ -0,0 +1,147 @@
package pubsub
import (
"context"
"errors"
"sync"
)
var (
ErrQueueCancelled = errors.New("rpc queue operation cancelled")
ErrQueueClosed = errors.New("rpc queue closed")
ErrQueueFull = errors.New("rpc queue full")
ErrQueuePushOnClosed = errors.New("push on closed rpc queue")
)
type priorityQueue struct {
normal []*RPC
priority []*RPC
}
func (q *priorityQueue) Len() int {
return len(q.normal) + len(q.priority)
}
func (q *priorityQueue) NormalPush(rpc *RPC) {
q.normal = append(q.normal, rpc)
}
func (q *priorityQueue) PriorityPush(rpc *RPC) {
q.priority = append(q.priority, rpc)
}
func (q *priorityQueue) Pop() *RPC {
var rpc *RPC
if len(q.priority) > 0 {
rpc = q.priority[0]
q.priority[0] = nil
q.priority = q.priority[1:]
} else if len(q.normal) > 0 {
rpc = q.normal[0]
q.normal[0] = nil
q.normal = q.normal[1:]
}
return rpc
}
type rpcQueue struct {
dataAvailable sync.Cond
spaceAvailable sync.Cond
// Mutex used to access queue
queueMu sync.Mutex
queue priorityQueue
closed bool
maxSize int
}
func newRpcQueue(maxSize int) *rpcQueue {
q := &rpcQueue{maxSize: maxSize}
q.dataAvailable.L = &q.queueMu
q.spaceAvailable.L = &q.queueMu
return q
}
func (q *rpcQueue) Push(rpc *RPC, block bool) error {
return q.push(rpc, false, block)
}
func (q *rpcQueue) UrgentPush(rpc *RPC, block bool) error {
return q.push(rpc, true, block)
}
func (q *rpcQueue) push(rpc *RPC, urgent bool, block bool) error {
q.queueMu.Lock()
defer q.queueMu.Unlock()
if q.closed {
panic(ErrQueuePushOnClosed)
}
for q.queue.Len() == q.maxSize {
if block {
q.spaceAvailable.Wait()
// It can receive a signal because the queue is closed.
if q.closed {
panic(ErrQueuePushOnClosed)
}
} else {
return ErrQueueFull
}
}
if urgent {
q.queue.PriorityPush(rpc)
} else {
q.queue.NormalPush(rpc)
}
q.dataAvailable.Signal()
return nil
}
// Note that, when the queue is empty and there are two blocked Pop calls, it
// doesn't mean that the first Pop will get the item from the next Push. The
// second Pop will probably get it instead.
func (q *rpcQueue) Pop(ctx context.Context) (*RPC, error) {
q.queueMu.Lock()
defer q.queueMu.Unlock()
if q.closed {
return nil, ErrQueueClosed
}
unregisterAfterFunc := context.AfterFunc(ctx, func() {
// Wake up all the waiting routines. The only routine that corresponds
// to this Pop call will return from the function. Note that this can
// be expensive if there are too many waiting routines.
q.dataAvailable.Broadcast()
})
defer unregisterAfterFunc()
for q.queue.Len() == 0 {
select {
case <-ctx.Done():
return nil, ErrQueueCancelled
default:
}
q.dataAvailable.Wait()
// It can receive a signal because the queue is closed.
if q.closed {
return nil, ErrQueueClosed
}
}
rpc := q.queue.Pop()
q.spaceAvailable.Signal()
return rpc, nil
}
func (q *rpcQueue) Close() {
q.queueMu.Lock()
defer q.queueMu.Unlock()
q.closed = true
q.dataAvailable.Broadcast()
q.spaceAvailable.Broadcast()
}
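For context, a sketch (not part of the new file) of the producer/consumer pattern this queue supports: non-blocking Push from the event loop, blocking Pop in the per-peer writer goroutine:

// drainQueue blocks on Pop and exits when the queue is closed or the
// context is cancelled.
func drainQueue(ctx context.Context, q *rpcQueue, send func(*RPC)) {
	for {
		rpc, err := q.Pop(ctx)
		if err != nil {
			// ErrQueueClosed or ErrQueueCancelled
			return
		}
		send(rpc)
	}
}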

rpc_queue_test.go (new file)

@ -0,0 +1,229 @@
package pubsub
import (
"context"
"testing"
"time"
)
func TestNewRpcQueue(t *testing.T) {
maxSize := 32
q := newRpcQueue(maxSize)
if q.maxSize != maxSize {
t.Fatalf("rpc queue has wrong max size, expected %d but got %d", maxSize, q.maxSize)
}
if q.dataAvailable.L != &q.queueMu {
t.Fatalf("the dataAvailable field of rpc queue has an incorrect mutex")
}
if q.spaceAvailable.L != &q.queueMu {
t.Fatalf("the spaceAvailable field of rpc queue has an incorrect mutex")
}
}
func TestRpcQueueUrgentPush(t *testing.T) {
maxSize := 32
q := newRpcQueue(maxSize)
rpc1 := &RPC{}
rpc2 := &RPC{}
rpc3 := &RPC{}
rpc4 := &RPC{}
q.Push(rpc1, true)
q.UrgentPush(rpc2, true)
q.Push(rpc3, true)
q.UrgentPush(rpc4, true)
pop1, err := q.Pop(context.Background())
if err != nil {
t.Fatal(err)
}
pop2, err := q.Pop(context.Background())
if err != nil {
t.Fatal(err)
}
pop3, err := q.Pop(context.Background())
if err != nil {
t.Fatal(err)
}
pop4, err := q.Pop(context.Background())
if err != nil {
t.Fatal(err)
}
if pop1 != rpc2 {
t.Fatalf("get wrong item from rpc queue Pop")
}
if pop2 != rpc4 {
t.Fatalf("get wrong item from rpc queue Pop")
}
if pop3 != rpc1 {
t.Fatalf("get wrong item from rpc queue Pop")
}
if pop4 != rpc3 {
t.Fatalf("get wrong item from rpc queue Pop")
}
}
func TestRpcQueuePushThenPop(t *testing.T) {
maxSize := 32
q := newRpcQueue(maxSize)
rpc1 := &RPC{}
rpc2 := &RPC{}
q.Push(rpc1, true)
q.Push(rpc2, true)
pop1, err := q.Pop(context.Background())
if err != nil {
t.Fatal(err)
}
pop2, err := q.Pop(context.Background())
if err != nil {
t.Fatal(err)
}
if pop1 != rpc1 {
t.Fatalf("get wrong item from rpc queue Pop")
}
if pop2 != rpc2 {
t.Fatalf("get wrong item from rpc queue Pop")
}
}
func TestRpcQueuePopThenPush(t *testing.T) {
maxSize := 32
q := newRpcQueue(maxSize)
rpc1 := &RPC{}
rpc2 := &RPC{}
go func() {
// Wait to make sure the main goroutine is blocked.
time.Sleep(1 * time.Millisecond)
q.Push(rpc1, true)
q.Push(rpc2, true)
}()
pop1, err := q.Pop(context.Background())
if err != nil {
t.Fatal(err)
}
pop2, err := q.Pop(context.Background())
if err != nil {
t.Fatal(err)
}
if pop1 != rpc1 {
t.Fatalf("got wrong item from rpc queue Pop")
}
if pop2 != rpc2 {
t.Fatalf("got wrong item from rpc queue Pop")
}
}
func TestRpcQueueBlockPushWhenFull(t *testing.T) {
maxSize := 1
q := newRpcQueue(maxSize)
finished := make(chan struct{})
q.Push(&RPC{}, true)
go func() {
q.Push(&RPC{}, true)
finished <- struct{}{}
}()
// Wait to make sure the goroutine is blocked.
time.Sleep(1 * time.Millisecond)
select {
case <-finished:
t.Fatalf("blocking rpc queue Push is not blocked when it is full")
default:
}
}
func TestRpcQueueNonblockPushWhenFull(t *testing.T) {
maxSize := 1
q := newRpcQueue(maxSize)
q.Push(&RPC{}, true)
err := q.Push(&RPC{}, false)
if err != ErrQueueFull {
t.Fatalf("non-blocking rpc queue Push returns wrong error when it is full")
}
}
func TestRpcQueuePushAfterClose(t *testing.T) {
maxSize := 32
q := newRpcQueue(maxSize)
q.Close()
defer func() {
if r := recover(); r == nil {
t.Fatalf("rpc queue Push does not panick after closed")
}
}()
q.Push(&RPC{}, true)
}
func TestRpcQueuePopAfterClose(t *testing.T) {
maxSize := 32
q := newRpcQueue(maxSize)
q.Close()
_, err := q.Pop(context.Background())
if err != ErrQueueClosed {
t.Fatalf("rpc queue Pop returns wrong error after closed")
}
}
func TestRpcQueueCloseWhilePush(t *testing.T) {
maxSize := 1
q := newRpcQueue(maxSize)
q.Push(&RPC{}, true)
defer func() {
if r := recover(); r == nil {
t.Fatalf("rpc queue Push does not panick when it's closed on the fly")
}
}()
go func() {
// Wait to make sure the main goroutine is blocked.
time.Sleep(1 * time.Millisecond)
q.Close()
}()
q.Push(&RPC{}, true)
}
func TestRpcQueueCloseWhilePop(t *testing.T) {
maxSize := 32
q := newRpcQueue(maxSize)
go func() {
// Wait to make sure the main goroutine is blocked.
time.Sleep(1 * time.Millisecond)
q.Close()
}()
_, err := q.Pop(context.Background())
if err != ErrQueueClosed {
t.Fatalf("rpc queue Pop returns wrong error when it's closed on the fly")
}
}
func TestRpcQueuePushWhenFullThenPop(t *testing.T) {
maxSize := 1
q := newRpcQueue(maxSize)
q.Push(&RPC{}, true)
go func() {
// Wait to make sure the main goroutine is blocked.
time.Sleep(1 * time.Millisecond)
q.Pop(context.Background())
}()
q.Push(&RPC{}, true)
}
func TestRpcQueueCancelPop(t *testing.T) {
maxSize := 32
q := newRpcQueue(maxSize)
ctx, cancel := context.WithCancel(context.Background())
go func() {
// Wait to make sure the main goroutine is blocked.
time.Sleep(1 * time.Millisecond)
cancel()
}()
_, err := q.Pop(ctx)
if err != ErrQueueCancelled {
t.Fatalf("rpc queue Pop returns wrong error when it's cancelled")
}
}


@ -18,6 +18,10 @@ type FirstSeenCache struct {
var _ TimeCache = (*FirstSeenCache)(nil)
func newFirstSeenCache(ttl time.Duration) *FirstSeenCache {
return newFirstSeenCacheWithSweepInterval(ttl, backgroundSweepInterval)
}
func newFirstSeenCacheWithSweepInterval(ttl time.Duration, sweepInterval time.Duration) *FirstSeenCache {
tc := &FirstSeenCache{
m: make(map[string]time.Time),
ttl: ttl,
@ -25,7 +29,7 @@ func newFirstSeenCache(ttl time.Duration) *FirstSeenCache {
ctx, done := context.WithCancel(context.Background())
tc.done = done
go background(ctx, &tc.lk, tc.m)
go background(ctx, &tc.lk, tc.m, sweepInterval)
return tc
}


@ -17,9 +17,7 @@ func TestFirstSeenCacheFound(t *testing.T) {
}
func TestFirstSeenCacheExpire(t *testing.T) {
backgroundSweepInterval = time.Second
tc := newFirstSeenCache(time.Second)
tc := newFirstSeenCacheWithSweepInterval(time.Second, time.Second)
for i := 0; i < 10; i++ {
tc.Add(fmt.Sprint(i))
time.Sleep(time.Millisecond * 100)
@ -34,9 +32,7 @@ func TestFirstSeenCacheExpire(t *testing.T) {
}
func TestFirstSeenCacheNotFoundAfterExpire(t *testing.T) {
backgroundSweepInterval = time.Second
tc := newFirstSeenCache(time.Second)
tc := newFirstSeenCacheWithSweepInterval(time.Second, time.Second)
tc.Add(fmt.Sprint(0))
time.Sleep(2 * time.Second)


@ -19,6 +19,10 @@ type LastSeenCache struct {
var _ TimeCache = (*LastSeenCache)(nil)
func newLastSeenCache(ttl time.Duration) *LastSeenCache {
return newLastSeenCacheWithSweepInterval(ttl, backgroundSweepInterval)
}
func newLastSeenCacheWithSweepInterval(ttl time.Duration, sweepInterval time.Duration) *LastSeenCache {
tc := &LastSeenCache{
m: make(map[string]time.Time),
ttl: ttl,
@ -26,7 +30,7 @@ func newLastSeenCache(ttl time.Duration) *LastSeenCache {
ctx, done := context.WithCancel(context.Background())
tc.done = done
go background(ctx, &tc.lk, tc.m)
go background(ctx, &tc.lk, tc.m, sweepInterval)
return tc
}


@ -17,8 +17,7 @@ func TestLastSeenCacheFound(t *testing.T) {
}
func TestLastSeenCacheExpire(t *testing.T) {
backgroundSweepInterval = time.Second
tc := newLastSeenCache(time.Second)
tc := newLastSeenCacheWithSweepInterval(time.Second, time.Second)
for i := 0; i < 11; i++ {
tc.Add(fmt.Sprint(i))
time.Sleep(time.Millisecond * 100)
@ -80,9 +79,7 @@ func TestLastSeenCacheSlideForward(t *testing.T) {
}
func TestLastSeenCacheNotFoundAfterExpire(t *testing.T) {
backgroundSweepInterval = time.Second
tc := newLastSeenCache(time.Second)
tc := newLastSeenCacheWithSweepInterval(time.Second, time.Second)
tc.Add(fmt.Sprint(0))
time.Sleep(2 * time.Second)


@ -6,10 +6,10 @@ import (
"time"
)
var backgroundSweepInterval = time.Minute
const backgroundSweepInterval = time.Minute
func background(ctx context.Context, lk sync.Locker, m map[string]time.Time) {
ticker := time.NewTicker(backgroundSweepInterval)
func background(ctx context.Context, lk sync.Locker, m map[string]time.Time, tickerDur time.Duration) {
ticker := time.NewTicker(tickerDur)
defer ticker.Stop()
for {
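The hunk breaks off at the top of this loop. For context, the remainder of the sweeper plausibly looks like the sketch below; only the injected tickerDur comes from the diff, and the sweep body (deleting entries whose expiry time has passed) is an assumption:

	select {
	case now := <-ticker.C:
		// Assumed sweep: drop entries whose expiry time has passed.
		lk.Lock()
		for k, expiry := range m {
			if expiry.Before(now) {
				delete(m, k)
			}
		}
		lk.Unlock()
	case <-ctx.Done():
		return
	}

Injecting the interval lets the timecache tests use a one-second sweep without mutating package state; mutating the old var backgroundSweepInterval from tests was itself a data race, and the variable can now be a const.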


@ -213,19 +213,59 @@ type RouterReady func(rt PubSubRouter, topic string) (bool, error)
type ProvideKey func() (crypto.PrivKey, peer.ID)
type PublishOptions struct {
ready RouterReady
customKey ProvideKey
local bool
ready RouterReady
customKey ProvideKey
local bool
validatorData any
}
type BatchPublishOptions struct {
Strategy RPCScheduler
}
type PubOpt func(pub *PublishOptions) error
type BatchPubOpt func(pub *BatchPublishOptions) error
func setDefaultBatchPublishOptions(opts *BatchPublishOptions) {
if opts.Strategy == nil {
opts.Strategy = &RoundRobinMessageIDScheduler{}
}
}
// Publish publishes data to topic.
func (t *Topic) Publish(ctx context.Context, data []byte, opts ...PubOpt) error {
msg, err := t.validate(ctx, data, opts...)
if err != nil {
if errors.Is(err, dupeErr{}) {
// If it was a duplicate, we return nil to indicate success.
// Semantically the message was published by us or someone else.
return nil
}
return err
}
return t.p.val.sendMsgBlocking(msg)
}
func (t *Topic) AddToBatch(ctx context.Context, batch *MessageBatch, data []byte, opts ...PubOpt) error {
msg, err := t.validate(ctx, data, opts...)
if err != nil {
if errors.Is(err, dupeErr{}) {
// If it was a duplicate, we return nil to indicate success.
// Semantically the message was published by us or someone else.
// We won't add it to the batch, since it has already been published.
return nil
}
return err
}
batch.add(msg)
return nil
}
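AddToBatch only validates and stages a message; nothing is sent until the batch is flushed by the batch-publish entry point, which lives outside this hunk. A staging sketch under stated assumptions (that MessageBatch's zero value is usable and that payloads arrive as raw bytes):

package example

import (
	"context"

	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

// stageBatch validates and stages a group of related payloads on one topic.
// Assumption: MessageBatch's zero value is ready for staging.
func stageBatch(ctx context.Context, topic *pubsub.Topic, payloads [][]byte) (*pubsub.MessageBatch, error) {
	var batch pubsub.MessageBatch
	for _, data := range payloads {
		// A duplicate surfaces internally as dupeErr and returns nil here,
		// so an already-published payload is skipped, not staged twice.
		if err := topic.AddToBatch(ctx, &batch, data); err != nil {
			return nil, err
		}
	}
	return &batch, nil
}

Once staged, the configured RPCScheduler (the round-robin default from setDefaultBatchPublishOptions above) decides the order in which the batched messages go out to peers.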
func (t *Topic) validate(ctx context.Context, data []byte, opts ...PubOpt) (*Message, error) {
t.mux.RLock()
defer t.mux.RUnlock()
if t.closed {
return ErrTopicClosed
return nil, ErrTopicClosed
}
pid := t.p.signID
@ -235,17 +275,17 @@ func (t *Topic) Publish(ctx context.Context, data []byte, opts ...PubOpt) error
for _, opt := range opts {
err := opt(pub)
if err != nil {
return err
return nil, err
}
}
if pub.customKey != nil && !pub.local {
key, pid = pub.customKey()
if key == nil {
return ErrNilSignKey
return nil, ErrNilSignKey
}
if len(pid) == 0 {
return ErrEmptyPeerID
return nil, ErrEmptyPeerID
}
}
@ -263,7 +303,7 @@ func (t *Topic) Publish(ctx context.Context, data []byte, opts ...PubOpt) error
m.From = []byte(pid)
err := signMessage(pid, key, m)
if err != nil {
return err
return nil, err
}
}
@ -290,9 +330,9 @@ func (t *Topic) Publish(ctx context.Context, data []byte, opts ...PubOpt) error
break readyLoop
}
case <-t.p.ctx.Done():
return t.p.ctx.Err()
return nil, t.p.ctx.Err()
case <-ctx.Done():
return ctx.Err()
return nil, ctx.Err()
}
if ticker == nil {
ticker = time.NewTicker(200 * time.Millisecond)
@ -302,13 +342,27 @@ func (t *Topic) Publish(ctx context.Context, data []byte, opts ...PubOpt) error
select {
case <-ticker.C:
case <-ctx.Done():
return fmt.Errorf("router is not ready: %w", ctx.Err())
return nil, fmt.Errorf("router is not ready: %w", ctx.Err())
}
}
}
}
return t.p.val.PushLocal(&Message{m, "", t.p.host.ID(), nil, pub.local})
msg := &Message{m, "", t.p.host.ID(), pub.validatorData, pub.local}
select {
case t.p.eval <- func() {
t.p.rt.Preprocess(t.p.host.ID(), []*Message{msg})
}:
case <-t.p.ctx.Done():
return nil, t.p.ctx.Err()
case <-ctx.Done():
return nil, ctx.Err()
}
err := t.p.val.ValidateLocal(msg)
if err != nil {
return nil, err
}
return msg, nil
}
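The select on t.p.eval above uses the pubsub event-loop pattern: rather than locking router state, callers submit a closure that the single processing goroutine executes, which is what makes the Preprocess call race-free. A generic sketch of the pattern (names illustrative, not the library's types):

package eventloop

import "context"

// loop owns some state; only the goroutine running run() touches it.
type loop struct {
	eval  chan func()
	state map[string]int
}

func (l *loop) run(ctx context.Context) {
	for {
		select {
		case fn := <-l.eval:
			fn() // executes with exclusive access to l.state
		case <-ctx.Done():
			return
		}
	}
}

// submit hands fn to the loop, giving up if either the caller's ctx or the
// loop's own lifetime ends first (mirroring the two Done cases above).
func (l *loop) submit(ctx, loopCtx context.Context, fn func()) error {
	select {
	case l.eval <- fn:
		return nil
	case <-loopCtx.Done():
		return loopCtx.Err()
	case <-ctx.Done():
		return ctx.Err()
	}
}

Selecting on both Done channels prevents a publisher from blocking forever when the pubsub instance shuts down before the closure is accepted.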
// WithReadiness returns a publishing option for only publishing when the router is ready.
@ -332,6 +386,15 @@ func WithLocalPublication(local bool) PubOpt {
}
}
// WithValidatorData returns a publishing option to set custom validator data for the message.
// This allows users to avoid deserialization of the message data when validating the message locally.
func WithValidatorData(data any) PubOpt {
return func(pub *PublishOptions) error {
pub.validatorData = data
return nil
}
}
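A usage sketch for WithValidatorData: the publisher attaches the already-decoded payload, and a topic validator type-asserts msg.ValidatorData instead of unmarshalling msg.Data a second time. The orderPayload type and the "orders" topic are illustrative; remote messages arrive without ValidatorData, so a decode fallback remains:

package example

import (
	"context"

	pubsub "github.com/libp2p/go-libp2p-pubsub"
	"github.com/libp2p/go-libp2p/core/peer"
)

type orderPayload struct {
	ID     string
	Amount int
}

func registerOrderValidator(ps *pubsub.PubSub) error {
	return ps.RegisterTopicValidator("orders",
		func(ctx context.Context, from peer.ID, msg *pubsub.Message) bool {
			if p, ok := msg.ValidatorData.(orderPayload); ok {
				// Local publish: payload is already decoded, no unmarshal.
				return p.Amount > 0
			}
			// Remote message: fall back to decoding msg.Data here.
			return len(msg.Data) > 0 // placeholder for a real decode+check
		})
}

func publishOrder(ctx context.Context, topic *pubsub.Topic, p orderPayload, raw []byte) error {
	return topic.Publish(ctx, raw, pubsub.WithValidatorData(p))
}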
// WithSecretKeyAndPeerId returns a publishing option for providing a custom private key and its corresponding peer ID
// This option is useful when we want to send messages from "virtual", never-connectable peers in the network
func WithSecretKeyAndPeerId(key crypto.PrivKey, pid peer.ID) PubOpt {


@ -951,6 +951,23 @@ func TestTopicPublishWithKeyInvalidParameters(t *testing.T) {
})
}
func TestTopicPublishWithContextCanceled(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
const topic = "foobar"
const numHosts = 5
hosts := getDefaultHosts(t, numHosts)
topics := getTopics(getPubsubs(ctx, hosts), topic)
cancel()
err := topics[0].Publish(ctx, []byte("buff"))
if err != context.Canceled {
t.Fatal("error should have been of type context.Canceled", err)
}
}
func TestTopicRelayPublishWithKey(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()


@ -402,11 +402,23 @@ func (t *pubsubTracer) traceRPCMeta(rpc *RPC) *pb.TraceEvent_RPCMeta {
})
}
var idontwant []*pb.TraceEvent_ControlIDontWantMeta
for _, ctl := range rpc.Control.Idontwant {
var mids [][]byte
for _, mid := range ctl.MessageIDs {
mids = append(mids, []byte(mid))
}
idontwant = append(idontwant, &pb.TraceEvent_ControlIDontWantMeta{
MessageIDs: mids,
})
}
rpcMeta.Control = &pb.TraceEvent_ControlMeta{
Ihave: ihave,
Iwant: iwant,
Graft: graft,
Prune: prune,
Ihave: ihave,
Iwant: iwant,
Graft: graft,
Prune: prune,
Idontwant: idontwant,
}
}


@ -26,6 +26,12 @@ func (e ValidationError) Error() string {
return e.Reason
}
type dupeErr struct{}
func (dupeErr) Error() string {
return "duplicate message"
}
// Validator is a function that validates a message with a binary decision: accept or reject.
type Validator func(context.Context, peer.ID, *Message) bool
@ -226,10 +232,9 @@ func (v *validation) RemoveValidator(req *rmValReq) {
}
}
// PushLocal synchronously pushes a locally published message and performs applicable
// validations.
// Returns an error if validation fails
func (v *validation) PushLocal(msg *Message) error {
// ValidateLocal synchronously validates a locally published message and
// performs applicable validations. Returns an error if validation fails.
func (v *validation) ValidateLocal(msg *Message) error {
v.p.tracer.PublishMessage(msg)
err := v.p.checkSigningPolicy(msg)
@ -238,7 +243,9 @@ func (v *validation) PushLocal(msg *Message) error {
}
vals := v.getValidators(msg)
return v.validate(vals, msg.ReceivedFrom, msg, true)
return v.validate(vals, msg.ReceivedFrom, msg, true, func(msg *Message) error {
return nil
})
}
// Push pushes a message into the validation pipeline.
@ -282,15 +289,26 @@ func (v *validation) validateWorker() {
for {
select {
case req := <-v.validateQ:
v.validate(req.vals, req.src, req.msg, false)
_ = v.validate(req.vals, req.src, req.msg, false, v.sendMsgBlocking)
case <-v.p.ctx.Done():
return
}
}
}
// validate performs validation and only sends the message if all validators succeed
func (v *validation) validate(vals []*validatorImpl, src peer.ID, msg *Message, synchronous bool) error {
func (v *validation) sendMsgBlocking(msg *Message) error {
select {
case v.p.sendMsg <- msg:
return nil
case <-v.p.ctx.Done():
return v.p.ctx.Err()
}
}
// validate performs validation and only calls onValid if all validators succeed.
// If synchronous is true, onValid will be called before this function returns
// if the message is new and accepted.
func (v *validation) validate(vals []*validatorImpl, src peer.ID, msg *Message, synchronous bool, onValid func(*Message) error) error {
// If signature verification is enabled, but signing is disabled,
// the Signature is required to be nil upon receiving the message in PubSub.pushMsg.
if msg.Signature != nil {
@ -306,7 +324,7 @@ func (v *validation) validate(vals []*validatorImpl, src peer.ID, msg *Message,
id := v.p.idGen.ID(msg)
if !v.p.markSeen(id) {
v.tracer.DuplicateMessage(msg)
return nil
return dupeErr{}
} else {
v.tracer.ValidateMessage(msg)
}
@ -345,7 +363,7 @@ loop:
select {
case v.validateThrottle <- struct{}{}:
go func() {
v.doValidateTopic(async, src, msg, result)
v.doValidateTopic(async, src, msg, result, onValid)
<-v.validateThrottle
}()
default:
@ -360,13 +378,8 @@ loop:
return ValidationError{Reason: RejectValidationIgnored}
}
// no async validators, accepted message, send it!
select {
case v.p.sendMsg <- msg:
return nil
case <-v.p.ctx.Done():
return v.p.ctx.Err()
}
// no async validators, accepted message
return onValid(msg)
}
func (v *validation) validateSignature(msg *Message) bool {
@ -379,7 +392,7 @@ func (v *validation) validateSignature(msg *Message) bool {
return true
}
func (v *validation) doValidateTopic(vals []*validatorImpl, src peer.ID, msg *Message, r ValidationResult) {
func (v *validation) doValidateTopic(vals []*validatorImpl, src peer.ID, msg *Message, r ValidationResult, onValid func(*Message) error) {
result := v.validateTopic(vals, src, msg)
if result == ValidationAccept && r != ValidationAccept {
@ -388,7 +401,7 @@ func (v *validation) doValidateTopic(vals []*validatorImpl, src peer.ID, msg *Me
switch result {
case ValidationAccept:
v.p.sendMsg <- msg
_ = onValid(msg)
case ValidationReject:
log.Debugf("message validation failed; dropping message from %s", src)
v.tracer.RejectMessage(msg, RejectValidationFailed)


@ -20,10 +20,23 @@ import (
pb "github.com/libp2p/go-libp2p-pubsub/pb"
)
var rng *rand.Rand
var rng *concurrentRNG
type concurrentRNG struct {
mu sync.Mutex
rng *rand.Rand
}
func (r *concurrentRNG) Intn(n int) int {
r.mu.Lock()
defer r.mu.Unlock()
return r.rng.Intn(n)
}
func init() {
rng = rand.New(rand.NewSource(314159))
rng = &concurrentRNG{
rng: rand.New(rand.NewSource(314159)),
}
}
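The wrapper exists because a *rand.Rand created with rand.New is not safe for concurrent use (only the package-level math/rand functions lock internally), and these tests exercise the shared source from multiple goroutines. A hypothetical test showing the access pattern the mutex makes safe; run it with go test -race to confirm:

func TestConcurrentRNGParallelUse(t *testing.T) {
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				_ = rng.Intn(100) // serialized by the wrapper's mutex
			}
		}()
	}
	wg.Wait()
}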
func TestBasicSeqnoValidator1(t *testing.T) {


@ -1,3 +1,3 @@
{
"version": "v0.11.0"
"version": "v0.14.2"
}