24a30d7a7c
This is a bit complicated, so:

1) PeerPool was subscribing to `event.Feed`, the global event emitter for go-ethereum.
2) The p2p.Server was publishing on `event.Feed`, and this triggered, in the same goroutine, another publish on `event.Feed`.
3) PeerPool was listening on `event.Feed`, reacting to the event, and, in the same goroutine, triggering code on p2p.Server that would publish on `event.Feed` again.

This meant that if the channel was unbuffered, it would deadlock: PeerPool would not be consuming while it was publishing (the same goroutine effectively both publishes and listens, through a lot of indirection and unbuffered channels, p2p.Server -> event.Feed).

The channel, though, was buffered with size 10, so most of the time this is fine. The issue is that PeerPool is not the only producer on this channel, so it is possible that while it is processing an event the buffer fills up; it then hangs trying to publish while nobody is listening on the channel, hanging EVERYTHING.

At least that's what I think; it needs to be tested, but it is definitely an issue. I kept the code changes to a minimum: this code is a bit hairy, but it's fairly critical, so I don't want to make too many changes.
README.md
# Peer pool signals
Peer pool sends 3 types of signals.
The discovery started signal is sent once the discovery server is started, and every time the node has to restart the discovery server because the peer count dropped too low.
```json
{
  "type": "discovery.started",
  "event": null
}
```
The discovery stopped signal is sent once discovery has found the maximum number of peers for every registered topic.
```json
{
  "type": "discovery.stopped",
  "event": null
}
```
The discovery summary signal is sent every time a peer is added to or removed from the cluster. Its event is the list of currently connected peers; each entry includes the peer's capabilities (`caps`), from which a map of capability to the total number of peers with that capability can be derived.
```json
{
  "type": "discovery.summary",
  "event": [
    {
      "id": "339c84c816b5f17a622c8d7ab9498f9998e942a274f70794af934bf5d3d02e14db8ddca2170e4edccede29ea6d409b154c141c34c01006e76c95e17672a27454",
      "name": "peer-0/v1.0/darwin/go1.10.1",
      "caps": [
        "shh/6"
      ],
      "network": {
        "localAddress": "127.0.0.1:61049",
        "remoteAddress": "127.0.0.1:33732",
        "inbound": false,
        "trusted": false,
        "static": true
      },
      "protocols": {
        "shh": "unknown"
      }
    }
  ]
}
```
Or, if there are no peers:
```json
{
  "type": "discovery.summary",
  "event": []
}
```