# Gossip Processing
This folder holds a collection of modules to:

- validate raw gossip data before
  - rebroadcasting it (potentially aggregated)
  - sending it to one of the consensus object pools
## Validation

Gossip validation is different from consensus verification, in particular for blocks (a simplified sketch of the distinction follows the links below).
- Blocks: https://github.com/ethereum/consensus-specs/blob/v1.1.8/specs/phase0/p2p-interface.md#beacon_block
- Attestations (aggregated): https://github.com/ethereum/consensus-specs/blob/v1.1.8/specs/phase0/p2p-interface.md#beacon_aggregate_and_proof
- Attestations (unaggregated): https://github.com/ethereum/consensus-specs/blob/v1.1.8/specs/phase0/p2p-interface.md#attestation-subnets
- Voluntary exits: https://github.com/ethereum/consensus-specs/blob/v1.1.8/specs/phase0/p2p-interface.md#voluntary_exit
- Proposer slashings: https://github.com/ethereum/consensus-specs/blob/v1.1.8/specs/phase0/p2p-interface.md#proposer_slashing
- Attester slashings: https://github.com/ethereum/consensus-specs/blob/v1.1.8/specs/phase0/p2p-interface.md#attester_slashing
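For blocks in particular, the gossip rules are cheap, local checks; anything that needs the full state transition is deferred to consensus verification. Below is a minimal, hypothetical sketch of that split; `GossipVerdict` and `checkBlockGossip` are illustrative names, not the actual Nimbus API:

```nim
type
  GossipVerdict = enum
    Accept    # passes the gossip rules: rebroadcast and process further
    Ignore    # not useful right now (duplicate, too old, unknown parent)
    Reject    # violates the gossip rules: penalise the sending peer

proc checkBlockGossip(blockSlot, currentSlot: uint64,
                      seenBefore, parentKnown: bool): GossipVerdict =
  # A few of the spec's beacon_block gossip conditions, heavily simplified:
  if blockSlot > currentSlot: return Ignore   # from the future: may be valid later
  if seenBefore: return Ignore                # duplicate: drop without penalty
  if not parentKnown: return Ignore           # cannot judge yet: request the parent
  # proposer index and proposer signature checks would follow here
  Accept
```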
There are multiple consumers of validated consensus objects (a simplified sketch of the hand-offs follows this list):

- a `ValidationResult.Accept` output triggers rebroadcasting in libp2p
  - We jump into `method validate(PubSub, Message)` in libp2p/protocols/pubsub/pubsub.nim
  - which was called by `rpcHandler(GossipSub, PubSubPeer, RPCMsg)`
- a `blockValidator` message enqueues the validated object to the processing queue in `block_processor`
  - `blockQueue: AsyncQueue[BlockEntry]` (shared with request_manager and sync_manager)
  - This queue is then regularly processed to be made available to the consensus object pools.
- a `xyzValidator` message adds the validated object to a pool in `eth2_processor`
  - Attestations (unaggregated and aggregated) get collected into batches.
  - Once a threshold is exceeded or after a timeout, they get validated together using `BatchCrypto`.
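A minimal sketch of the two hand-off patterns above, with assumed stand-in types (`BlockEntry`, `AttestationEntry`) and a plain deque in place of chronos' `AsyncQueue`; it is illustrative only, not the actual Nimbus code:

```nim
import std/deques

const batchThreshold = 64      # assumed value, for illustration only

type
  BlockEntry = object          # stand-in: the real entry carries a signed block
    slot: uint64
  AttestationEntry = object    # stand-in: the real entry carries an attestation
    slot: uint64

var
  blockQueue = initDeque[BlockEntry]()        # real code: AsyncQueue[BlockEntry]
  pendingAttestations: seq[AttestationEntry]  # current batch

proc blockValidator(entry: BlockEntry) =
  # After gossip validation, a block is only queued here; consensus verification
  # happens later when the queue (shared with request_manager and sync_manager)
  # is drained into the consensus object pools.
  blockQueue.addLast(entry)

proc attestationValidator(entry: AttestationEntry) =
  # Attestations are collected so their signatures can be verified in one batch;
  # the real code also flushes the batch after a timeout.
  pendingAttestations.add(entry)
  if pendingAttestations.len >= batchThreshold:
    # batch-verify all signatures here, then move the attestations into the pool
    pendingAttestations.setLen(0)
```

In the actual modules both structures are serviced from the async event loop, which is why the real queue is an `AsyncQueue` rather than a plain deque.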
## Security concerns

As the first line of defense in Nimbus, these modules must be able to handle bursts of data that may come:

- from malicious nodes trying to DoS us
- from long periods of non-finality, creating lots of forks and attestations