# Gossip Processing
This folder holds a collection of modules to:

- validate raw gossip data before
  - rebroadcasting it (potentially aggregated)
  - sending it to one of the consensus object pools
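
In outline, the flow looks like this. The sketch below is a minimal illustration with hypothetical stand-in types and proc names (`validateBlock`, `onGossipBlock`), not the actual module APIs: cheap gossip-level validation happens first, and only accepted objects are rebroadcast and handed on.

```nim
type
  ValidationResult = enum
    Accept, Ignore, Reject

  SignedBeaconBlock = object
    ## Stand-in for the real fork-aware SSZ type.
    slot: uint64

proc validateBlock(raw: seq[byte]): (ValidationResult, SignedBeaconBlock) =
  ## Cheap gossip-level checks only; full consensus verification happens
  ## later, in the block processor.
  if raw.len == 0:
    return (Reject, SignedBeaconBlock())
  (Accept, SignedBeaconBlock(slot: 1))

proc onGossipBlock(raw: seq[byte]) =
  let (res, blck) = validateBlock(raw)
  if res == Accept:
    # Accepted objects are rebroadcast by libp2p and handed to a pool.
    echo "rebroadcast + enqueue block at slot ", blck.slot

onGossipBlock(@[1'u8, 2, 3])
```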
## Validation
Gossip validation is different from consensus verification, in particular for blocks.
- Blocks: https://github.com/ethereum/consensus-specs/blob/v1.3.0/specs/phase0/p2p-interface.md#beacon_block
- Attestations (aggregated): https://github.com/ethereum/consensus-specs/blob/v1.4.0-beta.1/specs/phase0/p2p-interface.md#beacon_aggregate_and_proof
- Attestations (unaggregated): https://github.com/ethereum/consensus-specs/blob/v1.3.0/specs/phase0/p2p-interface.md#attestation-subnets
- Voluntary exits: https://github.com/ethereum/consensus-specs/blob/v1.4.0-alpha.3/specs/phase0/p2p-interface.md#voluntary_exit
- Proposer slashings: https://github.com/ethereum/consensus-specs/blob/v1.4.0-alpha.3/specs/phase0/p2p-interface.md#proposer_slashing
- Attester slashings: https://github.com/ethereum/consensus-specs/blob/v1.4.0-alpha.3/specs/phase0/p2p-interface.md#attester_slashing
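
For blocks specifically, the gossip conditions in the spec linked above include checks such as: the slot is not from the future (modulo clock disparity), the block is from a slot later than the latest finalized slot, and it is the first block seen for that proposer and slot. The sketch below illustrates a few of those checks with hypothetical names (`checkBlockGossip`, an in-memory `seenProposals` list); signature, parent and clock-disparity handling are omitted.

```nim
type
  BlockGossipCheck = enum
    Accept, Ignore, Reject

var seenProposals: seq[(uint64, uint64)]   # (slot, proposer_index) pairs already seen

proc checkBlockGossip(slot, proposerIndex, currentSlot,
                      finalizedSlot: uint64): BlockGossipCheck =
  ## A few of the gossip conditions, simplified: no signature, parent or
  ## clock-disparity handling here.
  if slot > currentSlot:                       # block from a future slot
    return Ignore
  if slot <= finalizedSlot:                    # at or before the finalized slot
    return Ignore
  if (slot, proposerIndex) in seenProposals:   # only the first block per proposer/slot
    return Ignore
  seenProposals.add((slot, proposerIndex))
  Accept

echo checkBlockGossip(101, 7, currentSlot = 101, finalizedSlot = 96)   # Accept
echo checkBlockGossip(101, 7, currentSlot = 101, finalizedSlot = 96)   # Ignore (duplicate)
```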
There are multiple consumers of validated consensus objects:

- a `ValidationResult.Accept` output triggers rebroadcasting in libp2p
  - We jump into method `validate(PubSub, Message)` in libp2p/protocols/pubsub/pubsub.nim
  - which was called by `rpcHandler(GossipSub, PubSubPeer, RPCMsg)`
- a `blockValidator` message enqueues the validated object to the processing queue in `block_processor`
  - `blockQueue: AsyncQueue[BlockEntry]` (shared with request_manager and sync_manager)
  - This queue is then regularly processed to be made available to the consensus object pools.
- an `xyzValidator` message adds the validated object to a pool in `eth2_processor`
  - Attestations (unaggregated and aggregated) get collected into batches.
  - Once a threshold is exceeded or after a timeout, they get validated together using `BatchCrypto` (see the sketch after this list).
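
The batching step in the last item can be pictured roughly as follows. This is a hypothetical sketch (`Batch`, `maybeVerify`, made-up threshold and timeout values), not the actual `BatchCrypto` interface: attestations accumulate in a pending list and are verified together once the batch is full or its deadline has passed.

```nim
import std/times

type
  Attestation = object
    ## Stand-in for the real attestation type.
    data: string

  Batch = object
    pending: seq[Attestation]
    deadline: Time

const batchSize = 16                              # hypothetical threshold
let batchTimeout = initDuration(milliseconds = 10)  # hypothetical timeout

proc add(batch: var Batch, att: Attestation) =
  if batch.pending.len == 0:
    batch.deadline = getTime() + batchTimeout
  batch.pending.add(att)

proc maybeVerify(batch: var Batch) =
  ## Verify all pending attestations together once the batch is full or
  ## the timeout has elapsed: one aggregate check instead of N.
  if batch.pending.len >= batchSize or
     (batch.pending.len > 0 and getTime() >= batch.deadline):
    echo "batch-verifying ", batch.pending.len, " signatures"
    batch.pending.setLen(0)

var batch: Batch
for i in 0 ..< 20:
  batch.add(Attestation(data: $i))
  batch.maybeVerify()
```

Verifying a batch with one aggregate check amortizes the expensive signature verification over many attestations.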
## Security concerns

As the first line of defense in Nimbus, these modules must be able to handle bursts of data that may come:

- from malicious nodes trying to DoS us
- from long periods of non-finality, creating lots of forks and attestations
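
One way to picture the required behaviour is to keep intake queues bounded and shed load when a burst overflows them, instead of letting memory grow without limit. The sketch below (`GossipQueue`, `push`) is a hypothetical illustration of that policy, not the node's actual queueing code:

```nim
import std/deques

type
  GossipQueue[T] = object
    ## Bounded intake queue: drops the oldest entry instead of growing
    ## without limit when a burst arrives.
    items: Deque[T]
    maxLen: int
    dropped: int

proc push[T](q: var GossipQueue[T], item: T) =
  if q.items.len >= q.maxLen:
    discard q.items.popFirst()     # shed the oldest entry under load
    inc q.dropped
  q.items.addLast(item)

var q = GossipQueue[string](items: initDeque[string](), maxLen: 4)
for i in 0 ..< 10:                 # simulated burst
  q.push("attestation-" & $i)
echo "kept ", q.items.len, ", dropped ", q.dropped
```

Whether to drop the oldest or the newest entries is a policy choice; the point is only that buffers stay bounded under attack or heavy forking.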