# Gossip Processing
This folder holds a collection of modules to:
- validate raw gossip data before
  - rebroadcasting it (potentially aggregated)
  - sending it to one of the consensus object pools
## Validation
Gossip validation is different from consensus verification, in particular for blocks. The rules for each object type are specified in the consensus specs (a sketch of one such gossip-level check follows this list):
- Blocks: https://github.com/ethereum/consensus-specs/blob/v1.0.1/specs/phase0/p2p-interface.md#beacon_block
- Attestations (aggregated): https://github.com/ethereum/consensus-specs/blob/v1.0.1/specs/phase0/p2p-interface.md#beacon_aggregate_and_proof
- Attestations (single): https://github.com/ethereum/consensus-specs/blob/v1.0.1/specs/phase0/p2p-interface.md#attestation-subnets
- Exits: https://github.com/ethereum/consensus-specs/blob/v1.0.1/specs/phase0/p2p-interface.md#voluntary_exit
- Proposer slashings: https://github.com/ethereum/consensus-specs/blob/v1.0.1/specs/phase0/p2p-interface.md#proposer_slashing
- Attester slashings: https://github.com/ethereum/consensus-specs/blob/v1.0.1/specs/phase0/p2p-interface.md#attester_slashing
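
To illustrate the difference: gossip-level checks are cheap and mostly stateless, unlike full consensus verification of signatures and state transitions. Below is a minimal sketch of the attestation propagation-slot-range check from the phase0 p2p spec, using simplified stand-in types (`Slot`, `ValidationResult`) rather than the actual Nimbus definitions, and omitting details such as clock disparity tolerance:

```nim
# Simplified stand-in types; not the actual Nimbus definitions.
type
  Slot = uint64
  ValidationResult = enum
    Accept, Ignore, Reject

# From the phase0 p2p spec: attestations are only propagated for 32 slots.
const ATTESTATION_PROPAGATION_SLOT_RANGE = 32'u64

proc checkPropagationSlotRange(attestationSlot, currentSlot: Slot): ValidationResult =
  ## A timeliness check: cheap and stateless, unlike full consensus
  ## verification of signatures and state transitions.
  if attestationSlot > currentSlot:
    Ignore    # from a future slot: may become valid later
  elif currentSlot > attestationSlot + ATTESTATION_PROPAGATION_SLOT_RANGE:
    Ignore    # too old to be worth rebroadcasting
  else:
    Accept

when isMainModule:
  echo checkPropagationSlotRange(100, 105)  # Accept
  echo checkPropagationSlotRange(100, 200)  # Ignore
```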
There are 2 consumers of validated consensus objects (see the sketch below):
- a `ValidationResult.Accept` output triggers rebroadcasting in libp2p
  - via the method `validate(PubSub, message)` in libp2p/protocols/pubsub/pubsub.nim, which is called by `rpcHandler(GossipSub, PubSubPeer, RPCMsg)`
- a `xyzValidator` message enqueues the validated object in one of the processing queues in eth2_processor:
  - `blocksQueue: AsyncQueue[BlockEntry]` (shared with request_manager and sync_manager)
  - `attestationsQueue: AsyncQueue[AttestationEntry]`
  - `aggregatesQueue: AsyncQueue[AggregateEntry]`
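
A minimal sketch of this split, assuming chronos for `AsyncQueue` and using a hypothetical `blockValidator` and `BlockEntry` rather than the actual eth2_processor code:

```nim
import chronos

type
  ValidationResult = enum  # simplified stand-in for libp2p's result type
    Accept, Ignore, Reject
  BlockEntry = object      # hypothetical entry type
    data: seq[byte]

let blocksQueue = newAsyncQueue[BlockEntry](maxsize = 16)

proc blockValidator(msg: seq[byte]): ValidationResult =
  ## Hypothetical validator: the returned ValidationResult drives libp2p
  ## rebroadcasting, while accepted objects are also enqueued for the
  ## consensus object pools.
  if msg.len == 0:
    return Reject    # malformed data is rejected outright
  try:
    blocksQueue.addLastNoWait(BlockEntry(data: msg))
  except AsyncQueueFullError:
    return Ignore    # overloaded: drop without rebroadcasting
  Accept             # Accept triggers rebroadcast in libp2p
```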
Those queues are then regularly processed to make the validated objects available to the consensus object pools.
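
A sketch of such a processing loop, again with chronos and an illustrative `AttestationEntry` type (the hand-off to the pool is a placeholder):

```nim
import chronos

type AttestationEntry = object  # illustrative; not the real entry type
  slot: uint64

let attestationsQueue = newAsyncQueue[AttestationEntry](maxsize = 128)

proc processAttestations() {.async.} =
  ## Drains the queue; the echo stands in for handing the object to the
  ## attestation pool / fork choice.
  while true:
    let entry = await attestationsQueue.popFirst()  # suspends until work arrives
    echo "would hand attestation at slot ", entry.slot, " to the pool"

when isMainModule:
  waitFor attestationsQueue.addLast(AttestationEntry(slot: 42))
  asyncSpawn processAttestations()
  waitFor sleepAsync(10.milliseconds)  # let the loop run briefly
```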
## Security concerns
As the first line of defense in Nimbus, these modules must be able to handle bursts of data that may come (one mitigation is sketched below):
- from malicious nodes trying to DOS us
- from long periods of non-finality, creating lots of forks and attestations
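
One burst-handling tactic is to bound the queues and shed load instead of growing without limit. A minimal sketch, assuming chronos and an illustrative drop-oldest policy (not the actual Nimbus behaviour):

```nim
import chronos

proc addOrDropOldest[T](q: AsyncQueue[T], item: T) =
  ## Illustrative shedding policy: evict the oldest queued entry when the
  ## bound is hit, so bursts degrade into dropped messages rather than
  ## unbounded memory growth.
  if q.full():
    discard q.popFirstNoWait()
  q.addLastNoWait(item)

when isMainModule:
  let q = newAsyncQueue[int](maxsize = 2)
  for i in 1 .. 5:
    q.addOrDropOldest(i)
  echo q.len  # 2: only the newest entries survive the burst
```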