nimbus-eth2/beacon_chain/gossip_processing
Jacek Sieka abe0d7b4ae single validator key cache
Instead of keeping a validator key list per EpochRef, this PR introduces
a single shared validator key list in ChainDAG, and cleans up some other
ChainDAG and key-related issues.

The PR does not introduce the validator key list in the state transition
- this is because we batch-check all signatures before entering the spec
code, thus the spec code never hits the cache.
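
A hedged sketch of that split, with stub procs standing in for the real batch verification and state transition (only skipBlsValidation echoes an actual nimbus-eth2 name; everything else is illustrative):

```nim
type
  SignatureSet = object   # stand-in for a (pubkey, message, signature) triple
  UpdateFlag = enum
    skipBlsValidation     # named after nimbus-eth2's update flag

proc batchVerify(sets: seq[SignatureSet]): bool =
  ## Stub: verify all collected signatures in one batched BLS check.
  true

proc applyBlock(flags: set[UpdateFlag]) =
  ## Stub: the spec state transition; with skipBlsValidation set it
  ## never consults the validator key cache.
  doAssert skipBlsValidation in flags

# Signatures are checked once, up front; the spec code then runs with
# BLS validation disabled.
if batchVerify(@[]):
  applyBlock({skipBlsValidation})
```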

A future refactor should _probably_ remove the threadvar altogether.

There are a few other small fixes in here that make the flow easier to
read:

* fix `var ChainDAGRef` -> `ChainDAGRef`
* fix `var QuarantineRef` -> `QuarantineRef`
* consistent `dag` variable name
* avoid using threadvar pubkey cache in most cases
* better error messages in batch signature checking
2021-06-01 20:43:44 +03:00
README.md Reorg (5/5) (#2377) 2021-03-05 14:12:00 +01:00
batch_validation.nim single validator key cache 2021-06-01 20:43:44 +03:00
block_processor.nim single validator key cache 2021-06-01 20:43:44 +03:00
consensus_manager.nim single validator key cache 2021-06-01 20:43:44 +03:00
eth2_processor.nim single validator key cache 2021-06-01 20:43:44 +03:00
gossip_validation.nim single validator key cache 2021-06-01 20:43:44 +03:00

README.md

Gossip Processing

This folder holds a collection of modules to (see the sketch below):

  • validate raw gossip data before
    • rebroadcasting it (potentially aggregated)
    • sending it to one of the consensus object pools
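
A hedged sketch of that flow, assuming chronos and illustrative names throughout (the real validators take concrete SSZ types and run the full set of gossip checks):

```nim
import chronos

type
  ValidationResult {.pure.} = enum   # mirrors libp2p's ValidationResult
    Accept, Reject, Ignore
  Attestation = object               # placeholder for the real SSZ type

proc attestationValidator(
    queue: AsyncQueue[Attestation],
    attestation: Attestation): Future[ValidationResult] {.async.} =
  ## Illustrative gossip validator: run the cheap gossip checks, then
  ## hand the object to a processing queue and let libp2p rebroadcast.
  # (slot-window, signature and duplicate checks would go here)
  await queue.addLast(attestation)
  return ValidationResult.Accept
```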

Validation

Gossip validation is distinct from consensus verification; the difference matters in particular for blocks.

There are 2 consumers of validated consensus objects:

  • a ValidationResult.Accept output triggers rebroadcasting in libp2p
    • via method validate(PubSub, message) in libp2p/protocols/pubsub/pubsub.nim,
      which is called by rpcHandler(GossipSub, PubSubPeer, RPCMsg)
  • an xyzValidator proc enqueues the validated object in one of the processing queues in eth2_processor (sketched after this list):
    • blocksQueue: AsyncQueue[BlockEntry] (shared with request_manager and sync_manager)
    • attestationsQueue: AsyncQueue[AttestationEntry]
    • aggregatesQueue: AsyncQueue[AggregateEntry]
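
A hedged sketch of those queues, assuming chronos and placeholder entry types (the real BlockEntry and friends carry more metadata):

```nim
import chronos

type
  BlockEntry = object        # placeholders: the real entry types carry
  AttestationEntry = object  # the SSZ object plus validation metadata
  AggregateEntry = object

var
  blocksQueue = newAsyncQueue[BlockEntry]()             # shared with sync
  attestationsQueue = newAsyncQueue[AttestationEntry]()
  aggregatesQueue = newAsyncQueue[AggregateEntry]()
```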

Those queues are then processed at regular intervals, making the validated objects available to the consensus object pools.
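
Continuing the sketch above, the draining side might look like this, with processAttestation as a stand-in for the hand-off to the pool:

```nim
proc processAttestation(entry: AttestationEntry) =
  discard  # stand-in: add the entry to the attestation pool

proc runProcessingLoop() {.async.} =
  ## Suspends until an entry is available, then forwards it to the pool.
  while true:
    let entry = await attestationsQueue.popFirst()
    processAttestation(entry)
```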

Security concerns

As the first line of defense in Nimbus, these modules must be able to handle bursts of data (a mitigation is sketched below) that may come:

  • from malicious nodes trying to DoS us
  • from long periods of non-finality, which produce a large number of forks and attestations
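
One way to keep such bursts bounded, assuming chronos queues as in the sketches above, is to cap the queue and shed load when it fills:

```nim
import chronos

type AttestationEntry = object   # placeholder, as in the sketches above

var boundedQueue = newAsyncQueue[AttestationEntry](maxsize = 4096)

proc tryEnqueue(entry: AttestationEntry): bool =
  ## Sheds load under a burst instead of growing without bound.
  try:
    boundedQueue.addLastNoWait(entry)   # raises when the queue is full
    return true
  except AsyncQueueFullError:
    return false                        # drop the entry
```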