nimbus-eth2/beacon_chain/gossip_processing
Gossip Processing

This folder holds a collection of modules to:

  • validate raw gossip data before
    • rebroadcasting it (potentially aggregated)
    • sending it to one of the consensus object pools

Validation

Gossip validation is distinct from consensus verification, in particular for blocks.

There are 2 consumers of validated consensus objects:

  • a ValidationResult.Accept output triggers rebroadcasting in libp2p
    • via method validate(PubSub, message) in libp2p/protocols/pubsub/pubsub.nim,
    • which is called by rpcHandler(GossipSub, PubSubPeer, RPCMsg)
  • an xyzValidator enqueues the validated object in one of the processing queues in eth2_processor (see the sketch after this list)
    • blocksQueue: AsyncQueue[BlockEntry] (shared with request_manager and sync_manager)
    • attestationsQueue: AsyncQueue[AttestationEntry]
    • aggregatesQueue: AsyncQueue[AggregateEntry]
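To make the pattern concrete, here is a minimal sketch (the type names and checks are illustrative stand-ins, not the actual nimbus-eth2 definitions): a gossip validator performs cheap checks on the raw object, returns a verdict that libp2p uses to decide on rebroadcasting, and on acceptance pushes the object onto a processing queue.

```nim
# Sketch only - simplified stand-ins for the real nimbus-eth2 types.
import chronos

type
  ValidationVerdict = enum        # stands in for libp2p's ValidationResult
    Accept, Ignore, Reject

  AttestationEntry = object       # the real entry carries far more context
    slot: uint64
    data: seq[byte]

proc attestationValidator(
    queue: AsyncQueue[AttestationEntry],
    entry: AttestationEntry): Future[ValidationVerdict] {.async.} =
  ## Gossip-level checks only; full consensus verification happens later,
  ## when the queue is drained.
  if entry.data.len == 0:
    return Reject                 # malformed: never rebroadcast
  # ... slot / signature / committee checks would go here ...
  await queue.addLast(entry)      # hand off to the processing stage
  return Accept                   # libp2p may now rebroadcast the message
```

The key property is that only cheap checks run at this stage; anything expensive is deferred to the consumer of the queue.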

These queues are then processed at regular intervals and the objects made available to the consensus object pools.
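The draining side could look roughly like this (again a hedged sketch with its own simplified types; in nimbus-eth2 this work is done by the processing procs in eth2_processor):

```nim
# Sketch only - a long-running task drains a processing queue and hands
# each entry to the consensus object pool.
import chronos

type AttestationEntry = object    # simplified stand-in
  data: seq[byte]

proc addToPool(entry: AttestationEntry) =
  discard                         # hypothetical placeholder for pool insertion

proc processAttestations(queue: AsyncQueue[AttestationEntry]) {.async.} =
  while true:
    let entry = await queue.popFirst()  # suspends until an entry arrives
    addToPool(entry)
```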

Security concerns

As the first line of defense in Nimbus, these modules must be able to handle bursts of data that may come:

  • from malicious nodes trying to DOS us
  • from long periods of non-finality, creating lots of forks and attestations
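One generic way to keep such bursts bounded (a sketch of the technique, not a description of the limits nimbus-eth2 actually applies) is to cap the processing queues and shed work when they are full:

```nim
# Sketch only - a bounded queue sheds excess work instead of growing
# without limit under a gossip burst. The capacity of 512 is arbitrary.
import chronos

type AggregateEntry = object      # simplified stand-in
  data: seq[byte]

let aggregatesQueue = newAsyncQueue[AggregateEntry](maxsize = 512)

proc tryEnqueue(entry: AggregateEntry): bool =
  ## Returns false (and drops the entry) when the queue is saturated,
  ## so a flood of gossip cannot exhaust memory.
  try:
    aggregatesQueue.addLastNoWait(entry)
    result = true
  except AsyncQueueFullError:
    result = false
```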