4207b127f9
* era: load blocks and states

Era files contain finalized history and can be thought of as an alternative source for block and state data that allows clients to avoid syncing this information from the P2P network - the P2P network is then used to "top up" the client with the most recent data. They can be freely shared in the community via whatever means (http, torrent, etc) and serve as a permanent cold store of consensus data (and, after the merge, execution data) for history buffs and bean counters alike.

This PR gently introduces support for loading blocks and states in two cases: block requests from rest/p2p and frontfilling when doing checkpoint sync.

The era files are used as a secondary source if the information is not found in the database - compared to the database, there are a few key differences:

* the database stores blocks indexed by block root while the era file indexes by slot - the former is used only by rest, while the latter is used by both p2p and rest
* when loading blocks from era files, the root is no longer trivially available - if it is needed, it must either be computed (slow) or cached (messy) - the good news is that for p2p requests, it is not needed
* era files use "framed" snappy encoding while the database stores unframed snappy - for p2p requests, the latter requires recompression while the former could avoid it
* front-filling is the process of using era files to replace backfilling - in theory, front-filling could happen from any block and front-fills with gaps could also be entertained, but our backfilling algorithm cannot take advantage of this because there is no (simple) way to tell it to "skip" a range
* front-filling, as implemented, is a bit slow (10s to load mainnet): we load the full BeaconState for every era to grab the roots of the blocks - it would be better to partially load the state, and for that, it would also be good to be able to partially decompress snappy blobs
* lookups from REST via root are served by first looking up a block summary in the database, then using the slot to load the block data from the era file - however, there needs to be an option to create the summary table from era files to fully support historical queries

To test this, `ncli_db` has an era file exporter: the files it creates should be placed in an `era` folder next to `db` in the data directory. What's interesting in particular about this setup is that `db` remains the source of truth for security purposes - it stores the latest synced head root, which in turn determines where a node "starts" its consensus participation - the `era` directory however can be freely shared between nodes / people without any (significant) security implications, assuming the era files are consistent / not broken.
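As a minimal sketch of the slot-based indexing (in Nim; all names here are illustrative rather than the actual nimbus-eth2 API, and the naming convention `<config>-<era-number>-<short-root>.era` with mainnet's 8192 slots per era is assumed):

```nim
import std/strformat

const
  SlotsPerHistoricalRoot = 8192'u64 # mainnet: one era spans 8192 slots

type
  Slot = uint64
  Era = uint64

func era(slot: Slot): Era =
  # which era a slot belongs to; the exact boundary handling (the state
  # stored at the era transition) is glossed over in this sketch
  slot div SlotsPerHistoricalRoot

func eraFileName(cfgName: string, era: Era, shortRoot: string): string =
  # e.g. "mainnet-00366-40cf2f3c.era" - era number zero-padded to 5 digits,
  # short root derived from the historical root of the era
  &"{cfgName}-{era:05}-{shortRoot}.era"

when isMainModule:
  let slot = Slot(3_000_000)
  # a slot maps directly to an era file - no root-to-slot translation is
  # needed, unlike with the root-indexed database
  echo eraFileName("mainnet", era(slot), "40cf2f3c")
```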
There are lots of future improvements to be had:

* we can drop the in-memory `BlockRef` index almost entirely - at this point, resident memory usage of Nimbus should drop to a cool 500-600 mb
* we could serve era files via REST trivially: this would drop backfill times to whatever time it takes to download the files - unlike the current implementation, which downloads block by block, downloading an era at a time almost entirely cuts out request overhead
* we can "reasonably" recreate detailed state history from almost any point in time, turning an O(slot) process into effectively O(1) - we'll still need caches and indices to do this with sufficient efficiency for the rest api, but at least it cuts the whole process down to minutes instead of hours, for arbitrary points in time (a sketch of this idea follows below)

* CI: ignore failures with Nim-1.6 (temporary)
* test fixes

Co-authored-by: Ștefan Talpalaru <stefantalpalaru@yahoo.com>
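As a toy illustration of the O(slot) to O(1) point above - every type and proc below is a hypothetical stand-in, not the real state transition or era reader:

```nim
type
  State = object
    slot: uint64
  SignedBlock = object
    slot: uint64

proc applyBlock(st: var State, blck: SignedBlock) =
  # stand-in for the full state transition
  st.slot = blck.slot

proc stateAt(targetSlot: uint64,
             loadEraState: proc(era: uint64): State,
             loadEraBlocks: proc(era: uint64): seq[SignedBlock]): State =
  const slotsPerEra = 8192'u64
  let era = targetSlot div slotsPerEra
  # O(1): jump straight to the stored state of the relevant era, then
  # replay at most one era of blocks - instead of replaying from genesis,
  # which is O(slot)
  result = loadEraState(era)
  for b in loadEraBlocks(era):
    if b.slot <= targetSlot:
      applyBlock(result, b)
```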
README.md
attestation_pool.nim
block_clearance.nim
block_dag.nim
block_pools_types.nim
block_pools_types_light_client.nim
block_quarantine.nim
blockchain_dag.nim
blockchain_dag_light_client.nim
exit_pool.nim
spec_cache.nim
sync_committee_msg_pool.nim
README.md
Consensus object pools
This folder holds the various consensus object pools needed for a blockchain client.
Objects in these pools have passed the "gossip validation" filter according to the specs:
- blocks: https://github.com/ethereum/consensus-specs/blob/v1.1.10/specs/phase0/p2p-interface.md#beacon_block
- aggregate attestations: https://github.com/ethereum/consensus-specs/blob/v1.1.10/specs/phase0/p2p-interface.md#beacon_aggregate_and_proof
- unaggregated attestation: https://github.com/ethereum/consensus-specs/blob/v1.1.10/specs/phase0/p2p-interface.md#beacon_attestation_subnet_id
- voluntary exits: https://github.com/ethereum/consensus-specs/blob/v1.1.10/specs/phase0/p2p-interface.md#voluntary_exit
- Attester slashings: https://github.com/ethereum/consensus-specs/blob/v1.1.10/specs/phase0/p2p-interface.md#attester_slashing
- Proposer slashings: https://github.com/ethereum/consensus-specs/blob/v1.1.10/specs/phase0/p2p-interface.md#proposer_slashing
After "gossip validation" the consensus objects can be rebroadcasted as they are optimistically good, however for internal processing further verification is needed. For blocks, this means verifying state transition and all contained cryptographic signatures (instead of just the proposer signature). For other consensus objects, it is possible that gossip validation is a superset of consensus verification (TODO).
The pools present in this folder are:
- block_pools:
- block_quarantine: for seemingly valid blocks that are on a fork unknown to us.
- block_clearance: to verify (state_transition + cryptography) candidate blocks.
- blockchain_dag: an in-memory directed acyclic graph of fully validated and verified blockchain candidates, with the tail being the last finalized epoch. A block in the DAG MUST be in the fork choice and a block in the fork choice MUST be in the DAG (except for orphans following finalization). On finalization, non-empty epoch blocks are stored in the beacon_chain_db. (See the sketch after this list.)
- attestation_pool: handles attestations received from gossip and collects them for fork choice.
- exit_pool: handles voluntary exits and forced exits (attester slashings and proposer slashings).
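A minimal sketch of a block's path through these pools (types and procs are illustrative stand-ins, not the real API):

```nim
import std/tables

type
  Root = string # stand-in for Eth2Digest
  SignedBlock = object
    root, parent: Root
    slot: uint64

  Quarantine = object
    orphans: Table[Root, SignedBlock] # parent not yet known to us

  Dag = object
    blocks: Table[Root, SignedBlock]  # verified chain candidates

proc addBlock(dag: var Dag, q: var Quarantine, b: SignedBlock) =
  if b.parent notin dag.blocks:
    # block_quarantine: park the block until its parent arrives
    q.orphans[b.root] = b
  else:
    # block_clearance would run state_transition + signature checks here
    dag.blocks[b.root] = b
    # any quarantined child of `b` could now be retried
```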