* update some v1.1.0-alpha.1 references to v1.1.0-alpha.2
* remove unused getDepositMessage overload and move other out of datatypes/base
* bump nim-eth2-scenarios to download v1.1.0-alpha.2 test vectors
* construct object rather than result
Since quite a lot of additional procs were now compiled as generics, this led to compiler bugs that had
to be worked around:
* The `Domain` type was renamed to `Eth2Domain` to avoid compilation errors
due to conflicts with `nativesockets.Domain`.
Similarly, `eth2_network.KeyPair` was renamed to `NetKeyPair`.
* A new more robust version of `hexToByteArray` was added to stew
* Add default port info
* Edit CLI and docs to reflect 9000 port
* Apply suggestions from code review
Co-authored-by: Jacek Sieka <arnetheduck@gmail.com>
* set upper bound on EpochRef cache
* max 32 EpochRef instances
* less memory waste in BlockRef by removing EpochRef seq that is mostly
unused (~20mb)
* less memory waste in dag block lookup by not keeping an extra copy of
digest (~70mb)
* fix `==` and `$` for Eth2Digest
* remove `ChainDAG.tmpState` (~50mb?)
all in all, this branch cuts mainnet memory usage by ~160-180mb and puts
limits on EpochRef cache usage - where normally it hovered around 950mb
before, it's now sitting at 600-700mb on my machine.
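As a rough illustration of the bound above, a minimal sketch in Python of a size-capped cache - the name `EpochRefCache`, the key type and the LRU eviction policy are assumptions for illustration, not the actual Nimbus implementation:
```
from collections import OrderedDict

MAX_EPOCH_REFS = 32  # upper bound from the commit above

class EpochRefCache:
    """Bounded cache: evicts the least-recently-used entry once full.

    Illustrative sketch; the real EpochRef cache keys/values differ."""
    def __init__(self, capacity=MAX_EPOCH_REFS):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)  # mark as recently used
            return self.entries[key]
        return None

    def put(self, key, epoch_ref):
        self.entries[key] = epoch_ref
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # drop the stalest entry
```
Capping the cache at 32 entries trades the occasional recomputation for a predictable memory ceiling.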
* docs
* initial immutable validator database factoring
* remove changes from chain_dag: this abstraction properly belongs in beacon_chain_db
* add merging mutable/immutable validator portions; individually test database roundtripping of immutable validators and states-sans-immutable-validators
* update test summaries
* use stew/assign2 instead of Nim assignment
* add reading/writing of immutable validators in chaindag
* remove unused import
* replace chunked k/v store of immutable validators with per-row SQL table storage
* use List instead of HashList
* un-stub some ncli_db code so that it uses
* switch HashArray to array; move BeaconStateNoImmutableValidators from datatypes to beacon_chain_db
* begin only-mutable-part state storage
* uncomment some assigns
* work around https://github.com/nim-lang/Nim/issues/17253
* fix most of the issues/oversights; local sim runs again
* fix test suite by adding missing beaconstate field to copy function
* have ncli bench also store immutable validators
* extract some immutable-validator-specific code from the beacon chain db module
* add more rigorous database state roundtripping, with changing validator sets
* adjust ncli_db to use new schema
* simplify putState/getState by moving all immutable validator accounting into beacon state DB
* remove redundant test case and move code to immutable-beacon-chain module
* more efficient, but still brute-force, mutable+immutable validator merging
* reuse BeaconState in getState
* ensure HashList/HashArray caches are cleared when reusing getState buffers; add ncli_db and a unit test to verify this
* HashList.clear() -> HashList.clearCache()
* only copy incrementally necessary immutable validators
* increase strictness of test cases and fix/work around resulting HashList cache invalidation issues
* remove explanatory scaffolding
* allow for storage of full (with all validators) states for backwards/forwards-compatibility
* adjust DbSeq type usage
* store full, with-validators, state every 64 epochs to enable reverting versions
* reduce memory allocation and intermediate objects in state storage codepath
* eliminate allocation/copying through intermediate BeaconStateNoImmutableValidators objects
* skip benchmarking initial genesis-validator-heavy state store
* always store new-style state and sometimes old-style state
* document intent behind BeaconState/Validator type-punnery
* more accurate failure message on SQLite in-memory database initialization failure
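To make the mutable/immutable split above concrete, here is a minimal sketch, assuming (for illustration only) that `pubkey` and `withdrawal_credentials` are the immutable fields and that they live in their own per-row SQL table; the table name, columns and helper names are hypothetical, not the actual beacon_chain_db schema:
```
import sqlite3

# Immutable fields are stored once per validator; everything else is
# serialized with each state (illustrative schema, not the real one).
IMMUTABLE_FIELDS = ("pubkey", "withdrawal_credentials")

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE immutable_validators (
    idx INTEGER PRIMARY KEY,      -- validator index
    pubkey BLOB,
    withdrawal_credentials BLOB)""")

def put_state(validators):
    # Append only validators the table doesn't know yet - earlier
    # indices are already stored and, being immutable, never change.
    (known,) = conn.execute(
        "SELECT count(*) FROM immutable_validators").fetchone()
    for idx, v in enumerate(validators[known:], start=known):
        conn.execute("INSERT INTO immutable_validators VALUES (?, ?, ?)",
                     (idx, v["pubkey"], v["withdrawal_credentials"]))
    # The state itself is stored without the immutable fields.
    return [{k: x for k, x in v.items() if k not in IMMUTABLE_FIELDS}
            for v in validators]

def get_state(mutable_validators):
    # Merge the immutable columns back in, by validator index.
    rows = conn.execute(
        "SELECT pubkey, withdrawal_credentials FROM immutable_validators "
        "ORDER BY idx").fetchall()
    return [dict(m, pubkey=pk, withdrawal_credentials=wc)
            for m, (pk, wc) in zip(mutable_validators, rows)]
```
Since a validator's immutable part never changes once registered, only indices past the current table size need to be appended when storing a newer state - the "only copy incrementally necessary immutable validators" step above.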
Era files contain 8192 blocks and a state - the block count matching the
length of the array holding block roots in the state - meaning that each
block is verifiable using the pubkeys and block roots from the state. Of
course, one would need to know the root of the state as well, which is
available in the first block of the _next_ file - or known from outside.
This PR also adds an implementation to write e2s, e2i and era files, as
well as a python script to inspect them.
All in all, the format is very similar to what goes on in the network
requests meaning it can trivially serve as a backing format for serving
said requests.
Mainnet, up to the first 671k slots, takes up 3.5gb - in each era file,
the BeaconState contributes about 9mb at current validator set sizes, up
from ~3mb in the early blocks, for a grand total of ~558mb for the 82 eras
tested - this state overhead could potentially be recomputed instead of
stored, but one would lose the ability to verify individual blocks in
isolation (eras could still be verified using historical roots).
```
-rw-rw-r--. 1 arnetheduck arnetheduck 16 5 mar 11.47 ethereum2-mainnet-00000000-00000001.e2i
-rw-rw-r--. 1 arnetheduck arnetheduck 1,8M 5 mar 11.47 ethereum2-mainnet-00000000-00000001.e2s
-rw-rw-r--. 1 arnetheduck arnetheduck 65K 5 mar 11.47 ethereum2-mainnet-00000001-00000001.e2i
-rw-rw-r--. 1 arnetheduck arnetheduck 18M 5 mar 11.47 ethereum2-mainnet-00000001-00000001.e2s
...
-rw-rw-r--. 1 arnetheduck arnetheduck 65K 5 mar 11.52 ethereum2-mainnet-00000051-00000001.e2i
-rw-rw-r--. 1 arnetheduck arnetheduck 68M 5 mar 11.52 ethereum2-mainnet-00000051-00000001.e2s
-rw-rw-r--. 1 arnetheduck arnetheduck 61K 5 mar 11.11 ethereum2-mainnet-00000052-00000001.e2i
-rw-rw-r--. 1 arnetheduck arnetheduck 62M 5 mar 11.11 ethereum2-mainnet-00000052-00000001.e2s
```
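In the spirit of the python inspection script mentioned above, a minimal reader might look as follows - a sketch assuming an 8-byte record header of 2-byte type, 4-byte little-endian length and 2 reserved bytes, followed by the payload; the exact header layout and the compression of block/state payloads should be checked against the format notes in the repository:
```
import struct

HEADER = struct.Struct("<2sIH")  # type, length (LE uint32), reserved

def read_entries(path):
    """Yield (type, payload) pairs from an e2s/era file.

    Assumed framing: 8-byte header (2-byte type, 4-byte little-endian
    length, 2 reserved bytes) followed by `length` bytes of payload."""
    with open(path, "rb") as f:
        while True:
            header = f.read(HEADER.size)
            if len(header) < HEADER.size:
                break  # end of file
            typ, length, _reserved = HEADER.unpack(header)
            yield typ, f.read(length)

for typ, data in read_entries("ethereum2-mainnet-00000000-00000001.e2s"):
    print(typ.hex(), len(data))
```
Running this against a file from the listing above would print each record's type and payload size; block and state payloads would still need decompression and SSZ decoding.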
* Deferred DAG and fork choice pruning
* fixup
* Address https://github.com/status-im/nimbus-eth2/pull/2384/files#r589448448, rely only on onSlotEnd for state pruning
* no need to store needPruning in the data structure
* lastPrunePoint is updated in pruning proc
* Split eager and lazy pruning
* enforce pruning in updateHead
* allow multiple hard fork datatypes to coexist
* update to 1.0.1
* merge recent datatypes.nim updates
* trigger rebuild now that the out-of-disk-space machine is offline
* fix replays stalling processing
Occasionally, attestations will arrive that vote for a target derived
either from the finalized block or earlier. In these cases, Nimbus would
replay the state transition of up to 32 epochs worth of blocks because
the finalized state has been pruned, delaying other processing and
leading to poor inclusion distance.
* put cheap attestation checks before forming EpochRef
* check that attestation target is not from an unviable history with
regard to finalization
* fix overly aggressive state pruning removing the state close to the
finalized checkpoint resulting in rare long replays for valid
attestations
* log long replays
* harden logging and traversal of nil BlockSlot
* simplify target check
no need to lookup target in chain dag again
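Abstractly, the resulting ordering puts cheap, state-free comparisons before any EpochRef construction or state replay; a sketch with illustrative names, not the actual proc signatures:
```
def pre_check_attestation(target_epoch, target_root,
                          finalized_epoch, finalized_root):
    """Cheap viability checks before building an EpochRef (illustrative)."""
    if target_epoch < finalized_epoch:
        return False  # votes for pre-finalized history: unviable
    if target_epoch == finalized_epoch and target_root != finalized_root:
        return False  # conflicts with finalization: unviable
    return True  # worth the cost of EpochRef construction / replay
```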
* fixup
* fixup
* refactor slot loop
* fix attestations being sent out early when _any_ block arrives (as
opposed to the block for the "correct" slot)
* fix attestations being sent out late when block already arrived
* refactor slot processing loop
* shutdown if clock moves backwards significantly
* fix docs
* notify caller whether the block actually arrived
* fix several memory leaks due to temporaries not being reset during
init
* avoid massive main() function with lots of stuff in it
* disable nim-prompt (unused)
* reuse validator pool instance in eth2_processor
* style cleanup
* database state storage benchmarking via ncli_db
* more cleanups from immutable validator state branch
* unexport some eth2_network constants and remove unused variables/templates
* make two PeerScore constants public
* Create CLI tool for slashing export
* Use SQLite as a DB instead of a KV-store
* Keeps v1 and v2 DBs around
* Uses the same schema as Lighthouse v1.1.0
* Passes all interchange tests + skeleton of finalization pruning
* Removes tests that would violate v5 / minimal slashing DB and MinSlot rules
* Migration tool added using low-watermark scheme for faster migration of large numbers of validators
* force pushing to fix unstable base
* increase attestation/aggregate queue sizes
when there are many validators, many aggregates and attestations arrive
every slot - increase the queue size a bit - also do batches on each
idle loop iteration since it's fairly quick
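An asyncio analogue of the bounded-queue-plus-idle-batching idea - the queue size and names here are illustrative, not the values used by Nimbus:
```
import asyncio

AGGREGATE_QUEUE_SIZE = 4096  # illustrative bound only

queue = asyncio.Queue(maxsize=AGGREGATE_QUEUE_SIZE)

def validate_batch(batch):
    print(f"validating {len(batch)} attestations")  # stand-in for BLS checks

async def process_on_idle():
    while True:
        batch = [await queue.get()]           # wait for at least one item
        while not queue.empty():              # drain whatever else arrived
            batch.append(queue.get_nowait())  # into a single batch
        validate_batch(batch)                 # fairly quick, per the commit
        await asyncio.sleep(0)                # yield back to the event loop
```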
* don't score subnets for now
* wrapping up
* refactor and cleanups
* gossip parameters fixes
* comment fix
Co-authored-by: Jacek Sieka <jacek@status.im>
* hotfix gossip scoring
* skip gossip scoring parameter validation, as our parameters currently violate it (doesn't matter since we don't score)
* work around gossip validation again
* expose node signatures
* format bitseqs as hex strings
* format trusted sigs as hex strings (same as untrusted)
* reuse rpc client sigs
* include validator index in duties
* move SyncInfo to spec
* use IntSet rather than HashSet[ValidatorIndex]
* add bounds check before uint64 -> int conversion
* use intsets in block transitions
* remove superfluous Nim issue explanation/reference
* performance fixes
* don't mark tree cache as dirty on read-only List accesses
* store only blob in memory for keys and signatures, parse blob lazily
* compare public keys by blob instead of parsing / converting to raw
* compare Eth2Digest using non-constant-time comparison
* avoid some unnecessary validator copying
This branch will in particular speed up deposit processing which has
been slowing down block replay.
Pre (mainnet, 1600 blocks):
```
All time are ms
Average, StdDev, Min, Max, Samples, Test
Validation is turned off meaning that no BLS operations are performed
3450.269, 0.000, 3450.269, 3450.269, 1, Initialize DB
0.417, 0.822, 0.036, 21.098, 1400, Load block from database
16.521, 0.000, 16.521, 16.521, 1, Load state from database
27.906, 50.846, 8.104, 1507.633, 1350, Apply block
52.617, 37.029, 20.640, 135.938, 50, Apply epoch block
```
Post:
```
3502.715, 0.000, 3502.715, 3502.715, 1, Initialize DB
0.080, 0.560, 0.035, 21.015, 1400, Load block from database
17.595, 0.000, 17.595, 17.595, 1, Load state from database
15.706, 11.028, 8.300, 107.537, 1350, Apply block
33.217, 12.622, 17.331, 60.580, 50, Apply epoch block
```
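The lazy-parsing pattern from the bullets above, sketched in Python: keep the raw compressed blob, compare and hash by blob, and pay for cryptographic deserialization only when a key is actually used. The class and helper names are hypothetical:
```
class LazyPubKey:
    """Store the raw compressed blob; parse only on first use."""
    __slots__ = ("blob", "_parsed")

    def __init__(self, blob):
        self.blob = blob      # 48-byte compressed BLS pubkey
        self._parsed = None   # deserialized form, filled in lazily

    def __eq__(self, other):  # compare by blob: no parsing needed
        return self.blob == other.blob

    def __hash__(self):
        return hash(self.blob)

    def parsed(self):
        if self._parsed is None:
            self._parsed = expensive_deserialize(self.blob)
        return self._parsed

def expensive_deserialize(blob):
    # Stand-in for the real BLS point decompression / validation.
    return blob
```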
* more perf fixes
* load EpochRef cache into StateCache more aggressively
* point out security concern with public key cache
* reuse proposer index from state when processing block
* avoid genericAssign in a few more places
* don't parse key when signature is unparseable
* fix `==` overload for Eth2Digest
* preallocate validator list when getting active validators
* speed up proposer index calculation a little bit
* reuse cache when replaying blocks in ncli_db
* avoid a few more copying loops
```
Average, StdDev, Min, Max, Samples, Test
Validation is turned off meaning that no BLS operations are performed
3279.158, 0.000, 3279.158, 3279.158, 1, Initialize DB
0.072, 0.357, 0.035, 13.400, 1400, Load block from database
17.295, 0.000, 17.295, 17.295, 1, Load state from database
5.918, 9.896, 0.198, 98.028, 1350, Apply block
15.888, 10.951, 7.902, 39.535, 50, Apply epoch block
0.000, 0.000, 0.000, 0.000, 0, Database block store
```
* clear full balance cache before processing rewards and penalties
```
All time are ms
Average, StdDev, Min, Max, Samples, Test
Validation is turned off meaning that no BLS operations are performed
3947.901, 0.000, 3947.901, 3947.901, 1, Initialize DB
0.124, 0.506, 0.026, 202.370, 363345, Load block from database
97.614, 0.000, 97.614, 97.614, 1, Load state from database
0.186, 0.188, 0.012, 99.561, 357262, Advance slot, non-epoch
14.161, 5.966, 1.099, 395.511, 11524, Advance slot, epoch
1.372, 4.170, 0.017, 276.401, 363345, Apply block, no slot processing
0.000, 0.000, 0.000, 0.000, 0, Database block store
```
* use `idleAsync` to more evenly divide cpu attention when syncing in
particular - this gives networking better latency
* more strict exception handling in eth2_processor
* allow always-on subscription to all attestation subnets when gossiping
* in subscribe-all-subnets mode, consider all subnets to be stability subnets for ENR purposes
* remove await/async from sub/unsub
* fix unsubscribe wrong key (missed _snappy)
* use the right libp2p commit hash
* remove unused async
* fix inspector
* fix subnet calculation in RPC and insert broadcast attestations into node's pool
* unify codepaths to ensure only mostly-checked-to-be-valid attestations enter the pool, even from node's own broadcasts
* update attestation pool tests for new validateAttestation param
Co-authored-by: Dustin Brody <tersec@users.noreply.github.com>
* make subnet cycling more robust; use one stability subnet/validator; explicitly represent gossip enabled/disabled
* fix asymmetry in _snappy being used for subscriptions but not unsubscriptions
* remove redundant comment
* minimal RPC and VC support for informing BN of subnets
* create and verify slot signatures in RPC interface and VC
* loosen old slot check
* because Slot + uint64 works but uint64 + Slot doesn't
* document assumptions for head state use; don't clear stability subnets; guard against VC not having checked an epoch ahead, fixing a crash; clarify unsigned comparison
* revert unsub fix
* checkpoint database at end of each slot
To avoid spending time synchronizing with the file system during
processing, the manual checkpointing mode turns off fsync and instead
checkpoints the database when the slot has ended.
From an SQLite perspective, in WAL mode this guarantees database
consistency but may lead to data loss, which is fine - anything missing
from the beacon chain database can be recovered on the next startup.
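In terms of standard SQLite pragmas, the scheme maps roughly onto the following sketch, where Python's sqlite3 module and the slot hook stand in for the actual Nimbus database layer:
```
import sqlite3

db = sqlite3.connect("beacon_chain.sqlite3")
db.execute("PRAGMA journal_mode = WAL")  # WAL keeps the main file consistent
db.execute("PRAGMA synchronous = OFF")   # skip fsync while processing

def on_slot_end():
    # Flush the WAL into the main database file once per slot instead of
    # syncing on every transaction; a crash can lose recent writes, which
    # is fine - they are recoverable on the next startup.
    db.execute("PRAGMA wal_checkpoint(TRUNCATE)")
```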
* log sync status and delay in slot start message
* bump
* detect already-aggregate-voted condition before attestation pool; add is_aggregator tests
* replace pair of attestation-per-epoch tracking lists with single list and remove Option use
* fix attestation condition
* use safer type conversions; add more is_aggregator tests
* don't lag aggregated attestations by a slot
* don't use aggregation topic at all
* use aggregates again, but with aggressively low ATTESTATION_PROPAGATION_SLOT_RANGE; seems to hold on to LH 1.0 nodes
* clean up scaffolding and double ATTESTATION_PROPAGATION_SLOT_RANGE to 16
* increase ATTESTATION_PROPAGATION_SLOT_RANGE to 24
* increase ATTESTATION_PROPAGATION_SLOT_RANGE to 28 and isolate in only used function due to customization; remove TRAILING_DISTANCE machinery
The key change here is that `addChunksAndGenMerkleProofs` is called
with all pending deposits instead of just the deposits included in
the block. The latter was effectively producing merkle proofs against
a different root.
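A toy example of why this matters: a merkle proof only verifies against the root of the tree that contains every pending leaf, so hashing in just the block's deposits yields proofs against a root nobody else computes. Fixed-depth, zero-padded tree for illustration:
```
from hashlib import sha256

def h(a, b):
    return sha256(a + b).digest()

def root_and_proof(leaves, index, depth=3):
    """Root of a zero-padded binary tree plus the proof for leaves[index]."""
    layer = leaves + [b"\x00" * 32] * (2 ** depth - len(leaves))
    proof = []
    for _ in range(depth):
        proof.append(layer[index ^ 1])           # sibling at this level
        layer = [h(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
        index //= 2
    return layer[0], proof

deposits = [bytes([i]) * 32 for i in range(5)]  # five pending deposit leaves
root_all, proof = root_and_proof(deposits, 2)   # proof vs all pending deposits
root_some, _ = root_and_proof(deposits[:3], 2)  # only this block's deposits
assert root_all != root_some                    # proof matches root_all only
```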
* increase default max peers
also avoid reconnection when opening stream as this might induce
a loop
* Use dial without addresses
* dial back max peers a little
* Revert "Revert "Full "node" RPC calls implementation and fixes to peer lifetime states. (#2065)" (#2082)"
This reverts commit 7cc3dc8027.
* fix nil disconnectedFut crash
* fixes
don't resetPeer, it causes peer miscounts
* disconnect disconnecting peers
...when there's a race.
* avoid connection spamming
* never decrease SeenTable timeout
* only recover ENR for known peers
* seen only when really disconnected
* Handle some web3 timeouts better
* Add support for developer .env files
* Eth1 improvements; Mainnet genesis state
Notable changes:
* The deposits table has been removed from the database. The client
will no longer process all deposits on start-up.
* The network metadata now includes a "state snapshot" of the deposit
contract. This allows the client to skip syncing deposits made prior
to the snapshot (i.e. genesis). Suitable metadata added for Pyrmont
and Mainnet.
* The Eth1 monitor won't be started unless there are validators attached
to the node.
* The genesis detection code is now optional and disabled by default
* Bugfix: The client should not produce blocks that will fail validation
when it hasn't downloaded the latest deposits yet
* Bugfix: Work around the database corruption affecting Pyrmont nodes
* Remove metadata for Toledo and Medalla