* Add default port info
* Edit CLI and docs to reflect 9000 port
* Apply suggestions from code review
Co-authored-by: Jacek Sieka <arnetheduck@gmail.com>
* set upper bound on EpochRef cache
* max 32 EpochRef instances
* less memory waste in BlockRef by removing EpochRef seq that is mostly
unused (~20mb)
* less memory waste in dag block lookup by not keeping an extra copy of
digest (~70mb)
* fix `==` and `$` for Eth2Digest
* remove `ChainDAG.tmpState` (~50mb?)
all in all, this branch cuts mainnet memory usage by ~160-180mb and puts
limits on EpochRef cache usage - where normally it hovered around 950mb
before, it's now sitting at 600-700mb on my machine.
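A minimal sketch of the cache bound described above, assuming a hypothetical FIFO `epochRefs` seq (the real container and eviction policy may differ):
```
type
  EpochRef = ref object        # stand-in for the real EpochRef
  ChainDAG = ref object
    epochRefs: seq[EpochRef]   # hypothetical FIFO cache

const MaxEpochRefs = 32        # upper bound introduced by this branch

proc addEpochRef(dag: ChainDAG, epochRef: EpochRef) =
  # evict the oldest entry once the cap is reached, then cache the new one
  if dag.epochRefs.len >= MaxEpochRefs:
    dag.epochRefs.delete(0)
  dag.epochRefs.add(epochRef)
```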
* docs
* initial immutable validator database factoring
* remove changes from chain_dag: this abstraction properly belongs in beacon_chain_db
* add merging mutable/immutable validator portions; individually test database roundtripping of immutable validators and states-sans-immutable-validators
* update test summaries
* use stew/assign2 instead of Nim assignment
* add reading/writing of immutable validators in chaindag
* remove unused import
* replace chunked k/v store of immutable validators with per-row SQL table storage
* use List instead of HashList
* un-stub some ncli_db code so that it uses
* switch HashArray to array; move BeaconStateNoImmutableValidators from datatypes to beacon_chain_db
* begin only-mutable-part state storage
* uncomment some assigns
* work around https://github.com/nim-lang/Nim/issues/17253
* fix most of the issues/oversights; local sim runs again
* fix test suite by adding missing beaconstate field to copy function
* have ncli bench also store immutable validators
* extract some immutable-validator-specific code from the beacon chain db module
* add more rigorous database state roundtripping, with changing validator sets
* adjust ncli_db to use new schema
* simplify putState/getState by moving all immutable validator accounting into beacon state DB
* remove redundant test case and move code to immutable-beacon-chain module
* more efficient, but still brute-force, mutable+immutable validator merging
* reuse BeaconState in getState
* ensure HashList/HashArray caches are cleared when reusing getState buffers; add ncli_db and a unit test to verify this
* HashList.clear() -> HashList.clearCache()
* only copy incrementally necessary immutable validators
* increase strictness of test cases and fix/work around resulting HashList cache invalidation issues
* remove explanatory scaffolding
* allow for storage of full (with all validators) states for backwards/forwards-compatibility
* adjust DbSeq type usage
* store full, with-validators, state every 64 epochs to enable reverting versions
* reduce memory allocation and intermediate objects in state storage codepath
* eliminate allocation/copying through intermediate BeaconStateNoImmutableValidators objects
* skip benchmarking initial genesis-validator-heavy state store
* always store new-style state and sometimes old-style state
* document intent behind BeaconState/Validator type-punnery
* more accurate failure message on SQLite in-memory database initialization failure
Era files contain 8192 blocks and a state corresponding to the length of
the array holding block roots in the state, meaning that each block is
verifiable using the pubkeys and block roots from the state. Of course,
one would need to know the root of the state as well, which is available
in the first block of the _next_ file - or known from outside.
This PR also adds an implementation to write e2s, e2i and era files, as
well as a python script to inspect them.
All in all, the format is very similar to what goes on in the network
requests, meaning it can trivially serve as a backing format for serving
said requests.
Mainnet, up to the first 671k slots, takes up 3.5gb - in each era file,
the BeaconState contributes about 9mb at current validator set sizes, up
from ~3mb in the early blocks, for a grand total of ~558mb for the 82 eras
tested - this overhead could potentially be recalculated rather than stored, but one would lose
the ability to verify individual blocks (eras could still be verified using
historical roots).
```
-rw-rw-r--. 1 arnetheduck arnetheduck 16 5 mar 11.47 ethereum2-mainnet-00000000-00000001.e2i
-rw-rw-r--. 1 arnetheduck arnetheduck 1,8M 5 mar 11.47 ethereum2-mainnet-00000000-00000001.e2s
-rw-rw-r--. 1 arnetheduck arnetheduck 65K 5 mar 11.47 ethereum2-mainnet-00000001-00000001.e2i
-rw-rw-r--. 1 arnetheduck arnetheduck 18M 5 mar 11.47 ethereum2-mainnet-00000001-00000001.e2s
...
-rw-rw-r--. 1 arnetheduck arnetheduck 65K 5 mar 11.52 ethereum2-mainnet-00000051-00000001.e2i
-rw-rw-r--. 1 arnetheduck arnetheduck 68M 5 mar 11.52 ethereum2-mainnet-00000051-00000001.e2s
-rw-rw-r--. 1 arnetheduck arnetheduck 61K 5 mar 11.11 ethereum2-mainnet-00000052-00000001.e2i
-rw-rw-r--. 1 arnetheduck arnetheduck 62M 5 mar 11.11 ethereum2-mainnet-00000052-00000001.e2s
```
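A small sketch of the era arithmetic implied above, with the file-name scheme read off the listing (hex-encoded era numbers); the e2s/e2i record encoding itself is not shown:
```
import std/strformat

const SlotsPerEra = 8192'u64  # one era = the state's block-roots array length

proc eraFileName(net: string, era, count: uint64): string =
  # matches the listing above, e.g. ethereum2-mainnet-00000052-00000001.e2s
  &"ethereum2-{net}-{era:08x}-{count:08x}"

when isMainModule:
  # slot 671_000 falls in era 81 (0x51), consistent with the 82 eras tested
  echo eraFileName("mainnet", 671_000'u64 div SlotsPerEra, 1)
```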
* Deferred DAG and fork choice pruning
* fixup
* Address https://github.com/status-im/nimbus-eth2/pull/2384/files#r589448448, rely only on onSlotEnd for state pruning
* no need to store needPruning in the data structure
* lastPrunePoint is updated in pruning proc
* Split eager and LazyPruning
* enforce pruning in updateHead
* allow multiple hard fork datatypes to coexist
* update to 1.0.1
* merge recent datatypes.nim updates
* trigger rebuild now that the out-of-disk-space machine is offline
* fix replays stalling processing
Occasionally, attestations will arrive that vote for a target derived
either from the finalized block or earlier. In these cases, Nimbus would
replay the state transition of up to 32 epochs worth of blocks because
the finalized state has been pruned, delaying other processing and
leading to poor inclusion distance.
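The fix, detailed in the bullets below, starts with a cheap viability check before any EpochRef is built; a minimal sketch, with hypothetical names:
```
type
  Epoch = uint64
  Checkpoint = object
    epoch: Epoch

func targetViable(target: Checkpoint, finalizedEpoch: Epoch): bool =
  # attestations voting for a target at or before finalization come from
  # an unviable history and can be dropped without touching any state
  target.epoch > finalizedEpoch
```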
* put cheap attestation checks before forming EpochRef
* check that attestation target is not from an unviable history with
regard to finalization
* fix overly aggressive state pruning that removed the state close to
the finalized checkpoint, resulting in rare long replays for valid
attestations
* log long replays
* harden logging and traversal of nil BlockSlot
* simplify target check
no need to lookup target in chain dag again
* fixup
* fixup
* refactor slot loop
* fix attestations being sent out early when _any_ block arrives (as
opposed to the block for the "correct" slot)
* fix attestations being sent out late when block already arrived
* refactor slot processing loop
* shutdown if clock moves backwards significantly
* fix docs
* notify caller whether the block actually arrived
* fix several memory leaks due to temporaries not being reset during
init
* avoid massive main() function with lots of stuff in it
* disable nim-prompt (unused)
* reuse validator pool instance in eth2_processor
* style cleanup
* database state storage benchmarking via ncli_db
* more cleanups from immutable validator state branch
* unexport some eth2_network constants and remove unused variables/templates
* make two PeerScore constants public
* Create CLI tool for slashing export
* Use SQLite as a DB instead of a KV-store
* Keeps v1 and v2 DBs around
* Uses the same schema as Lighthouse v1.1.0
* Passes all interchange tests + skeleton of finalization pruning
* Removes tests that would violate v5 / minimal slashing DB and MinSlot rules
* Migration tool added using low-watermark scheme for faster migration of large numbers of validators
* force pushing to fix unstable base
* increase attestation/aggregate queue sizes
when there are many validators, many aggregates and attestations arrive
every slot - increase the queue size a bit - also do batches on each
idle loop iteration since it's fairly quick
* don't score subnets for now
* wrapping up
* refactor and cleanups
* gossip parameters fixes
* comment fix
Co-authored-by: Jacek Sieka <jacek@status.im>
* hotfix gossip scoring
* skip gossip scoring parameter validation, as our parameters currently violate it (doesn't matter, since we don't score)
* work around gossip validation again
* expose node signatures
* format bitseqs as hex strings
* format trusted sigs as hex strings (same as untrusted)
* reuse rpc client sigs
* include validator index in duties
* move SyncInfo to spec
* use IntSet rather than HashSet[ValidatorIndex]
* add bounds check before uint64 -> int conversion
* use intsets in block transitions
* remove superfluous Nim issue explanation/reference
* performance fixes
* don't mark tree cache as dirty on read-only List accesses
* store only blob in memory for keys and signatures, parse blob lazily (sketched below, after the benchmarks)
* compare public keys by blob instead of parsing / converting to raw
* compare Eth2Digest using non-constant-time comparison
* avoid some unnecessary validator copying
This branch will in particular speed up deposit processing which has
been slowing down block replay.
Pre (mainnet, 1600 blocks):
```
All time are ms
Average, StdDev, Min, Max, Samples, Test
Validation is turned off meaning that no BLS operations are performed
3450.269, 0.000, 3450.269, 3450.269, 1, Initialize DB
0.417, 0.822, 0.036, 21.098, 1400, Load block from database
16.521, 0.000, 16.521, 16.521, 1, Load state from database
27.906, 50.846, 8.104, 1507.633, 1350, Apply block
52.617, 37.029, 20.640, 135.938, 50, Apply epoch block
```
Post:
```
3502.715, 0.000, 3502.715, 3502.715, 1, Initialize DB
0.080, 0.560, 0.035, 21.015, 1400, Load block from database
17.595, 0.000, 17.595, 17.595, 1, Load state from database
15.706, 11.028, 8.300, 107.537, 1350, Apply block
33.217, 12.622, 17.331, 60.580, 50, Apply epoch block
```
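A sketch of the lazy-key idea from the list above (types and field names hypothetical): keep only the compressed blob in memory, compare by blob, and defer full deserialization until a signature check actually needs the key:
```
import std/options

type
  RawPubKey = array[48, byte]   # compressed BLS public key, as stored
  ParsedKey = object            # stand-in for the real parsed key type
  LazyPubKey = object
    blob: RawPubKey             # always present, cheap to compare
    parsed: Option[ParsedKey]   # filled on first use only

func `==`(a, b: LazyPubKey): bool =
  a.blob == b.blob              # equality without any parsing
```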
* more perf fixes
* load EpochRef cache into StateCache more aggressively
* point out security concern with public key cache
* reuse proposer index from state when processing block
* avoid genericAssign in a few more places
* don't parse key when signature is unparseable
* fix `==` overload for Eth2Digest
* preallocate validator list when getting active validators
* speed up proposer index calculation a little bit
* reuse cache when replaying blocks in ncli_db
* avoid a few more copying loops
```
Average, StdDev, Min, Max, Samples, Test
Validation is turned off meaning that no BLS operations are performed
3279.158, 0.000, 3279.158, 3279.158, 1, Initialize DB
0.072, 0.357, 0.035, 13.400, 1400, Load block from database
17.295, 0.000, 17.295, 17.295, 1, Load state from database
5.918, 9.896, 0.198, 98.028, 1350, Apply block
15.888, 10.951, 7.902, 39.535, 50, Apply epoch block
0.000, 0.000, 0.000, 0.000, 0, Database block store
```
* clear full balance cache before processing rewards and penalties
```
All time are ms
Average, StdDev, Min, Max, Samples, Test
Validation is turned off meaning that no BLS operations are performed
3947.901, 0.000, 3947.901, 3947.901, 1, Initialize DB
0.124, 0.506, 0.026, 202.370, 363345, Load block from database
97.614, 0.000, 97.614, 97.614, 1, Load state from database
0.186, 0.188, 0.012, 99.561, 357262, Advance slot, non-epoch
14.161, 5.966, 1.099, 395.511, 11524, Advance slot, epoch
1.372, 4.170, 0.017, 276.401, 363345, Apply block, no slot processing
0.000, 0.000, 0.000, 0.000, 0, Database block store
```
* use `idleAsync` to more evenly divide cpu attention when syncing in
particular - this gives networking better latency
* more strict exception handling in eth2_processor
* allow always-on subscription to all attestation subnets when gossiping
* in subscribe-all-subnets mode, consider all subnets to be stability subnets for ENR purposes
* remove await/async from sub/unsub
* fix unsubscribing with the wrong key (missing _snappy)
* use the right libp2p commit hash
* remove unused async
* fix inspector
* fix subnet calculation in RPC and insert broadcast attestations into node's pool
* unify codepaths to ensure only mostly-checked-to-be-valid attestations enter the pool, even from node's own broadcasts
* update attestation pool tests for new validateAttestation param
Co-authored-by: Dustin Brody <tersec@users.noreply.github.com>
* make subnet cycling more robust; use one stability subnet/validator; explicitly represent gossip enabled/disabled
* fix asymmetry in _snappy being used for subscriptions but not unsubscriptions
* remove redundant comment
* minimal RPC and VC support for informing BN of subnets
* create and verify slot signatures in RPC interface and VC
* loosen old slot check
* because Slot + uint64 works but uint64 + Slot doesn't
* document assumptions for head state use; don't clear stability subnets; guard against VC not having checked an epoch ahead, fixing a crash; clarify unsigned comparison
* revert unsub fix
* checkpoint database at end of each slot
To avoid spending time on synchronizing with the file system while doing
processing, the manual checkpointing mode turns off fsync during
processing and instead checkpoints the database when the slot has ended.
From an SQLite perspective, in WAL mode this guarantees database
consistency but may lead to data loss, which is fine - anything missing
from the beacon chain database can be recovered on the next startup.
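A sketch of the pattern with standard SQLite pragmas; the `exec` helper and the exact settings are assumptions, not the client's actual code:
```
type SqliteHandle = ref object  # stand-in for the real handle type
proc exec(db: SqliteHandle, sql: string) = discard  # hypothetical helper

proc openForProcessing(db: SqliteHandle) =
  db.exec("PRAGMA journal_mode = WAL")    # writes land in the WAL file
  db.exec("PRAGMA synchronous = NORMAL")  # no per-commit fsync; WAL keeps
                                          # the main database consistent

proc onSlotEnd(db: SqliteHandle) =
  # flush the WAL once per slot, during idle time - anything lost in a
  # crash can be re-derived from the network on the next startup
  db.exec("PRAGMA wal_checkpoint(TRUNCATE)")
```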
* log sync status and delay in slot start message
* bump
* detect already-aggregate-voted condition before attestation pool; add is_aggregator tests
* replace pair of attestation-per-epoch tracking lists with single list and remove Option use
* fix attestation condition
* use safer type conversions; add more is_aggregator tests
* don't lag aggregated attestations by a slot
* don't use aggregation topic at all
* use aggregates again, but with aggressively low ATTESTATION_PROPAGATION_SLOT_RANGE; seems to hold on to LH 1.0 nodes
* clean up scaffolding and double ATTESTATION_PROPAGATION_SLOT_RANGE to 16
* increase ATTESTATION_PROPAGATION_SLOT_RANGE to 24
* increase ATTESTATION_PROPAGATION_SLOT_RANGE to 28 and isolate in only used function due to customization; remove TRAILING_DISTANCE machinery
The key change here is that `addChunksAndGenMerkleProofs` is called
with all pending deposits instead of just the deposits included in
the block. The latter was effectively producing merkle proofs against
a different root.
* increase default max peers
also avoid reconnection when opening stream as this might induce
a loop
* Use dial without addresses
* dial back max peers a little
* Revert "Revert "Full "node" RPC calls implementation and fixes to peer lifetime states. (#2065)" (#2082)"
This reverts commit 7cc3dc8027.
* fix nil disconnectedFut crash
* fixes
don't resetPeer, it causes peer miscounts
* disconnect disconnecting peers
...when there's a race.
* avoid connection spamming
* never decrease SeenTable timeout
* only recover ENR for known peers
* seen only when really disconnected
* Handle some web3 timeouts better
* Add support for developer .env files
* Eth1 improvements; Mainnet genesis state
Notable changes:
* The deposits table has been removed from the database. The client
will no longer process all deposits on start-up.
* The network metadata now includes a "state snapshot" of the deposit
contract. This allows the client to skip syncing deposits made prior
to the snapshot (i.e. genesis). Suitable metadata added for Pyrmont
and Mainnet.
* The Eth1 monitor won't be started unless there are validators attached
to the node.
* The genesis detection code is now optional and disabled by default
* Bugfix: The client should not produce blocks that will fail validation
when it hasn't downloaded the latest deposits yet
* Bugfix: Work around the database corruption affecting Pyrmont nodes
* Remove metadata for Toledo and Medalla
* log when database is loading (to avoid confusion)
* generate network keys later during startup
* fix quarantine not scheduling chain of parents for download and
increase size to one epoch
* log validator count, enr and peerid more clearly on startup
* Avoid hangs when wss:// is specified for a non-secure HTTP server
* Produce an ERROR when the web3 provider is unsupported, but still launch the node
This introduces a cache for block summaries, useful for instantiating
the block dag on startup, bringing medalla startup times down from
minutes to seconds.
This is something of a temporary band-aid that would be obsoleted by a
finalized block store.
This reverts commit 63173ab2c1.
It appears the cluster is having trouble staying connected - since the culprit is unknown, this is a first step back towards what was stable.
Notably, this does not fully revert libp2p itself, merely the gossip version.
Validators exiting is normal, no need to scream about it
* avoid reallocating seq on big exit queue
* avoid fetching state cache when updating head (it's rarely needed)
* remove incorrectly implemented live validator counts (avoids memory
allocs)
* Add exponential rewind on MissingParent.
* Try to avoid peers which are useless for syncing.
Fix forward sync restart at proper point.
Fix getLocalWallSlot() to not return slots from the future.
* Fix incorrect logs.
* Fix logging.
Enable peer's status messages log on DEBUG level.
* Fix watch task to monitor operation progress, but not local head progress.
* Add more logging information.
Remove recurring failures detection mechanism.
* Concentrate all sensitive writeFile/createPath calls in one place.
Fix eth2_network_simulation for Windows.
* Remove artifacts.
* fix import
Co-authored-by: Jacek Sieka <jacek@status.im>
* Fix continuous sync queue rewinds on slow PCs.
Fix recurring disconnects on low peer score.
* Calculate average syncing speed, not the current one.
Move speed calculation to different task.
* Address review comments.
* Bump nim-eth to get UseDiscv51 flag
* Switch medalla to discovery v5.1, other targets to v5.0
* Bump nim-eth for better discv5.1 logging
* Bump eth2-testnets for updated medalla bootnodes
Calculating rewards/penalties is slow due to how we compute sets of
attesting validators and then use the sets for inclusion checks, to see
who attested. The dominant function during validated block processing /
epoch processing is hash set building and lookup.
This PR inverts the flow by removing the sets and creating a single
large validator status list, then applying all relevant state
attestations, then updating rewards and penalties.
This provides a 10x speedup to epoch processing which in turn speeds up
both empty slot and block processing - for example, on startup, we
replay all non-finalized blocks to prime fork choice - the same when
validating attestations or replaying states on reorg.
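A condensed sketch of the inverted flow, with all names hypothetical: one flat status record per validator replaces the per-check hash sets.
```
type
  ValidatorStatus = object
    isPreviousEpochAttester: bool
    delta: int64                 # accumulated reward/penalty

proc markAttesters(statuses: var seq[ValidatorStatus],
                   attestingIndices: openArray[seq[int]]) =
  # pass 1: fold every state attestation into the flat list
  for indices in attestingIndices:
    for idx in indices:
      statuses[idx].isPreviousEpochAttester = true

proc applyDeltas(statuses: var seq[ValidatorStatus],
                 reward, penalty: int64) =
  # pass 2: one linear sweep updates rewards and penalties - no hash set
  # lookups per validator
  for s in statuses.mitems:
    s.delta += (if s.isPreviousEpochAttester: reward else: -penalty)
```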
* misc memory and perf fixes
* use EpochRef for attestation aggregation
* compress effective balances in memory (medalla unfinalized: 4gb ->
1gb)
* avoid hitting db when rewinding to head or clearance state
* avoid hitting db when blocks can be applied to in-memory state -
speeds up startup considerably
* avoid storing epochref in fork choice
* simplify and speed up beacon block creation flow - avoids state reload
thanks to head rewind optimization
* iterator-based committee and attestation participation help avoid lots
of small memory allocations throughout epoch transition (40% speedup on
epoch processing, for example during startup)
* add constant for threshold
* update v1.0.0-rc.0 preset spec references
* remove runtime preset ETH1_FOLLOW_DISTANCE from preset files; remove two CI build items to try to keep Travis from timing out
* update attestation extended validation to v1.0.0-rc.0
* attestation block and target must be within same epoch
* remove duplicate attestation epoch/target epoch check
* in spec now
This addresses the issues by detecting and rejecting keystores with
incorrect PBKDF2 and SCrypt params. It also bumps the version of
nim-json-serialization to include a bugfix for incorrect parsing
of json files featuring comments.
It turns out that we often save lots of states in the database that are
the result of empty slot processing only - here, we make sure to only
save a state if a block follows - this fixes several issues:
* empty slot states are not always pruned leading to state database size
explosion
* storing states is (very) slow which slows down processing in general,
so we should only do it when it's likely to be useful
* attestation processing doesn't get stuck on saving random states that
won't appear in the chain history
* in exit pool, filter out already-packaged messages; bundle remaining messages into beaconblocks
* filter messages at block construction time
* allow adding up to intended capacity of buffers, beyond per-block limits
* document rationale/design for filtering mechanism
* fixed #1663 - Integer overflow in compute_start_slot_at_epoch through RPC
* changed the way the overflow check is done - took the approach from PR #1797 - see the comment in PR #1810 for more details
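A guarded conversion might look like this minimal sketch (helper name hypothetical; see the referenced PRs for the approach actually taken):
```
import std/options

const SLOTS_PER_EPOCH = 32'u64

func startSlotChecked(epoch: uint64): Option[uint64] =
  # epoch * SLOTS_PER_EPOCH overflows uint64 for attacker-supplied epochs
  # arriving over RPC, so check before multiplying
  if epoch > high(uint64) div SLOTS_PER_EPOCH:
    none(uint64)
  else:
    some(epoch * SLOTS_PER_EPOCH)
```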
this would reallocate the attestation queue on every attestation and
other call to update_time, causing quite the overhead (~10% cpu spent
when gossiping)
* addPeer() and addPeerNoWait() now return PeerStatus, not bool.
Minor refactoring of PeerPool.
Fix tests.
* Refactor PeerPool.
Add lenSpace.
Add tests for lenSpace.
PeerPool.add procedures now return different error codes.
Fix SyncManager break/continue problem.
Fix connectWorker break/continue problem.
Refactor connectWorker and discoveryLoop.
Fix incoming/outgoing blocking problem.
* Refactor discovery loop.
Add checkPeer.
* Fix logic and compilation bugs.
* Adjust position of debugging log.
* Fix issue with maximum peers in PeerPool.
Optimize node record decoding.
* fix discoveryLoop.
* Remove aliases and fix tests using aliases.
* Bump BLST
* Test for https://github.com/supranational/blst/issues/22 regression
* Use SHA256 from BLST + bump nim-blscurve to reenable fno-tree-vectorize
* fix SHA256 imports on non-BLST platforms
* import fixes again
* can't prefix with nimcrypto
* address review comment [skip ci]
* {.noInit.} on the digests
about 40% better slot processing times (with LTO enabled) - these don't
do BLS but are used
heavily during replay (state transition = slot + block transition)
tests using a recent medalla state and advancing it 1000 slots:
```
./ncli slots --preState2:state-302271-3c1dbf19-c1f944bf.ssz --slot:1000 --postState2:xx.ssz
```
pre:
```
All time are ms
Average, StdDev, Min, Max, Samples, Test
Validation is turned off meaning that no BLS operations are performed
39.236, 0.000, 39.236, 39.236, 1, Load state from file
0.049, 0.002, 0.046, 0.063, 968, Apply slot
256.504, 81.008, 213.471, 591.902, 32, Apply epoch slot
28.597, 0.000, 28.597, 28.597, 1, Save state to file
```
cast:
```
All time are ms
Average, StdDev, Min, Max, Samples, Test
Validation is turned off meaning that no BLS operations are performed
37.079, 0.000, 37.079, 37.079, 1, Load state from file
0.042, 0.002, 0.040, 0.090, 968, Apply slot
215.552, 68.763, 180.155, 500.103, 32, Apply epoch slot
25.106, 0.000, 25.106, 25.106, 1, Save state to file
```
cast+rewards:
```
All time are ms
Average, StdDev, Min, Max, Samples, Test
Validation is turned off meaning that no BLS operations are performed
40.049, 0.000, 40.049, 40.049, 1, Load state from file
0.048, 0.001, 0.045, 0.060, 968, Apply slot
164.981, 76.273, 142.099, 477.868, 32, Apply epoch slot
28.498, 0.000, 28.498, 28.498, 1, Save state to file
```
cast+rewards+shr:
```
All time are ms
Average, StdDev, Min, Max, Samples, Test
Validation is turned off meaning that no BLS operations are performed
12.898, 0.000, 12.898, 12.898, 1, Load state from file
0.039, 0.002, 0.038, 0.054, 968, Apply slot
139.971, 68.797, 120.088, 428.844, 32, Apply epoch slot
24.761, 0.000, 24.761, 24.761, 1, Save state to file
```
* Slashing protection + interchange initial commit
* Restrict the when UseSlashingProtection dance in other modules
* Integrate slashing tests in other all_tests
* Add attestation slashing protection support
* Add a message that mentions if built with/without slashing protection
* no-op the initialization proc
* test slashing protection in Jenkins (temp)
* where to configure NIMFLAGS in Jenkins ...
* Jenkins -> ensure Built with slashing protection
* Add slashing protection complete import
* use Opt.get(otherwise)
* Don't use negation in proc name
* Turn slashing protection on by default
* Refactor peer_pool.
Fix eth2_network peer counters.
Fix PeerPool not allowing more peers to be added when empty space is available.
* Remove unused imports.
* Add test for a bug.
* Fix eth2_network disconnect: it should deletePeer, not release.
More PeerPool refactoring.
* remove some superfluous gcsafes
* remove getTailState (unused)
* don't store old epochrefs in blocks
* document attestation pool a bit
* remove `pcs =` cruft from log
* skeleton of attester slashing pool & validators
* add skeleton for proposer slashings and voluntary exits; rename pool to more inclusive exit pool to stay consistent with all three; ensure is initialized by beacon_node so is safe to merge, even if it doesn't do much yet
* add ncli_db subcommand to prune database of unnecessary blocks, states, and state roots
* tweak comments
* reduce default aggressiveness in pruning old states
* move copyPrunedDatabase() to ncli_db, as it's not generally useful as part of beacon_chain_db and doesn't use any internal interfaces
* Syncing workers now not bound to peers.
Sync status is now printed in statusbar.
* Add `SyncQueue.outSlot` to statusbar too.
* Add `inRangeEvent` and `rangeAge` parameter.
* Fix rangeAge so it does not depend on SyncQueue's latest slot.
Fix syncManager to start from latest local head slot.
* Add notInRange event.
* Remove suspects field.
Validator duties proceed slot-by-slot - we should not start a new
validator duty iteration before the previous one is gone or we might run
into consistency and voting issues
* Quick fix to prune some states, pending smarter state storage
Adverse effects might include slow rewinds - typically the protocol
doesn't ask for pre-finalized states but RPC might
* document issue, add test
* fix cache miss log
per spec, we must half-close the request stream - not doing so may lead
to the other end failing to start processing our request, leading to
timeouts.
In particular, this fixes many sync problems that have been seen on
medalla.
* remove safeClose - close no longer raises
* use per-chunk timeouts in request processing
* stop discarding non-existent future epochs during epoch state transitions; remove a pointless StateCache() construction in advance_slots()
* update nbench to pass StateCache to process_slots()
* ignore sqlite WAL journal files in git; switch attestation resolved from info to debug
* promote sent attestations/blocks to notice rather than demote resolved attestations/blocks to debug
This implements clock disparity handling, resolving a part of
https://github.com/status-im/nim-beacon-chain/issues/1367
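A sketch of what the duration-based BeaconTime (first bullet below) enables; constants are from the spec, names are hypothetical:
```
type BeaconTime = distinct int64  # nanoseconds since genesis; may be negative

const
  NanosPerSlot = 12_000_000_000'i64  # SECONDS_PER_SLOT = 12
  DisparityNs = 500_000_000'i64      # MAXIMUM_GOSSIP_CLOCK_DISPARITY (500ms)

func slotWindow(t: BeaconTime): (int64, int64) =
  # the range of slots a well-behaved peer could currently be in, given
  # our local time and the allowed clock disparity
  ((int64(t) - DisparityNs) div NanosPerSlot,
   (int64(t) + DisparityNs) div NanosPerSlot)
```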
* make BeaconTime a duration for fractional seconds
* factor out attestation/aggregate validation
* simplify recording of queued attestations
* simplify attestation signature check
* fix blocks_received metric
* add some trivial validation tests
* remove unresolved attestation table - attestations for unknown blocks
are dropped instead (cannot verify their signature)
* harden beacon_pending_deposits metrics calculation
* ...
* move beacon_pending_deposits and beacon_processed_deposits_total out of specs and into chain DAG
* initial - cheaper pruning - addresses #1534
* Pass tests: update offset when pruning, proper handling of pruned parents
* Use options instead of nil for nilable newHead (finalization passing but root cause not solved)
* First line of defense against stackoverflow in tests
* Fix compute_delta offset after pruning
* Rebase fix - medalla ready
* Remove Option[BlockRef]
Replace shuffling function with zrnt version - `get_shuffled_seq` in
particular puts more strain on the GC by allocating superfluous seqs,
which turns out to have a significant impact on block processing (when
replaying blocks, for example) - 4x improvement on non-epoch, 1.5x on
epoch blocks (replay is done without signature checking)
Medalla, first 10k slots - pre:
```
Loaded 68973 blocks, head slot 117077
All time are ms
Average, StdDev, Min, Max, Samples, Test
Validation is turned off meaning that no BLS operations are performed
76855.848, 0.000, 76855.848, 76855.848, 1, Initialize DB
1.073, 0.914, 0.071, 12.454, 7831, Load block from database
31.382, 0.000, 31.382, 31.382, 1, Load state from database
85.644, 30.350, 3.056, 466.136, 7519, Apply block
506.569, 91.129, 130.654, 874.786, 312, Apply epoch block
```
post:
```
Loaded 68973 blocks, head slot 117077
All time are ms
Average, StdDev, Min, Max, Samples, Test
Validation is turned off meaning that no BLS operations are performed
72457.303, 0.000, 72457.303, 72457.303, 1, Initialize DB
1.015, 0.858, 0.070, 11.231, 7831, Load block from database
28.983, 0.000, 28.983, 28.983, 1, Load state from database
21.725, 17.461, 2.659, 393.217, 7519, Apply block
324.012, 33.954, 45.452, 440.532, 312, Apply epoch block
```
When blocks and attestations arrive, they are SSZ-decoded twice: once
for validation and once for processing. This branch enqueues the decoded
block directly for processing, avoiding the second, slow
deserialization.
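A sketch of the single-decode flow (names hypothetical): the gossip boundary decodes once, and the decoded object is what travels through the queue described in the bullets below.
```
proc onGossip[T](queue: var seq[T],
                 decode: proc(data: seq[byte]): T,
                 validate: proc(msg: T): bool,
                 data: seq[byte]) =
  # decode exactly once at the gossip boundary...
  let msg = decode(data)
  if validate(msg):
    # ...and enqueue the decoded object; processing never re-deserializes
    queue.add msg
```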
* move processing of blocks and attestations to queue
* ...and out from beacon_node
* split attestation processing into attestations and aggregates
* also updates metrics
* clean up logging to better follow the lifetime of gossip: arrival,
validation and processing
* drop attestations and aggregates if there are too many
* try to prioritise blocks and aggregates before single-validator
attestations
* evaluate block attestations under the epochref of the block - this is
what the state transition function does
* avoid copying attestation seq unnecessarily
* avoid unnecessary hashset for unslashed indices
* don't sort shuffled_validator_indices, just get them directly with
iteration
* grab full epoch of proposer indices while we have the data available -
they'll get cached and reused
* avoid computing active validator set when not used for logging
* collect all epochrefs in specific blocks to make them easier to find
and to avoid lots of small seqs
* reuse validator key databases more aggressively by comparing keys
* make state cache available from within `withState`
* make epochRef available from within onBlockAdded callback
* integrate getEpochInfo into block resolution and epoch ref logic such
that epochrefs are created when blocks are added to pool or lazily when
needed by a getEpochRef
* fill state cache better from EpochRef, speeding up replay and
validation
* store epochRef in specific blocks to make them easier to find and
reuse
* fix database corruption when state is saved while replaying quarantine
* replay slots fully from block pool before processing state
* compare bls values more smartly
* store epoch state without block applied in database - it's recommended
to resync the node!
this branch will drastically speed up processing in times of long
non-finality, as well as cut memory usage by 10x during the recent
medalla madness.
* add attestation processing queue so attestations don't get processed
too early
* rework justified slot delay to match spec / lighthouse better
* keep less state in fork choice
* request epochref less
* add --enable-logtrace argument to launch_local_testnet
* scan for all available logfiles
* remove specific filename references
* update v0.11.3 spec ref to v0.12.2
* Bump BLSCurve
* Use unified aggregation API
* use new blscurve with unified aggregate API
* bump
* fix toRaw
* replace state_sim combine with AggregateSignature
* Fix 32-bit
* Fix 32-bit for real and test deactivating ccache for fno-tree-loop-vectorize flag
* change compilation switches to narrow down Linux issue
* Use -fno-tree-vectorize to disable both tree-loop-vectorize and tree-slp-vectorize
* blscurve now disables both Loop and SLP vectorization
* Add tests for the miracl/milagro fallback
* Travis has max log size of 4MB
* Test with Miracl in the finalization test
* fix state_sim log level
* Comment out the slow fallback tests
* fix invalid state root being written to database
When rewinding state data, the wrong block reference would be used when
saving the state root - this would cause state loading to fail by
loading a different state than expected, preventing blocks from being
applied.
* refactor state loading and saving to consistently use and set
StateData block
* avoid rollback when state is missing from database (as opposed to
being partially overwritten and therefore in need of rollback)
* don't store state roots for empty slots - previously, these were used
as a cache to avoid recalculating them in state transition, but this has
been superseded by hash tree root caching
* don't attempt loading states / state roots for non-epoch slots, these
are not saved to the database
* simplify rewinder and clean up functions after caches have been
reworked
* fix chaindag logscope
* add database reload metric
* re-enable clearance epoch tests
* names
* initial dynamic subscribe/unsubscribe for attestations to/from subnets
* implement random stability subnet and clean up
* switch from HashSet[uint64] to set[uint8] (illustrated after this list)
* refactor subnet logic out from beacon_node and actual (un)subscribing
* only try to subscribe to marginally different subnets
* add assertions
* maintain ENR subnets
* assert that beacon_node and eth2_network have consistent view of subscribed subnets
* disable actual cycling
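On the HashSet[uint64] -> set[uint8] switch above: Nim's built-in set over a small ordinal type is a fixed-size stack bitset, so tracking the 64 attestation subnets needs no heap allocation at all:
```
var subnets: set[uint8]   # fixed 256-bit stack bitset, no heap allocation
subnets.incl 3'u8         # subscribe to subnet 3
subnets.incl 17'u8
assert 3'u8 in subnets
assert subnets.card == 2  # count of subscribed subnets
```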
* Eliminate possibilities for range errors and overflows
* Handle invalid requests for future slots more properly
* Eliminate the confusion surrounding the MAX_REQUEST_BLOCKS constant
Addresses https://github.com/status-im/nim-beacon-chain/issues/1366
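A minimal sketch of the hardened request handling (names hypothetical; MAX_REQUEST_BLOCKS is the spec constant):
```
const MAX_REQUEST_BLOCKS = 1024'u64  # spec constant

func clampBlocksByRange(startSlot, count, headSlot: uint64): (uint64, uint64) =
  # serve at most the spec maximum, never walk past our head, and avoid
  # overflowing slot arithmetic on attacker-controlled inputs
  let
    start = min(startSlot, headSlot)  # ignore requests for future slots
    n = min(count, MAX_REQUEST_BLOCKS)
  (start, min(n, headSlot - start + 1))
```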
removed in 0.12.2 - the flow, in particular when the other peer doesn't
support snappy, is hard to follow because of the trial-and-error
approach - removing it simplifies things and removes some of the
hard-to-read parts of the thunking etc
this resolves some peer counting issues that were happening because the
lifetime future in PeerInfo was unreliable (multiple PeerInfo instances
existed per peer)
In addition, this solves another race condition: when connecting to a
peer and later dialling that protocol, it is not certain that the same
connection will be used if there's a concurrent incoming peer connection
ongoing - better not make too many assumptions about who sent statuses
when.
* Allow sync manager to process blocks one by one.
* Log storeBlock() and updateHead() duration.
* Calculate duration only for blocks added without any error.
* Fix float compilation error.
* Fix duration.
* Fix SyncQueue tests.
* remove one cache, add another
This cache removes the need for rewinding in most of the attestation
validation flow, since the attestations come from one of two epochs and
must be targeting a viable block.
Additionally, it also removes all state caches, which are less likely to
be used overall - more metrics are needed to track the rewinding.
One risk is that when chains don't finalize, we'll have lots of epochrefs
in memory meaning lots of validator key databases, most being exactly
the same. This can be addressed in any number of ways. Some of the
memory usage is mitigated by the fact that we previously had lots of big
state caches and now we're keeping only keys instead.
* cleanups
* doc
Rationale: this makes moving keys between clients easier
Other changes:
* Restore building with custom presets
(defaultRuntimePreset is not a template in this mode)