* collect signature production and verification in one place
Signatures are made over data and domain - here we collect all such
activities in one place.
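Roughly, the idea (a minimal sketch, not the actual spec code - the real compute_signing_root merkleizes a SigningData container of object root and domain rather than hashing a concatenation):

```nim
import nimcrypto

type
  Root = array[32, byte]
  Domain = array[32, byte]

# Illustrative stand-in only: a plain sha256 over the concatenation is used
# here just to show that every signature commits to a domain as well as to
# the data being signed.
proc signingRoot(objectRoot: Root, domain: Domain): Root =
  var ctx: sha256
  ctx.init()
  ctx.update(objectRoot)
  ctx.update(domain)
  ctx.finish().data
```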
Also:
* security: fix cast-before-range-check
* log block/attestation verification consistently
* run block verification based on `getProposer` in its own history
* clean up some unused stuff
* import
* missing raises
* 3gb vs 12gb for 4000 epochs of witti
* 3-4x sync blocks/sec performance improvement on my FDE SSD drive
* generate less quirky code for primitive types
* random fixes
* create dump dir on startup
* don't crash on failure to write dump
* fix a few `uint64` instances being used when indexing arrays - this
should be a compile error but isn't due to compiler bugs
* fix standalone test_block_pool compilation
* add signed block processing in ncli
* reuse cache entry instead of allocating a new one
* allow for small clock disparities when validating blocks
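For the clock-disparity item, a minimal sketch of the check, assuming a 500 ms tolerance along the lines of the gossip spec's MAXIMUM_GOSSIP_CLOCK_DISPARITY (helper names here are illustrative):

```nim
import std/times

const secondsPerSlot = 12'i64

proc slotStart(genesis: Time, slot: uint64): Time =
  ## wall-clock time at which `slot` begins
  genesis + initDuration(seconds = int64(slot) * secondsPerSlot)

proc withinClockDisparity(genesis: Time, blockSlot: uint64, wallTime: Time): bool =
  ## accept blocks whose slot starts at most 500 ms ahead of our wall clock,
  ## so small clock differences between peers don't cause rejections
  slotStart(genesis, blockSlot) <= wallTime + initDuration(milliseconds = 500)
```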
- each BN gets half of its previous validators as inProcess and the other half goes to the respective VC for that BN, using separate data dirs into which the keys are copied
- also removed a few command line options which are no longer necessary
- block proposals originating from a VC are propagated from one BN to the rest properly
- other cleanup & moving code back, since it is no longer used elsewhere
These are the 3 most important things in the life of a beacon-node -
finalization checkpoint for security, head + time for seeing that we're
synced and working.
The order given is the expected increasing order in which things should be
moving - finalization lags head, and head lags clock - the structure
helps the user remember the relationship, and also spot when time and
head are getting out of sync.
A validator would also want to know when the next action is going to be
(ie next attestation / block production) but that's for another day.
* reorder ssz
* split into hash_trees and ssz_serialization, roughly, for hashing and
IO
* move bitseqs into ssz (from stew)
* clean up imports
* docs, imports
Add SeenTable to avoid continuously re-attempting connections to dead peers.
Refactor onSecond.
Block backward sync while forward sync is working.
SyncManager now checks responses against their corresponding requests (+ tests).
SyncManager now watches for a non-progressing local_head_slot and resets the SyncQueue.
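The SeenTable idea, roughly (names and the eviction policy below are assumptions, not the actual implementation): remember when a peer was declared dead and skip dialing it again until a backoff period has passed.

```nim
import std/[tables, times]

type
  SeenTable = object
    backoff: Duration                # how long to avoid a dead peer
    entries: Table[string, Time]     # peer id -> when it was marked dead

proc init(T: type SeenTable, backoff = initDuration(minutes = 5)): T =
  T(backoff: backoff, entries: initTable[string, Time]())

proc markDead(s: var SeenTable, peerId: string) =
  s.entries[peerId] = getTime()

proc isSeen(s: var SeenTable, peerId: string): bool =
  ## true if dialing this peer should be skipped for now
  if peerId notin s.entries:
    return false
  if getTime() - s.entries[peerId] > s.backoff:
    s.entries.del(peerId)            # backoff expired - allow a new attempt
    false
  else:
    true
```

In this sketch, markDead would be called when a dial or request fails, and isSeen consulted before each connection attempt.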
* wip: cache
* cache lists and arrays of complex objects (5x block processing speed
on ncli_db)
trivial baseline cache that stores the tree in a flat memory structure (sketched below, after the timings)
* support array of uint64
* work around type issues
* more type compiler bug workarounds
* cache balances, more type fixes
* index type
* ncli_db: add validation flag, better ux
* int64 fixes
* test fix
* "oops"
```
647.913, 0.000, 647.913, 647.913, 1, Initialize DB
0.540, 0.402, 0.340, 9.451, 619, Load block from database
40.268, 0.000, 40.268, 40.268, 1, Load state from database
0.498, 0.150, 0.343, 0.930, 596, Apply block
3.548, 11.005, 0.729, 54.022, 23, Apply epoch block
```
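A rough sketch of the "tree in flat memory structure" idea (my reading, not the actual cache code): keep the merkle tree of a list in one array with heap-style indexing, so lookups and partial re-hashes after an update need no pointer chasing or per-node allocations.

```nim
type
  Chunk = array[32, byte]
  FlatTree = object
    leaves: int           # number of leaf chunks (assumed a power of two)
    nodes: seq[Chunk]     # 2*leaves entries, index 0 unused

# Heap-style layout: node 1 is the root, node i has children 2*i and 2*i+1,
# and the leaves occupy indices leaves .. 2*leaves-1.
proc init(T: type FlatTree, leaves: int): T =
  T(leaves: leaves, nodes: newSeq[Chunk](2 * leaves))

proc root(t: FlatTree): Chunk = t.nodes[1]

proc pathToRoot(t: FlatTree, leaf: int): seq[int] =
  ## node indices that must be re-hashed after leaf `leaf` changes
  var n = t.leaves + leaf
  while n >= 1:
    result.add n
    n = n div 2
```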
* support all basic types
* cleanups
* a few more cleanups
* switch state transition caching usage to shuffled active validator indices to match EpochRef
* refactor the EpochRef -> StateCache transformation; elide pointless mapIt
* limit state passed between get_beacon_committee(...) and compute_committee(...)
* tweaks
The first offset of an SSZ object should always have a fixed constant
value. Otherwise, some unused bytes may appear between the fixed portion
and the dynamic portion.
Please note that this fix shuts down the minimal forward compatibility
currently supported by the SSZ format (and thus, the expected behavior
must be clarified in the SSZ spec).
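A minimal sketch of the stricter check (hypothetical helpers and error type, not the actual SSZ module code):

```nim
type SszOffsetError = object of ValueError   # hypothetical error type

proc readOffset(data: openArray[byte], pos: int): uint32 =
  ## SSZ offsets are little-endian uint32 values stored in the fixed portion
  uint32(data[pos]) or
    (uint32(data[pos + 1]) shl 8) or
    (uint32(data[pos + 2]) shl 16) or
    (uint32(data[pos + 3]) shl 24)

proc checkFirstOffset(firstOffset: uint32, fixedPortionSize: int) =
  ## the first offset must point exactly at the end of the fixed portion;
  ## anything larger would leave unused "gap" bytes between the fixed and
  ## dynamic parts that survive a round-trip unnoticed
  if firstOffset != uint32(fixedPortionSize):
    raise newException(SszOffsetError,
      "first SSZ offset must equal the size of the fixed portion")
```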
* plumbing between block pool and state transition functions around active validator indices and committees
* have shared epochrefs followed by blockref tree while allowing for skipped slots
* factor out the epoch info extraction; document how the EpochRef follows forks
* be stricter about SSZ length prefix
* compute zeroHash list at compile time (see the recurrence sketch after this list)
* remove SSZ schema stuff
* move SSZ navigation to ncli
* cleanup a few leftover openArray uses
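For the zeroHash item above: the list follows the recurrence zeroHashes[0] = 32 zero bytes, zeroHashes[i] = H(zeroHashes[i-1] ++ zeroHashes[i-1]). A runtime sketch using nimcrypto (the actual change evaluates this at compile time; the depth constant here is illustrative):

```nim
import nimcrypto

const treeDepth = 32            # illustrative depth, not the project's constant

type Digest = array[32, byte]

proc computeZeroHashes(): array[treeDepth, Digest] =
  ## zeroHashes[i] is the root of a fully zeroed subtree of height i:
  ## level 0 is an all-zero chunk (the default-initialized result), and
  ## each level above hashes two copies of the level below
  for i in 1 ..< treeDepth:
    var ctx: sha256
    ctx.init()
    ctx.update(result[i - 1])
    ctx.update(result[i - 1])
    result[i] = ctx.finish().data
```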
* Add ability to reset state of sync manager.
Fix bug where sync got stuck on `zero-point` reset.
Fix bug where sync got stuck when some of the workers were waiting for a failing one.
* Remove debugging comments and imports.
* Remove unused pendingLock.
The lack of a body for `goodbye` in sync_protocol.nim was preventing
the respective LibP2P protocol from being mounted and advertised on the
network.
Adding a body fixes that, but I've also made some changes in the
P2P protocol codegen that will prevent the issue from happening
again (no body is now considered the equivalent of having an empty
body).
the cast worked around the bug at compile time, but introduced a runtime
error, because the pre-cast type was still being used during
deserialization
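For illustration of the cast-vs-conversion distinction behind this and the earlier cast-before-range-check fix (the values and types here are made up):

```nim
let raw: uint64 = 0x1_0000_0042'u64

# cast: reinterprets/truncates the bit pattern - no error, wrong value
let truncated = cast[uint32](raw)
doAssert truncated == 0x42'u32

# the safe pattern: range-check first, then use a proper conversion
if raw <= uint64(high(uint32)):
  echo uint32(raw)
else:
  echo "value out of range for uint32"
```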
- we have a new binary which connects via RPC to the respective BN and has an internal clock - waking it up on every slot
- the BN has a new option called --external-validators; currently, in order to run the VC binaries, we need to pass EXTERNAL_VALIDATORS=yes to make
- factored some code out of beacon_node.nim for easier reuse in validator_api.nim and validator_client.nim
- the VC loads its associated private keys from the datadir for its BN
- most of the validator API calls have been implemented as stubs.
- the VC polls its BN at the start of each epoch - getting a list of all active validators for the current epoch - and then continues to request blocks and sign them with its appropriate validators when necessary
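Roughly, the loop driven by that internal clock (all names below are placeholders, not the actual RPC API):

```nim
import std/asyncdispatch

# Placeholder RPC client and calls - purely illustrative, not the BN's
# actual validator API.
type BeaconNodeRpc = object

proc getActiveValidators(bn: BeaconNodeRpc, epoch: uint64): Future[seq[string]] {.async.} =
  discard

proc maybeProposeAndSign(bn: BeaconNodeRpc, slot: uint64,
                         validators: seq[string]) {.async.} =
  discard

const slotsPerEpoch = 32'u64

proc runSlotLoop(bn: BeaconNodeRpc) {.async.} =
  var
    slot = 0'u64
    validators: seq[string]
  while true:
    if slot mod slotsPerEpoch == 0:
      # at each epoch start, ask the BN which of our validators are active
      validators = await bn.getActiveValidators(slot div slotsPerEpoch)
    # ask the BN for a block and sign it if one of our validators proposes
    await bn.maybeProposeAndSign(slot, validators)
    await sleepAsync(12_000)        # wake up once per 12 s slot
    slot += 1
```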
* in makeBeaconBlock - use rollback instead
* in tests - this helps state_sim give more accurate data and makes it
30% faster
* fix some usages of raw BeaconState
* update more spec refs in beacon_chain/spec/presets; skip skipped constant sanity checks from the markdown reports' perspective as well
* mark skipped as skipped in markdown
* Fix nbench compilation with HashedBeaconState
* Add nbench to tooling
* use newClone - fix 265e01e404 (r425198575)
* Detail advance_slot and hashTreeRoot
* Report throughput
* Fallback for ARM
* Windows does not support inline ASM
The work on this was started last week while I was waiting
for a decision on the "Async Snappy" PR. It was prompted by
a failing test in the test suite, where the HashingStream
was inserting some incorrectly padded chunks that affected
the result of `hash_tree_root`. Instead of working around
the problem in the HashingStream, I've decided to implement
a planned optimisation that allows us to remove the hashing
stream altogether.
With the optimisation in place, `hash_tree_root` will now
use only stack memory, and only the precise amount necessary
to build the chunk-merging tree.
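A sketch of the idea (not the actual implementation): chunks are merged bottom-up while keeping at most one pending digest per tree level, so memory use is bounded by tree depth rather than input size.

```nim
import nimcrypto

type Digest = array[32, byte]

proc mergeChunks(chunks: openArray[Digest]): Digest =
  ## Merge leaf chunks bottom-up, keeping at most one pending subtree root
  ## per tree level. (A real implementation also pads the leaf count to a
  ## power of two using precomputed zero hashes.)
  var
    stack: seq[Digest]      # pending subtree roots
    counts: seq[int]        # leaves covered by each pending root
  for chunk in chunks:
    stack.add chunk
    counts.add 1
    # whenever two neighbouring subtrees have equal size, merge them now
    while stack.len >= 2 and counts[^1] == counts[^2]:
      var ctx: sha256
      ctx.init()
      ctx.update(stack[^2])
      ctx.update(stack[^1])
      let merged = ctx.finish().data
      stack.setLen(stack.len - 1)
      counts.setLen(counts.len - 1)
      stack[^1] = merged
      counts[^1] = counts[^1] * 2
  if stack.len > 0:
    result = stack[0]
```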
* switch state cache to use ref statedata objects to limit memory usage
* more directly initialize ref StateData
* use HashedBeaconState instead of StateData to try to fix memory leak
* switch cache to seq[ref HashedBeaconState]
* remove unused import
Co-authored-by: Ștefan Talpalaru <stefantalpalaru@yahoo.com>
* sync fixes
* fix Status message finalized info
* work around sync starting before initial status exchange
* don't fail block on deposit signature check failure (fixes #989)
* print ForkDigest and Version nicely
* dump incoming blocks
* fix crash when libp2p peer connection is closed
* update chunk size to 16 to work around missing blocks when syncing
* bump libp2p
* bump libp2p
* better deposit skip message