Other changes:
* More efficient search for the TTD block.
* Add timeouts to all REST requests during trusted node sync.
Fixes #4037
* Removed support for storing a deposit snapshot in the network
metadata.
* Types and scaffolding for EIP-4844
This commit adds the EIP-4844 spec types, and fills in
scaffolding/boilerplate for the use of these types across the repo.
None of the actual EIP-4844 logic is introduced yet.
This follows the pattern used by @tersec when introducing Capella (#4276).
* use eth2-networks fork
* review feedback: add static check EIP4844_FORK_EPOCH == FAR_FUTURE_EPOCH
* review feedback: remove EIP4844 from /eth/v1/config/spec response
* Cleanup / review feedback
* Fix REST test
In Jenkins CI we run two instances of the unit tests concurrently.
This can trigger CI failures when the same port numbers are re-used
by the different test instances. Fixed one more instance of this by
allowing user configuration of the base port number.
* Initial commit.
* NextAttestationEntry type.
* Add doppelgangerCheck and actual check.
* Recover deleted check.
* Remove NextAttestationEntry changes.
* More cleanups for NextAttestationEntry.
* Address review comments.
* Remove GENESIS_EPOCH specific check branch.
* Decrease number of full epochs for doppelganger check in VC.
Co-authored-by: zah <zahary@status.im>
The various `PeerScore` constants are used for both beacon blocks and
LC objects, and will likely also find use for EIP4844 blob sidecars.
Renaming them to use more generically applicable names not referring
to blocks explicitly anymore.
We currently use `BlockError` for both beacon blocks and LC objects.
In light of EIP4844, we will likely also use it for blob sidecars.
To avoid confusion, renaming it to the more generic `VerifierError`,
and updating its documentation to be more generic.
As a follow-up, to avoid long lines, also renaming the `block_processor`'s
`BlockProcessingCompleted.completed`->`ProcessingStatus.completed` and
`BlockProcessingCompleted.notCompleted`->`ProcessingStatus.notCompleted`
This PR removes a bunch of code to make TNS aware of era files, avoiding
a duplicated backfill when era files are available.
* reuse chaindag for loading backfill state, replacing the TNS homebrew
* fix era block iteration to skip empty slots
* add tests for `can_advance_slots`
When the EL/Builder fails to produce an execution payload, we fall back
to an empty `ExecutionPayload`. Even though it contains no transactions
it should refer to the configured fee recipient. This is useful for
privacy reasons (not revealing the reason for the empty payload) and for
compliance with additional fee recipient rules by staking pools.
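A sketch of the intended behaviour (Python pseudocode; the real code is
Nim, and this `ExecutionPayload` is a stand-in carrying only the fields
relevant here):
```python
from dataclasses import dataclass, field

@dataclass
class ExecutionPayload:
    # Stand-in type; only the fields relevant to the fallback are shown.
    fee_recipient: bytes = b"\x00" * 20      # 20-byte Eth1 address
    transactions: list = field(default_factory=list)

def fallback_payload(configured_fee_recipient: bytes) -> ExecutionPayload:
    # Even though the payload carries no transactions, it must still
    # name the configured fee recipient rather than a zero address.
    return ExecutionPayload(fee_recipient=configured_fee_recipient,
                            transactions=[])
```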
* move duty tracking code to `ActionTracker`
* fix earlier duties overwriting later ones
* re-run subnet selection when new duty appears
* log upcoming duties as soon as they're known (vs 4 epochs before)
Currently, we require genesis and a checkpoint block and state to start
from an arbitrary slot - this PR relaxes this requirement so that we can
start with a state alone.
The current trusted-node-sync algorithm works by first downloading
blocks until we find an epoch-aligned non-empty slot, then downloading
the state via slot.
However, current
[proposals](https://github.com/ethereum/beacon-APIs/pull/226) for
checkpointing prefer the finalized state as
the main reference - this allows simpler access control and caching
on the server side - in particular, this should help checkpoint-syncing
from sources that have a fast `finalized` state download (like Infura
and Teku) but are slow when accessing state via slot.
Earlier versions of Nimbus will not be able to read databases created
without a checkpoint block and genesis. In most cases, backfilling makes
the database compatible except where genesis is also missing (custom
networks).
* backfill checkpoint block from libp2p instead of checkpoint source,
when doing trusted node sync
* allow starting the client without genesis / checkpoint block
* perform epoch start slot lookahead when loading tail state, so as to
deal with the case where the epoch start slot does not have a block
* replace `--blockId` with `--state-id` in TNS command line
* when replaying, also look at the parent of the last-known-block (even
if we don't have the parent block data, we can still replay from a
"parent" state) - in particular, this clears the way for implementing
state pruning
* deprecate `--finalized-checkpoint-block` option (no longer needed)
* Allow chain dag without genesis / block
This PR enables the initialization of the dag without access to blocks
or genesis state - it is a prerequisite for implementing a number of
interesting features:
* checkpoint sync without any block download
* pruning of blocks and states
* backfill checkpoint block
When EL `newPayload` is slow (e.g., Raspberry Pi with Besu), the epoch
and shuffling caches tend to fill up with multiple copies per epoch when
processing gossip and performing validator duties close to wall slot.
The old strategy of evicting oldest epoch led to the same item being
evicted over and over, leading to blocking of over 5 minutes in extreme
cases where alternate epochs/shuffling got loaded repeatedly.
Changing the cache eviction strategy to least-recently-used seems to
improve the situation drastically. A simple implementation was selected
based on single linked-list without a hashtable.
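A minimal Python sketch of the idea - an LRU over a singly linked list,
with move-to-front on hit and tail eviction, and no hashtable (the
actual Nim implementation differs in detail):
```python
class Node:
    __slots__ = ("key", "value", "next")
    def __init__(self, key, value, next=None):
        self.key, self.value, self.next = key, value, next

class ListLru:
    """Linear-scan LRU: fine for small capacities such as epoch caches."""
    def __init__(self, capacity):
        self.capacity, self.head, self.len = capacity, None, 0

    def get(self, key):
        prev, node = None, self.head
        while node is not None:
            if node.key == key:
                if prev is not None:        # hit: move node to front
                    prev.next, node.next, self.head = node.next, self.head, node
                return node.value
            prev, node = node, node.next
        return None                         # miss

    def put(self, key, value):
        if self.len == self.capacity:       # evict least recently used (tail)
            prev, node = None, self.head
            while node.next is not None:
                prev, node = node, node.next
            if prev is None:
                self.head = None
            else:
                prev.next = None
            self.len -= 1
        self.head = Node(key, value, self.head)
        self.len += 1
```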
* avoid database race-condition inconsistency after fcU `INVALID` then crash
* ensure head doesn't fall behind finalized; add more tests for head movement/reloading DAG
* detect mismatch of config and binary
When loading configuration that sets keys that Nimbus bakes into the
binary at compile-time, raise an error if the config is incompatible
instead of ignoring the conflicting value.
Since these files may have been created in a previous run or manually,
we want to keep loading them even on nodes that don't enable the
keystore API (for example static setups)
Other changes:
* log keystore loading progressively (#3699)
* print initial fee recipient when loading validators
* log dynamic fee recipient updates
When the sync queue processes results for a blocks by range request,
and the requested range contained some slots that are already finalized,
`BlockError.MissingParent` currently leads to `PeerScoreBadBlocks` even
when the error occurs on a non-finalized slot in the requested range.
This patch changes the scoring in that case to `PeerScoreMissingBlocks`
for consistency with range requests solely covering non-finalized slots,
and, likewise, rewinds the sync queue to the next `rewindSlot`.
* more efficient forkchoiceUpdated usage
* await rather than asyncSpawn; ensure head update before dag.updateHead
* use action tracker rather than attached validators to check for next slot proposal; use wall slot + 1 rather than state slot + 1 to correctly check when missing blocks
* re-add two-fcU case for when newPayload not VALID
* check dynamicFeeRecipientsStore for potential proposal
* remove duplicate checks for whether next proposer
`news` has a few open issues that are not present in `nim-websock`:
1. There is a 1 second delay between each MB of sent data.
2. Cancelling an ongoing `send` makes the entire WebSocket unusable.
3. Control packets do not have priority over ongoing message frames.
Using `news`, there are quite a few of these messages in Geth:
```
Previously seen beacon client is offline. Please ensure it is
operational to follow the chain!
```
It may take quite some time to reconnect when this happens.
Using `nim-websock`, this message still occurs because `eth1_monitor`
reconnects the EL connection when no new blocks occurred for 5 minutes,
but reconnecting is quick and the message is rarer.
The optimistic sync spec was updated since the LC based optsync module
was introduced. It is no longer necessary to wait for the justified
checkpoint to have execution enabled; instead, any block is okay to be
optimistically imported to the EL client, as long as its parent block
has execution enabled. Complex syncing logic has been removed, and the
LC optsync module will now follow gossip directly, reducing the latency
when using this module. Note that because this is now based on gossip
instead of using sync manager / request manager, that individual blocks
may be missed. However, EL clients should recover from this by fetching
missing blocks themselves.
* Harden block proposal against expired slashings/exits
When a message is signed in a phase0 domain, it can no longer be
validated under bellatrix due to the correct fork no longer being
available in the `BeaconState`.
To ensure that all slashing/exits are still valid, in this PR we re-run
the checks in the state that we're proposing for, thus hardening against
both signatures and other changes in the state that might have
invalidated the message.
* fix the same message being added multiple times
in the case of an attester slashing of multiple validators in one go
* support connecting to peers without bellatrix
Make discovery fork ID aware of scheduled Bellatrix fork to enable
connections to peers that don't have Bellatrix scheduled yet.
Without this, the node has peering issues with peers running older software versions.
* expand tests with compatibility checks
* more exhaustive compatibility checks
* Fixes a segfault during block production when the Keymanager API
is disabled. The Keymanager is now disabled on half of the local
testnet nodes to catch such problems in the future.
* Fixes multiple potential stalls from REST requests being done
without a timeout. From practice, we know that such requests
can hang forever if not cancelled with a timeout. At best,
this would be a resource leak, at worst, it may lead to a
full stall of the client and missed validator duties.
* Changes some Options usages to Opt (for easier use of valueOr)
* Keymanager API for the validator client
* Properly treat the 'description' field as optional when loading Keystores
* Spec-compliant serialization of the slashing data in Keymanager's DeleteKeys response
Fixes #3940, fixes #3964; closes #3884 by adding a test
In order to avoid full replays when validating attestations hailing from
untaken forks, it's better to keep shufflings separate from `EpochRef`
and perform a lookahead on the shuffling when processing the block that
determines them.
This also helps performance in the case where REST clients are trying to
perform lookahead on attestation duties and decreases memory usage by
sharing shufflings between EpochRef instances of the same dependent
root.
Now that the 1.2.0-rc.2 spec contains the same `LightClientUpdate`
definition that Nimbus was already using before, the corresponding
SSZ test vectors can be re-enabled.
When fetching eth1 data and deposits for a new block proposal, the list
of deposits from previous eth1 data to the next one is fully loaded into
a `seq`. This can potentially be a very long list in active periods.
Changing this to an `iterator` saves memory by ensuring that the entire
list is no longer materialized; only the `DepositData` roots are needed.
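The memory effect, illustrated with a Python sketch where `db.deposit(i)`
is a hypothetical accessor standing in for the real database call:
```python
def deposits_list(db, lo, hi):
    # Materializes the whole range: O(hi - lo) memory held at once.
    return [db.deposit(i) for i in range(lo, hi)]

def deposits_iter(db, lo, hi):
    # Yields one item at a time: only the element currently being
    # processed is resident, O(1) memory.
    for i in range(lo, hi):
        yield db.deposit(i)
```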
* Use final `v1` version for light client protocols
* Unhide LC data collection options
* Default enable LC data serving
* rm unneeded import
* Connect to EL on startup
* Add docs for LC based EL sync
The light client sync protocol employs heuristics to ensure it does not
become stuck during non-finality or low sync committee participation.
These can enable use cases that prefer availability of recent data
over security. For our syncing use case, though, security is preferred.
An option is added to light client processor to configure this tradeoff.
Other changes:
* The Keymanager error responses differ from the Beacon API responses.
'keymanagerApiError' replaces the former usages of 'jsonError'.
* Return status code 401 and 403 for authorization errors in accordance
with the spec.
* Eliminate inconsistencies in the REST JSON parsing. Some of the code
paths allowed missing fields.
* Added logging of serialization failure details at DEBUG level.
Removes a few extra-ambitious templates to make `self` updates explicit,
and moves the `FinalityCheckpoints` type from `base` to `helpers` as it
is an additional Nimbus specific type not defined by spec.
Whether new blocks/attestations/etc are produced internally or received
via REST, their journey through the node is the same - to ensure that
they get the same treatment (logging, metrics, processing), this PR
moves the routing to a dedicated module and fixes several small
differences that existed before.
* `xxxValidator` -> `processMessageName` - the processor also was adding
messages to pools, so we want the name to reflect that action
* add missing "sent" metrics for some messages
* document ignore policy better - already-seen messages are not actually
rebroadcast by libp2p
* skip redundant signature checks for internal validators consistently
The justified and finalized `Checkpoint` are frequently passed around
together. This introduces a new `FinalityCheckpoint` data structure that
combines them into one.
Due to the large usage of this structure in fork choice, also took this
opportunity to update fork choice tests to the latest v1.2.0-rc.1 spec.
Many additional tests enabled, some need more work, e.g. EL mock blocks.
Also implemented `discard_equivocations` which was skipped in #3661,
and improved code reuse across fork choice logic while at it.
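The shape of the combined structure, sketched in Python (names
approximate the Nim definitions):
```python
from dataclasses import dataclass

@dataclass
class Checkpoint:
    epoch: int
    root: bytes            # Eth2Digest in the Nim code

@dataclass
class FinalityCheckpoint:
    # Bundles the two checkpoints that are almost always passed together.
    justified: Checkpoint
    finalized: Checkpoint
```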
* optimistic sync
* flag that initially loaded blocks from database might need execution block root filled in
* return optimistic status in REST calls
* refactor blockslot pruning
* ensure beacon_blocks_by_{root,range} do not provide optimistic blocks
* handle forkchoice head being pre-merge with block being postmerge
* re-enable blocking head updates on validator duties
* fix is_optimistic_candidate_block per spec; don't crash with nil future
* fix is_optimistic_candidate_block per spec; don't crash with nil future
* mark blocks sans execution payloads valid during head update
Separate LC initialization options from the main ChainDAGRef options to
allow ChainDAGRef to treat them as opaque and reduce risk for conflicts
when extending those options in the future.
Merkle proofs tend to have long underlying type definitions, e.g.,
`array[log2trunc(NEXT_SYNC_COMMITTEE_INDEX), Eth2Digest]`. For the
ones used in the LC sync protocol, dedicated types are introduced
to improve readability. Furthermore, the `CachedLightClientBootstrap`
wrapper that solely wrapped a merkle branch is eliminated.
This updates `nim-ssz-serialization` to
`3db6cc0f282708aca6c290914488edd832971d61`.
Notable changes:
- Use `uint64` for `GeneralizedIndex`
- Add support for building merkle multiproofs
For consistency with other options, use a common prefix for light client
data configuration options.
* `--serve-light-client-data` --> `--light-client-data-serve`
* `--import-light-client-data` --> `--light-client-data-import-mode`
No deprecation of the old identifiers as they were only sparingly used
and all usage can be easily updated without interference.
When launched with `--light-client-enable` the latest blocks are fetched
and optimistic candidate blocks are passed to a callback (log for now).
This helps accelerate syncing in the future (optimistic sync).
Adds a `LightClient` instance to the beacon node as preparation to
accelerate syncing in the future (optimistic sync).
- `--light-client-enable` turns on the feature
- `--light-client-trusted-block-root` configures block to start from
If no block root is configured, light client tracks DAG `finalizedHead`.
Introduces a new library for syncing using libp2p based light client
sync protocol, and adds a new `nimbus_light_client` executable that uses
this library for syncing. The new executable emits log messages when
new beacon block headers are received, and is integrated into testing.
* SSZ `[]` -> `mitem`
* `[]` -> `item`
immutable access via a mutable instance cannot rely on template
overloading, and `[]` cannot be a `func` because of special seq handling
in the compiler.
Incorporates the latest changes to the light client sync protocol based
on Devconnect AMS feedback. Note that this breaks compatibility with the
previous prototype, due to changes to data structures and endpoints.
See https://github.com/ethereum/consensus-specs/pull/2802
Since we were not verifying BLS signature in blocks that we produce,
we were failing to notice that some deposits need to be ignored (due
to having an invalid signature). Processing these deposits resulted
in a different ending state after the state transition which caused
our blocks to be rejected by the network.
Follows up on https://github.com/status-im/nimbus-eth2/pull/3461 which
ensured that repeated `beaconBlocksByRange` requests get shrunk to
account for potential out-of-band advancements to `safeSlot`, with
similar logic for the initial request.
Other changes:
* logtrace can now verify sync committee messages and contributions
* Many unnecessary uses of `pairs()` have been removed for consistency
* Map 40x BN response codes to BeaconNodeStatus.Incompatible in the VC
Other fixes:
* Fix bit rot in the `make prater-dev-deposit` target.
* Correct content-type in the responses of the Nimbus signing node
* Fix an invalid JSON payload being sent in the web3signer requests
This PR makes the necessary adjustments to deal with the revamped snappy
API.
In practical terms for nimbus-eth2, there are performance increases to
gossip processing, database reading and writing as well as era file
processing. Exporting `.era` files for example, a snappy-heavy
operation, almost halves in total processing time:
Pre:
```
Average, StdDev, Min, Max, Samples, Test
39.088, 8.735, 23.619, 53.301, 50, tState
237.079, 46.692, 165.620, 355.481, 49, tBlocks
```
Post:
```
All time are ms
Average, StdDev, Min, Max, Samples, Test
25.350, 5.303, 15.351, 41.856, 50, tState
141.238, 24.164, 99.990, 199.329, 49, tBlocks
```
Some upstream repos still need fixes, but this gets us close enough that
style hints can be enabled by default.
In general, "canonical" spellings are preferred even if they violate
nep-1 - this applies in particular to spec-related stuff like
`genesis_validators_root` which appears throughout the codebase.
`.era` files and Req/Resp protocols use framed formats - aligning the
database with these makes for less recompression work overall as gossip
is sent only once while req/resp repeats (potentially) - this also
allows efficient pruning-to-era where snappy-recompression is the major
cycle thief.
* harden validator API against pre-finalized slot requests
* check `syncHorizon` when responding to validator api requests too far
from `head`
* limit state-id based requests to one epoch ahead of `head`
* put historic data bounds on block/attestation/etc validator production API, preventing them from being used with already-finalized slots
* add validator block smoke tests
* make rest test create a new genesis with the tests running roughly in
the first epoch to allow testing a few more boundary conditions
Gracefully handles the new failure modes recently introduced to the DAG
as part of https://github.com/status-im/nimbus-eth2/pull/3513
Data that is deemed to exist but fails to load leads to an error log to
avoid suppressing logic errors accidentally. In `verifyFinalization`
mode, the assertions remain active.
When doing checkpoint sync, collecting light client data of known blocks
and states incorrectly assumes that `finalized_checkpoint` information
is also known. Hardens collection to only collect finalized checkpoint
data after `dag.computeEarliestLightClientSlot`.
Adds `LightClientProcessor` as the counterpart to `BlockProcessor` while
operating in light client mode. Note that a similar mechanism based on
async futures is used for interoperability with existing infrastructure,
despite light client object validation being done synchronously.
Up til now, the block dag has been using `BlockRef`, a structure adapted
for a full DAG, to represent all of chain history. This is a correct and
simple design, but does not exploit the linearity of the chain once
parts of it finalize.
By pruning the in-memory `BlockRef` structure at finalization, we save,
at the time of writing, a cool ~250mb (or 25%:ish) chunk of memory
landing us at a steady state of ~750mb normal memory usage for a
validating node.
Above all though, we prevent memory usage from growing proportionally
with the length of the chain, something that would not be sustainable
over time - instead, the steady state memory usage is roughly
determined by the validator set size which grows much more slowly. With
these changes, the core should remain sustainable memory-wise post-merge
all the way to withdrawals (when the validator set is expected to grow).
In-memory indices are still used for the "hot" unfinalized portion of
the chain - this ensures that consensus performance remains unchanged.
What changes is that for historical access, we use a db-based linear
slot index which is cache-and-disk-friendly, keeping the cost for
accessing historical data at a similar level as before, achieving the
savings at no perceivable cost to functionality or performance.
A nice collateral benefit is the almost-instant startup since we no
longer load any large indices at dag init.
The cost of this functionality instead can be found in the complexity of
having to deal with two ways of traversing the chain - by `BlockRef` and
by slot.
* use `BlockId` instead of `BlockRef` where finalized / historical data
may be required
* simplify clearance pre-advancement
* remove dag.finalizedBlocks (~50:ish mb)
* remove `getBlockAtSlot` - use `getBlockIdAtSlot` instead
* `parent` and `atSlot` for `BlockId` now require a `ChainDAGRef`
instance, unlike `BlockRef` traversal
* prune `BlockRef` parents on finality (~200:ish mb)
* speed up ChainDAG init by not loading finalized history index
* mess up light client server error handling - this needs revisiting :)
The spec implicitly talks about the slot of a block in several places,
and keeping it readily available is useful in a number of contexts -
might as well put this implicitly referenced helper in the spec code
directly
One more step on the journey to reduce `BlockRef` usage across the
codebase - this one gets rid of `StateData` whose job was to keep track
of which block was last assigned to a state - these duties have now been
taken over by `latest_block_root`, a fairly recent addition that
computes this block root from state data (at a small cost that should be
insignificant)
99% mechanical change.
When a `beaconBlocksByRange` response advances the `safeSlot`, but later
has errors, the sync queue keeps repeating that same request until it is
fulfilled without errors. Data up through `safeSlot` is considered to be
immutable, i.e., finalized, so re-requesting that data is not useful.
By advancing the sync progress in that scenario, those redundant query
portions can be avoided. Note, the finalized block _itself_ is always
requested, even in the initial request. This behaviour is kept the same.
* fewer deps on `BlockRef` traversal in anticipation of pruning
* allows identifying EpochRef:s by their shuffling as a first step
* tighten error handling around missing blocks
using the zero hash for signalling "missing block" is fragile and easy
to miss - with checkpoint sync now, and pruning in the future, missing
blocks become "normal".
When syncing as a light client, different behaviour is needed to handle
the various ways how errors may occur. The existing logic for blocks can
also be applied to light client objects:
- `Invalid`: Malformed object that is clearly an error by its producer.
- `MissingParent`: More data is needed to decide applicability.
- `UnviableFork`: Object may be valid but will never apply on this fork.
- `Duplicate`: No errors were encountered but the object was not useful.
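A sketch of these categories as a Python enum mirroring the list above
(in the Nim code this is the `BlockError` enum, later renamed
`VerifierError` as noted earlier in this log):
```python
from enum import Enum, auto

class VerifierError(Enum):
    Invalid = auto()        # malformed object, clearly a producer error
    MissingParent = auto()  # more data needed to decide applicability
    UnviableFork = auto()   # possibly valid, but can never apply on this fork
    Duplicate = auto()      # no errors, but the object was not useful
```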
Light clients require full nodes to serve additional data so that they
can stay in sync with the network. This patch adds a new launch option
`--import-light-client-data` to configure what data to make available.
For now, data is only kept in memory; it is not persisted at this time.
Note that data is only locally collected, a separate patch is needed to
actually make it available over the network. `--serve-light-client-data`
will be used for serving data, but is not functional yet outside tests.
When performing trusted node sync, historical access is limited to
states after the checkpoint.
Reindexing restores full historical access by replaying historical
blocks against the state and storing snapshots in the database.
The process can be initiated or resumed at any point in time.
This adopts the spec sections of the pre-release proposal of the libp2p
based light client sync protocol, and also adds a test runner for the
new accompanying tests. While the release version of the light client
sync protocol contains conflicting definitions, it is currently unused,
and the code specific to the pre-release proposal is marked as such.
See https://github.com/ethereum/consensus-specs/pull/2802
The spec does not provide code for validating the `fork_version` field
of `LightClientUpdate`. However, we can use our own logic for additional
validation of that field. The spec's python test suite sets up states
that do not follow the fork schedule (e.g., that use Altair fork version
before Altair fork epoch), which complicates upstreaming this as code.
* Refactor and optimize logs.
* Introduce shortLog(SyncRequest).
* Address review comment.
* make sync queue logs more consistent
Adds a few minor logging improvements:
- Fixes a typo (`was happened` -> `has happened`)
- Avoids passing `reset_slot` argument to log statement multiple times
- Uses same `rewind_to_slot` label when logging in both sync directions
- Consistent rewind point logging
Co-authored-by: cheatfate <eugene.kabanov@status.im>
In practice, the sync committee signs `LightClientUpdate` instances at
the next slot following the block. This is not correctly reflected in
the tests, where it is signed one slot early. This patch updates the
tests to use the correct slot for the computation.
This PR names and documents the concept of the archive: a range of slots
for which we have degraded functionality in terms of historical access -
in particular:
* we don't support rewinding to states in this range
* we don't keep an in-memory representation of the block dag
The archive de-facto exists in a trusted-node-synced node, but this PR
gives it a name and drops the in-memory digest index.
In order to satisfy `GetBlocksByRange` requests, we ensure that we have
blocks for the entire archive period via backfill. Future versions may
relax this further, adding a "pre-archive" period that is fully pruned.
During by-slot searches in the archive (both for libp2p and rest
requests), an extra database lookup is used to convert the given `slot`
to a `root` - future versions will avoid this using era files which
natively are indexed by `slot`. That said, the lookup is quite
fast compared to the actual block loading given how trivial the table
is - it's hard to measure, even.
A collateral benefit of this PR is that checkpoint-synced nodes will see
100-200MB memory usage savings, thanks to the dropped in-memory cache -
future pruning work will bring this benefit to full nodes as well.
* document chaindag storage architecture and assumptions
* look up parent using block id instead of full block in clearance
(future-proofing the code against a future in which blocks come from era
files)
* simplify finalized block init, always writing the backfill portion to
db at startup (to ensure lookups work as expected)
* preallocate some extra memory for finalized blocks, to avoid immediate
realloc
https://github.com/ethereum/consensus-specs/pull/2225 removed an ignore
rule that would filter out duplicate aggregates from gossip publishing -
however, this causes increased bandwidth and CPU usage as discussed in
https://github.com/ethereum/consensus-specs/issues/2183 - the intent is
to revert the removal and reinstate the rule.
This PR implements ignore filtering which cuts down on CPU usage (fewer
aggregates to validate) and bandwidth usage (less fanout of duplicates)
- as #2225 points out, this may lead to a small increase in IHAVE
messages.
Streamline lookup with Forky and BeaconBlockFork (then we can do the
same for era)
We use the type system to avoid conditionals, as the fork is often
already known at a "higher" level.
* load blockid before loading block by root - this is needed to map root
to slot and will eventually be done via block summary table for "old"
blocks
Co-authored-by: tersec <tersec@users.noreply.github.com>
* Initial commit.
* Fix current test suite.
* Fix keymanager api test.
* Fix wss_sim.
* Add more keystore_management tests.
* Recover deleted isEmptyDir().
* Add `HttpHostUri` distinct type.
Move keymanager calls away from rest_beacon_calls to rest_keymanager_calls.
Add REST serialization of RemoteKeystore and Keystore object.
Add tests for Remote Keystore management API.
Add tests for Keystore management API (Add keystore).
* Fix serialization issues.
* Fix test to use HttpHostUri instead of Uri.
* Add links to specification in comments.
* Remove debugging echoes.
* harden and speed up block sync
The `GetBlockBy*` server implementation currently reads SSZ bytes from
database, deserializes them into a Nim object then serializes them right
back to SSZ - here, we eliminate the deser/ser steps and send the bytes
straight to the network. Unfortunately, the snappy recoding must still
be done because of differences in framing.
Also, the quota system makes one giant request for quota right before
sending all blocks - this means that a 1024 block request will be
"paused" for a long time, then all blocks will be sent at once causing a
spike in database reads which potentially will see the reading client
time out before any block is sent.
Finally, on the reading side we make several copies of blocks as they
travel through various queues - this was not noticeable before but
becomes a problem in two cases: bellatrix blocks are up to 10mb (instead
of .. 30-40kb) and when backfilling, we process a lot more of them a lot
faster.
* fix status comparisons for nodes syncing from genesis (#3327 was a bit
too hard)
* don't hit database at all for post-altair slots in GetBlock v1
requests
* Store finalized block roots in database (3s startup)
When the chain has finalized a checkpoint, the history from that point
onwards becomes linear - this is exploited in `.era` files to allow
constant-time by-slot lookups.
In the database, we can do the same by storing finalized block roots in
a simple sparse table indexed by slot, bringing the two representations
closer to each other in terms of conceptual layout and performance.
Doing so has a number of interesting effects:
* mainnet startup time is improved 3-5x (3s on my laptop)
* the _first_ startup might take slightly longer as the new index is
being built - ~10s on the same laptop
* we no longer rely on the beacon block summaries to load the full dag -
this is a lot faster because we no longer have to look up each block by
parent root
* a collateral benefit is that we no longer need to load the full
summaries table into memory - we get the RSS benefits of #3164 without
the CPU hit.
Other random stuff:
* simplify forky block generics
* fix withManyWrites multiple evaluation
* fix validator key cache not being updated properly in chaindag
read-only mode
* drop pre-altair summaries from `kvstore`
* recreate missing summaries from altair+ blocks as well (in case
database has lost some to an involuntary restart)
* print database startup timings in chaindag load log
* avoid allocating superfluous state at startup
* use a recursive sql query to load the summaries of the unfinalized
blocks
* Harden handling of unviable forks
In our current handling of unviable forks, we allow peers to send us
blocks that come from a different fork - this is not necessarily an
error as it can happen naturally, but it does open up the client to a
case where the same unviable fork keeps getting requested - rather than
allowing this to happen, we'll now give these peers a small negative
score - if it keeps happening, we'll disconnect them.
* keep track of unviable forks in quarantine, to avoid filling it with
known junk
* collect peer scores in single module
* descore peers when they send unviable blocks during sync
* don't give score for duplicate blocks
* increase quarantine size to a level that allows finality to happen
under optimal conditions - this helps avoid downloading the same blocks
over and over in case of an unviable fork
* increase initial score for new peers to make room for one more failure
before disconnection
* log and score invalid/unviable blocks in requestmanager too
* avoid ChainDAG dependency in quarantine
* reject gossip blocks with unviable parent
* continue processing unviable sync blocks in order to build unviable
dag
* docs
* Update beacon_chain/consensus_object_pools/block_pools_types.nim
* add unviable queue test
It's sometimes useful to simulate what happens when a chain runs from a
given state with a given set of private keys - `wss_sim` allows running
such a simulation.
One use of such a tool is to simulate a weak subjectivity attack,
creating alternative histories of the same chain:
https://notes.status.im/nimbus-insecura-network#
* limit by-root requests to non-finalized blocks
Presently, we keep a mapping from block root to `BlockRef` in memory -
this has simplified reasoning about the dag, but is not sustainable with
the chain growing.
We can distinguish between two cases where by-root access is useful:
* unfinalized blocks - this is where the beacon chain is operating
generally, by validating incoming data as interesting for future fork
choice decisions - bounded by the length of the unfinalized period
* finalized blocks - historical access in the REST API etc - no bounds,
really
In this PR, we limit the by-root block index to the first use case:
finalized chain data can more efficiently be addressed by slot number.
Future work includes:
* limiting the `BlockRef` horizon in general - each instance is 40
bytes+overhead which adds up - this needs further refactoring to deal
with the tail vs state problem
* persisting the finalized slot-to-hash index - this one also keeps
growing unbounded (albeit slowly)
Anyway, this PR easily shaves ~128mb of memory usage at the time of
writing.
* No longer honor `BeaconBlocksByRoot` requests outside of the
non-finalized period - previously, Nimbus would generously return any
block through this libp2p request - per the spec, finalized blocks
should be fetched via `BeaconBlocksByRange` instead.
* return `Opt[BlockRef]` instead of `nil` when blocks can't be found -
this becomes a lot more common now and thus deserves more attention
* `dag.blocks` -> `dag.forkBlocks` - this index only carries unfinalized
blocks from now - `finalizedBlocks` covers the other `BlockRef`
instances
* in backfill, verify that the last backfilled block leads back to
genesis, or panic
* add backfill timings to log
* fix missing check that `BlockRef` block can be fetched with
`getForkedBlock` reliably
* shortcut doppelganger check when feature is not enabled
* in REST/JSON-RPC, fetch blocks without involving `BlockRef`
* fix dag.blocks ref
* initial support for minification and new interchange tests. Removal of v1 and v1 migration.
* Synthetic attestations: SQLite3 requires one statement/query per prepared statement
* Fix DB import interrupted if no attestation was found
* Skip test relying on undocumented test behavior (https://github.com/eth-clients/slashing-protection-interchange-tests/pull/12#issuecomment-1011158701)
* Skip test relying on unclear minification behavior:
creating an invalid minified attestation with source > target or setting target = max(source, target)
* remove DB v1 and update submodule
* Apply suggestions from code review
Co-authored-by: Jacek Sieka <jacek@status.im>
Co-authored-by: Jacek Sieka <jacek@status.im>
Time in the beacon chain is expressed relative to the genesis time -
this PR creates a `beacon_time` module that collects helpers and
utilities for dealing with time units - the new module does not deal
with actual wall time (that remains in `beacon_clock`).
Collecting the time related stuff in one place makes it easier to find,
avoids some circular imports and allows more easily identifying the code
that actually needs wall time to operate.
* move genesis-time-related functionality into `spec/beacon_time`
* avoid using `chronos.Duration` for time differences - it does not
support negative values (such as when something happens earlier than it
should)
* saturate conversions between `FAR_FUTURE_XXX`, so as to avoid
overflows
* fix delay reporting in validator client so it uses the expected
deadline of the slot, not "closest wall slot"
* simplify looping over the slots of an epoch
* `compute_start_slot_at_epoch` -> `start_slot`
* `compute_epoch_at_slot` -> `epoch`
A follow-up PR will (likely) introduce saturating arithmetic for the
time units - this is merely code moves, renames and fixing of small
bugs.
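The renamed helpers follow the spec formulas; a Python sketch using the
mainnet preset:
```python
SLOTS_PER_EPOCH = 32  # mainnet preset

def epoch(slot: int) -> int:
    # spec: compute_epoch_at_slot
    return slot // SLOTS_PER_EPOCH

def start_slot(epoch: int) -> int:
    # spec: compute_start_slot_at_epoch
    return epoch * SLOTS_PER_EPOCH

def slots_of_epoch(e: int) -> range:
    # "simplify looping over the slots of an epoch"
    return range(start_slot(e), start_slot(e + 1))
```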
* Harden CommitteeIndex, SubnetId, SyncSubcommitteeIndex
Harden the use of `CommitteeIndex` et al to prevent future issues by
using a distinct type, then validating before use in several cases -
datatypes in spec are kept simple though so that invalid data still can
be read.
* fix invalid epoch used in REST
`/eth/v1/beacon/states/{state_id}/committees` committee length (could
return invalid data)
* normalize some variable names
* normalize committee index loops
* fix `RestAttesterDuty` to use `uint64` for `validator_committee_index`
* validate `CommitteeIndex` on ingress in REST API
* update rest rules with stricter parsing
* better REST serializers
* save lots of memory by not using `zip` ...at least a few bytes!
* REST cleanups
* reject out-of-range committee requests
* print all hex values as lower-case
* allow requesting state information by head state root
* turn `DomainType` into array (follow spec)
* `uint_to_bytesXX` -> `uint_to_bytes` (follow spec)
* fix wrong dependent root in `/eth/v1/validator/duties/proposer/`
* update documentation - `--subscribe-all-subnets` is no longer needed
when using the REST interface with validator clients
* more fixes
* common helpers for dependent block
* remove test rules obsoleted by more strict epoch tests
* fix trailing commas
* Update docs/the_nimbus_book/src/rest-api.md
* Update docs/the_nimbus_book/src/rest-api.md
Co-authored-by: sacha <sacha@status.im>
Overhaul of era files, including documentation and reference
implementations
* store blocks, then state, then slot indices for easy lookup at low
cost
* document era file rationale
* altair+ support in era writer
* support downloading blocks / states via JSON in addition to SSZ -
slow, but needed for infura support - SSZ is still used when server
supports it
* use common forked block/state reader in REST API
* fix stack overflows in REST JSON decoder
* fix invalid serialization of `justification_bits` in
`/eth/v1/debug/beacon/states` and `/eth/v2/debug/beacon/states`
* fix REST client to use `/eth/...` instead of `/api/eth/...`, update
"default" urls to expose REST api via `/eth` as well as this is what the
standard says - `/api` was added early on based on an example "base url"
in the spec that has been removed since
* expose Nimbus REST extensions via `/nimbus` in addition to
`/api/nimbus` to stay consistent with `/eth`
* fix invalid state root when reading states via REST
* fix recursive imports in `spec/ssz_codec`
* remove usages of `serialization.useCustomSerialization` - fickle
With checkpoint sync in particular, and state pruning in the future,
loading states or state-dependent data may fail. This PR adjusts the
code to allow this to be handled gracefully.
In particular, the new availability assumption is that states are always
available for the finalized checkpoint and newer, but may fail for
anything older.
The `tail` remains the point where state loading de-facto fails, meaning
that between the tail and the finalized checkpoint, we can still get
historical data (but code should be prepared to handle this as an
error).
However, to harden the code against long replays, several operations
which are assumed to work only with non-final data (such as gossip
verification and validator duties) now limit their search horizon to
post-finalized data.
* harden several state-dependent operations by logging an error instead
of introducing a panic when state loading fails
* `withState` -> `withUpdatedState` to differentiate from the other
`withState`
* `updateStateData` can now fail if no state is found in database - it
is also hardened against excessively long replays
* `getEpochRef` can now fail when replay fails
* reject blocks with invalid target root - they would be ignored
previously
* fix recursion bug in `isProposed`
With the right sequence of events (for example a REST request or a
validation), it can happen that the first traversal across a state
checkpoint boundary is done without storing that state on disk - this
causes problems when replaying states, because now states may be missing
from the database.
Here, we simply avoid using the caches when advancing a state that will
go into the database, ensuring that the information lost during caching
is always permanently stored.
* fix recursion bug in `isProposed`
* use v1.1.6 test vectors; use BeaconTime instead of Slot in fork choice
* tick through every slot at least once
* use div INTERVALS_PER_SLOT and use precomputed constants of them
* use correct (even if numerically equal) constant
Introduced in #3171, it turns out we can just follow the block headers
to achieve the same effect
* leaves the constant in the code so as to avoid confusion when reading
database that had the constant written (such as the fleet nodes and
other unstable users)
Validator monitoring based on and mostly compatible with the
implementation in Lighthouse - tracks additional logs and metrics for
specified validators so as to stay on top of performance.
The implementation works more or less the following way:
* Validator pubkeys are singled out for monitoring - these can be
running on the node or not
* For every action that the validator takes, we record steps in the
process such as messages being seen on the network or published in the
API
* When the dust settles at the end of an epoch, we report the
information from one epoch before that, which coincides with the
balances being updated - this is a tradeoff between being correct
(waiting for finalization) and providing relevant information in a
timely manner
* SyncManager cleanups for backfill support
Cleanups, fixes and simplifications, in anticipation of backfill support
for the `SyncManager`:
* reformat sync progress indicator to show time left and % done more
prominently:
* old: `sync="sPssPsssss:2:2.4229:00h57m (2706898)"`
* new: `sync="14d12h31m (0.52%) 1.1378slots/s (wQQQQQDDQQ:1287520)"`
* reset average speed when going out of sync
* pass all block errors to sync manager, including duplicate/unviable
* penalize peers for reporting a head block that is outside of our
expected wall clock time (they're likely on a different network or
trying to disrupt sync)
* remove `SyncFailureKind` (unused)
* remove `inRange` (unused)
* add `Q` for sync queue requests that are in the `SyncQueue` but not
yet in the `BlockProcessor` queue
* update last slot in `SyncQueue` after getting peer status
* fix race condition between `wakeupWaiters` and `resetWait`, where
workers would not be correctly reset if block verification returned a
completed future without event loop
* log syncmanager direction
* Fix ordering issue.
Requests whose size is not equal to `chunkSize` could be processed in the wrong order, which could lead to sync process freezes.
Co-authored-by: cheatfate <eugene.kabanov@status.im>
In the ChainDAG, 3 block pointers are kept: genesis, tail and head. This
PR adds one more block pointer: the backfill block which represents the
block that has been backfilled so far.
When doing a checkpoint sync, a random block is given as starting point
- this is the tail block, and we require that the tail block has a
corresponding state.
When backfilling, we end up with blocks without corresponding states,
hence we cannot use `tail` as a backfill pointer - there is no state.
Nonetheless, we need to keep track of where we are in the backfill
process between restarts, such that we can answer GetBeaconBlocksByRange
requests.
This PR adds the basic support for backfill handling - it needs to be
integrated with backfill sync, and the REST API needs to be adjusted to
take advantage of the new backfilled blocks when responding to certain
requests.
Future work will also enable moving the tail in either direction:
* pruning means moving the tail forward in time and removing states
* backwards means recreating past states from genesis, such that
intermediate states are recreated step by step all the way to the tail -
at that point, tail, genesis and backfill will match up.
* backfilling is done when backfill != genesis - later, this will be the
WSS checkpoint instead
As of https://github.com/status-im/nim-eth/pull/379 `nim-eth` defines a
couple static test cases for merkle proof verification.
Since the EF has defined a `is_valid_merkle_branch` function in the spec
we are no longer using the custom implementation from `nim-eth`, but the
tests were never ported to target the new implementation. This patch now
follows up on that and integrates those tests from `nim-eth`.
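For reference, the spec's `is_valid_merkle_branch` is short enough to
transcribe directly into Python:
```python
from hashlib import sha256

def hash_(data: bytes) -> bytes:
    return sha256(data).digest()

def is_valid_merkle_branch(leaf: bytes, branch: list, depth: int,
                           index: int, root: bytes) -> bool:
    # Fold the leaf with its siblings bottom-up; the bit of `index` at
    # each level decides whether the sibling goes on the left or right.
    value = leaf
    for i in range(depth):
        if index // (2 ** i) % 2:
            value = hash_(branch[i] + value)
        else:
            value = hash_(value + branch[i])
    return value == root
```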
* BlockId reform
Introduce `BlockId` that helps track a root/slot pair - this prepares
the codebase for backfilling and handling out-of-dag blocks
* move block dag code to separate module
* fix finalised state root in REST event stream
* fix finalised head computation on head update, when starting from
checkpoint
* clean up chaindag init
* revert `epochAncestor` change introduced in #3144 that would return
an epoch ancestor from the canonical history instead of the given
history, causing `EpochRef` keys to point to the wrong block
In #780 a test was disabled that verified that an attestation with
empty `aggregation_bits` completes successfully. The test was never
re-introduced, and as of the current consensus spec v1.1.6, such
attestations are not considered valid, as they fail the check in
`is_valid_indexed_attestation`. This patch fully removes that outdated
test, and moves it to the list of pending invalid attestation tests.
* batch-verify sync messages for a small perf boost
Generally reuses the same structure as attestation and aggregate
verification
* normalize `signatures` and `signature_batch` to use the same pattern
of verification
* normalize parameter names, order etc for signature stuff in general
* avoid calling `blsSign` directly - instead, go through `signatures`
consistently
* Introduce slot->BlockRef mapping for finalized chain
The finalized chain is linear, thus we can use a seq to lookup blocks by
slot number.
Here, we introduce such a seq, even though in the future, it should
likely be backed by a database structure instead, or, more likely, a
flat era file with a flat lookup index.
This dramatically speeds up requests by slot, such as those coming from
the REST interface or GetBlocksByRange, as these are currently served by
a linear iteration from head.
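The lookup in a Python sketch (`finalized_block_roots` is a hypothetical
flat sequence starting at `base_slot`, with `None` marking empty slots):
```python
def block_root_at_slot(finalized_block_roots, base_slot, slot):
    # O(1) indexed access, replacing the old linear walk from head.
    i = slot - base_slot
    if 0 <= i < len(finalized_block_roots):
        return finalized_block_roots[i]  # None => slot was empty
    return None
```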
* fix REST block requests to not return blocks from an earlier slot when
the given slot is empty
* fix StateId interpretation such that it doesn't treat state roots as
block roots
* don't load full block from database just to return its root
* move quarantine outside of chaindag
The quarantine has been part of the ChainDAG for the longest time, but
this design has a few issues:
* the function in which blocks are verified and added to the dag becomes
reentrant and therefore difficult to reason about - we're currently
using a stateful flag to work around it
* quarantined blocks bypass the processing queue leading to a processing
stampede
* the quarantine flow is unsuitable for orphaned attestations - these
should also be quarantined eventually
Instead of processing the quarantine inside ChainDAG, this PR moves
re-queueing to `block_processor` which already is responsible for
dealing with follow-up work when a block is added to the dag
This sets the stage for keeping attestations in the quarantine as well.
Also:
* make `BlockError` `{.pure.}`
* avoid use of `ValidationResult` in block clearance (that's for gossip)
This extends the `makeTestBlocks` function used in tests with a new
parameter `syncCommitteeRatio` to control whether the produced blocks
should be signed by the validators assigned to the sync committee.
A similar parameter already exists to configure whether attestations
for the test blocks should be produced.
* add EF fork choice tests to CI
* checkpoints
* compilation fixes and add test to preset dependent suite
* support longpaths on Windows CI
* skip minimal tests (long paths issue + impl detals tested)
* fix stackoverflow on some platforms
* rebase on top of https://github.com/status-im/nimbus-eth2/pull/3054
* fix stack usage
Renames and cleanups split out from the validator monitoring branch, so
as to reduce conflict area vs other PR:s
* add constants for expected message timing
* name validators after the messages they validate, mostly, to make
grepping easier
* unify field naming of EpochInfo across forks to make cross-fork code
easier
* ncli_db: add putState, putBlock
These tools allow modifying an existing nimbus database for the purpose
of recovery or reorg, moving the head, tail and genesis to arbitrary
points.
* remove potentially expensive `putState` in `BeaconStateDB`
* introduce `latest_block_root` which computes the root of the latest
applied block from the `latest_block_header` field (instead of passing
it in separately) - see the sketch after this list
* avoid some unnecessary BeaconState copies during init
* discover https://github.com/nim-lang/Nim/issues/19094
* prefer `HashedBeaconState` in a few places to avoid recomputing state
root
* fetch latest block root from state when creating blocks
* harden `get_beacon_proposer_index` against invalid slots and document
* move random spec function tests to `test_spec.nim`
* avoid unnecessary state root computation before block proposal
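A Python sketch of the `latest_block_root` computation (assuming a
`hash_tree_root` helper from an SSZ library; per spec, the header cached
in the state carries a zero `state_root` until the next slot is
processed):
```python
import copy

ZERO_HASH = b"\x00" * 32

def latest_block_root(state) -> bytes:
    # Complete the cached header with the current state root if it has
    # not been filled in yet; hashing it then yields the root of the
    # latest applied block, without passing that root around separately.
    header = copy.copy(state.latest_block_header)
    if header.state_root == ZERO_HASH:
        header.state_root = hash_tree_root(state)  # assumed SSZ helper
    return hash_tree_root(header)
```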
* Support starting from altair
* hide `finalized-checkpoint-` - they are incomplete and usage may cause
crashes
* remove genesis detection code (broken, obsolete)
* enable starting ChainDAG from altair checkpoints - this is a
prerequisite for checkpoint sync (TODO: backfill)
* tighten checkpoint state conditions
* show error when starting from checkpoint with existing database (not
supported)
* print rest-compatible JSON in ncli/state_sim
* altair/merge support in ncli
* more altair/merge support in ncli_db
* pre-load header to speed up loading
* fix forked block decoding
* fix stack overflow crash in REST/debug/getStateV2
* introduce `ForkyXxx` for generic type matching of `Xxx` across
branches (SomeHashedBeaconState -> ForkyHashedBeaconState et al) -
`Some` is already used for other types of type classes
* consolidate function naming in BeaconChainDB, use some generics
* import `forks.nim` from other spec modules and move `Forked*` helpers
around to resolve circular imports
* remove `ForkedBeaconState`, use `ForkedHashedBeaconState` throughout
(less data shuffling between the types)
* fix several cases of states being stored on stack in tests, causing
random failures on some platforms
* remove reading json support from ncli - this should be ported to the
rest json reading instead (doesn't currently work because stack sizes)
The sync committee period used to be a plain `uint64`. With the light
client sync relying more heavily on them, it makes sense to introduce
a proper type for them, similar to how they are already used for `Slot`
and `Epoch`. This introduces such a `SyncCommitteePeriod` type.
Furthermore, some usage code dealing with those periods is cleaned up.
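The period computation itself follows the spec; in Python with the
mainnet preset:
```python
EPOCHS_PER_SYNC_COMMITTEE_PERIOD = 256  # mainnet preset

def sync_committee_period(epoch: int) -> int:
    # spec: compute_sync_committee_period
    return epoch // EPOCHS_PER_SYNC_COMMITTEE_PERIOD
```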
* Logging and startup improvements
Color support for released binaries!
* startup scripts no longer log to file by default - this only affects
source builds - released binaries don't support file logging
* add --log-stdout option to control logging to stdout (colors, json)
* detect tty:s vs redirected logs and log accordingly
* add option to disable log colors at runtime
* simplify several "common" logs, showing the most important information
earlier and more clearly
* remove line numbers / file information / tid - these take up space and
are of little use to end users
* still enabled in debug builds and tools
* remove `testnet_servers_image` compile-time option
* server images, released binaries and compile-from-source now offer
the same behaviour and features
* fixes https://github.com/status-im/nimbus-eth2/issues/2326
* fixes https://github.com/status-im/nimbus-eth2/issues/1794
* remove instantaneous block speed from sync message, keeping only
average
before:
```
INF 2021-10-28 16:45:59.000+02:00 Slot start topics="beacnde" tid=386429 file=nimbus_beacon_node.nim:884 lastSlot=2384027 wallSlot=2384028 delay=461us84ns peers=0 head=75a10ee5:3348 headEpoch=104 finalized=cd6804ba:3264 finalizedEpoch=102 sync="wwwwwwwwww:0:0.0000:0.0000:00h00m (3348)"
INF 2021-10-28 16:45:59.046+02:00 Slot end topics="beacnde" tid=386429 file=nimbus_beacon_node.nim:821 slot=2384028 nextSlot=2384029 head=75a10ee5:3348 headEpoch=104 finalizedHead=cd6804ba:3264 finalizedEpoch=102 nextAttestationSlot=-1 nextProposalSlot=-1 nextActionWait=n/a
```
after:
```
INF 2021-10-28 22:43:23.033+02:00 Slot start topics="beacnde" slot=2385815 epoch=74556 sync="DDPDDPUDDD:10:5.2258:01h19m (2361088)" peers=37 head=eacd2dae:2361096 finalized=73782:a4751487 delay=33ms687us715ns
INF 2021-10-28 22:43:23.291+02:00 Slot end topics="beacnde" slot=2385815 nextActionWait=n/a nextAttestationSlot=-1 nextProposalSlot=-1 head=eacd2dae:2361096
```
* fix comment
* documentation updates
* mention `--log-file` may be deprecated in the future
* update various docs
Updated the local network simulation scripts to use the REST interface
for connecting the validator client to the beacon node. Otherwise, the
current scripts produce errors.
* `SyncCommitteeIndex` -> `SyncSubcommitteeIndex`
* `syncCommitteePeriod` -> `sync_committee_period` (spec spelling)
* tighten period comparisons
* fix assert when validating committee message with non-altair state in
REST api
* register vc duties with subnet tracker
* fix activation logging during startup
* cache slot signature to avoid duplicate signature work
* schedule aggregation duties one slot at a time to avoid CPU spike at
each epoch
* lower aggregation subnet pre-subscription time to 4 slots (lowers
bandwidth and CPU usage)
* update stability subnets in ENR on startup
* log gossip state
* perform gossip subscriptions just before the next slot starts
* document stuff
* add random include
* don't overwrite subscription state when not subscribed
* log target gossip state
* updating gossip status once is enough
* add test
* remove syncQueueLen - this one is not updated at the end of the sync
and may cause gossip to disconnect itself completely - use a simple head
distance instead
* fix gossip disconnection - if in hysteresis, node.gossipState will be
set to disabled even though we don't disable topic subscriptions
* fix extra duty registration call
Similar to the existing `RewardInfo`, this PR adds the infrastructure
needed to export epoch processing information from altair+. Because
accounting is done somewhat differently, the PR uses a fork-specific
object to extract the information in order to make the cost on the spec
side low.
* RewardInfo -> EpochInfo, ForkedEpochInfo
* use array for computing new sync committee
* avoid repeated total active balance computations in block processing
* simplify proposer index check
* simplify epoch transition tests
* pre-compute base increment and reuse in epoch processing, and a few
other small optimizations
This PR introduces the type and does the heavy lifting in terms of
refactoring - the tools that use the accounting will need separate PR:s
(as well as refinements to the exported information)
* "official" -> "scenarios", like the submodule
* fewer test binaries - various compile hacks have been improved over
time, test suite should follow
* remove obsolete bls tests - there are better test vectors in
nim-blscurve
* remove obsolete mentions of `ssz_testing`
* remove obsolete comments about proc vs globals, unittest2 already uses
proc's
This extends `mock_blocks` support to also allow producing Merge blocks.
The only step missing was initialization of the `ExecutionPayload` to an
empty value (as created by `build_empty_execution_payload`).
So far, `withState` and `withBlck` templates could only be used to have
convenience access to fork-agnostic BeaconState and BeaconBlock fields.
This patch:
- injects an additional `stateFork` constant that allows using
`when` expressions to also access Altair and Merge-specific fields.
- introduces a `withStateAndBlck` template to support operating on both
a `BeaconState` and `BeaconBlock` at a time.
- makes sync committee related functions Merge aware.
- changes a couple if-else trees for forks into case statements so that
forgotten future forks are promoted to compile-time errors.
So far, SSZ tests for `uint128` and `uint256` integers were disabled
due to "compile-time issues". The code has been adjusted to support
those additional test vectors, and the corresponding tests have been
enabled.
In tests, the private key was put into the validator deposit's withdrawal
credentials so that it can be recovered later. This leads to problems
when creating the validators through other means that do not put the key
there. In general, mock private keys only depend on the validator index,
though, and because it is clear what the index of a validator is, it is
not actually needed to put the key into the credentials.
There are a number of locations in the code that get attestations on a
forked beacon state. For attestation pools test, a convenience wrapper
was available to reduce clutter. This patch integrates that wrapper into
the core component so that it can also take advantage of the wrapper.
Replaced non-conventional `-Present` form in copyright headers with the
more appropriate `-2021` version that is already used by the majority of
the code base. Copyright header should indicate the years during which
the specific file was modified.
There are multiple ways how states can be processed during tests. One is
through tests/mocking, the other through tests/xxxutil. Both methods can
create genesis states with a set of deposits from mock validators. The
private keys for those validators are created differently, though. While
tests/mocking generates random keys, testblockutil generates fixed keys.
This is a problem when mixing both tests/mocking and testblockutil in a
single test case, as they assume that the deposits were generated using
the same method. This patch aligns the key generation across tests.
The initialization of a `SyncAggregate` to its default value is not very
intuitive. There is an `init` function in `sync_committee_msg_pool` that
provides a convenience wrapper. This patch exports that initializer so
that the rest of the code base can also take advantage of it.
Adds a function that constructs a Merkle proof for a generalized index.
This will be used during light client sync to update light clients with
a new state (see NEXT_SYNC_COMMITTEE_INDEX / FINALIZED_ROOT_INDEX).
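Two small helpers show how a generalized index addresses a node in the
tree; a Python sketch following the spec's definitions:
```python
def generalized_index_depth(gindex: int) -> int:
    # The root is gindex 1; the children of node n are 2n and 2n+1,
    # so the depth is the position of the highest set bit.
    return gindex.bit_length() - 1

def get_subtree_index(gindex: int) -> int:
    # Position of the node within its layer (spec: get_subtree_index);
    # this is the `index` argument expected by is_valid_merkle_branch.
    return gindex % (1 << generalized_index_depth(gindex))
```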
The phase0 and altair overloads of `makeBeaconBlock` slightly differ in
their signatures which makes using them unnecessarily verbose.
- A placeholder `sync_aggregate` argument similar to `executionPayload`
is added to the phase0 overload to match the altair signature.
- A wrapper operating on `ForkedHashedBeaconState` is introduced.
When sync committee message handling was introduced in #2830, the edge
case of the same validator being selected multiple times as part of a
sync subcommittee was not covered. Not handling that edge case makes
sync contributions have a lower-than-expected participation rate as each
sync validator is only counted once per subcommittee.
This patch ensures that this edge case is properly covered.
testblockutil currently fails to add test blocks to Altair state because
it assumes phase0 blocks in various places. This patch corrects this
limitation by using forked types internally. Note that existing clients
so far solely operate on phase0 blocks and should eventually be updated.
* Add parallel attestation verification
* Update tests, batchVerify doesn't use the threadpool with only single core (nim-blscurve update)
* bump nim-blscurve
* Debug info for failing eth2 test vectors
* remove submodule eth2-testnets
* verbose debugging of make failure on Windows (libbacktrace?)
* Remove CI debug mode
* initialization convention
* Fix new altair tests
When creating beacon blocks, the state needs to be advanced to the new
slot to determine the correct proposer index. In #875 a regression got
introduced that removed this logic when mocking blocks. Existing tests
failed to detect this, as they do not make use of the proposer index.
This patch re-introduces the state advancement, fixing the problem.
Currently, only the function to mock a block at next slot is exported.
A more generic function that allows specifying any slot is available but
not exported. This patch marks that function to also be exported.
The mocking framework was limited to chain configurations that do not
include Altair at genesis. This patch extends it so that genesis states
can be generated that are already upgraded to Altair. This is useful for
tests such as eth2spec/test/altair/unittests/test_sync_protocol.py.
The `initGenesisState` function to mock genesis states for testing was
originally introduced with a `genesis_time` parameter in #326. However,
this parameter was never respected and genesis state was always mocked
with `genesis_time` assumed to be 0. Since there is no test overriding
`genesis_time`, the parameter is removed until needed.
* add merge SSZ consensus object tests
* add merge attestations, attester slashing, proposer slashing, and voluntary exit test fixtures
* throw catchable exception rather than assert on invalid SingleMemberUnion selector
* hash_tree_root can assert, because it's trusted data by that point
* re-assert on writing, since also trusted data
* beta.4 update
* implement upgrade_to_merge()