* import EL deposits even when EL is stuck
The `eth1_monitor` only starts importing deposits once the EL reports a
new head block. However, the EL may be stuck at a block, e.g., the TTD.
Polling the latest EL block once after subscribing to new EL block
events ensures that deposits are still imported in this situation
(sketched after the bullet list below).
* also poll once on re-connects
* update `eth1_latest_head` metric in poll mode
* add comment about similar polling vs events parts
* replace check with assert
* `isNewLastBlock` helper
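A minimal sketch of the poll-once-after-subscribe pattern, with hypothetical stand-ins for the `eth1_monitor` types and procs (the real code is async and uses chronos):
```
type BlockHeader = object
  number: uint64

var lastSeen = 0'u64

proc onHead(h: BlockHeader) =
  if h.number > lastSeen:  # cf. the `isNewLastBlock` helper
    lastSeen = h.number
    echo "importing deposits up to block ", h.number

proc subscribeNewHeads(cb: proc (h: BlockHeader)) =
  discard  # stand-in for the real websocket subscription

proc getLatestBlock(): BlockHeader =
  BlockHeader(number: 15537393)  # e.g. an EL stuck at the TTD block

subscribeNewHeads(onHead)
# Polling once after subscribing (and again on every reconnect) imports
# deposits even when the EL never reports a new head.
onHead(getLatestBlock())
```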
When fetching eth1 data and deposits for a new block proposal, the list
of deposits from previous eth1 data to the next one is fully loaded into
a `seq`. This can potentially be a very long list in active periods.
Changing this to an `iterator` saves memory by ensuring that the entire
list is no longer materialized; only the `DepositData` roots are needed.
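A sketch of the change under simplified types; `getDeposits` is a hypothetical name for the affected code path:
```
type DepositData = object
  root: array[32, byte]

# Before: the whole range is collected into a seq, which can be very
# long during active deposit periods.
proc getDepositsSeq(a, b: int): seq[DepositData] =
  for i in a ..< b:
    result.add DepositData()

# After: an iterator yields one deposit at a time, so only the current
# element is resident while its root is fed to the merkleizer.
iterator getDeposits(a, b: int): DepositData =
  for i in a ..< b:
    yield DepositData()

for dep in getDeposits(0, 5):
  discard dep.root  # only the DepositData roots are needed
```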
When the EL connection is interrupted, deposits are once more requested
in chunks of 5000 blocks. This is a problem when the response takes over
a minute to produce and consistently times out, as follow-up requests
with lower chunk sizes may no longer work after a request was canceled,
e.g., when using Geth with websockets. By keeping track of
`blocksPerRequest` across EL reconnections, it is possible to recover
from this instead of continuously repeating the initial request with the
full 5000 blocks (sketched below).
Also cleans up one more "retry of retry" instance; `DataProviderTimeout`
is a `CatchableError` and already handled by the existing retry logic.
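A sketch of the recovery behaviour, using a simplified synchronous stand-in for the chunked log requests (names are illustrative):
```
const maxBlocksPerRequest = 5000
var blocksPerRequest = maxBlocksPerRequest  # survives EL reconnects now

proc fetchDepositLogs(fromBlock, chunk: int): bool =
  chunk <= 1250  # stand-in: pretend only smaller requests succeed

proc syncDeposits(fromBlock, toBlock: int) =
  var start = fromBlock
  while start < toBlock:
    if fetchDepositLogs(start, blocksPerRequest):
      start += blocksPerRequest
    else:
      # Timed out: halve the chunk size. Because `blocksPerRequest` is
      # kept across reconnections, a reconnect no longer restarts with
      # the full 5000 blocks.
      blocksPerRequest = max(blocksPerRequest div 2, 1)

syncDeposits(0, 20_000)
```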
When connection to the EL is lost as part of EL deposits importing, the
targeted block range to sync would reset. This is changed to properly
remember import progress across reconnects.
https://github.com/status-im/nimbus-eth2/pull/3944
The use of nested `awaitWithRetries` calls would have
resulted in an unexpected number of retries (3x3).
We now use a regular `await` in the outer layer to avoid the problem.
https://github.com/status-im/nimbus-eth2/pull/3943
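The multiplication is easy to reproduce with a synchronous stand-in for `awaitWithRetries` (hypothetical names, not the real async code):
```
var attempts = 0

proc withRetries(retries: int, body: proc (): bool): bool =
  for _ in 1 .. retries:
    if body():
      return true
  false

proc innerRequest(): bool =
  inc attempts
  false  # keep failing to show the worst case

proc outerRequest(): bool =
  withRetries(3, innerRequest)

# Wrapping the outer call in retries as well runs the body 3 * 3 times.
discard withRetries(3, outerRequest)
echo attempts  # 9; a plain call in the outer layer gives 3
```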
The new code has an invariant that the `headMerkleizer` field in
the `Eth1Chain` is always kept in sync with the blocks stored in
the chain.
This invariant is now enforced more reliably by doing the necessary
merkleizer updates in the `Eth1Chain.addBlock`, `Eth1Chain.init` and
`Eth1Chain.reset` functions (sketched below).
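A sketch of the invariant with simplified stand-in types (the real `Eth1Chain` uses a proper incremental merkleizer):
```
type
  Eth1Block = object
    depositRoots: seq[array[32, byte]]
  Eth1Chain = object
    blocks: seq[Eth1Block]
    headMerkleizer: seq[array[32, byte]]  # stand-in for the merkleizer

proc addBlock(chain: var Eth1Chain, blk: Eth1Block) =
  chain.blocks.add blk
  # Updating the merkleizer in the same proc that mutates `blocks`
  # keeps the two in sync regardless of the calling code path; `init`
  # and `reset` do the equivalent for their respective transitions.
  for root in blk.depositRoots:
    chain.headMerkleizer.add root

proc reset(chain: var Eth1Chain) =
  chain.blocks.setLen 0
  chain.headMerkleizer.setLen 0
```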
When importing blocks with deposits from the EL, the timestamp is never
initialized for them. Therefore, only blocks without deposits (for which
the timestamp is obtained) are considered for `is_candidate_block`.
This is fixed by also importing timestamps for blocks with deposits.
* fix obtaining deposits after connection loss
When an error occurs during Eth1 deposits import, the already imported
blocks are kept while the connection to the EL is re-established.
However, the corresponding merkleizer is not persisted, leading to any
future deposits no longer being properly imported. This is quite common
when syncing a fresh Nimbus instance against an already-synced Geth EL.
Fixed by persisting the head merkleizer together with the blocks.
* MEV validator registration
* add nearby canary to detect new beacon chain forks
* remove special MEV graffiti
* web3signer support
* fix trace logging
* Nim 1.2 needs raises Defect
* use template rather than proc in REST JSON parsing
* use --payload-builder-enable and --payload-builder-url
* explicitly default MEV to disabled
* explicitly empty default value for payload builder URL
* revert attestation pool to unstable version
* Use final `v1` version for light client protocols
* Unhide LC data collection options
* Default enable LC data serving
* rm unneeded import
* Connect to EL on startup
* Add docs for LC based EL sync
LC cancels concurrent requests once one peer has sent a correct response
and waiting for other peers is no longer useful. On the server side this
resulted in a descore (-500) and a likely disconnect. The behaviour is
changed to match the `UnexpectedEOF` / `PotentiallyExpectedEOF`
handling, which returns an error response without disconnecting from
the peer.
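A sketch of the changed server-side dispatch; apart from the two error names mentioned above, the names here are hypothetical:
```
type ReadError = enum
  UnexpectedEOF, PotentiallyExpectedEOF, RemoteCancellation

proc handleReadError(err: ReadError) =
  case err
  of UnexpectedEOF, PotentiallyExpectedEOF, RemoteCancellation:
    # A cancellation only means another peer answered first, so reply
    # with an error response but keep the peer: no descore, no disconnect.
    echo "error response sent, peer kept"
```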
When there is heavy forking, proposals may get missed due to including
attestations from different forks that later fail verification.
Checking attestation signatures when building blocks should fix this.
The light client sync protocol employs heuristics to ensure it does not
become stuck during non-finality or low sync committee participation.
These can enable use cases that prefer availability of recent data
over security. For our syncing use case, though, security is preferred.
An option is added to light client processor to configure this tradeoff.
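A sketch of the tradeoff as a processor option; the names mirror the intent rather than the exact API:
```
type LightClientFinalizationMode = enum
  Optimistic  # heuristics may force updates: prefers recent data
  Strict      # only finality-backed updates: prefers security

proc shouldApplyUpdate(mode: LightClientFinalizationMode,
                       hasFinalityProof: bool): bool =
  # In Strict mode, non-finalized updates are rejected even if that
  # means the light client temporarily stops progressing.
  mode == Optimistic or hasFinalityProof
```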
* Re-enabled requireAllFields after a fix in nim-json-serialization
The problem was that `Option[T]` fields were not treated as optional
when requireAllFields is set to true. This is now fixed in NJS.
* Add makefile targets for recreating the Jenkins simulation runs
* Fix a discrepancy with the REST spec
It's not quite clear why this condition was triggered in the local
simulation, but it seems a viable scenario after the Keymanager API
is integrated in the validator client.
The user can temporarily remove all validator keys from a running
client before adding another set of keys.
Adds the `--web3-url` launch argument to `nimbus_light_client` to enable
driving the EL with the optimistic head obtained from LC sync protocol.
This will keep issuing `newPayload` / `forkChoiceUpdated` requests for
new blocks, marking them as optimistic. `ZERO_HASH` is reported as the
finalized block for now.
#3864 introduced a regression by turning on `requireAllFields` globally
for JSON parsing. Certain endpoints such as `RestSyncInfo` have optional
fields that do not parse correctly without additional changes. This is
reverted for now to restore previous behaviour and unblock CI testing.
Other changes:
* The Keymanager error responses differ from the Beacon API responses.
'keymanagerApiError' replaces the former usages of 'jsonError'.
* Return status code 401 and 403 for authorization errors in accordance
with the spec.
* Eliminate inconsistencies in the REST JSON parsing. Some of the code
paths allowed missing fields.
* Added logging of serialization failure details at DEBUG level.
* track the SyncCommittee period in slot end logs
* Update beacon_chain/nimbus_beacon_node.nim
Co-authored-by: Etan Kissling <etan@status.im>
Removes a few extra-ambitious templates to make `self` updates explicit,
and moves the `FinalityCheckpoints` type from `base` to `helpers`, as it
is an additional Nimbus-specific type not defined by the spec.
a notice in the log is enough - we don't want the REST API to return an
error in this case because that makes the validator client think
something is seriously wrong (like the BN or message being broken)
Whether new blocks/attestations/etc are produced internally or received
via REST, their journey through the node is the same - to ensure that
they get the same treatment (logging, metrics, processing), this PR
moves the routing to a dedicated module and fixes several small
differences that existed before.
* `xxxValidator` -> `processMessageName` - the processor also was adding
messages to pools, so we want the name to reflect that action
* add missing "sent" metrics for some messages
* document ignore policy better - already-seen messages are not actually
rebroadcast by libp2p
* skip redundant signature checks for internal validators consistently
The justified and finalized `Checkpoint` are frequently passed around
together. This introduces a new `FinalityCheckpoints` data structure
that combines them into one (sketched below).
Due to the large usage of this structure in fork choice, also took this
opportunity to update fork choice tests to the latest v1.2.0-rc.1 spec.
Many additional tests enabled, some need more work, e.g. EL mock blocks.
Also implemented `discard_equivocations` which was skipped in #3661,
and improved code reuse across fork choice logic while at it.
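A sketch of the combined structure, using the spec definition of `Checkpoint`:
```
type
  Epoch = uint64
  Eth2Digest = array[32, byte]
  Checkpoint = object
    epoch: Epoch
    root: Eth2Digest
  FinalityCheckpoints = object
    justified: Checkpoint
    finalized: Checkpoint
```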
* merge LC db into main BN db
To treat derived LC data similar to derived state caches, merge it into
the main beacon node DB.
* shorten table names, group with lc prefix
* optimistic sync
* flag that initially loaded blocks from database might need execution block root filled in
* return optimistic status in REST calls
* refactor blockslot pruning
* ensure beacon_blocks_by_{root,range} do not provide optimistic blocks
* handle forkchoice head being pre-merge with block being postmerge
* re-enable blocking head updates on validator duties
* fix is_optimistic_candidate_block per spec; don't crash with nil future
* mark blocks sans execution payloads valid during head update
* persist LC data across restarts
With the Altair spec `LightClientUpdate` structure taking its final form
it is finally possible to persist LC data across restarts without having
to worry about data migration due to spec changes. A separate `lcdataV1`
database is created in the `caches` subdirectory to hold known LC data.
A full database with default settings (129 periods) uses <15 MB disk.
* extend LC data DB rationale
* wording
* add `isSupportedBySQLite` helper and explicit return
* remove redundant `return`
All message processing is done in the validation callbacks, so there's
no need to trigger data handlers for messages we publish - the
self-publish is async, and therefore has an associated cost
Separate LC initialization options from the main ChainDAGRef options to
allow ChainDAGRef to treat them as opaque and reduce risk for conflicts
when extending those options in the future.
* remove web3 url prompt in launcher script
The interactive prompt for web3 has outlived its utility: we now load
URLs from command-line params and config files, which prevents the
prompt from correctly detecting when it's needed.
Also, after the merge, a JWT secret will (likely) be needed.
* log notice when web3 url is missing
* fix docs to not mention default that doesn't exist
* fix scripts to properly quote arguments
Merkle proofs tend to have long underlying type definitions, e.g.,
`array[log2trunc(NEXT_SYNC_COMMITTEE_INDEX), Eth2Digest]`. For the
ones used in the LC sync protocol, dedicated types are introduced
to improve readability. Furthermore, the `CachedLightClientBootstrap`
wrapper that solely wrapped a merkle branch is eliminated.
Adds a `--light-client-data-max-periods` option to override the number
of sync committee periods to retain light client data.
Raising it above the default enables archive nodes to serve full data.
Lowering below the default speeds up import times (still no persistence)
This updates `nim-ssz-serialization` to
`3db6cc0f282708aca6c290914488edd832971d61`.
Notable changes:
- Use `uint64` for `GeneralizedIndex`
- Add support for building merkle multiproofs
Combines the LC data configuration options (serve / importMode), the
callbacks (finality / optimistic LC update) as well as the cache storing
light client data, into a new `LightClientDataStore` structure.
Also moves the structure into a light client specific file.
* Initial commit
* Make `events` API spec compliant.
* Add `Eth-Consensus-Version` in responses.
* Bump chronos to get redirect with headers working.
* Add `is_optimistic` field and handling to syncing RestSyncInfo.
* adopt LC REST API with v0 suffix (without proofs)
Adopts the light client data REST API used by Lodestar as defined in
https://github.com/ethereum/beacon-APIs/pull/181 with a v0 suffix.
Requests:
- `/eth/v0/beacon/light_client/bootstrap/{block_root}`
- `/eth/v0/beacon/light_client/updates?start_period={start_period}&count={count}`
- `/eth/v0/beacon/light_client/finality_update`
- `/eth/v0/beacon/light_client/optimistic_update`
HTTP Server-Sent Events (SSE):
- `light_client_finality_update_v0`
- `light_client_optimistic_update_v0`
More work is needed to adopt the proofs endpoint; it is not included.
* initialize event queues
* register event topics
`m.depositsChain.blocks.len` may change during `startEth1Syncing`. If
that happened, an additional check now ensures that `scratchMerkleizer`
was initialized before attempting to use it.
If database access errors are encountered while processing LC data,
track the section that was accessed without errors so that the rest
may be re-indexed later.
Ensures that all intermediate blocks are reported if a small gap is
encountered when downloading optimistic blocks. Gaps may occur when
a block is missed and still downloading, or when EL processing is slow.
If the gap exceeds 1 epoch, the optimistic block stream jumps to the
latest block.
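A sketch of the gap rule, assuming a simplified slot-based stream:
```
const SLOTS_PER_EPOCH = 32'u64

proc nextSlotToReport(lastReported, latest: uint64): uint64 =
  if latest - lastReported > SLOTS_PER_EPOCH:
    latest  # gap exceeds 1 epoch: jump to the latest block
  else:
    lastReported + 1  # small gap: report every intermediate block

echo nextSlotToReport(100, 110)  # 101: fill the gap block by block
echo nextSlotToReport(100, 200)  # 200: jump ahead
```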
* check for and log gossip broadcast failure
* switch notices to warns; update LC variables regardless
* don't both return a `Result` and log the sending error
* add metrics counter for failed-due-to-no-peers and remove unnecessary async
* don't report failure of sync committee messages
* remove redundant metric
* document metric being incremented
The initial sync committee period follows a different finality rule than
the other ones. Instead of the next sync committee finalizing as soon as
`finalizedHead.slot >= period.start_slot`, the Altair start slot has to
be used.
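A sketch of the corrected rule, using mainnet constants:
```
const
  SLOTS_PER_EPOCH = 32'u64
  EPOCHS_PER_SYNC_COMMITTEE_PERIOD = 256'u64
  ALTAIR_FORK_EPOCH = 74240'u64  # mainnet

proc periodStartSlot(period: uint64): uint64 =
  period * EPOCHS_PER_SYNC_COMMITTEE_PERIOD * SLOTS_PER_EPOCH

proc isPeriodFinalized(period, finalizedSlot: uint64): bool =
  # The period containing the Altair fork starts before sync committees
  # exist, so its effective start is the Altair start slot instead.
  let startSlot = max(periodStartSlot(period),
                      ALTAIR_FORK_EPOCH * SLOTS_PER_EPOCH)
  finalizedSlot >= startSlot
```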
For consistency with other options, use a common prefix for light client
data configuration options.
* `--serve-light-client-data` --> `--light-client-data-serve`
* `--import-light-client-data` --> `--light-client-data-import-mode`
No deprecation of the old identifiers, as they were only sparingly used
and all usage can be easily updated without interference.
Corrects an off-by-1 in the reported sync percentage computation.
New logic is based on `SyncQueue.total` and `SyncQueue.progress`
with `pivot` instead of `sq.startSlot`.
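A sketch of the computation; the field meanings are assumptions based on the description above (in the real code they are procs on `SyncQueue`):
```
type SyncQueue = object
  total: uint64     # slots in the targeted range, counted from `pivot`
  progress: uint64  # slots completed so far

proc syncPercentage(sq: SyncQueue): float =
  if sq.total == 0: 100.0
  else: sq.progress.float / sq.total.float * 100.0

echo syncPercentage(SyncQueue(total: 1000, progress: 250))  # 25.0
```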
When launched with `--light-client-enable` the latest blocks are fetched
and optimistic candidate blocks are passed to a callback (log for now).
This helps accelerate syncing in the future (optimistic sync).
Adds a `LightClient` instance to the beacon node as preparation to
accelerate syncing in the future (optimistic sync).
- `--light-client-enable` turns on the feature
- `--light-client-trusted-block-root` configures block to start from
If no block root is configured, light client tracks DAG `finalizedHead`.
Introduces a new library for syncing using libp2p based light client
sync protocol, and adds a new `nimbus_light_client` executable that uses
this library for syncing. The new executable emits log messages when
new beacon block headers are received, and is integrated into testing.
* SSZ `[]` -> `mitem`
* `[]` -> `item`
immutable access via a mutable instance cannot rely on template
overloading, and `[]` cannot be a `func` because of special seq handling
in the compiler.
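A minimal sketch of the resulting access pattern, using a simplified stand-in container:
```
type List = object
  data: seq[int]

proc item(x: List, i: int): int = x.data[i]           # immutable access
proc mitem(x: var List, i: int): var int = x.data[i]  # mutable access

var l = List(data: @[1, 2, 3])
discard l.item(0)  # fine on both mutable and immutable instances
l.mitem(1) = 42    # requires a mutable instance
```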
* remove deprecated JSON-RPC server
* keep the command-line options around as no-ops, temporarily
* service -> server; JSON-RPC is still used elsewhere
* document static vs dynamic range checking requirements
* add `vindices` iterator to iterate over valid validator indices in a
state
* clean up spec comments in general
* fixup
Co-authored-by: tersec <tersec@users.noreply.github.com>
Incorporates the latest changes to the light client sync protocol based
on Devconnect AMS feedback. Note that this breaks compatibility with the
previous prototype, due to changes to data structures and endpoints.
See https://github.com/ethereum/consensus-specs/pull/2802
Since we were not verifying BLS signature in blocks that we produce,
we were failing to notice that some deposits need to be ignored (due
to having an invalid signature). Processing these deposits resulted
in a different ending state after the state transition which caused
our blocks to be rejected by the network.
* Some Web3Signer versions insist on replying with text/plain messages
* When reading blocks, the Web3Signer uses upper-case fork identifiers
instead of the lower-case identifiers used by the Beacon API.
Follows up on https://github.com/status-im/nimbus-eth2/pull/3461, which
ensured that repeated `beaconBlocksByRange` requests are shrunk to
account for potential out-of-band advancements to `safeSlot`, and
applies similar logic to the initial request.
Other changes:
* logtrace can now verify sync committee messages and contributions
* Many unnecessary uses of `pairs()` have been removed for consistency
* Map 40x BN response codes to BeaconNodeStatus.Incompatible in the VC
Other fixes:
* Fix bit rot in the `make prater-dev-deposit` target.
* Correct content-type in the responses of the Nimbus signing node
* Invalid JSON payload was being sent in the web3signer requests
* era file verification
Implement and document era file verification
* era file states now come with block applied for easier verification
* clarify conflicting version handling
* document verification requirements
* remove count from name, use start-era, end-root to discover range
* remove obsolete todo
* abstract out block root loading
Updated outdated presets / configs / REST config to v1.1.10 specs.
- `TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH` and `PROPOSER_SCORE_BOOST` are
not yet used in `eth2-networks`, added configurability as TODOs.
- `MIN_ANCHOR_POW_BLOCK_DIFFICULTY` is no longer needed, put on ignore
list as some Altair devnets still reference it.
* use MAX_CHUNK_SIZE_BELLATRIX for signed Bellatrix blocks
* Update beacon_chain/networking/eth2_network.nim
Co-authored-by: Etan Kissling <etan@status.im>
* localPassC to localPassc
* check against maxChunkSize rather than constant
Co-authored-by: Etan Kissling <etan@status.im>
This PR makes the necessary adjustments to deal with the revamped snappy
API.
In practical terms for nimbus-eth2, there are performance increases to
gossip processing, database reading and writing as well as era file
processing. Exporting `.era` files for example, a snappy-heavy
operation, almost halves in total processing time:
Pre:
```
Average, StdDev, Min, Max, Samples, Test
39.088, 8.735, 23.619, 53.301, 50, tState
237.079, 46.692, 165.620, 355.481, 49, tBlocks
```
Post:
```
All time are ms
Average, StdDev, Min, Max, Samples, Test
25.350, 5.303, 15.351, 41.856, 50, tState
141.238, 24.164, 99.990, 199.329, 49, tBlocks
```
* Add `NoMonitor` flag to stop SyncManager from monitoring sync situation.
* Remove `toleranceValue` and `PeerScoreHeadTooNew`.
Co-authored-by: Etan Kissling <etan@status.im>
* `gnosis-chain` -> `gnosis`
Use same name as LH/Teku throughout
* fixes #3504
* fixes large stack temporary that can cause crashes during genesis
detection
Some upstream repos still need fixes, but this gets us close enough that
style hints can be enabled by default.
In general, "canonical" spellings are preferred even if they violate
nep-1 - this applies in particular to spec-related stuff like
`genesis_validators_root` which appears throughout the codebase.
Validator monitoring improves logging by giving more specific monitoring
information, and can now be seen as complete.
Previously, logging has focused on "Attestation sent" messages which
carry little informational value when things go wrong, and are overly
aggressive when everything works as expected (sending attestations is
the norm).
* lower "Attestation sent" log to `INFO`
* mark 1.7.0 as the start of the validator monitor feature - previous
versions had significant bugs in totals mode
The `pyrmont` testnet has been discontinued.
For experiments, it's still possible to run pyrmont nodes by passing a
genesis/config, but this PR removes the bundled `--network:pyrmont`
option.
* update docs
* remove empty docs
* remove obsolete `eth2-stats` page
`.era` files and Req/Resp protocols use framed formats - aligning the
database with these makes for less recompression work overall as gossip
is sent only once while req/resp repeats (potentially) - this also
allows efficient pruning-to-era where snappy-recompression is the major
cycle thief.
* harden validator API against pre-finalized slot requests
* check `syncHorizon` when responding to validator api requests too far
from `head`
* limit state-id based requests to one epoch ahead of `head`
* put historic data bounds on block/attestation/etc validator production API, preventing them from being used with already-finalized slots
* add validator block smoke tests
* make rest test create a new genesis with the tests running roughly in
the first epoch to allow testing a few more boundary conditions
* era: load blocks and states
Era files contain finalized history and can be thought of as an
alternative source for block and state data that allows clients to avoid
syncing this information from the P2P network - the P2P network is then
used to "top up" the client with the most recent data. They can be
freely shared in the community via whatever means (http, torrent, etc)
and serve as a permanent cold store of consensus data (and, after the
merge, execution data) for history buffs and bean counters alike.
This PR gently introduces support for loading blocks and states in two
cases: block requests from rest/p2p and frontfilling when doing
checkpoint sync.
The era files are used as a secondary source if the information is not
found in the database (a fallback sketched after the list below) -
compared to the database, there are a few key differences:
* the database stores the block indexed by block root while the era file
indexes by slot - the former is used only in rest, while the latter is
used both by p2p and rest.
* when loading blocks from era files, the root is no longer trivially
available - if it is needed, it must either be computed (slow) or cached
(messy) - the good news is that for p2p requests, it is not needed
* in era files, "framed" snappy encoding is used while in the database
we store unframed snappy - for p2p requests, the latter requires
recompression while the former could avoid it
* front-filling is the process of using era files to replace backfilling
- in theory this front-filling could happen from any block and
front-fills with gaps could also be entertained, but our backfilling
algorithm cannot take advantage of this because there's no (simple) way
to tell it to "skip" a range.
* front-filling, as implemented, is a bit slow (10s to load mainnet): we
load the full BeaconState for every era to grab the roots of the blocks
- it would be better to partially load the state - as such, it would
also be good to be able to partially decompress snappy blobs
* lookups from REST via root are served by first looking up a block
summary in the database, then using the slot to load the block data from
the era file - however, there needs to be an option to create the
summary table from era files to fully support historical queries
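A sketch of the secondary-source fallback; the helper names are hypothetical and the real lookup returns SSZ block data:
```
# Hypothetical helpers; "" stands in for "not found".
proc getBlockFromDb(slot: uint64): string = ""        # synced blocks
proc getBlockFromEra(slot: uint64): string = "block"  # finalized history

proc getBlock(slot: uint64): string =
  # The database remains the source of truth; era files serve only as
  # a secondary source for finalized history missing from the db.
  result = getBlockFromDb(slot)
  if result == "":
    result = getBlockFromEra(slot)
```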
To test this, `ncli_db` has an era file exporter: the files it creates
should be placed in an `era` folder next to `db` in the data directory.
What's interesting in particular about this setup is that `db` remains
as the source of truth for security purposes - it stores the latest
synced head root which in turn determines where a node "starts" its
consensus participation - the era directory however can be freely shared
between nodes / people without any (significant) security implications,
assuming the era files are consistent / not broken.
There's lots of future improvements to be had:
* we can drop the in-memory `BlockRef` index almost entirely - at this
point, resident memory usage of Nimbus should drop to a cool 500-600 mb
* we could serve era files via REST trivially: this would drop backfill
times to whatever time it takes to download the files - unlike the
current implementation that downloads block by block, downloading an era
at a time almost entirely cuts out request overhead
* we can "reasonably" recreate detailed state history from almost any
point in time, turning an O(slot) process into O(1) effectively - we'll
still need caches and indices to do this with sufficient efficiency for
the rest api, but at least it cuts the whole process down to minutes
instead of hours, for arbitrary points in time
* CI: ignore failures with Nim-1.6 (temporary)
* test fixes
Co-authored-by: Ștefan Talpalaru <stefantalpalaru@yahoo.com>
This extends the `--serve-light-client-data` launch option to serve
locally collected light client data via libp2p.
Backfill of historic best `LightClientUpdate` is not yet implemented.
See https://github.com/ethereum/consensus-specs/pull/2802
During operation as a light client, the chain DAG is not available.
As a preparation, the beacon node initialization logic is divided into
parts depending on the presence of the chain DAG, and parts that are
always available (including a future light client mode).
This is a pure code move without semantic changes.
Gracefully handles the new failure modes recently introduced to the DAG
as part of https://github.com/status-im/nimbus-eth2/pull/3513
Data that is deemed to exist but fails to load leads to an error log to
avoid suppressing logic errors accidentally. In `verifyFinalization`
mode, the assertions remain active.