It's not quite clear why this condition was triggered in the local
simulation, but it seems a viable scenario after the Keymanager API
is integrated in the validator client.
The user can temporarily remove all validator keys from a running
client before adding another set of keys.
Adds the `--web3-url` launch argument to `nimbus_light_client` to enable
driving the EL with the optimistic head obtained from LC sync protocol.
This will keep issuing `newPayload` / `forkChoiceUpdated` requests for
new blocks, marking them as optimistic. `ZERO_HASH` is reported as the
finalized block for now.
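As a rough sketch of the fork choice state such a driver would send (field
names per the Engine API; treating the optimistic head as both head and safe
block here is purely illustrative):

```nim
import std/json

const ZERO_HASH =
  "0x0000000000000000000000000000000000000000000000000000000000000000"

proc forkchoiceState(optimisticHead: string): JsonNode =
  ## Parameters for an `engine_forkchoiceUpdatedV1` call driven by the
  ## optimistic head; the safe block choice is illustrative only.
  %*{
    "headBlockHash": optimisticHead,
    "safeBlockHash": optimisticHead,
    "finalizedBlockHash": ZERO_HASH
  }

when isMainModule:
  # placeholder head hash, just to show the shape of the payload
  echo forkchoiceState(ZERO_HASH)
```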
#3864 introduced a regression by turning on `requireAllFields` globally
for JSON parsing. Certain endpoints such as `RestSyncInfo` have optional
fields that do not parse correctly without additional changes. This is
reverted for now to restore previous behaviour and unblock CI testing.
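A conceptual illustration of the issue, assuming a response type with an
optional field (the actual `RestSyncInfo` definition differs): with
`requireAllFields` enabled globally, a payload that omits the optional field
fails to parse.

```nim
import std/options

# Hypothetical sketch; `is_optimistic` is optional in the Beacon API
# syncing response and may be absent from older nodes' replies.
type RestSyncInfoSketch = object
  head_slot: uint64
  sync_distance: uint64
  is_optimistic: Option[bool]
```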
Other changes:
* The Keymanager error responses differ from the Beacon API responses.
'keymanagerApiError' replaces the former usages of 'jsonError'.
* Return status codes 401 and 403 for authorization errors, in accordance
with the spec.
* Eliminate inconsistencies in the REST JSON parsing. Some of the code
paths allowed missing fields.
* Added logging of serialization failure details at DEBUG level.
* Track the SyncCommittee period in slot end logs
* Update beacon_chain/nimbus_beacon_node.nim
Co-authored-by: Etan Kissling <etan@status.im>
Removes a few extra-ambitious templates to make `self` updates explicit,
and moves the `FinalityCheckpoints` type from `base` to `helpers` as it
is an additional Nimbus specific type not defined by spec.
a notice in the log is enough - we don't want the REST API to return an
error in this case because that makes the validator client think
something is seriously wrong (like the BN or message being broken)
Whether new blocks/attestations/etc are produced internally or received
via REST, their journey through the node is the same - to ensure that
they get the same treatment (logging, metrics, processing), this PR
moves the routing to a dedicated module and fixes several small
differences that existed before.
* `xxxValidator` -> `processMessageName` - the processor also was adding
messages to pools, so we want the name to reflect that action
* add missing "sent" metrics for some messages
* document ignore policy better - already-seen messages are not actually
rebroadcast by libp2p
* skip redundant signature checks for internal validators consistently
The justified and finalized `Checkpoint` are frequently passed around
together. This introduces a new `FinalityCheckpoints` data structure that
combines them into one.
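A minimal sketch of the combined structure, with `Checkpoint` and
`Eth2Digest` reduced to their spec fields:

```nim
type
  Eth2Digest = object
    data: array[32, byte]

  Checkpoint = object
    epoch: uint64
    root: Eth2Digest

  FinalityCheckpoints = object
    justified: Checkpoint
    finalized: Checkpoint
```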
Due to the large usage of this structure in fork choice, also took this
opportunity to update fork choice tests to the latest v1.2.0-rc.1 spec.
Many additional tests enabled, some need more work, e.g. EL mock blocks.
Also implemented `discard_equivocations` which was skipped in #3661,
and improved code reuse across fork choice logic while at it.
* merge LC db into main BN db
To treat derived LC data similar to derived state caches, merge it into
the main beacon node DB.
* shorten table names, group with lc prefix
* optimistic sync
* flag that initially loaded blocks from database might need execution block root filled in
* return optimistic status in REST calls
* refactor blockslot pruning
* ensure beacon_blocks_by_{root,range} do not provide optimistic blocks
* handle forkchoice head being pre-merge with block being post-merge
* re-enable blocking head updates on validator duties
* fix is_optimistic_candidate_block per spec; don't crash with nil future
* mark blocks sans execution payloads valid during head update
* persist LC data across restarts
With the Altair spec `LightClientUpdate` structure taking its final form
it is finally possible to persist LC data across restarts without having
to worry about data migration due to spec changes. A separate `lcdataV1`
database is created in the `caches` subdirectory to hold known LC data.
A full database with default settings (129 periods) uses <15 MB of disk space.
* extend LC data DB rationale
* wording
* add `isSupportedBySQLite` helper and explicit return
* remove redundant `return`
All message processing is done in the validation callbacks, so there's
no need to trigger data handlers for messages we publish - the
self-publish is async, and therefore has an associated cost
Separate LC initialization options from the main ChainDAGRef options to
allow ChainDAGRef to treat them as opaque and reduce risk for conflicts
when extending those options in the future.
* remove web3 url prompt in launcher script
The interactive prompt for web3 has outlived its utility as we now load
URLs from command-line parameters and config files, preventing the prompt
from correctly detecting when it's needed.
Also, after the merge, a JWT secret will (likely) be needed.
* log notice when web3 url is missing
* fix docs to not mention default that doesn't exist
* fix scripts to properly quote arguments
Merkle proofs tend to have long underlying type definitions, e.g.,
`array[log2trunc(NEXT_SYNC_COMMITTEE_INDEX), Eth2Digest]`. For the
ones used in the LC sync protocol, dedicated types are introduced
to improve readability. Furthermore, the `CachedLightClientBootstrap`
wrapper that solely wrapped a merkle branch is eliminated.
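For illustration, a self-contained sketch of one such dedicated type; the
`log2trunc` helper and the generalized index value follow the Altair spec
conventions, while the real definitions live in the spec modules:

```nim
type Eth2Digest = object
  data: array[32, byte]

func log2trunc(x: uint64): int =
  ## Floor of log2, same convention as the spec helper
  var v = x
  while v > 1:
    v = v shr 1
    inc result

const NEXT_SYNC_COMMITTEE_INDEX = 55'u64  # generalized index per the Altair spec

type NextSyncCommitteeBranch =
  array[log2trunc(NEXT_SYNC_COMMITTEE_INDEX), Eth2Digest]
```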
Adds a `--light-client-data-max-periods` option to override the number
of sync committee periods to retain light client data.
Raising it above the default enables archive nodes to serve full data.
Lowering it below the default speeds up import times (still no persistence).
This updates `nim-ssz-serialization` to
`3db6cc0f282708aca6c290914488edd832971d61`.
Notable changes:
- Use `uint64` for `GeneralizedIndex`
- Add support for building merkle multiproofs
Combines the LC data configuration options (serve / importMode), the
callbacks (finality / optimistic LC update) as well as the cache storing
light client data, into a new `LightClientDataStore` structure.
Also moves the structure into a light client specific file.
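A rough sketch of the grouping this describes; the field and enum names are
illustrative rather than the actual Nimbus identifiers:

```nim
type
  LightClientDataImportMode = enum
    importNone, importOnlyNew, importFull

  LightClientDataStore = object
    # configuration options
    serve: bool
    importMode: LightClientDataImportMode
    # callbacks invoked when new data becomes available
    onFinalityUpdate: proc () {.gcsafe.}
    onOptimisticUpdate: proc () {.gcsafe.}
    # the cache holding recent light client data would also live here
```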
* Initial commit
* Make `events` API spec compliant.
* Add `Eth-Consensus-Version` in responses.
* Bump chronos to get redirect with headers working.
* Add `is_optimistic` field and handling to syncing RestSyncInfo.
* adopt LC REST API with v0 suffix (without proofs)
Adopts the light client data REST API used by Lodestar as defined in
https://github.com/ethereum/beacon-APIs/pull/181 with a v0 suffix.
Requests:
- `/eth/v0/beacon/light_client/bootstrap/{block_root}`
- `/eth/v0/beacon/light_client/updates?start_period={start_period}&count={count}`
- `/eth/v0/beacon/light_client/finality_update`
- `/eth/v0/beacon/light_client/optimistic_update`
HTTP Server-Sent Events (SSE):
- `light_client_finality_update_v0`
- `light_client_optimistic_update_v0`
More work is needed to adopt the proofs endpoint; it is not included yet.
* initialize event queues
* register event topics
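For example, a hypothetical client-side request against one of the endpoints
above (host and the 5052 REST port are assumptions, not part of the API):

```nim
import std/httpclient

# Fetch the latest finality update from a locally running beacon node
let client = newHttpClient()
echo client.getContent(
  "http://127.0.0.1:5052/eth/v0/beacon/light_client/finality_update")
```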
`m.depositsChain.blocks.len` may change during `startEth1Syncing`. If
that happened, an additional check now ensures that `scratchMerkleizer`
was initialized before attempting to use it.
If database access errors are encountered while processing LC data,
track the section that was accessed without errors so that the rest
can be re-indexed later.
Ensures that all intermediate blocks are reported if a small gap is
encountered when downloading optimistic blocks. Gaps may occur when
a block is missed and still downloading, or when EL processing is slow.
If the gap exceeds 1 epoch, the optimistic block stream jumps to the latest block.
* check for and log gossip broadcast failure
* switch notices to warns; update LC variables regardless
* don't both return a Result and log sending error
* add metrics counter for failed-due-to-no-peers and remove unnecessary async
* don't report failure of sync committee messages
* remove redundant metric
* document metric being incremented
The initial sync committee period follows a different finality rule than
the other ones: instead of the next sync committee finalizing as soon as
`finalizedHead.slot >= period.start_slot`, the Altair start slot has to be
used.
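A sketch of the adjusted check, using illustrative names and mainnet
constants (256 epochs per period, 32 slots per epoch); the actual
implementation differs:

```nim
const SLOTS_PER_SYNC_COMMITTEE_PERIOD = 256'u64 * 32'u64

func periodStartSlot(period: uint64): uint64 =
  period * SLOTS_PER_SYNC_COMMITTEE_PERIOD

func isNextSyncCommitteeFinalized(
    period, finalizedSlot, altairStartSlot: uint64): bool =
  ## For the period containing the Altair fork, the relevant boundary is the
  ## fork slot itself rather than the period's nominal start slot.
  finalizedSlot >= max(periodStartSlot(period), altairStartSlot)
```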
For consistency with other options, use a common prefix for light client
data configuration options.
* `--serve-light-client-data` --> `--light-client-data-serve`
* `--import-light-client-data` --> `--light-client-data-import-mode`
No deprecation of the old identifiers as they were only sparingly used
and all usage can be easily updated without interference.
Corrects an off-by-1 in the reported sync percentage computation.
New logic is based on `SyncQueue.total` and `SyncQueue.progress`
with `pivot` instead of `sq.startSlot`.
When launched with `--light-client-enable` the latest blocks are fetched
and optimistic candidate blocks are passed to a callback (log for now).
This helps accelerate syncing in the future (optimistic sync).
Adds a `LightClient` instance to the beacon node as preparation to
accelerate syncing in the future (optimistic sync).
- `--light-client-enable` turns on the feature
- `--light-client-trusted-block-root` configures block to start from
If no block root is configured, light client tracks DAG `finalizedHead`.
Introduces a new library for syncing using libp2p based light client
sync protocol, and adds a new `nimbus_light_client` executable that uses
this library for syncing. The new executable emits log messages when
new beacon block headers are received, and is integrated into testing.
* SSZ `[]` -> `mitem`
* `[]` -> `item`
Immutable access via a mutable instance cannot rely on template
overloading, and `[]` cannot be a `func` because of special `seq` handling
in the compiler.
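A minimal sketch of the resulting naming pattern (the real SSZ `List` type
is considerably more involved):

```nim
type List[T] = object
  data: seq[T]

func item[T](x: List[T], i: int): T = x.data[i]           # immutable access
proc mitem[T](x: var List[T], i: int): var T = x.data[i]  # mutable access
```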
* remove deprecated JSON-RPC server
* keep the command-line options around as no-ops, temporarily
* service -> server; JSON-RPC is still used elsewhere
* document static vs dynamic range checking requirements
* add `vindices` iterator to iterate over valid validator indices in a
state (sketched below)
* clean up spec comments in general
* fixup
Co-authored-by: tersec <tersec@users.noreply.github.com>
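A reduced sketch of what such an iterator amounts to, with the state and
index types stripped down to placeholders:

```nim
type
  ValidatorIndex = distinct uint32
  Validator = object
  BeaconState = object
    validators: seq[Validator]

iterator vindices(state: BeaconState): ValidatorIndex =
  ## Yield every validator index that is valid for this state
  for i in 0 ..< state.validators.len:
    yield ValidatorIndex(i.uint32)
```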
Incorporates the latest changes to the light client sync protocol based
on Devconnect AMS feedback. Note that this breaks compatibility with the
previous prototype, due to changes to data structures and endpoints.
See https://github.com/ethereum/consensus-specs/pull/2802
Since we were not verifying BLS signatures in blocks that we produce,
we were failing to notice that some deposits need to be ignored (due
to having an invalid signature). Processing these deposits resulted
in a different ending state after the state transition which caused
our blocks to be rejected by the network.
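A reduced sketch of the spec rule at play (names hypothetical): a deposit for
a new pubkey whose signature does not verify must be skipped rather than
treated as an error that invalidates the block.

```nim
type DepositCheck = object
  pubkeyKnown: bool     # pubkey already belongs to an existing validator (top-up)
  signatureValid: bool  # outcome of the BLS deposit signature check

func depositApplies(d: DepositCheck): bool =
  ## Top-ups skip the signature check; a new-validator deposit with an
  ## invalid signature is silently ignored instead of failing the block.
  d.pubkeyKnown or d.signatureValid
```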
* Some Web3Signer versions insist replying with text/plain messages
* When reading blocks, the Web3Signer uses upper-case fork identifiers
instead of lower-case identifiers like the Beacon API.
Follows up on https://github.com/status-im/nimbus-eth2/pull/3461 which
ensured that repeated `beaconBlocksByRange` requests get shrunk to
account for potential out-of-band advancements to `safeSlot`, with
similar logic for the initial request.
Other changes:
* logtrace can now verify sync committee messages and contributions
* Many unnecessary uses of pairs() have been removed for consistency
* Map 40x BN response codes to BeaconNodeStatus.Incompatible in the VC
Other fixes:
* Fix bit rot in the `make prater-dev-deposit` target.
* Correct content-type in the responses of the Nimbus signing node
* Invalid JSON payload was being sent in the web3signer requests
* era file verification
Implement and document era file verification
* era file states now come with block applied for easier verification
* clarify conflicting version handling
* document verification requirements
* remove count from name, use start-era, end-root to discover range
* remove obsolete todo
* abstract out block root loading
Updated outdated presets / configs / REST config to v1.1.10 specs.
- `TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH` and `PROPOSER_SCORE_BOOST` are
not yet used in `eth2-networks`, added configurability as TODOs.
- `MIN_ANCHOR_POW_BLOCK_DIFFICULTY` is no longer needed, put on ignore
list as some Altair devnets still reference it.
* use MAX_CHUNK_SIZE_BELLATRIX for signed Bellatrix blocks
* Update beacon_chain/networking/eth2_network.nim
Co-authored-by: Etan Kissling <etan@status.im>
* localPassC to localPassc
* check against maxChunkSize rather than constant
Co-authored-by: Etan Kissling <etan@status.im>
This PR makes the necessary adjustments to deal with the revamped snappy
API.
In practical terms for nimbus-eth2, there are performance increases to
gossip processing, database reading and writing as well as era file
processing. Exporting `.era` files for example, a snappy-heavy
operation, almost halves in total processing time:
Pre:
```
Average, StdDev, Min, Max, Samples, Test
39.088, 8.735, 23.619, 53.301, 50, tState
237.079, 46.692, 165.620, 355.481, 49, tBlocks
```
Post:
```
All time are ms
Average, StdDev, Min, Max, Samples, Test
25.350, 5.303, 15.351, 41.856, 50, tState
141.238, 24.164, 99.990, 199.329, 49, tBlocks
```
* Add `NoMonitor` flag to stop SyncManager from monitoring sync situation.
* Remove `toleranceValue` and `PeerScoreHeadTooNew`.
Co-authored-by: Etan Kissling <etan@status.im>
* `gnosis-chain` -> `gnosis`
Use same name as LH/Teku throughout
* fixes #3504
* fixes large stack temporary that can cause crashes during genesis
detection
Some upstream repos still need fixes, but this gets us close enough that
style hints can be enabled by default.
In general, "canonical" spellings are preferred even if they violate
NEP-1 - this applies in particular to spec-related stuff like
`genesis_validators_root` which appears throughout the codebase.
Validator monitoring improves logging by giving more specific monitoring
information, and can now be seen as complete.
Previously, logging has focused on "Attestation sent" messages which
carry little informational value when things go wrong, and are overly
aggressive when everything works as expected (sending attestations is
the norm).
* lower "Attestation sent" log to `INFO`
* mark 1.7.0 as the start of the validator monitor feature - previous
versions had significant bugs in totals mode
The `pyrmont` testnet has been discontinued.
For experiments, it's still possible to run pyrmont nodes by passing a
genesis/config, but this PR removes the bundled `--network:pyrmont`
option.
* update docs
* remove empty docs
* remove obsolete `eth2-stats` page
`.era` files and Req/Resp protocols use framed formats - aligning the
database with these makes for less recompression work overall as gossip
is sent only once while req/resp repeats (potentially) - this also
allows efficient pruning-to-era where snappy-recompression is the major
cycle thief.
* harden validator API against pre-finalized slot requests
* check `syncHorizon` when responding to validator api requests too far
from `head`
* limit state-id based requests to one epoch ahead of `head`
* put historic data bounds on block/attestation/etc validator production API, preventing them from being used with already-finalized slots
* add validator block smoke tests
* make rest test create a new genesis with the tests running roughly in
the first epoch to allow testing a few more boundary conditions
* era: load blocks and states
Era files contain finalized history and can be thought of as an
alternative source for block and state data that allows clients to avoid
syncing this information from the P2P network - the P2P network is then
used to "top up" the client with the most recent data. They can be
freely shared in the community via whatever means (http, torrent, etc)
and serve as a permanent cold store of consensus data (and, after the
merge, execution data) for history buffs and bean counters alike.
This PR gently introduces support for loading blocks and states in two
cases: block requests from rest/p2p and front-filling when doing
checkpoint sync.
The era files are used as a secondary source if the information is not
found in the database - compared to the database, there are a few key
differences:
* the database stores the block indexed by block root while the era file
indexes by slot - the former is used only in rest, while the latter is
used both by p2p and rest.
* when loading blocks from era files, the root is no longer trivially
available - if it is needed, it must either be computed (slow) or cached
(messy) - the good news is that for p2p requests, it is not needed
* in era files, "framed" snappy encoding is used while in the database
we store unframed snappy - for p2p requests, the latter requires
recompression while the former could avoid it
* front-filling is the process of using era files to replace backfilling
- in theory this front-filling could happen from any block and
front-fills with gaps could also be entertained, but our backfilling
algorithm cannot take advantage of this because there's no (simple) way
to tell it to "skip" a range.
* front-filling, as implemented, is a bit slow (10s to load mainnet): we
load the full BeaconState for every era to grab the roots of the blocks
- it would be better to partially load the state - as such, it would
also be good to be able to partially decompress snappy blobs
* lookups from REST via root are served by first looking up a block
summary in the database, then using the slot to load the block data from
the era file - however, there needs to be an option to create the
summary table from era files to fully support historical queries
To test this, `ncli_db` has an era file exporter: the files it creates
should be placed in an `era` folder next to `db` in the data directory.
What's interesting in particular about this setup is that `db` remains
as the source of truth for security purposes - it stores the latest
synced head root which in turn determines where a node "starts" its
consensus participation - the era directory however can be freely shared
between nodes / people without any (significant) security implications,
assuming the era files are consistent / not broken.
There are lots of future improvements to be had:
* we can drop the in-memory `BlockRef` index almost entirely - at this
point, resident memory usage of Nimbus should drop to a cool 500-600 mb
* we could serve era files via REST trivially: this would drop backfill
times to whatever time it takes to download the files - unlike the
current implementation that downloads block by block, downloading an era
at a time almost entirely cuts out request overhead
* we can "reasonably" recreate detailed state history from almost any
point in time, turning an O(slot) process into O(1) effectively - we'll
still need caches and indices to do this with sufficient efficiency for
the rest api, but at least it cuts the whole process down to minutes
instead of hours, for arbitrary points in time
* CI: ignore failures with Nim-1.6 (temporary)
* test fixes
Co-authored-by: Ștefan Talpalaru <stefantalpalaru@yahoo.com>