- add support for setting protocol handlers with `{.raises.}` annotation
- fix: valueOr and withValue utilities
- fix: remove explicit param from GossipSubParams constructor
Add support for using an era file for the initial checkpoint block.
This should also avoid an error when the beacon node is restarted
before the backfill process has made any progress (#6059).
The `<` function to compare peers was not exported, leading to the same
peer being acquired over and over again until kicked. `mixin` doesn't
pull it into `peerCmp` without the `*` export, and with the export no
`mixin` is needed.
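A minimal sketch of the export issue, using a hypothetical `Peer` type
that only holds a score (not the real networking type):
```nim
type Peer* = object
  score*: int

# Without the `*` marker on `<`, the symbol is invisible to other modules,
# so a generic comparator relying on `mixin` cannot resolve it for `Peer`.
proc `<`*(a, b: Peer): bool =
  a.score < b.score
```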
The `wss_sim` has not been properly maintained since Bellatrix. The missing
functionality is now added, including:
- Bellatrix: Connect to an EL for execution payload production
- Capella: Correct withdrawals processing, which is mandatory
- Deneb: Dump blob sidecars into the output directory
See https://ethresear.ch/t/insecura-my-consensus-for-the-pyrmont-network/11833
Iterating peers should only yield peers present in the registry;
otherwise `nil` pointers are returned and, depending on the comparison
function, this will break, see #6149.
When initializing from a state that's not aligned to an epoch boundary,
an earlier state is loaded that's epoch aligned, and subsequently topped
up with the missing blocks. `dag.headSyncCommittee` is initialized prior
to topping up the missing blocks, though. If the sync committee changes
while applying the blocks (e.g., a sync committee period boundary hits),
the cached information becomes unlinked from `dag.head`, leading to
valid blocks based on that chain being rejected. To fix this, move cache
initialization after the top up with blocks. This has been observed on
Goerli by initializing from 7919502 and attempting to top up 7920111.
The block gets rejected with an invalid state root on nodes that have
restarted after setting 7920111 as head, while it gets accepted by all
other nodes. Error message is `block: state root verification failed`.
The incorrect initialization behaviour was introduced in #4592, before
which the sync committee cache was initialized after applying blocks.
The fallback when the blobless quarantine contains a block with all
blobs modifies the collection while iterating over it, potentially
triggering an assertion if that path is reached. Using a second loop to
process this situation resolves that.
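A minimal sketch of the two-pass pattern, using a plain `Table` as a
stand-in for the blobless quarantine:
```nim
import std/tables

var quarantine = {1: "block A", 2: "block B"}.toTable

# First pass: collect the keys that are ready; do not mutate while iterating.
var ready: seq[int]
for key, blck in quarantine:
  if blck.len > 0:   # stand-in for "all blobs for this block are available"
    ready.add key

# Second pass: modify the collection outside of the iteration.
for key in ready:
  quarantine.del key
```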
`batchVerify`'s precondition is a non-empty signature list:
```nim
if input.len == 0:
# Spec precondition
return false
```
This means that in eras without any blocks (as has happened on Goerli),
calling it leads to era files being reported as invalid.
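A hedged sketch of the workaround, with `verifyEraSignatures` standing
in for the real era verification code: skip the batch call entirely
when the era contains no blocks.
```nim
proc verifyEraSignatures(signatures: seq[string]): bool =
  if signatures.len == 0:
    # An era without any blocks has nothing to verify and is not invalid;
    # skip `batchVerify`, whose precondition rejects empty input.
    return true
  # ... batchVerify(signatures) ...
  true
```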
Using a dedicated branch for researching the effectiveness of split view
scenario handling simplifies testing and avoids having partial work on
`unstable`. If we want, we can reintroduce it under a `--debug` flag at
a later time. But for now, Goerli is a rare opportunity to test this,
maybe just for another week or so.
- https://github.com/status-im/infra-nimbus/pull/179
In a split view situation, the canonical chain may only be served by a
tiny number of peers, and branches may span long durations. Minority
branches may still have a large weight from attestations and should
be discovered. To assist with that, add a branch discovery module that
specifically targets peers with unknown histories and downloads from
them, in addition to the sync manager, which handles popular branches.
There are situations where all states in the `blockchain_dag` are
occupied and cannot be borrowed.
- headState: Many assumptions in the code that it cannot be advanced
- clearanceState: Resets every time a new block gets imported, including
blocks from non-canonical branches
- epochRefState: Used even more frequently than clearanceState
This means that during the catch-up mechanism, where the head state is
slowly advanced to the wall clock to catch up on validator duties when
the canonical head is way behind non-canonical heads, we cannot use any
of the three existing states. In that situation, Nimbus already
consumes an increased amount of memory due to all the `BlockRef`, fork
choice states and so on, so experience is degraded. It seems reasonable
to allocate a fourth state temporarily during that mechanism, until a
new proposal can be made on the canonical chain.
Note that currently, on `unstable`, proposals _do_ happen every couple
of hours because sync manager doesn't manage to discover additional
heads in a split-view scenario on Goerli. However, with the branch
discovery module, new blocks are discovered all the time, and the
clearanceState may no longer be borrowed as it is reset to a different
branch too often.
The extra state could also find other uses in the future, e.g., for
incremental computations as in reindexing the database, or online
collection of historical light client data.
Use the same eviction policy for blocks as is already the case for
blobs. FIFO makes more sense because it favors keeping ancestors of
blocks, which need to be applied to the DAG before their children
become eligible.
`eth2_network` forgets to descore peers when opening a connection times
out. It only descores when opening the connection succeeds and there is
a subsequent error. The caller cannot distinguish the two cases, so
ensure that the descore is also applied if the request fails during its
initial portion.
When quarantining a block from block processor, we should also keep a
copy of its blobs. Otherwise, this involves more network roundtrips
to obtain information we already have. This is in line with how blobs
arrive from gossip and request manager sources. The existing flow does
not work when applying blocks from quarantine, which is addressed here.
Blobs are cached from gossip and other sources for all orphans, not just
those specifically tagged as `blobless`. `blobless` only means that they
are actively fetched from the network. The `MaxBlobs` should be aligned
to match `MaxOrphans`. Note that blobs are tiny compared to blocks, so
this isn't a huge memory hog.
* handle case of unreachable block in `is_optimstic` helper
When a non-canonical block is still in the DB, it can be accessed via
`BlockId`, but `BlockRef` may be unavailable if the block was not
properly cleaned when it got orphaned. Report it as optimistic.
* `template` -> `func`
When checking for `MissingParent`, it may be that the parent block was
already discovered as part of a prior run. In that case, it can be
loaded from storage and processed without having to rediscover the
entire branch from the network. This is similar to #6112 but for blocks
that are discovered via gossip / sync manager instead of via the
request manager.
* Add some duration metering.
Refactor some log statements.
Rework sync contribution deadline waiting.
Add some cancellation reporting handlers.
* Make all validator's `shortLog` become `validatorLog`.
Optimize some logs with logScope.
* Add `raises`.
* More log statements polishing.
During sync, we can skip the `blobSidecarsByRange` request when there
are no blocks with `kzg_commitments` in the blocks data. Avoids running
into throttling from peers during long periods of non-finality.
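A hedged sketch of the check, with an illustrative summary type rather
than the real block structures:
```nim
type BlockSummary = object
  kzgCommitmentCount: int   # number of `kzg_commitments` in the block body

func hasBlobs(blocks: openArray[BlockSummary]): bool =
  ## Only issue `blobSidecarsByRange` if at least one block commits to blobs.
  for blck in blocks:
    if blck.kzgCommitmentCount > 0:
      return true
  false
```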
Each individual blob currently uses as much quota from the network limit
as an entire block does, 128 items per second shared across all peers.
Blobs are 128 KB each instead of up to several MB and are simpler to
encode. There can be multiple per block (6 currently), so allow 2000
blobs per second across all peers. That decreases the cost per block
from `3125 + 3125 * blobs.len` quota (= `[3125, 21875]`) to a lower
`3125 + 200 * blobs.len` quota (= `[3125, 4325]`), accounting for the
slight increase in data transfer and encoding time.
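The arithmetic behind those bounds, as a quick sanity check (constants
taken from the description above):
```nim
const
  blockQuota = 3125        # request quota consumed by one block
  oldBlobQuota = 3125      # previous per-blob cost (same as a full block)
  newBlobQuota = 200       # new per-blob cost
  maxBlobsPerBlock = 6

echo blockQuota + oldBlobQuota * maxBlobsPerBlock   # 21875 (old upper bound)
echo blockQuota + newBlobQuota * maxBlobsPerBlock   # 4325 (new upper bound)
```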
During sync, sometimes the same block gets encountered and added to
quarantine multiple times. If its parent is already known, quarantine
incorrectly registers it as missing, leading to re-download. This can
be fixed by registering the parent's deepest missing parent recursively.
Also increase the stickiness of `missing`. We only perform 4 attempts
within ~16 seconds before giving up. Very frequently, this is not
enough and there is no progress until the sync manager kicks in, even
on Holesky.
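A hedged sketch of what "deepest missing parent" means here, with
string roots and a plain `Table` standing in for the quarantine:
```nim
import std/tables

func deepestMissingParent(
    parents: Table[string, string],  # child root -> parent root of quarantined blocks
    root: string): string =
  ## Walk up through ancestors that are themselves quarantined and return
  ## the first one that is actually missing, so that this root is registered
  ## instead of the block's direct (already quarantined) parent.
  var cur = root
  while cur in parents:
    cur = parents[cur]
  cur
```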
When restarting beacon node, orphaned blocks remain in the database but
on startup, only the canonical chain as selected by fork choice loads.
When a new block is discovered that builds on top of an orphaned block,
the orphaned block is re-downloaded using sync/request manager, despite
it already being present on disk. Such queries can be answered locally
to improve discovery speed of alternate forks.
During lag spike, e.g., from state replays, peer count can temporarily
drop significantly. Should not have to wait another 60 minutes in that
situation just to be back where one started.
The `clearanceState` points to the latest resolved block, regardless of
whether that block is canonical according to fork choice. If the chain
is stalled and we want to prepare for resuming validator duties, we
need a recent state according to fork choice to avoid lag spikes and
missing slot timings.
Nimbus currently stops performing validator duties if the blockchain
does not progress for `node.config.syncHorizon` slots. This means that
the chain won't recover because no new blocks are proposed. To fix that,
continue performing validator duties if no progress is registered for a
long time, and none of our peers is indicating any progress.
#6087 introduced a subtle change to `nim-web3` resulting in `Gwei`
being serialized differently than before. Using a `distinct` type for
`Gwei` improves type safety and avoids such problems in the future.
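A minimal sketch of the idea (not the actual nim-web3 definition):
```nim
type Gwei = distinct uint64

proc `+`(a, b: Gwei): Gwei {.borrow.}
proc `$`(a: Gwei): string {.borrow.}

let fee = Gwei(1_000_000) + Gwei(500)
echo fee                    # 1000500
# echo Gwei(1) + 2'u64      # does not compile: Gwei and uint64 no longer mix
```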
On Goerli there are some instances of long streaks of empty epochs due
to different branches being built in parallel. They sometimes lead to
`Request for pruned historical state` logs requiring a BN restart to
resolve. Address that by trying to restore states from the entire non-
finalized history, to avoid losing sync in such situations.
When a config defines a different `INACTIVITY_SCORE_RECOVERY_RATE` than
the default, `process_inactivity_updates` uses an incorrect rate ever
since #2710 when `INACTIVITY_SCORE_RECOVERY_RATE` became configurable.
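A hedged sketch of the corrected behavior (type and field names are
illustrative, not the Nimbus ones):
```nim
type RuntimeConfig = object
  INACTIVITY_SCORE_RECOVERY_RATE: uint64

func recoverInactivityScore(score: uint64, cfg: RuntimeConfig): uint64 =
  # Outside an inactivity leak, the score recovers by the *configured*
  # rate rather than the compile-time default, clamped at zero.
  score - min(cfg.INACTIVITY_SCORE_RECOVERY_RATE, score)
```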
When there are long periods of non-finality, `nodeIsViableForHead` has
been observed to consume significant time as it repeatedly walks the
non-finalized check graph as part of determining what heads are eligible
for fork choice. Caching the result resolves that.
Overall, it may still be better to prune fork choice more aggressively
when finality advances, to fully avoid the case specced out using the
linear scan. The current implementation is very close to spec, though,
so such a change should not be introduced without thorough testing.
The simple cache should allow significantly better performance on Goerli
while the network is still supported (Mid April).
In `block_dag` there is a max depth of 100 years configured to detect
internal inconsistencies, e.g., circular references. As `BlockRef` was
changed long ago to only reflect the non-finalized chain segment, the
theoretically supported max depth can be reduced and simplified.
We don't need the `cfg` right now, but it makes sense to have the object
passed to the clock so that the API doesn't break if we want to support
configurable `SECONDS_PER_SLOT`. As the `libnimbus_lc` library is not
yet widely used, better to add the argument now than later.
The `syncHorizon` describes the number of empty slots before the beacon
node considers itself to be out of sync. There are two places where we
currently set this to 50 slots, but it makes more sense to base it on
wall time, e.g., the 10 minutes that the default of 50 slots is derived
from.
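Roughly, the derivation would look like this (constant names are
illustrative):
```nim
const
  SECONDS_PER_SLOT = 12           # mainnet timing; other networks may differ
  SYNC_HORIZON_SECONDS = 10 * 60  # the 10 minutes the old fixed value encodes

let syncHorizon = SYNC_HORIZON_SECONDS div SECONDS_PER_SLOT
echo syncHorizon                  # 50 slots with 12-second slots
```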
* allow specifying get_proposer_reward block root at state.slot
* Add consensus_block_value calculation.
* Address review comments.
* Post-rebase adjustments.
* Use proper state to calculate consensus block value.
* Revert "allow specifying get_proposer_reward block root at state.slot"
This reverts commit 9fef9a8199f63056060527ac2531acc3b0ed8dcb.
* Fix post-revert problems.
Return back to Gwei.
* Adding test which is not working.
* Do not use test suite if it does not have post-state.
* Add debug logging.
* Increase logging to track sources of balance changes.
* Fix sync committee rewards/penalties calculation.
* Revert "Increase logging to track sources of balance changes."
This reverts commit 32feb20f2fdb66521401710866cd59ecc9951ef8.
* Adopt new vision to block rewards.
* Add block produce logging to VC.
* Remove rewards.nim.
* Eliminate toWei changes.
* Improve UInt256 shortLog.
* Fix conversion procedure.
* Address review comments.
* Fix test.
* Revert "Fix test."
This reverts commit 4948b2c1ec.
---------
Co-authored-by: tersec <tersec@users.noreply.github.com>
Co-authored-by: Etan Kissling <etan@status.im>
Provide additional context in the `syncEth1Chain tick` debug log to aid
understanding of the flow when debugging, at a more precise level than
the metrics alone provide.
Corrects a regression from #5998 that led to crashes in #6046.
In `trustedNodeSync` mode, the config does not contain genesis keys,
so attempting to load from them is a `Defect`.
Fix the `/eth/v1/beacon/deposit_snapshot` API to produce proper EIP-4881
compatible `DepositTreeSnapshot` responses. The endpoint used to expose
a Nimbus-specific database internal format.
Also fix trusted node sync to consume properly formatted EIP-4881 data
with `--with-deposit-snapshot`, and `--finalized-deposit-tree-snapshot`
beacon node launch option to use the EIP-4881 data. Further ensure that
`ncli_testnet` produces EIP-4881 formatted data for interoperability.
EIP-4881 was never correctly implemented; the `DepositTreeSnapshot`
structure has nothing to do with its actual definition. Reflect that
by renaming the type to a Nimbus-specific `DepositContractSnapshot`,
so that an actual EIP-4881 implementation can use the correct names.
- https://eips.ethereum.org/EIPS/eip-4881#specification
Notably, `DepositTreeSnapshot` contains a compressed sequence in
`finalized`, only containing the minimally required intermediate roots.
That also explains the incorrect REST response reported in #5508.
The non-canonical representation was introduced in #4303 and is also
persisted in the database. We'll have to maintain it for a while.
`nextActionWait` currently shows `n/a` if only a proposal is scheduled
but no attestation, e.g., the attestation was already made for the
current epoch and the validator is exiting next epoch so doesn't have
another attestation lined up. It's an edge case, but it's still more
correct to also log `nextActionWait` if only a proposal is scheduled.
When using `--external-beacon-api-url`, one has to accompany it with
either `--trusted-block-root` or `--trusted-state-root`. If neither is
specified, we can fall back to a deeply finalized, noncontroversial
block root. For networks that started post-Altair, e.g., Holesky, the
genesis block root fulfills that requirement, as in, it is implicitly
trusted. Therefore, if only `--external-beacon-api-url` is provided
without any `--trusted-block-root` or `--trusted-state-root`, use the
genesis block root if it is a viable starting point (post-Altair).
```
build/nimbus_beacon_node \
--network=holesky \
--data-dir="$HOME/Downloads/nimbus/data/holesky" \
"--external-beacon-api-url=http://unstable.holesky.beacon-api.nimbus.team" \
--tcp-port=9010 --udp-port=9010 \
--rest --log-level=DEBUG \
--no-el
```
The exports in `eth2_discovery` produce deprecation warnings as they
refer to `close`, `closeWait` and so on. It turns out that they are not
necessary at all: the `Eth2DiscoveryProtocol` is already exported two
lines above using the `*` marker...
During the refactoring in #5959, some implicit `return`s were
overlooked, resulting in spurious `err()` being returned without a
message.
```
{"lvl":"WRN","ts":"2024-02-26 10:12:20.469+00:00","msg":"Beacon nodes report different configuration values","reason":"","service":"fallback_service","node":"http://127.0.0.1:9303[Nimbus/v24.2.1-4e9bc7-stateofus]","node_index":0,"node_roles":"AGBSDT"}
```
Correct the helpers to return an explicit result in all exhaustive
cases so that this cannot happen by accident anymore.
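A hedged sketch of the pattern, assuming the `results` package and
illustrative helper names:
```nim
import results

proc checkValue(expected, got: string): Result[void, string] =
  # Every branch produces an explicit value; nothing falls through to an
  # implicit, message-less `err()`.
  if expected == got:
    ok()
  else:
    err("expected " & expected & ", got " & got)
```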
* add EIP-7044 support to keymanager API
When trying to sign `VoluntaryExit` via keymanager API, the logic is not
yet aware of EIP-7044 (part of Deneb). This patch adds missing EIP-7044
support to the keymanager API as well.
As part of this, the VC needs to become aware about:
- `CAPELLA_FORK_VERSION`: To correctly form the EIP-7044 signing domain.
The fork schedule does not indicate which of the results, if any,
corresponds to Capella.
- `CAPELLA_FORK_EPOCH`: To detect whether Capella was scheduled.
If a BN does not have it in its config while other BNs have it,
this leads to a log if Capella has not activated yet, or marks the BN
as incompatible if Capella already activated.
- `DENEB_FORK_EPOCH`: To check whether EIP-7044 logic should be used.
Related PRs:
- #5120 added support for processing EIP-7044 `VoluntaryExit` messages
as part of the state transition functions (tested by EF spec tests).
- #5953 synced the support from #5120 to gossip validation.
- #5954 added support to the `nimbus_beacon_node deposits exit` command.
- #5956 contains an alternative generic version of `VCForkConfig`.
* address reviewer feedback: letter case, module location, double lookup
---------
Co-authored-by: cheatfate <eugene.kabanov@status.im>
* Update beacon_chain/rpc/rest_constants.nim
* move `VCRuntimeConfig` back to `rest_types`
---------
Co-authored-by: cheatfate <eugene.kabanov@status.im>
* fix `getForkVersion` helper
---------
Co-authored-by: cheatfate <eugene.kabanov@status.im>
In #5120, the `nimbus_beacon_node deposits exit` command was updated for
compatibility with EIP-7044, which forces signatures to be made using
`CAPELLA_FORK_VERSION` regardless of the `VoluntaryExit`'s `epoch` after
Deneb is activated.
This update had a regression, as an older mechanism was used to fetch
`RuntimeConfig`, resulting in an encoding issue (#5362). This was then
fixed in #5370, restoring general `deposits exit` functionality.
However, the logic from #5120 has another flaw, as it uses an incorrect
fork version based on the pre-Deneb logic even after Deneb and EIP-7044
are activated. Fix this now, so that `deposits exit` continues to work
correctly after Deneb activates.
In #5120, EIP-7044 support was added to the state transition function
to force `CAPELLA_FORK_VERSION` to be used when validating
`VoluntaryExit` messages, irrespective of their `epoch`.
In #5637, similar logic was added when batch verifying BLS signatures,
which is used during gossip validation (libp2p gossipsub, and req/resp).
However, that logic did not match the one introduced in #5120: it only
used `CAPELLA_FORK_VERSION` when a `VoluntaryExit`'s `epoch` was set to
a value `>= CAPELLA_FORK_EPOCH`. Otherwise, `BELLATRIX_FORK_VERSION`
would still be used when validating the `VoluntaryExit`, e.g., with
`epoch` set to `0`, as is the case in this Holesky block:
- https://holesky.beaconcha.in/slot/1076985#voluntary-exits
Extracting the correct logic from #5120 into a function, and reusing it
when verifying BLS signatures fixes this issue, and also leverages the
exhaustive EF test suite that covers the (correct) #5120 logic.
This fix only affects networks that have EIP-7044 applied (post-Deneb).
Without the fix, Deneb blocks with a `VoluntaryExit` with `epoch` set to
`< CAPELLA_FORK_EPOCH` incorrectly fail to validate despite being valid.
Incorrect blocks that contain a malicious `VoluntaryExit` with `epoch`
set to `< CAPELLA_FORK_EPOCH` and signed using `BELLATRIX_FORK_VERSION`
_would_ pass the BLS verification stage, but subsequently fail the state
transition logic. Such blocks would still correctly be labeled invalid.
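A hedged sketch of the selection rule (names and types are
illustrative, not the actual Nimbus helper, and forks before Bellatrix
are glossed over):
```nim
type
  Epoch = uint64
  Version = array[4, byte]

func voluntaryExitSigningForkVersion(
    exitEpoch, currentEpoch: Epoch,
    capellaForkEpoch, denebForkEpoch: Epoch,
    bellatrixVersion, capellaVersion: Version): Version =
  if currentEpoch >= denebForkEpoch:
    capellaVersion        # EIP-7044: pinned to Capella once Deneb is active
  elif exitEpoch >= capellaForkEpoch:
    capellaVersion        # pre-Deneb: fork version at the exit's `epoch`
  else:
    bellatrixVersion
```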
Fix a regression from #5842 where `Eth-Execution-Payload-Value` is
parsed into `consensusValue` instead of `Eth-Consensus-Block-Value`. We
don't use those values for now, but fixing this avoids hard-to-debug
bugs later.