nimbus-eth2/beacon_chain/spec/state_transition.nim


# beacon_chain
# Copyright (c) 2018-2023 Status Research & Development GmbH
# Licensed and distributed under either of
# * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT).
# * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0).
# at your option. This file may not be copied, modified, or distributed except according to those terms.
# State transition, as described in
# https://github.com/ethereum/consensus-specs/blob/v1.4.0-beta.3/specs/phase0/beacon-chain.md#beacon-chain-state-transition-function
#
# The entry point is `state_transition` which is at the bottom of the file!
#
# General notes about the code:
# * Weird styling - the sections taken from the spec use Python styling while
# the others use NEP-1 - this helps when grepping for spec identifiers
# * When updating the code, add TODO sections to mark where there are clear
# improvements to be made - other than that, keep things similar to spec unless
# motivated by security or performance considerations
#
# Performance notes:
# * The state transition is used in two contexts: to verify that incoming blocks
# are correct and to replay existing blocks from database. Incoming blocks
# are processed one-by-one while replay happens multiple blocks at a time.
# * Although signature verification is the slowest operation in the state
# transition, we skip it during replay - this is also when we repeatedly
# call the state transition, making the non-signature part of the code
# important from a performance point of view.
# * It's important to start with a prefilled cache - generating the shuffled
# list of active validators is generally very slow.
# * Throughout, the code is affected by inefficient for loop codegen, meaning
# that we have to iterate over indices and pick out the value manually:
# https://github.com/nim-lang/Nim/issues/14421
# * Throughout, we're affected by inefficient `let` borrowing, meaning we
# often have to take the address of a sequence item due to the above - look
# for `let ... = unsafeAddr sequence[idx]`
# * Throughout, we're affected by the overloading rules that prefer a `var`
# overload to a non-var overload - look for `asSeq()` - when the `var`
# overload is used, the hash tree cache is cleared, which, aside from being
# slow itself, causes additional processing to recalculate the Merkle tree.
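#
# A rough illustration (not part of the module) of the borrowing pattern the
# notes above refer to - taking the address of a sequence item instead of
# letting `let` copy the whole element (`current_epoch` here stands for a
# value computed by the caller):
#
#   for idx in 0 ..< state.validators.len:
#     let validator = unsafeAddr state.validators[idx]  # borrow, no copy
#     if validator[].exit_epoch <= current_epoch:
#       ...
#
# Similarly, read-only accesses to SSZ lists go through the non-`var`
# `asSeq()` overload so that the hash tree cache is not invalidated.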
{.push raises: [].}
import
chronicles,
stew/results,
../extras,
"."/[
beaconstate, eth2_merkleization, forks, helpers, signatures,
state_transition_block, state_transition_epoch, validator]
export results, extras
logScope:
topics = "state_transition"
# https://github.com/ethereum/consensus-specs/blob/v1.4.0-alpha.3/specs/phase0/beacon-chain.md#beacon-chain-state-transition-function
proc verify_block_signature(
state: ForkyBeaconState, signed_block: SomeForkySignedBeaconBlock):
Result[void, cstring] =
let
proposer_index = signed_block.message.proposer_index
if proposer_index >= state.validators.lenu64:
return err("block: invalid proposer index")
if not verify_block_signature(
state.fork, state.genesis_validators_root, signed_block.message.slot,
signed_block.root, state.validators[proposer_index].pubkey,
signed_block.signature):
return err("block: signature verification failed")
ok()
# https://github.com/ethereum/consensus-specs/blob/v1.4.0-beta.3/specs/phase0/beacon-chain.md#beacon-chain-state-transition-function
func verifyStateRoot(
state: ForkyBeaconState,
blck: ForkyBeaconBlock | ForkySigVerifiedBeaconBlock):
Result[void, cstring] =
# This is inlined in state_transition(...) in spec.
let state_root = hash_tree_root(state)
if state_root != blck.state_root:
err("block: state root verification failed")
else:
ok()
func verifyStateRoot(
state: ForkyBeaconState, blck: ForkyTrustedBeaconBlock):
Result[void, cstring] =
# This is inlined in state_transition(...) in spec.
ok()
type
RollbackProc* = proc() {.gcsafe, noSideEffect, raises: [].}
RollbackHashedProc*[T] =
proc(state: var T) {.gcsafe, noSideEffect, raises: [].}
RollbackForkedHashedProc* = RollbackHashedProc[ForkedHashedBeaconState]
func noRollback*() =
trace "Skipping rollback of broken state"
# Hashed-state transition functions
# ---------------------------------------------------------------
# https://github.com/ethereum/consensus-specs/blob/v1.4.0-alpha.3/specs/phase0/beacon-chain.md#beacon-chain-state-transition-function
func process_slot*(
state: var ForkyBeaconState, pre_state_root: Eth2Digest) =
# `process_slot` is the first stage of per-slot processing - it is run for
# every slot, including epoch slots - it does not however update the slot
# number! `pre_state_root` refers to the state root of the incoming
# state before any slot processing has been done.
# Cache state root
state.state_roots[state.slot mod SLOTS_PER_HISTORICAL_ROOT] = pre_state_root
# Cache latest block header state root
if state.latest_block_header.state_root == ZERO_HASH:
state.latest_block_header.state_root = pre_state_root
# Cache block root
state.block_roots[state.slot mod SLOTS_PER_HISTORICAL_ROOT] =
hash_tree_root(state.latest_block_header)
func clear_epoch_from_cache(cache: var StateCache, epoch: Epoch) =
cache.total_active_balance.del epoch
cache.shuffled_active_validator_indices.del epoch
for slot in epoch.slots():
cache.beacon_proposer_indices.del slot
# https://github.com/ethereum/consensus-specs/blob/v1.4.0-beta.3/specs/phase0/beacon-chain.md#beacon-chain-state-transition-function
proc advance_slot(
cfg: RuntimeConfig,
state: var ForkyBeaconState, previous_slot_state_root: Eth2Digest,
flags: UpdateFlags, cache: var StateCache, info: var ForkyEpochInfo):
Result[void, cstring] =
# Do the per-slot and potentially the per-epoch processing, then bump the
# slot number - we've now arrived at the slot state on top of which a block
# optionally can be applied.
process_slot(state, previous_slot_state_root)
info.clear()
# Process epoch on the start slot of the next epoch
let is_epoch_transition = (state.slot + 1).is_epoch
if is_epoch_transition:
# Note: Genesis epoch = 0, no need to test if before Genesis
? process_epoch(cfg, state, flags, cache, info)
clear_epoch_from_cache(cache, (state.slot + 1).epoch)
state.slot += 1
ok()
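# Worked example of the timing above, assuming mainnet SLOTS_PER_EPOCH = 32:
# calling `advance_slot` on a state at slot 31 sees `(state.slot + 1).is_epoch`
# as true, so `process_epoch` runs and the cache entries for the epoch being
# entered are dropped before the slot is bumped to 32, the first slot of the
# new epoch.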
func noRollback*(state: var phase0.HashedBeaconState) =
trace "Skipping rollback of broken phase 0 state"
func noRollback*(state: var altair.HashedBeaconState) =
trace "Skipping rollback of broken Altair state"
func noRollback*(state: var bellatrix.HashedBeaconState) =
trace "Skipping rollback of broken Bellatrix state"
from ./datatypes/capella import
ExecutionPayload, HashedBeaconState, SignedBLSToExecutionChangeList,
asSigVerified
func noRollback*(state: var capella.HashedBeaconState) =
trace "Skipping rollback of broken Capella state"
from ./datatypes/deneb import HashedBeaconState
func noRollback*(state: var deneb.HashedBeaconState) =
trace "Skipping rollback of broken Deneb state"
func maybeUpgradeStateToAltair(
cfg: RuntimeConfig, state: var ForkedHashedBeaconState) =
# Both process_slots() and state_transition_block() call this, so only run it
# once by checking for existing fork.
if getStateField(state, slot).epoch == cfg.ALTAIR_FORK_EPOCH and
state.kind == ConsensusFork.Phase0:
let newState = upgrade_to_altair(cfg, state.phase0Data.data)
state = (ref ForkedHashedBeaconState)(
kind: ConsensusFork.Altair,
altairData: altair.HashedBeaconState(
root: hash_tree_root(newState[]), data: newState[]))[]
func maybeUpgradeStateToBellatrix(
cfg: RuntimeConfig, state: var ForkedHashedBeaconState) =
# Both process_slots() and state_transition_block() call this, so only run it
# once by checking for existing fork.
if getStateField(state, slot).epoch == cfg.BELLATRIX_FORK_EPOCH and
state.kind == ConsensusFork.Altair:
let newState = upgrade_to_bellatrix(cfg, state.altairData.data)
state = (ref ForkedHashedBeaconState)(
kind: ConsensusFork.Bellatrix,
bellatrixData: bellatrix.HashedBeaconState(
root: hash_tree_root(newState[]), data: newState[]))[]
func maybeUpgradeStateToCapella(
cfg: RuntimeConfig, state: var ForkedHashedBeaconState) =
# Both process_slots() and state_transition_block() call this, so only run it
# once by checking for existing fork.
if getStateField(state, slot).epoch == cfg.CAPELLA_FORK_EPOCH and
state.kind == ConsensusFork.Bellatrix:
let newState = upgrade_to_capella(cfg, state.bellatrixData.data)
state = (ref ForkedHashedBeaconState)(
kind: ConsensusFork.Capella,
capellaData: capella.HashedBeaconState(
root: hash_tree_root(newState[]), data: newState[]))[]
func maybeUpgradeStateToDeneb(
cfg: RuntimeConfig, state: var ForkedHashedBeaconState) =
# Both process_slots() and state_transition_block() call this, so only run it
# once by checking for existing fork.
if getStateField(state, slot).epoch == cfg.DENEB_FORK_EPOCH and
state.kind == ConsensusFork.Capella:
let newState = upgrade_to_deneb(cfg, state.capellaData.data)
state = (ref ForkedHashedBeaconState)(
kind: ConsensusFork.Deneb,
denebData: deneb.HashedBeaconState(
root: hash_tree_root(newState[]), data: newState[]))[]
func maybeUpgradeState*(
cfg: RuntimeConfig, state: var ForkedHashedBeaconState) =
cfg.maybeUpgradeStateToAltair(state)
cfg.maybeUpgradeStateToBellatrix(state)
cfg.maybeUpgradeStateToCapella(state)
cfg.maybeUpgradeStateToDeneb(state)
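# Note that the upgrade helpers run in fork order and each one re-checks the
# (possibly just-upgraded) `state.kind`, so a single `maybeUpgradeState` call
# also covers configurations where several fork epochs coincide - sketch,
# assuming a test config where ALTAIR_FORK_EPOCH == BELLATRIX_FORK_EPOCH and
# the state has just reached that epoch:
#
#   cfg.maybeUpgradeState(state)  # phase0 -> Altair -> Bellatrix in one call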
proc process_slots*(
cfg: RuntimeConfig, state: var ForkedHashedBeaconState, slot: Slot,
cache: var StateCache, info: var ForkedEpochInfo, flags: UpdateFlags):
Result[void, cstring] =
if not (getStateField(state, slot) < slot):
if slotProcessed notin flags or getStateField(state, slot) != slot:
return err("process_slots: cannot rewind state to past slot")
# Update the state so its slot matches that of the block
while getStateField(state, slot) < slot:
withState(state):
withEpochInfo(forkyState.data, info):
? advance_slot(
cfg, forkyState.data, forkyState.root, flags, cache, info)
if skipLastStateRootCalculation notin flags or
forkyState.data.slot < slot:
# Don't update the state root for the block's slot if a block is going to
# be processed on top of this slot state
forkyState.root = hash_tree_root(forkyState.data)
maybeUpgradeState(cfg, state)
ok()
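# Illustrative usage (not part of the module), assuming the caller holds a
# RuntimeConfig `cfg`, a ForkedHashedBeaconState `state` and a target `slot`
# strictly ahead of the state:
#
#   var
#     cache: StateCache
#     info: ForkedEpochInfo
#   process_slots(cfg, state, slot, cache, info, {}).expect(
#     "target slot must be ahead of the state")
#
# Afterwards `getStateField(state, slot) == slot`, and the state will have
# been upgraded to a newer fork if a fork epoch was crossed on the way.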
proc state_transition_block_aux(
cfg: RuntimeConfig,
state: var ForkyHashedBeaconState,
signedBlock: SomeForkySignedBeaconBlock,
cache: var StateCache, flags: UpdateFlags): Result[void, cstring] =
# Block updates - these happen when there's a new block being suggested
# by the block proposer. Every actor in the network will update its state
# according to the contents of this block - but first they will validate
# that the block is sane.
if skipBlsValidation notin flags:
? verify_block_signature(state.data, signedBlock)
trace "state_transition: processing block, signature passed",
signature = shortLog(signedBlock.signature),
blockRoot = shortLog(signedBlock.root)
? process_block(cfg, state.data, signedBlock.message, flags, cache)
if skipStateRootValidation notin flags:
? verifyStateRoot(state.data, signedBlock.message)
# only blocks currently being produced have an empty state root - we use a
# separate function for those
doAssert not signedBlock.message.state_root.isZero,
"see makeBeaconBlock for block production"
state.root = signedBlock.message.state_root
ok()
func noRollback*(state: var ForkedHashedBeaconState) =
trace "Skipping rollback of broken state"
proc state_transition_block*(
cfg: RuntimeConfig,
state: var ForkedHashedBeaconState,
signedBlock: SomeForkySignedBeaconBlock,
cache: var StateCache, flags: UpdateFlags,
rollback: RollbackForkedHashedProc): Result[void, cstring] =
## `rollback` is called if the transition fails and the given state has been
## partially changed. If a temporary state was given to `state_transition`,
## it is safe to use `noRollback` and leave it broken, else the state
## object should be rolled back to a consistent state. If the transition fails
## before the state has been updated, `rollback` will not be called.
doAssert not rollback.isNil, "use noRollback if it's ok to mess up state"
let res = withState(state):
when consensusFork == type(signedBlock).kind:
state_transition_block_aux(cfg, forkyState, signedBlock, cache, flags)
else:
err("State/block fork mismatch")
if res.isErr():
rollback(state)
res
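# Illustrative usage (not part of the module): when the transition runs on a
# scratch copy of the state that is discarded on failure, `noRollback` is an
# acceptable rollback handler - `scratchState`, `signedBlock` and `cache` are
# provided by the caller:
#
#   let res = state_transition_block(
#     cfg, scratchState, signedBlock, cache, {}, noRollback)
#   if res.isErr:
#     debug "Block could not be applied", error = res.error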
proc state_transition*(
cfg: RuntimeConfig,
state: var ForkedHashedBeaconState,
signedBlock: SomeForkySignedBeaconBlock,
cache: var StateCache, info: var ForkedEpochInfo, flags: UpdateFlags,
rollback: RollbackForkedHashedProc): Result[void, cstring] =
## Apply a block to the state, advancing the slot counter as necessary. The
## given state must be of a lower slot, or, in case the `slotProcessed` flag
## is set, can be the slot state of the same slot as the block (where the
## slot state is the state without any block applied). To create a slot state,
## advance the state corresponding to the parent block using
## `process_slots`.
##
## To run the state transition function in preparation for block production,
## use `makeBeaconBlock` instead.
##
## `rollback` is called if the transition fails and the given state has been
## partially changed. If a temporary state was given to `state_transition`,
## it is safe to use `noRollback` and leave it broken, else the state
## object should be rolled back to a consistent state. If the transition fails
## before the state has been updated, `rollback` will not be called.
? process_slots(
cfg, state, signedBlock.message.slot, cache, info,
flags + {skipLastStateRootCalculation})
state_transition_block(
cfg, state, signedBlock, cache, flags, rollback)
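# Illustrative usage (not part of the module), assuming `state` is the state
# of the block's parent (or an ancestor at a lower slot) and `signedBlock`
# matches the fork that `state` will have at the block's slot:
#
#   var
#     cache: StateCache
#     info: ForkedEpochInfo
#   if (let res = state_transition(
#       cfg, state, signedBlock, cache, info, {}, noRollback); res.isErr):
#     warn "Block failed to apply", error = res.error
#
# With an empty flag set both the proposer signature and the resulting state
# root are checked; replay of already-verified blocks typically passes
# `{skipBlsValidation, skipStateRootValidation}` instead.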
func partialBeaconBlock*(
cfg: RuntimeConfig,
state: var ForkyHashedBeaconState,
proposer_index: ValidatorIndex,
randao_reveal: ValidatorSig,
eth1_data: Eth1Data,
graffiti: GraffitiBytes,
attestations: seq[Attestation],
deposits: seq[Deposit],
validator_changes: BeaconBlockValidatorChanges,
sync_aggregate: SyncAggregate,
execution_payload: ForkyExecutionPayloadForSigning
): auto =
const consensusFork = typeof(state).kind
# https://github.com/ethereum/consensus-specs/blob/v1.3.0/specs/phase0/validator.md#preparing-for-a-beaconblock
var res = consensusFork.BeaconBlock(
slot: state.data.slot,
proposer_index: proposer_index.uint64,
parent_root: state.latest_block_root,
body: consensusFork.BeaconBlockBody(
randao_reveal: randao_reveal,
eth1_data: eth1_data,
graffiti: graffiti,
proposer_slashings: validator_changes.proposer_slashings,
attester_slashings: validator_changes.attester_slashings,
attestations: List[Attestation, Limit MAX_ATTESTATIONS](attestations),
deposits: List[Deposit, Limit MAX_DEPOSITS](deposits),
voluntary_exits: validator_changes.voluntary_exits))
# https://github.com/ethereum/consensus-specs/blob/v1.4.0-beta.1/specs/altair/validator.md#preparing-a-beaconblock
when consensusFork >= ConsensusFork.Altair:
res.body.sync_aggregate = sync_aggregate
# https://github.com/ethereum/consensus-specs/blob/v1.3.0/specs/bellatrix/validator.md#block-proposal
when consensusFork >= ConsensusFork.Bellatrix:
res.body.execution_payload = execution_payload.executionPayload
# https://github.com/ethereum/consensus-specs/blob/v1.4.0-beta.3/specs/capella/validator.md#block-proposal
when consensusFork >= ConsensusFork.Capella:
res.body.bls_to_execution_changes =
validator_changes.bls_to_execution_changes
# https://github.com/ethereum/consensus-specs/blob/v1.3.0/specs/deneb/validator.md#constructing-the-beaconblockbody
when consensusFork >= ConsensusFork.Deneb:
res.body.blob_kzg_commitments = execution_payload.kzgs
res
proc makeBeaconBlock*(
cfg: RuntimeConfig,
state: var ForkedHashedBeaconState,
proposer_index: ValidatorIndex,
randao_reveal: ValidatorSig,
eth1_data: Eth1Data,
graffiti: GraffitiBytes,
attestations: seq[Attestation],
deposits: seq[Deposit],
validator_changes: BeaconBlockValidatorChanges,
sync_aggregate: SyncAggregate,
executionPayload: ForkyExecutionPayloadForSigning,
rollback: RollbackForkedHashedProc,
cache: var StateCache,
# TODO:
# `verificationFlags` is needed only in tests and can be
# removed if we don't use invalid signatures there
verificationFlags: UpdateFlags,
transactions_root: Opt[Eth2Digest],
execution_payload_root: Opt[Eth2Digest],
kzg_commitments: Opt[KzgCommitments]):
Result[ForkedBeaconBlock, cstring] =
## Create a block for the given state. The latest block applied to it will
## be used for the parent_root value, and the slot will be taken from
## state.slot, meaning process_slots must be called up to the slot for which
## the block is to be created.
template makeBeaconBlock(kind: untyped): Result[ForkedBeaconBlock, cstring] =
# To create a block, we'll first apply a partial block to the state, skipping
# some validations.
var blck =
ForkedBeaconBlock.init(
partialBeaconBlock(
cfg, state.`kind Data`, proposer_index, randao_reveal, eth1_data,
graffiti, attestations, deposits, validator_changes, sync_aggregate,
executionPayload))
let res = process_block(
cfg, state.`kind Data`.data, blck.`kind Data`.asSigVerified(),
verificationFlags, cache)
if res.isErr:
rollback(state)
return err(res.error())
# Override for Builder API
if transactions_root.isSome and execution_payload_root.isSome:
withState(state):
when consensusFork < ConsensusFork.Capella:
# Nimbus doesn't support pre-Capella builder API
discard
elif consensusFork == ConsensusFork.Capella:
forkyState.data.latest_execution_payload_header.transactions_root =
transactions_root.get
# https://github.com/ethereum/consensus-specs/blob/v1.4.0-beta.3/specs/capella/beacon-chain.md#beaconblockbody
# Effectively hash_tree_root(ExecutionPayload) with the beacon block
# body, with the execution payload replaced by the execution payload
# header. htr(payload) == htr(payload header), so substitute.
forkyState.data.latest_block_header.body_root = hash_tree_root(
[hash_tree_root(randao_reveal),
hash_tree_root(eth1_data),
hash_tree_root(graffiti),
hash_tree_root(validator_changes.proposer_slashings),
hash_tree_root(validator_changes.attester_slashings),
hash_tree_root(List[Attestation, Limit MAX_ATTESTATIONS](attestations)),
hash_tree_root(List[Deposit, Limit MAX_DEPOSITS](deposits)),
hash_tree_root(validator_changes.voluntary_exits),
hash_tree_root(sync_aggregate),
execution_payload_root.get,
hash_tree_root(validator_changes.bls_to_execution_changes)])
elif consensusFork == ConsensusFork.Deneb:
forkyState.data.latest_execution_payload_header.transactions_root =
transactions_root.get
when executionPayload is deneb.ExecutionPayloadForSigning:
# https://github.com/ethereum/consensus-specs/blob/v1.4.0-beta.1/specs/deneb/beacon-chain.md#beaconblockbody
forkyState.data.latest_block_header.body_root = hash_tree_root(
[hash_tree_root(randao_reveal),
hash_tree_root(eth1_data),
hash_tree_root(graffiti),
hash_tree_root(validator_changes.proposer_slashings),
hash_tree_root(validator_changes.attester_slashings),
hash_tree_root(List[Attestation, Limit MAX_ATTESTATIONS](attestations)),
hash_tree_root(List[Deposit, Limit MAX_DEPOSITS](deposits)),
hash_tree_root(validator_changes.voluntary_exits),
hash_tree_root(sync_aggregate),
execution_payload_root.get,
hash_tree_root(validator_changes.bls_to_execution_changes),
hash_tree_root(kzg_commitments.get)
])
else:
raiseAssert "Attempt to use non-Deneb payload with post-Deneb state"
else:
static: raiseAssert "Unreachable"
state.`kind Data`.root = hash_tree_root(state.`kind Data`.data)
blck.`kind Data`.state_root = state.`kind Data`.root
ok(blck)
const payloadFork = typeof(executionPayload).kind
when payloadFork == ConsensusFork.Bellatrix:
case state.kind
of ConsensusFork.Phase0: makeBeaconBlock(phase0)
of ConsensusFork.Altair: makeBeaconBlock(altair)
of ConsensusFork.Bellatrix: makeBeaconBlock(bellatrix)
else: raiseAssert "Attempt to use Bellatrix payload with post-Bellatrix state"
elif payloadFork == ConsensusFork.Capella:
case state.kind
of ConsensusFork.Capella: makeBeaconBlock(capella)
else: raiseAssert "Attempt to use Capella payload with non-Capella state"
elif payloadFork == ConsensusFork.Deneb:
case state.kind
of ConsensusFork.Deneb: makeBeaconBlock(deneb)
else: raiseAssert "Attempt to use Deneb payload with non-Deneb state"
else:
{.error: "Unsupported fork".}
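# Illustrative proposal flow (not part of the module), assuming the state has
# already been advanced to the proposal slot with `process_slots` and that all
# names on the right-hand side are supplied by the caller:
#
#   let blck = makeBeaconBlock(
#     cfg, state, proposer_index, randao_reveal, eth1_data, graffiti,
#     attestations, deposits, validator_changes, sync_aggregate,
#     executionPayload, noRollback, cache,
#     verificationFlags = {},
#     transactions_root = Opt.none(Eth2Digest),
#     execution_payload_root = Opt.none(Eth2Digest),
#     kzg_commitments = Opt.none(KzgCommitments)).expect("valid block inputs")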
# workaround for https://github.com/nim-lang/Nim/issues/20900 rather than have
# these be default arguments
proc makeBeaconBlock*(
    cfg: RuntimeConfig, state: var ForkedHashedBeaconState,
    proposer_index: ValidatorIndex, randao_reveal: ValidatorSig,
    eth1_data: Eth1Data, graffiti: GraffitiBytes,
    attestations: seq[Attestation], deposits: seq[Deposit],
    validator_changes: BeaconBlockValidatorChanges,
    sync_aggregate: SyncAggregate,
    executionPayload: ForkyExecutionPayloadForSigning,
    rollback: RollbackForkedHashedProc, cache: var StateCache):
    Result[ForkedBeaconBlock, cstring] =
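  ## Convenience overload: forwards to the full `makeBeaconBlock` with an empty
  ## verification-flag set and without pre-computed transaction/payload roots or
  ## KZG commitments.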
  makeBeaconBlock(
    cfg, state, proposer_index, randao_reveal, eth1_data, graffiti,
    attestations, deposits, validator_changes, sync_aggregate,
    executionPayload, rollback, cache,
    verificationFlags = {}, transactions_root = Opt.none Eth2Digest,
    execution_payload_root = Opt.none Eth2Digest,
    kzg_commitments = Opt.none KzgCommitments)

proc makeBeaconBlock*(
    cfg: RuntimeConfig, state: var ForkedHashedBeaconState,
    proposer_index: ValidatorIndex, randao_reveal: ValidatorSig,
    eth1_data: Eth1Data, graffiti: GraffitiBytes,
    attestations: seq[Attestation], deposits: seq[Deposit],
    validator_changes: BeaconBlockValidatorChanges,
    sync_aggregate: SyncAggregate,
    executionPayload: ForkyExecutionPayloadForSigning,
    rollback: RollbackForkedHashedProc,
    cache: var StateCache, verificationFlags: UpdateFlags):
    Result[ForkedBeaconBlock, cstring] =
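  ## Convenience overload: like the one above, but lets the caller supply
  ## explicit `verificationFlags`; pre-computed roots and KZG commitments are
  ## still left unset.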
  makeBeaconBlock(
    cfg, state, proposer_index, randao_reveal, eth1_data, graffiti,
    attestations, deposits, validator_changes, sync_aggregate,
    executionPayload, rollback, cache,
    verificationFlags = verificationFlags,
    transactions_root = Opt.none Eth2Digest,
    execution_payload_root = Opt.none Eth2Digest,
    kzg_commitments = Opt.none KzgCommitments)
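
# Illustrative call shapes for the two overloads above (a hedged sketch; the
# argument values, `payload` and `flags` are hypothetical and would come from
# the caller's block-production context):
#
#   # resolves to the first overload: verification flags default to {}
#   let res1 = makeBeaconBlock(
#     cfg, state, proposer_index, randao_reveal, eth1_data, graffiti,
#     attestations, deposits, validator_changes, sync_aggregate,
#     payload, rollback, cache)
#
#   # resolves to the second overload: caller-supplied UpdateFlags
#   let res2 = makeBeaconBlock(
#     cfg, state, proposer_index, randao_reveal, eth1_data, graffiti,
#     attestations, deposits, validator_changes, sync_aggregate,
#     payload, rollback, cache, flags)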