nimbus-eth1/nimbus/common/common.nim

# Nimbus
# Copyright (c) 2022-2024 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
# * MIT license ([LICENSE-MIT](LICENSE-MIT))
# at your option.
# This file may not be copied, modified, or distributed except according to
# those terms.
{.push raises: [].}
import
chronicles,
eth/trie/trie_defs,
../core/casper,
../db/[core_db, ledger, storage_types],
../utils/[utils, ec_recover],
".."/[constants, errors],
"."/[chain_config, evmforks, genesis, hardforks]
export
chain_config,
core_db,
constants,
errors,
evmforks,
hardforks,
genesis,
utils
type
SyncProgress = object
start : BlockNumber
current: BlockNumber
highest: BlockNumber
SyncState* = enum
Waiting
Syncing
Synced
SyncReqNewHeadCB* = proc(header: Header) {.gcsafe, raises: [].}
## Update head for syncing
ReqBeaconSyncTargetCB* = proc(header: Header; finHash: Hash32) {.gcsafe, raises: [].}
## Ditto (for beacon sync)
NotifyBadBlockCB* = proc(invalid, origin: Header) {.gcsafe, raises: [].}
## Notify engine-API of encountered bad block
CommonRef* = ref object
# all purpose storage
db: CoreDbRef
# block chain config
config: ChainConfig
# cache of genesis
genesisHash: Hash32
genesisHeader: Header
# map block number and ttd and time to
# HardFork
forkTransitionTable: ForkTransitionTable
# Eth wire protocol needs this
forkIdCalculator: ForkIdCalculator
networkId: NetworkId
# synchronizer needs this
syncProgress: SyncProgress
syncState: SyncState
syncReqNewHead: SyncReqNewHeadCB
## Call back function for the sync processor. This function stages
## the argument header to a private area for subsequent processing.
reqBeaconSyncTargetCB: ReqBeaconSyncTargetCB
## Call back function for a sync processor that returns the canonical
## header.
notifyBadBlock: NotifyBadBlockCB
## Allow the synchronizer to inform the engine-API of a bad block
## encountered during sync progress
startOfHistory: Hash32
## This setting is needed for resuming blockwise syncing after
## installing a snapshot pivot. The default value for this field is
## `GENESIS_PARENT_HASH` to start at the very beginning.
pos: CasperRef
## Proof Of Stake descriptor
pruneHistory: bool
## Must not be set for a full node; might go away some time
# ------------------------------------------------------------------------------
# Forward declarations
# ------------------------------------------------------------------------------
proc proofOfStake*(com: CommonRef, header: Header): bool {.gcsafe.}
# ------------------------------------------------------------------------------
# Private helper functions
# ------------------------------------------------------------------------------
func setForkId(com: CommonRef, genesis: Header) =
com.genesisHash = genesis.blockHash
let genesisCRC = crc32(0, com.genesisHash.data)
com.forkIdCalculator = initForkIdCalculator(
com.forkTransitionTable,
genesisCRC,
genesis.timestamp.uint64)
func daoCheck(conf: ChainConfig) =
if not conf.daoForkSupport or conf.daoForkBlock.isNone:
conf.daoForkBlock = conf.homesteadBlock
if conf.daoForkSupport and conf.daoForkBlock.isNone:
conf.daoForkBlock = conf.homesteadBlock
proc initializeDb(com: CommonRef) =
let kvt = com.db.ctx.getKvt()
proc contains(kvt: CoreDbKvtRef; key: openArray[byte]): bool =
kvt.hasKeyRc(key).expect "valid bool"
if canonicalHeadHashKey().toOpenArray notin kvt:
info "Writing genesis to DB"
doAssert(com.genesisHeader.number == 0.BlockNumber,
"can't commit genesis block with number > 0")
doAssert(com.db.persistHeader(com.genesisHeader,
com.proofOfStake(com.genesisHeader),
startOfHistory=com.genesisHeader.parentHash),
"can persist genesis header")
doAssert(canonicalHeadHashKey().toOpenArray in kvt)
# The database must at least contain the base and head pointers - the base
# is implicitly considered finalized
let
baseNum = com.db.getSavedStateBlockNumber()
base =
try:
com.db.getBlockHeader(baseNum)
except BlockNotFound as exc:
fatal "Cannot load base block header",
baseNum, err = exc.msg
quit 1
finalized =
try:
com.db.finalizedHeader()
except BlockNotFound:
debug "No finalized block stored in database, reverting to base"
base
head =
try:
com.db.getCanonicalHead()
except EVMError as exc:
fatal "Cannot load canonical block header",
err = exc.msg
quit 1
info "Database initialized",
base = (base.blockHash, base.number),
finalized = (finalized.blockHash, finalized.number),
head = (head.blockHash, head.number)
proc init(com : CommonRef,
db : CoreDbRef,
networkId : NetworkId,
config : ChainConfig,
genesis : Genesis,
pruneHistory: bool,
) =
config.daoCheck()
com.db = db
com.config = config
com.forkTransitionTable = config.toForkTransitionTable()
com.networkId = networkId
com.syncProgress= SyncProgress()
com.syncState = Waiting
com.pruneHistory= pruneHistory
com.pos = CasperRef.new
# com.forkIdCalculator and com.genesisHash are set
# by setForkId
if genesis.isNil.not:
let
forkDeterminer = ForkDeterminationInfo(
number: 0.BlockNumber,
td: Opt.some(0.u256),
time: Opt.some(genesis.timestamp)
)
fork = toHardFork(com.forkTransitionTable, forkDeterminer)
# Must not overwrite the global state on the single state DB
if not db.getBlockHeader(0.BlockNumber, com.genesisHeader):
com.genesisHeader = toGenesisHeader(genesis, fork, com.db)
com.setForkId(com.genesisHeader)
com.pos.timestamp = genesis.timestamp
# By default, history begins at genesis.
com.startOfHistory = GENESIS_PARENT_HASH
com.initializeDb()
proc isBlockAfterTtd(com: CommonRef, header: Header): bool =
if com.config.terminalTotalDifficulty.isNone:
return false
let
ttd = com.config.terminalTotalDifficulty.get()
ptd = com.db.getScore(header.parentHash).valueOr:
return false
td = ptd + header.difficulty
ptd >= ttd and td >= ttd
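# Worked illustration (comment only, not executed): with ttd = 100, a parent
# at total difficulty ptd = 99 and header.difficulty = 2 gives td = 101, so
# `td >= ttd` holds but `ptd >= ttd` does not; that header is the terminal
# PoW block itself. Its child sees ptd = 101 and, with difficulty 0, td = 101,
# so both conditions hold and the block counts as "after TTD", i.e. PoS.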
# ------------------------------------------------------------------------------
# Public constructors
# ------------------------------------------------------------------------------
proc new*(
_: type CommonRef;
db: CoreDbRef;
networkId: NetworkId = MainNet;
params = networkParams(MainNet);
pruneHistory = false;
): CommonRef =
## If genesis data is present, the fork IDs will be initialized.
## An empty database is also initialized with the genesis block.
new(result)
result.init(
db,
networkId,
params.config,
params.genesis,
pruneHistory)
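# Usage sketch (assumes an in-memory database constructor such as
# `newCoreDbRef(DefaultDbMemory)` from `db/core_db`; adapt to the backend
# actually in use):
#
#   let com = CommonRef.new(
#     newCoreDbRef(DefaultDbMemory),      # throw-away database
#     networkId = MainNet,                # defaults shown for clarity
#     params    = networkParams(MainNet)) # genesis plus chain config bundle
#
# On an empty database this writes the genesis block and derives the fork IDs
# from the chain config via `setForkId`.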
proc new*(
_: type CommonRef;
db: CoreDbRef;
config: ChainConfig;
networkId: NetworkId = MainNet;
pruneHistory = false;
): CommonRef =
## There is no genesis data present
## Mainly used for testing without genesis
new(result)
result.init(
db,
networkId,
config,
nil,
pruneHistory)
func clone*(com: CommonRef, db: CoreDbRef): CommonRef =
## clone but replace the db
## used in EVM tracer whose db is CaptureDB
CommonRef(
db : db,
config : com.config,
forkTransitionTable: com.forkTransitionTable,
forkIdCalculator: com.forkIdCalculator,
genesisHash : com.genesisHash,
genesisHeader: com.genesisHeader,
syncProgress : com.syncProgress,
networkId : com.networkId,
pos : com.pos,
pruneHistory : com.pruneHistory)
func clone*(com: CommonRef): CommonRef =
com.clone(com.db)
# ------------------------------------------------------------------------------
# Public functions
# ------------------------------------------------------------------------------
func toHardFork*(
com: CommonRef, forkDeterminer: ForkDeterminationInfo): HardFork =
toHardFork(com.forkTransitionTable, forkDeterminer)
func toEVMFork*(com: CommonRef, forkDeterminer: ForkDeterminationInfo): EVMFork =
## Similar to toHardFork, but produces an EVMFork
let fork = com.toHardFork(forkDeterminer)
ToEVMFork[fork]
func isSpuriousOrLater*(com: CommonRef, number: BlockNumber): bool =
com.toHardFork(number.forkDeterminationInfo) >= Spurious
func isByzantiumOrLater*(com: CommonRef, number: BlockNumber): bool =
com.toHardFork(number.forkDeterminationInfo) >= Byzantium
func isLondonOrLater*(com: CommonRef, number: BlockNumber): bool =
# TODO: Fixme, use only London comparator
com.toHardFork(number.forkDeterminationInfo) >= London
func forkId*(com: CommonRef, head, time: uint64): ForkID {.gcsafe.} =
## EIP 2364/2124
com.forkIdCalculator.newID(head, time)
func forkId*(com: CommonRef, head: BlockNumber, time: EthTime): ForkID {.gcsafe.} =
## EIP 2364/2124
com.forkIdCalculator.newID(head, time.uint64)
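# Usage sketch (EIP-2124/2364): the fork ID advertised during the eth status
# handshake is derived from the current head, e.g. for some `head: Header`
# (hypothetical local variable):
#
#   let id = com.forkId(head.number, head.timestamp)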
func isEIP155*(com: CommonRef, number: BlockNumber): bool =
com.config.eip155Block.isSome and number >= com.config.eip155Block.get
func isShanghaiOrLater*(com: CommonRef, t: EthTime): bool =
com.config.shanghaiTime.isSome and t >= com.config.shanghaiTime.get
func isCancunOrLater*(com: CommonRef, t: EthTime): bool =
com.config.cancunTime.isSome and t >= com.config.cancunTime.get
func isPragueOrLater*(com: CommonRef, t: EthTime): bool =
com.config.pragueTime.isSome and t >= com.config.pragueTime.get
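# Sketch: post-Merge forks are keyed on the header timestamp rather than the
# block number, so validation code typically branches like
#
#   if com.isCancunOrLater(header.timestamp):
#     discard # e.g. require the blob-gas header fields
#
# where `header` is whatever block header is being validated.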
proc proofOfStake*(com: CommonRef, header: Header): bool =
if com.config.posBlock.isSome:
# see comments of posBlock in common/hardforks.nim
header.number >= com.config.posBlock.get
elif com.config.mergeForkBlock.isSome:
header.number >= com.config.mergeForkBlock.get
else:
# This costly check is only executed from the test suite
com.isBlockAfterTtd(header)
proc syncReqNewHead*(com: CommonRef; header: Header)
{.gcsafe, raises: [].} =
## Used by RPC updater
if not com.syncReqNewHead.isNil:
com.syncReqNewHead(header)
proc reqBeaconSyncTargetCB*(com: CommonRef; header: Header; finHash: Hash32) =
## Used by RPC updater
if not com.reqBeaconSyncTargetCB.isNil:
com.reqBeaconSyncTargetCB(header, finHash)
proc notifyBadBlock*(com: CommonRef; invalid, origin: Header)
{.gcsafe, raises: [].} =
if not com.notifyBadBlock.isNil:
com.notifyBadBlock(invalid, origin)
# ------------------------------------------------------------------------------
# Getters
# ------------------------------------------------------------------------------
func startOfHistory*(com: CommonRef): Hash32 =
## Getter
com.startOfHistory
func pos*(com: CommonRef): CasperRef =
## Getter
com.pos
func db*(com: CommonRef): CoreDbRef =
com.db
func eip150Block*(com: CommonRef): Opt[BlockNumber] =
com.config.eip150Block
func eip150Hash*(com: CommonRef): Hash32 =
com.config.eip150Hash
func daoForkBlock*(com: CommonRef): Opt[BlockNumber] =
com.config.daoForkBlock
func daoForkSupport*(com: CommonRef): bool =
com.config.daoForkSupport
func ttd*(com: CommonRef): Opt[DifficultyInt] =
com.config.terminalTotalDifficulty
func ttdPassed*(com: CommonRef): bool =
com.config.terminalTotalDifficultyPassed.get(false)
func pruneHistory*(com: CommonRef): bool =
com.pruneHistory
# Always remember that ChainId and NetworkId are two distinct things
# that often get mixed up because some clients do not distinguish
# between them. Popular networks such as MainNet add to the confusion
# by using the same value for both.
func chainId*(com: CommonRef): ChainId =
com.config.chainId
func networkId*(com: CommonRef): NetworkId =
com.networkId
func genesisHash*(com: CommonRef): Hash32 =
## Getter
com.genesisHash
func genesisHeader*(com: CommonRef): Header =
## Getter
com.genesisHeader
func syncStart*(com: CommonRef): BlockNumber =
com.syncProgress.start
func syncCurrent*(com: CommonRef): BlockNumber =
com.syncProgress.current
func syncHighest*(com: CommonRef): BlockNumber =
com.syncProgress.highest
func syncState*(com: CommonRef): SyncState =
com.syncState
# ------------------------------------------------------------------------------
# Setters
# ------------------------------------------------------------------------------
func `syncStart=`*(com: CommonRef, number: BlockNumber) =
com.syncProgress.start = number
func `syncCurrent=`*(com: CommonRef, number: BlockNumber) =
com.syncProgress.current = number
func `syncHighest=`*(com: CommonRef, number: BlockNumber) =
com.syncProgress.highest = number
func `syncState=`*(com: CommonRef, state: SyncState) =
com.syncState = state
func `startOfHistory=`*(com: CommonRef, val: Hash32) =
## Setter
com.startOfHistory = val
func setTTD*(com: CommonRef, ttd: Opt[DifficultyInt]) =
## useful for testing
com.config.terminalTotalDifficulty = ttd
# rebuild the MergeFork piece of the forkTransitionTable
com.forkTransitionTable.mergeForkTransitionThreshold = com.config.mergeForkTransitionThreshold
func `syncReqNewHead=`*(com: CommonRef; cb: SyncReqNewHeadCB) =
## Activate or reset a call back handler for syncing.
com.syncReqNewHead = cb
func `reqBeaconSyncTarget=`*(com: CommonRef; cb: ReqBeaconSyncTargetCB) =
## Activate or reset a call back handler for syncing.
com.reqBeaconSyncTargetCB = cb
func `notifyBadBlock=`*(com: CommonRef; cb: NotifyBadBlockCB) =
## Activate or reset a call back handler for bad block notification.
com.notifyBadBlock = cb
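# Wiring sketch (hypothetical sync module): the beacon-sync and engine-API
# layers install their hooks through the setters above, e.g.
#
#   com.reqBeaconSyncTarget = proc(header: Header; finHash: Hash32) =
#     discard # stage the new sync target for the header downloader
#   com.notifyBadBlock = proc(invalid, origin: Header) =
#     discard # forward the offending header to the engine API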
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------