# Nimbus
# Copyright (c) 2022-2024 Status Research & Development GmbH
# Licensed under either of
#  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
#  * MIT license ([LICENSE-MIT](LICENSE-MIT))
# at your option.
# This file may not be copied, modified, or distributed except according to
# those terms.

{.push raises: [].}

import
  chronicles,
  eth/trie/trie_defs,
  ../core/casper,
  ../db/[core_db, ledger, storage_types],
  ../utils/[utils, ec_recover],
  ".."/[constants, errors],
  "."/[chain_config, evmforks, genesis, hardforks]

export
  chain_config,
  core_db,
  constants,
  errors,
  evmforks,
  hardforks,
  genesis,
  utils

type
  SyncProgress = object
    start  : BlockNumber
    current: BlockNumber
    highest: BlockNumber

  SyncState* = enum
    Waiting
    Syncing
    Synced

  SyncReqNewHeadCB* = proc(header: Header) {.gcsafe, raises: [].}
    ## Update head for syncing

  ReqBeaconSyncTargetCB* = proc(header: Header; finHash: Hash32) {.gcsafe, raises: [].}
    ## Ditto (for beacon sync)

  NotifyBadBlockCB* = proc(invalid, origin: Header) {.gcsafe, raises: [].}
    ## Notify engine-API of an encountered bad block

  CommonRef* = ref object
    # all purpose storage
    db: CoreDbRef

    # block chain config
    config: ChainConfig

    # cache of genesis
    genesisHash: Hash32
    genesisHeader: Header

    # map block number and ttd and time to
    # HardFork
    forkTransitionTable: ForkTransitionTable

    # Eth wire protocol needs this
    forkIdCalculator: ForkIdCalculator
    networkId: NetworkId

    # synchronizer needs this
    syncProgress: SyncProgress

    syncState: SyncState

    syncReqNewHead: SyncReqNewHeadCB
      ## Call back function for the sync processor. This function stages
      ## the argument header in a private area for subsequent processing.

    reqBeaconSyncTargetCB: ReqBeaconSyncTargetCB
      ## Call back function for a sync processor that returns the canonical
      ## header.

    notifyBadBlock: NotifyBadBlockCB
      ## Allows the synchronizer to inform the engine-API of bad blocks
      ## encountered during sync progress.

    startOfHistory: Hash32
      ## This setting is needed for resuming blockwise syncing after
      ## installing a snapshot pivot. The default value for this field is
      ## `GENESIS_PARENT_HASH` to start at the very beginning.

    pos: CasperRef
      ## Proof Of Stake descriptor

    pruneHistory: bool
      ## Must not be set for a full node; might go away some time

# ------------------------------------------------------------------------------
# Forward declarations
# ------------------------------------------------------------------------------

proc proofOfStake*(com: CommonRef, header: Header): bool {.gcsafe.}

# ------------------------------------------------------------------------------
# Private helper functions
# ------------------------------------------------------------------------------

func setForkId(com: CommonRef, genesis: Header) =
  com.genesisHash = genesis.blockHash
  let genesisCRC = crc32(0, com.genesisHash.data)
  com.forkIdCalculator = initForkIdCalculator(
    com.forkTransitionTable,
    genesisCRC,
    genesis.timestamp.uint64)

func daoCheck(conf: ChainConfig) =
  if not conf.daoForkSupport or conf.daoForkBlock.isNone:
    conf.daoForkBlock = conf.homesteadBlock

  if conf.daoForkSupport and conf.daoForkBlock.isNone:
    conf.daoForkBlock = conf.homesteadBlock

proc initializeDb(com: CommonRef) =
  let kvt = com.db.ctx.getKvt()
  proc contains(kvt: CoreDbKvtRef; key: openArray[byte]): bool =
    kvt.hasKeyRc(key).expect "valid bool"
  if canonicalHeadHashKey().toOpenArray notin kvt:
    info "Writing genesis to DB"
    doAssert(com.genesisHeader.number == 0.BlockNumber,
      "can't commit genesis block with number > 0")
    doAssert(com.db.persistHeader(com.genesisHeader,
      com.proofOfStake(com.genesisHeader),
      startOfHistory=com.genesisHeader.parentHash),
      "can persist genesis header")
    doAssert(canonicalHeadHashKey().toOpenArray in kvt)

  # The database must at least contain the base and head pointers - the base
  # is implicitly considered finalized
  let
    baseNum = com.db.getSavedStateBlockNumber()
    base =
      try:
        com.db.getBlockHeader(baseNum)
      except BlockNotFound as exc:
        fatal "Cannot load base block header",
          baseNum, err = exc.msg
        quit 1
    finalized =
      try:
        com.db.finalizedHeader()
      except BlockNotFound:
        debug "No finalized block stored in database, reverting to base"
        base
    head =
      try:
        com.db.getCanonicalHead()
      except EVMError as exc:
        fatal "Cannot load canonical block header",
          err = exc.msg
        quit 1

  info "Database initialized",
    base = (base.blockHash, base.number),
    finalized = (finalized.blockHash, finalized.number),
    head = (head.blockHash, head.number)

proc init(com         : CommonRef,
          db          : CoreDbRef,
          networkId   : NetworkId,
          config      : ChainConfig,
          genesis     : Genesis,
          pruneHistory: bool,
            ) =

  config.daoCheck()

  com.db = db
  com.config = config
  com.forkTransitionTable = config.toForkTransitionTable()
  com.networkId = networkId
  com.syncProgress = SyncProgress()
  com.syncState = Waiting
  com.pruneHistory = pruneHistory
  com.pos = CasperRef.new

  # com.forkIdCalculator and com.genesisHash are set
  # by setForkId
  if genesis.isNil.not:
    let
      forkDeterminer = ForkDeterminationInfo(
        number: 0.BlockNumber,
        td: Opt.some(0.u256),
        time: Opt.some(genesis.timestamp)
      )
      fork = toHardFork(com.forkTransitionTable, forkDeterminer)

    # Must not overwrite the global state on the single state DB
    if not db.getBlockHeader(0.BlockNumber, com.genesisHeader):
      com.genesisHeader = toGenesisHeader(genesis, fork, com.db)

    com.setForkId(com.genesisHeader)
    com.pos.timestamp = genesis.timestamp

  # By default, history begins at genesis.
  com.startOfHistory = GENESIS_PARENT_HASH

  com.initializeDb()

proc isBlockAfterTtd(com: CommonRef, header: Header): bool =
  if com.config.terminalTotalDifficulty.isNone:
    return false

  let
    ttd = com.config.terminalTotalDifficulty.get()
    ptd = com.db.getScore(header.parentHash).valueOr:
      return false
    td = ptd + header.difficulty
  ptd >= ttd and td >= ttd

# ------------------------------------------------------------------------------
# Public constructors
# ------------------------------------------------------------------------------

proc new*(
    _: type CommonRef;
    db: CoreDbRef;
    networkId: NetworkId = MainNet;
    params = networkParams(MainNet);
    pruneHistory = false;
      ): CommonRef =
  ## If genesis data is present, the forkIds will be initialized.
  ## An empty database is also initialized with the genesis block.
  new(result)
  result.init(
    db,
    networkId,
    params.config,
    params.genesis,
    pruneHistory)

proc new*(
    _: type CommonRef;
    db: CoreDbRef;
    config: ChainConfig;
    networkId: NetworkId = MainNet;
    pruneHistory = false;
      ): CommonRef =
  ## There is no genesis data present.
  ## Mainly used for testing without genesis.
  new(result)
  result.init(
    db,
    networkId,
    config,
    nil,
    pruneHistory)

func clone*(com: CommonRef, db: CoreDbRef): CommonRef =
  ## Clone, but replace the db.
  ## Used in the EVM tracer, whose db is a CaptureDB.
  CommonRef(
    db           : db,
    config       : com.config,
    forkTransitionTable: com.forkTransitionTable,
    forkIdCalculator: com.forkIdCalculator,
    genesisHash  : com.genesisHash,
    genesisHeader: com.genesisHeader,
    syncProgress : com.syncProgress,
    networkId    : com.networkId,
    pos          : com.pos,
    pruneHistory : com.pruneHistory)

func clone*(com: CommonRef): CommonRef =
  com.clone(com.db)

# ------------------------------------------------------------------------------
# Public functions
# ------------------------------------------------------------------------------

func toHardFork*(
    com: CommonRef, forkDeterminer: ForkDeterminationInfo): HardFork =
  toHardFork(com.forkTransitionTable, forkDeterminer)

func toEVMFork*(com: CommonRef, forkDeterminer: ForkDeterminationInfo): EVMFork =
  ## Similar to toHardFork, but produces an EVMFork.
  let fork = com.toHardFork(forkDeterminer)
  ToEVMFork[fork]

func isSpuriousOrLater*(com: CommonRef, number: BlockNumber): bool =
  com.toHardFork(number.forkDeterminationInfo) >= Spurious

func isByzantiumOrLater*(com: CommonRef, number: BlockNumber): bool =
  com.toHardFork(number.forkDeterminationInfo) >= Byzantium

func isLondonOrLater*(com: CommonRef, number: BlockNumber): bool =
  # TODO: Fixme, use only London comparator
  com.toHardFork(number.forkDeterminationInfo) >= London

func forkId*(com: CommonRef, head, time: uint64): ForkID {.gcsafe.} =
  ## EIP 2364/2124
  com.forkIdCalculator.newID(head, time)

func forkId*(com: CommonRef, head: BlockNumber, time: EthTime): ForkID {.gcsafe.} =
  ## EIP 2364/2124
  com.forkIdCalculator.newID(head, time.uint64)

func isEIP155*(com: CommonRef, number: BlockNumber): bool =
  com.config.eip155Block.isSome and number >= com.config.eip155Block.get

func isShanghaiOrLater*(com: CommonRef, t: EthTime): bool =
  com.config.shanghaiTime.isSome and t >= com.config.shanghaiTime.get

func isCancunOrLater*(com: CommonRef, t: EthTime): bool =
  com.config.cancunTime.isSome and t >= com.config.cancunTime.get

func isPragueOrLater*(com: CommonRef, t: EthTime): bool =
  com.config.pragueTime.isSome and t >= com.config.pragueTime.get

proc proofOfStake*(com: CommonRef, header: Header): bool =
  if com.config.posBlock.isSome:
    # see comments of posBlock in common/hardforks.nim
    header.number >= com.config.posBlock.get
  elif com.config.mergeForkBlock.isSome:
    header.number >= com.config.mergeForkBlock.get
  else:
    # This costly check is only executed from the test suite
    com.isBlockAfterTtd(header)

proc syncReqNewHead*(com: CommonRef; header: Header)
    {.gcsafe, raises: [].} =
  ## Used by the RPC updater
  if not com.syncReqNewHead.isNil:
    com.syncReqNewHead(header)
2024-10-17 17:59:50 +00:00
|
|
|
proc reqBeaconSyncTargetCB*(com: CommonRef; header: Header; finHash: Hash32) =
|
Flare sync (#2627)
* Cosmetics, small fixes, add stashed headers verifier
* Remove direct `Era1` support
why:
Era1 is indirectly supported by using the import tool before syncing.
* Clarify database persistent save function.
why:
Function relied on the last saved state block number which was wrong.
It now relies on the tx-level. If it is 0, then data are saved directly.
Otherwise the task that owns the tx will do it.
* Extracted configuration constants into separate file
* Enable single peer mode for debugging
  ## Used by RPC updater
  if not com.reqBeaconSyncTargetCB.isNil:
    com.reqBeaconSyncTargetCB(header, finHash)

proc notifyBadBlock*(com: CommonRef; invalid, origin: Header)
    {.gcsafe, raises: [].} =
  if not com.notifyBadBlock.isNil:
    com.notifyBadBlock(invalid, origin)

# ------------------------------------------------------------------------------
# Getters
# ------------------------------------------------------------------------------

func startOfHistory*(com: CommonRef): Hash32 =
  ## Getter
  com.startOfHistory

func pos*(com: CommonRef): CasperRef =
  ## Getter
  com.pos

func db*(com: CommonRef): CoreDbRef =
  com.db

func eip150Block*(com: CommonRef): Opt[BlockNumber] =
  com.config.eip150Block

func eip150Hash*(com: CommonRef): Hash32 =
  com.config.eip150Hash

func daoForkBlock*(com: CommonRef): Opt[BlockNumber] =
  com.config.daoForkBlock

func daoForkSupport*(com: CommonRef): bool =
  com.config.daoForkSupport

func ttd*(com: CommonRef): Opt[DifficultyInt] =
  com.config.terminalTotalDifficulty

func ttdPassed*(com: CommonRef): bool =
  com.config.terminalTotalDifficultyPassed.get(false)

func pruneHistory*(com: CommonRef): bool =
  com.pruneHistory

# Always remember: ChainId and NetworkId are two distinct things that often
# get mixed up because some clients do not distinguish between them. Popular
# networks such as MainNet add to the confusion by using the same value for
# both.
func chainId*(com: CommonRef): ChainId =
  com.config.chainId

func networkId*(com: CommonRef): NetworkId =
  com.networkId
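
# Example (illustrative sketch only, not part of the API): on Ethereum
# MainNet the two values happen to coincide, which is where the confusion
# comes from. Code should still query each one through its own accessor:
#
#   if com.chainId == ChainId(1):             # replay protection (EIP-155)
#     doAssert com.networkId == NetworkId(1)  # devp2p identifier; equal on MainNet only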

func genesisHash*(com: CommonRef): Hash32 =
  ## Getter
  com.genesisHash

func genesisHeader*(com: CommonRef): Header =
  ## Getter
  com.genesisHeader

func syncStart*(com: CommonRef): BlockNumber =
  com.syncProgress.start

func syncCurrent*(com: CommonRef): BlockNumber =
  com.syncProgress.current

func syncHighest*(com: CommonRef): BlockNumber =
  com.syncProgress.highest

func syncState*(com: CommonRef): SyncState =
  com.syncState

# ------------------------------------------------------------------------------
# Setters
# ------------------------------------------------------------------------------

func `syncStart=`*(com: CommonRef, number: BlockNumber) =
  com.syncProgress.start = number

func `syncCurrent=`*(com: CommonRef, number: BlockNumber) =
  com.syncProgress.current = number

func `syncHighest=`*(com: CommonRef, number: BlockNumber) =
  com.syncProgress.highest = number

func `syncState=`*(com: CommonRef, state: SyncState) =
  com.syncState = state
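
# Example (hypothetical caller sketch, with made-up block numbers): a sync
# driver would typically move the progress window like this while importing:
#
#   com.syncStart = 1_000_000'u64     # where this session began
#   com.syncCurrent = 1_000_512'u64   # last imported block
#   com.syncHighest = 1_200_000'u64   # best known remote head
#   com.syncState = Syncing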

func `startOfHistory=`*(com: CommonRef, val: Hash32) =
  ## Setter
  com.startOfHistory = val

func setTTD*(com: CommonRef, ttd: Opt[DifficultyInt]) =
  ## Useful for testing
  com.config.terminalTotalDifficulty = ttd
  # Rebuild the MergeFork piece of the fork transition table
  com.forkTransitionTable.mergeForkTransitionThreshold =
    com.config.mergeForkTransitionThreshold

func `syncReqNewHead=`*(com: CommonRef; cb: SyncReqNewHeadCB) =
  ## Activate or reset a callback handler for syncing.
  com.syncReqNewHead = cb

func `reqBeaconSyncTarget=`*(com: CommonRef; cb: ReqBeaconSyncTargetCB) =
  ## Activate or reset a callback handler for beacon syncing.
  com.reqBeaconSyncTargetCB = cb

func `notifyBadBlock=`*(com: CommonRef; cb: NotifyBadBlockCB) =
  ## Activate or reset a callback handler for bad block notification.
  com.notifyBadBlock = cb
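
# Example (hypothetical wiring sketch): a sync layer would register its
# handlers once at start-up, roughly like so:
#
#   com.reqBeaconSyncTarget = proc(header: Header; finHash: Hash32) =
#     discard  # hand the new target over to the beacon sync scheduler
#   com.notifyBadBlock = proc(invalid, origin: Header) =
#     discard  # report the offending block to the engine-API layer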

# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------