# Nimbus
# Copyright (c) 2022-2025 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
# * MIT license ([LICENSE-MIT](LICENSE-MIT))
# at your option.
# This file may not be copied, modified, or distributed except according to
# those terms.
{.push raises: [].}
import
chronicles,
logging,
../db/[core_db, ledger, storage_types],
../utils/[utils],
".."/[constants, errors, version],
"."/[chain_config, evmforks, genesis, hardforks],
taskpools
export
chain_config,
core_db,
constants,
errors,
evmforks,
hardforks,
genesis,
utils,
taskpools,
logging
type
SyncProgress = object
start : BlockNumber
current: BlockNumber
highest: BlockNumber
SyncState* = enum
Waiting
Syncing
Synced
SyncReqNewHeadCB* = proc(header: Header) {.gcsafe, raises: [].}
## Update head for syncing
ReqBeaconSyncerTargetCB* = proc(header: Header; finHash: Hash32) {.gcsafe, raises: [].}
## Ditto (for beacon sync)
BeaconSyncerProgressCB* = proc(): tuple[start, current, target: BlockNumber] {.gcsafe, raises: [].}
## Query syncer status
NotifyBadBlockCB* = proc(invalid, origin: Header) {.gcsafe, raises: [].}
## Notify engine-API of encountered bad block
CommonRef* = ref object
    # all-purpose storage
db: CoreDbRef
    # blockchain config
config: ChainConfig
# cache of genesis
genesisHash: Hash32
genesisHeader: Header
    # map block number, ttd and time to
# HardFork
forkTransitionTable: ForkTransitionTable
    # Eth wire protocol needs this
forkIdCalculator: ForkIdCalculator
networkId: NetworkId
    # synchronizer needs this
syncProgress: SyncProgress
syncState: SyncState
syncReqNewHead: SyncReqNewHeadCB
## Call back function for the sync processor. This function stages
      ## the argument header to a private area for subsequent processing.
reqBeaconSyncerTargetCB: ReqBeaconSyncerTargetCB
      ## Call back function for the beacon sync processor, handing it the
      ## sync target header and the finalised block hash.
beaconSyncerProgressCB: BeaconSyncerProgressCB
      ## Call back function querying the status of the sync processor. The
      ## function returns the current progress as a `(start, current, target)`
      ## tuple of block numbers.
notifyBadBlock: NotifyBadBlockCB
      ## Allow synchronizer to inform engine-API of a bad block encountered
      ## during sync progress
startOfHistory: Hash32
      ## This setting is needed for resuming blockwise syncing after
## installing a snapshot pivot. The default value for this field is
## `GENESIS_PARENT_HASH` to start at the very beginning.
pruneHistory: bool
      ## Must not be set for a full node, might go away some time
extraData: string
## Value of extraData field when building a block
gasLimit: uint64
## Desired gas limit when building a block
taskpool*: Taskpool
## Shared task pool for offloading computation to other threads
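
# Usage sketch for the sync callbacks above (the setter name is assumed
# here; the corresponding accessors live elsewhere in this module):
#
#   proc onNewHead(header: Header) {.gcsafe, raises: [].} =
#     discard # stage `header` for the sync processor
#
#   com.syncReqNewHead = onNewHead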
# ------------------------------------------------------------------------------
# Private helper functions
# ------------------------------------------------------------------------------
func setForkId(com: CommonRef, genesis: Header) =
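  ## Derive the genesis hash and its CRC32 checksum, then prime the
  ## fork-ID calculator (EIP-2124/EIP-6122 fork identifiers) from the
  ## fork transition table.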
com.genesisHash = genesis.blockHash
let genesisCRC = crc32(0, com.genesisHash.data)
com.forkIdCalculator = initForkIdCalculator(
com.forkTransitionTable,
genesisCRC,
genesis.timestamp.uint64)
func daoCheck(conf: ChainConfig) =
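  ## Default `daoForkBlock` to the Homestead block whenever it is unset
  ## or DAO fork support is disabled.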
if not conf.daoForkSupport or conf.daoForkBlock.isNone:
conf.daoForkBlock = conf.homesteadBlock
if conf.daoForkSupport and conf.daoForkBlock.isNone:
conf.daoForkBlock = conf.homesteadBlock
proc initializeDb(com: CommonRef) =
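  ## Bootstrap the database: write the genesis header on first start-up,
  ## then verify that the base, finalized and head pointers can be loaded.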
let txFrame = com.db.baseTxFrame()
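  # Local helper so that `in`/`notin` below work on the transaction frame.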
proc contains(txFrame: CoreDbTxRef; key: openArray[byte]): bool =
txFrame.hasKeyRc(key).expect "valid bool"
if canonicalHeadHashKey().toOpenArray notin txFrame:
info "Writing genesis to DB",
blockHash = com.genesisHeader.rlpHash,
stateRoot = com.genesisHeader.stateRoot,
difficulty = com.genesisHeader.difficulty,
gasLimit = com.genesisHeader.gasLimit,
timestamp = com.genesisHeader.timestamp,
nonce = com.genesisHeader.nonce
doAssert(com.genesisHeader.number == 0.BlockNumber,
"can't commit genesis block with number > 0")
txFrame.persistHeaderAndSetHead(com.genesisHeader,
startOfHistory=com.genesisHeader.parentHash).
expect("can persist genesis header")
doAssert(canonicalHeadHashKey().toOpenArray in txFrame)
# The database must at least contain the base and head pointers - the base
# is implicitly considered finalized
let
baseNum = txFrame.getSavedStateBlockNumber()
base = txFrame.getBlockHeader(baseNum).valueOr:
fatal "Cannot load base block header",
baseNum, err = error
quit 1
finalized = txFrame.finalizedHeader().valueOr:
debug "No finalized block stored in database, reverting to base"
base
head = txFrame.getCanonicalHead().valueOr:
fatal "Cannot load canonical block header",
err = error
quit 1
info "Database initialized",
base = (base.blockHash, base.number),
finalized = (finalized.blockHash, finalized.number),
head = (head.blockHash, head.number)
proc init(com : CommonRef,
db : CoreDbRef,
taskpool : Taskpool,
networkId : NetworkId,
config : ChainConfig,
genesis : Genesis,
pruneHistory: bool) =
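  ## Shared initialisation helper: normalises the chain config, stores the
  ## constructor arguments and derives the genesis header and fork-ID state.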
config.daoCheck()
com.db = db
com.config = config
com.forkTransitionTable = config.toForkTransitionTable()
com.networkId = networkId
  com.syncProgress = SyncProgress()
com.syncState = Waiting
  com.pruneHistory = pruneHistory
com.extraData = ShortClientId
com.taskpool = taskpool
com.gasLimit = DEFAULT_GAS_LIMIT
# com.forkIdCalculator and com.genesisHash are set
# by setForkId
if genesis.isNil.not:
let
forkDeterminer = ForkDeterminationInfo(
number: 0.BlockNumber,
td: Opt.some(0.u256),
time: Opt.some(genesis.timestamp)
)
fork = toHardFork(com.forkTransitionTable, forkDeterminer)
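      # At genesis, the hard fork is determined by block number 0, zero total
      # difficulty and the genesis timestamp (the timestamp selects
      # timestamp-activated forks, Shanghai and later).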
txFrame = db.baseTxFrame()
# Must not overwrite the global state on the single state DB
|
aristo: fork support via layers/txframes (#2960)
2025-02-06 08:04:50 +01:00
|
|
|
|
|
|
|
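# Load the stored genesis header or, if missing, (re)build it from the
# genesis spec.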
com.genesisHeader = txFrame.getBlockHeader(0.BlockNumber).valueOr:
|
|
|
|
toGenesisHeader(genesis, fork, txFrame)
|
2024-05-22 13:41:14 +00:00
|
|
|
|
2022-12-02 11:39:12 +07:00
|
|
|
com.setForkId(com.genesisHeader)
|
2022-12-05 18:25:44 +07:00
|
|
|
|
2023-04-14 23:28:57 +01:00
|
|
|
# By default, history begins at genesis.
|
|
|
|
com.startOfHistory = GENESIS_PARENT_HASH
|
|
|
|
|
2024-08-08 07:45:30 +02:00
|
|
|
com.initializeDb()
|
|
|
|
|
aristo: fork support via layers/txframes (#2960)
2025-02-06 08:04:50 +01:00
|
|
|
proc isBlockAfterTtd(com: CommonRef, header: Header, txFrame: CoreDbTxRef): bool =
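## True when the parent's total difficulty has already reached
## terminalTotalDifficulty, i.e. `header` lies beyond the terminal PoW block.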
|
2024-10-08 09:37:36 +07:00
|
|
|
if com.config.terminalTotalDifficulty.isNone:
|
|
|
|
return false
|
2023-02-16 11:40:07 +00:00
|
|
|
|
2024-10-08 09:37:36 +07:00
|
|
|
let
|
|
|
|
ttd = com.config.terminalTotalDifficulty.get()
|
aristo: fork support via layers/txframes (#2960)
2025-02-06 08:04:50 +01:00
|
|
|
ptd = txFrame.getScore(header.parentHash).valueOr:
|
2024-10-08 09:37:36 +07:00
|
|
|
return false
|
|
|
|
td = ptd + header.difficulty
|
|
|
|
ptd >= ttd and td >= ttd
|
2023-02-16 11:40:07 +00:00
|
|
|
|
2022-12-02 11:39:12 +07:00
|
|
|
# ------------------------------------------------------------------------------
|
|
|
|
# Public constructors
|
|
|
|
# ------------------------------------------------------------------------------
|
|
|
|
|
2023-12-12 19:12:56 +00:00
|
|
|
proc new*(
|
|
|
|
_: type CommonRef;
|
|
|
|
db: CoreDbRef;
|
2024-12-13 05:53:41 +01:00
|
|
|
taskpool: Taskpool;
|
2023-12-12 19:12:56 +00:00
|
|
|
networkId: NetworkId = MainNet;
|
|
|
|
params = networkParams(MainNet);
|
2024-05-20 10:17:51 +00:00
|
|
|
pruneHistory = false;
|
2024-10-26 11:26:38 +02:00
|
|
|
): CommonRef =
|
2022-12-02 11:39:12 +07:00
|
|
|
|
|
|
|
## If genesis data is present, the forkIds will be initialized
|
|
|
|
## an empty database is also initialized with the genesis block
|
|
|
|
new(result)
|
|
|
|
result.init(
|
|
|
|
db,
|
2024-12-13 05:53:41 +01:00
|
|
|
taskpool,
|
2022-12-02 11:39:12 +07:00
|
|
|
networkId,
|
|
|
|
params.config,
|
Optional accounts cache module for creating genesis (#1897)
* Split off `ReadOnlyStateDB` from `AccountStateDB` in `state_db.nim`
why:
Apart from testing, applications use `ReadOnlyStateDB` as an easy
way to access the accounts ledger. This is well supported by the
`Aristo` db, but writable mode is only partially supported.
The writable `AccountStateDB` object for modifying accounts is not
used by production code.
So, for legacy and testing apps, the full support of the previous
`AccountStateDB` is now enabled by `import db/state_db/read_write`,
while `import db/state_db` provides read-only mode.
* Encapsulate `AccountStateDB` as `GenesisLedgerRef` for genesis creation
why:
`AccountStateDB` has poor support for `Aristo` and is not widely used
in favour of `AccountsLedger` (which will be abstracted as `ledger`.)
Currently, using ledgers other than `AccountStateDB` within the
`GenesisLedgerRef` wrapper is experimental and test only. Eventually,
the wrapper should disappear so that the `Ledger` object (which
encapsulates `AccountsCache` and `AccountsLedger`) will prevail.
* For the `Ledger`, provide access to the raw accounts `MPT`
why:
This gives access to the `CoreDbMptRef` descriptor from the `CoreDb` (which is
the legacy version of `CoreDxMptRef`.) For the new `ledger` API, the
accounts are based on the `CoreDxMAccRef` descriptor which uses a
particular sub-system for accounts, while legacy applications use the
`CoreDbPhkRef` equivalent of the `SecureHexaryTrie`.
The only place where this feature will currently be used is the
`genesis.nim` source file.
* Fix `Aristo` bugs, missing boundary checks, typos, etc.
* Verify root vertex in `MPT` and account constructors
why:
Was missing so far; in particular, the accounts constructor must
verify `VertexID(1)`.
* Fix include file
2023-11-20 11:51:43 +00:00
|
|
|
params.genesis,
|
2024-05-20 10:17:51 +00:00
|
|
|
pruneHistory)
|
2023-12-12 19:12:56 +00:00
|
|
|
|
|
|
|
proc new*(
|
|
|
|
_: type CommonRef;
|
|
|
|
db: CoreDbRef;
|
2024-12-13 05:53:41 +01:00
|
|
|
taskpool: Taskpool;
|
2023-12-12 19:12:56 +00:00
|
|
|
config: ChainConfig;
|
|
|
|
networkId: NetworkId = MainNet;
|
2024-05-20 10:17:51 +00:00
|
|
|
pruneHistory = false;
|
2024-10-26 11:26:38 +02:00
|
|
|
): CommonRef =
|
2022-12-02 11:39:12 +07:00
|
|
|
|
|
|
|
## There is no genesis data present
|
|
|
|
## Mainly used for testing without genesis
|
|
|
|
new(result)
|
|
|
|
result.init(
|
|
|
|
db,
|
2024-12-13 05:53:41 +01:00
|
|
|
taskpool,
|
2022-12-02 11:39:12 +07:00
|
|
|
networkId,
|
|
|
|
config,
|
Optional accounts cache module for creating genesis (#1897)
2023-11-20 11:51:43 +00:00
|
|
|
nil,
|
2024-05-20 10:17:51 +00:00
|
|
|
pruneHistory)
|
2022-12-02 11:39:12 +07:00
|
|
|
|
2024-05-30 20:30:40 +00:00
|
|
|
func clone*(com: CommonRef, db: CoreDbRef): CommonRef =
|
2022-12-02 11:39:12 +07:00
|
|
|
## clone but replace the db
|
|
|
|
## used in the EVM tracer, whose db is CaptureDB
|
|
|
|
CommonRef(
|
2023-08-04 12:10:09 +01:00
|
|
|
db : db,
|
2022-12-02 11:39:12 +07:00
|
|
|
config : com.config,
|
2023-02-16 11:40:07 +00:00
|
|
|
forkTransitionTable: com.forkTransitionTable,
|
2023-10-24 17:39:19 +07:00
|
|
|
forkIdCalculator: com.forkIdCalculator,
|
2022-12-02 11:39:12 +07:00
|
|
|
genesisHash : com.genesisHash,
|
|
|
|
genesisHeader: com.genesisHeader,
|
|
|
|
syncProgress : com.syncProgress,
|
|
|
|
networkId : com.networkId,
|
2024-05-20 10:17:51 +00:00
|
|
|
pruneHistory : com.pruneHistory)
|
2022-12-02 11:39:12 +07:00
|
|
|
|
2024-05-30 20:30:40 +00:00
|
|
|
func clone*(com: CommonRef): CommonRef =
|
2023-08-04 12:10:09 +01:00
|
|
|
com.clone(com.db)
|
2022-12-02 11:39:12 +07:00
|
|
|
|
|
|
|
# ------------------------------------------------------------------------------
|
|
|
|
# Public functions
|
|
|
|
# ------------------------------------------------------------------------------
|
|
|
|
|
2023-05-10 18:04:35 +02:00
|
|
|
func toHardFork*(
|
|
|
|
com: CommonRef, forkDeterminer: ForkDeterminationInfo): HardFork =
|
2023-02-16 11:40:07 +00:00
|
|
|
toHardFork(com.forkTransitionTable, forkDeterminer)
|
|
|
|
|
|
|
|
func toEVMFork*(com: CommonRef, forkDeterminer: ForkDeterminationInfo): EVMFork =
|
2022-12-02 11:39:12 +07:00
|
|
|
## similar to toHardFork, but produces an EVMFork
|
2023-02-16 11:40:07 +00:00
|
|
|
let fork = com.toHardFork(forkDeterminer)
|
2022-12-02 11:39:12 +07:00
|
|
|
ToEVMFork[fork]
|
|
|
|
|
2025-01-27 23:20:39 +07:00
|
|
|
func toEVMFork*(com: CommonRef, header: Header): EVMFork =
|
|
|
|
com.toEVMFork(forkDeterminationInfo(header))
|
|
|
|
|
2024-07-04 20:48:36 +07:00
|
|
|
func isSpuriousOrLater*(com: CommonRef, number: BlockNumber): bool =
|
|
|
|
com.toHardFork(number.forkDeterminationInfo) >= Spurious
|
|
|
|
|
2024-07-17 17:05:53 +07:00
|
|
|
func isByzantiumOrLater*(com: CommonRef, number: BlockNumber): bool =
|
|
|
|
com.toHardFork(number.forkDeterminationInfo) >= Byzantium
|
|
|
|
|
2024-06-27 12:54:36 +07:00
|
|
|
func isLondonOrLater*(com: CommonRef, number: BlockNumber): bool =
|
2022-12-02 11:39:12 +07:00
|
|
|
# TODO: fixme, use only the London comparator
|
2023-10-24 17:39:19 +07:00
|
|
|
com.toHardFork(number.forkDeterminationInfo) >= London
|
2022-12-02 11:39:12 +07:00
|
|
|
|
2023-10-24 17:39:19 +07:00
|
|
|
func forkId*(com: CommonRef, head, time: uint64): ForkID {.gcsafe.} =
|
2022-12-02 11:39:12 +07:00
|
|
|
## EIP 2364/2124
|
2023-10-24 17:39:19 +07:00
|
|
|
com.forkIdCalculator.newID(head, time)
|
|
|
|
|
|
|
|
func forkId*(com: CommonRef, head: BlockNumber, time: EthTime): ForkID {.gcsafe.} =
|
|
|
|
## EIP 2364/2124
|
2024-06-14 14:31:08 +07:00
|
|
|
com.forkIdCalculator.newID(head, time.uint64)
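# Background: per EIP-2124, a fork ID pairs a CRC32 checksum of the genesis
# hash and all fork block numbers/timestamps passed so far with the next
# scheduled fork; EIP-2364 wires this into the eth protocol handshake so
# that peers on incompatible chains can be rejected cheaply.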
|
2022-12-02 11:39:12 +07:00
|
|
|
|
|
|
|
func isEIP155*(com: CommonRef, number: BlockNumber): bool =
|
|
|
|
com.config.eip155Block.isSome and number >= com.config.eip155Block.get
|
|
|
|
|
2023-03-09 18:40:55 -05:00
|
|
|
func isShanghaiOrLater*(com: CommonRef, t: EthTime): bool =
|
|
|
|
com.config.shanghaiTime.isSome and t >= com.config.shanghaiTime.get
|
|
|
|
|
2023-06-24 20:56:44 +07:00
|
|
|
func isCancunOrLater*(com: CommonRef, t: EthTime): bool =
|
|
|
|
com.config.cancunTime.isSome and t >= com.config.cancunTime.get
|
|
|
|
|
2024-03-28 13:47:02 +07:00
|
|
|
func isPragueOrLater*(com: CommonRef, t: EthTime): bool =
|
|
|
|
com.config.pragueTime.isSome and t >= com.config.pragueTime.get
|
|
|
|
|
aristo: fork support via layers/txframes (#2960)
2025-02-06 08:04:50 +01:00
|
|
|
proc proofOfStake*(com: CommonRef, header: Header, txFrame: CoreDbTxRef): bool =
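## Determine whether `header` is a proof-of-stake block, preferring the
## configured posBlock (or mergeNetsplitBlock) and falling back to the
## total-difficulty check.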
|
2024-10-08 09:37:36 +07:00
|
|
|
if com.config.posBlock.isSome:
|
|
|
|
# see the comments on posBlock in common/hardforks.nim
|
|
|
|
header.number >= com.config.posBlock.get
|
2024-12-17 18:42:13 +07:00
|
|
|
elif com.config.mergeNetsplitBlock.isSome:
|
|
|
|
header.number >= com.config.mergeNetsplitBlock.get
|
2024-10-08 09:37:36 +07:00
|
|
|
else:
|
|
|
|
# This costly check is only executed from the test suite
|
aristo: fork support via layers/txframes (#2960)
2025-02-06 08:04:50 +01:00
|
|
|
com.isBlockAfterTtd(header, txFrame)
|
2022-12-02 11:39:12 +07:00
|
|
|
|
2024-11-08 10:47:07 +07:00
|
|
|
func depositContractAddress*(com: CommonRef): Address =
|
|
|
|
com.config.depositContractAddress.get(default(Address))
|
|
|
|
|
2024-10-16 07:04:12 +05:30
|
|
|
proc syncReqNewHead*(com: CommonRef; header: Header)
|
2023-05-16 11:15:10 +07:00
|
|
|
{.gcsafe, raises: [].} =
|
Flare sync (#2627)
2024-09-27 15:07:42 +00:00
|
|
|
## Used by RPC updater
|
2023-01-17 09:28:14 +00:00
|
|
|
if not com.syncReqNewHead.isNil:
|
2023-02-14 21:27:17 +01:00
|
|
|
com.syncReqNewHead(header)
|
2023-01-17 09:28:14 +00:00
|
|
|
|
2025-01-29 12:31:06 +00:00
|
|
|
proc reqBeaconSyncerTarget*(com: CommonRef; header: Header; finHash: Hash32) =
|
Flare sync (#2627)
2024-09-27 15:07:42 +00:00
|
|
|
## Used by RPC updater
|
2025-01-29 12:31:06 +00:00
|
|
|
if not com.reqBeaconSyncerTargetCB.isNil:
|
|
|
|
com.reqBeaconSyncerTargetCB(header, finHash)
|
|
|
|
|
2025-01-29 16:20:25 +00:00
|
|
|
proc beaconSyncerProgress*(com: CommonRef): tuple[start, current, target: BlockNumber] =
|
2025-01-29 12:31:06 +00:00
|
|
|
## Query syncer status
|
2025-01-29 16:20:25 +00:00
|
|
|
if not com.beaconSyncerProgressCB.isNil:
|
|
|
|
return com.beaconSyncerProgressCB()
|
|
|
|
# otherwise the result defaults to (0, 0, 0)
|
Flare sync (#2627)
2024-09-27 15:07:42 +00:00
|
|
|
|
2024-10-16 07:04:12 +05:30
|
|
|
proc notifyBadBlock*(com: CommonRef; invalid, origin: Header)
|
2024-05-17 08:38:46 +07:00
|
|
|
{.gcsafe, raises: [].} =
|
|
|
|
|
|
|
|
if not com.notifyBadBlock.isNil:
|
|
|
|
com.notifyBadBlock(invalid, origin)
|
|
|
|
|
2022-12-02 11:39:12 +07:00
|
|
|
# ------------------------------------------------------------------------------
|
|
|
|
# Getters
|
|
|
|
# ------------------------------------------------------------------------------
|
2023-04-14 23:28:57 +01:00
|
|
|
|
2024-10-16 07:04:12 +05:30
|
|
|
func startOfHistory*(com: CommonRef): Hash32 =
|
2023-04-14 23:28:57 +01:00
|
|
|
## Getter
|
|
|
|
com.startOfHistory
|
|
|
|
|
2023-08-04 12:10:09 +01:00
|
|
|
func db*(com: CommonRef): CoreDbRef =
|
2022-12-02 11:39:12 +07:00
|
|
|
com.db
|
|
|
|
|
2024-06-14 14:31:08 +07:00
|
|
|
func eip150Block*(com: CommonRef): Opt[BlockNumber] =
|
2022-12-02 11:39:12 +07:00
|
|
|
com.config.eip150Block
|
|
|
|
|
2024-10-16 07:04:12 +05:30
|
|
|
func eip150Hash*(com: CommonRef): Hash32 =
|
2022-12-02 11:39:12 +07:00
|
|
|
com.config.eip150Hash
|
|
|
|
|
2024-06-14 14:31:08 +07:00
|
|
|
func daoForkBlock*(com: CommonRef): Opt[BlockNumber] =
|
2022-12-02 11:39:12 +07:00
|
|
|
com.config.daoForkBlock
|
|
|
|
|
|
|
|
func daoForkSupport*(com: CommonRef): bool =
|
|
|
|
com.config.daoForkSupport
|
|
|
|
|
2024-06-14 14:31:08 +07:00
|
|
|
func ttd*(com: CommonRef): Opt[DifficultyInt] =
|
2022-12-02 11:39:12 +07:00
|
|
|
com.config.terminalTotalDifficulty
|
|
|
|
|
2024-05-20 10:17:51 +00:00
|
|
|
func pruneHistory*(com: CommonRef): bool =
|
|
|
|
com.pruneHistory
|
2022-12-02 11:39:12 +07:00
|
|
|
|
|
|
|
# always remember ChainId and NetworkId
|
|
|
|
# are two distinct things that often get mixed up
|
|
|
|
# because some clients do not distinguish
|
|
|
|
# between them.
|
|
|
|
# And popular networks such as MainNet
|
2024-05-28 05:10:10 +00:00
|
|
|
# add more confusion to this
|
|
|
|
# by using the same value for both.
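# (On Ethereum MainNet, for example, both are 1.)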
|
2022-12-02 11:39:12 +07:00
|
|
|
func chainId*(com: CommonRef): ChainId =
|
|
|
|
com.config.chainId
|
|
|
|
|
|
|
|
func networkId*(com: CommonRef): NetworkId =
|
|
|
|
com.networkId
|
|
|
|
|
2024-10-16 07:04:12 +05:30
|
|
|
func genesisHash*(com: CommonRef): Hash32 =
|
2022-12-02 11:39:12 +07:00
|
|
|
## Getter
|
|
|
|
com.genesisHash
|
|
|
|
|
2024-10-16 07:04:12 +05:30
|
|
|
func genesisHeader*(com: CommonRef): Header =
|
2022-12-02 11:39:12 +07:00
|
|
|
## Getter
|
|
|
|
com.genesisHeader
|
|
|
|
|
|
|
|
func syncStart*(com: CommonRef): BlockNumber =
|
|
|
|
com.syncProgress.start
|
|
|
|
|
|
|
|
func syncCurrent*(com: CommonRef): BlockNumber =
|
|
|
|
com.syncProgress.current
|
|
|
|
|
|
|
|
func syncHighest*(com: CommonRef): BlockNumber =
|
|
|
|
com.syncProgress.highest
|
|
|
|
|
2024-09-04 16:54:54 +07:00
|
|
|
func syncState*(com: CommonRef): SyncState =
|
|
|
|
com.syncState
|
|
|
|
|
2024-11-06 09:01:25 +07:00
|
|
|
func extraData*(com: CommonRef): string =
|
|
|
|
com.extraData
|
|
|
|
|
2024-12-13 10:47:35 +07:00
|
|
|
func gasLimit*(com: CommonRef): uint64 =
|
|
|
|
com.gasLimit
|
|
|
|
|
2025-01-16 12:10:52 +07:00
|
|
|
func maxBlobsPerBlock*(com: CommonRef, fork: HardFork): uint64 =
|
|
|
|
doAssert(fork >= Cancun)
|
|
|
|
com.config.blobSchedule[fork].expect("blobSchedule initialized").max
|
|
|
|
|
|
|
|
func targetBlobsPerBlock*(com: CommonRef, fork: HardFork): uint64 =
|
|
|
|
doAssert(fork >= Cancun)
|
|
|
|
com.config.blobSchedule[fork].expect("blobSchedule initialized").target
|
|
|
|
|
2025-01-27 23:20:39 +07:00
|
|
|
func baseFeeUpdateFraction*(com: CommonRef, fork: HardFork): uint64 =
|
|
|
|
doAssert(fork >= Cancun)
|
|
|
|
com.config.blobSchedule[fork].expect("blobSchedule initialized").baseFeeUpdateFraction
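# Note: the per-fork (target, max, baseFeeUpdateFraction) triples queried
# above correspond to the blob schedule that EIP-7840 adds to execution
# client configuration files.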
|
|
|
|
|
2022-12-02 11:39:12 +07:00
|
|
|
# ------------------------------------------------------------------------------
|
|
|
|
# Setters
|
|
|
|
# ------------------------------------------------------------------------------
|
|
|
|
|
2024-05-30 20:30:40 +00:00
|
|
|
func `syncStart=`*(com: CommonRef, number: BlockNumber) =
|
2022-12-02 11:39:12 +07:00
|
|
|
com.syncProgress.start = number
|
|
|
|
|
2024-05-30 20:30:40 +00:00
|
|
|
func `syncCurrent=`*(com: CommonRef, number: BlockNumber) =
|
2022-12-02 11:39:12 +07:00
|
|
|
com.syncProgress.current = number
|
|
|
|
|
2024-05-30 20:30:40 +00:00
|
|
|
func `syncHighest=`*(com: CommonRef, number: BlockNumber) =
|
2022-12-02 11:39:12 +07:00
|
|
|
com.syncProgress.highest = number
|
|
|
|
|
2024-09-04 16:54:54 +07:00
|
|
|
func `syncState=`*(com: CommonRef, state: SyncState) =
|
|
|
|
com.syncState = state
|
|
|
|
|
2024-10-16 07:04:12 +05:30
|
|
|
func `startOfHistory=`*(com: CommonRef, val: Hash32) =
|
2023-04-14 23:28:57 +01:00
|
|
|
## Setter
|
|
|
|
com.startOfHistory = val
|
|
|
|
|
2024-06-14 14:31:08 +07:00
|
|
|
func setTTD*(com: CommonRef, ttd: Opt[DifficultyInt]) =
|
2022-12-02 11:39:12 +07:00
|
|
|
## useful for testing
|
|
|
|
com.config.terminalTotalDifficulty = ttd
|
2023-02-16 11:40:07 +00:00
|
|
|
# rebuild the MergeFork piece of the forkTransitionTable
|
|
|
|
com.forkTransitionTable.mergeForkTransitionThreshold = com.config.mergeForkTransitionThreshold
|
2022-12-02 11:39:12 +07:00
|
|
|
|
2024-05-30 20:30:40 +00:00
|
|
|
func `syncReqNewHead=`*(com: CommonRef; cb: SyncReqNewHeadCB) =
|
|
|
|
## Activate or reset a callback handler for syncing.
|
2023-01-17 09:28:14 +00:00
|
|
|
com.syncReqNewHead = cb
|
|
|
|
|
2025-01-29 12:31:06 +00:00
|
|
|
func `reqBeaconSyncerTarget=`*(com: CommonRef; cb: ReqBeaconSyncerTargetCB) =
|
Flare sync (#2627)
2024-09-27 15:07:42 +00:00
|
|
|
## Activate or reset a callback handler for syncing.
|
2025-01-29 12:31:06 +00:00
|
|
|
com.reqBeaconSyncerTargetCB = cb
|
|
|
|
|
2025-01-29 16:20:25 +00:00
|
|
|
func `beaconSyncerProgress=`*(com: CommonRef; cb: BeaconSyncerProgressCB) =
|
2025-01-29 12:31:06 +00:00
|
|
|
## Activate or reset a callback handler for querying the syncer.
|
2025-01-29 16:20:25 +00:00
|
|
|
com.beaconSyncerProgressCB = cb
|
Flare sync (#2627)
2024-09-27 15:07:42 +00:00
|
|
|
|
func `notifyBadBlock=`*(com: CommonRef; cb: NotifyBadBlockCB) =
  ## Activate or reset a call back handler for bad block notification.
  com.notifyBadBlock = cb

func `extraData=`*(com: CommonRef, val: string) =
  com.extraData = val
func `gasLimit=`*(com: CommonRef, val: uint64) =
  if val < GAS_LIMIT_MINIMUM:
    com.gasLimit = GAS_LIMIT_MINIMUM
  elif val > GAS_LIMIT_MAXIMUM:
    com.gasLimit = GAS_LIMIT_MAXIMUM
  else:
    com.gasLimit = val
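
The setter above clamps rather than rejects; assuming a matching `gasLimit`
getter on `CommonRef` (an assumption of this sketch), the behavior is:

# Illustrative only: out-of-range values are clamped to the nearest bound.
com.gasLimit = 0'u64                  # below GAS_LIMIT_MINIMUM
doAssert com.gasLimit == GAS_LIMIT_MINIMUM
com.gasLimit = high(uint64)           # above GAS_LIMIT_MAXIMUM
doAssert com.gasLimit == GAS_LIMIT_MAXIMUM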
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------