nimbus-eth1/tests/test_coredb/test_chainsync.nim

# Nimbus
# Copyright (c) 2023-2025 Status Research & Development GmbH
# Licensed under either of
#  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
#    http://www.apache.org/licenses/LICENSE-2.0)
#  * MIT license ([LICENSE-MIT](LICENSE-MIT) or
#    http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or
# distributed except according to those terms.

import
  std/[os, strformat, times],
  chronicles,
  eth/common,
  results,
  unittest2,
  ../../nimbus/core/chain,
  ../../nimbus/db/ledger,
  ../replay/[pp, undump_blocks, undump_blocks_era1, xcheck],
  ./test_helpers

when CoreDbEnableProfiling:
  import
    std/sequtils,
    ../../nimbus/db/aristo/[aristo_api, aristo_profile],
    ../../nimbus/db/kvt/kvt_api
  var
    aristoProfData: AristoDbProfListRef
    kvtProfData: KvtDbProfListRef
    cdbProfData: CoreDbProfListRef

const
  EnableExtraLoggingControl = true

var
  logStartTime {.used.} = Time()
  logSavedEnv {.used.}: (bool,bool)

# ------------------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------------------
proc setTraceLevel {.used.} =
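  ## Set the `chronicles` run-time log level to TRACE (a no-op unless
  ## run-time filtering is compiled in; ditto for the two setters below.)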
  when defined(chronicles_runtime_filtering) and loggingEnabled:
    setLogLevel(LogLevel.TRACE)

proc setDebugLevel {.used.} =
  when defined(chronicles_runtime_filtering) and loggingEnabled:
    setLogLevel(LogLevel.DEBUG)

proc setErrorLevel {.used.} =
  when defined(chronicles_runtime_filtering) and loggingEnabled:
    setLogLevel(LogLevel.ERROR)

# --------------
template initLogging(noisy: bool, com: CommonRef) =
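  ## Start an extra logging session: when `noisy`, announce the undumping
  ## run at DEBUG level; reset the session timer and, if API tracking is
  ## compiled in, save and enable the CoreDb/Ledger tracking flags on `com`.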
  when EnableExtraLoggingControl:
    if noisy:
      setDebugLevel()
      debug "start undumping into persistent blocks"
    logStartTime = Time()
    setErrorLevel()
    when CoreDbEnableApiTracking:
      logSavedEnv = (com.db.trackCoreDbApi, com.db.trackLedgerApi)
      com.db.trackCoreDbApi = true
      com.db.trackLedgerApi = true

proc finishLogging(com: CommonRef) =
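  ## Counterpart of `initLogging()`: mute logging again and restore the
  ## previously saved API tracking flags.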
  when EnableExtraLoggingControl:
    setErrorLevel()
    when CoreDbEnableApiTracking:
      (com.db.trackCoreDbApi, com.db.trackLedgerApi) = logSavedEnv

template startLogging(noisy: bool; num: BlockNumber) =
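  ## Switch to DEBUG level logging for block `num` and start the session
  ## timer, unless a session is already running.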
  when EnableExtraLoggingControl:
    if noisy and logStartTime == Time():
      logStartTime = getTime()
      setDebugLevel()
      debug "start logging ...", blockNumber=num

when false:
  template startLogging(noisy: bool) =
    when EnableExtraLoggingControl:
      if noisy and logStartTime == Time():
        logStartTime = getTime()
        setDebugLevel()
        debug "start logging ..."

template stopLogging(noisy: bool) =
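  ## Report the elapsed time of the current logging session (if any) and
  ## mute logging again.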
  when EnableExtraLoggingControl:
    if logStartTime != Time():
      debug "stop logging", elapsed=(getTime() - logStartTime).pp
      logStartTime = Time()
    setErrorLevel()

template stopLoggingAfter(noisy: bool; code: untyped) =
  ## Turn logging off after executing `code`
  block:
    defer: noisy.stopLogging()
    code

# ------------------------------------------------------------------------------
# Public test function
# ------------------------------------------------------------------------------
proc test_chainSyncProfilingPrint*(
    noisy = false;
    nBlocks: int;
    indent = 2;
      ) =
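  ## Pretty-print the profiling results captured by `test_chainSync()`; this
  ## is a no-op unless compiled with `CoreDbEnableProfiling` and run with
  ## `noisy` set.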
  if noisy:
    let info =
      if 0 < nBlocks and nBlocks < high(int): " (" & $nBlocks & " blocks)"
      else: ""
    discard info
    var blurb: seq[string]
    when CoreDbEnableProfiling:
      blurb.add cdbProfData.profilingPrinter(
        names = CoreDbFnInx.toSeq.mapIt($it),
        header = "CoreDb profiling results" & info,
        indent)
      blurb.add aristoProfData.profilingPrinter(
        names = AristoApiProfNames.toSeq.mapIt($it),
        header = "Aristo backend profiling results" & info,
        indent)
      blurb.add kvtProfData.profilingPrinter(
        names = KvtApiProfNames.toSeq.mapIt($it),
        header = "Kvt backend profiling results" & info,
        indent)
    for s in blurb:
      if 0 < s.len: true.say "***", s, "\n"

proc test_chainSync*(
    noisy: bool;
    filePaths: seq[string];
    com: CommonRef;
    numBlocks = high(int);
    enaLogging = true;
    lastOneExtra = true;
    oldLogAlign = false;
      ): bool =
  ## Store persistent blocks from dump into chain DB
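  ##
  ## A typical invocation might look like the sketch below (the sample file
  ## name and the fixture helper are hypothetical; any `CommonRef` prepared
  ## for the target network will do):
  ##
  ## .. code-block:: nim
  ##   let com = setupTestCommonRef()          # hypothetical test fixture
  ##   check true.test_chainSync(
  ##     @["mainnet-00000-5ec1ffb8.era1"], com, numBlocks = 1_000)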
  let
    sayBlocks = 900'u64
    chain = com.newChain()
    blockOnDb = com.db.baseTxFrame().getSavedStateBlockNumber()
    lastBlock = max(1, numBlocks).BlockNumber

  noisy.initLogging com
  defer: com.finishLogging()

  # Scan folder for `era1` files (ignoring the argument file name)
  let
    (dir, _, ext) = filePaths[0].splitFile
    files =
      if filePaths.len == 1 and ext == ".era1":
        @[dir]
      else:
        filePaths

  # If the lowest block saved on the DB is non-zero, resume at the next block
  let start = block:
    if blockOnDb == 0:
      0u64
    elif blockOnDb < lastBlock:
      noisy.say "***", "resuming at #", blockOnDb+1
      blockOnDb + 1
    else:
      noisy.say "***", "stop: sample exhausted"
      return true

  # Profile variables will be non-nil if profiling is available. The profiling
  # API data needs to be captured here so that it remains available after the
  # services have terminated.
  when CoreDbEnableProfiling:
    aristoProfData = com.db.ariApi.AristoApiProfRef.data
    kvtProfData = com.db.kvtApi.KvtApiProfRef.data
    cdbProfData = com.db.profTab

  # This will enable printing the `era1` covered block ranges (if any)
  undump_blocks_era1.noisy = noisy

  var
    blocks = 0
    total = 0
    begin = toUnixFloat(getTime())
    sample = begin

  template sayPerf =
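    ## Log the throughput since the last sample: number of blocks in the
    ## group, blocks per second for the group, and the running average.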
    if blocks > 0:
      total += blocks
      let done {.inject.} = toUnixFloat(getTime())
noisy.say "", &"{blocks:3} blocks, {(done-sample):2.3}s,",
" {(blocks.float / (done-sample)):4.3f} b/s,",
" avg {(total.float / (done-begin)):4.3f} b/s"
blocks = 0
sample = done
  for w in files.undumpBlocks(least = start):
    let (fromBlock, toBlock) = (w[0].header.number, w[^1].header.number)
    if fromBlock == 0'u64:
      xCheck w[0].header == com.db.baseTxFrame().getBlockHeader(0'u64).expect("block header exists")
      continue

    # Process groups of blocks ...
    if toBlock < lastBlock:
      # Message if `[fromBlock,toBlock]` contains a multiple of `sayBlocks`
      if fromBlock + (toBlock mod sayBlocks) <= toBlock:
        if oldLogAlign:
          noisy.whisper "***",
            &"processing ...[#{fromBlock},#{toBlock}]...\n"
        else:
          sayPerf
          noisy.whisper "***",
            &"processing ...[#{fromBlock:>8},#{toBlock:>8}]..."
        if enaLogging:
          noisy.startLogging(w[0].header.number)

      noisy.stopLoggingAfter():
        let runPersistBlocksRc = chain.persistBlocks(w)
        xCheck runPersistBlocksRc.isOk():
          if noisy:
noisy.whisper "***", "Re-run with logging enabled...\n"
setTraceLevel()
when CoreDbEnableApiTracking:
com.db.trackCoreDbApi = false
com.db.trackLedgerApi = false
            discard chain.persistBlocks(w)
      blocks += w.len
      continue

    # Last group or single block
    #
    # Make sure that `lastBlock` is the first item of the argument batch. So
    # it might be necessary to split off all blocks smaller than `lastBlock`
    # and execute them first. Then the next batch starts with the `lastBlock`.
    let
      pivot = lastBlock - fromBlock
      blocks9 = w[pivot .. ^1]
    doAssert lastBlock == blocks9[0].header.number

    # Process leading batch before `lastBlock` (if any)
    var dotsOrSpace = "..."
    if fromBlock < lastBlock:
      let
        blocks1 = w[0 ..< pivot]
      if oldLogAlign:
        noisy.whisper "***", &"processing ...[#{fromBlock},#{toBlock}]...\n"
      else:
        sayPerf
        noisy.whisper "***",
          &"processing {dotsOrSpace}[#{fromBlock:>8},#{(lastBlock-1):>8}]"
      let runPersistBlocks1Rc = chain.persistBlocks(blocks1)
      xCheck runPersistBlocks1Rc.isOk()
      dotsOrSpace = "   "

    noisy.startLogging(blocks9[0].header.number)
    if lastOneExtra:
      let
        blocks0 = blocks9[0..0]
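      # Announce the single-block range, either in the legacy one-line format
      # or via the aligned perf printer.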
      if oldLogAlign:
        noisy.whisper "***",
          &"processing {dotsOrSpace}[#{lastBlock},#{lastBlock}]\n"
      else:
        sayPerf
        noisy.whisper "***",
          &"processing {dotsOrSpace}[#{lastBlock:>8},#{lastBlock:>8}]"
      noisy.stopLoggingAfter():
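        # Persist the single-block batch; `xCheck` fails the test run unless
        # the result is OK.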
        let runPersistBlocks0Rc = chain.persistBlocks(blocks0)
        xCheck runPersistBlocks0Rc.isOk()
    else:
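      # No extra single-block step: announce and persist the whole trailing
      # batch in one go.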
      if oldLogAlign:
        noisy.whisper "***",
          &"processing {dotsOrSpace}[#{lastBlock},#{toBlock}]\n"
      else:
        sayPerf
        noisy.whisper "***",
          &"processing {dotsOrSpace}[#{lastBlock:>8},#{toBlock:>8}]"
      noisy.stopLoggingAfter():
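        # Persist the remaining blocks in a single call; `break` below then
        # ends the replay loop.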
        let runPersistBlocks9Rc = chain.persistBlocks(blocks9)
        xCheck runPersistBlocks9Rc.isOk()
      break

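  # Emit a closing perf line for the aligned format; the trailing `true` is
  # the test proc's success result.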
  if not oldLogAlign:
    sayPerf

  true
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------