nimbus-eth1/nimbus/core/chain/persist_blocks.nim

# Nimbus
# Copyright (c) 2018-2024 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
# http://www.apache.org/licenses/LICENSE-2.0)
# * MIT license ([LICENSE-MIT](LICENSE-MIT) or
# http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed except
# according to those terms.
{.push raises: [].}
import
results,
../../db/ledger,
../../evm/state,
../../evm/types,
../executor,
../validate,
./chain_desc,
chronicles,
stint
when not defined(release):
import
#../../tracer,
../../utils/utils
export results
type
PersistBlockFlag* = enum
    NoValidation # Skip validation entirely for every block in the batch
NoFullValidation # Validate the batch instead of validating each block in it
NoPersistHeader
NoPersistTransactions
NoPersistUncles
NoPersistWithdrawals
NoPersistReceipts
PersistBlockFlags* = set[PersistBlockFlag]
PersistStats = tuple[blocks: int, txs: int, gas: GasInt]
const
NoPersistBodies* = {NoPersistTransactions, NoPersistUncles, NoPersistWithdrawals}
CleanUpEpoch = 30_000.BlockNumber
    ## Block interval at which the history clean-up check runs (applies to the
    ## single state DB). This is mainly a debugging/testing feature that keeps
    ## the database a bit smaller; it is not applicable to a full node.
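
# Example (editor's sketch, not part of the original source): a caller that
# already trusts the block bodies, e.g. when importing from era files, might
# combine the flags above roughly as follows; `c` and `eraBlocks` are
# hypothetical names.
#
#   let stats = c.persistBlocks(eraBlocks, NoPersistBodies + {NoFullValidation})
#   if stats.isErr:
#     warn "Import failed", error = stats.error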
# ------------------------------------------------------------------------------
# Private
# ------------------------------------------------------------------------------
proc getVmState(c: ChainRef, header: BlockHeader): Result[BaseVMState, string] =
if not c.vmState.isNil:
return ok(c.vmState)
let vmState = BaseVMState()
if not vmState.init(header, c.com):
return err("Could not initialise VMState")
ok(vmState)
proc purgeOlderBlocksFromHistory(db: CoreDbRef, bn: BlockNumber) =
## Remove non-reachable blocks from KVT database
if 0 < bn:
var blkNum = bn - 1
while 0 < blkNum:
try:
if not db.forgetHistory blkNum:
break
except RlpError as exc:
warn "Error forgetting history", err = exc.msg
blkNum = blkNum - 1
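
# Editor's note: the helper above is only driven from the `pruneHistory` branch
# at the end of `persistBlocksImpl` below; it walks backwards from `bn - 1` and
# stops as soon as `forgetHistory` returns false, presumably because the
# remaining history has already been purged.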
proc persistBlocksImpl(
c: ChainRef, blocks: openArray[EthBlock], flags: PersistBlockFlags = {}
): Result[PersistStats, string] =
let dbTx = c.db.ctx.newTransaction()
defer:
dbTx.dispose()
c.com.hardForkTransition(blocks[0].header)
  # Note that `0 < blocks.len` is assured when called from `persistBlocks()`
let
vmState = ?c.getVmState(blocks[0].header)
fromBlock = blocks[0].header.number
toBlock = blocks[blocks.high()].header.number
trace "Persisting blocks", fromBlock, toBlock
var
blks = 0
txs = 0
gas = GasInt(0)
parentHash: Hash256 # only needed after the first block
for blk in blocks:
template header(): BlockHeader =
blk.header
# Full validation means validating the state root at every block and
# performing the more expensive hash computations on the block itself, ie
# verifying that the transaction and receipts roots are valid - when not
# doing full validation, we skip these expensive checks relying instead
# on the source of the data to have performed them previously or because
# the cost of failure is low.
# TODO Figure out the right balance for header fields - in particular, if
# we receive instruction from the CL while syncing that a block is
# CL-valid, do we skip validation while "far from head"? probably yes.
# This requires performing a header-chain validation from that CL-valid
# block which the current code doesn't express.
# Also, the potential avenues for corruption should be described with
# more rigor, ie if the txroot doesn't match but everything else does,
# can the state root of the last block still be correct? Dubious, but
# what would be the consequences? We would roll back the full set of
# blocks which is fairly low-cost.
let skipValidation =
NoFullValidation in flags and header.number != toBlock or NoValidation in flags
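    # (Editor's note) With `{NoFullValidation}` this skips validation for every
    # block except the last one of the batch (`header.number == toBlock`),
    # while `{NoValidation}` skips it for the whole batch.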
c.com.hardForkTransition(header)
if blks > 0:
template parent(): BlockHeader =
blocks[blks - 1].header
let updated =
if header.number == parent.number + 1 and header.parentHash == parentHash:
vmState.reinit(parent = parent, header = header, linear = true)
else:
# TODO remove this code path and process only linear histories in this
# function
vmState.reinit(header = header)
if not updated:
debug "Cannot update VmState", blockNumber = header.number
return err("Cannot update VmState to block " & $header.number)
# TODO even if we're skipping validation, we should perform basic sanity
# checks on the block and header - that fields are sanely set for the
# given hard fork and similar path-independent checks - these same
# sanity checks should be performed early in the processing pipeline no
# matter their provenance.
if not skipValidation and c.extraValidation and c.verifyFrom <= header.number:
# TODO: how to checkseal from here
?c.com.validateHeaderAndKinship(blk, vmState.parent, checkSealOK = false)
# Generate receipts for storage or validation but skip them otherwise
?vmState.processBlock(
blk,
skipValidation,
skipReceipts = skipValidation and NoPersistReceipts in flags,
skipUncles = NoPersistUncles in flags,
)
# when defined(nimbusDumpDebuggingMetaData):
# if validationResult == ValidationResult.Error and
# body.transactions.calcTxRoot == header.txRoot:
# vmState.dumpDebuggingMetaData(header, body)
# warn "Validation error. Debugging metadata dumped."
let blockHash = header.blockHash()
if NoPersistHeader notin flags:
if not c.db.persistHeader(
blockHash, header, c.com.consensus == ConsensusType.POS, c.com.startOfHistory
):
return err("Could not persist header")
if NoPersistTransactions notin flags:
c.db.persistTransactions(header.number, header.txRoot, blk.transactions)
if NoPersistReceipts notin flags:
c.db.persistReceipts(header.receiptsRoot, vmState.receipts)
if NoPersistWithdrawals notin flags and blk.withdrawals.isSome:
c.db.persistWithdrawals(
header.withdrawalsRoot.expect("WithdrawalsRoot should be verified before"),
blk.withdrawals.get,
)
    # Update the current block number *after* the block has been persisted so
    # that the RPC layer returns consistent results between eth_blockNumber
    # and eth_syncing
c.com.syncCurrent = header.number
blks += 1
txs += blk.transactions.len
gas += blk.header.gasUsed
parentHash = blockHash
dbTx.commit()
# Save and record the block number before the last saved block state.
c.db.persistent(toBlock).isOkOr:
return err("Failed to save state: " & $$error)
if c.com.pruneHistory:
    # There is a feature for test systems to regularly clean up older blocks
    # from the database; it is not applicable to a full node setup.
let n = fromBlock div CleanUpEpoch
if 0 < n and n < (toBlock div CleanUpEpoch):
# Starts at around `2 * CleanUpEpoch`
try:
c.db.purgeOlderBlocksFromHistory(fromBlock - CleanUpEpoch)
except CatchableError as exc:
warn "Could not clean up old blocks from history", err = exc.msg
ok((blks, txs, gas))
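
# Editor's sketch (not part of the original module): a tiny helper illustrating
# how the `PersistStats` tuple returned above can be consumed; the name
# `shortStats` is hypothetical.
func shortStats(s: PersistStats): string =
  # Renders e.g. "1024 blocks, 35211 txs, 8123456789 gas"
  result = $s.blocks & " blocks, " & $s.txs & " txs, " & $s.gas & " gas"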
# ------------------------------------------------------------------------------
# Public `ChainDB` methods
# ------------------------------------------------------------------------------
proc insertBlockWithoutSetHead*(c: ChainRef, blk: EthBlock): Result[void, string] =
discard ?c.persistBlocksImpl([blk], {NoPersistHeader, NoPersistReceipts})
if not c.db.persistHeader(blk.header.blockHash, blk.header, c.com.startOfHistory):
return err("Could not persist header")
ok()
proc setCanonical*(c: ChainRef, header: BlockHeader): Result[void, string] =
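  ## Mark the block belonging to `header` as the canonical head. If the parent
  ## hash is empty (zero), only the head pointer is updated; otherwise the
  ## block body is re-read from the database and re-processed via
  ## `persistBlocksImpl` before the head is moved.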
if header.parentHash == Hash256():
try:
if not c.db.setHead(header.blockHash):
return err("setHead failed")
except RlpError as exc:
# TODO fix exception+bool error return
return err(exc.msg)
return ok()
var body: BlockBody
try:
if not c.db.getBlockBody(header, body):
debug "Failed to get BlockBody", hash = header.blockHash
return err("Could not get block body")
except RlpError as exc:
return err(exc.msg)
discard
?c.persistBlocksImpl(
[EthBlock.init(header, move(body))], {NoPersistHeader, NoPersistTransactions}
)
try:
discard c.db.setHead(header.blockHash)
except RlpError as exc:
return err(exc.msg)
ok()
proc setCanonical*(c: ChainRef, blockHash: Hash256): Result[void, string] =
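  ## Convenience overload: look up the header for `blockHash` and forward to
  ## `setCanonical(c, header)` above.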
var header: BlockHeader
if not c.db.getBlockHeader(blockHash, header):
debug "Failed to get BlockHeader", hash = blockHash
return err("Could not get block header")
setCanonical(c, header)
proc persistBlocks*(
c: ChainRef, blocks: openArray[EthBlock], flags: PersistBlockFlags = {}
): Result[PersistStats, string] =
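  ## Process a batch of blocks through the VM and persist the results,
  ## returning the accumulated `PersistStats` (blocks, transactions and gas).
  ## An empty `blocks` batch is a no-op that returns default stats.
  ##
  ## Rough usage sketch (the `blks` batch and the error handling shown here
  ## are illustrative, not part of this module):
  ##
  ## .. code-block:: nim
  ##
  ##   let stats = c.persistBlocks(blks).valueOr:
  ##     return err("block import failed: " & error)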
# Run the VM here
if blocks.len == 0:
debug "Nothing to do"
    return ok(default(PersistStats)) # TODO not nice to return a default value here
c.persistBlocksImpl(blocks, flags)
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------