This kind of data is used only in tests, where it creates databases
that don't match actual usage of aristo.
Removing simplifies future optimizations that can focus on processing
specific leaf types more efficiently.
A casualty of this removal is some test code, as well as some unused
proof generation code. On the surface, it looks like it should be
possible to port both of these to the more specific data types -
doing so would ensure that a database written by one part of the
codebase can interact with the other. As it stands, there is confusion
on this point, since using the proof generation code will result in a
database of a shape that is incompatible with the rest of eth1.
* Clear rejected sync target so that it would not be processed again
* Use in-memory table to stash headers after FCU import has started
why:
After block import has started, there is no way to save/stash block
headers persistently. The FCU handlers always maintain a positive
transaction level and in some instances the current transaction is
flushed and re-opened.
This patch fixes an exception thrown when a block header has gone
missing.
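A minimal sketch of the stash idea, using only the standard library
(type and proc names are illustrative, not the actual Nimbus API):

```nim
import std/[tables, options]

type
  Header = object            # stand-in for the real block header type
    number: uint64

  HeaderStash = Table[uint64, Header]

proc stash(s: var HeaderStash, h: Header) =
  ## Park a header in memory; nothing can be written persistently
  ## while the FCU handlers keep a transaction open.
  s[h.number] = h

proc fetch(s: HeaderStash, num: uint64): Option[Header] =
  ## Retrieve the header later instead of raising when it has
  ## gone missing from the database.
  if num in s: some(s[num]) else: none(Header)
```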
* When resuming sync, delete stale headers and state
why:
Deleting headers saves some persistent space that would get lost
otherwise. Deleting the state after resuming prevents race
conditions.
* On clean start hibernate sync `deamon` entity before first update from CL
details:
Only reduced services are running
* accept FCU from CL
* fetch finalised header after accepting FCU (provides hash only)
* Improve text/meaning of some log messages
* Revisit error handling for useless peers
why:
A peer is abandoned if its error score is too high. This was not
properly handled in a fringe case where the error was detected at
staging time while fetching via eth/xx was ok.
* Clarify `break` meaning by using labelled `break` statements
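For reference, a labelled `break` in Nim exits a named `block`, which
makes multi-level loop exits explicit (a minimal illustrative example):

```nim
block fetchLoop:
  for peer in 0 ..< 3:
    for attempt in 0 ..< 5:
      if peer == 1 and attempt == 2:
        # plain `break` would only leave the inner loop;
        # naming the block removes the ambiguity
        break fetchLoop
```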
* Fix action how to commit when sync target has been reached
why:
The sync target block number might precede the latest FCU block number.
This happens when the engine API squeezes in some request to execute
and import subsequent blocks.
This patch fixes an assert thrown when, after reaching the target, the
latest FCU block number is higher than the expected target block number.
* Update TODO list
* switch to Nim v2.0.12
* fix LruCache capitalization for styleCheck
* KzgProof/KzgCommitment for styleCheck
* TxEip4844 for styleCheck
* styleCheck issues in nimbus/beacon/payload_conv.nim
* ENode for styleCheck
* isOk for styleCheck
* some more styleCheck fixes
* more styleCheck fixes
---------
Co-authored-by: jangko <jangko128@gmail.com>
* Clarifying/commenting FCU setup condition & small fixes, comments etc.
* Update some logging
* Reorg metrics updater and activation
* Better `async` responsiveness
why:
Block import does not allow `async` task activation while
executing. So allow a potential task switch after each imported
block (rather than after a group of 32 blocks.)
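A minimal chronos sketch of the idea (proc name and loop body are
illustrative, not the actual import loop):

```nim
import chronos

proc importAll(blocks: seq[int]) {.async.} =
  for blk in blocks:
    discard blk  # the (synchronous) block import would run here
    # yield to the dispatcher after every block so other async
    # tasks get a chance to run, instead of every 32 blocks
    await sleepAsync(0.milliseconds)

waitFor importAll(@[1, 2, 3])
```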
* Handle resuming after previous sync followed by import
why:
In this case the ledger state is more recent than the saved
sync state. So this is considered a pristine sync where any
previous sync state is forgotten.
This fixes an assert thrown because of inconsistent internal state.
* Provide option for clearing saved beacon sync state before starting syncer
why:
Otherwise it would resume with the last state, which might sometimes
be undesired.
Without RPC available, the syncer typically stops and terminates with
the canonical head larger than the base/finalised head. The latter one
will be saved as database/ledger state and the canonical head as syncer
target. Resuming the sync here would just repeat itself.
So clearing the syncer state prevents the syncer from starting
unnecessarily and performing useless actions.
* Allow workers to request syncer shutdown from within
why:
In one-trick-pony mode (after resuming without RPC support) the
syncer can be stopped from within, avoiding unnecessary polling.
In that case, the syncer can (theoretically) be restarted externally
with `startSync()`.
* Terminate beacon sync after a single run target is reached
why:
Stops doing useless polling (typically when there is no RPC available)
* Remove crufty comments
* Tighten state reload condition when resuming
why:
A pathological case might arise if the syncer is stopped while the
distance between finalised block and head is very large and the FCU
base becomes larger than the locked finalised state.
* Verify that finalised number from CL is at least FCU base number
why:
The FCU base number is determined by the database, non zero if
manually imported. The finalised number is passed via RPC by the CL
node and will increase over time. Unless fully synced, this number
will be pretty low.
On the other hand, the FCU call `forkChoice()` will eventually fail
if the `finalizedHash` argument refers to something outside the
internal chain starting at the FCU base block.
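The check itself is a one-liner; a hedged sketch with illustrative
names (not the actual syncer fields):

```nim
proc acceptFinalised(fcuBase, finalisedFromCL: uint64): bool =
  ## `forkChoice()` eventually fails if `finalizedHash` refers to a
  ## block outside the chain starting at the FCU base, so reject any
  ## finalised number below the base up front.
  finalisedFromCL >= fcuBase
```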
* Remove support for completing interrupted sync without RPC support
why:
Simplifies start/stop logic
* Remove unused import
* prefer the spec-derived name where possible
* don't pass stateRoot to LedgerRef and friends (it doesn't do anything)
* add deprecation warning in graphql - it needs updating to use
forkedchain instead
When `nimbus import` runs, we end up with a database without MPT roots
leading to long startup times the first time one is needed.
Computing the state root is slow because the on-disk order based on
VertexID sorting does not match the trie traversal order and therefore
makes lookups inefficient.
Here we introduce a helper that speeds up this computation by traversing
the trie in on-disk order and computing the trie hashes bottom up
instead - even though this leads to some redundant reads of nodes that
we cannot yet compute, it's still a net win as leaves and "bottom"
branches make up the majority of the database.
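A minimal sketch of the bottom-up strategy, with a string placeholder
instead of keccak and illustrative types (not the actual aristo API):

```nim
import std/tables

type
  VertexID = uint64
  Vtx = object
    isLeaf: bool
    children: seq[VertexID]   # empty for leaves

proc computeKeys(db: OrderedTable[VertexID, Vtx]): Table[VertexID, string] =
  ## Sweep vertices in on-disk (VertexID) order, hashing whatever is
  ## ready; unready branches are retried in a later sweep - these are
  ## the redundant reads mentioned above.
  var unresolved = true
  while unresolved:
    unresolved = false
    for vid, vtx in db:                  # on-disk order, not trie order
      if vid in result: continue
      if vtx.isLeaf:
        result[vid] = "h(leaf " & $vid & ")"
      else:
        var acc = ""
        var ready = true
        for c in vtx.children:
          if c notin result:
            ready = false                # child not hashed yet
            break
          acc.add result[c]
        if ready:
          result[vid] = "h(" & acc & ")"
        else:
          unresolved = true              # revisit on the next sweep
```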
This PR also addresses a few other sources of inefficiency largely due
to the separation of AriKey and AriVtx into their own column families.
Each column family is its own LSM tree that produces hundreds of SST
files - with a limit of 512 open files, rocksdb must keep closing and
opening files which leads to expensive metadata reads during random
access.
When rocksdb performs a lookup, it has to read several layers of
files. Ribbon filters let it skip over files that don't have the
requested data, but when these filters are not in memory, reading them
is slow - this happens in two cases: when opening a file and when the
filter has been evicted from the LRU cache. Addressing the open file
limit solves one source of inefficiency, but we must also increase the
block cache size to deal with this problem.
* rocksdb.max_open_files increased to 2048
* per-file size limits increased so that fewer files are created
* WAL size increased to avoid partial flushes which lead to small files
* rocksdb block cache increased
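Expressed as an illustrative options record (field names mirror the
rocksdb option names; the sizes other than `maxOpenFiles` are made-up
placeholders, not the values actually chosen):

```nim
type DbTuning = object
  maxOpenFiles: int          # rocksdb max_open_files
  targetFileSizeBase: int64  # per-file size limit
  maxTotalWalSize: int64     # WAL size before forced flush
  blockCacheSize: int64      # block cache holding data + filters

const tuned = DbTuning(
  maxOpenFiles: 2048,                         # up from the 512 limit
  targetFileSizeBase: 256'i64 * 1024 * 1024,  # placeholder value
  maxTotalWalSize: 1024'i64 * 1024 * 1024,    # placeholder value
  blockCacheSize: 2'i64 * 1024 * 1024 * 1024) # placeholder value
```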
All these increases of course lead to increased memory usage, but at
least performance is acceptable - in the future, we'll need to explore
options such as joining AriVtx and AriKey and/or reducing the row count
(by grouping branch layers under a single vertexid).
With this PR, the mainnet state root can be computed in ~8 hours (down
from 2-3 days) - not great, but still better.
Further, we now write all keys to the database, including those shorter
than 32 bytes - because the mpt path is part of the input, it is very
rare that we actually hit a key like this (about 200k such entries on
mainnet), so the code complexity of special-casing them is not worth
the benefit in the current database layout / design.
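For context, the sub-32-byte keys come from the yellow-paper rule that
short nodes are embedded rather than hashed - a sketch with a dummy
hash standing in for keccak:

```nim
proc keccak256(data: seq[byte]): seq[byte] =
  ## placeholder for the real 32-byte keccak digest
  newSeq[byte](32)

proc nodeRef(rlpEncoded: seq[byte]): seq[byte] =
  ## Nodes whose RLP encoding is shorter than 32 bytes are embedded
  ## verbatim in the parent instead of being referenced by hash.
  if rlpEncoded.len < 32:
    rlpEncoded              # embedded - one of the rare short "keys"
  else:
    keccak256(rlpEncoded)   # referenced by its hash key
```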
* remove redundant abstraction
* fix misleading raises - the implementation actually swallows errors or
panics (depending on how many other layers of abstraction we penetrate
before detecting it)
* blocks can be bigger than the default 1mb when json-rpc-encoded - this
happens on sepolia for example
* json-rpc bump improves debug logging and fixes a number of bugs
* json-serialization bump fixes a crash on invalid arrays in json data
At some point, it would probably be better to compute the maximum block
size from actual block constraints, though this is somewhat tricky and
depends on gas limits etc. Until then, 16mb should be plenty.
With this, sepolia can be synced :)
* Update comments & logs
* Do not start beacon sync unless there is possibly something to do
why:
It would continue polling without having any effect other than
logging. Now it will not start unless there is RPC available
or there was a previously interrupted sync to be resumed.
* Accept finalised hash from RPC with the canon header as well
* Reorg internal sync descriptor(s)
details:
Update target from RPC to provide the `consensus header` as well as
the `finalised` block number
why:
Prepare for using `importBlock()` instead of `persistBlocks()`
* Cosmetic updates
details:
+ Collect all pretty printers in `helpers.nim`
+ Remove unused return codes from function prototype
* Use `importBlock()` + `forkChoice()` rather than `persistBlocks()`
* Update logging and metrics
* Update docu
* Update `ForkedChainRef` constructor
why:
Initialisation is based on the canonical head which is always zero
after resuming a stopped `ForkedChainRef` based import.
* Update new-base calculator
why:
There is some ambiguous code which might not do what the comment
implies. In short, an unsigned condition like `2u - 3u < 1u => false`
is coded where the comment suggests that `2 - 3 < 1 => true` is meant.
This patch fixes notorious crashes when resuming import after a stop.
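The pitfall in isolation (runnable Nim):

```nim
echo 2 - 3 < 1        # true:  signed arithmetic, what the comment meant
echo 2'u - 3'u < 1'u  # false: unsigned subtraction wraps around
echo 2'u < 3'u + 1'u  # true:  same test, rearranged to avoid underflow
```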
* partial commit
* fixes
* remove converters too
* revert changes on nimbus_verified_proxy
* revert changes in converter
* revert changes(re-xport) in rpc_types
* update copyright year
* replace types in other binaries
* chain config bug
* fix rebase conflict incomplete buffer
* fix more rebase buffers
* remove ditto types and converters
* fix the tests
* update copyright year
* rename nimbus binary to nimbus_execution_client
* additional replacements
* makefile and dockerfile
* fix ci building errors
* github workflows
* improved Makefile target
---------
Co-authored-by: Pedro Miranda <pedro.miranda@nimbus.team>
* Fix fringe condition clarifying how to handle an empty range
why:
The `interval_set` module would treat an undefined interval construct
`[2,1]` as `[2,2]` (the right bound being `max(2,1)`.)
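A hedged sketch of the fix's intent, with illustrative types (not the
actual `interval_set` API): an inverted interval should count as empty
rather than being clamped.

```nim
type Iv = object
  minPt, maxPt: uint64

proc isEmpty(iv: Iv): bool =
  iv.maxPt < iv.minPt        # `[2,1]` is empty, not `[2,2]`

proc width(iv: Iv): uint64 =
  ## sketch only - a full-range interval would overflow the `+ 1`
  if iv.isEmpty: 0'u64
  else: iv.maxPt - iv.minPt + 1
```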
* Use the `consensus head` rather than the `finalised` block as sync target
why:
The former is ahead of the `finalised` block.
* In ctx descriptor rename `final` field to `target`
* Update docu, rename `F` -> `T`
* bump nimbus-build-system to use Nim v2.0.10
* 2.0.10 fixes
* fluffy linting
* make trivial change which should trigger whole-nimbus+fluffy rebuild/ci
* Nim v2.0.10 chronicles.error/macros.error ambiguity workaround
* another contentType enum specifier
* fluffy linting
* Fix eth/common & web3 related deprecation warnings for fluffy
This commit uses the new types in the new eth/common/ structure
to remove deprecation warnings.
It is, however, more than just a mass replace: all places where
eth/common, eth/common/eth_types, or eth/common/eth_types_rlp were
imported have also been revised and adjusted to better per-submodule
imports.
There are still a bunch of toMDigest deprecation warnings, but that
converter is no longer needed by fluffy code, so in theory it should
not be used (bug?). It seems to still get imported via export leaks
from imported nimbus code.
* Address review comments
* Remove two more unused eth/common imports