Thanks to @holiman of goevmlab for his fuzzer.
Similar to the Blake2b precompile regression #2919.
On error, the precompile should not return any output.
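A minimal sketch of the contract this fixes, with illustrative names rather than the real nimbus-eth1 precompile interface: on any error path the output buffer must stay empty instead of carrying partial data.
```nim
proc runPrecompile(input: openArray[byte]): tuple[ok: bool, output: seq[byte]] =
  if input.len != 213:              # stand-in validity check (Blake2 F takes a 213-byte input)
    return (false, newSeq[byte]())  # error: no output bytes at all
  var output = newSeq[byte](64)
  # ... run the precompile and fill `output` ...
  (true, output)
```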
When running the import, blocks are currently loaded in batches into a
`seq` and then passed to the importer as such.
In reality, blocks are still processed one by one, so the batching does
not offer any performance advantage. It does, however, force the client
to waste memory, up to several GB, on the block sequence while the
blocks wait to be processed.
This PR introduces a persister that accepts these potentially large
blocks one by one and at the same time removes a number of redundant /
unnecessary copies, assignments and resets that were slowing down the
import process in general.
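A hypothetical sketch of the streaming shape this moves towards (the names do not match the real nimbus-eth1 persister API): blocks are handed to the persister one at a time instead of being accumulated into a large sequence first.
```nim
type EthBlock = object
  number: uint64
  # header, transactions, ... omitted

iterator loadBlocks(path: string): EthBlock =
  # stand-in: decode and yield blocks from the import source one at a time
  for n in 0'u64 ..< 3'u64:
    yield EthBlock(number: n)

proc persistBlock(blk: EthBlock) =
  # validate and write a single block, reusing internal buffers
  discard

proc importBlocks(path: string) =
  for blk in loadBlocks(path):  # no intermediate multi-GB `seq` of pending blocks
    persistBlock(blk)
```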
The forking facility has been replaced by ForkedChain - frames and
layers are two other mechanisms that mostly do the same thing at the
aristo level, without quite providing the functionality FC needs - this
cleanup will make that integration easier.
The current getCanonicalHead of core db should not be confused with ForkedChain.latestHeader.
Therefore getCanonicalHead should only be used in restricted cases, e.g. when initializing ForkedChain.
* Metrics cosmetics
* Better naming for error threshold constants
* Treating header/body processing errors differently from response errors
why:
Error handling only becomes active after some consecutive failures
appear. As both types of error may interleave (e.g. with no response
errors), resetting the counter for one type might affect the other.
Done wrongly, a peer could repeatedly send a bogus block, locking the
syncer in an endless loop.
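A rough sketch of the separation described above, with illustrative names and threshold: each error kind keeps its own counter, so a good response no longer resets the counter tracking bogus blocks.
```nim
type PeerErrorState = object
  nRespErrors: int  # timeouts / missing responses
  nProcErrors: int  # header/body verification failures

const errorThreshold = 2

proc onGoodResponse(s: var PeerErrorState) =
  s.nRespErrors = 0                # only the response counter is reset

proc onBadBlock(s: var PeerErrorState): bool =
  inc s.nProcErrors
  s.nProcErrors > errorThreshold   # true => stop using this peer
```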
In block processing, depending on the complexity of a transaction,
hotness of caches, etc., signature checking can actually make up the
majority of the time needed to process a transaction (60% observed in
some randomly sampled block ranges).
Fortunately, this is a task that can trivially be offloaded to a task
pool, similar to how nimbus-eth2 does it.
This PR introduces taskpools in the simplest way possible, by
performing signature checking concurrently with other TX processing,
effectively assigning one taskpool task per TX.
With this little trick, we're in gigagas land 🎉 on my laptop!
```
INF 2024-12-10 21:05:35.170+01:00 Imported blocks
blockNumber=3874817 b... mgps=1222.707 ...
```
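A minimal sketch of the per-TX spawn/sync shape, assuming the `taskpools` API (`Taskpool.new`, `spawn`, `sync`, `shutdown`); `checkTxSignature` and the surrounding structure are illustrative stand-ins, not the actual nimbus-eth1 code.
```nim
import taskpools

proc checkTxSignature(txIndex: int): bool =
  # stand-in for the actual per-transaction signature verification
  true

proc processBlock(tp: Taskpool, txCount: int) =
  for i in 0 ..< txCount:
    # kick off the signature check, overlap it with the rest of the
    # per-transaction work, then collect the result
    let sigOk = tp.spawn checkTxSignature(i)
    # ... execute the transaction, update state, etc. ...
    doAssert sync(sigOk), "invalid signature in tx " & $i

when isMainModule:
  var tp = Taskpool.new(numThreads = 4)
  processBlock(tp, txCount = 10)
  tp.shutdown()
```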
Tests don't use the taskpool for now because it needs manual cleanup and
we don't have a good mechanism in place. Future PRs should address this
by creating a common shutdown sequence that also closes and cleans up
other resources like the DB.
Co-authored-by: andri lim <jangko128@gmail.com>
* Cosmetics, add some metrics updates to smoothen curves
why:
Block download progress was previously just a jump from none to full
* Reclassifying some syncer gossip from TRC to DBG
why:
Might help debugging without full trace logs
Now that branches are small, we can add a branch cache that fits more
vertices in memory by only storing the branch portion (16 bytes) of the
VertexRef (136 bytes).
Where the original vertex cache hovers around a ~60% hit rate, this
branch cache instead reaches a >90% hit rate around block 20M, which
gives a nice boost to processing.
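A crude sketch of the idea, with a plain `Table` standing in for the real cache and purely illustrative type names: only a 16-byte branch payload is kept per entry, and a new `VertexRef` has to be allocated on every hit.
```nim
import std/tables

type
  VertexID = distinct uint64
  BranchData = array[16, byte]  # only the branch portion is cached
  VertexRef = ref object
    branch: BranchData
    # ... the rest of the ~136-byte vertex is not cached

var branchCache = initTable[uint64, BranchData]()

proc getBranchVertex(vid: VertexID): VertexRef =
  ## Returns nil on a cache miss; the caller then loads the vertex from the
  ## database (not shown) and populates the cache.
  let key = uint64(vid)
  if key in branchCache:
    # hit: a fresh VertexRef must be allocated around the cached 16 bytes,
    # which is the GC overhead noted below
    result = VertexRef(branch: branchCache[key])
```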
A downside of this approach is that a new VertexRef must be allocated
for every cache hit instead of reusing an existing instance - this
causes some GC overhead that needs to be addressed.
Nice 15% improvement nonetheless, can't complain!
```
blocks: 19630784, baseline: 161h18m38s, contender: 136h23m23s
Time (total): -24h55m14s, -15.45%
```
Since data is ordered by VertexID on disk, with this simple trick we can
make much better use of the various rocksdb caches.
Computing the state root of the full mainnet state is down to 4 hours
(from 9) on my laptop.
A bit unexpectedly, nibble handling shows up in the profiler mainly
because the current impl is tuned towards slicing while the most common
operation is prefix comparison - since the code is simple, we might as
well get rid of some of the excess fat by always aligning the nibbles to
the byte buffer.
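A small sketch of why byte-aligned nibbles help here (illustrative types, not the real aristo nibble buffer): when the nibbles always start at the beginning of the byte buffer, prefix comparison mostly degenerates into comparing whole bytes.
```nim
type NibblesBuf = object
  bytes: seq[byte]  # nibbles packed two per byte, aligned to the buffer start
  len: int          # number of nibbles

proc sharedPrefixLen(a, b: NibblesBuf): int =
  ## Number of leading nibbles the two paths share.
  let maxNibbles = min(a.len, b.len)
  var i = 0
  # whole bytes (two nibbles at a time) can be compared directly
  while i + 2 <= maxNibbles and a.bytes[i shr 1] == b.bytes[i shr 1]:
    i += 2
  # at most one more (high) nibble can still match
  if i < maxNibbles and (a.bytes[i shr 1] shr 4) == (b.bytes[i shr 1] shr 4):
    i += 1
  i
```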
* Cosmetics
* Must not async wait inside termination `for` loop
why:
Async-waiting inside a `for` loop will switch to the termination
process, which may uncontrollably modify the data the for-loop
iterates over.
* Avoid `waitFor` in scheduler termination code
why:
Is reserved for main loop
* `shouldPrepareTracer` always true
* simple `pop` should not copy the value (reading the memory shows up in
a profiler; see the sketch after this list)
* continuation code simplified
* remove some unnecessary EH
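A hypothetical sketch of the `pop` point above (types are stand-ins for the nimbus EVM ones): when the popped value is not needed, the stack is simply shrunk instead of copying the 32-byte word out, which is the memory read that showed up in the profiler.
```nim
type
  UInt256 = array[4, uint64]  # stand-in for the real 256-bit word type
  EvmStack = object
    values: seq[UInt256]

proc popDiscard(stack: var EvmStack) =
  # POP only discards the top item; no need to read/copy the 32-byte word
  stack.values.setLen(stack.values.len - 1)
```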
* Re-org internal descriptor `CanonicalDesc` as `PivotArc`
why:
Despite its name, `CanonicalDesc` contained a cursor arc (or leg) from
the base tree with a designated block (or Header) on its arc members
(aka blocks). The type is used more generally than only for a block on
the canonical cursor.
Also, the `PivotArc` provides some more fields for caching intermediate
data. This simplifies managing extra arguments for some functions.
* Remove cruft
details:
No need to find cursor arc if it is given as function argument.
* Rename prototype variables `head: PivotArc` to `pvarc`
why:
Better reading
* Function and code massage, adjust names
details:
Avoid the syllable `canonical` in function names that do not strictly
apply to the canonical chain. So renaming
* findCanonicalHead() => findCursorArc()
* canonicalChain() => findHeader()
* trimCanonicalChain() => trimCursorArc()
* Combine `updateBase()` function-args into single `PivotArgs` object
why:
Will generalise action for more complex scenarios in future.
* update `calculateNewBase()` return code type => `PivotArc`
why:
So it can directly be used as argument into `updateBase()`
* Update `calculateNewBase()` for target on parent arc
* Update unit tests
The radius sort performance improvement in content lookups was
not implemented in the trace version.
Also clean up some of the logging related to uTP connection setup.
Each branch node may have up to 16 sub-items - currently, these are
assigned VertexIDs when they are first needed, leading to a mostly
random order of VertexIDs across the sub-items of a branch.
Here, we pre-allocate all 16 vertex ids such that when a branch subitem
is filled, it already has a vertexid waiting for it. This brings several
important benefits:
* subitems are sorted and "close" in their id sequencing - this means
that when rocksdb stores them, they are likely to end up in the same
data block thus improving read efficiency
* because the ids are consecutive, we can store just the starting id and
a bitmap representing which subitems are in use (see the sketch after
this list) - this reduces disk space usage for branches, allowing more
of them to fit into a single disk read, further improving disk read and
caching performance - disk usage at block 18M is down from 84 to 78 GB!
* the in-memory footprint of VertexRef is reduced, allowing more
instances to fit into caches and less memory to be used overall.
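A compact sketch of the resulting branch layout referenced in the list above; field and type names are illustrative, not the actual aristo definitions.
```nim
type
  VertexID = distinct uint64

  CompactBranch = object
    startVid: VertexID  # first of the 16 consecutive, pre-allocated ids
    used: uint16        # bit i set => sub-item for nibble i exists

proc hasSub(b: CompactBranch, nibble: int): bool =
  # the bitmap replaces per-slot id storage
  (b.used and (1'u16 shl nibble)) != 0

proc subVid(b: CompactBranch, nibble: int): VertexID =
  ## Id of the sub-item at `nibble`; ids are consecutive, so only the
  ## starting id and the bitmap need to be stored.
  doAssert b.hasSub(nibble)
  VertexID(uint64(b.startVid) + uint64(nibble))
```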
Because of the increased locality of reference, it turns out that we no
longer need to iterate over the entire database to efficiently generate
the hash key database because the normal computation is now faster -
this significantly benefits "live" chain processing as well where each
dirtied key must be accompanied by a read of all branch subitems next to
it - most of the performance benefit in this branch comes from this
locality-of-reference improvement.
On a sample resync, there's already a ~20% improvement, with later
blocks seeing increasing benefit (because the trie is deeper in later
blocks, leading to more gains from the branch read improvements).
```
blocks: 18729664, baseline: 190h43m49s, contender: 153h59m0s
Time (total): -36h44m48s, -19.27%
```
Note: clients need to be resynced as the PR changes the on-disk format
R.I.P. little bloom filter - your life in the repo was short but
valuable
* Kludge: fix `eip4844` import in `validate`
why:
Importing `validate` needs `blscurve` here or with the importing module.
* Split the `FC` descriptor out into a separate file
why:
Needed for external descriptor access (e.g. for debugging)
* Debugging toolkit for `FC`
* Verify chain descriptor after changing state
The idea of the beacon-lc-bridge was to allow bridging data into
the Portal network while only using p2p protocols to get access
to the data.
It is however incomplete, as the receipts are missing for history
content. These could be added by also adding devp2p access.
But for the beacon content, there would be no way to get the
historical summaries over p2p.
And we have not even looked yet at how to do this for state.
Considering it is incomplete and was not being used by anyone,
we remove it.
There is an assertion hitting due to the addition of an iterator
that deletes items from the sequence while iterating over it.
Before, the keepIf helper was used, which has different code for
doing this similar work.
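A minimal sketch of the safe pattern with `std/sequtils` (toy data, not the actual sequence involved here): filter in a single pass rather than deleting elements while an iterator is walking the sequence.
```nim
import std/sequtils

var pending = @[1, 2, 3, 4, 5]

# Unsafe shape (roughly what triggered the assertion): deleting elements
# from `pending` inside `for item in pending: ...` invalidates the iteration.

# Safe shape: decide what to keep in one pass, then mutate once.
pending.keepItIf(it mod 2 == 0)
doAssert pending == @[2, 4]
```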