Commit Graph

4293 Commits

Advaita Saha 48aa410f8a
reduce blockHash call (#2954) 2024-12-19 01:13:12 +05:30
andri lim cd3cea0e29
Fix bn256ecPairing precompile bug (#2953)
Thanks to @holiman of goevmlab for his fuzzer.
Similar to the Blake2b precompile regression #2919.
On error, the precompile should not return any output.
2024-12-18 23:04:14 +07:00
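
A minimal sketch of the behaviour described in #2953, with invented names rather than the actual nimbus precompile code: when the pairing input is invalid, the error path hands back an empty output buffer instead of whatever was produced before the failure.

```nim
# Illustrative only: invented names, not the actual precompile implementation.
type PrecompileResult = tuple[ok: bool, output: seq[byte]]

proc runPairing(input: openArray[byte]): PrecompileResult =
  if input.len mod 192 != 0:
    # Malformed input: signal the error and return no output at all.
    return (ok: false, output: newSeq[byte](0))
  # Happy path: a 32-byte big-endian boolean result of the pairing check.
  var res = newSeq[byte](32)
  res[31] = 1
  result = (ok: true, output: res)

when isMainModule:
  doAssert runPairing(newSeq[byte](191)).output.len == 0
  doAssert runPairing(newSeq[byte](192)).output.len == 32
```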
Jacek Sieka d45d03ce0c
reduce tx naming overload (#2952)
* if it's a db function, use `txFrame...`
* if it's not a db function, don't use `txFrame...`
2024-12-18 23:03:51 +07:00
Jacek Sieka 7bbb0f4421
Stream blocks during import (#2937)
When running the import, currently blocks are loaded in batches into a
`seq` then passed to the importer as such.

In reality, blocks are still processed one by one, so the batching does
not offer any performance advantage. It does however require that the
client wastes memory, up to several GB, on the block sequence while
they're waiting to be processed.

This PR introduces a persister that accepts these potentially large
blocks one by one and at the same time removes a number of redundant /
unnecessary copies, assignments and resets that were slowing down the
import process in general.
2024-12-18 13:21:20 +01:00
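
A rough sketch of the shape of this change, using invented names rather than the importer's real API: the source yields blocks lazily and the persister consumes them one at a time, so no multi-GB staging `seq` is kept around.

```nim
# Illustrative only: names and types are invented, not the importer's API.
type Block = object
  number: uint64

iterator loadBlocks(first, last: uint64): Block =
  ## Stand-in for a lazy block source, e.g. reading an era1 file.
  for n in first .. last:
    yield Block(number: n)

proc persist(blk: Block) =
  ## Stand-in for writing the block to the database.
  discard blk.number

when isMainModule:
  # Only one block is resident at a time; no large staging sequence.
  for blk in loadBlocks(1, 1000):
    persist(blk)
```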
Jacek Sieka 06a544ac85
Remove `forkTx` and friends (#2951)
The forking facility has been replaced by ForkedChain - frames and
layers are two other mechanisms that mostly do the same thing at the
aristo level, without quite providing the functionality FC needs - this
cleanup will make that integration easier.
2024-12-18 11:56:46 +01:00
andri lim 45bc6422a0
Reduce getCanonicalHead usage, and delegate to ForkedChain (#2948)
The current getCanonicalHead of core db should not be confused with ForkedChain.latestHeader.
Therefore we need to use getCanonicalHead to restricted case only, e.g. initializing ForkedChain.
2024-12-18 11:04:23 +07:00
andri lim 806d9dd04a
Reuse taskpool in simulator to prevent OOM (#2945) 2024-12-17 18:57:16 +07:00
andri lim b8932d9519
Remove MergeForkBlock alias and use MergeNetSplitBlock only (#2947) 2024-12-17 11:42:13 +00:00
andri lim f8a6ed4f5f
Replace deprecated toBytes with toBytesBE in bn256ecPairing precompile (#2944) 2024-12-17 18:17:47 +07:00
andri lim f74813520a
Connect gasLimit from Config to CommonRef (#2946) 2024-12-17 10:48:31 +00:00
tersec 0b704040e3
increase default gas limit from 30M to 36M (#2941) 2024-12-16 23:32:29 +00:00
Jordan Hrycaj 0ce5234231
Beacon sync mitigate deadlock with bogus sticky peers (#2943)
* Metrics cosmetics

* Better naming for error threshold constants

* Treating header/body process error different from response errors

why:
  Error handling does not become active until several consecutive failures
  appear. As both types of errors may interleave (e.g. no-response
  errors), resetting the counter for one type might affect the other.

  Handled wrongly, a peer might repeatedly send a bogus block, locking
  the syncer in an endless loop.
2024-12-16 16:26:38 +00:00
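
A tiny sketch of the counter separation described above (the type, field names and threshold are invented): response errors and header/body processing errors keep independent consecutive-failure counters, so a prompt reply no longer hides a stream of bogus blocks.

```nim
# Illustrative only: names and the threshold value are invented.
type PeerErrorStats = object
  respErrors: int   # consecutive response/network failures
  procErrors: int   # consecutive header/body processing failures

const errorThreshold = 3

func shouldDropPeer(s: PeerErrorStats): bool =
  s.respErrors > errorThreshold or s.procErrors > errorThreshold

when isMainModule:
  var s: PeerErrorStats
  for _ in 0 .. errorThreshold:
    inc s.procErrors   # the peer keeps sending the same bogus block ...
    s.respErrors = 0   # ... while answering every request promptly
  doAssert s.shouldDropPeer()   # the processing counter still trips
```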
andri lim 650fec5a26
Wire ForkedChainRef to graphql and rpc_utils (#2936)
* fixes

* Wire ForkedChainRef to graphql and rpc_utils
2024-12-13 14:34:32 +07:00
andri lim a7ab984304
Remove persist block test (#2935) 2024-12-13 14:31:20 +07:00
andri lim a57958f71e
Remove totalTerminalDifficultyPassed (#2934) 2024-12-13 13:29:00 +07:00
andri lim 2e5ef4fb5a
Wire ForkedChainRef properly to TxPool (#2933) 2024-12-13 13:21:20 +07:00
andri lim 847cc311eb
Remove verifyFrom, vmState, and checkSeal from ChainRef (#2932) 2024-12-13 12:12:57 +07:00
Jacek Sieka 3d58393b4c
Offload signature checking to taskpools (#2927)
In block processing, depending on the complexity of a transaction and
hotness of caches etc, signature checking can actually make up the
majority of time needed to process a transaction (60% observed in some
randomly sampled block ranges).

Fortunately, this is a task that trivially can be offloaded to a task
pool similar to how nimbus-eth2 does it.

This PR introduces taskpools in the simplest way possible, by
performing signature checking concurrently with other TX processing,
assigning a taskpool task per TX effectively.

With this little trick, we're in gigagas land 🎉 on my laptop!

```
INF 2024-12-10 21:05:35.170+01:00 Imported blocks
blockNumber=3874817 b... mgps=1222.707 ...
```

Tests don't use the taskpool for now because it needs manual cleanup and
we don't have a good mechanism in place. Future PRs should address this
by creating a common shutdown sequence that also closes and cleans up
other resources like the DB.

Co-authored-by: andri lim <jangko128@gmail.com>
2024-12-13 11:53:41 +07:00
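
A minimal sketch of the pattern, using std/threadpool for illustration (the PR itself uses status-im/nim-taskpools, and the real check is ECDSA sender recovery): one task is spawned per transaction, and the results are only synced after the rest of the per-transaction work has been started.

```nim
# Illustrative only: std/threadpool and a dummy signature check, not the
# nimbus taskpool integration. May need --threads:on on older Nim.
import std/threadpool

type Transaction = object
  nonce: uint64

proc checkSignature(tx: Transaction): bool =
  ## Stand-in for the expensive sender-recovery / signature check.
  true

proc processBlock(txs: seq[Transaction]): bool =
  var pending: seq[FlowVar[bool]]
  for tx in txs:
    pending.add(spawn checkSignature(tx))   # one task per transaction
    # ... execute the transaction against the EVM here, concurrently ...
  for fv in pending:
    if not (^fv):                           # wait for the check result
      return false
  true

when isMainModule:
  doAssert processBlock(@[Transaction(nonce: 0), Transaction(nonce: 1)])
```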
andri lim 1d5a48e153
Feature: configurable gas limit when building execution payload (#2931)
* Feature: configurable gas limit when building execution payload

* Raise default gas limit to 30M
2024-12-13 10:47:35 +07:00
Jacek Sieka a12a73c41a
bncurve: bump (#2928)
Gives a nice 15% perf bump on bn precompiles!
2024-12-13 10:47:21 +07:00
Jordan Hrycaj cbc5ec9385
Beacon sync logging updates (#2930)
* Cosmetics, add some metrics updates to smoothen curves

why:
  Progress downloading blocks was just a jump from none to full

* Reclassifying some syncer gossip from TRC to DBG

why:
  Might help debugging without full trace logs
2024-12-12 17:35:10 +00:00
andri lim 674e65f359
Move EVM code initialization outside of newComputation (#2926)
* Move EVM code initialization outside of newComputation

* Tidying up call_common.setupHost
2024-12-11 14:56:41 +01:00
Jacek Sieka 7b88bb3b30
Add branch cache (#2923)
Now that branches are small, we can add a branch cache that fits more
vertices in memory by only storing the branch portion (16 bytes) of the
VertexRef (136 bytes).

Where the original vertex cache hovers around a hit rate of ~60%, this
branch cache instead reaches a >90% hit rate around block 20M, which
gives a nice boost to processing.

A downside of this approach is that a new VertexRef must be allocated
for every cache hit instead of reusing an existing instance - this
causes some GC overhead that needs to be addressed.

Nice 15% improvement nonetheless, can't complain!

```
blocks: 19630784, baseline: 161h18m38s, contender: 136h23m23s
Time (total): -24h55m14s, -15.45%
```
2024-12-11 11:53:26 +01:00
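
A rough sketch of the concept with made-up field names and sizes (the real types live in aristo): cache only the compact branch portion keyed by VertexID and rebuild a full VertexRef on every hit, paying a fresh allocation per hit in exchange for many more cached entries.

```nim
# Illustrative only: field names and layout are invented, not aristo's.
import std/tables

type
  VertexID = uint64
  BranchBits = object      # the small cached portion (~16 bytes)
    startVid: VertexID
    used: uint16
  VertexRef = ref object   # the full vertex handed out to callers
    startVid: VertexID
    used: uint16
    # ... plus the other fields of the real vertex type ...

var branchCache = initTable[VertexID, BranchBits]()

proc loadFromDb(vid: VertexID): VertexRef =
  ## Stand-in for the rocksdb read on a cache miss.
  VertexRef(startVid: vid, used: 0xffff'u16)

proc getVertex(vid: VertexID): VertexRef =
  if vid in branchCache:
    # Cache hit: a new VertexRef is allocated from the compact entry.
    let bits = branchCache[vid]
    return VertexRef(startVid: bits.startVid, used: bits.used)
  # Cache miss: read from the database and remember the compact portion.
  result = loadFromDb(vid)
  branchCache[vid] = BranchBits(startVid: result.startVid, used: result.used)
```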
haurog 29decdf265
Enable compiling on RISC-V CPU (#2925) 2024-12-11 09:32:15 +00:00
bhartnett ac59b183fb
Bump nim-rocksdb to latest version. Updates rocksdb to v9.7.2. (#2922) 2024-12-11 10:21:29 +01:00
tersec 37dee1d92c
rm macOS amd64 builds from CI/releases (#2921) 2024-12-11 06:08:31 +00:00
andri lim a38f8f6f68
evmstate tool: disable chronicles output (#2924)
* evmstate tool: disable chronicles output

* Fix copyright year
2024-12-11 12:05:49 +07:00
Advaita Saha 73a683b641
Faster local testing with `Dockerfile.debug` (#2869)
* add the debug dockerfile

* script fixes

* speedup debugging

* macOS compatibility
2024-12-10 11:03:36 +05:30
andri lim 57157a6f76
Fix Blake2b precompile regression (#2919)
Introduced by #2865
Detected on Holesky block 2.406.802 tx no 11
And on MainNet block 19.633.393
2024-12-09 20:52:34 +01:00
bhartnett c32726671f
Fluffy State Bridge: Support running without an EL for block ranges where we have the state diffs in the database (#2920) 2024-12-09 21:57:41 +08:00
Jacek Sieka a056a722eb
Sort subkey lookups by VertexID when computing keys (#2918)
Since data is ordered by VertexID on disk, with this simple trick we can
make much better use of the various rocksdb caches.

Computing the state root of the full mainnet state is down to 4 hours
(from 9) on my laptop.
2024-12-09 08:16:02 +01:00
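
A small sketch of the trick (the lookup function is a stand-in): sub-vertex ids are sorted ascending before the reads, so consecutive fetches land in the same or neighbouring rocksdb blocks.

```nim
# Illustrative only: fetchVertex is a stand-in for the database read.
import std/algorithm

type VertexID = uint64

proc fetchVertex(vid: VertexID): string =
  "vertex " & $vid

proc fetchSubVertices(subVids: openArray[VertexID]): seq[string] =
  # Visit the ids in ascending order, matching the on-disk ordering.
  for vid in subVids.sorted():
    result.add fetchVertex(vid)

when isMainModule:
  echo fetchSubVertices([VertexID 7, VertexID 3, VertexID 5])
```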
Jacek Sieka 66ad5497d9
Unroll nibble ops (#2894)
A bit unexpectedly, nibble handling shows up in the profiler mainly
because the current impl is tuned towards slicing while the most common
operation is prefix comparison - since the code is simple, we might as
well get rid of some of the excess fat by always aligning the nibbles to
the byte buffer.
2024-12-09 08:15:04 +01:00
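
A sketch of what "aligned to the byte buffer" buys (types invented, not the aristo nibble type): with nibble 0 always starting at byte 0, the shared-prefix length can be computed by plain indexing, with no slice bookkeeping.

```nim
# Illustrative only: just the aligned-layout idea.
type Nibbles = object
  bytes: seq[byte]   # two nibbles per byte, nibble 0 in the high half of byte 0
  len: int           # number of nibbles in use

proc nibble(n: Nibbles, i: int): byte =
  let b = n.bytes[i shr 1]
  result = if (i and 1) == 0: (b shr 4) and 0xf else: b and 0xf

proc sharedPrefixLen(a, b: Nibbles): int =
  ## The common operation: how many leading nibbles do a and b share?
  let maxLen = min(a.len, b.len)
  while result < maxLen and a.nibble(result) == b.nibble(result):
    inc result

when isMainModule:
  let
    x = Nibbles(bytes: @[0x12'u8, 0x34], len: 4)   # nibbles 1 2 3 4
    y = Nibbles(bytes: @[0x12'u8, 0x3f], len: 4)   # nibbles 1 2 3 f
  doAssert sharedPrefixLen(x, y) == 3
```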
bhartnett 17da64628a
Fluffy: Fix broken portal hive tests (#2917)
Fix bug in portal stream where connection id was not correctly generated when handling requests from peers.
2024-12-06 23:53:15 +08:00
Jordan Hrycaj 9a9d391217
Fix race condition on syncer termination (#2916)
* Cosmetics

* Must not async wait inside termination `for` loop

why:
  Async-waiting inside a `for` loop will switch to the termination process,
  which will uncontrollably modify the data the loop is iterating over.

* Avoid `waitFor` in scheduler termination code

why:
  Is reserved for main loop
2024-12-06 12:11:40 +00:00
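
A sketch of the pitfall and the fix using std/asyncdispatch and an invented worker table (the syncer itself uses chronos): snapshot the items first, then await, instead of awaiting while still iterating the table that the shutdown code mutates.

```nim
# Illustrative only: not the actual scheduler code.
import std/[asyncdispatch, tables, sequtils]

var workers = initTable[int, string]()

proc stopWorker(id: int) {.async.} =
  await sleepAsync(1)
  workers.del(id)        # the handler mutates the very table being torn down

proc terminate() {.async.} =
  # Wrong shape: `for id in workers.keys: await stopWorker(id)` awaits while
  # iterating, and stopWorker's deletion invalidates that iteration.
  let ids = toSeq(workers.keys)   # snapshot first ...
  for id in ids:
    await stopWorker(id)          # ... then awaiting is safe

when isMainModule:
  workers[1] = "peer-a"
  workers[2] = "peer-b"
  waitFor terminate()             # waitFor stays in the main loop only
  doAssert workers.len == 0
```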
Siddarth Kumar 72d08030d9
fix: check for mismatching ranges in benchmark csv (#2914) 2024-12-06 13:01:33 +01:00
Jacek Sieka 667897557a
Interpreter dispatch cleanups (#2913)
* `shouldPrepareTracer` always true
* simple `pop` should not copy value (reading the memory shows up in a
profiler)
* continuation code simplified
* remove some unnecessary EH
2024-12-06 13:01:15 +01:00
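
A sketch of the `pop` point (invented stack type, not the nimbus EVM stack): for opcodes that ignore the popped value there is no need to copy the 32-byte word out of stack memory; just shrink the stack.

```nim
# Illustrative only: invented stack type.
type
  Word = array[32, byte]
  EvmStack = object
    items: seq[Word]

proc popValue(s: var EvmStack): Word =
  ## Copying pop: the 32-byte word is read (copied) out of the stack.
  result = s.items[^1]
  s.items.setLen(s.items.len - 1)

proc popDiscard(s: var EvmStack) =
  ## Non-copying pop for e.g. the POP opcode: the value is never read.
  s.items.setLen(s.items.len - 1)

when isMainModule:
  var s = EvmStack(items: @[default(Word), default(Word)])
  s.popDiscard()
  doAssert s.items.len == 1
```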
andri lim 9b7e2960c2
Add Holesky deposit contract address (#2915) 2024-12-06 10:06:52 +00:00
Jordan Hrycaj 90dd86be9a
Fc module can update base also when on parent arc (#2911)
* Re-org internal descriptor `CanonicalDesc` as `PivotArc`

why:
  Despite its name, `CanonicalDesc` contained a cursor arc (or leg) from
  the base tree with a designated block (or Header) on its arc members
  (aka blocks). The type is used more generally than only for a block on
  the canonical cursor.

  Also, the `PivotArc` provides some more fields for caching intermediate
  data. This simplifies managing extra arguments for some functions.

* Remove cruft

details:
  No need to find cursor arc if it is given as function argument.

* Rename prototype variables `head: PivotArc` to `pvarc`

why:
  Better reading

* Function and code massage, adjust names

details:
  Avoid the syllable `canonical` in function names that do not strictly
  apply to the canonical chain. So renaming
  * findCanonicalHead() => findCursorArc()
  * canonicalChain() => findHeader()
  * trimCanonicalChain() => trimCursorArc()

* Combine `updateBase()` function-args into single `PivotArgs` object

why:
  Will generalise action for more complex scenarios in future.

* update `calculateNewBase()` return code type => `PivotArc`

why:
  So it can directly be used as argument into `updateBase()`

* Update `calculateNewBase()` for target on parent arc

* Update unit tests
2024-12-05 13:01:57 +07:00
andri lim dc81863c3a
Fixes for Mekong testnet: EIP-7702 gas related (#2912) 2024-12-05 13:00:47 +07:00
Jacek Sieka 4c37682ef1
bumps (#2906) 2024-12-04 16:59:10 +01:00
Jordan Hrycaj 1d70ba5ff0
Fix log warnings (`==` should have been `!=`) (#2907) 2024-12-04 14:36:15 +00:00
andri lim 1101895f92
Move rlp block import into its own subcommand (#2904)
* Move rlp block import into its own subcommand

* Fix test_configuration
2024-12-04 20:36:07 +07:00
Kim De Mey 56caa5f62f
Add radius sort in trace lookup and logging improvements (#2905)
The radius sort performance improvement in content lookups was
not implemented in the trace version.

Also clean up some parts of the logging related to uTP connection
setup.
2024-12-04 13:59:08 +01:00
Jacek Sieka 8cb3619141
stint: bump for endians (#2903)
* stint: bump for endians

* stint fix
2024-12-04 12:03:31 +01:00
Jacek Sieka f034af422a
Pre-allocate vids for branches (#2882)
Each branch node may have up to 16 sub-items - currently, these are
given a VertexID when they are first needed, leading to a
mostly-random order of vertex ids across the subitems.

Here, we pre-allocate all 16 vertex ids such that when a branch subitem
is filled, it already has a vertexid waiting for it. This brings several
important benefits:

* subitems are sorted and "close" in their id sequencing - this means
that when rocksdb stores them, they are likely to end up in the same
data block thus improving read efficiency
* because the ids are consecutive, we can store just the starting id and
a bitmap representing which subitems are in use - this reduces disk
space usage for branches allowing more of them fit into a single disk
read, further improving disk read and caching performance - disk usage
at block 18M is down from 84 to 78gb!
* the in-memory footprint of VertexRef is reduced, allowing more instances
to fit into caches and less memory to be used overall.

Because of the increased locality of reference, it turns out that we no
longer need to iterate over the entire database to efficiently generate
the hash key database because the normal computation is now faster -
this significantly benefits "live" chain processing as well where each
dirtied key must be accompanied by a read of all branch subitems next to
it - most of the performance benefit in this branch comes from this
locality-of-reference improvement.

On a sample resync, there's already ~20% improvement with later blocks
seeing increasing benefit (because the trie is deeper in later blocks
leading to more benefit from branch read perf improvements)

```
blocks: 18729664, baseline: 190h43m49s, contender: 153h59m0s
Time (total): -36h44m48s, -19.27%
```

Note: clients need to be resynced as the PR changes the on-disk format

R.I.P. little bloom filter - your life in the repo was short but
valuable
2024-12-04 11:42:04 +01:00
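
A compact sketch of the storage trick described above (field names invented, not the on-disk aristo encoding): with 16 consecutive ids reserved per branch, only the first id plus a 16-bit bitmap of used slots needs to be stored.

```nim
# Illustrative only: invented names and layout.
type
  VertexID = uint64
  CompactBranch = object
    startVid: VertexID   # first of the 16 pre-allocated, consecutive ids
    used: uint16         # bit i set => sub-item i exists

proc subVid(b: CompactBranch, nibble: int): VertexID =
  ## Vertex id of sub-item `nibble`, or 0 if that slot is unused.
  if (b.used and (1'u16 shl nibble)) != 0:
    result = b.startVid + VertexID(nibble)

when isMainModule:
  let b = CompactBranch(startVid: 100, used: 0b0000_0000_0000_0101'u16)
  doAssert b.subVid(0) == 100   # slot 0 in use
  doAssert b.subVid(1) == 0     # slot 1 unused
  doAssert b.subVid(2) == 102   # slot 2 in use
```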
andri lim 5a3bfe486f
Add blockByNumber to engine_client of simulator (#2902) 2024-12-04 10:57:21 +07:00
bhartnett 359eb6d974
Fluffy: Portal stream improvements and pending transfers prune fix (#2900)
* Fix defect in for loop when deleting element.

* Prune offers and requests before accepting.

* Store content requests and offers by connectionId.
2024-12-04 08:34:13 +08:00
Jordan Hrycaj 9da3f29dff
Add desc validator to fc unit tests (#2899)
* Kludge: fix `eip4844` import in `validate`

why:
  Importing `validate` needs `blscurve` here or with the importing module.

* Separate out `FC` descriptor into a separate file

why:
  Needed for external descriptor access (e.g. for debugging)

* Debugging toolkit for `FC`

* Verify chain descriptor after changing state
2024-12-02 17:49:53 +00:00
Kim De Mey 3bf0920a16
Remove Portal beacon-lc-bridge (#2897)
The idea of the beacon-lc-bridge was to allow bridging data into
the Portal network while only using p2p protocols to get access
to the data.

It is however incomplete as for history content the receipts are
missing. These could be added by also adding devp2p access.
But for the beacon content, there would be no way for getting the
historical summaries over p2p.

And then we did not even look yet on how to do this for state.

Considering it is incomplete it was also not being used by anyone
and thus we remove it.
2024-12-02 17:30:17 +01:00
Kim De Mey 0f18de61dc
Revert commit 6142183 and partial of b446d2a (#2898)
An assertion is being hit due to the addition of an iterator
that deletes items from the sequence while iterating over it.
Before, the keepIf helper was used, which does this work with
different code.
2024-12-02 14:09:58 +01:00
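
A sketch of the pitfall behind the revert (dummy data standing in for the pruned items): deleting from a seq while iterating over it invalidates the iteration, whereas a keepIf-style filter, as the earlier code used, rewrites the sequence in a single safe pass.

```nim
# Illustrative only: dummy data, not the reverted Portal code.
import std/sequtils

var pending = @[1, 2, 3, 4, 5]

# Unsafe shape (roughly what the reverted iterator did):
#   for i, x in pending:
#     if isExpired(x): pending.delete(i)   # mutates while iterating
#
# Safe shape, as with the previous keepIf-style helper:
pending.keepItIf(it mod 2 == 0)            # keep only the wanted entries
doAssert pending == @[2, 4]
```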