It is common for many accounts to share the same code. At the database
level, code is stored by hash, meaning only one copy exists per unique
program, but when loaded into memory, a copy is made for each account.
Further, every time we execute the code, it must be scanned for invalid
jump destinations, which slows down EVM execution.
Finally, the EXTCODESIZE call causes the code to be loaded even if only
the size is needed.
This PR improves on all these points by introducing a shared
`CodeBytesRef` type whose code section is immutable and can be shared
between accounts. Further, a dedicated `len` API call is added so that
the EXTCODESIZE opcode can operate without polluting the GC and the code
cache when only the size is requested. In this case, rocksdb caches the
code itself in the row cache, meaning that looking up the code itself
remains fast when the length is asked for first.
With 16k code entries, there is a 90% hit rate, which goes up to 99%
during the 2.3M attack. The cache significantly lowers memory
consumption and execution time not only during this event but across
the board.
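As a rough sketch of the idea (the names and shapes below are simplified and
do not match the actual implementation), the shared container stores the
immutable bytecode once per code hash, computes the jump-destination table a
single time, and answers length queries without any extra loading:

```nim
# Simplified sketch only - not the actual CodeBytesRef implementation.
import std/[sets, tables]

type
  CodeBytesRef = ref object
    bytes: seq[byte]              # immutable after construction
    jumpDests: HashSet[int]       # valid JUMPDEST offsets, computed once

proc init(T: type CodeBytesRef, code: openArray[byte]): T =
  result = CodeBytesRef(bytes: @code)
  var i = 0
  while i < result.bytes.len:
    let op = result.bytes[i]
    if op == 0x5b:                        # JUMPDEST
      result.jumpDests.incl i
    if op >= 0x60 and op <= 0x7f:         # PUSH1..PUSH32 carry immediate data
      i += int(op) - 0x5f                 # skip over the pushed bytes
    i += 1

func len(c: CodeBytesRef): int =
  ## Answer size queries without forcing any additional work.
  c.bytes.len

var codeCache: Table[string, CodeBytesRef]  # keyed by code hash in practice

proc sharedCode(codeHash: string, load: proc(): seq[byte]): CodeBytesRef =
  ## One shared, immutable instance per unique code hash.
  if codeHash notin codeCache:
    codeCache[codeHash] = CodeBytesRef.init(load())
  result = codeCache[codeHash]
```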
For the block cache to be shared between column families, the options
instance must be shared between the various column families being
created. This also ensures that there is only one source of truth for
configuration options instead of having two different sets depending on
how the tables were initialized.
This PR also removes the re-opening mechanism, which could double startup
time: every time the database is opened, the log is replayed, and a large
log file takes a long time to replay.
Finally, several options are now correctly implemented as column family
options, including one that puts a hash index in the SST files.
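Conceptually, the option sharing looks like the sketch below (the types and
procs are stand-ins rather than the nim-rocksdb API): a single options object
holding a single block cache is created up front and handed to every column
family, so cache sizing and table settings have exactly one source of truth.

```nim
# Stand-in types only - this is not the nim-rocksdb API.
type
  BlockCache = ref object
    sizeBytes: int
  TableOptions = ref object
    cache: BlockCache        # shared block cache across all column families
    hashIndex: bool          # hash index inside the SST files
  ColFamily = object
    name: string
    opts: TableOptions       # every CF references the same options instance

proc openColumnFamilies(names: openArray[string],
                        opts: TableOptions): seq[ColFamily] =
  for name in names:
    result.add(ColFamily(name: name, opts: opts))  # no per-CF option copies

let sharedOpts = TableOptions(
  cache: BlockCache(sizeBytes: 1024 * 1024 * 1024),
  hashIndex: true)
let cfs = openColumnFamilies(["default", "vertices", "keys"], sharedOpts)
doAssert cfs[0].opts == cfs[1].opts   # all CFs share the same options object
```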
* Bump nim-eth, nim-web3, nimbus-eth2
- Replace std.Option with results.Opt
- Field name changes
* More fixes
* Fix Portal stream async raises and portal testnet Opt usage
* Bump eth + nimbus-eth2 + more fixes related to eth_types changes
* Fix in utp test app and nimbus-eth2 bump
* Fix test_blockchain_json rebase conflict
* Fix EVMC block_timestamp conversion plus commentary
---------
Co-authored-by: kdeme <kim.demey@gmail.com>
* Bump rocksdb
* Rename `KVT` objects related to filters according to `Aristo` naming
details:
filter* => delta*
roFilter => balancer
* Compulsory error handling if `persistent()` fails
* Add return code to `reCentre()`
why:
Might eventually fail if re-centring is blocked. Some logic will be
added in subsequent patch sets.
* Add column families from earlier session to rocksdb in opening procedure
why:
All previously used CFs must be declared when re-opening an existing
database.
* Update `init()` and add rocksdb `reinit()` methods for changing parameters
why:
Opening a set of column families (with different open options) must span
at least the ones that are already on disk.
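The rule can be sketched with a hypothetical helper (this is not the actual
nim-rocksdb call): whatever set of column families the caller asks for is
widened to include everything already present on disk before the database is
opened.

```nim
# Hypothetical helpers - they illustrate the rule, not the rocksdb binding.
proc columnFamiliesOnDisk(path: string): seq[string] =
  ## Stand-in for RocksDB's "list column families" query.
  @["default", "vertices", "keys"]

proc familiesToOpen(path: string, requested: seq[string]): seq[string] =
  ## The set passed to the open call must cover every CF already on disk.
  result = requested
  for cf in columnFamiliesOnDisk(path):
    if cf notin result:
      result.add cf
```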
* Provide write-trigger-event interface into `Aristo` backend
why:
This allows data from a guest application (think `KVT`) to be synced
with the write cycle so that the guest and `Aristo` save everything
atomically.
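A minimal sketch of what such a trigger interface can look like (names are
illustrative, not the actual `Aristo` API): guests register a callback that
is invoked while the backend assembles its write batch, so guest data lands
in the same atomic commit.

```nim
# Illustrative sketch - not the actual Aristo backend interface.
type
  WriteBatch = object
    entries: seq[(seq[byte], seq[byte])]   # key/value pairs for one commit
  OnWriteHook = proc(batch: var WriteBatch) {.gcsafe.}
  Backend = object
    guestHooks: seq[OnWriteHook]           # guests such as KVT register here

proc registerWriteTrigger(be: var Backend, hook: OnWriteHook) =
  be.guestHooks.add hook

proc persist(be: var Backend) =
  var batch: WriteBatch
  # ... collect the backend's own changes into `batch` ...
  for hook in be.guestHooks:
    hook(batch)                            # guest data joins the same batch
  # ... submit `batch` to the database as one atomic write ...
```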
* Use `KVT` with new column family interface from `Aristo`
* Remove obsolete guest interface
* Implement `KVT` piggyback on `Aristo` backend
* CoreDb: Add separate `KVT`/`Aristo` backend mode for debugging
* Remove `rocks_db` import from `persist()` function
why:
Some systems (in particular `fluffy` and friends) use the `Aristo` memory
backend emulation and do not link against rocksdb when building the
application, so this should fix that problem.
These options, inspired by Nethermind and general internet wisdom, bring
the database size down to 2/3 without affecting throughput. In theory,
they should also bring down memory usage and/or make more efficient use
of whatever memory is already assigned to rocksdb, but this needs
verification in a longer test at synced-mainnet sizes.
In the meantime, they make testing easier by removing some of the noise
the profiler complains about, such as excessive SkipList access
(countered by bloom filters).
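For illustration only, the flavour of tuning involved can be written down as
a plain settings object; the names below mirror common RocksDB knobs in
spirit and are not the exact option set applied in this change.

```nim
# Illustrative settings only - not the exact option set applied here.
type
  DbTuning = object
    bloomFilterBitsPerKey: float  # cut skip-list walks via bloom filter checks
    compressionUpper: string      # cheap compression near the top levels
    compressionBottommost: string # heavier compression for cold data
    optimizeFiltersForHits: bool  # skip filters on the last level to save RAM

let tuning = DbTuning(
  bloomFilterBitsPerKey: 10.0,
  compressionUpper: "lz4",
  compressionBottommost: "zstd",
  optimizeFiltersForHits: true)
```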
This PR consolidates the split header-body sequences into a single EthBlock
sequence and cleans up the fallout from that, which significantly reduces
block processing overhead during import thanks to less garbage collection
and fewer copies of things all around.
Notably, since the number of headers must always match the number of bodies,
we also get rid of a pointless degree of freedom that in the future could
introduce unnecessary bugs.
* only read header and body from era file
* avoid several unnecessary copies along the block processing way
* simplify signatures, cleaning up unused arguments and returns
* use `stew/assign2` in a few strategic places where the generated
nim assignment is slow and add a few `move` to work around poor
analysis in nim 1.6 (will need to be revisited for 2.0) - see the
sketch below
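The pattern referred to in the last bullet looks roughly like this (assuming
the `assign` helper exported by `stew/assign2`; the surrounding types are
illustrative stand-ins):

```nim
# Rough illustration, assuming stew/assign2's `assign` helper.
import stew/assign2

type
  BlockBody = object
    transactions: seq[seq[byte]]
  EthBlock = object
    body: BlockBody

proc copySlow(dst: var EthBlock, src: EthBlock) =
  dst = src          # generated assignment: deep-copies field by field

proc copyFast(dst: var EthBlock, src: EthBlock) =
  assign(dst, src)   # stew/assign2 specialisation, typically cheaper

proc takeOwnership(dst: var EthBlock, src: var EthBlock) =
  dst = move(src)    # avoid the copy entirely when `src` is no longer needed
```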
```
stats-20240607_2223-a814aa0b.csv vs stats-20240608_0714-21c1d0a9.csv
block_number           bps_x     bps_y     tps_x     tps_y    bpsd    tpsd    timed
(498305, 713245]    1,540.52  1,809.73  2,361.58  2,775.34  17.63%  17.63%  -14.92%
(713245, 928185]      730.36    865.26  1,715.90  2,028.97  18.01%  18.01%  -15.21%
(928185, 1143126]     663.03    789.10  2,529.26  3,032.49  19.79%  19.79%  -16.28%
(1143126, 1358066]    393.46    508.05  2,152.50  2,777.58  29.13%  29.13%  -22.50%
(1358066, 1573007]    370.88    440.72  2,351.31  2,791.90  18.81%  18.81%  -15.80%
(1573007, 1787947]    283.65    335.11  2,068.93  2,441.37  17.60%  17.60%  -14.91%
(1787947, 2002888]    287.29    342.11  2,078.39  2,474.18  18.99%  18.99%  -15.91%
(2002888, 2217828]    293.38    343.16  2,208.83  2,584.77  17.16%  17.16%  -14.61%
(2217828, 2432769]    140.09    167.86  1,081.87  1,296.34  18.82%  18.82%  -15.80%
blocks: 1934464, baseline: 3h13m1s, contender: 2h43m47s
bpsd (mean): 19.55%
tpsd (mean): 19.55%
Time (total): -29m13s, -15.14%
```
The `rocksdb` version shipped with distributions is typically old and
therefore often lacks features we use - it also doesn't match the version
assumed by nim-rocksdb, leading to ABI mismatch risks.
Instead of depending on the system rocksdb, we'll now use the rocksdb
version assumed by nim-rocksdb and locked in its vendor folder by always
building it together with nimbus.
This avoids the problem of unknown rocksdb versions at a (small) cost to
build time.
CI caching and full windows support for building from source [remain
TODO](https://github.com/status-im/nim-rocksdb/issues/44).
* Implement engine_getClientVersionV1
* full git revision string
* Limit GitRevisionString to 8 chars
* Fixes
* Debug windows CI
* debug windows ci
* produce git revision using -C
* try not to delete .git folder in windows ci
* Harden GitRevision procuration
* Add double quotes to git -C param
* Escape sourcePath
* Remove double quotes from git -C param
* Introduce wrapper type for EIP-4844 transactions
EIP-4844 blob sidecars are a concept that only exists in the mempool.
After inclusion of a transaction into an execution block, only the
versioned hash within the transaction remains. To improve type safety,
replace the `Transaction.networkPayload` member with a wrapper type
`PooledTransaction` that is used in contexts where blob sidecars exist.
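The shape of the wrapper is roughly as follows (field names are illustrative,
not the exact nim-eth definitions): blob data travels next to the transaction
while it sits in the mempool, and only the versioned hashes inside the
transaction itself survive inclusion in a block.

```nim
# Illustrative shape only - not the exact nim-eth type definitions.
type
  VersionedHash = array[32, byte]
  BlobSidecar = object
    blob: seq[byte]                       # mempool-only payload
    commitment: array[48, byte]           # KZG commitment
    proof: array[48, byte]                # KZG proof
  Transaction = object
    versionedHashes: seq[VersionedHash]   # the only blob reference that
                                          # remains after block inclusion
  PooledTransaction = object
    tx: Transaction
    sidecars: seq[BlobSidecar]            # present only in mempool contexts
```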
* Bump nimbus-eth2 to 87605d08a7f9cfc3b223bd32143e93a6cdf351ac
* IPv6 'listen-address' in `nimbus_verified_proxy`
* Bump nim-libp2p to 21cbe3a91a70811522554e89e6a791172cebfef2
* Fix beacon_lc_bridge payload conversion and conf.listenAddress type
* Change nimbus_verified_proxy.asExecutionData param to SomeExecutionPayload
* Rerun nph to fix asExecutionData style format
* nimbus_verified_proxy listenAddress
* Use PooledTransaction in nimbus-eth1 tests
---------
Co-authored-by: jangko <jangko128@gmail.com>
* Aristo+Kvt: Better RocksDB profiling
why:
Providing more detailed information, mainly for `Aristo`
* Aristo: Renamed journal `stats()` to `capacity()`
why:
`Stats()` was a misnomer
* Aristo: Provide backend read caches for key and vertex IDs
why:
Dedicated LRU caching for particular types gives a throughput advantage.
The sizes of the LRU queues used for caching are currently constant
but might be adjusted at a later time.
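For reference, the caching idea boils down to a small read-through LRU per
record type, roughly along these lines (a simplified stand-in, not the actual
`Aristo` cache):

```nim
# Simplified stand-in for a per-type LRU read cache - not the Aristo code.
import std/[options, tables]

type
  LruCache[K, V] = object
    capacity: int
    data: OrderedTable[K, V]    # insertion order doubles as recency order

proc get[K, V](c: var LruCache[K, V], key: K): Option[V] =
  if key in c.data:
    let val = c.data[key]
    c.data.del key              # re-insert to mark as most recently used
    c.data[key] = val
    result = some(val)

proc put[K, V](c: var LruCache[K, V], key: K, val: V) =
  if key notin c.data and c.data.len >= c.capacity:
    var oldest: K
    for k in c.data.keys:       # first key is the least recently used
      oldest = k
      break
    c.data.del oldest
  c.data[key] = val
```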
* Fix copyright year
* Aristo+RocksDB: Update backend drivers
why:
The RocksDB update allows using some of the newly provided methods which
were previously implemented via the raw C backend (for lack of Nim
methods).
* Aristo+RocksDB: Simplify drivers wrapper
* Kvt: Update backend drivers and wrappers similar to `Aristo`
* Aristo+Kvt: Use column families for RocksDB
* Aristo+MemoryDB: Code cosmetics
* Aristo: Provide guest column family for export
why:
So `Kvt` can piggyback on `Aristo`, thereby avoiding running a second
DBMS system in parallel.
* Kvt: Provide import mechanism for RocksDB guest column family
why:
So `Kvt` can piggyback on `Aristo`, thereby avoiding running a second
DBMS system in parallel.
* CoreDb+Aristo: Run persistent `Kvt` DB piggybacked on `Aristo`
why:
Avoids running two DBMS systems in parallel.
* Fix copyright year
* Ditto