* Provide dedicated functions for deleting accounts and storage trees
why:
Storage trees are always linked to an account, so there is no need
for an application to fiddle about with (e.g. re-cycling, unlinking)
storage tree vertex IDs.
* Remove `delete()` and other cruft from API, `aristo_delete`, etc.
* clean up delete functions
details:
The delete implementations `deleteImpl()` and `delTreeImpl()` do not
need to be super generic anymore as all the edge cases are covered by
the specialised `deleteAccountPayload()`, `deleteGenericData()`, etc.
* Avoid unnecessary re-calculations of account keys
why:
The function `registerAccountForUpdate()` extracted the storage ID
(if any) and automatically marked the Merkle keys along the account
path for re-hashing.
This happened even when it was later detected that the account or
the storage tree did not need to be updated.
So the `registerAccountForUpdate()` function was split into one part
that retrieves the storage ID, and another that marks the Merkle
keys for re-calculation, applied only when actually needed.
* Remove unused `merge*()` functions (for production)
details:
Some functionality moved to test suite
* Make sure that only the `AccountData` leaf type is used on VertexID(1)
* clean up payload type
* Provide dedicated functions for merging accounts and storage trees
why:
Storage trees are always linked to an account, so there is no need
for an application to fiddle about with (e.g. creating, re-cycling)
storage tree vertex IDs.
* CoreDb: Disable tracer functionality
why:
Must be updated to accommodate new/changed `Aristo` functions.
* CoreDb: Use new `mergeXXX()` functions
why:
Makes explicit vertex ID management obsolete for creating new
storage trees.
* Remove `mergePayload()` and other cruft from API, `aristo_merge`, etc.
* clean up merge functions
details:
The merge implementation `mergePayloadImpl()` does not need to be super
generic anymore as all the edge cases are covered by the specialised
functions `mergeAccountPayload()`, `mergeGenericData()`, and
`mergeStorageData()`.
* No tracer available at the moment, so disable offending tests
The state root computation here is one of the major hotspots in block
processing - in the cases where the code only needs to know whether the
state is empty, this can be done a lot faster.
Adding a separate function for this looks fragile and should probably be
revisited.
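As a toy illustration of the difference (made-up types only, not the actual
Aristo code): answering "is it empty?" can be a length check, while a root
computation has to touch every entry.

```nim
import std/[hashes, tables]

type ToyTrie = Table[string, string]   # stand-in for a state/storage trie

proc computeRoot(t: ToyTrie): Hash =
  # expensive path: visits and hashes every entry (stands in for Merkle hashing)
  var h: Hash = 0
  for k, v in t:
    h = h !& hash(k) !& hash(v)
  !$h

proc isEmpty(t: ToyTrie): bool =
  # cheap path: answers the emptiness question without hashing anything
  t.len == 0

var state: ToyTrie
doAssert state.isEmpty()        # O(1)
discard computeRoot(state)      # the slow way to learn the same thing
state["addr"] = "balance"
doAssert not state.isEmpty()
```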
Broadly, when importing blocks we don't need a transaction / frame per
block because we can simply abort the whole update and try again with a
smaller range if we find a faulty block.
Of course, this applies mainly to semi-trusted blocks that we are not
expected to fail to apply - these could be blocks from files or
header-verified blocks as given by consensus.
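A rough sketch of that import flow with toy types (none of this is the actual
ChainRef/CoreDb API): the whole chunk is applied in one go, and a faulty block
causes the update to be dropped and retried with a smaller range.

```nim
type
  ToyBlock = object
    number: int
    valid: bool                    # stands in for "passes execution/validation"
  ToyDb = object
    imported: seq[int]

proc importChunk(db: var ToyDb, chunk: openArray[ToyBlock]): bool =
  ## All-or-nothing: apply the whole chunk or leave the database untouched.
  var work = db                    # one "transaction" for the whole chunk
  for blk in chunk:
    if not blk.valid:
      return false                 # abort the whole update, nothing committed
    work.imported.add blk.number
  db = work                        # commit once per chunk
  true

proc importAll(db: var ToyDb, blocks: seq[ToyBlock]) =
  var i = 0
  var chunkLen = 4
  while i < blocks.len:
    let hi = min(i + chunkLen, blocks.len)
    if db.importChunk(blocks[i ..< hi]):
      i = hi
      chunkLen = 4                 # back to full-size chunks
    elif chunkLen > 1:
      chunkLen = chunkLen div 2    # faulty block somewhere: retry a smaller range
    else:
      break                        # isolated the faulty block; stop importing

var db: ToyDb
db.importAll(@[ToyBlock(number: 1, valid: true),
               ToyBlock(number: 2, valid: false),
               ToyBlock(number: 3, valid: true)])
doAssert db.imported == @[1]       # everything up to the faulty block
```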
The module name is a misnomer, because AccountsCache has been
replaced by LedgerRef. But the test is still applicable.
Instead of replaying unsupported Goerli blocks,
we generate our own transactions and block.
* Remove AccountStateDB
AccountStateDB should no longer be used.
Its usage has been reduced to read-only operations.
Replace it with LedgerRef to reduce the maintenance burden.
* remove extra spaces
Co-authored-by: tersec <tersec@users.noreply.github.com>
---------
Co-authored-by: tersec <tersec@users.noreply.github.com>
When performing block import, we can batch state root verifications and
header checks, doing them only once per chunk of blocks, assuming that
the other blocks in the batch are valid by extension.
When we're not generating receipts, we can also skip per-transaction
state root computation pre-byzantium, which is what provides a ~20%
speedup in this PR, at least on those early blocks :)
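A toy sketch of the batching idea (hypothetical names, not the real validation
code): the expensive root check runs once per chunk instead of once per block.

```nim
var rootChecks = 0

proc verifyStateRoot(blockNumber: int) =
  inc rootChecks                   # stands in for the expensive Merkle hashing

proc processChunk(blockNumbers: seq[int], perBlockChecks: bool) =
  for i, n in blockNumbers:
    # ... execute the transactions of block `n` here ...
    if perBlockChecks or i == blockNumbers.high:
      verifyStateRoot(n)           # batched mode: only the last block of the chunk

processChunk(@[1, 2, 3, 4], perBlockChecks = true)
doAssert rootChecks == 4           # one expensive check per block
rootChecks = 0
processChunk(@[1, 2, 3, 4], perBlockChecks = false)
doAssert rootChecks == 1           # one expensive check per chunk
```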
We also stop storing transactions, receipts and uncles redundantly when
importing from era1 - there is no need to waste database storage on this
when we can load it from the era1 file (eventually).
State lookups potentially trigger expensive re-hashings - this is the
first of several steps to remove the unnecessary ones from the general
flow of block processing.
* avoid re-reading parent block header from database when it's already
in memory
* Fix initialiser
why:
Possible crash (app profiling, tracer etc.)
* Update column family options processing
why:
Same for kvt as for aristo
* Move `AristoDbDualRocks` backend type to the test suite
why:
So it is not available for production
* Fix typos in API jump table
why:
Used for tracing and app profiling only. Needed some update
* Purged CoreDb legacy API
why:
Not needed anymore, was transitionary and disabled.
* Rename `flush` argument to `eradicate` in a DB close context
why:
The word `eradicate` leaves no doubt what is meant
* Rename `stoFlush()` -> `stoDelete()`
* Rename `core_apps_newapi` -> `core_apps` (not so new anymore)
* Bump nim-eth, nim-web3, nimbus-eth2
- Replace std.Option with results.Opt (see the sketch below)
- Field name changes
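For illustration, a minimal before/after of the `Option` to `Opt` switch
(plain `results` package usage, unrelated to the actual field changes):

```nim
import std/options
import results                    # nim-results: Opt[T] is Result[T, void]

# before: std.Option
let a: Option[int] = some(42)
doAssert a.isSome and a.get == 42

# after: results.Opt
let b: Opt[int] = Opt.some(42)
doAssert b.isSome and b.get == 42
doAssert Opt.none(int).isNone
```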
* More fixes
* Fix Portal stream async raises and portal testnet Opt usage
* Bump eth + nimbus-eth2 + more fixes related to eth_types changes
* Fix in utp test app and nimbus-eth2 bump
* Fix test_blockchain_json rebase conflict
* Fix EVMC block_timestamp conversion plus commentary
---------
Co-authored-by: kdeme <kim.demey@gmail.com>
* Bump rocksdb
* Rename `KVT` objects related to filters according to `Aristo` naming
details:
filter* => delta*
roFilter => balancer
* Compulsory error handling if `persistent()` fails
* Add return code to `reCentre()`
why:
Might eventually fail if re-centring is blocked. Some logic will be
added in subsequent patch sets.
* Add column families from earlier session to rocksdb in opening procedure
why:
All previously used CFs must be declared when re-opening an existing
database.
* Update `init()` and add rocksdb `reinit()` methods for changing parameters
why:
Opening a set of column families (with different open options) must span
at least the ones that are already on disk.
* Provide write-trigger-event interface into `Aristo` backend
why:
This allows data from a guest application (think `KVT`) to be saved
in sync with the write cycle, so that the guest and `Aristo` data are
persisted atomically.
* Use `KVT` with new column family interface from `Aristo`
* Remove obsolete guest interface
* Implement `KVT` piggyback on `Aristo` backend
* CoreDb: Add separate `KVT`/`Aristo` backend mode for debugging
* Remove `rocks_db` import from `persist()` function
why:
Some systems (in particular `fluffy` and friends) use the `Aristo`
memory backend emulation and do not link against rocksdb when building
the application. So this should fix that problem.
These options, inspired by Nethermind and general internet wisdom, bring
the database size down to 2/3 without affecting throughput. In theory,
they should also bring down memory usage and/or make more efficient use
of whatever memory is already assigned to rocksdb but this needs
verification in a longer test at synced-mainnet sizes.
In the meantime, they make testing easier by removing some of the noise
that the profiler flags as bad, such as excessive SkipList access
(countered by bloom filters).
* Updates to Fluffy book for hive tests.
* Add support for disabling state root checks for state content from the command line.
* Update portal_stateStore endpoint to support decoding the offer and storing the retrieval value.
* feat: kurtosis local testing script
* fix: updated correct copyright year
* fix: remove unwanted files, and introduce a post script cleanup
* fix: remove placeholder image-name & unwanted lines
* fix: check for jq shifted from dpkg
* changed copyright year to 2024
* add optimizations and optional features
* feat: kurtosis CI for eth1-eth2 interop
* fix: added to run on PRs
* fix: curl installation missing in docker
* fix: shellcheck fixed with suggested changes
* changed tests to fail (github) on success
* fix: changed from -rf to -f in Dockerfile
* fix: removed echo -e
* Use RocksDb column families instead of a prefixed single column
why:
Better performance
* Use structural objects `VertexRef` and `HashKey` in LRU cache for RocksDb
why:
Avoids repeated de/serialisation
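A simplified sketch of that cache change (toy types, and ignoring the LRU
eviction part): keeping the decoded object in the cache means a hit costs a
table lookup instead of another round of deserialisation.

```nim
import std/tables

type DemoVertexRef = ref object    # stand-in for the structural VertexRef
  payload: string

proc decode(blob: string): DemoVertexRef =
  DemoVertexRef(payload: blob)     # pretend this is the expensive deserialisation

var
  blobCache: Table[uint64, string]         # before: cache holds serialised blobs
  vtxCache: Table[uint64, DemoVertexRef]   # after: cache holds decoded objects

blobCache[1u64] = "leaf-data"
let v1 = decode(blobCache[1u64])   # before: every hit pays for decode()

vtxCache[1u64] = decode("leaf-data")
let v2 = vtxCache[1u64]            # after: decode once, hits return the object

doAssert v1.payload == v2.payload
```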
* Started implementation of state endpoints.
* Add rpc calls and server stubs.
* Initial implementation of getAccountProof and getStorageProof.
* Refactor validation to use toAccount utils functions.
* Add state endpoints tests.
`initTable` is obsolete since nim 0.19 and can introduce significant
memory overhead while providing no benefit (since the table will be
grown to the default initial size on first use anyway).
In particular, aristo layers will not necessarily use all tables they
initialize, for example when many empty accounts are being created.
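A quick illustration of the point (plain Nim, independent of the Aristo
types): a default-initialised `Table` is fully usable, so there is nothing
to gain from `initTable` for tables that may stay empty.

```nim
import std/tables

type DemoLayer = object
  # declared but possibly never used, e.g. for an empty account
  slots: Table[string, string]

var layer: DemoLayer               # no initTable(...) anywhere
doAssert layer.slots.len == 0      # perfectly valid empty table

layer.slots["key"] = "value"       # grows to the default size on first use
doAssert layer.slots.len == 1
```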