* Remove `dirty` set from structural objects
why:
Not used anymore, the tree is dirty by default.
* Rename `aristo_hashify` -> `aristo_compute`
* Remove cruft, update comments, cosmetics, etc.
* Simplify `SavedState` object
why:
The key chaining has become obsolete after the extra lazy hashing. There
is some available space for a state hash to be maintained in future.
details:
Accept the legacy `SavedState` object serialisation format for a
while (it will be overwritten by the new format.)
* rebased from `github/on-demand-mpt`
ackn:
wip: on-demand mpt construction
Given that actual data is stored in the `Vertex` structure, it's useful
to think of the MPT as a cache for computing roots rather than being a
functional requirement on its own.
This PR engenders this line of thinking by incrementally computing the
MPT only when it is needed, i.e. when a state (or similar) root is
requested.
This has the effect of significantly reducing memory usage as well as
improving performance:
* no need for dirty-mpt-node book-keeping
* no need to build complex forest of upcoming hashing work
* only hashes that are functionally needed are ever computed -
intermediate nodes whose MPT root is not observed are never computed /
processed
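A minimal sketch of the lazy-hashing idea (hypothetical types and names, not the actual Aristo API): vertex keys are cached and only (re)computed, bottom-up, when a root is actually requested.

```nim
import std/tables

type
  VertexID = uint64
  HashKey = array[32, byte]

  LazyTrie = object
    ## Vertices hold the actual data; keys are merely a cache that is
    ## filled in on demand.
    children: Table[VertexID, seq[VertexID]]
    keys: Table[VertexID, HashKey]

proc hashVertex(t: LazyTrie, vid: VertexID): HashKey =
  ## Placeholder for the real vertex serialisation + keccak step.
  discard

proc computeKey(t: var LazyTrie, vid: VertexID): HashKey =
  ## Only vertices without a cached key are ever hashed; nothing is
  ## hashed at all until some root is observed.
  if vid in t.keys:
    return t.keys[vid]
  for child in t.children.getOrDefault(vid):
    discard t.computeKey(child)     # children must be keyed first
  result = t.hashVertex(vid)
  t.keys[vid] = result

proc rootHash(t: var LazyTrie): HashKey =
  ## The MPT is (re)hashed only here, when a root is requested.
  t.computeKey(VertexID(1))
```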
* Unit test hot fixes
* Unit test hot fixes cont.
(somehow lost that part)
---------
Co-authored-by: Jacek Sieka <jacek@status.im>
* Enable test_txpool by disabling failing cases
Because we cannot use goerli replay to feed the txpool anymore,
we use only a list of transactions.
But some test cases are still failing because they require block state
replay.
* Fix tx info
* Normalised storage tree addressing in function prototypes
detail:
Argument list is always `<db> <account-path> <slot-path> ..` with
both path arguments as `openArray[]`
* Remove cruft
* CoreDb internally use full account paths rather than addresses
* Update API logging
* Use hashed account address only in prototypes
why:
This avoids unnecessary repeated hashing of the same account address.
The burden of doing that is now on the application. In the case here,
the ledger caches all kinds of data anyway, so it is common sense to
exploit that for account address hashes.
caveat:
Using `openArray[byte]` argument types for hashed accounts is inherently
fragile. In non-release mode, a length verification `doAssert` is
enabled by default.
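A sketch of the kind of guard meant here (hypothetical descriptor and proc name, assuming 32-byte hashed account paths):

```nim
type
  AristoDbRef = ref object   # stand-in for the real descriptor

proc fetchAccountRecord(db: AristoDbRef, accPath: openArray[byte]) =
  ## `openArray[byte]` does not encode the expected length, so a
  ## non-release check guards against passing an unhashed address.
  when not defined(release):
    doAssert accPath.len == 32, "expected a 32 byte hashed account path"
  discard # ... actual lookup goes here
```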
* No accPath in data record (use `AristoAccount` as `CoreDbAccount`)
* Remove now unused `eAddr` field from ledger `AccountRef` type
why:
It is a duplicate of the lookup key
* Avoid merging the account record/statement in the ledger twice.
* Tighten `CoreDb` API for accounts
why:
Apart from removing cruft: the way to fetch the accounts' state root via
a `CoreDbColRef` record was unnecessarily complicated.
* Extend `CoreDb` API for accounts to cover storage tries
why:
In future, this will make the notion of column objects obsolete. Storage
trees will then be indexed by the account address rather than the vertex
ID equivalent like a `CoreDbColRef`.
* Apply new/extended accounts API to ledger and tests
details:
This makes the `distinct_ledger` module obsolete
* Remove column object constructors
why:
They were needed as an abstraction of MPT sub-trees including storage
trees. Now, storage trees are handled by the account (e.g. via address)
they belong to and all other trees can be identified by a constant well
known vertex ID. So there is no need for column objects anymore.
There are still some left-over column object methods which will be
removed next.
* Remove `serialise()` and `PayloadRef` from default Aristo API
why:
Not needed. `PayloadRef` was used for unstructured/unknown payload
formats (account or blob) and `serialise()` was used for decoding
`PayloadRef`. Now it is known in advance what the payload looks
like.
* Added query function `hasStorageData()` whether a storage area exists
why:
Useful for supporting `slotStateEmpty()` of the `CoreDb` API
* In the `Ledger` replace `storage.stateEmpty()` by `slotStateEmpty()`
* On Aristo, hide the storage root/vertex ID in the `PayloadRef`
why:
The storage vertex ID is fully controlled by Aristo while the
`AristoAccount` object is controlled by the application. With the
storage root being part of the `AristoAccount` object, there was an
unnecessary administrative burden to keep that storage root field up
to date.
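A sketch of the resulting split (illustrative field names, not the actual type definitions):

```nim
type
  VertexID = uint64

  AristoAccount = object
    ## Application-controlled account record: no storage root field
    ## left to keep in sync.
    nonce: uint64
    balance: array[32, byte]    # stand-in for a UInt256
    codeHash: array[32, byte]

  AccountPayload = object
    ## Aristo-internal leaf payload: the storage tree's vertex ID
    ## lives here and is managed by Aristo alone.
    account: AristoAccount
    stoID: VertexID
```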
* Remove cruft, update comments etc.
* Update changed MPT access paradigms
why:
Fixes verified proxy tests
* Fluffy cosmetics
* Simplify txpool baseFeeGet
- Avoid using `toEVMFork` because we are not in the EVM
- Rename `isLondon` to `isLondonOrLater`
* Remove timestamp from isLondonOrLater
* ForkedChain implementation
- revamp test_blockchain_json using ForkedChain
- re-enable previously failing test cases.
* Remove excess error handling
* Avoid reloading parent header
* Do not force base update
* Write baggage to database
* Add findActiveChain to finalizedSegment
* Create new stagingTx in addBlock
* Check last stateRoot existence in test_blockchain_json
* Resolve rebase conflict
* More precise nomenclature for block import cursor
* Ensure bad blocks are not imported and good blocks are not rejected
* finalizeSegment becomes forkChoice and aligns with the engine API forkChoice spec
* Display reason when good block rejected
* Fix comments
* Put BaseDistance into CalculateNewBase equation
* Separate finalizedHash from baseHash
* Add more doAssert constraints
* Add push raises: []
* creating a seq from a table that holds lots of changes means copying
all data into the seq - this can be several GB of data while syncing
blocks (see the sketch below)
* Nim fails to optimize the move of the `WidthFirstForest` - the real
solution is to not construct a `wff` to begin with, but this PR provides
relief while that is being worked on
This spike fix allows us to bump the rocksdb cache by another 2 GB and
still have a significantly lower peak memory usage during sync.
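A sketch of the first point (illustrative only, not the actual `wff` code): iterating the table in place avoids materialising a second multi-gigabyte copy of the change set.

```nim
import std/tables

type
  VertexID = uint64
  Change = seq[byte]

proc flushChanges(changes: Table[VertexID, Change]) =
  # Building an intermediate seq first doubles peak memory:
  #   for (vid, data) in toSeq(changes.pairs): ...
  # Iterating the table directly avoids the copy:
  for vid, data in changes:
    discard (vid, data)   # hand `data` for `vid` to the backend here
```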
When processing long ranges of blocks, the account cache grows
unbounded, which causes huge memory spikes.
Here, we move entries to a second-level cache after each block - the
second-level cache is cleared on the next block after that, which
creates a simple LRU effect.
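A minimal sketch of the two-level scheme (hypothetical names, assuming a table-backed cache keyed by account):

```nim
import std/tables

type
  AccountKey = array[32, byte]
  AccountData = seq[byte]

  TwoLevelCache = object
    live: Table[AccountKey, AccountData]   # touched during the current block
    stale: Table[AccountKey, AccountData]  # survivors from the previous block

proc get(c: var TwoLevelCache, key: AccountKey): AccountData =
  ## A miss returns an empty value in this sketch.
  if key in c.live:
    return c.live[key]
  if key in c.stale:
    # Promote entries that are still in use back to the live level.
    result = c.stale[key]
    c.live[key] = result

proc put(c: var TwoLevelCache, key: AccountKey, data: AccountData) =
  c.live[key] = data

proc endOfBlock(c: var TwoLevelCache) =
  ## Called once per block: anything untouched for a whole block is
  ## dropped on the next call, giving a simple LRU-like bound.
  c.stale = move(c.live)
```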
There's a small performance cost of course, though the freed-up memory
can now be reassigned to the rocksdb row cache, which not only makes up
for the loss but leads to an overall performance increase.
The bump to 2 GB of rocksdb row cache here needs more testing but is
slightly less than, and loosely based on, the savings from this PR and
the circular ref fix in #2408 - another way to phrase this is that it's
better to give rocksdb more breathing room than to let the memory sit
unused until circular ref collection happens ;)
An instance of `CoreDbMptRef` is created for and stored in every account
- when we are processing blocks and have many accounts in memory, this
closure environment takes up hundreds of MB of memory (around block 5M,
it is the 4th largest memory consumer!) - incidentally, this also
removes a circular reference in the setup that causes the
`AristoCodeDbMptRef` to linger in memory much longer than it has to,
which is the core reason why it takes up so much.
The real solution here is to remove the methods indirection entirely,
but this PR provides relief until that has been done.
Similar treatment is given to some of the other core api functions to
avoid circulars there too.
This buffer eliminates a large part of allocations during MPT traversal,
reducing overall memory usage and GC pressure.
Ideally, we would use it throughout in the API instead of
`openArray[byte]` since the built-in length limit appropriately exposes
the natural 64-nibble depth constraint that `openArray` fails to
capture.
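A sketch of such a bounded path buffer (hypothetical layout, assuming 32-byte keys and hence a maximum depth of 64 nibbles):

```nim
type
  NibblesBuf = object
    ## Fixed-capacity path buffer: no heap allocation during traversal,
    ## and the 64-nibble depth limit is part of the type.
    bytes: array[32, byte]   # packed nibbles, two per byte
    used: uint8              # number of nibbles in use

func len(n: NibblesBuf): int = int(n.used)

func `[]`(n: NibblesBuf, i: int): byte =
  ## Return the i-th nibble (0 .. 63).
  let b = n.bytes[i shr 1]
  if (i and 1) == 0: b shr 4 else: b and 0x0f

func add(n: var NibblesBuf, nibble: byte) =
  doAssert n.used < 64, "MPT paths are at most 64 nibbles deep"
  let i = int(n.used)
  if (i and 1) == 0:
    n.bytes[i shr 1] = nibble shl 4
  else:
    n.bytes[i shr 1] = n.bytes[i shr 1] or (nibble and 0x0f)
  inc n.used
```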
It is common for many accounts to share the same code - at the database
level, code is stored by hash meaning only one copy exists per unique
program but when loaded in memory, a copy is made for each account.
Further, every time we execute the code, it must be scanned for invalid
jump destinations, which slows down EVM execution.
Finally, the extcodesize call causes code to be loaded even if only the
size is needed.
This PR improves on all these points by introducing a shared
`CodeBytesRef` type whose code section is immutable and that can be
shared between accounts. Further, a dedicated `len` API call is added so
that the EXTCODESIZE opcode can operate without polluting the GC and
code cache, for cases where only the size is requested - rocksdb will in
this case cache the code itself in the row cache, meaning that lookup of
the code itself remains fast when the length is asked for first.
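A minimal sketch of the idea (hypothetical names, not the actual EVM types): one immutable container per unique code hash, shared by every account that references it, plus a length query that avoids loading the code altogether.

```nim
import std/tables

type
  CodeHash = array[32, byte]

  CodeBytes = ref object
    ## Immutable once created, so it can be shared between accounts
    ## and reused across blocks.
    bytes: seq[byte]
    validJumpDests: seq[bool]   # JUMPDEST scan done once, not per account

  CodeCache = object
    entries: Table[CodeHash, CodeBytes]

proc analyse(code: seq[byte]): seq[bool] =
  ## Placeholder for the one-off invalid-jump-destination scan.
  newSeq[bool](code.len)

proc getCode(c: var CodeCache, h: CodeHash, load: proc(): seq[byte]): CodeBytes =
  ## All accounts with the same code hash get the same instance.
  if h in c.entries:
    return c.entries[h]
  let code = load()
  result = CodeBytes(bytes: code, validJumpDests: analyse(code))
  c.entries[h] = result

proc codeLen(c: CodeCache, h: CodeHash, loadLen: proc(): int): int =
  ## EXTCODESIZE-style query: answered from the cache if the code is
  ## already loaded, otherwise only the length is fetched.
  if h in c.entries:
    c.entries[h].bytes.len
  else:
    loadLen()
```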
With 16k code entries, there's a 90% hit rate which goes up to 99%
during the 2.3M attack - the cache significantly lowers memory
consumption and execution time not only during this event but across the
board.
* CoreDb: remove PHK tries
why:
There is no general use anymore for an MPT with a pre-hashed key. It
was used to resemble the `SecureHexaryTrie` logic from the legacy DB.
The only place where this is needed is the `Ledger` which uses a
distinct MPT version anyway (see `distinct_ledgers.nim`.)
* Rename `CoreDx*` -> `CoreDb*`
why:
The naming `CoreDx*` was used to differentiate the new CoreDb API from
the legacy API which had descriptors named `CoreDb*`.
* Provide dedicated functions for fetching accounts and storage trees
why:
Different prototypes for each class `account`, `generic` and
`storage`.
* Remove `fetchPayload()` and other cruft from API, `aristo_fetch`, etc.
* Fix typos, debugging leftovers, comments
For the block cache to be shared between column families, the options
instance must be shared between the various column families being
created. This also ensures that there is only one source of truth for
configuration options instead of having two different sets depending on
how the tables were initialized.
This PR also removes the re-opening mechanism which can double startup
time - every time the database is opened, the log is replayed - a large
log file will take a long time to open.
Finally, several options are now correctly implemented as column family
options, including one that puts a hash index in the SST files.
* Provide dedicated functions for deleting accounts and storage trees
why:
Storage trees are always linked to an account, so there is no need
for an application to fiddle about (e.g. re-cycling, unlinking) with
storage tree vertex IDs.
* Remove `delete()` and other cruft from API, `aristo_delete`, etc.
* clean up delete functions
details:
The delete implementations `deleteImpl()` and `delTreeImpl()` do not
need to be super generic anymore as all the edge cases are covered by
the specialised `deleteAccountPayload()`, `deleteGenericData()`, etc.
* Avoid unnecessary re-calculations of account keys
why:
The function `registerAccountForUpdate()` extracted the storage ID
(if any) and automatically marked the Merkle keys along the account
path for re-hashing.
This happened even if it was later detected that neither the account
nor the storage tree needed to be updated.
So the `registerAccountForUpdate()` function was split into a part
which retrieves the storage ID, and another one which marks the
Merkle keys for re-calculation, to be applied only when needed.
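A sketch of the split (hypothetical names, not the actual Aristo procs): fetching the storage ID is now a cheap lookup, and marking keys for re-hashing happens only once an update is known to be necessary.

```nim
type
  VertexID = uint64
  AristoDbRef = ref object    # stand-in for the real descriptor

proc fetchStorageID(db: AristoDbRef, accPath: openArray[byte]): VertexID =
  ## Cheap lookup only: no Merkle keys are touched here.
  discard

proc markKeysDirty(db: AristoDbRef, accPath: openArray[byte]) =
  ## Invalidate the keys along the account path; called only when the
  ## account or its storage tree is really going to change.
  discard

proc updateStorage(db: AristoDbRef, accPath, slotPath, data: openArray[byte]) =
  let stoID = db.fetchStorageID(accPath)
  if stoID == VertexID(0):
    return                    # nothing to update, no re-hashing triggered
  db.markKeysDirty(accPath)   # re-hashing cost only paid on a real update
  # ... perform the actual storage write here
```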
* Remove unused `merge*()` functions (for production)
details:
Some functionality moved to test suite
* Make sure that only the `AccountData` leaf type is used on VertexID(1)
* clean up payload type
* Provide dedicated functions for merging accounts and storage trees
why:
Storage trees are always linked to an account, so there is no need
for an application to fiddle about (e.g. creating, re-cycling) with
storage tree vertex IDs.
* CoreDb: Disable tracer functionality
why:
Must be updated to accommodate new/changed `Aristo` functions.
* CoreDb: Use new `mergeXXX()` functions
why:
Makes explicit vertex ID management obsolete for creating new
storage trees.
* Remove `mergePayload()` and other cruft from API, `aristo_merge`, etc.
* clean up merge functions
details:
The merge implementation `mergePayloadImpl()` does not need to be super
generic anymore as all the edge cases are covered by the specialised
functions `mergeAccountPayload()`, `mergeGenericData()`, and
`mergeStorageData()`.
* No tracer available at the moment, so disable offending tests
The state root computation here is one of the major hotspots in block
processing - in cases where the code only needs to know whether the
state is empty or not, this can be answered a lot faster.
Adding a separate function for this looks fragile and should probably be
revisited.
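A sketch of the shortcut (hypothetical, not the actual Aristo code): emptiness can be answered by looking at the root vertex only, whereas the full state root forces every missing key to be recomputed.

```nim
import std/tables

type
  VertexID = uint64
  HashKey = array[32, byte]

  Trie = object
    vertices: Table[VertexID, seq[VertexID]]   # root is VertexID(1)
    keys: Table[VertexID, HashKey]

proc computeKey(t: var Trie, vid: VertexID): HashKey =
  ## The full (expensive) hashing pass, as sketched earlier.
  discard

proc stateEmpty(t: Trie): bool =
  ## Cheap check: no root vertex means an empty state; no hashing
  ## work is triggered at all.
  VertexID(1) notin t.vertices

proc stateRoot(t: var Trie): HashKey =
  t.computeKey(VertexID(1))
```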
Broadly, when importing blocks we don't need a transaction / frame per
block because we can simply abort the whole update and try again with a
smaller range if we find a faulty block.
Of course, this applies mainly to semi-trusted blocks where we're not
expected to fail in applying them - these could be blocks either from
files or header-verified blocks as given by consensus.
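A sketch of the batching idea (hypothetical names): one frame for the whole range, rolled back and retried with a smaller range only if a block turns out to be faulty.

```nim
type
  Block = object              # placeholder for a full block
  DbFrame = ref object        # stand-in for a database transaction/frame

proc beginFrame(): DbFrame = DbFrame()
proc commit(f: DbFrame) = discard
proc rollback(f: DbFrame) = discard
proc applyBlock(f: DbFrame, blk: Block): bool = true  # false on a faulty block

proc importRange(blocks: openArray[Block]): bool =
  ## Semi-trusted blocks: failures are rare, so one frame per range is
  ## much cheaper than one frame per block.
  let frame = beginFrame()
  for blk in blocks:
    if not frame.applyBlock(blk):
      frame.rollback()
      return false            # caller retries with a smaller range
  frame.commit()
  true
```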
The module name is a misnomer, because `AccountsCache` has been
replaced by `LedgerRef`, but the test is still applicable.
Instead of replaying unsupported goerli blocks,
we generate our own transactions and block.