Commit Graph

35 Commits

Author SHA1 Message Date
Jacek Sieka 2fe8cc4551
leaf cache fixes (#2637)
* Add missing leaf cache update when a leaf turns into a branch with two
leaves (on merge) and vice versa (on delete) - this could lead to stale
leaves being returned from the cache, causing validation failures - in
practice it didn't happen because the leaf caches were not being used
efficiently :)
* Replace `seq` with `ArrayBuf` in `Hike`, allowing it to become
allocation-free - this PR also works around an inefficiency in Nim when
returning large types via a `var` parameter
* Use the leaf cache instead of `getVtxRc` to fetch recent leaves - this
makes the vertex cache more efficient at caching branches because fewer
leaf requests pass through it.
2024-09-19 10:39:06 +02:00
Jacek Sieka 5cd0297462
fix missed cache opportunity (#2628)
The storage leaf cache was being circumvented when actually fetching
leaves and was instead only being filled with items :/

Also avoids an expensive copy when fetching account data (broadly,
variant objects are comparatively expensive to copy and fetching
accounts is a hotspot)
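
A minimal sketch of the read-through pattern this restores; `LeafCache`, `fetchLeaf` and `readFromBackend` are made-up names, not the actual Aristo API:

```nim
import std/[options, tables]

type
  LeafKey = string                  # stand-in for the real path/key type
  Leaf = object
    value: seq[byte]
  LeafCache = Table[LeafKey, Leaf]  # stand-in for the real LRU cache

proc readFromBackend(key: LeafKey): Option[Leaf] =
  ## Placeholder for the expensive database lookup.
  none(Leaf)

proc fetchLeaf(cache: var LeafCache, key: LeafKey): Option[Leaf] =
  # Before the fix, this lookup was skipped and the cache was only written to.
  if key in cache:
    return some(cache[key])         # cheap hit, no backend round trip
  result = readFromBackend(key)
  if result.isSome:
    cache[key] = result.get         # fill the cache for subsequent fetches
```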
2024-09-14 09:47:32 +02:00
Jacek Sieka 5c1e2e7d3b
Migrate `keyed_queue` to `minilru` (#2608)
Compared to `keyed_queue`, `minilru` uses significantly less memory, in
particular for the 32-byte hash keys where `kq` stores several copies of
the key redundantly.
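
For context, a toy LRU looks roughly like this (a simplified sketch, not `minilru`'s implementation; a real cache avoids the O(n) recency bookkeeping and the extra key copy kept in `order` here):

```nim
import std/[options, tables]

type
  Lru[K, V] = object        # construct as Lru[K, V](capacity: n)
    capacity: int
    order: seq[K]           # least recently used first; O(n) upkeep in this sketch
    data: Table[K, V]

proc touch[K, V](c: var Lru[K, V], key: K) =
  let i = c.order.find(key)
  if i >= 0: c.order.delete(i)
  c.order.add key

proc put[K, V](c: var Lru[K, V], key: K, val: V) =
  if key notin c.data and c.data.len >= c.capacity:
    c.data.del c.order[0]   # evict the least recently used entry
    c.order.delete(0)
  c.data[key] = val
  c.touch key

proc get[K, V](c: var Lru[K, V], key: K): Option[V] =
  if key in c.data:
    c.touch key
    result = some(c.data[key])
```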
2024-09-13 15:47:50 +02:00
Jordan Hrycaj c6674311eb
Fringe case portal proof for existing account without storage tree (#2613)
detail:
  For practical reasons, if such an account is asked for a slot, an empty
  proof list is returned. It is up to the user to provide an account
  proof that shows that there is no storage tree.
2024-09-11 20:27:42 +00:00
Jordan Hrycaj ce713d95fc
Aristo lazily delete larger subtrees (#2560)
* Extract sub-tree deletion functions into separate sub-modules

* Move/rename `aristo_desc.accLruSize` => `aristo_constants.ACC_LRU_SIZE`

* Lazily delete sub-trees

why:
  This gives some control over the memory used to keep the deleted vertices
  in the cached layers. For larger sub-trees, keys and vertices might be
  on the persistent backend to a large extent, so deleting eagerly would
  pull a large amount of extra information from the backend into the cached
  layer.

  For lazy deletion it is enough to remember sub-trees by a small set of
  (at most 16) sub-roots to be processed when storing persistent data.
  Marking the tree root deleted immediately allows most of the code base to
  work as before.

* Comments and cosmetics

* No need to import all for `Aristo` here

* Kludge to make `chronicle` usage in sub-modules work with `fluffy`

why:
  That `fluffy` would not run with any logging in `core_db` is a problem
  I have known about for a while. Up to now, logging was only used for debugging.

  With the current `Aristo` PR, there are cases where logging might be
  wanted but this works only if `chronicles` runs without the
  `json[dynamic]` sinks.

  So this should be re-visited.

* More of a kludge
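
A rough sketch of the lazy-deletion idea, with simplified stand-in types rather than the actual Aristo structures:

```nim
import std/sets

type
  VertexID = uint64
  Db = object
    deletedRoots: HashSet[VertexID]   # small set of pending sub-tree roots

proc markSubTreeDeleted(db: var Db, root: VertexID) =
  ## Cheap: only the sub-root is remembered, so no vertices are pulled
  ## from the persistent backend into the cached layers at this point.
  db.deletedRoots.incl root

proc deleteSubTreeNow(db: var Db, root: VertexID) =
  ## Placeholder for the actual recursive deletion against the backend.
  discard

proc persist(db: var Db) =
  ## Only when data is stored persistently are the remembered sub-trees
  ## actually walked and their vertices removed.
  let pending = db.deletedRoots
  db.deletedRoots.clear()
  for root in pending:
    db.deleteSubTreeNow(root)
```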
2024-08-14 08:54:44 +00:00
Jordan Hrycaj 38572bd8ea
Cache a storage root ID forever in the leaf payload of an account (#2551)
details:
  Stale root IDs are marked disabled while the ID is kept in the leaf
  payload.

why:
  This might lead to further caching advantages.
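
The shape of the idea, with illustrative field names only (not the actual payload definition):

```nim
type
  VertexID = uint64

  StorageID = object
    isValid: bool     # false: the ID is disabled/stale but still remembered
    vid: VertexID     # last storage root ID ever assigned to this account

  AccountLeafPayload = object
    nonce: uint64
    balance: uint64   # simplified; the real type would use a 256-bit integer
    stoID: StorageID  # kept forever, even after the storage tree is deleted
```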
2024-08-07 13:28:01 +00:00
Jordan Hrycaj 1452e7b1c0
Misc updates (#2513)
* Update config for Ledger and CoreDb

why:
  Prepare for tracer which depends on the API jump table (as well as
  the profiler.) The API jump table is now enabled in unit/integration
  test mode piggybacking on the `unittest2DisableParamFiltering`
  compiler flag or on an extra compiler flag `dbjapi_enabled`.

* No need for error field in `NodeRef`

why:
  Was only needed by the proof nodes pre-loader which will be re-implemented

* Cosmetics
2024-07-22 18:10:04 +00:00
Jacek Sieka df4a21c910
Store cached hash at the layer corresponding to the source data (#2492)
When lazily verifying state roots, we may end up with an entire state
whose roots have not yet been computed and must be computed for the whole
database at once - in the current design, that would result in hashes for
the entire trie being held in memory.

Since the hash depends only on the data in the vertex, we can store it
directly at the top-most level derived from the vertices it depends on
- be that memory or database - this makes the memory usage broadly
linear with respect to the already-existing in-memory change set stored
in the layers.

It also ensures that if we have multiple forks in memory, hashes get
cached in the correct layer maximising reuse between forks.

The same layer numbering scheme as elsewhere is reused, where -2 is the
backend, -1 is the balancer, and 0 and up are the in-memory layer stack.

A downside of this approach is that we create many small batches - a
future improvement could be to collect all such writes in a single
batch, though the memory profile of this approach should be examined
first (where is the batch kept, exactly?).
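
A simplified sketch of storing the hash where its source data lives; the layer bookkeeping and names below are made up for illustration:

```nim
import std/tables

type
  VertexID = uint64
  HashKey = array[32, byte]
  Layer = object
    vtx: Table[VertexID, seq[byte]]  # vertex data changed in this layer
    key: Table[VertexID, HashKey]    # hashes cached alongside that data

proc layerOf(stack: seq[Layer], vid: VertexID): int =
  ## Topmost stack layer holding the vertex, or -1 if it only exists
  ## below the stack (balancer or backend).
  result = -1
  for i in countdown(stack.high, 0):
    if vid in stack[i].vtx:
      return i

proc cacheHash(stack: var seq[Layer], vid: VertexID, h: HashKey) =
  let i = stack.layerOf(vid)
  if i >= 0:
    stack[i].key[vid] = h   # the hash lives and dies with its source layer
  else:
    discard                 # would go into a balancer/backend write batch instead
```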
2024-07-18 09:13:56 +02:00
Jacek Sieka 9d91191154
storage hike cache (#2484)
This PR adds a storage hike cache similar to the account hike cache
already present - this cache is less efficient because account storage
is already partially cached in the account ledger but nonetheless helps
keep hiking down.

Notably, there's an opportunity to optimise this cache and the others so
that they cooperate better instead of overlapping, which is left for a
future PR.

This PR also fixes an O(N) memory usage for storage slots where the
delete would keep the full storage in a work list which on mainnet can
grow very large - the work list is replaced with a more conventional
recursive `O(log N)` approach.
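
A sketch of the difference, using simplified stand-in types: the recursive variant keeps at most one root-to-leaf path in flight (bounded by the trie depth), instead of accumulating the whole sub-tree in a work list:

```nim
type
  VertexID = uint64
  VertexKind = enum Leaf, Branch
  Vertex = object
    kind: VertexKind
    children: seq[VertexID]   # populated for branches only

proc getVertex(vid: VertexID): Vertex =
  ## Placeholder for the layer/backend lookup.
  Vertex(kind: Leaf)

proc delVertex(vid: VertexID) =
  ## Placeholder for removing a single vertex.
  discard

proc delSubTree(root: VertexID) =
  let vtx = getVertex(root)
  if vtx.kind == Branch:
    for child in vtx.children:
      delSubTree(child)       # recursion depth is bounded by the trie depth
  delVertex(root)
```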
2024-07-14 19:12:10 +02:00
Jacek Sieka f3a56002ca
Turn payload into value type (#2483)
The Vertex type unifies branches, extensions and leaves into a single
memory area where the largest member is the branch (128 bytes + overhead) -
the payloads we have are all smaller than 128 bytes, thus wrapping them in
an extra layer of `ref` is wasteful from a memory usage perspective.

Further, the refs must be visited during the mark-and-sweep phase of
garbage collection - since we keep millions of these, many of them
short-lived, this takes up significant CPU time.

```
Function	CPU Time: Total	CPU Time: Self	Module	Function (Full)	Source File	Start Address
system::markStackAndRegisters	10.0%	4.922s	nimbus	system::markStackAndRegisters(var<system::GcHeap>).constprop.0	gc.nim	0x701230`
```
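
Illustrative only (field names and sizes below are not the actual Aristo definitions): the gist is a single variant object whose branch arm is the largest member, so smaller payloads fit inline and no per-payload `ref` exists for the GC to trace:

```nim
type
  VertexID = uint64
  VertexType = enum Leaf, Extension, Branch

  Vertex = object
    case vType: VertexType
    of Branch:
      bVid: array[16, VertexID]  # 16 children, ~128 bytes: the largest arm
    of Extension:
      eVid: VertexID
    of Leaf:
      payload: array[64, byte]   # smaller than the branch arm, so inlining wastes nothing
```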
2024-07-14 12:02:05 +02:00
Jacek Sieka 01ab209497
cache account payload (#2478)
Instead of caching just the storage id, we can cache the full payload
which further reduces expensive hikes
2024-07-12 15:08:26 +02:00
Jacek Sieka 7d78fd97d5
avoid allocations for slot storage (#2455)
Introduce a new `StoData` payload type similar to `AccountData`

* slightly more efficient storage format
* typed api
* fewer seqs
* fix encoding docs - it wasn't rlp after all :)
2024-07-04 23:48:45 +00:00
Jacek Sieka 81e75622cf
storage: store root id together with vid, for better locality of refe… (#2449)
The state and account MPTs currently share key space in the database and
vertex IDs are assigned essentially randomly, which means that when two
adjacent slot values from the same contract are accessed, they might reside
at a large distance from each other.

Here, we prefix each vertex ID with its root, causing them to be sorted
together and thus bringing all data belonging to a particular contract
closer together - the same effect also happens for the main state MPT
whose nodes now end up clustered together more tightly.

In the future, the prefix given to the storage keys can also be used to
perform range operations such as reading all the storage at once and/or
deleting an account with a batch operation.

Notably, parts of the API already supported this rooting concept while
parts didn't - this PR makes the API consistent by always working with a
root+vid.
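
Conceptually, the database key becomes the big-endian root ID followed by the vertex ID, so plain lexicographic ordering clusters each trie's vertices; a hedged sketch (the actual on-disk encoding may differ):

```nim
type
  VertexID = uint64
  RootedVertexID = object
    root: VertexID   # root of the trie this vertex belongs to
    vid: VertexID

proc toBytesBE(v: uint64): array[8, byte] =
  for i in 0 ..< 8:
    result[i] = byte((v shr (8 * (7 - i))) and 0xff'u64)

proc dbKey(k: RootedVertexID): array[16, byte] =
  ## Root first, vertex ID second: keys of the same trie sort next to
  ## each other in the backend (e.g. RocksDB), improving locality and
  ## enabling batched prefix deletes of a whole account's storage.
  result[0 .. 7] = k.root.toBytesBE
  result[8 .. 15] = k.vid.toBytesBE
```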
2024-07-04 15:46:52 +02:00
Jacek Sieka 443c6d1f8e
Cache account path storage id (#2443)
The storage ID is frequently accessed when executing contract code and
finding the path via the database requires several hops, making the
process slow - here, we add a cache to keep the most recently used
account storage IDs in memory.

A possible future improvement would be to cache all account accesses so
that for example updating the balance doesn't cause several hikes.
2024-07-03 17:58:25 +02:00
Jacek Sieka 1f60e8e453
Use `Hash256` directly for account path (#2439)
Account paths are always a hash - passing them around as such helps avoid
confusion as to how long they are
2024-07-03 10:14:26 +02:00
Jacek Sieka 3d3831dde8
Small cleanups (#2435)
* avoid costly hike memory allocations for operations that don't need to
re-traverse it
* avoid unnecessary state checks (which might trigger unwanted state
root computations)
* disable optimize-for-hits due to the MPT no longer being complete at
all times
2024-07-01 14:07:39 +02:00
Jordan Hrycaj 8dd038144b
Some cleanups (#2428)
* Remove `dirty` set from structural objects

why:
  Not used anymore, the tree is dirty by default.

* Rename `aristo_hashify` -> `aristo_compute`

* Remove cruft, update comments, cosmetics, etc.

* Simplify `SavedState` object

why:
  The key chaining has become obsolete after extra lazy hashing. There
  is some space available for a state hash to be maintained in the future.

details:
  Accept the legacy `SavedState` object serialisation format for a
  while (which will be overwritten by the new format.)
2024-06-28 18:43:04 +00:00
Jordan Hrycaj 14c3772545
On demand mpt revisited (#2426)
* rebased from `github/on-demand-mpt`

ackn:
  wip: on-demand mpt construction

  Given that actual data is stored in the `Vertex` structure, it's useful
  to think of the MPT as a cache for computing roots rather than being a
  functional requirement on its own.

  This PR engenders this line of thinking by incrementally computing the
  MPT only when it's needed, i.e. when a state (or similar) root is needed.

  This has the effect of significantly reducing memory usage as well as
  improving performance:

  * no need for dirty-mpt-node book-keeping
  * no need to build a complex forest of upcoming hashing work
  * only hashes that are functionally needed are ever computed -
  intermediate nodes whose MPT root is not observed are never computed /
  processed
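
A toy sketch of the on-demand idea; nothing below maps onto actual Aristo procs, and `Hash` from `std/hashes` merely stands in for a real 32-byte Merkle key:

```nim
import std/[hashes, tables]

type
  VertexID = uint64

  Trie = object
    children: Table[VertexID, seq[VertexID]]
    data: Table[VertexID, string]
    cachedKey: Table[VertexID, Hash]   # filled lazily, only when asked for

proc invalidate(t: var Trie, vid: VertexID) =
  ## On merge/delete just drop the cached key - no hashing work and no
  ## dirty-node book-keeping beyond this.
  t.cachedKey.del vid

proc computeKey(t: var Trie, vid: VertexID): Hash =
  ## Called only when a (state) root is actually needed; recursion stops
  ## at sub-trees whose cached key is still valid.
  if vid in t.cachedKey:
    return t.cachedKey[vid]
  var h = hash(t.data.getOrDefault(vid))
  for child in t.children.getOrDefault(vid):
    h = h !& t.computeKey(child)
  result = !$h
  t.cachedKey[vid] = result
```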

* Unit test hot fixes

* Unit test hot fixes cont.

(somehow lost that part)

---------

Co-authored-by: Jacek Sieka <jacek@status.im>
2024-06-28 15:03:12 +00:00
Jordan Hrycaj 6dc2773957
Only use pre hashed addresses as account keys (#2424)
* Normalised storage tree addressing in function prototypes

detail:
  Argument list is always `<db> <account-path> <slot-path> ..` with
  both path arguments as `openArray[]`

* Remove cruft

* CoreDb internally Use full account paths rather than addresses

* Update API logging

* Use hashed account address only in prototypes

why:
  This avoids unnecessary repeated hashing of the same account address.
  The burden of doing that is upon the application. In the case here,
  the ledger caches all kinds of stuff anyway so it is common sense to
  exploit that for account address hashes.

caveat:
  Using `openArray[byte]` argument types for hashed accounts is inherently
  fragile. In non-release mode, a length verification `doAssert` is
  enabled by default.

* No accPath in data record (use `AristoAccount` as `CoreDbAccount`)

* Remove now unused `eAddr` field from ledger `AccountRef` type

why:
  Is duplicate of lookup key

* Avoid merging the account record/statement in the ledger twice.
2024-06-27 19:21:01 +00:00
Jordan Hrycaj 61bbf40014
Update storage tree admin (#2419)
* Tighten `CoreDb` API for accounts

why:
  Apart from cruft, the way to fetch the accounts state root via a
  `CoreDbColRef` record was unnecessarily complicated.

* Extend `CoreDb` API for accounts to cover storage tries

why:
  In the future, this will make the notion of column objects obsolete. Storage
  trees will then be indexed by the account address rather than the vertex
  ID equivalent like a `CoreDbColRef`.

* Apply new/extended accounts API to ledger and tests

details:
  This makes the `distinct_ledger` module obsolete

* Remove column object constructors

why:
  They were needed as an abstraction of MPT sub-trees including storage
  trees. Now, storage trees are handled by the account (e.g. via address)
  they belong to and all other trees can be identified by a constant well
  known vertex ID. So there is no need for column objects anymore.

  Still there are some left-over column object methods which will be
  removed next.

* Remove `serialise()` and `PayloadRef` from default Aristo API

why:
  Not needed. `PayloadRef` was used for unstructured/unknown payload
  formats (account or blob) and `serialise()` was used for decoding
  `PayloadRef`. Now it is known in advance what the payload looks
  like.

* Added query function `hasStorageData()` to check whether a storage area exists

why:
  Useful for supporting `slotStateEmpty()` of the `CoreDb` API

* In the `Ledger` replace `storage.stateEmpty()` by `slotStateEmpty()`

* On Aristo, hide the storage root/vertex ID in the `PayloadRef`

why:
  The storage vertex ID is fully controlled by Aristo while the
  `AristoAccount` object is controlled by the application. With the
  storage root being part of the `AristoAccount` object, there was a useless
  administrative burden of keeping that storage root field up to date.

* Remove cruft, update comments etc.

* Update changed MPT access paradigms

why:
  Fixes verified proxy tests

* Fluffy cosmetics
2024-06-27 09:01:26 +00:00
Jacek Sieka 6b68ff92d3
Allocation-free nibbles buffer (#2406)
This buffer eliminates a large part of allocations during MPT traversal,
reducing overall memory usage and GC pressure.

Ideally, we would use it throughout in the API instead of
`openArray[byte]` since the built-in length limit appropriately exposes
the natural 64-nibble depth constraint that `openArray` fails to
capture.
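
An illustrative fixed-capacity buffer (not the actual implementation): a path in a 32-byte key space is at most 64 nibbles deep, so a stack-allocated array plus a length field suffices and no heap allocation is needed:

```nim
type
  NibblesBuf = object
    len: int
    data: array[64, byte]   # one nibble (0..15) per entry, lives on the stack

proc add(n: var NibblesBuf, nibble: byte) =
  assert n.len < 64 and nibble < 16
  n.data[n.len] = nibble
  inc n.len

proc fromBytes(bytes: openArray[byte]): NibblesBuf =
  ## Split each byte of a (hashed) path into its high and low nibble.
  assert bytes.len <= 32
  for b in bytes:
    result.add(b shr 4)
    result.add(b and 0x0f'u8)
```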
2024-06-22 22:33:37 +02:00
Jordan Hrycaj e7be0d185c
Aristo uses pre classified tree types cont2 (#2397)
* Provide dedicated functions for fetching accounts and storage trees

why:
  Different prototypes for each class `account`, `generic` and
  `storage`.

* Remove `fetchPayload()` and other cruft from API, `aristo_fetch`, etc.

* Fix typos, debugging leftovers, comments
2024-06-19 12:40:00 +00:00
Jordan Hrycaj f926222fec
Aristo cull journal related stuff (#2288)
* Remove all journal related stuff

* Refactor function names journal*() => delta*(), filter*() => delta*()

* remove `trg` field from `FilterRef`

why:
  Same as `kMap[$1]`

* Re-type FilterRef.src as `HashKey`

why:
  So it is directly comparable to `kMap[$1]`

* Moved `vGen[]` field from `LayerFinalRef` to `LayerDeltaRef`

why:
  Then a separate `FilterRef` type is not needed anymore

* Rename `roFilter` field in `AristoDbRef` => `balancer`

why:
  New name more appropriate.

* Replace `FilterRef` by `LayerDeltaRef` type

why:
  This makes it possible to avoid copying into the `balancer` (see next patch set)
  most of the time. Typically, only one instance is running on the backend
  and the `balancer` is only used as a stage before saving data.

* Refactor way how to store data persistently

why:
  Avoid a useless copy when staging the `top` layer for persistent saving
  to the backend.

* Fix copyright header?
2024-06-03 20:10:35 +00:00
Jordan Hrycaj bda760f41d
Run coredb without journal (#2266)
* Add persistent last state stamp feature

why:
  This allows running `CoreDb` without a journal

* Start `CoreDb` without journal

* Remove journal related functions from `CoreDb`
2024-05-31 17:32:22 +00:00
Jordan Hrycaj 587ca3abbe
Coredb use stackable api for aristo backend (#2060)
* Aristo/Kvt: Provide function hooks APIs

why:
  These APIs can be used for installing tracers, profiling functionality,
  and other niceties on the databases.

* Aristo: Provide optional API profiling

details:
  It basically is a re-implementation of the `CoreDb` profiling
  implementation

* Kvt: Provide optional API profiling similar to `Aristo`

* CoreDb: Re-implementing profiling using `aristo_profile`

* Ledger: Re-implementing profiling using `aristo_profile`

* CoreDb: Update unit tests for maintainability

* update copyright dates
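
Roughly, routing the API through a table of function pointers lets a profiler wrap each entry with timing without touching call sites; a sketch with made-up names, not the actual `aristo_profile` API:

```nim
import std/[monotimes, tables, times]

type
  FetchFn = proc(key: string): string
  Api = object
    fetch: FetchFn          # API calls are routed through this jump table entry

var profile: Table[string, Duration]

proc withTiming(name: string, fn: FetchFn): FetchFn =
  ## Wrap an existing API function so that its accumulated run time is
  ## recorded under `name`.
  result = proc(key: string): string =
    let t0 = getMonoTime()
    result = fn(key)
    profile.mgetOrPut(name, DurationZero) += getMonoTime() - t0

proc installProfiler(api: var Api) =
  api.fetch = withTiming("fetch", api.fetch)
```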
2024-02-29 21:10:24 +00:00
Jordan Hrycaj 2c35390bdf
Core db and aristo maintenance update (#2014)
* Aristo: Update error return code

why:
  The `Aristo` function `delete()` might fail because there is
  no such data item on the db. This must return a single error code,
  as is done with `fetch()`.

* Ledger: Better error handling

why:
  The `expect()` clauses have been replaced by raising asserts indicating
  the error from the database backend.

   Also, `delete()` failures are legitimate if the item to delete does not
   exist.

* Aristo: Delete function must always leave a label on DB for `hashify()`

why:
  The `hashify()` function uses the labels left by `merge()` and `delete()` to
  compile (and optimise) a scheduler for subsequent hashing.

  Originally, the labels were not used for deleted entries and `delete()`
  still had some edge case where the deletion label was not properly
  handled.

* Aristo: Update `hashify()` scheduler, remove buggy optimisation

why:
  Was left over from a version without virtual state roots which did not
  know about account payload leaf vertices referring to storage roots.

* Aristo: Label storage trie account in `delete()` similar to `merge()`

details:
  The `delete()` function applied to a non-static state root (assumed
  to be a storage root) will check the payload of an account's leaf
  and mark its Merkle keys to be re-checked when running `hashify()`

* Aristo: Clean up and re-org recycled vertex IDs in `hashify()`

why:
  Re-organising the recycled vertex IDs list is intended to reduce the size
  of the list.

  This list is organised as a LIFO (or stack.) By reorganising it so that
  the lowest vertex ID numbers are on top, the list is kept smaller, as
  observed on some examples (less than 30%.)

* CoreDb: Accept storage trie deletion requests in non-initialised state

why:
  Due to lazy initialisation, the root vertex ID might not yet exist. So
  the `Aristo` database handlers would reject this call with an error and
  this condition needs to be handled by the API (which realises the lazy
  feature.)

* Cosmetics & code massage, prettify logging

* fix missing import
2024-02-08 16:32:16 +00:00
Jordan Hrycaj 3b306a9689
Aristo: Update unit test suite (#2002)
* Aristo: Update unit test suite

* Aristo/Kvt: Fix iterators

why:
  Generic iterators were not properly updated after backend change

* Aristo: Add sub-trie deletion functionality

why:
  For storage tries linked to an account payload vertex ID, the
  whole storage trie needs to be deleted with the account.

* Aristo: Reserve vertex ID numbers for static custom state roots

why:
  Static custom state roots may be controlled by an application,
  e.g. for a receipt or a transaction root. The `Aristo` functions
  are agnostic of what the static state roots are when different
  from the internal tree vertex ID 1.

details:
  The `merge()` function applied to a non-static state root (assumed
  to be a storage root) will check the payload of an account's leaf
  and mark its Merkle keys to be re-checked.

* Aristo: Correct error code symbol

* Aristo: Update error code symbols

* Aristo: Code cosmetics/comments

* Aristo: Fix hashify schedule calculator

why:
  Had a tendency to stop early leaving an incomplete job
2024-02-01 21:27:48 +00:00
Jordan Hrycaj a1161b537b
Core db update storage root management for sub tries (#1964)
* Aristo: Re-phrase `LayerDelta` and `LayerFinal` as object references

why:
  Avoids copying in some cases

* Fix copyright header

* Aristo: Verify `leafTie.root` function argument for `merge()` proc

why:
  Zero root will lead to inconsistent DB entry

* Aristo: Update failure condition for hash labels compiler `hashify()`

why:
  A node need not be rejected as long as links are on the schedule. In
  that case, `redo[]` is to become `wff.base[]` at a later stage.

  This amends an earlier fix, part of #1952, by also testing against
  the target nodes of the `wff.base[]` sets.

* Aristo: Add storage root glue record to `hashify()` schedule

why:
  An account leaf node might refer to a non-resolvable storage root ID.
  Storage root node chains will end up at the storage root. So the link
  `storage-root->account-leaf` needs an extra item in the schedule.

* Aristo: fix error code returned by `fetchPayload()`

details:
  The final error code is implied by the error code from the `hikeUp()`
  function.

* CoreDb: Discard `createOk` argument in API `getRoot()` function

why:
  Not needed for the legacy DB. For the `Aristo` DB, a lazy approach is
  implemented where a storage root node is created on-the-fly.

* CoreDb: Prevent `$$` logging in some cases

why:
  Logging the function `$$` is not useful when it is used internally,
  i.e. for retrieving an error text for logging.

* CoreDb: Add `tryHashFn()` to API for pretty printing

why:
  Pretty printing must not change the hashification status for the
  `Aristo` DB. So there is an independent API wrapper for getting the
  node hash which never updates the hashes.

* CoreDb: Discard `update` argument in API `hash()` function

why:
  When calling the API function `hash()`, the latest state is always
  wanted. For a version that uses the current state as-is without checking,
  the function `tryHash()` was added to the backend.

* CoreDb: Update opaque vertex ID objects for the `Aristo` backend

why:
  For `Aristo`, vID objects encapsulate a numeric `VertexID`
  referencing a vertex (rather than a node hash as used on the
  legacy backend.) For storage sub-tries, there might be no initial
  vertex known when the descriptor is created. So opaque vertex ID
  objects are supported without a valid `VertexID` which will be
  initialised on-the-fly when the first item is merged.

* CoreDb: Add pretty printer for opaque vertex ID objects

* Cosmetics, printing profiling data

* CoreDb: Fix segfault in `Aristo` backend when creating MPT descriptor

why:
  Missing initialisation error

* CoreDb: Allow MPT to inherit shared context on `Aristo` backend

why:
  Creates descriptors with different storage roots for the same
  shared `Aristo` DB descriptor.

* Cosmetics, update diagnostic message items for `Aristo` backend

* Fix Copyright year
2024-01-11 19:11:38 +00:00
Jordan Hrycaj 3e88589eb1
Optional accounts cache module for creating genesis (#1897)
* Split off `ReadOnlyStateDB` from `AccountStateDB` from `state_db.nim`

why:
  Apart from testing, applications use `ReadOnlyStateDB` as an easy
  way to access the accounts ledger. This is well supported by the
  `Aristo` db, but writable mode is only partially supported.

  The writable `AccountStateDB` object for modifying accounts is not
  used by production code.

  So, for legacy and testing apps, the full support of the previous
  `AccountStateDB` is now enabled by `import db/state_db/read_write`,
  while `import db/state_db` provides read-only mode.

* Encapsulate `AccountStateDB` as `GenesisLedgerRef` for genesis creation

why:
  `AccountStateDB` has poor support for `Aristo` and is not widely used;
  `AccountsLedger` (which will be abstracted as `ledger`) is favoured instead.

  Currently, using ledgers other than `AccountStateDB` within the
  `GenesisLedgerRef` wrapper is experimental and test-only. Eventually,
  the wrapper should disappear so that the `Ledger` object (which
  encapsulates `AccountsCache` and `AccountsLedger`) will prevail.

* For the `Ledger`, provide access to raw accounts `MPT`

why:
  This gives access to the `CoreDbMptRef` descriptor from the `CoreDb`
  (which is the legacy version of `CoreDxMptRef`.) For the new `ledger` API,
  the accounts are based on the `CoreDxMAccRef` descriptor which uses a
  particular sub-system for accounts, while legacy applications use the
  `CoreDbPhkRef` equivalent of the `SecureHexaryTrie`.

  The only place where this feature will currently be used is the
  `genesis.nim` source file.

* Fix `Aristo` bugs, missing boundary checks, typos, etc.

* Verify root vertex in `MPT` and account constructors

why:
  Was missing so far; in particular, the accounts constructor must
  verify `VertexID(1)`

* Fix include file
2023-11-20 11:51:43 +00:00
Jordan Hrycaj 6e0397e276
Aristo and ledger small updates (#1888)
* Fix debug noise in `hashify()` for perfectly normal situation

why:
  Was previously considered a fixable error

* Fix test sample file names

why:
  The larger test file `goerli68161.txt.gz` is already in the local
  archive. So there is no need to use the smaller one from the external
  repo.

* Activate `accounts_cache` module from `db/ledger`

why:
  A copy of the original `accounts_cache.nim` source to be integrated
  into the `Ledger` module wrapper, which allows switching between
  different `accounts_cache` implementations under the same API.

details:
  At a later stage, the `db/accounts_cache.nim` wrapper will be
  removed so that there is only one access to that module via
  `db/ledger/accounts_cache.nim`.

* Fix copyright headers in source code
2023-11-08 16:52:25 +00:00
Jordan Hrycaj 3fe0a49a5e
Aristo db allow shorter than 64 nibbles path keys (#1864)
* Aristo: Single `FetchPathNotFound` error in `fetchXxx()` and `hasPath()`

why:
  A missing path hike returns too many detailed reasons why it failed,
  which becomes cumbersome to handle.

also:
  Renamed `contains()` => `hasPath()`, which disables the `in` operator on
  non-boolean `contains()` functions

* Kvt: Renamed `contains()` => `hasKey()`

why:
  which disables the `in` operator on non-boolean `contains()` functions

* Aristo: Generalising `HashID` by variable length `PathID`

why:
  There are cases when the `Aristo` database is to be used with
  keys shorter than 64 nibbles, e.g. when handling transaction indexes
  with sequence IDs.

caveat:
  This patch only works reliably for full-length `PathID` values. Tests
  for shorter `PathID` values are currently missing.
2023-10-27 22:36:51 +01:00
Jordan Hrycaj 6d132811ba
Core db update providing additional results code interface (#1776)
* Split `core_db/base.nim` into several sources

* Rename `core_db/legacy.nim` => `core_db/legacy_db.nim`

* Update `CoreDb` API, dual methods returning `Result[]` or plain value

detail:
  Plain value methods implement the legacy API; they raise a defect on error results

* Redesign `CoreDB` direct backend access

why:
  Made the `backend` directive an integral part of the API

* Discontinue providing unused or otherwise available functions

details:
+ setTransactionID() removed, not used and not easily replicable in Aristo
+ maybeGet() removed, available via direct backend access
+ newPhk() removed, never used & was experimental anyway

* Update/reorg backend API

why:
+ Added error print function `$$()`
+ General descriptor completion (and optional validation) via `bless()`

* Update `Aristo`/`Kvt` exception handling

why:
  Avoid `CatchableError` exceptions; rather, pass them on as error codes
  where appropriate.

* More `CoreDB` compliant `Aristo` and `Kvt` methods

details:
+ Providing functions like `contains()`, `getVtxRc()` (returns `Result[]`).
+ Additional error code: `NotImplemented`

* Rewrite/reorg of Aristo DB constructor

why:
  Previously, the global object `DefaultQidLayoutRef` was used as the
  default initialiser. This object was created at compile time, which led
  to non-GC-safe functions.

* Update nimbus/db/core_db/legacy_db.nim

Co-authored-by: Kim De Mey <kim.demey@gmail.com>

* Update nimbus/db/aristo/aristo_transcode.nim

Co-authored-by: Kim De Mey <kim.demey@gmail.com>

* Update nimbus/db/core_db/legacy_db.nim

Co-authored-by: Kim De Mey <kim.demey@gmail.com>

---------

Co-authored-by: Kim De Mey <kim.demey@gmail.com>
2023-09-26 10:21:13 +01:00
Jordan Hrycaj 6bc55d4e6f
Core db aristo and kvt updates preparing for integration (#1760)
* Kvt: Implemented multi-descriptor access on the same backend

why:
  This behaviour mirrors that of Aristo and can be used for
  simultaneous transactions on Aristo + Kvt

* Kvt: Update database iterators

why:
  Forgot to run on the top layer first

* Kvt: Misc fixes

* Aristo, use `openArray[byte]` rather than `Blob` in prototype

* Aristo, by default hashify right after cloning descriptor

why:
  Typically, a completed descriptor is expected after cloning. Hashing
  can be suppressed by an argument flag.

* Aristo provides `replicate()` iterator, similar to legacy `replicate()`

* Aristo API fixes and updates

* CoreDB: Rename `legacy_persistent` => `legacy_rocksdb`

why:
  More systematic, will be in line with Aristo DB which might have
  more than one persistent backend

* CoreDB: Prettify API sources

why:
  Easier to read and maintain

details:
  Annotating with custom pragmas which cleans up the prototypes

* CoreDB: Update MPT/put() prototype allowing `CatchableError`

why:
  Will be needed for Aristo API (legacy is OK with `RlpError`)
2023-09-18 21:20:28 +01:00
Jordan Hrycaj cd1d370543
Aristo db api extensions for use as core db backend (#1754)
* Update docu

* Update Aristo/Kvt constructor prototype

why:
  Previous version used an `enum` value to indicate what backend is to
  be used. This was replaced by using the backend object type.

* Rewrite `hikeUp()` return code into `Result[Hike,(Hike,AristoError)]`

why:
  Better code maintenance. Previously, the `Hike` object was returned. It
  had an internal error field so partial success was also available on
  a failure. This error field has been removed.

* Use `openArray[byte]` rather than `Blob` in functions prototypes

* Provide synchronised multi instance transactions

why:
  The `CoreDB` object was geared towards the legacy DB which used a single
  transaction for the key-value backend DB. Different state roots are
  provided by the backend database, so all instances work directly on the
  same backend.

  Aristo db instances have different in-memory mappings (aka different
  state roots) and the transactions are on top of their mappings. So each
  instance might run different transactions.

  Multi instance transactions are a compromise to converge towards the
  legacy behaviour. The synchronised transactions span over all instances
  available at the time when the base transaction was opened. Instances
  created later are unaffected.

* Provide key-value pair database iterator

why:
  Needed in `CoreDB` for `replicate()` emulation

also:
  Some update of internal code

* Extend API (i.e. prototype variants)

why:
  Needed for `CoreDB` geared towards the legacy backend which has a more
  basic API than Aristo.
2023-09-15 16:23:53 +01:00
Jordan Hrycaj 71c91e2280
Aristo db refactor tx paradim (#1674)
* Better error handling

why:
  Bail out on errors as early as possible, before any changes.

* Implement `fetch()` as opposite of `merge()`

rationale:
  In the `Aristo` realm, the actions named `fetch()` and `merge()` indicate
  leaf value related actions on the MPT, while the actions `get()` and `put()`
  handle vertex or hash key related operations that constitute the MPT.

* Re-factor `merge()` prototypes

why:
  The most used variant of `merge()` should have the simplest prototype.

* Persistent DB constructor needs to import `aristo/aristo_init/persistent`

why:
  Most applications use the memory DB anyway. This avoids linking `-lrocksdb`
  or any other back-end libraries by default.

* Re-factor transaction module

why:
  Got the paradigm wrong. The transaction descriptor did replace the
  database one but should be handled separately.
2023-08-07 18:45:23 +01:00