Commit Graph

34 Commits

Author SHA1 Message Date
andri lim c41206be39
Fix styles and reduce compiler warnings (#2086)
* Fix styles and reduce compiler warnings

* Fix copyright year
2024-03-20 14:35:38 +07:00
Jordan Hrycaj 587ca3abbe
Coredb use stackable api for aristo backend (#2060)
* Aristo/Kvt: Provide function hooks APIs

why:
  These APIs can be used for installing tracers, profiling functionality,
  and other niceties on the databases.

* Aristo: Provide optional API profiling

details:
  It is essentially a re-implementation of the existing `CoreDb` profiling
  code (a sketch of such an API profiler follows this entry).

* Kvt: Provide optional API profiling similar to `Aristo`

* CoreDb: Re-implementing profiling using `aristo_profile`

* Ledger: Re-implementing profiling using `aristo_profile`

* CoreDb: Update unit tests for maintainability

* update copyright dates
2024-02-29 21:10:24 +00:00
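
The API profiling mentioned in this entry amounts to wrapping each database call with a timer and accumulating per-function statistics. A minimal sketch of that idea in Nim, with made-up names rather than the actual `aristo_profile` API:

```nim
import std/[tables, monotimes, times]

type
  ProfStats = object
    count: int            # number of calls seen
    elapsed: Duration     # total time spent in those calls

var profTab: Table[string, ProfStats]

template profiled(name: string; body: untyped) =
  ## Run `body` and charge its wall-clock time to `name`.
  let start = getMonoTime()
  body
  let spent = getMonoTime() - start
  var stats = profTab.getOrDefault(name)
  stats.count.inc
  stats.elapsed = stats.elapsed + spent
  profTab[name] = stats

when isMainModule:
  profiled "fetch":
    discard               # stand-in for a real database API call
  echo profTab["fetch"].count   # => 1
```
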
Jordan Hrycaj 8e18e85288
Aristodb remove obsolete and time consuming admin features (#2048)
* Aristo: Reorg `hashify()` using different schedule algorithm

why:
  Directly calculating the search tree top down from the roots turns
  out to be faster than using the cached structures left over by `merge()`
  and `delete()`.
  The time gain is just short of 20%

* Aristo: Remove `lTab[]` leaf entry object type

why:
  Not used anymore. It was previously needed to build the schedule for
  `hashify()`.

* Aristo: Avoid unnecessary re-org of the vertex ID recycling list

why:
  This list can become quite large, so a heuristic is employed to decide
  whether a re-org makes sense.

  Also, the re-org check is only done by the `delete()` functions.

* Aristo: Remove key/reverse lookup table from tx layers

why:
  It is ignored except for handling proof nodes and costs unnecessary
  run-time resources.

  This feature was originally only needed to ease the mental transition
  from the legacy MPT to the `Aristo` trie :).

* Fix copyright year
2024-02-22 08:24:58 +00:00
Jordan Hrycaj 1b4a43c140
Aristo db remove over engineered object type (#2027)
* CoreDb: update test suite

* Aristo: Simplify reverse key map

why:
  The reverse key map `pAmk: (root,key) -> {vid,..}` has been simplified to
  `pAmk: key -> {vid,..}` because the state `root` domain argument is not
  used anymore (a sketch of the simplified map follows this entry).

* Aristo: Remove `HashLabel` object type and replace it by `HashKey`

why:
  The `HashLabel` object attaches a root hash to a hash key. It is not
  used anywhere anymore.

* Fix copyright
2024-02-14 19:11:59 +00:00
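
For illustration, the simplified reverse key map is just a table from hash key to the set of vertex IDs carrying that key. A minimal sketch with invented helper names (the real `Aristo` layer types differ):

```nim
import std/[tables, sets]

type
  VertexID = uint64
  HashKey  = array[32, byte]                  # Keccak hash of a node

var pAmk: Table[HashKey, HashSet[VertexID]]   # key -> {vid,..}

proc link(key: HashKey; vid: VertexID) =
  ## Record that vertex `vid` currently carries Merkle key `key`.
  pAmk.mgetOrPut(key, initHashSet[VertexID]()).incl vid

proc unlink(key: HashKey; vid: VertexID) =
  ## Remove the entry again, dropping the set once it becomes empty.
  pAmk.withValue(key, vids):
    vids[].excl vid
    if vids[].len == 0:
      pAmk.del key
```
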
Jordan Hrycaj 2c35390bdf
Core db and aristo maintenance update (#2014)
* Aristo: Update error return code

why:
  The `Aristo` function `delete()` might fail because there is no such
  data item on the DB. This must return a single error code, as is done
  with `fetch()`.

* Ledger: Better error handling

why:
  The `expect()` clauses have been replaced by raising asserts indicating
  the error from the database backend.

  Also, `delete()` failures are legitimate if the item to delete does not
  exist.

* Aristo: Delete function must always leave a label on DB for `hashify()`

why:
  `hashify()` uses the labels left by `merge()` and `delete()` to
  compile (and optimise) a scheduler for subsequent hashing.

  Originally, the labels were not used for deleted entries and `delete()`
  still had some edge cases where the deletion label was not handled
  properly.

* Aristo: Update `hashify()` scheduler, remove buggy optimisation

why:
  This was left over from a version without virtual state roots which did
  not yet know about account payload leaf vertices referring to storage
  roots.

* Aristo: Label storage trie account in `delete()` similar to `merge()`

details:
  The `delete()` function applied to a non-static state root (assumed
  to be a storage root) will check the payload of an account's leaf
  and mark its Merkle keys to be re-checked when running `hashify()`

* Aristo: Clean up and re-org recycled vertex IDs in `hashify()`

why:
  Re-organising the recycled vertex ID list is intended to reduce its
  size (a sketch of the re-org follows this entry).

  The list is organised as a LIFO (i.e. a stack). By re-ordering it so
  that the smallest vertex ID numbers are on top, the list stays smaller,
  as observed on some examples (less than 30%).

* CoreDb: Accept storage trie deletion requests in non-initialised state

why:
  Due to lazy initialisation, the root vertex ID might not yet exist. So
  the `Aristo` database handlers would reject this call with an error and
  this condition needs to be handled by the API (which realises the lazy
  feature.)

* Cosmetics & code massage, prettify logging

* fix missing import
2024-02-08 16:32:16 +00:00
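
The recycle-list re-org described above can be pictured as sorting the stack so that the smallest free vertex IDs are handed out first. A toy sketch (the heuristic deciding when a re-org is worthwhile is omitted):

```nim
import std/algorithm

proc reorg(recycled: var seq[uint64]) =
  ## The list is a stack popped from the end, so a descending sort
  ## leaves the smallest recycled vertex IDs on top.
  recycled.sort(order = SortOrder.Descending)

when isMainModule:
  var free = @[7'u64, 3, 12, 5]
  free.reorg()
  doAssert free.pop() == 3      # the smallest free ID is re-used first
```
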
Jordan Hrycaj 3b306a9689
Aristo: Update unit test suite (#2002)
* Aristo: Update unit test suite

* Aristo/Kvt: Fix iterators

why:
  Generic iterators were not properly updated after backend change

* Aristo: Add sub-trie deletion functionality

why:
  For storage tries linked to an account payload vertex ID, the whole
  storage trie needs to be deleted together with the account (a deletion
  sketch follows this entry).

* Aristo: Reserve vertex ID numbers for static custom state roots

why:
  Static custom state roots may be controlled by an application,
  e.g. for a receipt or a transaction root. The `Aristo` functions
  are agnostic of what the static state roots are when different
  from the internal tree vertex ID 1.

details:
  The `merge()` function applied to a non-static state root (assumed
  to be a storage root) will check the payload of an account's leaf
  and mark its Merkle keys to be re-checked.

* Aristo: Correct error code symbol

* Aristo: Update error code symbols

* Aristo: Code cosmetics/comments

* Aristo: Fix hashify schedule calculator

why:
  It had a tendency to stop early, leaving an incomplete job
2024-02-01 21:27:48 +00:00
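
The sub-trie deletion added above boils down to a recursive walk below the storage root. A simplified sketch over a made-up flat vertex table (the real implementation works on the layered `Aristo` backend):

```nim
import std/tables

type
  VertexID = uint64
  Vertex = object
    kids: seq[VertexID]       # child vertex IDs, empty for a leaf

proc delSubTrie(db: var Table[VertexID, Vertex]; root: VertexID) =
  ## Remove the vertex `root` and everything reachable below it.
  if root notin db:
    return
  let kids = db[root].kids    # copy the child list before mutating
  db.del root
  for kid in kids:
    db.delSubTrie kid
```
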
Jordan Hrycaj a1161b537b
Core db update storage root management for sub tries (#1964)
* Aristo: Re-phrase `LayerDelta` and `LayerFinal` as object references

why:
  Avoids copying in some cases

* Fix copyright header

* Aristo: Verify `leafTie.root` function argument for `merge()` proc

why:
  Zero root will lead to inconsistent DB entry

* Aristo: Update failure condition for hash labels compiler `hashify()`

why:
  A node need not be rejected as long as its links are on the schedule. In
  that case, `redo[]` will become `wff.base[]` at a later stage.

  This amends an earlier fix (part of #1952) by also testing against
  the target nodes of the `wff.base[]` sets.

* Aristo: Add storage root glue record to `hashify()` schedule

why:
  An account leaf node might refer to a non-resolvable storage root ID.
  Storage root node chains will end up at the storage root. So the link
  `storage-root->account-leaf` needs an extra item in the schedule.

* Aristo: fix error code returned by `fetchPayload()`

details:
  The final error code is implied by the error code from the `hikeUp()`
  function.

* CoreDb: Discard `createOk` argument in API `getRoot()` function

why:
  Not needed for the legacy DB. For the `Aristo` DB, a lazy approach is
  implemented where a storage root node is created on-the-fly.

* CoreDb: Prevent `$$` logging in some cases

why:
  Logging the function `$$` is not useful when it is called internally,
  i.e. to retrieve an error text for logging.

* CoreDb: Add `tryHashFn()` to API for pretty printing

why:
  Pretty printing must not change the hashification status of the
  `Aristo` DB. So there is an independent API wrapper for getting the
  node hash which never updates the hashes.

* CoreDb: Discard `update` argument in API `hash()` function

why:
  When calling the API function `hash()`, the latest state is always
  wanted. For a version that uses the current state as-is without checking,
  the function `tryHash()` was added to the backend.

* CoreDb: Update opaque vertex ID objects for the `Aristo` backend

why:
  For `Aristo`, vID objects encapsulate a numeric `VertexID`
  referencing a vertex (rather than a node hash as used on the
  legacy backend.) For storage sub-tries, there might be no initial
  vertex known when the descriptor is created. So opaque vertex ID
  objects are supported without a valid `VertexID` which will be
  initalised on-the-fly when the first item is merged.

* CoreDb: Add pretty printer for opaque vertex ID objects

* Cosmetics, printing profiling data

* CoreDb: Fix segfault in `Aristo` backend when creating MPT descriptor

why:
  Missing initialisation error

* CoreDb: Allow MPT to inherit shared context on `Aristo` backend

why:
  Creates descriptors with different storage roots for the same
  shared `Aristo` DB descriptor.

* Cosmetics, update diagnostic message items for `Aristo` backend

* Fix Copyright year
2024-01-11 19:11:38 +00:00
Jordan Hrycaj ffa8ad2246
Core db use differential tx layers for aristo and kvt (#1949)
* Fix kvt headers

* Provide differential layers for KVT transaction stack

why:
  Significant performance improvement

* Provide abstraction layer for database top cache layer

why:
  This will eventually be implemented as differential database layers
  or transaction layers. The latter are needed to improve performance
  (a lookup sketch follows this entry).

behavioural changes:
  Zero vertices and keys (i.e. delete requests) are not optimised out
  until the last layer is written to the database.

* Provide differential layers for Aristo transaction stack

why:
  Significant performance improvement
2023-12-19 12:39:23 +00:00
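
A minimal sketch of the differential-layer idea (names invented): each transaction level stores only its own delta, lookups fall back through the stack, and deletions stay visible as void entries until the bottom layer is persisted.

```nim
import std/[tables, options]

type
  Layer = Table[seq[byte], seq[byte]]   # an empty value acts as a delete marker

proc lookup(stack: seq[Layer]; key: seq[byte]): Option[seq[byte]] =
  ## Search the top-most layer first and fall back towards the bottom.
  for i in countdown(stack.high, 0):
    if key in stack[i]:
      let val = stack[i][key]
      if val.len > 0:
        result = some(val)    # found a live entry
      return                  # an empty value means: deleted higher up
  # not found on any layer: result stays none()

when isMainModule:
  var stack = @[Layer(), Layer()]
  stack[0][@[1'u8]] = @[42'u8]
  stack[1][@[1'u8]] = newSeq[byte]()    # deleted in the top layer
  doAssert stack.lookup(@[1'u8]).isNone
```
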
Jordan Hrycaj 13f51939f6
Core db aristo hasher profiling and timing improvement (#1938)
* Explicitly use shared `Kvt` table on `Ledger` and `Clique` lookup.

why:
  Speeds up lookup time with the `Aristo` backend. For writing `Clique`
  data, the `Companion` model allows writing `Clique` data past the
  database locked by EVM transactions.

* Implement `CoreDb` profiling with API tracking

why:
  Chasing time spent per API proc ...

* Implement `Ledger` profiling with API tracking

why:
  Chasing time spent per API proc ...

* Always hashify when committing or storing

why:
  A dirty cache makes no sense when committing

* Make sure that a zero key is created when adding/updating vertices

why:
  This is an error fix mainly for edge cases. A typical error was
  that the root key got deleted when there were only a few vertices
  left on the DB.

* Need all created and changed vertices zero-keyed on the cache

why:
  A zero key (i.e. empty Merkle hash) indicates that a vertex key
  needs to be updated (a sketch of this convention follows this entry).
  This would not be needed immediately after a merge as there is an
  actual leaf path on the cache layer. But after subsequent merge and
  delete operations this information might get blurred.

* Re-org hashing algorithm

why:
  Apart from errors, the previous implementation was too slow for
  two reasons:
  + some control hashes were calculated for debugging (now all
    verification is done in `aristo_check` module)
  + the leaf paths stored on the cache are used to build the
    labelling (aka hashing) schedule; these paths were accumulated
    over successive hash sessions although it is clear that all
    keys had already been generated
2023-12-12 17:47:41 +00:00
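
The zero-key convention referenced above is a per-vertex marker: any created or changed vertex gets a void Merkle key, and the hasher later recomputes exactly those. A hypothetical, much simplified sketch of the cache-layer bookkeeping:

```nim
import std/tables

type
  VertexID = uint64
  HashKey  = array[32, byte]

var kMap: Table[VertexID, HashKey]    # cached Merkle key per vertex

proc markChanged(vid: VertexID) =
  ## Created/updated vertex: its Merkle key must be (re)computed.
  kMap[vid] = default(HashKey)        # zero key == empty Merkle hash

proc needsHashing(vid: VertexID): bool =
  vid in kMap and kMap[vid] == default(HashKey)
```
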
Jordan Hrycaj 657379f484
Aristo db update merkle hasher (#1925)
* Register paths for added leafs because of trie re-balancing

why:
  While the payload would not change, the prefix in the leaf vertex
  would. So it needs to be flagged for hash recompilation for the
  `hashify()` module.

also:
  Make sure that `Hike` paths which might have vertex links into the
  backend filter are replaced by vertex copies before manipulating.
  Otherwise the vertices on the immutable filter might be involuntarily
  changed.

* Also check for paths where the leaf vertex is on the backend, already

why:
  A path can have some vertices on the top layer cache with the
  `Leaf` vertex on the backend.

* Re-define a void `HashLabel` type.

why:
  A `HashLabel` type is a pair `(root-vertex-ID, Keccak-hash)`. Previously,
  a valid `HashLabel` consisted of a non-empty hash and a non-zero vertex
  ID. This definition leads to a non-unique representation of a void
  `HashLabel`, with either the root ID or the hash being void. This has
  been changed so that a `HashLabel` is void exactly if the hash entry is
  void.

* Update consistency checkers

* Re-org `hashify()` procedure

why:
  Syncing against the block chain revealed serious deficiencies which
  produced wrong hashes or simply bailed out with an error.

  So all fringe cases (mainly due to deleted entries) could be integrated
  into the labelling schedule rather than being handled separately.
2023-12-04 20:39:26 +00:00
Jordan Hrycaj 5462c05dc6
Core db update api tracking (#1907)
* Fix copyright year

* Show elapsed times with enabled `CoreDb` API tracking

* Show elapsed times with enabled `LedgerRef` API tracking

* Reorg `CoreDb` auto destructors for `Aristo` DB

why:
  While `Aristo` supports some parallelism for concurrent database access,
  this comes at the price of management overhead. With a naive approach,
  the auto-destructor will slow down execution because the ledger and
  EVM treat the database in a shared mode where a DB descriptor is just
  created and thrown away shortly after.

  This is reflected in the `CoreDb` abstraction layer above `Aristo`/`Kvt`
  where a few `Shared` type descriptors are cached and a shared reference
  is returned rather than a disposable new object.

* For `CoreDb` support transaction level tracking

details:
  This is mainly an extra for the legacy DB as `Aristo` and `Kvt` support
  this already.

  Also return an error on the legacy DB backend when `persistent()` is
  called while there are transactions pending (the `persistent()` call
  does nothing otherwise on the legacy backend.)

* Clear compiler warnings (remove unused variables etc.)
2023-11-24 22:16:21 +00:00
Jordan Hrycaj c47f021596
Core db and aristo updates for destructor and tx logic (#1894)
* Disable `TransactionID` related functions from `state_db.nim`

why:
  Functions `getCommittedStorage()` and `updateOriginalRoot()` from
  the `state_db` module are nowhere used. The emulation of a legacy
  `TransactionID` type functionality is administratively expensive to
  provide by `Aristo` (the legacy DB version is only partially
  implemented, anyway).

  As there is no other place where `TransactionID`s are used, they will
  not be provided by the `Aristo` variant of the `CoreDb`. For the
  legacy DB API, nothing will change.

* Fix copyright headers in source code

* Get rid of compiler warning

* Update Aristo code, remove unused `merge()` variant, export `hashify()`

why:
  Adapt to upcoming `CoreDb` wrapper

* Remove synced tx feature from `Aristo`

why:
+ This feature allowed synchronising transaction methods like begin,
  commit, and rollback for a group of descriptors.
+ The feature is over-engineered and not needed for `CoreDb`, neither
  is it complete (some convergence features missing.)

* Add debugging helpers to `Kvt`

also:
  Update database iterator, add count variable yield argument similar
  to `Aristo`.

* Provide optional destructors for `CoreDb` API

why:
  For the upcoming Aristo wrapper, this allows controlling when certain
  smart destruction and update can take place. The auto destructor works
  fine in general when the storage/cache strategy is known and acceptable
  when creating descriptors.

* Add update option for `CoreDb` API function `hash()`

why:
  The hash function is typically used to get the state root of the MPT.
  Due to lazy hashing, this might not be available on the `Aristo` DB.
  So the `update` option asks for re-hashing the current state changes
  if needed.

* Update API tracking log mode: `info` => `debug`

* Use shared `Kvt` descriptor in new Ledger API

why:
  No need to create a new descriptor all the time
2023-11-16 19:35:03 +00:00
Jordan Hrycaj 6e0397e276
Aristo and ledger small updates (#1888)
* Fix debug noise in `hashify()` for perfectly normal situation

why:
  Was previously considered a fixable error

* Fix test sample file names

why:
  The larger test file `goerli68161.txt.gz` is already in the local
  archive. So there is no need to use the smaller one from the external
  repo.

* Activate `accounts_cache` module from `db/ledger`

why:
  A copy of the original `accounts_cache.nim` source to be integrated
  into the `Ledger` module wrapper which allows switching between
  different `accounts_cache` implementations under the same API.

details:
  At a later stage, the `db/accounts_cache.nim` wrapper will be
  removed so that there is only one access to that module via
  `db/ledger/accounts_cache.nim`.

* Fix copyright headers in source code
2023-11-08 16:52:25 +00:00
Jordan Hrycaj 4feaa2cfab
Aristo db update for short nodes key edge cases (#1887)
* Aristo: Provide key-value list signature calculator

detail:
  Simple wrappers around `Aristo` core functionality

* Update new API for `CoreDb`

details:
+ Renamed new API functions `contains()` => `hasKey()` or `hasPath()`
  which disables the `in` operator on non-boolean `contains()` functions
+ The functions `get()` and `fetch()` always return a not-found error if
  there is no item available. The new functions `getOrEmpty()` and
  `mergeOrEmpty()` return an empty `Blob` if there is no such key
  found.

* Rewrite `core_apps.nim` using new API from `CoreDb`

* Use `Aristo` functionality for calculating Merkle signatures

details:
  For debugging, the `VerifyAristoForMerkleRootCalc` option can be set so
  that `Aristo` results will be verified against the legacy versions.

* Provide general interface for Merkle signing key-value tables

details:
  Export `Aristo` wrappers

* Activate `CoreDb` tests

why:
  Now, API seems to be stable enough for general tests.

* Update `toHex()` usage

why:
  Byteutils' `toHex()` is superior to `toSeq.mapIt(it.toHex(2)).join`

* Split `aristo_transcode` => `aristo_serialise` + `aristo_blobify`

why:
+ Different modules for different purposes
+ `aristo_serialise`: RLP encoding/decoding
+ `aristo_blobify`: Aristo database encoding/decoding

* Compacted representation of small nodes' links instead of Keccak hashes

why:
  Ethereum MPTs use Keccak hashes as node links if the size of an RLP
  encoded node is at least 32 bytes. Otherwise, the RLP encoded node
  value is used as a pseudo node link (rather than a hash.) Such a node
  is not stored on the key-value database. Rather, the RLP encoded node
  value is stored in place of a node link in its parent node. Only the
  top level node is always referred to by its hash, the root hash.

  This feature needed an abstraction of the `HashKey` object which is now
  either a hash or a blob of length at most 31 bytes (a sketch follows
  this entry). This leaves two ways of representing an empty/void
  `HashKey` type, either as an empty blob of zero length, or the hash of
  an empty blob.

* Update `CoreDb` interface (mainly reducing logger noise)

* Fix copyright years (to make `Lint` happy)
2023-11-08 12:18:32 +00:00
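
The `HashKey` abstraction referenced in this entry can be pictured as a small variant type: either a 32-byte Keccak hash or the RLP blob itself when that blob is shorter than 32 bytes. A hypothetical sketch (the hash function is passed in since Keccak is not in the standard library):

```nim
type
  HashKey = object
    case isHash: bool
    of true:
      hash: array[32, byte]   # Keccak hash of the RLP encoded node
    of false:
      blob: seq[byte]         # the RLP encoding itself, at most 31 bytes

proc toLink(rlpNode: seq[byte];
            keccak: proc(data: seq[byte]): array[32, byte]): HashKey =
  ## Ethereum MPT rule: nodes whose RLP encoding is shorter than 32
  ## bytes are embedded directly instead of being referenced by hash.
  if rlpNode.len < 32:
    HashKey(isHash: false, blob: rlpNode)
  else:
    HashKey(isHash: true, hash: keccak(rlpNode))
```
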
Jordan Hrycaj 3fe0a49a5e
Aristo db allow shorter than 64 nibbles path keys (#1864)
* Aristo: Single `FetchPathNotFound` error in `fetchXxx()` and `hasPath()`

why:
  A missing path hike returns too many detailed reasons why it failed,
  which becomes cumbersome to handle.

also:
  Renamed `contains()` => `hasPath()` which disables the `in` operator on
  non-boolean `contains()` functions

* Kvt: Renamed `contains()` => `hasKey()`

why:
  This disables the `in` operator on non-boolean `contains()` functions.

* Aristo: Generalising `HashID` by variable length `PathID`

why:
  There are cases when the `Aristo` database is to be used with keys
  shorter than 64 nibbles, e.g. when handling transaction indexes with
  sequence IDs.

caveat:
  This patch only works reliably for full length `PathID` values. Tests
  for shorter `PathID` values are currently missing.
2023-10-27 22:36:51 +01:00
Jordan Hrycaj 395580ff9d
Aristo and core db updates (#1800)
* Aristo: remove obsolete functions

* Aristo: Fix error code for non-available hash keys

why:
  Must not return `not-found` when the key is not available (i.e. the
  current changes were not hashified, yet.)

* CoreDB: Provide TDD and test framework
2023-10-03 12:56:13 +01:00
Jordan Hrycaj 6bc55d4e6f
Core db aristo and kvt updates preparing for integration (#1760)
* Kvt: Implemented multi-descriptor access on the same backend

why:
  This behaviour mirrors the one of Aristo and can be used for
  simultaneous transactions on Aristo + Kvt

* Kvt: Update database iterators

why:
  Forgot to run on the top layer first

* Kvt: Misc fixes

* Aristo, use `openArray[byte]` rather than `Blob` in prototype

* Aristo, by default hashify right after cloning descriptor

why:
  Typically, a completed descriptor is expected after cloning. Hashing
  can be suppressed by an argument flag.

* Aristo provides `replicate()` iterator, similar to legacy `replicate()`

* Aristo API fixes and updates

* CoreDB: Rename `legacy_persistent` => `legacy_rocksdb`

why:
  More systematic, will be in line with Aristo DB which might have
  more than one persistent backend

* CoreDB: Prettify API sources

why:
  Better to read and maintain

details:
  Annotating with custom pragmas which cleans up the prototypes

* CoreDB: Update MPT/put() prototype allowing `CatchableError`

why:
  Will be needed for Aristo API (legacy is OK with `RlpError`)
2023-09-18 21:20:28 +01:00
Jordan Hrycaj cd1d370543
Aristo db api extensions for use as core db backend (#1754)
* Update docu

* Update Aristo/Kvt constructor prototype

why:
  Previous version used an `enum` value to indicate what backend is to
  be used. This was replaced by using the backend object type.

* Rewrite `hikeUp()` return code into `Result[Hike,(Hike,AristoError)]`

why:
  Better code maintenance. Previously, the `Hike` object was returned. It
  had an internal error field so partial success was also available on
  a failure. This error field has been removed (a sketch of the new
  return style follows this entry).

* Use `openArray[byte]` rather than `Blob` in functions prototypes

* Provide synchronised multi instance transactions

why:
  The `CoreDB` object was geared towards the legacy DB which used a single
  transaction for the key-value backend DB. Different state roots are
  provided by the backend database, so all instances work directly on the
  same backend.

  Aristo DB instances have different in-memory mappings (aka different
  state roots) and the transactions are on top of these mappings. So each
  instance might run different transactions.

  Multi instance transactions are a compromise to converge towards the
  legacy behaviour. The synchronised transactions span all instances
  available at the time the base transaction was opened. Instances
  created later are unaffected.

* Provide key-value pair database iterator

why:
  Needed in `CoreDB` for `replicate()` emulation

also:
  Some update of internal code

* Extend API (i.e. prototype variants)

why:
  Needed for `CoreDB` geared towards the legacy backend which has a more
  basic API than Aristo.
2023-09-15 16:23:53 +01:00
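
The `hikeUp()` return style described above keeps the partially resolved path in the error branch, next to the error code. A hedged sketch assuming the nim-results package (`results`) that the code base already uses, with toy types standing in for the real `Hike`:

```nim
import results                    # nim-results, used throughout the code base

type
  AristoError = enum
    NothingSerious, PathNotFound  # toy error codes for illustration only
  Hike = object
    legs: seq[int]                # stand-in for the resolved path legs

proc hikeUp(fullyResolved: bool): Result[Hike, (Hike, AristoError)] =
  let hike = Hike(legs: @[1, 2])
  if fullyResolved:
    ok(hike)                      # success: just the path
  else:
    err((hike, PathNotFound))     # failure: partial path plus error code
```
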
Jordan Hrycaj 8e00143313
Aristo db code massage n cosmetics (#1745)
* Rewrite remaining `AristoError` return code into `Result[void,AristoError]`

why:
  Better code maintenance

* Update import sections

* Update Aristo DB paths

why:
  More systematic so the directory can be shared with other DB types

* More cosmetics

* Update unit tests runners

why:
  Proper handling of persistent and mem-only DB. The latter can be
  consistently triggered by an empty DB path.
2023-09-12 19:45:12 +01:00
Jordan Hrycaj 01fe172738
Aristo db integrate hashify into tx (#1679)
* Renamed type `NoneBackendRef` => `VoidBackendRef`

* Clarify names: `BE=filter+backend` and `UBE=backend (unfiltered)`

why:
  Most functions used full names such as `getVtxUnfilteredBackend()` or
  `getKeyBackend()`. After defining abbreviations (and their meaning) it
  seems easier to use `getVtxUBE()` and `getKeyBE()`.

* Integrate `hashify()` process into transaction logic

why:
  Is now transparent unless explicitly controlled.

details:
  Cache changes imply setting a `dirty` flag which in turn triggers
  `hashify()` processing in transaction and `pack()` directives (a
  sketch follows this entry).

* Removed `aristo_tx.exec()` directive

why:
  Inconsistent implementation, functionality will be provided with a
  different paradigm.
2023-08-11 18:23:57 +01:00
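
As referenced above, the transparent `hashify()` trigger reduces to a dirty flag checked at commit time. A minimal sketch with hypothetical names:

```nim
type
  DbRef = ref object
    dirty: bool                 # set whenever the cache is changed

proc hashify(db: DbRef) =
  ## Stand-in for the real Merkle hasher.
  db.dirty = false

proc merge(db: DbRef) =
  ## Any change to the cache marks the database as dirty.
  db.dirty = true

proc commit(db: DbRef) =
  ## Hashing is now implicit in the transaction logic.
  if db.dirty:
    db.hashify()
```
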
Jordan Hrycaj 71c91e2280
Aristo db refactor tx paradim (#1674)
* Better error handling

why:
  Bail out on an error as early as possible, before any changes.

* Implement `fetch()` as opposite of `merge()`

rationale:
  In the `Aristo` realm, the action named `fetch()` and `merge()` indicate
  leaf value related actions on the MPT, while actions `get()` and `put()`
   handle vertex or hash key related operations that constitute the MPT.

* Re-factor `merge()` prototypes

why:
  The most used variant of `merge()` should have the simplest prototype.

* Persistent DB constructor needs to import `aristo/aristo_init/persistent`

why:
  Most applications use memory DB anyway. This avoids linking `-lrocksdb`
  or any other back end libraries by default.

* Re-factor transaction module

why:
  Got the paradigm wrong. The transaction descriptor did replace the
  database one but should be handled separately.
2023-08-07 18:45:23 +01:00
Jordan Hrycaj 56d5c382d7
Aristo db traversal helpers (#1638)
* Misc fixes

detail:
* Fix de-serialisation for account leafs
* Update node recovery from unit tests

* Remove `LegacyAccount` from `PayloadRef` object

why:
  Legacy accounts use a hash key as storage root which is detrimental
  to the working of the Aristo database which uses a vertex ID.

* Dissolve `hashify_helper` into `aristo_utils` and `aristo_transcode`

why:
  Functions are of general interest so they should live in first level
  code files.

* Added left/right iterators over leaf nodes

* Some helper/wrapper functions that might be useful
2023-07-13 00:03:14 +01:00
Jordan Hrycaj ff6673beac
Aristo db tidy up a bit (#1625)
* Slightly tighten some self-check conditions

* Redefined the database descriptor object as reference (to the object)

why:
  The upcoming transaction wrapper will work with a database reference
  rather than the object itself

* Append state before `save()` to the Aristo descriptor

why:
  This state was previously returned by the function. Appending it to
  a field of the Aristo descriptor seems easier to handle.
2023-07-04 19:24:03 +01:00
Jordan Hrycaj dd1c8ed6f2
Aristo db update delete functionality (#1621)
* Fix missing branch checks in transcoder

why:
  Symmetry problem. `Blobify()` allowed for encoding degenerate branch
  vertices while `Deblobify()` rejected decoding wrongly encoded data.

* Update memory backend so that it rejects storing bogus vertices.

why:
  Error behaviour made similar to the rocks DB backend.

* Make sure that leaf vertex IDs are not repurposed

why:
  This makes it easier to record leaf node changes

* Update error return code for next()/right() traversal

why:
  Returning offending vertex ID (besides error code) helps debugging

* Update Merkle hasher for deleted nodes

why:
  Not implemented, yet

also:
  Provide cache & backend consistency check functions. This was
  partly re-implemented from `hashifyCheck()`

* Simplify some unit tests

* Fix delete function

why:
  Was conceptually wrong
2023-06-30 23:22:33 +01:00
Jordan Hrycaj 15cc9f962e
Aristo db update vertex caching when merging (#1606)
* Added missing deferred cleanup directive to sub-test functions

why:
  Rocksdb keeps the files locked for a short while leading to errors. This
  was previously solved by using different db sub-directories

* Provide vertex deep-copy function globally.

why:
  is just handy

* Avoid unnecessary vertex caching when merging proof nodes

also:
  Run all merge tests on the rocksdb backend.
  Previously, proof node tests were run without a backend.
2023-06-22 20:21:33 +01:00
Jordan Hrycaj 83dbe87159
Aristo db update foreground caching (#1605)
* Fix vertex ID generator state handling for rocksdb backend

why:
 * Key error in walk iterator
 * Needs to be loaded when opening the database

* Use non-zero sub-table prefixes for rocksdb

why:
  Handy for debugging

* Fix error code for missing key on rocksdb backend

why:
  Previously returned `VOID_HASH_KEY` rather than `GetKeyNotFound`

* Explicitly copy vertex data between internal table and function/result argument

why:
  Function argument or return reference may still refer to the same data
  object.

* Updated error symbols

why:
  Error symbol names for the hike module now start with the prefix `Hike`.

* Write back modified branch node into local top layer cache

why:
  With the backend available, the source of the branch node references
  might not be the top layer cache. So any change must be explicitly
  recorded.
2023-06-22 12:13:24 +01:00
Jordan Hrycaj 4b66f93274
Aristo db with storage backends (#1603)
* Generalised Aristo DB constructor for any type of backend

details:
  * Records to be deleted are represented as key-void (rather than
    key-value) pairs by the put-function arguments
  * Allow direct driver access, iterators as example implementation and
    for testing.

* Provide backend storage interface

details:
  Stores the top layer onto backend tables

* Implemented Rocks DB backend

details:
  Transaction based `put()` functionality
  Iterators (based on direct RocksDB access)
2023-06-20 14:26:25 +01:00
Jordan Hrycaj d7f40516a7
Detach from snap/sync declarations & definitions (#1601)
why:
  Tests and some basic components were originally borrowed from the
  snap/sync implementation. These have fully been re-implemented.
2023-06-12 19:16:03 +01:00
Jordan Hrycaj 0308dfac4f
Aristo db address sup trie items properly (#1600)
* Fix include

why:
  Eth67 is not the default yet, so that got missed

* Rename `LeafKey` => `LeafTie`

why:
  Name is a pen picture of what this object is for. Also, it avoids the
  ubiquitous term `key`.

* Provided `getOrVoid()` wrapper for `getOrDefault()`

also:
  Provide `isValid()` syntactic sugar for `.isNil.not`, `!= 0` etc.
  Reorg descriptor source, split into sub-sources

* Bundled `NodeKey` objects with root ID and called it `HashLabel`

why:
  `NodeKey` (aka repurposed Hash265) objects are unique only within a
  particular sub-trie (e.g. storage slots) which are kept separated
  (i.e non-interleaved) by design. This is not applied to the backend
  as the map VertexID->NodeKey labelling the nodes needs not be injective.

  For the in-memory database (transaction) layers, the injective map
  VertexID->(VertexID,NodeKey) is used where the first field of the image
  tuple is the root ID of the sub-trie the `NodeKey` object is valid. So
  identical storage tries for different accounts can be represented.
2023-06-12 14:48:47 +01:00
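
The labelling described above can be sketched as a per-layer table mapping a vertex ID to a `(sub-trie root, node key)` pair, so the same Keccak key may legitimately re-appear under different storage roots. Field and table names below are invented for illustration:

```nim
import std/tables

type
  VertexID  = uint64
  NodeKey   = array[32, byte]    # repurposed 256-bit hash
  HashLabel = object
    root: VertexID               # root of the sub-trie the key belongs to
    key:  NodeKey

# In-memory (transaction) layer labelling: vid -> (root, key).
# On the backend only vid -> key is stored, which need not be injective.
var kMap: Table[VertexID, HashLabel]

proc label(vid, root: VertexID; key: NodeKey) =
  kMap[vid] = HashLabel(root: root, key: key)
```
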
Jordan Hrycaj 932a2140f2
Aristo db supporting forest and layered tx architecture (#1598)
* Exclude some storage tests

why:
  These tests running on external dumps slipped through. The particular
  dumps were reported earlier as somehow dodgy.

  This was changed in `#1457` but having a second look, the change on
  hexary_interpolate.nim(350) might be incorrect.

* Redesign `Aristo DB` descriptor for transaction based layers

why:
  Previous descriptor layout made it cumbersome to push/pop
  database delta layers.

  The new architecture keeps each layer with the full delta set
  relative to the database backend.

* Keep root ID as part of the `Patricia Trie` leaf path

why:
  That way, forests are supported
2023-06-09 12:17:37 +01:00
Jordan Hrycaj 11bb33d0bc
Added delete functionality (#1596) 2023-06-02 20:21:46 +01:00
Jordan Hrycaj 099444ab3f
Aristo db fixes after storage slots dump tests added (#1595)
* Fix missing Merkle key removal in `merge()`

* Accept optional root hash argument in `hashify()`

why:
  For importing a full database, there will be no proof data except the
  root key. So this can be used to check and set the root key in the
  database descriptor.

also:
  Associate vertex ID to `hashify()` error return code

* Added Aristo Trie traversal function

why:
 * step along leaf vertices in sorted order
 * tree/trie consistency checks when debugging

* Enabled storage slots test data for Aristo DB
2023-06-02 11:04:29 +01:00
Jordan Hrycaj 2fc349feb9
Aristo db merkle hashify functionality added (#1593)
* Keep vertex ID generator state with each db-layer

why:
  The vertex ID generator state is part of the difference to the layer
  below

* Move otherwise unused source to test directory

* Add Merkle hash generator

also:
  * Verification facility for debugging
  * Empty Merkle key hashes encoded as `EMPTY_ROOT_HASH`
2023-05-30 22:21:15 +01:00
Jordan Hrycaj cd78458123
Add items to `Aristo Trie` database (#1586)
details:
1. Merging a leaf vertex merges a `Patricia Trie` path (while
   adding/modifying vertices) and adds a leaf node with payload
2. Merging a Merkle node merges a single vertex to the `Patricia Trie`
   and registers Merkle hashes
3. Action 2 can be used before action 1 in order to construct a
   Merkle proof as required for handling `snap/1` data.
4. Unit tests show that action 3 is benign for now :)
2023-05-30 12:47:47 +01:00