Commit Graph

42 Commits

tersec 73661fd8a4
switch to Nim v2.0.12 (#2817)
* switch to Nim v2.0.12

* fix LruCache capitalization for styleCheck

* KzgProof/KzgCommitment for styleCheck

* TxEip4844 for styleCheck

* styleCheck issues in nimbus/beacon/payload_conv.nim

* ENode for styleCheck

* isOk for styleCheck

* some more styleCheck fixes

* more styleCheck fixes

---------

Co-authored-by: jangko <jangko128@gmail.com>
2024-11-01 19:06:26 +00:00
Jacek Sieka 188d689d9d
Speed up initial MPT root computation after import (#2788)
When `nimbus import` runs, we end up with a database without MPT roots,
leading to long startup times the first time one is needed.

Computing the state root is slow because the on-disk order based on
VertexID sorting does not match the trie traversal order and therefore
makes lookups inefficient.

Here we introduce a helper that speeds up this computation by traversing
the trie in on-disk order and computing the trie hashes bottom up
instead - even though this leads to some redundant reads of nodes that
we cannot yet compute, it's still a net win as leaves and "bottom"
branches make up the majority of the database.
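
For illustration, the bottom-up pass boils down to something like this toy sketch (a `Table`-backed vertex store; `Vertex`, `hashOf` and the hash type are stand-ins, not the actual Aristo types):

```nim
import std/tables

type
  VertexID = uint64
  NodeHash = array[32, byte]
  Vertex = object
    children: seq[VertexID]   # empty for leaf vertices
    payload: seq[byte]

proc hashOf(vtx: Vertex; childHashes: seq[NodeHash]): NodeHash =
  discard # keccak over the encoded node would go here

proc computeBottomUp(db: Table[VertexID, Vertex]): Table[VertexID, NodeHash] =
  # scan in on-disk order, hashing every vertex whose children are done;
  # vertices hit too early are re-read on a later pass (the redundancy
  # mentioned above)
  var progress = true
  while progress:
    progress = false
    for vid, vtx in db:
      if vid in result: continue
      var ready = true
      var childHashes: seq[NodeHash]
      for c in vtx.children:
        if c notin result:
          ready = false
          break
        childHashes.add result[c]
      if ready:
        result[vid] = hashOf(vtx, childHashes)
        progress = true
```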

This PR also addresses a few other sources of inefficiency largely due
to the separation of AriKey and AriVtx into their own column families.

Each column family is its own LSM tree that produces hundreds of SST
files - with a limit of 512 open files, rocksdb must keep closing and
opening files, which leads to expensive metadata reads during random
access.

Each rocksdb lookup has to read several layers of files. Ribbon
filters allow skipping files that don't hold the requested data, but
when these filters are not in memory, reading them is slow - this
happens in two cases: when opening a file and when the filter has been
evicted from the LRU cache. Raising the open file limit removes one
source of inefficiency, but we must also increase the block cache size
to deal with this problem.

* rocksdb.max_open_files increased to 2048
* per-file size limits increased so that fewer files are created
* WAL size increased to avoid partial flushes which lead to small files
* rocksdb block cache increased

All these increases of course lead to increased memory usage, but at
least performance is acceptable - in the future, we'll need to explore
options such as joining AriVtx and AriKey and/or reducing the row count
(by grouping branch layers under a single vertexid).
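
Roughly, in terms of RocksDB's C API (the option names are RocksDB's own; the binding import path, call spellings and sizes here are assumptions, not the values Nimbus ships):

```nim
import rocksdb/lib/librocksdb   # low-level C bindings (path assumed)

let opts = rocksdb_options_create()
rocksdb_options_set_max_open_files(opts, 2048)                  # keep SSTs open
rocksdb_options_set_target_file_size_base(opts, 256'u64 shl 20) # fewer, larger files
rocksdb_options_set_max_total_wal_size(opts, 1024'u64 shl 20)   # avoid tiny flushes

# a larger shared block cache keeps ribbon filters resident in memory
let tableOpts = rocksdb_block_based_options_create()
rocksdb_block_based_options_set_block_cache(
  tableOpts, rocksdb_cache_create_lru(csize_t(2'u64 shl 30)))
rocksdb_options_set_block_based_table_factory(opts, tableOpts)
```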

With this PR, the mainnet state root can be computed in ~8 hours (down
from 2-3 days) - not great, but still better.

Further, we now write all keys to the database, including those shorter
than 32 bytes - because the MPT path is part of the input, it is very
rare that we actually hit a key like this (about 200k such entries on
mainnet), so in the current database layout / design the code complexity
of special-casing them is not worth the benefit.
2024-10-27 11:08:37 +00:00
Jordan Hrycaj 5b6ccddaa0
Db folder sources and related remove compiler warnings (#2673)
* Aristo: Rename `Hash256` -> `Hash32`

* CoreDb: Rename `Hash256` -> `Hash32`

* Ledger: Rename `Hash256` -> `Hash32`

* StorageTypes: Rename `Hash256` -> `Hash32`

* Aristo: Rename `Blob` -> `seq[byte]`, `keccakHash` -> `keccak256`

* Kvt: Rename `Blob` -> `seq[byte]`

* CoreDb: Rename `Blob` -> `seq[byte]`, `keccakHash` -> `keccak256`

* Ledger: Rename `Blob` -> `seq[byte]`, `keccakHash` -> `keccak256`

* CoreDb: Rename `BlockHeader` -> `Header`, `BlockNonce` -> `Bytes8`

* Misc: Rename `StorageKey` -> `Bytes32`

* Tracer: `Hash256` -> `Hash32`, `BlockHeader` -> `Header`, etc.

* Fix copyright header
2024-10-01 21:03:10 +00:00
Jacek Sieka b4b4d16729
speed up key computation (#2642)
* batch database key writes during `computeKey` calls
* log progress when there are many keys to update
* avoid evicting the vertex cache when traversing the trie for key
computation purposes
* avoid storing trivial leaf hashes that can be loaded directly from the
vertex
2024-09-20 07:43:53 +02:00
Jacek Sieka 5c1e2e7d3b
Migrate `keyed_queue` to `minilru` (#2608)
Compared to `keyed_queue`, `minilru` uses significantly less memory, in
particular for the 32-byte hash keys where `kq` stores several copies of
the key redundantly.
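
A rough usage sketch, assuming a minilru API along the lines of `init`/`put`/`get` (the exact proc names are assumptions):

```nim
import minilru

var cache = LruCache[array[32, byte], seq[byte]].init(100_000)

let key = default(array[32, byte])     # a 32-byte hash key, stored once
cache.put(key, @[byte 1, 2, 3])
let hit = cache.get(key)               # Opt-style lookup result (assumed)
```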
2024-09-13 15:47:50 +02:00
Jacek Sieka d39c589ec3
lru cache updates (#2590)
* replace rocksdb row cache with larger rdb lru caches - these serve the
same purpose but are more efficient because they skip serialization,
locking and rocksdb layering
* don't append fresh items to cache - this has the effect of evicting
the existing items and replacing them with low-value entries that might
never be read - during write-heavy periods of processing, the
newly-added entries were evicted during the store loop
* allow tuning rdb lru size at runtime
* add (hidden) option to print lru stats at exit (replacing the
compile-time flag)
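
The second bullet's write-path policy, sketched with a plain table standing in for the real LRU (illustrative only):

```nim
import std/tables

type MiniCache = object
  data: Table[uint64, seq[byte]]   # stand-in for the rdb lru cache

proc storePut(c: var MiniCache; key: uint64; val: seq[byte]) =
  # ... write-through to rocksdb would happen here ...
  if key in c.data:
    c.data[key] = val   # refresh an entry that has proven useful
  # else: deliberately not inserted - a write burst must not evict
  # proven-hot read entries
```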

pre:
```
INF 2024-09-03 15:07:01.136+02:00 Imported blocks
blockNumber=20012001 blocks=12000 importedSlot=9216851 txs=1837042
mgas=181911.265 bps=11.675 tps=1870.397 mgps=176.819 avgBps=10.288
avgTps=1574.889 avgMGps=155.952 elapsed=19m26s458ms
```

post:
```
INF 2024-09-03 13:54:26.730+02:00 Imported blocks
blockNumber=20012001 blocks=12000 importedSlot=9216851 txs=1837042
mgas=181911.265 bps=11.637 tps=1864.384 mgps=176.250 avgBps=11.202
avgTps=1714.920 avgMGps=169.818 elapsed=17m51s211ms
```

9%:ish import perf improvement on similar mem usage :)
2024-09-05 11:18:32 +02:00
Jacek Sieka 35cc78c86d
add metrics for rdb lru cache (#2586)
This is a first step towards measuring the efficiency of the LRU caches
over time - metrics can be collected during import or when running
regularly.

Since `nim-metrics` carries some overhead for its default way of
reporting metrics, this PR implements a custom collector over atomic
counters, given that this is one of the hottest spots in the block
processing pipeline.
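
A minimal sketch of such an atomic-counter collector (names illustrative, not the actual Nimbus code):

```nim
import std/atomics

type CacheStats = object
  hits, misses: Atomic[uint64]

var stats: CacheStats

proc recordHit() {.inline.} =
  discard stats.hits.fetchAdd(1, moRelaxed)    # no locks on the hot path

proc recordMiss() {.inline.} =
  discard stats.misses.fetchAdd(1, moRelaxed)

proc hitRate(): float =
  let h = stats.hits.load(moRelaxed)
  let m = stats.misses.load(moRelaxed)
  if h + m == 0: 0.0 else: h.float / (h + m).float
```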

Using a compile-time flag, the same metrics can be printed on exit which
is useful when comparing different strategies for caching - here's a
recent run over blocks 16000001-16016384 - this is a good candidate to
expose in a better way in the future, maybe:

```
   state    vtype       miss        hit      total hitrate
 Account     Leaf    4909417    4466215    9375632  47.64%
 Account   Branch   20742574   72015123   92757697  77.64%
   World     Leaf     940483    1140946    2081429  54.82%
   World   Branch    8224151  131496580  139720731  94.11%
     all      all   34816625  209118864  243935489  85.73%
```
2024-09-02 17:34:10 +02:00
Jacek Sieka ef1bab0802
avoid some trivial memory allocations (#2587)
* pre-allocate `blobify` data and remove redundant error handling
(cannot fail on correct data)
* use threadvar for temporary storage when decoding rdb, avoiding
closure env
* speed up database walkers by avoiding many temporaries

~5% perf improvement on block import, 100x on database iteration (useful
for building analysis tooling)
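
The threadvar point, sketched (illustrative names, not the actual rdb decoding code):

```nim
# one reusable buffer per thread: no closure environment and no
# fresh seq allocation per row
var decodeBuf {.threadvar.}: seq[byte]

proc onData(data: openArray[byte]) =
  decodeBuf.setLen(data.len)
  if data.len > 0:
    copyMem(addr decodeBuf[0], unsafeAddr data[0], data.len)
  # ... parse decodeBuf in place here ...
```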
2024-09-02 16:03:10 +02:00
Jordan Hrycaj 7becf4e389
Remove vertex ID recycle function (#2558)
why:
  It is not safe in general to recycle vertex IDs while the `RocksDb`
  cache is keyed by `VertexID` rather than `RootedVertexID`, even though
  the former type seems preferable.

  In some fringe cases one might remove a vertex with key `(root1,vid)`
  and insert another vertex with key `(root2,vid)`, re-using the vertex
  ID `vid`. Without knowledge of `root1` and `root2`, the LRU cache will
  then return the same vertex for both `(root2,vid)` and `(root1,vid)`.
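
For illustration, the aliasing goes away once the cache key carries the root (the exact type shape here is an assumption):

```nim
type
  VertexID = uint64
  RootedVertexID = tuple[root: VertexID, vid: VertexID]

# keyed by `vid` alone, (root1, vid) and (root2, vid) collide in the
# LRU cache when `vid` is recycled; keyed by (root, vid) they cannot
```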
2024-08-12 20:56:15 +00:00
Jacek Sieka 3382c2427b
increase rdb cache sizes (#2466)
This trivial bump should improve performance a bit without costing too
much memory - as the trie grows, so does the number of levels in it, and
creating hikes becomes ever more expensive. Hopefully this cache
increase gives a nice little boost, even if it's not a lot.
2024-07-09 17:35:27 +02:00
Jacek Sieka 81e75622cf
storage: store root id together with vid, for better locality of refe… (#2449)
The state and account MPTs currently share key space in the database,
and vertex ids are assigned essentially randomly, which means that when
two adjacent slot values from the same contract are accessed, they might
reside at a large distance from each other.

Here, we prefix each vertex id by its root causing them to be sorted
together thus bringing all data belonging to a particular contract
closer together - the same effect also happens for the main state MPT
whose nodes now end up clustered together more tightly.

In the future, the prefix given to the storage keys can also be used to
perform range operations such as reading all the storage at once and/or
deleting an account with a batch operation.

Notably, parts of the API already supported this rooting concept while
parts didn't - this PR makes the API consistent by always working with a
root+vid.
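
A sketch of the prefixing (the actual Aristo key format may differ): encoding both ids big-endian makes RocksDB's lexicographic key order match numeric order, so everything under one root sorts together.

```nim
proc toBytesBE(v: uint64): array[8, byte] =
  for i in 0 ..< 8:
    result[i] = byte((v shr (8 * (7 - i))) and 0xff)

proc dbKey(root, vid: uint64): array[16, byte] =
  result[0 .. 7] = toBytesBE(root)    # root prefix clusters contract data
  result[8 .. 15] = toBytesBE(vid)
```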
2024-07-04 15:46:52 +02:00
Jacek Sieka c364426422
Smaller in-database representations (#2436)
These representations use ~15-20% less data compared to the status quo,
mainly by removing redundant zeroes in the integer encodings - a
significant effect of this change is that the various rocksdb caches see
better efficiency since more items fit in the same amount of space.

* use RLP encoding for `VertexID` and `UInt256` wherever it appears
* pack `VertexRef`/`PayloadRef` more tightly
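
The zero-trimming effect, sketched in isolation (the real encoding is RLP-based; this just shows where the bytes go):

```nim
proc trimmedBE(v: uint64): seq[byte] =
  # big-endian with leading zero bytes dropped: id 42 takes 1 byte
  # instead of a fixed 8
  var x = v
  while x > 0:
    result.insert(byte(x and 0xff), 0)
    x = x shr 8
```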
2024-07-02 20:25:06 +02:00
web3-developer ea94e8a351
Use RocksDb column family handles instead of name strings. (#2418)
* Bump RocksDb to latest and update Nimbus database to pass column family handles to RocksDb API.

* Bump RocksDb version.
2024-06-27 16:51:43 +08:00
Jacek Sieka 3e001e322c
Fix memory usage spikes during sync, give memory to rocksdb (#2413)
* creating a seq from a table that holds lots of changes means copying
all data out of the table into the seq - this can be several GB of data
while syncing blocks (see the sketch below)
* nim fails to optimize the moving of the `WidthFirstForest` - the real
solution is to not construct a `wff` to begin with, but this PR provides
relief while that is being worked on

This spike fix allows us to bump the rocksdb cache by another 2 GB and
still have a significantly lower peak memory usage during sync.
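
The first bullet's fix pattern, sketched: iterate the table in place instead of materializing it as a seq.

```nim
import std/tables

var changes: Table[uint64, seq[byte]]   # think: several GB while syncing
# copies every entry:
#   let batch = toSeq(changes.pairs)
for vid, data in changes:               # streams entries, no big copy
  discard (vid, data)                   # hand each entry to the db here
```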
2024-06-25 13:39:53 +02:00
Jacek Sieka 41cf81f80b
Fix dboptions init (#2391)
For the block cache to be shared between column families, the options
instance must be shared between the various column families being
created. This also ensures that there is only one source of truth for
configuration options instead of having two different sets depending on
how the tables were initialized.

This PR also removes the re-opening mechanism which can double startup
time - every time the database is opened, the log is replayed - a large
log file will take a long time to open.

Finally, several options are now correctly implemented as column family
options, including one that puts a hash index in the SST files.
2024-06-19 10:55:57 +02:00
Jacek Sieka af34f90fe4
fix `max_total_wal_size` which should be set on the DB (#2363) 2024-06-16 02:11:30 +00:00
Jordan Hrycaj debba5a620
Coeredb related clean up and maint fixes (#2360)
* Fix initialiser

why:
  Possible crash (app profiling, tracer etc.)

* Update column family options processing

why:
  Same for kvt as for aristo

* Move `AristoDbDualRocks` backend type to the test suite

why:
  So it is not available for production

* Fix typos in API jump table

why:
  Used for tracing and app profiling only. Needed some update

* Purged CoreDb legacy API

why:
  Not needed anymore, was transitionary and disabled.

* Rename `flush` argument to `eradicate` in a DB close context

why:
  The word `eradicate` leaves no doubt what is meant

* Rename `stoFlush()` -> `stoDelete()`

* Rename `core_apps_newapi` -> `core_apps` (not so new anymore)
2024-06-14 11:19:48 +00:00
Jordan Hrycaj 5a5cc6295e
Triggered write event for kvt (#2351)
* Bump rocksdb

* Rename `KVT` objects related to filters according to `Aristo` naming

details:
  filter* => delta*
  roFilter => balancer

* Compulsory error handling if `persistent()` fails

* Add return code to `reCentre()`

why:
  Might eventually fail if re-centring is blocked. Some logic will be
  added in subsequent patch sets.

* Add column families from earlier session to rocksdb in opening procedure

why:
  All previously used CFs must be declared when re-opening an existing
  database.

* Update `init()` and add rocksdb `reinit()` methods for changing parameters

why:
  Opening a set of column families (with different open options) must
  span at least the ones that are already on disk.

* Provide write-trigger-event interface into `Aristo` backend

why:
  This allows data from a guest application (think `KVT`) to be synced
  with the write cycle so that the guest and `Aristo` save everything
  atomically.

* Use `KVT` with new column family interface from `Aristo`

* Remove obsolete guest interface

* Implement `KVT` piggyback on `Aristo` backend

* CoreDb: Add separate `KVT`/`Aristo` backend mode for debugging

* Remove `rocks_db` import from `persist()` function

why:
  Some systems (in particular `fluffy` and friends) use the `Aristo`
  memory backend emulation and do not link against rocksdb when building
  the application. So this should fix that problem.
2024-06-13 18:15:11 +00:00
Jacek Sieka 54f793f946
Apply some basic rocksdb options (#2339)
These options, inspired by Nethermind and general internet wisdom, bring
the database size down to 2/3 without affecting throughput. In theory,
they should also bring down memory usage and/or make more efficient use
of whatever memory is already assigned to rocksdb but this needs
verification in a longer test at synced-mainnet sizes.

In the meantime, they make testing easier by removing some noise that
the profiler flags as bad, such as excessive SkipList access (countered
by bloom filters).
2024-06-12 14:52:27 +02:00
Jacek Sieka eb041abba7
avoid unnecessary memory allocations and lookups (#2334)
* use `withValue` instead of `hasKey` + `[]`
* avoid `@` et al
* parse database data inside `onData` instead of making seq then parsing
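
The first bullet in stdlib terms: `std/tables.withValue` does one hash lookup where `hasKey` + `[]` does two.

```nim
import std/tables

var balances = {"alice": 100}.toTable
balances.withValue("alice", v):
  v[] += 1              # v is a pointer to the stored value
do:
  echo "no such key"    # taken when the key is absent
```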
2024-06-11 11:38:58 +02:00
Jordan Hrycaj a347291413
Aristo use rocksdb cf instead of key pfx (#2332)
* Use RocksDb column families instead of a prefixed single column

why:
  Better performance

* Use structural objects `VertexRef` and `HashKey` in LRU cache for RocksDb

why:
  Avoids repeated de/serialisation
2024-06-10 12:04:22 +00:00
Jacek Sieka c876729c4d
Add some basic rocksdb options to command line (#2286)
These options are there mainly to drive experiments, and are therefore
hidden.

One thing that this PR brings in is an initial set of caches and buffers for rocksdb - the set that I've been using during various performance tests to get to a viable baseline performance level.
2024-06-05 17:08:29 +02:00
Jacek Sieka 9f879406f3
append instead of reallocate in blobify (#2277)
...otherwise, we get lots and lots of temporary allocations of seq's
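
Sketched (names illustrative): append into a caller-provided buffer instead of returning a fresh seq per call.

```nim
proc blobifyTo(v: uint64; buf: var seq[byte]) =
  # the caller keeps reusing one buffer across many encodings
  for i in countdown(7, 0):
    buf.add byte((v shr (8 * i)) and 0xff)
```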
2024-06-01 17:13:24 +02:00
Jacek Sieka 0a49833d69
avoid a few more copies (#2215) 2024-05-24 11:27:17 +02:00
Jacek Sieka f38c5e631e
trivial memory-based speedups (#2205)
* trivial memory-based speedups

* HashKey becomes non-ref
* use openArray instead of seq in lots of places
* avoid sequtils.reversed when unnecessary
* add basic perf stats to test_coredb

* copyright
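
On the `openArray` point above: it accepts arrays, seqs and slices alike without conversions or copies, e.g.

```nim
proc sum(xs: openArray[int]): int =
  for x in xs:
    result += x

echo sum([1, 2, 3])     # array, no seq allocation
echo sum(@[1, 2, 3])    # seq works too
```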
2024-05-23 17:37:51 +02:00
Jordan Hrycaj 54f784bef1
Kvt remodel tx and forked descriptors (#2168)
* Aristo: Generalise alien/guest interface for piggyback on database

* Aristo: Code cosmetics

* CoreDb+Kvt: Update transaction API

why:
  Use a single addressable function `forkTx(backLevel: int)` as used
  in `Aristo`. So `Kvt` can be synced simultaneously to `Aristo`.

also:
  Refactored `kvt_tx.nim` in a similar fashion to `Aristo`.

* Kvt: Replace `LayerDelta` object by reference

why:
  Will be needed when introducing filters

* Kvt: Remodel backend filter facility similar to `Aristo`

why:
  This allows operating on several KVT instances simultaneously.

* CoreDb+Kvt: Fix on-disk storage

why:
  Overlooked name change: `stow()` => `persist()` for permanent storage

* Fix copyright headers
2024-05-07 19:59:27 +00:00
Jordan Hrycaj b9187e0493
Aristo selective read caching for rocksdb backend (#2145)
* Aristo+Kvt: Better RocksDB profiling

why:
  Providing more detailed information, mainly for `Aristo`

* Aristo: Renamed journal `stats()` to `capacity()`

why:
  `Stats()` was a misnomer

* Aristo: Provide backend read caches for key and vertex IDs

why:
  Dedicated LRU caching for particular types gives a throughput advantage.
  The sizes of the LRU queues used for caching are currently constant
  but might be adjusted at a later time.

* Fix copyright year
2024-04-22 19:02:22 +00:00
Jordan Hrycaj e8eb3268f5
Generalise prune mode option 4 different db models (#2139)
* Update README

* Nimbus-main: replaced `PruneMode` options by `ChainDbMode` options

details:
  For the legacy database, this changes the phrase
  - `conf.pruneMode == PruneMode.Full` to the expression
  + `conf.chainDbMode == ChainDbMode.Prune`.

* Fix issues moaned about by NIM compiler

* Fix copyright year
2024-04-17 18:09:55 +00:00
Jordan Hrycaj d6a4205324
Aristo update rocksdb backend drivers (#2135)
* Aristo+RocksDB: Update backend drivers

why:
  The RocksDB update allows using some of the newly provided methods
  which were previously implemented directly against the C backend (for
  lack of Nim methods).

* Aristo+RocksDB: Simplify drivers wrapper

* Kvt: Update backend drivers and wrappers similar to `Aristo`

* Aristo+Kvt: Use column families for RocksDB

* Aristo+MemoryDB: Code cosmetics

* Aristo: Provide guest column family for export

why:
  So `Kvt` can piggyback on `Aristo`, avoiding the need to run a second
  DBMS system in parallel.

* Kvt: Provide import mechanism for RocksDB guest column family

why:
  So `Kvt` can piggyback on `Aristo`, avoiding the need to run a second
  DBMS system in parallel.

* CoreDb+Aristo: Run persistent `Kvt` DB piggybacked on `Aristo`

why:
  Avoids running two DBMS systems in parallel.

* Fix copyright year

* Ditto
2024-04-16 20:39:11 +00:00
Jordan Hrycaj 0d73637f14
Core db simplify new api storage modes (#2075)
* Aristo+Kvt: Fix backend `dup()` function in api setup

why:
  The backend object is subject to an inheritance cascade which was not
  taken care of before; only the base object was duplicated.

* Kvt: Simplify DB clone/peers management

* Aristo: Simplify DB clone/peers management

* Aristo: Adjust unit test for working with memory DB only

why:
  This currently causes some memory corruption, presumably in the
  `libc` background layer.

* CoreDb+Kvt: Simplify API for KVT

why:
  Simplified storage models (they were over-engineered) for better
  performance and code maintenance.

* CoreDb+Aristo: Simplify API for `Aristo`

why:
  Only single database state needed here. Accessing a similar state will
  be implemented from outside this module using a context layer. This
  gives better performance and improves code maintenance.

* Fix Copyright headers

* CoreDb: Turn off API tracking

why:
  CI would not go through. Was accidentally turned on.
2024-03-14 22:17:43 +00:00
web3-developer 799acf301d
Added support for namespaces to RocksDb kvstore. (#2066)
* Add new RocksNamespaceRef type and remove backups and readonly support from RocksDb KvStore.

* Bump nim-rocksdb to fc2ba4a836b6b47ae1b17d1c45801c7e06585e19

* Fix tests.

* Fix copyright notice.
2024-03-12 11:04:46 +08:00
web3-developer 11691c33e9
Update Nimbus codebase to use the new nim-rocksdb API. (#2054)
* Bump nim-rocksdb.

* Update codebase to use latest nim-rocksdb API.

* Update copyright notices.

* Fix memory leak due to allocCStringArray without deallocCStringArray.

* Improve kvstore_rocksdb code.

* Refactor and cleanup RocksStoreRef.

* Update nim-rocksdb submodule to latest.
2024-03-05 12:54:42 +08:00
Jordan Hrycaj 43e5f428af
Aristo db kvt maintenance update (#1952)
* Update KVT layers abstraction

details:
  modelled after Aristo layers

* Simplified KVT database iterators (removed item counters)

why:
  Not needed for production functions

* Simplify KVT merge function `layersCc()`

* Simplified Aristo database iterators (removed item counters)

why:
  Not needed for production functions

* Update failure condition for hash labels compiler `hashify()`

why:
  A node need not be rejected as long as links are on the schedule. In
  that case, `redo[]` is to become `wff.base[]` at a later stage.

* Update merging layers and label update functions

why:
+ Merging a stack of layers with `layersCc()` could be simplified
+ Merging layers will optimise the reverse `kMap[]` table maps
  `pAmk: label->{vid, ..}` by deleting empty mappings `label->{}` where
  they are redundant.
+ Updated `layersPutLabel()` for optimising `pAmk[]` tables
2023-12-20 16:19:00 +00:00
Jordan Hrycaj 5462c05dc6
Core db update api tracking (#1907)
* Fix copyright year

* Show elapsed times with enabled `CoreDb` API tracking

* Show elapsed times with enabled `LedgerRef` API tracking

* Reorg `CoreDb` auto destructors for `Aristo` DB

why:
  While `Aristo` supports some parallelism for concurrent database access,
  this comes with a price of management overhead. With a naive approach,
  the auto-destructor will slow down execution because the ledger and
  evm treat the database in a shared mode where a DB descriptor is just
  created and thrown away shortly after.

  This is reflected in the `Coredb` abstraction layer above `Aristo`/`Kvt`
  where a few `Shared` type descriptors are cached and a shared reference
  is returned rather than a disposable new object.

* For `CoreDb` support transaction level tracking

details:
  This is mainly an extra for the legacy DB as `Aristo` and `Kvt` support
  this already.

  Also return an error on the legacy DB backend when `persistent()` is
  called while there are transactions pending (the `persistent()` call
  does nothing otherwise on the legacy backend.)

* Clear compiler warnings (remove unused variables etc.)
2023-11-24 22:16:21 +00:00
Jordan Hrycaj 610e2d338d
Core db fix legacy db root vertex fetcher (#1899)
* Using different `tmp` directories for `Kvt` and `Aristo`

why:
  Closing one database would leave the other set of directories
  incomplete.

* Code cosmetics, silence compiler

* Fix typo `EMPTY_ROOT_HASH` vs. `EMPTY_CODE_HASH`

* Fix copyright years
2023-11-20 20:22:27 +00:00
Jordan Hrycaj 6e0397e276
Aristo and ledger small updates (#1888)
* Fix debug noise in `hashify()` for perfectly normal situation

why:
  Was previously considered a fixable error

* Fix test sample file names

why:
  The larger test file `goerli68161.txt.gz` is already in the local
  archive. So there is no need to use the smaller one from the external
  repo.

* Activate `accounts_cache` module from `db/ledger`

why:
  A copy of the original `accounts_cache.nim` source to be integrated
  into the `Ledger` module wrapper which allows switching between
  different `accounts_cache` implementations under the same API.

details:
  At a later stage, the `db/accounts_cache.nim` wrapper will be
  removed so that there is only one access to that module via
  `db/ledger/accounts_cache.nim`.

* Fix copyright headers in source code
2023-11-08 16:52:25 +00:00
Jordan Hrycaj 6d132811ba
Core db update providing additional results code interface (#1776)
* Split `core_db/base.nim` into several sources

* Rename `core_db/legacy.nim` => `core_db/legacy_db.nim`

* Update `CoreDb` API, dual methods returning `Result[]` or plain value

detail:
  Plain value methods implement the legacy API; they raise a defect on error results

* Redesign `CoreDB` direct backend access

why:
  Made the `backend` directive integral part of the API

* Discontinue providing unused or otherwise available functions

details:
+ setTransactionID() removed, not used and not easily replicable in Aristo
+ maybeGet() removed, available via direct backend access
+ newPhk() removed, never used & was experimental anyway

* Update/reorg backend API

why:
+ Added error print function `$$()`
+ General descriptor completion (and optional validation) via `bless()`

* Update `Aristo`/`Kvt` exception handling

why:
  Avoid `CatchableError` exceptions, rather pass them as error code where
  appropriate.

* More `CoreDB` compliant `Aristo` and `Kvt` methods

details:
+ Providing functions like `contains()`, `getVtxRc()` (returns `Result[]`).
+ Additional error code: `NotImplemented`

* Rewrite/reorg of Aristo DB constructor

why:
  Previously used global object `DefaultQidLayoutRef` as default
  initialiser. This object was created at compile time which led to
  non-gc safe functions.

* Update nimbus/db/core_db/legacy_db.nim

Co-authored-by: Kim De Mey <kim.demey@gmail.com>

* Update nimbus/db/aristo/aristo_transcode.nim

Co-authored-by: Kim De Mey <kim.demey@gmail.com>

* Update nimbus/db/core_db/legacy_db.nim

Co-authored-by: Kim De Mey <kim.demey@gmail.com>

---------

Co-authored-by: Kim De Mey <kim.demey@gmail.com>
2023-09-26 10:21:13 +01:00
Jordan Hrycaj cd1d370543
Aristo db api extensions for use as core db backend (#1754)
* Update docu

* Update Aristo/Kvt constructor prototype

why:
  Previous version used an `enum` value to indicate what backend is to
  be used. This was replaced by using the backend object type.

* Rewrite `hikeUp()` return code into `Result[Hike,(Hike,AristoError)]`

why:
  Better code maintenance. Previously, the `Hike` object was returned. It
  had an internal error field so partial success was also available on
  a failure. This error field has been removed.
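
The new shape, sketched with nim-results (the `Hike` fields and error values are illustrative):

```nim
import results

type
  AristoError = enum NoError, PathTooShort
  Hike = object
    depth: int   # stand-in for the real hike fields

proc hikeUp(): Result[Hike, (Hike, AristoError)] =
  # the partial hike travels with the error code instead of living in
  # an embedded error field
  err((Hike(depth: 2), PathTooShort))

let h = hikeUp().valueOr:
  echo "stopped at depth ", error[0].depth, ": ", error[1]
  Hike()
```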

* Use `openArray[byte]` rather than `Blob` in functions prototypes

* Provide synchronised multi instance transactions

why:
  The `CoreDB` object was geared towards the legacy DB which used a single
  transaction for the key-value backend DB. Different state roots are
  provided by the backend database, so all instances work directly on the
  same backend.

  Aristo db instances have different in-memory mappings (aka different
  state roots) and the transactions are on top of these mappings. So each
  instance might run different transactions.

  Multi instance transactions are a compromise to converge towards the
  legacy behaviour. The synchronised transactions span over all instances
  available at the time when the base transaction was opened. Instances
  created later are unaffected.

* Provide key-value pair database iterator

why:
  Needed in `CoreDB` for `replicate()` emulation

also:
  Some update of internal code

* Extend API (i.e. prototype variants)

why:
  Needed for `CoreDB` geared towards the legacy backend which has a more
  basic API than Aristo.
2023-09-15 16:23:53 +01:00
andri lim 56215ed83f
Bump stint to v2.0: new array backend (#1747)
* Bump stint to v2.0: new array backend
2023-09-13 09:32:38 +07:00
Jordan Hrycaj 8e00143313
Aristo db code massage n cosmetics (#1745)
* Rewrite remaining `AristoError` return code into `Result[void,AristoError]`

why:
  Better code maintenance

* Update import sections

* Update Aristo DB paths

why:
  More systematic, so the directory can be shared with other DB types

* More cosmetics

* Update unit tests runners

why:
  Proper handling of persistent and mem-only DB. The latter can be
  consistently triggered by an empty DB path.
2023-09-12 19:45:12 +01:00
Jordan Hrycaj 8e46953390
Aristo db state root repos and reorg (#1744)
* Reorg of distributed backend access

details:
  Now handled via API provided in `aristo_desc`.

* Rename `checkCache()` => `checkTop()`

why:
  Better naming for top layer cache checker

also:
  Provide cascaded fifos checker

* Provide `eq` directive for finding filter by exact filter ID (think block number)

* Some code beautification (for better code reading)

* State root reposition and reorg

details:
  Repositioning is supported by forking a new descriptor. Reorg is then
  accomplished by writing this forked state on the backend database.
2023-09-11 21:38:49 +01:00
Jordan Hrycaj 465d694834
Aristo db implement filter storage scheduler (#1713)
* Rename FilterID => QueueID

why:
  The current usage does not identify a particular filter but uses it as
  a storage tag to manage it on the database (to be organised in a set
  of FIFOs or queues.)

* Split `aristo_filter` source into sub-files

why:
  Make space for filter management API

* Store filter queue IDs in pairs on the backend

why:
  Any pair will describe a FIFO accessed by bottom/top IDs

* Reorg some source file names

why:
  The "aristo_" prefix for make local/private files is tedious to
  use, so removed.

* Implement filter slot scheduler

details:
  Filters will be stored on the database on cascaded FIFOs. When a FIFO
  queue is full, some filter items are bundled together and stored on the
  next FIFO.
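
A toy model of the cascading (integers stand in for filters; "bundling" is just summation here):

```nim
import std/sequtils

proc pushCascaded(fifos: var seq[seq[int]]; item: int; cap = 4) =
  fifos[0].add item
  for i in 0 ..< fifos.len - 1:
    if fifos[i].len > cap:
      # bundle the oldest items into one and push it a level down
      let bundled = fifos[i][0 ..< cap div 2].foldl(a + b)
      fifos[i] = fifos[i][cap div 2 .. ^1]
      fifos[i + 1].add bundled
```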
2023-08-25 23:53:59 +01:00