# nimbus-eth1
Core db update storage root management for sub tries (#1964)
* Aristo: Re-phrase `LayerDelta` and `LayerFinal` as object references
why:
Avoids copying in some cases
* Fix copyright header
* Aristo: Verify `leafTie.root` function argument for `merge()` proc
why:
Zero root will lead to inconsistent DB entry
* Aristo: Update failure condition for hash labels compiler `hashify()`
why:
A node need not be rejected as long as its links are on the schedule. In
that case, `redo[]` is to become `wff.base[]` at a later stage.
This amends an earlier fix, part of #1952 by also testing against
the target nodes of the `wff.base[]` sets.
* Aristo: Add storage root glue record to `hashify()` schedule
why:
An account leaf node might refer to a non-resolvable storage root ID.
Storage root node chains will end up at the storage root. So the link
`storage-root->account-leaf` needs an extra item in the schedule.
* Aristo: fix error code returned by `fetchPayload()`
details:
Final error code is implied by the error code from the `hikeUp()`
function.
* CoreDb: Discard `createOk` argument in API `getRoot()` function
why:
Not needed for the legacy DB. For the `Aristo` DB, a lazy approach is
implemented where a storage root node is created on-the-fly.
* CoreDb: Prevent `$$` logging in some cases
why:
Logging the function `$$` is not useful when it is used for internal
purposes, i.e. retrieving an error text for logging.
* CoreDb: Add `tryHashFn()` to API for pretty printing
why:
Pretty printing must not change the hashification status for the
`Aristo` DB. So there is an independent API wrapper for getting the
node hash which never updates the hashes.
* CoreDb: Discard `update` argument in API `hash()` function
why:
When calling the API function `hash()`, the latest state is always
wanted. For a version that uses the current state as-is without checking,
the function `tryHash()` was added to the backend.
* CoreDb: Update opaque vertex ID objects for the `Aristo` backend
why:
For `Aristo`, vID objects encapsulate a numeric `VertexID`
referencing a vertex (rather than a node hash as used on the
legacy backend.) For storage sub-tries, there might be no initial
vertex known when the descriptor is created. So opaque vertex ID
objects are supported without a valid `VertexID` which will be
initialised on-the-fly when the first item is merged.
* CoreDb: Add pretty printer for opaque vertex ID objects
* Cosmetics, printing profiling data
* CoreDb: Fix segfault in `Aristo` backend when creating MPT descriptor
why:
Missing initialisation error
* CoreDb: Allow MPT to inherit shared context on `Aristo` backend
why:
Creates descriptors with different storage roots for the same
shared `Aristo` DB descriptor.
* Cosmetics, update diagnostic message items for `Aristo` backend
* Fix Copyright year
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed under either of
#  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
#    http://www.apache.org/licenses/LICENSE-2.0)
#  * MIT license ([LICENSE-MIT](LICENSE-MIT) or
#    http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.

## Persistent constructor for Aristo DB
## ====================================
##
## This module automatically pulls in the persistent backend library at the
## linking stage (e.g. `rocksdb`) which can be avoided for pure memory DB
## applications by importing `./aristo_init/memory_only` (rather than
## `./aristo_init/persistent`.)
##

{.push raises: [].}

import
  results,
  rocksdb,
  ../../opts,
  ../aristo_desc,
  ./rocks_db/rdb_desc,
  "."/[rocks_db, memory_only]

export
  AristoDbRef,
  RdbBackendRef,
  RdbWriteEventCb,
  memory_only

# ------------------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------------------

proc newAristoRdbDbRef(
    basePath: string;
    dbOpts: DbOptionsRef;
    cfOpts: ColFamilyOptionsRef;
    guestCFs: openArray[ColFamilyDescriptor];
      ): Result[(AristoDbRef, seq[ColFamilyReadWrite]), AristoError] =
  let
    (be, oCfs) = ? rocksDbBackend(basePath, dbOpts, cfOpts, guestCFs)
    vTop = block:
      let rc = be.getTuvFn()
      if rc.isErr:
        be.closeFn(eradicate = false)
        return err(rc.error)
      rc.value
  ok((AristoDbRef(
    top: LayerRef(
      delta: LayerDeltaRef(vTop: vTop)),
    backend: be), oCfs))

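# The constructor above leans on the `results` library's error-propagation
# idioms: the `?` operator returns early from the enclosing proc on `err`,
# and the `block:` expression computes `vTop` while still being able to
# clean up (`closeFn`) and bail out on failure. A minimal self-contained
# sketch of the same `?` pattern, with hypothetical `parseByte` and
# `double` helpers that are not part of this module:
#
#   import results
#
#   proc parseByte(n: int): Result[byte, string] =
#     # hypothetical helper: fails for values outside 0..255
#     if n < 0 or n > 255:
#       return err("out of range")
#     ok(byte(n))
#
#   proc double(n: int): Result[byte, string] =
#     let b = ? parseByte(n)   # `?` propagates an error to the caller
#     parseByte(2 * int(b))    # the last expression becomes the result
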
# ------------------------------------------------------------------------------
# Public database constructors, destructor
# ------------------------------------------------------------------------------

proc init*(
    T: type AristoDbRef;
    B: type RdbBackendRef;
    basePath: string;
    dbOpts: DbOptionsRef;
    cfOpts: ColFamilyOptionsRef;
    guestCFs: openArray[ColFamilyDescriptor];
      ): Result[(T, seq[ColFamilyReadWrite]), AristoError] =
  ## Generic constructor. The `basePath` argument is ignored for memory
  ## backend databases (which also unconditionally succeed initialising.)
  ##
  basePath.newAristoRdbDbRef dbOpts, cfOpts, guestCFs

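# A hedged sketch of calling the constructor above. The path is
# hypothetical, and `dbOpts`, `cfOpts` and `guestCFs` are assumed to have
# been created via the `rocksdb` wrapper already (in the real code base
# they are derived from `../../opts`):
#
#   let rc = AristoDbRef.init(
#     RdbBackendRef, "/tmp/aristo-test", dbOpts, cfOpts, guestCFs)
#   if rc.isErr:
#     echo "cannot open database: ", rc.error
#   else:
#     let (adb, oCfs) = rc.value  # descriptor plus guest column families
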
proc activateWrTrigger*(
    db: AristoDbRef;
    hdl: RdbWriteEventCb;
      ): Result[void,AristoError] =
  ## This function links an application to the `Aristo` storage event for
  ## the `RocksDb` backend via the callback function `hdl`.
  ##
  ## The argument handler `hdl` of type
  ## ::
  ##   proc(session: WriteBatchRef): bool
  ##
  ## will be invoked when a write batch for the `Aristo` database is opened
  ## in order to save current changes to the backend. The `session` argument
  ## passed to the handler, in conjunction with a list of `ColFamilyReadWrite`
  ## items (as returned from `reinit()`), might be used to store additional
  ## items to the database with the same write batch.
  ##
  ## If the handler returns `true`, the write batch will proceed saving.
  ## Otherwise it is aborted and no data are saved at all.
  ##
  case db.backend.kind:
  of BackendRocksDB:
    db.backend.rocksDbSetEventTrigger hdl
  of BackendRdbHosting:
    err(RdbBeWrTriggerActiveAlready)
  else:
    err(RdbBeTypeUnsupported)

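# A usage sketch for the write trigger above (assumes `db` holds an
# `AristoDbRef` created with the RocksDb backend; the handler body is
# illustrative only):
#
#   let rc = db.activateWrTrigger(
#     proc(session: WriteBatchRef): bool =
#       # Store additional application records with the same write batch
#       # here, e.g. via the `ColFamilyReadWrite` list returned by `init()`.
#       true)  # returning `true` lets the batch proceed saving
#   if rc.isErr:
#     echo "trigger not activated: ", rc.error
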
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------