mirror of https://github.com/status-im/nimbus-eth1.git
a1161b537b
* Aristo: Re-phrase `LayerDelta` and `LayerFinal` as object references
  why: Avoids copying in some cases
* Fix copyright header
* Aristo: Verify `leafTie.root` function argument for `merge()` proc
  why: Zero root will lead to inconsistent DB entry
* Aristo: Update failure condition for hash labels compiler `hashify()`
  why: A node need not be rejected as long as its links are on the schedule. In
  that case, `redo[]` is to become `wff.base[]` at a later stage. This amends an
  earlier fix, part of #1952, by also testing against the target nodes of the
  `wff.base[]` sets.
* Aristo: Add storage root glue record to `hashify()` schedule
  why: An account leaf node might refer to a non-resolvable storage root ID.
  Storage root node chains will end up at the storage root. So the link
  `storage-root->account-leaf` needs an extra item in the schedule.
* Aristo: Fix error code returned by `fetchPayload()`
  details: The final error code is implied by the error code from the
  `hikeUp()` function.
* CoreDb: Discard `createOk` argument in API `getRoot()` function
  why: Not needed for the legacy DB. For the `Aristo` DB, a lazy approach is
  implemented where a storage root node is created on-the-fly.
* CoreDb: Prevent `$$` logging in some cases
  why: Logging the function `$$` is not useful when it is used internally,
  i.e. for retrieving an error text for logging.
* CoreDb: Add `tryHashFn()` to API for pretty printing
  why: Pretty printing must not change the hashification status for the
  `Aristo` DB. So there is an independent API wrapper for getting the node
  hash which never updates the hashes.
* CoreDb: Discard `update` argument in API `hash()` function
  why: When calling the API function `hash()`, the latest state is always
  wanted. For a version that uses the current state as-is without checking,
  the function `tryHash()` was added to the backend.
* CoreDb: Update opaque vertex ID objects for the `Aristo` backend
  why: For `Aristo`, vID objects encapsulate a numeric `VertexID` referencing
  a vertex (rather than a node hash as used on the legacy backend.) For
  storage sub-tries, there might be no initial vertex known when the
  descriptor is created. So opaque vertex ID objects are supported without a
  valid `VertexID`, which will be initialised on-the-fly when the first item
  is merged.
* CoreDb: Add pretty printer for opaque vertex ID objects
* Cosmetics, printing profiling data
* CoreDb: Fix segfault in `Aristo` backend when creating MPT descriptor
  why: Missing initialisation error
* CoreDb: Allow MPT to inherit shared context on `Aristo` backend
  why: Creates descriptors with different storage roots for the same shared
  `Aristo` DB descriptor.
* Cosmetics, update diagnostic message items for `Aristo` backend
* Fix Copyright year
85 lines
2.8 KiB
Nim
# nimbus-eth1
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed under either of
#  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
#    http://www.apache.org/licenses/LICENSE-2.0)
#  * MIT license ([LICENSE-MIT](LICENSE-MIT) or
#    http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.

## Persistent constructor for Aristo DB
## ====================================
##
## This module automatically pulls in the persistent backend library at the
## linking stage (e.g. `rocksdb`) which can be avoided for pure memory DB
## applications by importing `./aristo_init/memory_only` (rather than
## `./aristo_init/persistent`.)
##
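## An illustrative sketch of the two import choices (an application would pick
## one or the other; the paths are the ones quoted above):
## ::
##   import ./aristo_init/persistent     # pulls in and links `rocksdb`
##   # import ./aristo_init/memory_only  # in-memory DB, no `rocksdb` linkage
##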
{.push raises: [].}

import
  results,
  ../aristo_desc,
  "."/[rocks_db, memory_only]
export
  RdbBackendRef,
  memory_only

# ------------------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------------------

proc newAristoRdbDbRef(
    basePath: string;
    qidLayout: QidLayoutRef;
      ): Result[AristoDbRef, AristoError] =
  ## Create an `AristoDbRef` on top of a RocksDB backend located at `basePath`,
  ## seeding the vertex ID generator state from the backend.
  let
    be = ? rocksDbBackend(basePath, qidLayout)
    vGen = block:
      let rc = be.getIdgFn()
      if rc.isErr:
        be.closeFn(flush = false)
        return err(rc.error)
      rc.value
  ok AristoDbRef(
    top: LayerRef(
      delta: LayerDeltaRef(),
      final: LayerFinalRef(vGen: vGen)),
    backend: be)

# ------------------------------------------------------------------------------
# Public database constructors, destructor
# ------------------------------------------------------------------------------

proc init*[W: RdbBackendRef](
    T: type AristoDbRef;
    B: type W;
    basePath: string;
    qidLayout: QidLayoutRef;
      ): Result[T, AristoError] =
  ## Generic constructor. The `basePath` argument is ignored for memory backend
  ## databases (which also unconditionally succeed initialising.)
  ##
  ## If the `qidLayout` argument is set to `QidLayoutRef(nil)`, the backend
  ## database will not provide filter history management. Providing a different
  ## scheduler layout should be used with care as table access with different
  ## layouts might render the filter history data unmanageable.
  ##
  when B is RdbBackendRef:
    basePath.newAristoRdbDbRef qidLayout

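# Illustrative call sketch for the constructor above (the path is made up and
# error handling is kept minimal):
#
#   let rc = AristoDbRef.init(RdbBackendRef, "/tmp/aristo", QidLayoutRef(nil))
#   if rc.isErr:
#     echo "cannot open DB: ", rc.error
#   else:
#     let db = rc.value   # persistent DB without filter history management
#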
proc init*[W: RdbBackendRef](
    T: type AristoDbRef;
    B: type W;
    basePath: string;
      ): Result[T, AristoError] =
  ## Variant of `init()` using default schedule.
  ##
  when B is RdbBackendRef:
    basePath.newAristoRdbDbRef DEFAULT_QID_QUEUES.to(QidLayoutRef)

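# Sketch of the default-schedule variant (again with a made-up path); the
# scheduler layout falls back to `DEFAULT_QID_QUEUES`:
#
#   let rc = AristoDbRef.init(RdbBackendRef, "/tmp/aristo")
#   doAssert rc.isOk
#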
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------