# Nimbus
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed under either of
#  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
#    http://www.apache.org/licenses/LICENSE-2.0)
#  * MIT license ([LICENSE-MIT](LICENSE-MIT) or
#    http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed except
# according to those terms.

{.push raises: [].}

import
  eth/common,
  "../.."/[constants, errors],
  ./base/[api_tracking, base_desc]

from ../aristo
  import EmptyBlob, isValid

const
  EnableAccountKeyValidation = defined(release).not
    ## If this flag is enabled, the length of an account key is verified
    ## to have exactly 32 bytes. An assert is thrown if seen otherwise (a
    ## notoriously weak spot of the `openArray[byte]` argument type.)

  EnableApiTracking = false
    ## When enabled, functions using this tracking facility need to import
    ## `chronicles`, as well. Also, some `func` designators might need to
    ## be changed to `proc` for possible side effects.
    ##
    ## Tracking noise is then enabled by setting the flag `trackNewApi` to
    ## `true` in the `CoreDbRef` descriptor.

  EnableApiProfiling = true
    ## Enables function profiling if `EnableApiTracking` is also set `true`.

  AutoValidateDescriptors = defined(release).not
    ## No validation needed for the production suite.

export
  CoreDbAccount,
  CoreDbApiError,
  CoreDbCaptFlags,
  CoreDbColType,
  CoreDbCtxRef,
  CoreDbErrorCode,
  CoreDbErrorRef,
  CoreDbFnInx,
  CoreDbKvtBackendRef,
  CoreDbMptBackendRef,
  CoreDbPersistentTypes,
  CoreDbProfListRef,
  CoreDbRef,
  CoreDbType,
  CoreDbAccRef,
  CoreDbCaptRef,
  CoreDbKvtRef,
  CoreDbMptRef,
  CoreDbTxRef

const
  CoreDbEnableApiTracking* = EnableApiTracking
  CoreDbEnableApiProfiling* = EnableApiTracking and EnableApiProfiling

when AutoValidateDescriptors:
  import ./base/validate

# More settings
const
  logTxt = "CoreDb "
  newApiTxt = logTxt & "API"

# Annotation helpers
{.pragma: apiRaise, gcsafe, raises: [CoreDbApiError].}
{.pragma: catchRaise, gcsafe, raises: [CatchableError].}

# ------------------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------------------

when EnableApiTracking:
  when EnableApiProfiling:
    {.warning: "*** Provided API profiling for CoreDB (disabled by default)".}
  else:
    {.warning: "*** Provided API logging for CoreDB (disabled by default)".}

  import
    std/times,
    chronicles

  proc `$`[T](rc: CoreDbRc[T]): string = rc.toStr
  proc `$`(q: set[CoreDbCaptFlags]): string = q.toStr
  proc `$`(t: Duration): string = t.toStr
  proc `$`(e: EthAddress): string = e.toStr
  proc `$`(h: Hash256): string = h.toStr

template setTrackNewApi(
    w: CoreDbApiTrackRef;
    s: static[CoreDbFnInx];
    code: untyped;
      ) =
  ## Template with a code section that is discarded at compile time
  ## when `EnableApiTracking` is `false`.
  when EnableApiTracking:
    w.beginNewApi(s)
    code
  const api {.inject,used.} = s

template setTrackNewApi*(
    w: CoreDbApiTrackRef;
    s: static[CoreDbFnInx];
      ) =
  w.setTrackNewApi(s):
    discard

template ifTrackNewApi*(w: CoreDbApiTrackRef; code: untyped) =
  when EnableApiTracking:
    w.endNewApiIf:
      code
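
# The two templates above are meant to be used as a pair in API wrappers:
# `setTrackNewApi` opens the measurement, `ifTrackNewApi` closes it and
# emits the log line. A minimal sketch (the function name and the
# `CoreDbFnInx` member `SomeFn` are hypothetical, not part of this module):
#
#   proc someFn*(db: CoreDbRef): int =
#     db.setTrackNewApi SomeFn
#     result = 42                        # the wrapped backend call goes here
#     db.ifTrackNewApi: debug newApiTxt, api, elapsed, result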

# ------------------------------------------------------------------------------
# Public constructor helper
# ------------------------------------------------------------------------------

proc bless*(db: CoreDbRef): CoreDbRef =
  ## Verify descriptor
  when AutoValidateDescriptors:
    db.validate
  when CoreDbEnableApiProfiling:
    db.profTab = CoreDbProfListRef.init()
  db

proc bless*(db: CoreDbRef; kvt: CoreDbKvtRef): CoreDbKvtRef =
  ## Complete sub-module descriptor, fill in `parent`.
  kvt.parent = db
  when AutoValidateDescriptors:
    kvt.validate
  kvt

proc bless*[T: CoreDbKvtRef |
               CoreDbCtxRef | CoreDbMptRef | CoreDbAccRef |
               CoreDbTxRef | CoreDbCaptRef |
               CoreDbKvtBackendRef | CoreDbMptBackendRef | CoreDbAccBackendRef] (
    db: CoreDbRef;
    dsc: T;
      ): auto =
  ## Complete sub-module descriptor, fill in `parent`.
  dsc.parent = db
  when AutoValidateDescriptors:
    dsc.validate
  dsc

proc bless*(
    db: CoreDbRef;
    error: CoreDbErrorCode;
    dsc: CoreDbErrorRef;
      ): CoreDbErrorRef =
  dsc.parent = db
  dsc.error = error
  when AutoValidateDescriptors:
    dsc.validate
  dsc

proc prettyText*(e: CoreDbErrorRef): string =
  ## Pretty print argument object (for tracking use `$$()`)
  if e.isNil: "$ø" else: e.toStr()

# ------------------------------------------------------------------------------
# Public main descriptor methods
# ------------------------------------------------------------------------------

proc dbProfData*(db: CoreDbRef): CoreDbProfListRef =
  ## Return profiling data table (only available in profiling mode). If
  ## available (i.e. non-nil), result data can be organised by the functions
  ## available with `aristo_profile`.
  when CoreDbEnableApiProfiling:
    db.profTab
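
# Sketch of inspecting the profiling table after a run. This assumes
# profiling is compiled in; the formatting helper shown is hypothetical and
# stands in for whatever `aristo_profile` actually exports:
#
#   let prof = db.dbProfData()
#   if not prof.isNil:
#     echo prof.byElapsed                # hypothetical pretty-print helper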

proc dbType*(db: CoreDbRef): CoreDbType =
  ## Getter, print DB type identifier
  ##
  db.setTrackNewApi BaseDbTypeFn
  result = db.dbType
  db.ifTrackNewApi: debug newApiTxt, api, elapsed, result

proc parent*[T: CoreDbKvtRef |
                CoreDbCtxRef | CoreDbMptRef | CoreDbAccRef |
                CoreDbTxRef |
                CoreDbCaptRef |
                CoreDbErrorRef](
    child: T): CoreDbRef =
  ## Getter, common method for all sub-modules
  ##
  result = child.parent

proc backend*(dsc: CoreDbKvtRef | CoreDbMptRef | CoreDbAccRef): auto =
  ## Getter, retrieves the *raw* backend object for special/localised support.
  ##
  dsc.setTrackNewApi AnyBackendFn
  result = dsc.methods.backendFn()
  dsc.ifTrackNewApi: debug newApiTxt, api, elapsed

proc finish*(db: CoreDbRef; eradicate = false) =
  ## Database destructor. If the argument `eradicate` is set `false`, the
  ## database is left as-is and only the in-memory handlers are cleaned up.
  ##
  ## Otherwise the destructor is allowed to remove the database. This feature
  ## depends on the backend database. Currently, only the `AristoDbRocks` type
  ## backend removes the database on `true`.
  ##
  db.setTrackNewApi BaseFinishFn
  db.methods.destroyFn eradicate
  db.ifTrackNewApi: debug newApiTxt, api, elapsed
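
# Example tear-down sequences (sketch, grounded in the doc comment above):
#
#   db.finish()                  # release in-memory handlers, keep the data
#   db.finish(eradicate = true)  # additionally remove the database files,
#                                # backend permitting (e.g. `AristoDbRocks`)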

proc `$$`*(e: CoreDbErrorRef): string =
  ## Pretty print error symbol. Note that this directive may have side
  ## effects as it calls a backend function.
  ##
  e.setTrackNewApi ErrorPrintFn
|
Core db update storage root management for sub tries (#1964)
* Aristo: Re-phrase `LayerDelta` and `LayerFinal` as object references
why:
Avoids copying in some cases
* Fix copyright header
* Aristo: Verify `leafTie.root` function argument for `merge()` proc
why:
Zero root will lead to inconsistent DB entry
* Aristo: Update failure condition for hash labels compiler `hashify()`
why:
Node need not be rejected as long as links are on the schedule. In
that case, `redo[]` is to become `wff.base[]` at a later stage.
This amends an earlier fix, part of #1952 by also testing against
the target nodes of the `wff.base[]` sets.
* Aristo: Add storage root glue record to `hashify()` schedule
why:
An account leaf node might refer to a non-resolvable storage root ID.
Storage root node chains will end up at the storage root. So the link
`storage-root->account-leaf` needs an extra item in the schedule.
* Aristo: fix error code returned by `fetchPayload()`
details:
Final error code is implied by the error code form the `hikeUp()`
function.
* CoreDb: Discard `createOk` argument in API `getRoot()` function
why:
Not needed for the legacy DB. For the `Arsto` DB, a lazy approach is
implemented where a stprage root node is created on-the-fly.
* CoreDb: Prevent `$$` logging in some cases
why:
Logging the function `$$` is not useful when it is used for internal
use, i.e. retrieving an an error text for logging.
* CoreDb: Add `tryHashFn()` to API for pretty printing
why:
Pretty printing must not change the hashification status for the
`Aristo` DB. So there is an independent API wrapper for getting the
node hash which never updated the hashes.
* CoreDb: Discard `update` argument in API `hash()` function
why:
When calling the API function `hash()`, the latest state is always
wanted. For a version that uses the current state as-is without checking,
the function `tryHash()` was added to the backend.
* CoreDb: Update opaque vertex ID objects for the `Aristo` backend
why:
For `Aristo`, vID objects encapsulate a numeric `VertexID`
referencing a vertex (rather than a node hash as used on the
legacy backend.) For storage sub-tries, there might be no initial
vertex known when the descriptor is created. So opaque vertex ID
objects are supported without a valid `VertexID` which will be
initalised on-the-fly when the first item is merged.
* CoreDb: Add pretty printer for opaque vertex ID objects
* Cosmetics, printing profiling data
* CoreDb: Fix segfault in `Aristo` backend when creating MPT descriptor
why:
Missing initialisation error
* CoreDb: Allow MPT to inherit shared context on `Aristo` backend
why:
Creates descriptors with different storage roots for the same
shared `Aristo` DB descriptor.
* Cosmetics, update diagnostic message items for `Aristo` backend
* Fix Copyright year
2024-01-11 19:11:38 +00:00
|
|
|
result = e.prettyText()
|
2024-03-18 19:40:23 +00:00
|
|
|
e.ifTrackNewApi: debug newApiTxt, api, elapsed, result

# ------------------------------------------------------------------------------
# Public key-value table methods
# ------------------------------------------------------------------------------

proc newKvt*(db: CoreDbRef): CoreDbKvtRef =
  ## Constructor, will defect on failure.
  ##
  ## This function subscribes to the common base object shared with other
  ## KVT descriptors. Any changes are immediately visible to subscribers.
  ## On destruction (when the constructed object gets out of scope), changes
  ## are not saved to the backend database but are still cached and available.
  ##
  db.setTrackNewApi BaseNewKvtFn
  result = db.methods.newKvtFn().valueOr:
    raiseAssert error.prettyText()
  db.ifTrackNewApi: debug newApiTxt, api, elapsed
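
# A minimal usage sketch for the shared KVT column (illustrative only: `db` is
# assumed to be an open `CoreDbRef`, `key` and `val` byte sequences, and error
# handling is elided):
# ::
#   let kvt = db.newKvt()
#   discard kvt.put(key, val)
#   let data = kvt.get(key).valueOr:
#     EmptyBlob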

proc get*(kvt: CoreDbKvtRef; key: openArray[byte]): CoreDbRc[Blob] =
  ## This function always returns a non-empty `Blob` or an error code.
  kvt.setTrackNewApi KvtGetFn
  result = kvt.methods.getFn key
  kvt.ifTrackNewApi: debug newApiTxt, api, elapsed, key=key.toStr, result

proc len*(kvt: CoreDbKvtRef; key: openArray[byte]): CoreDbRc[int] =
  ## This function returns the size of the value associated with `key`.
  kvt.setTrackNewApi KvtLenFn
  result = kvt.methods.lenFn key
  kvt.ifTrackNewApi: debug newApiTxt, api, elapsed, key=key.toStr, result

proc getOrEmpty*(kvt: CoreDbKvtRef; key: openArray[byte]): CoreDbRc[Blob] =
  ## This function sort of mimics the behaviour of the legacy database
  ## returning an empty `Blob` if the argument `key` is not found on the
  ## database.
  ##
  kvt.setTrackNewApi KvtGetOrEmptyFn
  result = kvt.methods.getFn key
  if result.isErr and result.error.error == KvtNotFound:
    result = CoreDbRc[Blob].ok(EmptyBlob)
  kvt.ifTrackNewApi: debug newApiTxt, api, elapsed, key=key.toStr, result
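
# Illustrative sketch of the difference to `get()` (assuming a `kvt` descriptor
# and a `key` that is not in the database): `get()` returns a `KvtNotFound`
# error while `getOrEmpty()` succeeds with an empty value.
# ::
#   doAssert kvt.get(key).isErr
#   doAssert kvt.getOrEmpty(key).value == EmptyBlob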

proc del*(kvt: CoreDbKvtRef; key: openArray[byte]): CoreDbRc[void] =
  kvt.setTrackNewApi KvtDelFn
  result = kvt.methods.delFn key
  kvt.ifTrackNewApi: debug newApiTxt, api, elapsed, key=key.toStr, result

proc put*(
    kvt: CoreDbKvtRef;
    key: openArray[byte];
    val: openArray[byte];
      ): CoreDbRc[void] =
  kvt.setTrackNewApi KvtPutFn
  result = kvt.methods.putFn(key, val)
  kvt.ifTrackNewApi:
    debug newApiTxt, api, elapsed, key=key.toStr, val=val.toLenStr, result

proc hasKey*(kvt: CoreDbKvtRef; key: openArray[byte]): CoreDbRc[bool] =
  ## Would be named `contains` if it returned `bool` rather than `Result[]`.
  ##
  kvt.setTrackNewApi KvtHasKeyFn
  result = kvt.methods.hasKeyFn key
  kvt.ifTrackNewApi: debug newApiTxt, api, elapsed, key=key.toStr, result

# ------------------------------------------------------------------------------
# Public Merkle Patricia Tree context constructors and administration
# ------------------------------------------------------------------------------

proc ctx*(db: CoreDbRef): CoreDbCtxRef =
  ## Get currently active column context.
  ##
  db.setTrackNewApi BaseNewCtxFn
  result = db.methods.newCtxFn()
  db.ifTrackNewApi: debug newApiTxt, api, elapsed

proc swapCtx*(db: CoreDbRef; ctx: CoreDbCtxRef): CoreDbCtxRef =
  ## Activate argument context `ctx` and return the previously active column
  ## context. This function typically goes together with `forget()`. A valid
  ## scenario might look like
  ## ::
  ##   proc doSomething(db: CoreDbRef; ctx: CoreDbCtxRef) =
  ##     let saved = db.swapCtx ctx
  ##     defer: db.swapCtx(saved).forget()
  ##     ...
  ##
  db.setTrackNewApi BaseSwapCtxFn
  result = db.methods.swapCtxFn ctx
  db.ifTrackNewApi: debug newApiTxt, api, elapsed

proc forget*(ctx: CoreDbCtxRef) =
  ## Dispose `ctx` argument context and related columns created with this
  ## context. This function fails if `ctx` is the default context.
  ##
  ctx.setTrackNewApi CtxForgetFn
  ctx.methods.forgetFn(ctx)
  ctx.ifTrackNewApi: debug newApiTxt, api, elapsed

# ------------------------------------------------------------------------------
# Public functions for generic columns
# ------------------------------------------------------------------------------

proc getColumn*(
    ctx: CoreDbCtxRef;
    colType: CoreDbColType;
    clearData = false;
      ): CoreDbMptRef =
  ## ...
  ##
  ctx.setTrackNewApi CtxGetColumnFn
  result = ctx.methods.getColumnFn(ctx, colType, clearData)
  ctx.ifTrackNewApi: debug newApiTxt, api, colType, clearData, elapsed

proc fetch*(mpt: CoreDbMptRef; key: openArray[byte]): CoreDbRc[Blob] =
  ## Fetch data from the argument `mpt`. The function always returns a
  ## non-empty `Blob` or an error code.
  ##
  mpt.setTrackNewApi MptFetchFn
  result = mpt.methods.fetchFn(mpt, key)
  mpt.ifTrackNewApi: debug newApiTxt, api, elapsed, key=key.toStr, result

proc fetchOrEmpty*(mpt: CoreDbMptRef; key: openArray[byte]): CoreDbRc[Blob] =
  ## This function returns an empty `Blob` if the argument `key` is not found
  ## on the database.
  ##
  mpt.setTrackNewApi MptFetchOrEmptyFn
  result = mpt.methods.fetchFn(mpt, key)
  if result.isErr and result.error.error == MptNotFound:
    result = CoreDbRc[Blob].ok(EmptyBlob)
  mpt.ifTrackNewApi: debug newApiTxt, api, elapsed, key=key.toStr, result

proc delete*(mpt: CoreDbMptRef; key: openArray[byte]): CoreDbRc[void] =
  mpt.setTrackNewApi MptDeleteFn
  result = mpt.methods.deleteFn(mpt, key)
  mpt.ifTrackNewApi: debug newApiTxt, api, elapsed, key=key.toStr, result

proc merge*(
    mpt: CoreDbMptRef;
    key: openArray[byte];
    val: openArray[byte];
      ): CoreDbRc[void] =
  mpt.setTrackNewApi MptMergeFn
  result = mpt.methods.mergeFn(mpt, key, val)
  mpt.ifTrackNewApi:
    debug newApiTxt, api, elapsed, key=key.toStr, val=val.toLenStr, result

proc hasPath*(mpt: CoreDbMptRef; key: openArray[byte]): CoreDbRc[bool] =
  ## This function would be named `contains()` if it returned `bool` rather
  ## than a `Result[]`.
  ##
  mpt.setTrackNewApi MptHasPathFn
  result = mpt.methods.hasPathFn(mpt, key)
  mpt.ifTrackNewApi: debug newApiTxt, api, elapsed, key=key.toStr, result

proc state*(mpt: CoreDbMptRef; updateOk = false): CoreDbRc[Hash256] =
  ## This function retrieves the Merkle state hash of the argument
  ## database column (if available.)
  ##
  ## If the argument `updateOk` is set `true`, the Merkle hashes of the
  ## database will be updated first (if needed, at all).
  ##
  mpt.setTrackNewApi MptStateFn
  result = mpt.methods.stateFn(mpt, updateOk)
  mpt.ifTrackNewApi: debug newApiTxt, api, elapsed, updateOk, result
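
# Typical use of `state()` (illustrative sketch; `mpt` is a column obtained
# via `getColumn()` and a batch of `merge()` calls has just completed):
# ::
#   let root = mpt.state(updateOk = true).valueOr:
#     raiseAssert $$error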

# ------------------------------------------------------------------------------
# Public methods for accounts
# ------------------------------------------------------------------------------

proc getAccounts*(ctx: CoreDbCtxRef): CoreDbAccRef =
  ## Accounts column constructor, will defect on failure.
  ##
  ctx.setTrackNewApi CtxGetAccountsFn
  result = ctx.methods.getAccountsFn(ctx)
  ctx.ifTrackNewApi: debug newApiTxt, api, elapsed

# ----------- accounts ---------------

proc fetch*(
    acc: CoreDbAccRef;
    accPath: openArray[byte];
      ): CoreDbRc[CoreDbAccount] =
  ## Fetch the account data record for the particular account indexed by
  ## the key `accPath`.
  ##
  when EnableAccountKeyValidation:
    doAssert accPath.len == 32
  acc.setTrackNewApi AccFetchFn
  result = acc.methods.fetchFn(acc, accPath)
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr, result

proc delete*(
    acc: CoreDbAccRef;
    accPath: openArray[byte];
      ): CoreDbRc[void] =
  ## Delete the particular account indexed by the key `accPath`. This
  ## will also destroy an associated storage area.
  ##
  when EnableAccountKeyValidation:
    doAssert accPath.len == 32
  acc.setTrackNewApi AccDeleteFn
  result = acc.methods.deleteFn(acc, accPath)
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr, result

proc clearStorage*(
    acc: CoreDbAccRef;
    accPath: openArray[byte];
      ): CoreDbRc[void] =
  ## Delete all data slots from the storage area associated with the
  ## particular account indexed by the key `accPath`.
  ##
  when EnableAccountKeyValidation:
    doAssert accPath.len == 32
  acc.setTrackNewApi AccClearStorageFn
  result = acc.methods.clearStorageFn(acc, accPath)
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr, result

proc merge*(
    acc: CoreDbAccRef;
    accPath: openArray[byte];
    accRec: CoreDbAccount;
      ): CoreDbRc[void] =
  ## Add or update the argument account data record `accRec`. Note that the
  ## `accPath` argument uniquely identifies the particular account address.
  ##
  acc.setTrackNewApi AccMergeFn
  result = acc.methods.mergeFn(acc, accPath, accRec)
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr, result
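
# Account roundtrip sketch (illustrative; `acc` is the accounts column and
# `accPath` a 32 byte account key; the default-initialised fallback record
# is an assumption of this sketch):
# ::
#   var accRec = acc.fetch(accPath).valueOr:
#     CoreDbAccount()          # fall back to an empty record
#   accRec.nonce.inc
#   discard acc.merge(accPath, accRec)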

proc hasPath*(
    acc: CoreDbAccRef;
    accPath: openArray[byte];
      ): CoreDbRc[bool] =
  ## Would be named `contains` if it returned `bool` rather than `Result[]`.
  ##
  when EnableAccountKeyValidation:
    doAssert accPath.len == 32
  acc.setTrackNewApi AccHasPathFn
  result = acc.methods.hasPathFn(acc, accPath)
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr, result

proc state*(acc: CoreDbAccRef; updateOk = false): CoreDbRc[Hash256] =
  ## This function retrieves the Merkle state hash of the accounts
  ## column (if available.)
  ##
  ## If the argument `updateOk` is set `true`, the Merkle hashes of the
  ## database will be updated first (if needed, at all).
  ##
  acc.setTrackNewApi AccStateFn
  result = acc.methods.stateFn(acc, updateOk)
  acc.ifTrackNewApi: debug newApiTxt, api, elapsed, updateOk, result

# ------------ storage ---------------

proc slotFetch*(
    acc: CoreDbAccRef;
    accPath: openArray[byte];
    slot: openArray[byte];
      ): CoreDbRc[Blob] =
  ## Like `fetch()` but with cascaded index `(accPath,slot)`.
  when EnableAccountKeyValidation:
    doAssert accPath.len == 32
  acc.setTrackNewApi AccSlotFetchFn
  result = acc.methods.slotFetchFn(acc, accPath, slot)
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr,
      slot=slot.toStr, result

proc slotDelete*(
    acc: CoreDbAccRef;
    accPath: openArray[byte];
    slot: openArray[byte];
      ): CoreDbRc[void] =
  ## Like `delete()` but with cascaded index `(accPath,slot)`.
  when EnableAccountKeyValidation:
    doAssert accPath.len == 32
  acc.setTrackNewApi AccSlotDeleteFn
  result = acc.methods.slotDeleteFn(acc, accPath, slot)
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr,
      slot=slot.toStr, result

proc slotHasPath*(
    acc: CoreDbAccRef;
    accPath: openArray[byte];
    slot: openArray[byte];
      ): CoreDbRc[bool] =
  ## Like `hasPath()` but with cascaded index `(accPath,slot)`.
  when EnableAccountKeyValidation:
    doAssert accPath.len == 32
  acc.setTrackNewApi AccSlotHasPathFn
  result = acc.methods.slotHasPathFn(acc, accPath, slot)
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr,
      slot=slot.toStr, result

proc slotMerge*(
    acc: CoreDbAccRef;
    accPath: openArray[byte];
    slot: openArray[byte];
    data: openArray[byte];
      ): CoreDbRc[void] =
  ## Like `merge()` but with cascaded index `(accPath,slot)`.
  when EnableAccountKeyValidation:
    doAssert accPath.len == 32
  acc.setTrackNewApi AccSlotMergeFn
  result = acc.methods.slotMergeFn(acc, accPath, slot, data)
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr,
      slot=slot.toStr, result
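
# Storage slot roundtrip sketch (illustrative; `acc`, a 32 byte `accPath`,
# and byte sequences `slot` and `data` are assumed to be in scope):
# ::
#   discard acc.slotMerge(accPath, slot, data)
#   let stored = acc.slotFetch(accPath, slot).valueOr:
#     EmptyBlob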

proc slotState*(
    acc: CoreDbAccRef;
    accPath: openArray[byte];
    updateOk = false;
      ): CoreDbRc[Hash256] =
  ## This function retrieves the Merkle state hash of the storage data
  ## column (if available) related to the account indexed by the key
  ## `accPath`.
  ##
  ## If the argument `updateOk` is set `true`, the Merkle hashes of the
  ## database will be updated first (if needed, at all).
  ##
  when EnableAccountKeyValidation:
    doAssert accPath.len == 32
  acc.setTrackNewApi AccSlotStateFn
  result = acc.methods.slotStateFn(acc, accPath, updateOk)
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr, updateOk, result

proc slotStateEmpty*(
    acc: CoreDbAccRef;
    accPath: openArray[byte];
      ): CoreDbRc[bool] =
  ## This function returns `true` if the storage data column is empty or
  ## missing.
  ##
  when EnableAccountKeyValidation:
    doAssert accPath.len == 32
  acc.setTrackNewApi AccSlotStateEmptyFn
  result = acc.methods.slotStateEmptyFn(acc, accPath)
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr, result

proc slotStateEmptyOrVoid*(
    acc: CoreDbAccRef;
    accPath: openArray[byte];
      ): bool =
  ## Convenience wrapper, returns `true` where `slotStateEmpty()` would fail.
  when EnableAccountKeyValidation:
    doAssert accPath.len == 32
  acc.setTrackNewApi AccSlotStateEmptyOrVoidFn
  result = acc.methods.slotStateEmptyFn(acc, accPath).valueOr: true
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr, result
|
2024-06-27 09:01:26 +00:00
|
|
|
|
|
|
|
# ------------- other ----------------
proc recast*(
    acc: CoreDbAccRef;
    accPath: openArray[byte];
    accRec: CoreDbAccount;
    updateOk = false;
      ): CoreDbRc[Account] =
  ## Convert the argument account record `accRec` to the portable Ethereum
  ## representation of an account. This conversion may fail if the storage
  ## column state hash (see `slotState()` above) is currently unavailable.
  ##
  when EnableAccountKeyValidation:
    doAssert accPath.len == 32
  acc.setTrackNewApi EthAccRecastFn
  let rc = acc.methods.slotStateFn(acc, accPath, updateOk)
  result =
    if rc.isOk:
      ok Account(
        nonce:       accRec.nonce,
        balance:     accRec.balance,
        codeHash:    accRec.codeHash,
        storageRoot: rc.value)
    else:
      err(rc.error)
  acc.ifTrackNewApi:
    let slotState = if rc.isOk: rc.value.toStr else: "n/a"
    debug newApiTxt, api, elapsed, accPath=accPath.toStr, slotState, result

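# A minimal usage sketch for `recast()` (illustrative only; `acc`, `accPath`
# and the `CoreDbAccount` record `accRec` are assumed to come from a previous
# account fetch):
#
#   let ethAcc = acc.recast(accPath, accRec, updateOk=true).valueOr:
#     return err(error)            # storage state hash was unavailable
#
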
# ------------------------------------------------------------------------------
# Public transaction related methods
# ------------------------------------------------------------------------------

proc level*(db: CoreDbRef): int =
  ## Retrieve transaction level (zero if there is no pending transaction).
  ##
  db.setTrackNewApi BaseLevelFn
  result = db.methods.levelFn()
  db.ifTrackNewApi: debug newApiTxt, api, elapsed, result

proc persistent*(
    db: CoreDbRef;
      ): CoreDbRc[void] =
  ## For the legacy database, this function has no effect and always succeeds.
  ## It will nevertheless return a discardable error if there is a pending
  ## transaction (i.e. `0 < db.level()`.)
  ##
  ## Otherwise, cached data from the `Kvt`, `Mpt`, and `Acc` descriptors are
  ## stored on the persistent database (if any). This requires that there
  ## is no transaction pending.
  ##
  db.setTrackNewApi BasePersistentFn
  result = db.methods.persistentFn Opt.none(BlockNumber)
  db.ifTrackNewApi: debug newApiTxt, api, elapsed, result

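# A minimal persist sketch (illustrative only; `db` is assumed to be an
# initialised `CoreDbRef` with no transaction pending, i.e. `db.level() == 0`):
#
#   let rc = db.persistent()
#   if rc.isErr:
#     debug "persisting cached data failed", error=rc.error
#
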
proc persistent*(
    db: CoreDbRef;
    blockNumber: BlockNumber;
      ): CoreDbRc[void] {.discardable.} =
  ## Variant of `persistent()` which stores a block number within the recovery
  ## journal record. This record will be addressable by the `blockNumber` (e.g.
  ## for recovery.) The argument block number `blockNumber` must be greater
  ## than all previously stored block numbers.
  ##
  ## The function is intended to be used in a way so that the argument block
  ## number `blockNumber` is associated with the state root to be recovered
  ## from a particular journal entry. This means that the correct block number
  ## will be the one of the state *before* a state change takes place. Using
  ## it that way, `persistent()` must only be run after some blocks were fully
  ## executed.
  ##
  ## Example:
  ## ::
  ##    # Save block number for the current state
  ##    let stateBlockNumber = db.getCanonicalHead().blockNumber
  ##    ..
  ##    # Process blocks
  ##    ..
  ##    db.persistent(stateBlockNumber)
  ##
  db.setTrackNewApi BasePersistentFn
  result = db.methods.persistentFn Opt.some(blockNumber)
  db.ifTrackNewApi: debug newApiTxt, api, elapsed, blockNumber, result

proc newTransaction*(db: CoreDbRef): CoreDbTxRef =
  ## Constructor
  ##
  db.setTrackNewApi BaseNewTxFn
  result = db.methods.beginFn()
  db.ifTrackNewApi:
    debug newApiTxt, api, elapsed, newLevel=db.methods.levelFn()

proc level*(tx: CoreDbTxRef): int =
  ## Return the (positive) transaction level for the argument `tx`.
  ##
  tx.setTrackNewApi TxLevelFn
  result = tx.methods.levelFn()
  tx.ifTrackNewApi: debug newApiTxt, api, elapsed, result

proc commit*(tx: CoreDbTxRef) =
  tx.setTrackNewApi TxCommitFn:
    let prvLevel {.used.} = tx.methods.levelFn()
  tx.methods.commitFn()
  tx.ifTrackNewApi: debug newApiTxt, api, elapsed, prvLevel

proc rollback*(tx: CoreDbTxRef) =
  tx.setTrackNewApi TxRollbackFn:
    let prvLevel {.used.} = tx.methods.levelFn()
  tx.methods.rollbackFn()
  tx.ifTrackNewApi: debug newApiTxt, api, elapsed, prvLevel

proc dispose*(tx: CoreDbTxRef) =
  tx.setTrackNewApi TxDisposeFn:
    let prvLevel {.used.} = tx.methods.levelFn()
  tx.methods.disposeFn()
  tx.ifTrackNewApi: debug newApiTxt, api, elapsed, prvLevel

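# A minimal transaction lifecycle sketch (illustrative only; `db` is assumed
# to be an initialised `CoreDbRef`):
#
#   let tx = db.newTransaction()   # tx.level() is now positive
#   try:
#     ...                          # mutate the database via `db`
#     tx.commit()                  # keep the changes
#   except CatchableError:
#     tx.rollback()                # discard the changes
#
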
# ------------------------------------------------------------------------------
# Public tracer methods
# ------------------------------------------------------------------------------

when false: # currently disabled
  proc newCapture*(
      db: CoreDbRef;
      flags: set[CoreDbCaptFlags] = {};
        ): CoreDbRc[CoreDbCaptRef] =
    ## Trace constructor providing an overlay on top of the argument database
    ## `db`. This overlay provides a replacement database handle that can be
    ## retrieved via `db.recorder()` (which can in turn be overlaid.) While
    ## running, the overlay stores data in a log-table which can be retrieved
    ## via `db.logDb()`.
    ##
    ## Caveat:
    ##   The original database argument `db` should not be used while the
    ##   tracer is active (i.e. exists as overlay). The behaviour for this
    ##   situation is undefined and depends on the backend implementation of
    ##   the tracer.
    ##
    db.setTrackNewApi BaseNewCaptureFn
    result = db.methods.newCaptureFn flags
    db.ifTrackNewApi: debug newApiTxt, api, elapsed, result

  proc recorder*(cpt: CoreDbCaptRef): CoreDbRef =
    ## Getter, returns a tracer replacement handle to be used as new database.
    ## It records every action like fetch, store, hasKey, hasPath and delete.
    ## This descriptor can be superseded by a new overlay tracer (using
    ## `newCapture()`, again.)
    ##
    ## Caveat:
    ##   Unless the descriptor `cpt` refers to the top level overlay tracer,
    ##   the result is undefined and depends on the backend implementation of
    ##   the tracer.
    ##
    cpt.setTrackNewApi CptRecorderFn
    result = cpt.methods.recorderFn()
    cpt.ifTrackNewApi: debug newApiTxt, api, elapsed

  proc logDb*(cp: CoreDbCaptRef): TableRef[Blob,Blob] =
    ## Getter, returns the logger table for the overlay tracer database.
    ##
    ## Caveat:
    ##   Unless the descriptor `cp` refers to the top level overlay tracer,
    ##   the result is undefined and depends on the backend implementation of
    ##   the tracer.
    ##
    cp.setTrackNewApi CptLogDbFn
    result = cp.methods.logDbFn()
    cp.ifTrackNewApi: debug newApiTxt, api, elapsed

  proc flags*(cp: CoreDbCaptRef): set[CoreDbCaptFlags] =
    ## Getter
    ##
    cp.setTrackNewApi CptFlagsFn
    result = cp.methods.getFlagsFn()
    cp.ifTrackNewApi: debug newApiTxt, api, elapsed, result

  proc forget*(cp: CoreDbCaptRef) =
    ## Explicitly stop recording the current tracer instance and reset to
    ## previous level.
    ##
    cp.setTrackNewApi CptForgetFn
    cp.methods.forgetFn()
    cp.ifTrackNewApi: debug newApiTxt, api, elapsed

# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------