# Nimbus
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed under either of
#  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
#    http://www.apache.org/licenses/LICENSE-2.0)
#  * MIT license ([LICENSE-MIT](LICENSE-MIT) or
#    http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed except
# according to those terms.

{.push raises: [].}

import
  std/typetraits,
  eth/common,
  "../.."/[constants, errors],
  ../kvt,
  ../aristo,
  ./base/[api_tracking, base_desc]

const
  EnableApiTracking = false
    ## When enabled, functions using this tracking facility need to import
    ## `chronicles`, as well. Also, some `func` designators might need to
    ## be changed to `proc` for possible side effects.
    ##
    ## Tracking noise is then enabled by setting the flag `trackNewApi` to
    ## `true` in the `CoreDbRef` descriptor.

  EnableApiProfiling = true
    ## Enables functions profiling if `EnableApiTracking` is also set `true`.

  EnableApiJumpTable* = false
    ## This flag enables the functions jump table even if `EnableApiTracking`
    ## and `EnableApiProfiling` are set `false`. This should be used for
    ## debugging only.

  AutoValidateDescriptors = defined(release).not
    ## No validation needed for the production suite.

export
  CoreDbAccRef,
  CoreDbAccount,
  CoreDbApiError,
  #CoreDbCaptFlags,
  #CoreDbCaptRef,
  CoreDbColType,
  CoreDbCtxRef,
  CoreDbErrorCode,
  CoreDbErrorRef,
  CoreDbFnInx,
  CoreDbKvtRef,
  CoreDbMptRef,
  CoreDbPersistentTypes,
  CoreDbProfListRef,
  CoreDbRef,
  CoreDbTxRef,
  CoreDbType

const
  CoreDbEnableApiTracking* = EnableApiTracking
  CoreDbEnableApiProfiling* = EnableApiTracking and EnableApiProfiling
  CoreDbEnableApiJumpTable* =
    CoreDbEnableApiTracking or CoreDbEnableApiProfiling or EnableApiJumpTable

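# Note: the compile-time switches above drive the API wrappers below. An
# illustrative sketch (not a recommended setting) for getting API call
# logging would be to flip `EnableApiTracking` to `true` here and then
# enable the noise per descriptor at run time:
# ::
#   const EnableApiTracking = true        # this module, see above
#   ...
#   db.trackNewApi = true                 # flag in the `CoreDbRef` descriptor
#
# With `EnableApiProfiling` also set `true`, per-function timing data are
# collected in `db.profTab` (see `bless()` below.)
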
when AutoValidateDescriptors:
  import ./base/validate

when CoreDbEnableApiJumpTable:
  discard
else:
  import
    ../aristo/[
      aristo_delete, aristo_desc, aristo_fetch, aristo_merge, aristo_tx],
    ../kvt/[kvt_desc, kvt_utils, kvt_tx]

# More settings
const
  logTxt = "CoreDb "
  newApiTxt = logTxt & "API"

# Annotation helpers
{.pragma: apiRaise, gcsafe, raises: [CoreDbApiError].}
{.pragma: catchRaise, gcsafe, raises: [CatchableError].}

# ------------------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------------------

when CoreDbEnableApiTracking:
  when CoreDbEnableApiProfiling:
    {.warning: "*** Provided API profiling for CoreDB (disabled by default)".}
  else:
    {.warning: "*** Provided API logging for CoreDB (disabled by default)".}

  import
    std/times,
    chronicles

  proc `$`[T](rc: CoreDbRc[T]): string = rc.toStr
  proc `$`(q: set[CoreDbCaptFlags]): string = q.toStr
  proc `$`(t: Duration): string = t.toStr
  proc `$`(e: EthAddress): string = e.toStr
  proc `$`(h: Hash256): string = h.toStr

template setTrackNewApi(
    w: CoreDbApiTrackRef;
    s: static[CoreDbFnInx];
    code: untyped;
      ) =
  ## Template with code section that will be discarded if logging is
  ## disabled at compile time when `EnableApiTracking` is `false`.
  when CoreDbEnableApiTracking:
    w.beginNewApi(s)
    code
  const api {.inject,used.} = s

template setTrackNewApi*(
    w: CoreDbApiTrackRef;
    s: static[CoreDbFnInx];
      ) =
  w.setTrackNewApi(s):
    discard

template ifTrackNewApi*(w: CoreDbApiTrackRef; code: untyped) =
  when EnableApiTracking:
    w.endNewApiIf:
      code

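# Typical wrapper pattern used throughout this module (illustrative sketch
# only; `SomeFn` stands for one of the `CoreDbFnInx` enum values and the
# `elapsed` symbol is assumed to be provided by the `api_tracking` helpers):
# ::
#   proc someApiFn(db: CoreDbRef): int =
#     db.setTrackNewApi SomeFn            # starts the clock, injects `api`
#     result = ...                        # the real work
#     db.ifTrackNewApi:                   # compiled out unless tracking is on
#       debug newApiTxt, api, elapsed, result
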
# ------------------------------------------------------------------------------
# Private KVT helpers
# ------------------------------------------------------------------------------

template kvt(dsc: CoreDbKvtRef): KvtDbRef =
  dsc.distinctBase.kvt

template ctx(kvt: CoreDbKvtRef): CoreDbCtxRef =
  kvt.distinctBase

# ---------------

template call(api: KvtApiRef; fn: untyped; args: varArgs[untyped]): untyped =
  when CoreDbEnableApiJumpTable:
    api.fn(args)
  else:
    fn(args)

template call(kvt: CoreDbKvtRef; fn: untyped; args: varArgs[untyped]): untyped =
  kvt.distinctBase.parent.kvtApi.call(fn, args)
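
# How the `call()` helpers resolve (illustrative sketch): with
# `CoreDbEnableApiJumpTable` set, a call goes through the function table held
# by the parent descriptor, otherwise it binds statically to the imported
# backend function, e.g.
# ::
#   kvt.call(get, kvt.kvt, key)
#     # => kvt.distinctBase.parent.kvtApi.get(kvt.kvt, key)   -- jump table
#     # => get(kvt.kvt, key)                                  -- direct call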

# ---------------

func toError(e: KvtError; s: string; error = Unspecified): CoreDbErrorRef =
  CoreDbErrorRef(
    error: error,
    ctx: s,
    isAristo: false,
    kErr: e)

# ------------------------------------------------------------------------------
# Private Aristo helpers
# ------------------------------------------------------------------------------

template mpt(dsc: CoreDbAccRef | CoreDbMptRef): AristoDbRef =
  dsc.distinctBase.mpt

template mpt(tx: CoreDbTxRef): AristoDbRef =
  tx.ctx.mpt

template ctx(acc: CoreDbAccRef): CoreDbCtxRef =
  acc.distinctBase

template rootID(mpt: CoreDbMptRef): VertexID =
  VertexID(CtGeneric)

# ---------------

template call(api: AristoApiRef; fn: untyped; args: varArgs[untyped]): untyped =
  when CoreDbEnableApiJumpTable:
    api.fn(args)
  else:
    fn(args)

template call(
    acc: CoreDbAccRef | CoreDbMptRef;
    fn: untyped;
    args: varArgs[untyped];
      ): untyped =
  acc.distinctBase.parent.ariApi.call(fn, args)

# ---------------

func toError(e: AristoError; s: string; error = Unspecified): CoreDbErrorRef =
  CoreDbErrorRef(
    error: error,
    ctx: s,
    isAristo: true,
    aErr: e)

# ------------------------------------------------------------------------------
# Public constructor helper
# ------------------------------------------------------------------------------

proc bless*(db: CoreDbRef): CoreDbRef =
  ## Verify descriptor
  when AutoValidateDescriptors:
    db.validate
  when CoreDbEnableApiProfiling:
    db.profTab = CoreDbProfListRef.init()
  db

proc bless*(db: CoreDbRef; ctx: CoreDbCtxRef): CoreDbCtxRef =
  ctx.parent = db
  when AutoValidateDescriptors:
    ctx.validate
  ctx

proc bless*(ctx: CoreDbCtxRef; dsc: CoreDbMptRef | CoreDbTxRef): auto =
  dsc.ctx = ctx
  when AutoValidateDescriptors:
    dsc.validate
  dsc

# ------------------------------------------------------------------------------
# Public context constructors and administration
# ------------------------------------------------------------------------------

proc ctx*(db: CoreDbRef): CoreDbCtxRef =
  ## Get the default context. This is a base descriptor which provides the
  ## KVT, MPT, the accounts descriptors as well as the transaction descriptor.
  ## They are all kept in sync, i.e. `persistent()` will store exactly this
  ## context.
  ##
  db.defCtx

proc swapCtx*(db: CoreDbRef; ctx: CoreDbCtxRef): CoreDbCtxRef =
  ## Activate argument context `ctx` as default and return the previously
  ## active context. This function typically goes together with `forget()`. A
  ## valid scenario might look like
  ## ::
  ##   proc doSomething(db: CoreDbRef; ctx: CoreDbCtxRef) =
  ##     let saved = db.swapCtx ctx
  ##     defer: db.swapCtx(saved).forget()
  ##     ...
  ##
  doAssert not ctx.isNil
  db.setTrackNewApi BaseSwapCtxFn
  result = db.defCtx

  # Set read-write access and install
  CoreDbAccRef(ctx).call(reCentre, db.ctx.mpt).isOkOr:
    raiseAssert $api & " failed: " & $error
  CoreDbKvtRef(ctx).call(reCentre, db.ctx.kvt).isOkOr:
    raiseAssert $api & " failed: " & $error
  db.defCtx = ctx
  db.ifTrackNewApi: debug newApiTxt, api, elapsed

proc forget*(ctx: CoreDbCtxRef) =
  ## Dispose `ctx` argument context and related columns created with this
  ## context. This function fails if `ctx` is the default context.
  ##
  ctx.setTrackNewApi CtxForgetFn
  CoreDbAccRef(ctx).call(forget, ctx.mpt).isOkOr:
    raiseAssert $api & ": " & $error
  CoreDbKvtRef(ctx).call(forget, ctx.kvt).isOkOr:
    raiseAssert $api & ": " & $error
  ctx.ifTrackNewApi: debug newApiTxt, api, elapsed

# ------------------------------------------------------------------------------
# Public main descriptor methods
# ------------------------------------------------------------------------------

proc finish*(db: CoreDbRef; eradicate = false) =
  ## Database destructor. If the argument `eradicate` is set `false`, the
  ## database is left as-is and only the in-memory handlers are cleaned up.
  ##
  ## Otherwise the destructor is allowed to remove the database. This feature
  ## depends on the backend database. Currently, only the `AristoDbRocks` type
  ## backend removes the database on `true`.
  ##
  db.setTrackNewApi BaseFinishFn
  CoreDbKvtRef(db.ctx).call(finish, db.ctx.kvt, eradicate)
  CoreDbAccRef(db.ctx).call(finish, db.ctx.mpt, eradicate)
  db.ifTrackNewApi: debug newApiTxt, api, elapsed

proc `$$`*(e: CoreDbErrorRef): string =
  ## Pretty print error symbol. Note that this directive may have side effects
  ## as it calls a backend function.
  ##
  if e.isNil: "$ø" else: e.toStr()

proc persistent*(
    db: CoreDbRef;
    blockNumber: BlockNumber;
      ): CoreDbRc[void]
      {.discardable.} =
  ## This function stores cached data from the default context (see `ctx()`
  ## below) to the persistent database.
  ##
  ## It also stores the argument block number `blockNumber` as a state record
  ## which can be retrieved via `stateBlockNumber()`.
  ##
  db.setTrackNewApi BasePersistentFn
  block body:
    block:
      let rc = CoreDbKvtRef(db.ctx).call(persist, db.ctx.kvt)
      if rc.isOk or rc.error == TxPersistDelayed:
        # The latter clause is OK: Piggybacking on `Aristo` backend
        discard
      elif CoreDbKvtRef(db.ctx).call(level, db.ctx.kvt) != 0:
        result = err(rc.error.toError($api, TxPending))
        break body
      else:
        result = err(rc.error.toError $api)
        break body
    block:
      let rc = CoreDbAccRef(db.ctx).call(persist, db.ctx.mpt, blockNumber)
      if rc.isOk:
        discard
      elif CoreDbAccRef(db.ctx).call(level, db.ctx.mpt) != 0:
        result = err(rc.error.toError($api, TxPending))
        break body
      else:
        result = err(rc.error.toError $api)
        break body
    result = ok()
  db.ifTrackNewApi: debug newApiTxt, api, elapsed, blockNumber, result

proc stateBlockNumber*(db: CoreDbRef): BlockNumber =
  ## This function returns the block number stored with the latest `persist()`
  ## directive.
  ##
  db.setTrackNewApi BaseStateBlockNumberFn
  result = block:
    let rc = CoreDbAccRef(db.ctx).call(fetchLastSavedState, db.ctx.mpt)
    if rc.isOk:
      rc.value.serial.BlockNumber
    else:
      0u64
  db.ifTrackNewApi: debug newApiTxt, api, elapsed, result
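
# Usage sketch (illustrative only; `db` is assumed to be an initialised
# `CoreDbRef` and `blockNumber` the block just processed):
# ::
#   db.persistent(blockNumber).isOkOr:
#     raiseAssert "persistent() failed: " & $$error
#   doAssert db.stateBlockNumber() == blockNumber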

# ------------------------------------------------------------------------------
# Public key-value table methods
# ------------------------------------------------------------------------------

proc getKvt*(ctx: CoreDbCtxRef): CoreDbKvtRef =
  ## This function subscribes to the common base object shared with other
  ## KVT descriptors. Any changes are immediately visible to subscribers.
  ## On destruction (when the constructed object gets out of scope), changes
  ## are not saved to the backend database but are still cached and available.
  ##
  CoreDbKvtRef(ctx)

# ----------- KVT ---------------

proc get*(kvt: CoreDbKvtRef; key: openArray[byte]): CoreDbRc[Blob] =
  ## This function always returns a non-empty `Blob` or an error code.
  kvt.setTrackNewApi KvtGetFn
  result = block:
    let rc = kvt.call(get, kvt.kvt, key)
    if rc.isOk:
      ok(rc.value)
    elif rc.error == GetNotFound:
      err(rc.error.toError($api, KvtNotFound))
    else:
      err(rc.error.toError $api)
  kvt.ifTrackNewApi: debug newApiTxt, api, elapsed, key=key.toStr, result

proc getOrEmpty*(kvt: CoreDbKvtRef; key: openArray[byte]): CoreDbRc[Blob] =
  ## Variant of `get()` returning an empty `Blob` if the key is not found
  ## on the database.
  ##
  kvt.setTrackNewApi KvtGetOrEmptyFn
  result = block:
    let rc = kvt.call(get, kvt.kvt, key)
    if rc.isOk:
      ok(rc.value)
    elif rc.error == GetNotFound:
      CoreDbRc[Blob].ok(EmptyBlob)
    else:
      err(rc.error.toError $api)
  kvt.ifTrackNewApi: debug newApiTxt, api, elapsed, key=key.toStr, result

proc len*(kvt: CoreDbKvtRef; key: openArray[byte]): CoreDbRc[int] =
  ## This function returns the size of the value associated with `key`.
  kvt.setTrackNewApi KvtLenFn
  result = block:
    let rc = kvt.call(len, kvt.kvt, key)
    if rc.isOk:
      ok(rc.value)
    elif rc.error == GetNotFound:
      err(rc.error.toError($api, KvtNotFound))
    else:
      err(rc.error.toError $api)
  kvt.ifTrackNewApi: debug newApiTxt, api, elapsed, key=key.toStr, result

proc del*(kvt: CoreDbKvtRef; key: openArray[byte]): CoreDbRc[void] =
  kvt.setTrackNewApi KvtDelFn
  result = block:
    let rc = kvt.call(del, kvt.kvt, key)
    if rc.isOk:
      ok()
    else:
      err(rc.error.toError $api)
  kvt.ifTrackNewApi: debug newApiTxt, api, elapsed, key=key.toStr, result

proc put*(
    kvt: CoreDbKvtRef;
    key: openArray[byte];
    val: openArray[byte];
      ): CoreDbRc[void] =
  kvt.setTrackNewApi KvtPutFn
  result = block:
    let rc = kvt.call(put, kvt.kvt, key, val)
    if rc.isOk:
      ok()
    else:
      err(rc.error.toError $api)
  kvt.ifTrackNewApi:
    debug newApiTxt, api, elapsed, key=key.toStr, val=val.toLenStr, result

proc hasKey*(kvt: CoreDbKvtRef; key: openArray[byte]): CoreDbRc[bool] =
  ## Would be named `contains` if it returned `bool` rather than `Result[]`.
  ##
  kvt.setTrackNewApi KvtHasKeyFn
  result = block:
    let rc = kvt.call(hasKey, kvt.kvt, key)
    if rc.isOk:
      ok(rc.value)
    else:
      err(rc.error.toError $api)
  kvt.ifTrackNewApi: debug newApiTxt, api, elapsed, key=key.toStr, result
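
# Key-value table usage sketch (illustrative only; `db` is assumed to be an
# initialised `CoreDbRef`, `key` and `val` arbitrary byte sequences):
# ::
#   let kvt = db.ctx.getKvt()
#   kvt.put(key, val).isOkOr:
#     raiseAssert "put() failed: " & $$error
#   doAssert kvt.hasKey(key).expect("hasKey() works")
#   let data = kvt.getOrEmpty(key).valueOr:
#     raiseAssert "getOrEmpty() failed: " & $$error
#   doAssert data == val
#   discard kvt.del(key)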

# ------------------------------------------------------------------------------
# Public functions for generic columns
# ------------------------------------------------------------------------------

proc getGeneric*(
    ctx: CoreDbCtxRef;
    clearData = false;
      ): CoreDbMptRef =
  ## Get a generic MPT, viewed as a column
  ##
  ctx.setTrackNewApi CtxGetGenericFn
  result = CoreDbMptRef(ctx)
  if clearData:
    result.call(deleteGenericTree, ctx.mpt, result.rootID).isOkOr:
      raiseAssert $api & ": " & $error
  ctx.ifTrackNewApi: debug newApiTxt, api, clearData, elapsed

# ----------- generic MPT ---------------

proc fetch*(mpt: CoreDbMptRef; key: openArray[byte]): CoreDbRc[Blob] =
  ## Fetch data from the argument `mpt`. The function always returns a
  ## non-empty `Blob` or an error code.
  ##
  mpt.setTrackNewApi MptFetchFn
  result = block:
    let rc = mpt.call(fetchGenericData, mpt.mpt, mpt.rootID, key)
    if rc.isOk:
      ok(rc.value)
    elif rc.error == FetchPathNotFound:
      err(rc.error.toError($api, MptNotFound))
    else:
      err(rc.error.toError $api)
  mpt.ifTrackNewApi: debug newApiTxt, api, elapsed, key=key.toStr, result

proc fetchOrEmpty*(mpt: CoreDbMptRef; key: openArray[byte]): CoreDbRc[Blob] =
  ## This function returns an empty `Blob` if the argument `key` is not found
  ## on the database.
  ##
  mpt.setTrackNewApi MptFetchOrEmptyFn
  result = block:
    let rc = mpt.call(fetchGenericData, mpt.mpt, mpt.rootID, key)
    if rc.isOk:
      ok(rc.value)
    elif rc.error == FetchPathNotFound:
      CoreDbRc[Blob].ok(EmptyBlob)
    else:
      err(rc.error.toError $api)
  mpt.ifTrackNewApi: debug newApiTxt, api, elapsed, key=key.toStr, result

proc delete*(mpt: CoreDbMptRef; key: openArray[byte]): CoreDbRc[void] =
  mpt.setTrackNewApi MptDeleteFn
  result = block:
    let rc = mpt.call(deleteGenericData, mpt.mpt, mpt.rootID, key)
    if rc.isOk:
      ok()
    elif rc.error == DelPathNotFound:
      err(rc.error.toError($api, MptNotFound))
    else:
      err(rc.error.toError $api)
  mpt.ifTrackNewApi: debug newApiTxt, api, elapsed, key=key.toStr, result

proc merge*(
    mpt: CoreDbMptRef;
    key: openArray[byte];
    val: openArray[byte];
      ): CoreDbRc[void] =
  mpt.setTrackNewApi MptMergeFn
  result = block:
    let rc = mpt.call(mergeGenericData, mpt.mpt, mpt.rootID, key, val)
    if rc.isOk:
      ok()
    else:
      err(rc.error.toError $api)
  mpt.ifTrackNewApi:
    debug newApiTxt, api, elapsed, key=key.toStr, val=val.toLenStr, result

proc hasPath*(mpt: CoreDbMptRef; key: openArray[byte]): CoreDbRc[bool] =
  ## This function would be named `contains()` if it returned `bool` rather
  ## than a `Result[]`.
  ##
  mpt.setTrackNewApi MptHasPathFn
  result = block:
    let rc = mpt.call(hasPathGeneric, mpt.mpt, mpt.rootID, key)
    if rc.isOk:
      ok(rc.value)
    else:
      err(rc.error.toError $api)
  mpt.ifTrackNewApi: debug newApiTxt, api, elapsed, key=key.toStr, result

proc state*(mpt: CoreDbMptRef; updateOk = false): CoreDbRc[Hash256] =
  ## This function retrieves the Merkle state hash of the argument
  ## database column (if available.)
  ##
  ## If the argument `updateOk` is set `true`, the Merkle hashes of the
  ## database will be updated first (if needed, at all).
  ##
  mpt.setTrackNewApi MptStateFn
  result = block:
    let rc = mpt.call(fetchGenericState, mpt.mpt, mpt.rootID, updateOk)
    if rc.isOk:
      ok(rc.value)
    else:
      err(rc.error.toError $api)
  mpt.ifTrackNewApi: debug newApiTxt, api, elapsed, updateOk, result
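
# Generic column usage sketch (illustrative only; `db` is assumed to be an
# initialised `CoreDbRef`, `key` and `val` arbitrary byte sequences):
# ::
#   let mpt = db.ctx.getGeneric(clearData = true)
#   mpt.merge(key, val).isOkOr:
#     raiseAssert "merge() failed: " & $$error
#   doAssert mpt.hasPath(key).expect("hasPath() works")
#   let root = mpt.state(updateOk = true).valueOr:
#     raiseAssert "state() failed: " & $$error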

# ------------------------------------------------------------------------------
# Public methods for accounts
# ------------------------------------------------------------------------------

proc getAccounts*(ctx: CoreDbCtxRef): CoreDbAccRef =
  ## Accounts column constructor, will defect on failure.
  ##
  ctx.setTrackNewApi CtxGetAccountsFn
  result = CoreDbAccRef(ctx)
  ctx.ifTrackNewApi: debug newApiTxt, api, elapsed

# ----------- accounts ---------------

proc fetch*(
    acc: CoreDbAccRef;
    accPath: Hash256;
      ): CoreDbRc[CoreDbAccount] =
  ## Fetch the account data record for the particular account indexed by
  ## the key `accPath`.
  ##
  acc.setTrackNewApi AccFetchFn
  result = block:
    let rc = acc.call(fetchAccountRecord, acc.mpt, accPath)
    if rc.isOk:
      ok(rc.value)
    elif rc.error == FetchPathNotFound:
      err(rc.error.toError($api, AccNotFound))
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr, result

proc delete*(
    acc: CoreDbAccRef;
    accPath: Hash256;
      ): CoreDbRc[void] =
  ## Delete the particular account indexed by the key `accPath`. This
  ## will also destroy an associated storage area.
  ##
  acc.setTrackNewApi AccDeleteFn
  result = block:
    let rc = acc.call(deleteAccountRecord, acc.mpt, accPath)
    if rc.isOk:
      ok()
    elif rc.error == DelPathNotFound:
      # TODO: Would it be convenient to just return `ok()` here?
      err(rc.error.toError($api, AccNotFound))
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr, result

proc clearStorage*(
    acc: CoreDbAccRef;
    accPath: Hash256;
      ): CoreDbRc[void] =
  ## Delete all data slots from the storage area associated with the
  ## particular account indexed by the key `accPath`.
  ##
  acc.setTrackNewApi AccClearStorageFn
  result = block:
    let rc = acc.call(deleteStorageTree, acc.mpt, accPath)
    if rc.isOk or rc.error in {DelStoRootMissing,DelStoAccMissing}:
      ok()
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr, result

proc merge*(
    acc: CoreDbAccRef;
    accPath: Hash256;
    accRec: CoreDbAccount;
      ): CoreDbRc[void] =
  ## Add or update the argument account data record `accRec`. Note that the
  ## `accPath` argument uniquely identifies the particular account address.
  ##
  acc.setTrackNewApi AccMergeFn
  result = block:
    let rc = acc.call(mergeAccountRecord, acc.mpt, accPath, accRec)
    if rc.isOk:
      ok()
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr, result

proc hasPath*(
    acc: CoreDbAccRef;
    accPath: Hash256;
      ): CoreDbRc[bool] =
  ## Would be named `contains` if it returned `bool` rather than `Result[]`.
  ##
  acc.setTrackNewApi AccHasPathFn
  result = block:
    let rc = acc.call(hasPathAccount, acc.mpt, accPath)
    if rc.isOk:
      ok(rc.value)
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr, result

proc state*(acc: CoreDbAccRef; updateOk = false): CoreDbRc[Hash256] =
  ## This function retrieves the Merkle state hash of the accounts
  ## column (if available.)
  ##
  ## If the argument `updateOk` is set `true`, the Merkle hashes of the
  ## database will be updated first (if needed, at all).
  ##
  acc.setTrackNewApi AccStateFn
  result = block:
    let rc = acc.call(fetchAccountState, acc.mpt, updateOk)
    if rc.isOk:
      ok(rc.value)
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi: debug newApiTxt, api, elapsed, updateOk, result
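
# Accounts column usage sketch (illustrative only; `db` is assumed to be an
# initialised `CoreDbRef`, `accPath` the hashed account address and `accRec`
# a `CoreDbAccount` record):
# ::
#   let acc = db.ctx.getAccounts()
#   acc.merge(accPath, accRec).isOkOr:
#     raiseAssert "merge() failed: " & $$error
#   let stored = acc.fetch(accPath).valueOr:
#     raiseAssert "fetch() failed: " & $$error
#   let stateRoot = acc.state(updateOk = true).valueOr:
#     raiseAssert "state() failed: " & $$error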

# ------------ storage ---------------

proc slotFetch*(
    acc: CoreDbAccRef;
    accPath: Hash256;
    stoPath: Hash256;
      ): CoreDbRc[UInt256] =
  ## Like `fetch()` but with cascaded index `(accPath,stoPath)`.
  acc.setTrackNewApi AccSlotFetchFn
  result = block:
    let rc = acc.call(fetchStorageData, acc.mpt, accPath, stoPath)
    if rc.isOk:
      ok(rc.value)
    elif rc.error == FetchPathNotFound:
      err(rc.error.toError($api, StoNotFound))
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr,
      stoPath=stoPath.toStr, result

proc slotDelete*(
    acc: CoreDbAccRef;
    accPath: Hash256;
    stoPath: Hash256;
      ): CoreDbRc[void] =
  ## Like `delete()` but with cascaded index `(accPath,stoPath)`.
  acc.setTrackNewApi AccSlotDeleteFn
  result = block:
    let rc = acc.call(deleteStorageData, acc.mpt, accPath, stoPath)
    if rc.isOk or rc.error == DelStoRootMissing:
      # The second `if` clause is insane but legit: A storage column was
      # announced for an account but no data have been added, yet.
      ok()
    elif rc.error == DelPathNotFound:
      err(rc.error.toError($api, StoNotFound))
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr,
      stoPath=stoPath.toStr, result

proc slotHasPath*(
    acc: CoreDbAccRef;
    accPath: Hash256;
    stoPath: Hash256;
      ): CoreDbRc[bool] =
  ## Like `hasPath()` but with cascaded index `(accPath,stoPath)`.
  acc.setTrackNewApi AccSlotHasPathFn
  result = block:
    let rc = acc.call(hasPathStorage, acc.mpt, accPath, stoPath)
    if rc.isOk:
      ok(rc.value)
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr,
      stoPath=stoPath.toStr, result

proc slotMerge*(
    acc: CoreDbAccRef;
    accPath: Hash256;
    stoPath: Hash256;
    stoData: UInt256;
      ): CoreDbRc[void] =
  ## Like `merge()` but with cascaded index `(accPath,stoPath)`.
  acc.setTrackNewApi AccSlotMergeFn
  result = block:
    let rc = acc.call(mergeStorageData, acc.mpt, accPath, stoPath, stoData)
    if rc.isOk:
      ok()
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr,
      stoPath=stoPath.toStr, result

proc slotState*(
    acc: CoreDbAccRef;
    accPath: Hash256;
    updateOk = false;
      ): CoreDbRc[Hash256] =
  ## This function retrieves the Merkle state hash of the storage data
  ## column (if available) related to the account indexed by the key
  ## `accPath`.
  ##
  ## If the argument `updateOk` is set `true`, the Merkle hashes of the
  ## database will be updated first (if needed, at all).
  ##
  acc.setTrackNewApi AccSlotStateFn
  result = block:
    let rc = acc.call(fetchStorageState, acc.mpt, accPath, updateOk)
    if rc.isOk:
      ok(rc.value)
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr, updateOk, result

proc slotStateEmpty*(
    acc: CoreDbAccRef;
    accPath: Hash256;
      ): CoreDbRc[bool] =
  ## This function returns `true` if the storage data column is empty or
  ## missing.
  ##
  acc.setTrackNewApi AccSlotStateEmptyFn
  result = block:
    let rc = acc.call(hasStorageData, acc.mpt, accPath)
    if rc.isOk:
      ok(not rc.value)
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr, result

proc slotStateEmptyOrVoid*(
    acc: CoreDbAccRef;
    accPath: Hash256;
      ): bool =
  ## Convenience wrapper, returns `true` where `slotStateEmpty()` would fail.
  acc.setTrackNewApi AccSlotStateEmptyOrVoidFn
  result = block:
    let rc = acc.call(hasStorageData, acc.mpt, accPath)
    if rc.isOk:
      not rc.value
    else:
      true
  acc.ifTrackNewApi:
    debug newApiTxt, api, elapsed, accPath=accPath.toStr, result
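
# Storage slot usage sketch (illustrative only; `acc` is the accounts
# descriptor from `getAccounts()`, `accPath` indexes an existing account,
# `stoPath` is assumed to be the hashed slot key and `stoData` a slot value):
# ::
#   acc.slotMerge(accPath, stoPath, stoData).isOkOr:
#     raiseAssert "slotMerge() failed: " & $$error
#   let value = acc.slotFetch(accPath, stoPath).valueOr:
#     raiseAssert "slotFetch() failed: " & $$error
#   doAssert value == stoData
#   doAssert not acc.slotStateEmpty(accPath).expect("slotStateEmpty() works")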

# ------------- other ----------------

proc recast*(
    acc: CoreDbAccRef;
    accPath: Hash256;
    accRec: CoreDbAccount;
    updateOk = false;
      ): CoreDbRc[Account] =
  ## Complete the argument `accRec` to the portable Ethereum representation
  ## of an account statement. This conversion may fail if the storage column
  ## state hash (see `slotState()` above) is currently unavailable.
  ##
  acc.setTrackNewApi AccRecastFn
  let rc = acc.call(fetchStorageState, acc.mpt, accPath, updateOk)
  result = block:
    if rc.isOk:
      ok Account(
        nonce: accRec.nonce,
        balance: accRec.balance,
        codeHash: accRec.codeHash,
        storageRoot: rc.value)
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi:
    let slotState = if rc.isOk: rc.value.toStr else: "n/a"
    debug newApiTxt, api, elapsed, accPath=accPath.toStr, slotState, result
|
2023-09-26 09:21:13 +00:00
|
|
|
|
2023-07-31 13:43:38 +00:00
|
|
|
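# Caller-side usage sketch (hypothetical): combine an account record `accRec`
# previously retrieved via this API with its storage root in order to obtain
# the portable `Account` type from `eth/common`.
#
#   let ethAcc = acc.recast(accPath, accRec, updateOk = true).valueOr:
#     raiseAssert "cannot recast account: " & $error
#   doAssert ethAcc.balance == accRec.balance
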
# ------------------------------------------------------------------------------
# Public transaction related methods
# ------------------------------------------------------------------------------

proc level*(db: CoreDbRef): int =
  ## Retrieve transaction level (zero if there is no pending transaction).
  ##
  db.setTrackNewApi BaseLevelFn
  result = CoreDbAccRef(db.ctx).call(level, db.ctx.mpt)
  db.ifTrackNewApi: debug newApiTxt, api, elapsed, result

proc newTransaction*(ctx: CoreDbCtxRef): CoreDbTxRef =
  ## Constructor, starts a new transaction on the argument context `ctx`.
  ##
  ctx.setTrackNewApi BaseNewTxFn
  let
    kTx = CoreDbKvtRef(ctx).call(txBegin, ctx.kvt).valueOr:
      raiseAssert $api & ": " & $error
    aTx = CoreDbAccRef(ctx).call(txBegin, ctx.mpt).valueOr:
      raiseAssert $api & ": " & $error
  result = ctx.bless CoreDbTxRef(kTx: kTx, aTx: aTx)
  ctx.ifTrackNewApi:
    let newLevel = CoreDbAccRef(ctx).call(level, ctx.mpt)
    debug newApiTxt, api, elapsed, newLevel

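# Caller-side usage sketch (hypothetical): a transaction is started on a
# context and resolved via `commit()`, `rollback()` or `dispose()` (see
# below.)
#
#   let tx = db.ctx.newTransaction()
#   # ... update the database through the usual descriptors ...
#   tx.commit()
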
proc level*(tx: CoreDbTxRef): int =
  ## Return the positive transaction level for the argument `tx`.
  ##
  tx.setTrackNewApi TxLevelFn
  result = CoreDbAccRef(tx.ctx).call(txLevel, tx.aTx)
  tx.ifTrackNewApi: debug newApiTxt, api, elapsed, result

proc commit*(tx: CoreDbTxRef) =
  ## Commit the argument transaction `tx` on both the key-value and the
  ## accounts columns.
  tx.setTrackNewApi TxCommitFn:
    let prvLevel {.used.} = CoreDbAccRef(tx.ctx).call(txLevel, tx.aTx)
  CoreDbAccRef(tx.ctx).call(commit, tx.aTx).isOkOr:
    raiseAssert $api & ": " & $error
  CoreDbKvtRef(tx.ctx).call(commit, tx.kTx).isOkOr:
    raiseAssert $api & ": " & $error
  tx.ifTrackNewApi: debug newApiTxt, api, elapsed, prvLevel

proc rollback*(tx: CoreDbTxRef) =
  ## Roll back the argument transaction `tx` on both the key-value and the
  ## accounts columns.
  tx.setTrackNewApi TxRollbackFn:
    let prvLevel {.used.} = CoreDbAccRef(tx.ctx).call(txLevel, tx.aTx)
  CoreDbAccRef(tx.ctx).call(rollback, tx.aTx).isOkOr:
    raiseAssert $api & ": " & $error
  CoreDbKvtRef(tx.ctx).call(rollback, tx.kTx).isOkOr:
    raiseAssert $api & ": " & $error
  tx.ifTrackNewApi: debug newApiTxt, api, elapsed, prvLevel

proc dispose*(tx: CoreDbTxRef) =
  ## Roll back the argument transaction `tx` if it is still pending, i.e.
  ## still the top frame on either column's transaction stack.
  tx.setTrackNewApi TxDisposeFn:
    let prvLevel {.used.} = CoreDbAccRef(tx.ctx).call(txLevel, tx.aTx)
  if CoreDbAccRef(tx.ctx).call(isTop, tx.aTx):
    CoreDbAccRef(tx.ctx).call(rollback, tx.aTx).isOkOr:
      raiseAssert $api & ": " & $error
  if CoreDbKvtRef(tx.ctx).call(isTop, tx.kTx):
    CoreDbKvtRef(tx.ctx).call(rollback, tx.kTx).isOkOr:
      raiseAssert $api & ": " & $error
  tx.ifTrackNewApi: debug newApiTxt, api, elapsed, prvLevel

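# Caller-side usage sketch (hypothetical): as `dispose()` only rolls back a
# frame that is still on top of the transaction stack, it can serve as a
# catch-all cleanup.
#
#   let tx = db.ctx.newTransaction()
#   defer: tx.dispose()
#   # ... update the database ...
#   tx.commit() # without this line, `dispose()` rolls the changes back
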
# ------------------------------------------------------------------------------
# Legacy and convenience methods
# ------------------------------------------------------------------------------

proc newKvt*(db: CoreDbRef): CoreDbKvtRef =
  ## Variant of `getKvt()` retrieving the KVT from the default context.
  db.ctx.getKvt

proc newTransaction*(db: CoreDbRef): CoreDbTxRef =
  ## Variant of `newTransaction()` starting the transaction on the default
  ## context.
  db.ctx.newTransaction

proc getColumn*(
    ctx: CoreDbCtxRef;
    colType: CoreDbColType;
    clearData = false;
      ): CoreDbMptRef =
  ## Variant of `getGeneric()` forcing `colType` to be `CtGeneric`.
  doAssert colType == CtGeneric
  ctx.getGeneric clearData

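# Caller-side usage sketch (hypothetical): the convenience wrappers above
# mirror the context based API on the default context of `db`.
#
#   let
#     kvt = db.newKvt()                               # same as db.ctx.getKvt()
#     mpt = db.ctx.getColumn(CtGeneric, clearData = true)
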
# ------------------------------------------------------------------------------
# Public tracer methods
# ------------------------------------------------------------------------------

when false: # currently disabled
  proc newCapture*(
      db: CoreDbRef;
      flags: set[CoreDbCaptFlags] = {};
        ): CoreDbRc[CoreDbCaptRef] =
    ## Trace constructor providing an overlay on top of the argument database
    ## `db`. This overlay provides a replacement database handle that can be
    ## retrieved via `db.recorder()` (which can in turn be overlaid.) While
    ## running, the overlay stores data in a log-table which can be retrieved
    ## via `db.logDb()`.
    ##
    ## Caveat:
    ##   The original database argument `db` should not be used while the
    ##   tracer is active (i.e. exists as overlay). The behaviour for this
    ##   situation is undefined and depends on the backend implementation of
    ##   the tracer.
    ##
    db.setTrackNewApi BaseNewCaptureFn
    result = db.methods.newCaptureFn flags
    db.ifTrackNewApi: debug newApiTxt, api, elapsed, result

  proc recorder*(cpt: CoreDbCaptRef): CoreDbRef =
    ## Getter, returns a tracer replacement handle to be used as new database.
    ## It records every action like fetch, store, hasKey, hasPath and delete.
    ## This descriptor can be superseded by a new overlay tracer (using
    ## `newCapture()` again.)
    ##
    ## Caveat:
    ##   Unless the descriptor `cpt` refers to the top level overlay tracer,
    ##   the result is undefined and depends on the backend implementation of
    ##   the tracer.
    ##
    cpt.setTrackNewApi CptRecorderFn
    result = cpt.methods.recorderFn()
    cpt.ifTrackNewApi: debug newApiTxt, api, elapsed

  proc logDb*(cp: CoreDbCaptRef): TableRef[Blob,Blob] =
    ## Getter, returns the logger table for the overlay tracer database.
    ##
    ## Caveat:
    ##   Unless the descriptor `cp` refers to the top level overlay tracer,
    ##   the result is undefined and depends on the backend implementation of
    ##   the tracer.
    ##
    cp.setTrackNewApi CptLogDbFn
    result = cp.methods.logDbFn()
    cp.ifTrackNewApi: debug newApiTxt, api, elapsed

  proc flags*(cp: CoreDbCaptRef): set[CoreDbCaptFlags] =
    ## Getter, returns the flags the tracer was created with.
    ##
    cp.setTrackNewApi CptFlagsFn
    result = cp.methods.getFlagsFn()
    cp.ifTrackNewApi: debug newApiTxt, api, elapsed, result

  proc forget*(cp: CoreDbCaptRef) =
    ## Explicitly stop recording the current tracer instance and reset to
    ## the previous level.
    ##
    cp.setTrackNewApi CptForgetFn
    cp.methods.forgetFn()
    cp.ifTrackNewApi: debug newApiTxt, api, elapsed

# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------