# Nimbus
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed under either of
#  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
#    http://www.apache.org/licenses/LICENSE-2.0)
#  * MIT license ([LICENSE-MIT](LICENSE-MIT) or
#    http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed except
# according to those terms.

{.push raises: [].}

import
  std/typetraits,
  eth/common,
  "../.."/[constants, errors],
  ".."/[kvt, aristo],
  ./backend/aristo_db,
  ./base/[api_tracking, base_config, base_desc, base_helpers]

export
  CoreDbAccRef,
  CoreDbAccount,
  CoreDbApiError,
  CoreDbCtxRef,
  CoreDbErrorCode,
  CoreDbError,
  CoreDbKvtRef,
  CoreDbMptRef,
  CoreDbPersistentTypes,
  CoreDbRef,
  CoreDbTxRef,
  CoreDbType

when CoreDbEnableApiTracking:
  import
    chronicles
  logScope:
    topics = "core_db"
  const
    logTxt = "API"

when CoreDbEnableProfiling:
  export
    CoreDbFnInx,
    CoreDbProfListRef

when CoreDbEnableCaptJournal:
  import
    ./backend/aristo_trace
  type
    CoreDbCaptRef* = distinct TraceLogInstRef
  func `$`(p: CoreDbCaptRef): string =
    if p.distinctBase.isNil: "<nil>" else: "<capt>"
else:
  import
    ../aristo/[
      aristo_delete, aristo_desc, aristo_fetch, aristo_merge, aristo_part,
      aristo_tx],
    ../kvt/[kvt_desc, kvt_utils, kvt_tx]

# ------------------------------------------------------------------------------
# Public context constructors and administration
# ------------------------------------------------------------------------------

proc ctx*(db: CoreDbRef): CoreDbCtxRef =
  ## Get the default context. This is a base descriptor which provides the
  ## KVT, MPT, the accounts descriptors as well as the transaction descriptor.
  ## They are all kept in sync, i.e. `persistent()` will store exactly this
  ## context.
  ##
  db.defCtx
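
# The default context can be used directly for column access. A minimal,
# illustrative sketch (it assumes a `CoreDbRef` handle `db` obtained from one
# of the CoreDb constructors, which live outside this module):
# ::
#   let c = db.ctx                  # shared base descriptor
#   let kvt = c.getKvt()            # key-value table view on the same context
#   doAssert c == db.ctx            # `ctx()` always yields the default context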

proc newCtxByKey*(ctx: CoreDbCtxRef; root: Hash256): CoreDbRc[CoreDbCtxRef] =
  ## Create a new context derived from a matching transaction of the currently
  ## active context. If successful, the resulting context has the following
  ## properties:
  ##
  ## * Transaction level is 1
  ## * The state of the accounts column is equal to the argument `root`
  ##
  ## If successful, the resulting descriptor **must** be manually released
  ## with `forget()` when it is not used anymore.
  ##
  ## Note:
  ##   The underlying `Aristo` backend uses lazy hashing so this function
  ##   might fail simply because there is no computed state when nesting
  ##   the next transaction. If the previous transaction state needs to be
  ##   found, it must be computed before entering the new transaction:
  ##   ::
  ##     let db = ..                          # Instantiate CoreDb handle
  ##     ...
  ##     discard db.ctx.getAccounts.state()   # Compute state hash
  ##     db.ctx.newTransaction()              # Enter new transaction
  ##
  ## However, remember that unused hash computations are costly relative
  ## to processing time.
  ##
  ctx.setTrackNewApi CtxNewCtxByKeyFn
  result = ctx.newCtxByKey(root, $api)
  ctx.ifTrackNewApi: debug logTxt, api, elapsed, root=($$root), result

proc swapCtx*(ctx: CoreDbCtxRef; db: CoreDbRef): CoreDbCtxRef =
  ## Activate argument context `ctx` as default and return the previously
  ## active context. This function typically goes together with `forget()`.
  ## A valid scenario might look like
  ## ::
  ##   let db = ..                               # Instantiate CoreDb handle
  ##   ...
  ##   let ctx = newCtxByKey(..).expect "ctx"    # Create new context
  ##   let saved = db.swapCtx ctx                # Swap context handles
  ##   defer: db.swapCtx(saved).forget()         # Restore
  ##   ...
  ##
  doAssert not ctx.isNil
  assert db.defCtx != ctx # debugging only
  db.setTrackNewApi CtxSwapCtxFn

  # Swap default context with argument `ctx`
  result = db.defCtx
  db.defCtx = ctx

  # Set read-write access and install
  CoreDbAccRef(ctx).call(reCentre, db.ctx.mpt).isOkOr:
    raiseAssert $api & " failed: " & $error
  CoreDbKvtRef(ctx).call(reCentre, db.ctx.kvt).isOkOr:
    raiseAssert $api & " failed: " & $error
  doAssert db.defCtx != result
  db.ifTrackNewApi: debug logTxt, api, elapsed

proc forget*(ctx: CoreDbCtxRef) =
  ## Dispose of the argument context `ctx` and the related columns created
  ## with this context. This function throws an exception if `ctx` is the
  ## default context.
  ##
  ctx.setTrackNewApi CtxForgetFn
  doAssert ctx != ctx.parent.defCtx
  CoreDbAccRef(ctx).call(forget, ctx.mpt).isOkOr:
    raiseAssert $api & ": " & $error
  CoreDbKvtRef(ctx).call(forget, ctx.kvt).isOkOr:
    raiseAssert $api & ": " & $error
  ctx.ifTrackNewApi: debug logTxt, api, elapsed

# ------------------------------------------------------------------------------
# Public base descriptor methods
# ------------------------------------------------------------------------------

proc finish*(db: CoreDbRef; eradicate = false) =
  ## Database destructor. If the argument `eradicate` is set `false`, the
  ## database is left as-is and only the in-memory handlers are cleaned up.
  ##
  ## Otherwise the destructor is allowed to remove the database. This feature
  ## depends on the backend database. Currently, only the `AristoDbRocks` type
  ## backend removes the database on `true`.
  ##
  db.setTrackNewApi BaseFinishFn
  CoreDbKvtRef(db.ctx).call(finish, db.ctx.kvt, eradicate)
  CoreDbAccRef(db.ctx).call(finish, db.ctx.mpt, eradicate)
  db.ifTrackNewApi: debug logTxt, api, elapsed
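
# A minimal shutdown sketch (assuming a `CoreDbRef` handle `db`): the default
# call keeps the on-disk state, while `eradicate = true` asks the backend to
# remove it where supported (currently `AristoDbRocks` only).
# ::
#   db.finish()                     # release handles, keep database files
#   # db.finish(eradicate = true)   # release handles and delete the database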

proc `$$`*(e: CoreDbError): string =
  ## Pretty print an error symbol. Note that this directive may have side
  ## effects as it calls a backend function.
  ##
  e.toStr()

proc persistent*(
    db: CoreDbRef;
    blockNumber: BlockNumber;
      ): CoreDbRc[void] =
  ## This function stores cached data from the default context (see `ctx()`
  ## above) to the persistent database.
  ##
  ## It also stores the argument block number `blockNumber` as a state record
  ## which can be retrieved via `stateBlockNumber()`.
  ##
  db.setTrackNewApi BasePersistentFn
  block body:
    block:
      let rc = CoreDbKvtRef(db.ctx).call(persist, db.ctx.kvt)
      if rc.isOk or rc.error == TxPersistDelayed:
        # The latter clause is OK: Piggybacking on `Aristo` backend
        discard
      elif CoreDbKvtRef(db.ctx).call(level, db.ctx.kvt) != 0:
        result = err(rc.error.toError($api, TxPending))
        break body
      else:
        result = err(rc.error.toError $api)
        break body

    # Having reached here, `Aristo` must not fail as both `Kvt` and `Aristo`
    # are kept in sync. So if there is a legitimate fail condition it must
    # be caught in the previous clause.
    CoreDbAccRef(db.ctx).call(persist, db.ctx.mpt, blockNumber).isOkOr:
      raiseAssert $api & ": " & $error
    result = ok()
  db.ifTrackNewApi: debug logTxt, api, elapsed, blockNumber, result

proc stateBlockNumber*(db: CoreDbRef): BlockNumber =
  ## This function returns the block number stored with the latest `persist()`
  ## directive.
  ##
  db.setTrackNewApi BaseStateBlockNumberFn
  result = block:
    let rc = CoreDbAccRef(db.ctx).call(fetchLastSavedState, db.ctx.mpt)
    if rc.isOk:
      rc.value.serial.BlockNumber
    else:
      0u64
  db.ifTrackNewApi: debug logTxt, api, elapsed, result
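
# Sketch of the persist/read-back round trip (it assumes a `CoreDbRef` handle
# `db` with no pending transaction, and a `BlockNumber` value `bn`):
# ::
#   db.persistent(bn).isOkOr:
#     raiseAssert "persist failed: " & $$(error)
#   doAssert db.stateBlockNumber() == bn   # block number stored with the state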

proc verify*(
    db: CoreDbRef | CoreDbMptRef | CoreDbAccRef;
    proof: openArray[Blob];
    root: Hash256;
    path: openArray[byte];
      ): CoreDbRc[Opt[Blob]] =
  ## This function is the counterpart of any of the `proof()` functions. Given
  ## the argument chain of rlp-encoded nodes `proof`, this function verifies
  ## that the chain represents a partial MPT starting with a root node state
  ## `root` and following the argument path `path` to a leaf node which
  ## encapsulates a payload. That payload is passed back as the return code.
  ##
  ## Note: The `db` argument is used for administrative purposes (e.g. logging)
  ##       only. The functionality is provided by the `Aristo` database
  ##       function `aristo_part.partUntwigGeneric()` with the same prototype
  ##       arguments except the `db`.
  ##
  template mpt: untyped =
    when db is CoreDbRef:
      CoreDbAccRef(db.defCtx)
    else:
      db
  mpt.setTrackNewApi BaseVerifyFn
  result = block:
    let rc = mpt.call(partUntwigGeneric, proof, root, path)
    if rc.isOk:
      ok(rc.value)
    else:
      err(rc.error.toError($api, ProofVerify))
  mpt.ifTrackNewApi: debug logTxt, api, elapsed, result
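
# Proof verification sketch (it assumes `proof` is a chain of rlp-encoded MPT
# nodes, e.g. as produced by a matching `proof()` function, `root` the state
# root the chain starts at, and `path` the traversal key):
# ::
#   let rc = db.verify(proof, root, path)
#   if rc.isOk:
#     echo "leaf payload: ", rc.value        # `Opt[Blob]`, empty if no leaf
#   else:
#     echo "verification failed: ", $$(rc.error)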

proc verifyOk*(
    db: CoreDbRef | CoreDbMptRef | CoreDbAccRef;
    proof: openArray[Blob];
    root: Hash256;
    path: openArray[byte];
    payload: Opt[Blob];
      ): CoreDbRc[void] =
  ## Variant of `verify()` which directly checks the argument `payload`
  ## against what would be the return code in `verify()`.
  ##
  template mpt: untyped =
    when db is CoreDbRef:
      CoreDbAccRef(db.defCtx)
    else:
      db
  mpt.setTrackNewApi BaseVerifyOkFn
  result = block:
    let rc = mpt.call(partUntwigGenericOk, proof, root, path, payload)
    if rc.isOk:
      ok()
    else:
      err(rc.error.toError($api, ProofVerify))
  mpt.ifTrackNewApi: debug logTxt, api, elapsed, result

proc verify*(
    db: CoreDbRef | CoreDbMptRef | CoreDbAccRef;
    proof: openArray[Blob];
    root: Hash256;
    path: Hash256;
      ): CoreDbRc[Opt[Blob]] =
  ## Variant of `verify()`.
  template mpt: untyped =
    when db is CoreDbRef:
      CoreDbAccRef(db.defCtx)
    else:
      db
  mpt.setTrackNewApi BaseVerifyFn
  result = block:
    let rc = mpt.call(partUntwigPath, proof, root, path)
    if rc.isOk:
      ok(rc.value)
    else:
      err(rc.error.toError($api, ProofVerify))
  mpt.ifTrackNewApi: debug logTxt, api, elapsed, result

proc verifyOk*(
    db: CoreDbRef | CoreDbMptRef | CoreDbAccRef;
    proof: openArray[Blob];
    root: Hash256;
    path: Hash256;
    payload: Opt[Blob];
      ): CoreDbRc[void] =
  ## Variant of `verifyOk()`.
  template mpt: untyped =
    when db is CoreDbRef:
      CoreDbAccRef(db.defCtx)
    else:
      db
  mpt.setTrackNewApi BaseVerifyOkFn
  result = block:
    let rc = mpt.call(partUntwigPathOk, proof, root, path, payload)
    if rc.isOk:
      ok()
    else:
      err(rc.error.toError($api, ProofVerify))
  mpt.ifTrackNewApi: debug logTxt, api, elapsed, result

# ------------------------------------------------------------------------------
# Public key-value table methods
# ------------------------------------------------------------------------------

proc getKvt*(ctx: CoreDbCtxRef): CoreDbKvtRef =
  ## This function retrieves the common base object shared with other KVT
  ## descriptors. Any changes are immediately visible to subscribers.
  ## On destruction (when the constructed object goes out of scope), changes
  ## are not saved to the backend database but are still cached and available.
  ##
  CoreDbKvtRef(ctx)

# ----------- KVT ---------------

proc get*(kvt: CoreDbKvtRef; key: openArray[byte]): CoreDbRc[Blob] =
  ## This function always returns a non-empty `Blob` or an error code.
  kvt.setTrackNewApi KvtGetFn
  result = block:
    let rc = kvt.call(get, kvt.kvt, key)
    if rc.isOk:
      ok(rc.value)
    elif rc.error == GetNotFound:
      err(rc.error.toError($api, KvtNotFound))
    else:
      err(rc.error.toError $api)
  kvt.ifTrackNewApi: debug logTxt, api, elapsed, key=key.toStr, result

proc getOrEmpty*(kvt: CoreDbKvtRef; key: openArray[byte]): CoreDbRc[Blob] =
  ## Variant of `get()` returning an empty `Blob` if the key is not found
  ## on the database.
  ##
  kvt.setTrackNewApi KvtGetOrEmptyFn
  result = block:
    let rc = kvt.call(get, kvt.kvt, key)
    if rc.isOk:
      ok(rc.value)
    elif rc.error == GetNotFound:
      CoreDbRc[Blob].ok(EmptyBlob)
    else:
      err(rc.error.toError $api)
  kvt.ifTrackNewApi: debug logTxt, api, elapsed, key=key.toStr, result

proc len*(kvt: CoreDbKvtRef; key: openArray[byte]): CoreDbRc[int] =
  ## This function returns the size of the value associated with `key`.
  kvt.setTrackNewApi KvtLenFn
  result = block:
    let rc = kvt.call(len, kvt.kvt, key)
    if rc.isOk:
      ok(rc.value)
    elif rc.error == GetNotFound:
      err(rc.error.toError($api, KvtNotFound))
    else:
      err(rc.error.toError $api)
  kvt.ifTrackNewApi: debug logTxt, api, elapsed, key=key.toStr, result

proc del*(kvt: CoreDbKvtRef; key: openArray[byte]): CoreDbRc[void] =
  ## Remove the record associated with the argument `key` (if any) from the
  ## key-value table.
  kvt.setTrackNewApi KvtDelFn
  result = block:
    let rc = kvt.call(del, kvt.kvt, key)
    if rc.isOk:
      ok()
    else:
      err(rc.error.toError $api)
  kvt.ifTrackNewApi: debug logTxt, api, elapsed, key=key.toStr, result

proc put*(
    kvt: CoreDbKvtRef;
    key: openArray[byte];
    val: openArray[byte];
      ): CoreDbRc[void] =
  ## Store the argument value `val` under the argument key `key` in the
  ## key-value table.
  kvt.setTrackNewApi KvtPutFn
  result = block:
    let rc = kvt.call(put, kvt.kvt, key, val)
    if rc.isOk:
      ok()
    else:
      err(rc.error.toError $api)
  kvt.ifTrackNewApi:
    debug logTxt, api, elapsed, key=key.toStr, val=val.toLenStr, result

proc hasKeyRc*(kvt: CoreDbKvtRef; key: openArray[byte]): CoreDbRc[bool] =
  ## For the argument `key` return `true` if `get()` returned a value on
  ## that argument, `false` if it returned `GetNotFound`, and an error
  ## otherwise.
  ##
  kvt.setTrackNewApi KvtHasKeyRcFn
  result = block:
    let rc = kvt.call(hasKeyRc, kvt.kvt, key)
    if rc.isOk:
      ok(rc.value)
    else:
      err(rc.error.toError $api)
  kvt.ifTrackNewApi: debug logTxt, api, elapsed, key=key.toStr, result
|
Core db and aristo updates for destructor and tx logic (#1894)
* Disable `TransactionID` related functions from `state_db.nim`
why:
Functions `getCommittedStorage()` and `updateOriginalRoot()` from
the `state_db` module are nowhere used. The emulation of a legacy
`TransactionID` type functionality is administratively expensive to
provide by `Aristo` (the legacy DB version is only partially
implemented, anyway).
As there is no other place where `TransactionID`s are used, they will
not be provided by the `Aristo` variant of the `CoreDb`. For the
legacy DB API, nothing will change.
* Fix copyright headers in source code
* Get rid of compiler warning
* Update Aristo code, remove unused `merge()` variant, export `hashify()`
why:
Adapt to upcoming `CoreDb` wrapper
* Remove synced tx feature from `Aristo`
why:
+ This feature allowed to synchronise transaction methods like begin,
commit, and rollback for a group of descriptors.
+ The feature is over engineered and not needed for `CoreDb`, neither
is it complete (some convergence features missing.)
* Add debugging helpers to `Kvt`
also:
Update database iterator, add count variable yield argument similar
to `Aristo`.
* Provide optional destructors for `CoreDb` API
why;
For the upcoming Aristo wrapper, this allows to control when certain
smart destruction and update can take place. The auto destructor works
fine in general when the storage/cache strategy is known and acceptable
when creating descriptors.
* Add update option for `CoreDb` API function `hash()`
why;
The hash function is typically used to get the state root of the MPT.
Due to lazy hashing, this might be not available on the `Aristo` DB.
So the `update` function asks for re-hashing the gurrent state changes
if needed.
* Update API tracking log mode: `info` => `debug
* Use shared `Kvt` descriptor in new Ledger API
why:
No need to create a new descriptor all the time
2023-11-16 19:35:03 +00:00
|
|
|
|
2024-08-30 11:18:36 +00:00
|
|
|
proc hasKey*(kvt: CoreDbKvtRef; key: openArray[byte]): bool =
  ## Simplified version of `hasKeyRc` where `false` is returned instead of
  ## an error.
  ##
  ## This function prototype is in line with the `hasKey` function for
  ## `Tables`.
  ##
  kvt.setTrackNewApi KvtHasKeyFn
  result = kvt.call(hasKeyRc, kvt.kvt, key).valueOr: false
  kvt.ifTrackNewApi: debug logTxt, api, elapsed, key=key.toStr, result

# ------------------------------------------------------------------------------
# Public functions for generic columns
# ------------------------------------------------------------------------------

proc getGeneric*(
    ctx: CoreDbCtxRef;
    clearData = false;
      ): CoreDbMptRef =
  ## Get a generic MPT, viewed as column
  ##
  ctx.setTrackNewApi CtxGetGenericFn
  result = CoreDbMptRef(ctx)
  if clearData:
    result.call(deleteGenericTree, ctx.mpt, CoreDbVidGeneric).isOkOr:
      raiseAssert $api & ": " & $error
  ctx.ifTrackNewApi: debug logTxt, api, clearData, elapsed

# ----------- generic MPT ---------------

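As a usage sketch of the generic column API above (illustrative only: `db` is an assumed `CoreDbRef` obtained from one of the backend constructors, and error handling is reduced to asserts):

```nim
# Hypothetical sketch -- `db` is an assumed CoreDbRef instance.
let mpt = db.ctx.getGeneric(clearData = true)

# Store a key/value pair on the generic column, then read it back.
mpt.merge(@[1.byte, 2, 3], @[4.byte, 5, 6]).isOkOr:
  raiseAssert "merge failed: " & $error
let data = mpt.fetchOrEmpty(@[1.byte, 2, 3]).valueOr:
  raiseAssert "fetch failed: " & $error

# Merkle state hash of the column, forcing a re-hash if necessary.
let root = mpt.state(updateOk = true).valueOr:
  raiseAssert "state unavailable: " & $error
```
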
proc proof*(
    mpt: CoreDbMptRef;
    key: openArray[byte];
      ): CoreDbRc[(seq[Blob],bool)] =
  ## On the generic MPT, collect the nodes along the `key` interpreted as
  ## path. Return these path nodes as a chain of rlp-encoded blobs followed
  ## by a bool value which is `true` if the `key` path exists in the database,
  ## and `false` otherwise. In the latter case, the chain of rlp-encoded blobs
  ## are the nodes proving that the `key` path does not exist.
  ##
  mpt.setTrackNewApi MptProofFn
  result = block:
    let rc = mpt.call(partGenericTwig, mpt.mpt, CoreDbVidGeneric, key)
    if rc.isOk:
      ok(rc.value)
    else:
      err(rc.error.toError($api, ProofCreate))
  mpt.ifTrackNewApi: debug logTxt, api, elapsed, result

proc fetch*(mpt: CoreDbMptRef; key: openArray[byte]): CoreDbRc[Blob] =
  ## Fetch data from the argument `mpt`. The function always returns a
  ## non-empty `Blob` or an error code.
  ##
  mpt.setTrackNewApi MptFetchFn
  result = block:
    let rc = mpt.call(fetchGenericData, mpt.mpt, CoreDbVidGeneric, key)
    if rc.isOk:
      ok(rc.value)
    elif rc.error == FetchPathNotFound:
      err(rc.error.toError($api, MptNotFound))
    else:
      err(rc.error.toError $api)
  mpt.ifTrackNewApi: debug logTxt, api, elapsed, key=key.toStr, result

proc fetchOrEmpty*(mpt: CoreDbMptRef; key: openArray[byte]): CoreDbRc[Blob] =
  ## This function returns an empty `Blob` if the argument `key` is not found
  ## on the database.
  ##
  mpt.setTrackNewApi MptFetchOrEmptyFn
  result = block:
    let rc = mpt.call(fetchGenericData, mpt.mpt, CoreDbVidGeneric, key)
    if rc.isOk:
      ok(rc.value)
    elif rc.error == FetchPathNotFound:
      CoreDbRc[Blob].ok(EmptyBlob)
    else:
      err(rc.error.toError $api)
  mpt.ifTrackNewApi: debug logTxt, api, elapsed, key=key.toStr, result

proc delete*(mpt: CoreDbMptRef; key: openArray[byte]): CoreDbRc[void] =
  mpt.setTrackNewApi MptDeleteFn
  result = block:
    let rc = mpt.call(deleteGenericData, mpt.mpt, CoreDbVidGeneric, key)
    if rc.isOk:
      ok()
    elif rc.error == DelPathNotFound:
      err(rc.error.toError($api, MptNotFound))
    else:
      err(rc.error.toError $api)
  mpt.ifTrackNewApi: debug logTxt, api, elapsed, key=key.toStr, result

proc merge*(
    mpt: CoreDbMptRef;
    key: openArray[byte];
    val: openArray[byte];
      ): CoreDbRc[void] =
  mpt.setTrackNewApi MptMergeFn
  result = block:
    let rc = mpt.call(mergeGenericData, mpt.mpt, CoreDbVidGeneric, key, val)
    if rc.isOk:
      ok()
    else:
      err(rc.error.toError $api)
  mpt.ifTrackNewApi:
    debug logTxt, api, elapsed, key=key.toStr, val=val.toLenStr, result

proc hasPath*(mpt: CoreDbMptRef; key: openArray[byte]): CoreDbRc[bool] =
  ## This function would be named `contains()` if it returned `bool` rather
  ## than a `Result[]`.
  ##
  mpt.setTrackNewApi MptHasPathFn
  result = block:
    let rc = mpt.call(hasPathGeneric, mpt.mpt, CoreDbVidGeneric, key)
    if rc.isOk:
      ok(rc.value)
    else:
      err(rc.error.toError $api)
  mpt.ifTrackNewApi: debug logTxt, api, elapsed, key=key.toStr, result

proc state*(mpt: CoreDbMptRef; updateOk = false): CoreDbRc[Hash256] =
  ## This function retrieves the Merkle state hash of the argument
  ## database column (if available.)
  ##
  ## If the argument `updateOk` is set `true`, the Merkle hashes of the
  ## database will be updated first (if needed, at all).
  ##
  mpt.setTrackNewApi MptStateFn
  result = block:
    let rc = mpt.call(fetchGenericState, mpt.mpt, CoreDbVidGeneric, updateOk)
    if rc.isOk:
      ok(rc.value)
    else:
      err(rc.error.toError $api)
  mpt.ifTrackNewApi: debug logTxt, api, elapsed, updateOk, result

# ------------------------------------------------------------------------------
# Public methods for accounts
# ------------------------------------------------------------------------------

proc getAccounts*(ctx: CoreDbCtxRef): CoreDbAccRef =
  ## Accounts column constructor, will defect on failure.
  ##
  ctx.setTrackNewApi CtxGetAccountsFn
  result = CoreDbAccRef(ctx)
  ctx.ifTrackNewApi: debug logTxt, api, elapsed

# ----------- accounts ---------------

proc proof*(
    acc: CoreDbAccRef;
    accPath: Hash256;
      ): CoreDbRc[(seq[Blob],bool)] =
  ## On the accounts MPT, collect the nodes along the `accPath` interpreted as
  ## path. Return these path nodes as a chain of rlp-encoded blobs followed
  ## by a bool value which is `true` if the `accPath` exists in the database,
  ## and `false` otherwise. In the latter case, the chain of rlp-encoded blobs
  ## are the nodes proving that the `accPath` does not exist.
  ##
  acc.setTrackNewApi AccProofFn
  result = block:
    let rc = acc.call(partAccountTwig, acc.mpt, accPath)
    if rc.isOk:
      ok(rc.value)
    else:
      err(rc.error.toError($api, ProofCreate))
  acc.ifTrackNewApi: debug logTxt, api, elapsed, result

proc fetch*(
    acc: CoreDbAccRef;
    accPath: Hash256;
      ): CoreDbRc[CoreDbAccount] =
  ## Fetch the account data record for the particular account indexed by
  ## the key `accPath`.
  ##
  acc.setTrackNewApi AccFetchFn
  result = block:
    let rc = acc.call(fetchAccountRecord, acc.mpt, accPath)
    if rc.isOk:
      ok(rc.value)
    elif rc.error == FetchPathNotFound:
      err(rc.error.toError($api, AccNotFound))
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi: debug logTxt, api, elapsed, accPath=($$accPath), result

proc delete*(
    acc: CoreDbAccRef;
    accPath: Hash256;
      ): CoreDbRc[void] =
  ## Delete the particular account indexed by the key `accPath`. This
  ## will also destroy an associated storage area.
  ##
  acc.setTrackNewApi AccDeleteFn
  result = block:
    let rc = acc.call(deleteAccountRecord, acc.mpt, accPath)
    if rc.isOk:
      ok()
    elif rc.error == DelPathNotFound:
      # TODO: Would it be convenient to just return `ok()` here?
      err(rc.error.toError($api, AccNotFound))
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi:
    debug logTxt, api, elapsed, accPath=($$accPath), result

proc clearStorage*(
    acc: CoreDbAccRef;
    accPath: Hash256;
      ): CoreDbRc[void] =
  ## Delete all data slots from the storage area associated with the
  ## particular account indexed by the key `accPath`.
  ##
  acc.setTrackNewApi AccClearStorageFn
  result = block:
    let rc = acc.call(deleteStorageTree, acc.mpt, accPath)
    if rc.isOk or rc.error in {DelStoRootMissing,DelStoAccMissing}:
      ok()
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi:
    debug logTxt, api, elapsed, accPath=($$accPath), result

proc merge*(
    acc: CoreDbAccRef;
    accPath: Hash256;
    accRec: CoreDbAccount;
      ): CoreDbRc[void] =
  ## Add or update the argument account data record `accRec`. Note that the
  ## `accPath` argument uniquely identifies the particular account address.
  ##
  acc.setTrackNewApi AccMergeFn
  result = block:
    let rc = acc.call(mergeAccountRecord, acc.mpt, accPath, accRec)
    if rc.isOk:
      ok()
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi:
    debug logTxt, api, elapsed, accPath=($$accPath), result

proc hasPath*(
    acc: CoreDbAccRef;
    accPath: Hash256;
      ): CoreDbRc[bool] =
  ## Would be named `contains` if it returned `bool` rather than `Result[]`.
  ##
  acc.setTrackNewApi AccHasPathFn
  result = block:
    let rc = acc.call(hasPathAccount, acc.mpt, accPath)
    if rc.isOk:
      ok(rc.value)
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi:
    debug logTxt, api, elapsed, accPath=($$accPath), result

proc state*(acc: CoreDbAccRef; updateOk = false): CoreDbRc[Hash256] =
  ## This function retrieves the Merkle state hash of the accounts
  ## column (if available.)
  ##
  ## If the argument `updateOk` is set `true`, the Merkle hashes of the
  ## database will be updated first (if needed, at all).
  ##
  acc.setTrackNewApi AccStateFn
  result = block:
    let rc = acc.call(fetchAccountState, acc.mpt, updateOk)
    if rc.isOk:
      ok(rc.value)
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi: debug logTxt, api, elapsed, updateOk, result

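A hypothetical usage sketch of the accounts API above (the names `db` and `accPath` are assumptions: a `CoreDbRef` instance and the keccak256 hash of an account address):

```nim
# Hypothetical sketch -- `db` is an assumed CoreDbRef, `accPath` the
# keccak256 hash of the account address.
let acc = db.ctx.getAccounts()

# Fetch, modify and re-merge an account record.
var accRec = acc.fetch(accPath).valueOr:
  raiseAssert "account not found: " & $error
accRec.nonce.inc
acc.merge(accPath, accRec).isOkOr:
  raiseAssert "merge failed: " & $error

# Accounts column state root, re-hashing first if needed.
let stateRoot = acc.state(updateOk = true).valueOr:
  raiseAssert "state unavailable: " & $error
```
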
# ------------ storage ---------------

proc slotProof*(
    acc: CoreDbAccRef;
    accPath: Hash256;
    stoPath: Hash256;
      ): CoreDbRc[(seq[Blob],bool)] =
  ## On the storage MPT related to the argument account `accPath`, collect
  ## the nodes along the `stoPath` interpreted as path. Return these path
  ## nodes as a chain of rlp-encoded blobs followed by a bool value which is
  ## `true` if the `stoPath` exists in the database, and `false` otherwise.
  ## In the latter case, the chain of rlp-encoded blobs are the nodes proving
  ## that the `stoPath` does not exist.
  ##
  ## Note that the function always returns an error unless the `accPath` is
  ## valid.
  ##
  acc.setTrackNewApi AccSlotProofFn
  result = block:
    let rc = acc.call(partStorageTwig, acc.mpt, accPath, stoPath)
    if rc.isOk:
      ok(rc.value)
    else:
      err(rc.error.toError($api, ProofCreate))
  acc.ifTrackNewApi: debug logTxt, api, elapsed, result

proc slotFetch*(
    acc: CoreDbAccRef;
    accPath: Hash256;
    stoPath: Hash256;
      ): CoreDbRc[UInt256] =
  ## Like `fetch()` but with cascaded index `(accPath,slot)`.
  acc.setTrackNewApi AccSlotFetchFn
  result = block:
    let rc = acc.call(fetchStorageData, acc.mpt, accPath, stoPath)
    if rc.isOk:
      ok(rc.value)
    elif rc.error == FetchPathNotFound:
      err(rc.error.toError($api, StoNotFound))
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi:
    debug logTxt, api, elapsed, accPath=($$accPath),
      stoPath=($$stoPath), result

proc slotDelete*(
    acc: CoreDbAccRef;
    accPath: Hash256;
    stoPath: Hash256;
      ): CoreDbRc[void] =
  ## Like `delete()` but with cascaded index `(accPath,slot)`.
  acc.setTrackNewApi AccSlotDeleteFn
  result = block:
    let rc = acc.call(deleteStorageData, acc.mpt, accPath, stoPath)
    if rc.isOk or rc.error == DelStoRootMissing:
      # The second `if` clause is insane but legit: A storage column was
      # announced for an account but no data have been added, yet.
      ok()
    elif rc.error == DelPathNotFound:
      err(rc.error.toError($api, StoNotFound))
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi:
    debug logTxt, api, elapsed, accPath=($$accPath),
      stoPath=($$stoPath), result

proc slotHasPath*(
    acc: CoreDbAccRef;
    accPath: Hash256;
    stoPath: Hash256;
      ): CoreDbRc[bool] =
  ## Like `hasPath()` but with cascaded index `(accPath,slot)`.
  acc.setTrackNewApi AccSlotHasPathFn
  result = block:
    let rc = acc.call(hasPathStorage, acc.mpt, accPath, stoPath)
    if rc.isOk:
      ok(rc.value)
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi:
    debug logTxt, api, elapsed, accPath=($$accPath),
      stoPath=($$stoPath), result

proc slotMerge*(
    acc: CoreDbAccRef;
    accPath: Hash256;
    stoPath: Hash256;
    stoData: UInt256;
      ): CoreDbRc[void] =
  ## Like `merge()` but with cascaded index `(accPath,slot)`.
  acc.setTrackNewApi AccSlotMergeFn
  result = block:
    let rc = acc.call(mergeStorageData, acc.mpt, accPath, stoPath, stoData)
    if rc.isOk:
      ok()
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi:
    debug logTxt, api, elapsed, accPath=($$accPath),
      stoPath=($$stoPath), stoData, result

proc slotState*(
    acc: CoreDbAccRef;
    accPath: Hash256;
    updateOk = false;
      ): CoreDbRc[Hash256] =
  ## This function retrieves the Merkle state hash of the storage data
  ## column (if available) related to the account indexed by the key
  ## `accPath`.
  ##
  ## If the argument `updateOk` is set `true`, the Merkle hashes of the
  ## database will be updated first (if needed, at all).
  ##
  acc.setTrackNewApi AccSlotStateFn
  result = block:
    let rc = acc.call(fetchStorageState, acc.mpt, accPath, updateOk)
    if rc.isOk:
      ok(rc.value)
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi:
    debug logTxt, api, elapsed, accPath=($$accPath), updateOk, result

proc slotStateEmpty*(
    acc: CoreDbAccRef;
    accPath: Hash256;
      ): CoreDbRc[bool] =
  ## This function returns `true` if the storage data column is empty or
  ## missing.
  ##
  acc.setTrackNewApi AccSlotStateEmptyFn
  result = block:
    let rc = acc.call(hasStorageData, acc.mpt, accPath)
    if rc.isOk:
      ok(not rc.value)
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi:
    debug logTxt, api, elapsed, accPath=($$accPath), result

proc slotStateEmptyOrVoid*(
    acc: CoreDbAccRef;
    accPath: Hash256;
      ): bool =
  ## Convenience wrapper, returns `true` where `slotStateEmpty()` would fail.
  acc.setTrackNewApi AccSlotStateEmptyOrVoidFn
  result = block:
    let rc = acc.call(hasStorageData, acc.mpt, accPath)
    if rc.isOk:
      not rc.value
    else:
      true
  acc.ifTrackNewApi:
    debug logTxt, api, elapsed, accPath=($$accPath), result

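The storage slot functions above compose as in this hedged sketch (names are assumptions: `acc` and `accPath` as in the accounts API, and `slot0Path` the keccak256 hash of the 32-byte big-endian slot index 0):

```nim
# Hypothetical sketch -- `acc`, `accPath` as above; `slot0Path` is the
# keccak256 hash of the 32-byte big-endian slot index 0.
acc.slotMerge(accPath, slot0Path, 42.u256).isOkOr:
  raiseAssert "slot merge failed: " & $error
let slotVal = acc.slotFetch(accPath, slot0Path).valueOr:
  raiseAssert "slot not found: " & $error

# Storage column state root for this account, re-hashing if needed.
let stoRoot = acc.slotState(accPath, updateOk = true).valueOr:
  raiseAssert "storage state unavailable: " & $error
```
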
# ------------- other ----------------

proc recast*(
    acc: CoreDbAccRef;
    accPath: Hash256;
    accRec: CoreDbAccount;
    updateOk = false;
      ): CoreDbRc[Account] =
  ## Complete the argument `accRec` to the portable Ethereum representation
  ## of an account statement. This conversion may fail if the storage colState
  ## hash (see `slotState()` above) is currently unavailable.
  ##
  acc.setTrackNewApi AccRecastFn
  let rc = acc.call(fetchStorageState, acc.mpt, accPath, updateOk)
  result = block:
    if rc.isOk:
      ok Account(
        nonce:       accRec.nonce,
        balance:     accRec.balance,
        codeHash:    accRec.codeHash,
        storageRoot: rc.value)
    else:
      err(rc.error.toError $api)
  acc.ifTrackNewApi:
    let slotState = if rc.isOk: $$(rc.value) else: "n/a"
    debug logTxt, api, elapsed, accPath=($$accPath), slotState, result

# ------------------------------------------------------------------------------
# Public transaction related methods
# ------------------------------------------------------------------------------

proc level*(db: CoreDbRef): int =
  ## Retrieve transaction level (zero if there is no pending transaction).
  ##
  db.setTrackNewApi BaseLevelFn
  result = CoreDbAccRef(db.ctx).call(level, db.ctx.mpt)
  db.ifTrackNewApi: debug logTxt, api, elapsed, result

proc newTransaction*(ctx: CoreDbCtxRef): CoreDbTxRef =
  ## Constructor
  ##
  ctx.setTrackNewApi BaseNewTxFn
  let
    kTx = CoreDbKvtRef(ctx).call(txBegin, ctx.kvt).valueOr:
      raiseAssert $api & ": " & $error
    aTx = CoreDbAccRef(ctx).call(txBegin, ctx.mpt).valueOr:
      raiseAssert $api & ": " & $error
  result = ctx.bless CoreDbTxRef(kTx: kTx, aTx: aTx)
  ctx.ifTrackNewApi:
    let newLevel = CoreDbAccRef(ctx).call(level, ctx.mpt)
    debug logTxt, api, elapsed, newLevel

proc level*(tx: CoreDbTxRef): int =
  ## Return the positive transaction level for argument `tx`
  ##
  tx.setTrackNewApi TxLevelFn
  result = CoreDbAccRef(tx.ctx).call(txLevel, tx.aTx)
  tx.ifTrackNewApi: debug logTxt, api, elapsed, result

proc commit*(tx: CoreDbTxRef) =
  tx.setTrackNewApi TxCommitFn:
    let prvLevel {.used.} = CoreDbAccRef(tx.ctx).call(txLevel, tx.aTx)
  CoreDbAccRef(tx.ctx).call(commit, tx.aTx).isOkOr:
    raiseAssert $api & ": " & $error
  CoreDbKvtRef(tx.ctx).call(commit, tx.kTx).isOkOr:
    raiseAssert $api & ": " & $error
  tx.ifTrackNewApi: debug logTxt, api, elapsed, prvLevel

proc rollback*(tx: CoreDbTxRef) =
  tx.setTrackNewApi TxRollbackFn:
    let prvLevel {.used.} = CoreDbAccRef(tx.ctx).call(txLevel, tx.aTx)
  CoreDbAccRef(tx.ctx).call(rollback, tx.aTx).isOkOr:
    raiseAssert $api & ": " & $error
  CoreDbKvtRef(tx.ctx).call(rollback, tx.kTx).isOkOr:
    raiseAssert $api & ": " & $error
  tx.ifTrackNewApi: debug logTxt, api, elapsed, prvLevel

proc dispose*(tx: CoreDbTxRef) =
  tx.setTrackNewApi TxDisposeFn:
    let prvLevel {.used.} = CoreDbAccRef(tx.ctx).call(txLevel, tx.aTx)
  if CoreDbAccRef(tx.ctx).call(isTop, tx.aTx):
    CoreDbAccRef(tx.ctx).call(rollback, tx.aTx).isOkOr:
      raiseAssert $api & ": " & $error
  if CoreDbKvtRef(tx.ctx).call(isTop, tx.kTx):
    CoreDbKvtRef(tx.ctx).call(rollback, tx.kTx).isOkOr:
      raiseAssert $api & ": " & $error
  tx.ifTrackNewApi: debug logTxt, api, elapsed, prvLevel

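The transaction methods above compose as in this hedged sketch (the names `db`, `kvt`, `key`, `val` are assumptions from the surrounding API, and the snippet is imagined inside a proc so that `return` is legal):

```nim
# Hypothetical sketch -- wrap several updates into one transaction frame.
let tx = db.ctx.newTransaction()
defer: tx.dispose()          # rolls back unless committed below
kvt.put(key, val).isOkOr:
  return                     # leaving the scope triggers rollback
tx.commit()
```
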
# ------------------------------------------------------------------------------
# Public tracer methods
# ------------------------------------------------------------------------------

when CoreDbEnableCaptJournal:
  proc pushCapture*(db: CoreDbRef): CoreDbCaptRef =
    ## ..
    ##
    db.setTrackNewApi BasePushCaptureFn
    if db.tracerHook.isNil:
      db.tracerHook = TraceRecorderRef.init(db)
    else:
      TraceRecorderRef(db.tracerHook).push()
    result = TraceRecorderRef(db.tracerHook).topInst().CoreDbCaptRef
    db.ifTrackNewApi: debug logTxt, api, elapsed, result

  proc level*(cpt: CoreDbCaptRef): int =
    ## Getter, returns the positive number of stacked instances.
    ##
    let log = cpt.distinctBase
    log.db.setTrackNewApi CptLevelFn
    result = log.level()
    log.db.ifTrackNewApi: debug logTxt, api, elapsed, result

  proc kvtLog*(cpt: CoreDbCaptRef): seq[(Blob,Blob)] =
    ## Getter, returns the `Kvt` logger list for the argument instance.
    ##
    let log = cpt.distinctBase
    log.db.setTrackNewApi CptKvtLogFn
    result = log.kvtLogBlobs()
    log.db.ifTrackNewApi: debug logTxt, api, elapsed

  proc pop*(cpt: CoreDbCaptRef) =
    ## Explicitly stop recording the current tracer instance and reset to
    ## previous level.
    ##
    let db = cpt.distinctBase.db
    db.setTrackNewApi CptPopFn
    if not cpt.distinctBase.pop():
      TraceRecorderRef(db.tracerHook).restore()
      db.tracerHook = TraceRecorderRef(nil)
    db.ifTrackNewApi: debug logTxt, api, elapsed, cpt

  proc stopCapture*(db: CoreDbRef) =
    ## Discard capture instances. This function is equivalent to `pop()`-ing
    ## all instances.
    ##
    db.setTrackNewApi CptStopCaptureFn
    if not db.tracerHook.isNil:
      TraceRecorderRef(db.tracerHook).restore()
      db.tracerHook = TraceRecorderRef(nil)
    db.ifTrackNewApi: debug logTxt, api, elapsed

# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------