nimbus-eth1/nimbus/db/aristo/aristo_tx.nim

# nimbus-eth1
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
# http://www.apache.org/licenses/LICENSE-2.0)
# * MIT license ([LICENSE-MIT](LICENSE-MIT) or
# http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.
## Aristo DB -- Transaction interface
## ==================================
##
{.push raises: [].}
import
results,
./aristo_tx/[tx_fork, tx_frame, tx_stow],
"."/[aristo_desc, aristo_get]
# ------------------------------------------------------------------------------
# Public functions, getters
# ------------------------------------------------------------------------------
func txTop*(db: AristoDbRef): Result[AristoTxRef,AristoError] =
  ## Getter, returns the top level transaction if there is any.
db.txFrameTop()
func isTop*(tx: AristoTxRef): bool =
  ## Getter, returns `true` if the argument `tx` refers to the current top
## level transaction.
tx.txFrameIsTop()
func level*(tx: AristoTxRef): int =
## Getter, positive nesting level of transaction argument `tx`
tx.txFrameLevel()
func level*(db: AristoDbRef): int =
## Getter, non-negative nesting level (i.e. number of pending transactions)
db.txFrameLevel()
# ------------------------------------------------------------------------------
# Public functions
# ------------------------------------------------------------------------------
func to*(tx: AristoTxRef; T: type[AristoDbRef]): T =
## Getter, retrieves the parent database descriptor from argument `tx`
tx.db
proc forkTx*(
db: AristoDbRef;
backLevel: int; # Backward location of transaction
): Result[AristoDbRef,AristoError] =
  ## Fork a new descriptor from parts of the argument database `db`, as
  ## selected by the argument `backLevel`.
  ##
  ## If the argument `backLevel` is non-negative, the forked descriptor
  ## provides the database view where the first `backLevel` transaction layers
  ## are stripped and the remaining layers are squashed into a single
  ## transaction.
##
## If `backLevel` is `-1`, a database descriptor with empty transaction
## layers will be provided where the `balancer` between database and
  ## transaction layers is kept in place.
##
## If `backLevel` is `-2`, a database descriptor with empty transaction
## layers will be provided without a `balancer`.
##
## The returned database descriptor will always have transaction level one.
## If there were no transactions that could be squashed, an empty
## transaction is added.
##
## Use `aristo_desc.forget()` to clean up this descriptor.
##
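  ## A minimal usage sketch (illustrative only; the chosen `backLevel`, the
  ## error handling and the clean-up call are assumptions rather than part
  ## of this API contract):
  ## ::
  ##   # squash the two most recent transaction layers into a single view
  ##   let clone = db.forkTx(2).valueOr:
  ##     return
  ##   # ... query the forked descriptor ...
  ##   discard clone.forget()
  ##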
# Fork top layer (with or without pending transaction)?
if backLevel == 0:
return db.txForkTop()
# Fork bottom layer (=> 0 < db.stack.len)
if backLevel == db.stack.len:
return db.txForkBase()
# Inspect transaction stack
if 0 < backLevel:
var tx = db.txRef
if tx.isNil or db.stack.len < backLevel:
return err(TxLevelTooDeep)
    # Fetch tx of level `backLevel` (need to skip some items)
for _ in 0 ..< backLevel:
tx = tx.parent
if tx.isNil:
return err(TxStackGarbled)
return tx.txFork()
# Plain fork, include `balancer`
if backLevel == -1:
let xb = ? db.fork(noFilter=false)
discard xb.txFrameBegin()
return ok(xb)
# Plain fork, unfiltered backend
if backLevel == -2:
let xb = ? db.fork(noFilter=true)
discard xb.txFrameBegin()
return ok(xb)
err(TxLevelUseless)
proc findTx*(
db: AristoDbRef;
vid: VertexID; # Pivot vertex (typically `VertexID(1)`)
key: HashKey; # Hash key of pivot vertex
): Result[int,AristoError] =
  ## Find the transaction where the vertex with ID `vid` exists and has the
  ## Merkle hash key `key`. If there is no transaction available, the filter
  ## and then the backend are searched.
##
  ## If the above procedure succeeds, an integer indicating the transaction
  ## level is returned:
##
## * `0` -- top level, current layer
## * `1`, `2`, ... -- some transaction level further down the stack
## * `-1` -- the filter between transaction stack and database backend
  ## * `-2` -- the database backend
##
  ## A successful return value can be passed as the `backLevel` argument to
  ## `forkTx()` for creating a forked descriptor that provides the pair
  ## `(vid,key)`, as sketched below.
##
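  ## A short sketch combining `findTx()` and `forkTx()` (the name `stateKey`
  ## is a placeholder, not part of this module):
  ## ::
  ##   let level = db.findTx(VertexID(1), stateKey).valueOr:
  ##     return
  ##   # fork a descriptor that provides the pair `(VertexID(1),stateKey)`
  ##   let clone = db.forkTx(level).valueOr:
  ##     return
  ##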
if not vid.isValid or
not key.isValid:
return err(TxArgsUseless)
if db.txRef.isNil:
# Try `(vid,key)` on top layer
let topKey = db.top.delta.kMap.getOrVoid vid
if topKey == key:
return ok(0)
else:
# Find `(vid,key)` on transaction layers
for (n,tx,layer,error) in db.txRef.txFrameWalk:
if error != AristoError(0):
return err(error)
if layer.delta.kMap.getOrVoid(vid) == key:
return ok(n)
# Try bottom layer
let botKey = db.stack[0].delta.kMap.getOrVoid vid
if botKey == key:
return ok(db.stack.len)
# Try `(vid,key)` on balancer
if not db.balancer.isNil:
let roKey = db.balancer.kMap.getOrVoid vid
if roKey == key:
return ok(-1)
# Try `(vid,key)` on unfiltered backend
block:
let beKey = db.getKeyUbe(vid).valueOr: VOID_HASH_KEY
if beKey == key:
return ok(-2)
err(TxNotFound)
# ------------------------------------------------------------------------------
# Public functions: Transaction frame
# ------------------------------------------------------------------------------
proc txBegin*(db: AristoDbRef): Result[AristoTxRef,AristoError] =
## Starts a new transaction.
##
## Example:
## ::
## proc doSomething(db: AristoDbRef) =
  ##     let tx = db.txBegin().valueOr: return
## defer: tx.rollback()
## ... continue using db ...
## tx.commit()
##
db.txFrameBegin()
proc rollback*(
tx: AristoTxRef; # Top transaction on database
): Result[void,AristoError] =
  ## Given a *top level* handle, this function discards all database operations
  ## performed for this transaction. The previous transaction then becomes the
  ## new top level transaction, if there is any.
##
tx.txFrameRollback()
proc commit*(
tx: AristoTxRef; # Top transaction on database
): Result[void,AristoError] =
  ## Given a *top level* handle, this function accepts all database operations
  ## performed through this handle and merges them into the previous layer.
  ## The previous transaction then becomes the new top level transaction, if
  ## there is any.
##
tx.txFrameCommit()
proc collapse*(
tx: AristoTxRef; # Top transaction on database
commit: bool; # Commit if `true`, otherwise roll back
): Result[void,AristoError] =
  ## Iterated application of `commit()` or `rollback()`, performing something
  ## similar to
## ::
## while true:
## discard tx.commit() # ditto for rollback()
## if db.txTop.isErr: break
## tx = db.txTop.value
##
tx.txFrameCollapse commit
# ------------------------------------------------------------------------------
# Public functions: save to database
# ------------------------------------------------------------------------------
proc persist*(
db: AristoDbRef; # Database
nxtSid = 0u64; # Next state ID (aka block number)
chunkedMpt = false; # Partial data (e.g. from `snap`)
): Result[void,AristoError] =
## Persistently store data onto backend database. If the system is running
## without a database backend, the function returns immediately with an
## error. The same happens if there is a pending transaction.
##
## The function merges all staged data from the top layer cache onto the
## backend stage area. After that, the top layer cache is cleared.
##
## Finally, the staged data are merged into the physical backend database
  ## and the staged data area is cleared. While performing this last step,
## the recovery journal is updated (if available.)
##
  ## If the argument `nxtSid` is passed non-zero, it will be the ID for the
## next recovery journal record. If non-zero, this ID must be greater than
## all previous IDs (e.g. block number when stowing after block execution.)
##
## Staging the top layer cache might fail with a partial MPT when it is
## set up from partial MPT chunks as it happens with `snap` sync processing.
  ## In this case, the `chunkedMpt` argument must be set `true` (see also
## `fwdFilter()`.)
##
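  ## A minimal usage sketch (the `blockNumber` variable is a placeholder,
  ## not defined by this module):
  ## ::
  ##   # no transaction may be pending at this point
  ##   let rc = db.persist(nxtSid = blockNumber)
  ##   if rc.isErr:
  ##     return err(rc.error)
  ##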
db.txStow(nxtSid, persistent=true, chunkedMpt=chunkedMpt)
proc stow*(
db: AristoDbRef; # Database
chunkedMpt = false; # Partial data (e.g. from `snap`)
): Result[void,AristoError] =
  ## This function is similar to `persist()`, stopping short of the final step
  ## of storing data on the persistent database. It fails if there is a
## pending transaction.
##
## The function merges all staged data from the top layer cache onto the
## backend stage area and leaves it there. This function can be seen as
  ## a sort of bottom level transaction `commit()`.
##
## Staging the top layer cache might fail with a partial MPT when it is
## set up from partial MPT chunks as it happens with `snap` sync processing.
  ## In this case, the `chunkedMpt` argument must be set `true` (see also
## `fwdFilter()`.)
##
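  ## A minimal usage sketch (illustrative only):
  ## ::
  ##   # stage the current top layer cache without writing to the backend
  ##   let rc = db.stow()
  ##   if rc.isErr:
  ##     return err(rc.error)
  ##   # ... a later `persist()` will move the staged data to the backend ...
  ##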
db.txStow(nxtSid=0u64, persistent=false, chunkedMpt=chunkedMpt)
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------