# nimbus-eth1
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed under either of
#  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
#    http://www.apache.org/licenses/LICENSE-2.0)
#  * MIT license ([LICENSE-MIT](LICENSE-MIT) or
#    http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.

## Aristo DB -- Patricia Trie builder, raw node insertion
## ======================================================
##
## This module merges `PathID` values as hexary lookup paths into the
## `Patricia Trie`. When changing vertices (aka nodes without Merkle hashes),
## associated (but separated) Merkle hashes will be deleted unless locked.
## Instead of deleting locked hashes, error handling is applied.
##
## Also, nodes (vertices plus Merkle hashes) can be added, which is needed for
## boundary proofing after `snap/1` download. The vertices are split from the
## nodes and stored as-is on the table holding `Patricia Trie` entries. The
## hashes are stored in a separate table and the vertices are labelled
## `locked`.
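
# Illustrative sketch only (not part of the module): the three public entry
# points defined below and the sub-trees they address. The `db`, `accPath`,
# `slotKey`, `root`, `path` and `blob` values are assumed to be supplied by
# the caller; constructing an `AristoDbRef` is handled elsewhere.
#
#   discard db.mergeAccountRecord(accPath, accRec)        # account ledger, VertexID(1)
#   discard db.mergeGenericData(root, path, blob)         # generic sub-tree
#   discard db.mergeStorageData(accPath, slotKey, value)  # per-account storage
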
{.push raises: [].}

import
  std/typetraits,
  eth/common,
  results,
  "."/[aristo_desc, aristo_hike, aristo_layers, aristo_vid],
  ./aristo_merge/merge_payload_helper

const
  MergeNoAction = {MergeLeafPathCachedAlready, MergeLeafPathOnBackendAlready}

# ------------------------------------------------------------------------------
# Public functions
# ------------------------------------------------------------------------------

proc mergeAccountRecord*(
    db: AristoDbRef;                   # Database, top layer
    accPath: Hash256;                  # Even nibbled byte path
    accRec: AristoAccount;             # Account data
      ): Result[bool,AristoError] =
  ## Merge the key-value-pair argument `(accPath,accRec)` as an account
  ## ledger value, i.e. into the sub-tree starting at `VertexID(1)`.
  ##
  ## On success, the function returns `true` if the `accRec` argument was
  ## not already on the database or differed from the stored record, and
  ## `false` otherwise.
  ##
  let
    pyl = LeafPayload(pType: AccountData, account: accRec)
    rc = db.mergePayloadImpl(VertexID(1), accPath.data, pyl)
  if rc.isOk:
    db.layersPutAccLeaf(accPath, rc.value)
    ok true
  elif rc.error in MergeNoAction:
    ok false
  else:
    err(rc.error)
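
# Usage sketch (assumptions: `db` is an open `AristoDbRef` and `accPath` is
# the keccak hash of the account address; neither is constructed here). Only
# the `mergeAccountRecord()` call pattern is illustrated:
#
#   let accRec = AristoAccount(nonce: 1, balance: 100.u256)  # codeHash left default
#   let updated = db.mergeAccountRecord(accPath, accRec).valueOr:
#     return err(error)                                      # AristoError
#   if updated: ...                                          # record was new or changed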

proc mergeGenericData*(
    db: AristoDbRef;                   # Database, top layer
    root: VertexID;                    # MPT state root
    path: openArray[byte];             # Leaf item to add to the database
    data: openArray[byte];             # Raw data payload value
      ): Result[bool,AristoError] =
  ## Variant of `mergeXXX()` for generic sub-trees, i.e. for arguments
  ## `root` greater than `VertexID(1)` and smaller than `LEAST_FREE_VID`.
  ##
  ## On success, the function returns `true` if the `data` argument was merged
  ## into the database or updated, and `false` if it was on the database
  ## already.
  ##
  # Verify that `root` is neither an accounts tree nor a storage tree.
  if not root.isValid:
    return err(MergeRootVidMissing)
  elif root == VertexID(1):
    return err(MergeAccRootNotAccepted)
  elif LEAST_FREE_VID <= root.distinctBase:
    return err(MergeStoRootNotAccepted)

  let
    pyl = LeafPayload(pType: RawData, rawBlob: @data)
    rc = db.mergePayloadImpl(root, path, pyl)
  if rc.isOk:
    ok true
  elif rc.error in MergeNoAction:
    ok false
  else:
    err(rc.error)
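
# Usage sketch (assumptions: `db` is an open `AristoDbRef` and `root` was
# reserved for a generic sub-tree, e.g. a transient transaction trie; how
# `path` and `data` are encoded is entirely up to the caller):
#
#   let merged = db.mergeGenericData(root, someKey, someBlob).valueOr:
#     return err(error)
#   # `merged` is true when the value was new or changed, false if already stored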

proc mergeStorageData*(
    db: AristoDbRef;                   # Database, top layer
    accPath: Hash256;                  # Needed for accounts payload
    stoPath: Hash256;                  # Storage data path (aka key)
    stoData: UInt256;                  # Storage data payload value
      ): Result[void,AristoError] =
  ## Store the `stoData` data argument on the storage area addressed by
  ## `(accPath,stoPath)` where `accPath` is the account key (into the MPT)
  ## and `stoPath` is the slot path of the corresponding storage area.
  ##
  var
    path = NibblesBuf.fromBytes(accPath.data)
    next = VertexID(1)
    vtx: VertexRef
    touched: array[NibblesBuf.high(), VertexID]
    pos: int

  template resetKeys() =
    # Reset cached hashes of touched vertices
    for i in 0 ..< pos:
      db.layersResKey((VertexID(1), touched[pos - i - 1]))

  # Walk the accounts sub-tree towards the account leaf, remembering the
  # vertices visited so their cached Merkle keys can be reset on success.
  while path.len > 0:
    touched[pos] = next
    pos += 1

    (vtx, path, next) = ?step(path, (VertexID(1), next), db)

    if vtx.vType == Leaf:
      let
        stoID = vtx.lData.stoID

        # Provide new storage ID when needed
        useID =
          if stoID.isValid: stoID                     # Use as is
          elif stoID.vid.isValid: (true, stoID.vid)   # Re-use previous vid
          else: (true, db.vidFetch())                 # Create new vid

        # Call merge
        pyl = LeafPayload(pType: StoData, stoData: stoData)
        rc = db.mergePayloadImpl(useID.vid, stoPath.data, pyl)

      if rc.isOk:
        # Mark account path Merkle keys for update
        resetKeys()

        db.layersPutStoLeaf(mixUp(accPath, stoPath), rc.value)

        if not stoID.isValid:
          # Make sure that there is an account that refers to that storage trie
          let leaf = vtx.dup # Dup on modify
          leaf.lData.stoID = useID
          db.layersPutAccLeaf(accPath, leaf)
          db.layersPutVtx((VertexID(1), touched[pos - 1]), leaf)

        return ok()

      elif rc.error in MergeNoAction:
        assert stoID.isValid # debugging only
        return ok()

      return err(rc.error)

  err(MergeHikeFailed)
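
# Usage sketch (assumptions: `db` and `accPath` as above; `slotKey` is the
# keccak hash of the 32-byte slot index, the usual addressing scheme for EVM
# storage slots):
#
#   db.mergeStorageData(accPath, slotKey, 42.u256).isOkOr:
#     return err(error)
#   # On success the account leaf carries a valid storage ID and the cached
#   # Merkle keys along the account path have been invalidated.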

# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------