# nimbus-eth1
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
#   http://www.apache.org/licenses/LICENSE-2.0)
# * MIT license ([LICENSE-MIT](LICENSE-MIT) or
#   http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.

## Aristo DB -- Patricia Trie Merkleisation
## ========================================
##
## For the current state of the `Patricia Trie`, keys (equivalent to hashes)
## are associated with the vertex IDs. Existing key associations are taken
## as-is/unchecked unless the ID is marked a proof node. In the latter case,
## the key is assumed to be correct after re-calculation.
##
## The labelling algorithm works roughly as follows:
##
## * Given a set of start or root vertices, build the forest (of trees)
##   downwards towards leaf vertices so that none of these vertices has a
##   Merkle hash label.
##
## * Starting at the leaf vertices in width-first fashion, calculate the
##   Merkle hashes and label the leaf vertices. Recursively work upwards,
##   labelling vertices until the root nodes are reached.
##
## Note that there are some tweaks for `proof` node vertices which lead to
## incomplete trees in a way that the algorithm handles existing Merkle hash
## labels for missing vertices.
##
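## As a rough illustration (a simplified sketch, not the exact data layout),
## consider a two-level trie with root vertex `1` and unlabelled leaf
## vertices `2` and `3`. The width-first schedule starts out as
## ::
##   root = {1}          # top level targets
##   base = {2:1, 3:1}   # leaf level links, processed first
##   pool = {}           # upper level links, processed later
##
## Once `2` and `3` are labelled, all child references of `1` resolve and
## the Merkle hash of `1` can be computed when the roots are finalised.
##
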
{.push raises: [].}

import
  std/[algorithm, sequtils, sets, tables],
  chronicles,
  eth/common,
  results,
  "."/[aristo_desc, aristo_get, aristo_layers, aristo_serialise, aristo_utils]

type
  WidthFirstForest = object
    ## Collected width first search trees
    root: HashSet[VertexID]                ## Top level, root targets
    pool: Table[VertexID,VertexID]         ## Upper links pool
    base: Table[VertexID,VertexID]         ## Width-first leaf level links
    leaf: seq[VertexID]                    ## Stand-alone leafs to process
    rev: Table[VertexID,HashSet[VertexID]] ## Reverse look up table

logScope:
  topics = "aristo-hashify"

# ------------------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------------------

func getOrVoid(tab: Table[VertexID,VertexID]; vid: VertexID): VertexID =
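  ## Shorthand for `getOrDefault()`, returning `VertexID(0)` when `vid` has
  ## no table entry.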
  tab.getOrDefault(vid, VertexID(0))

# ------------------------------------------------------------------------------
# Private functions
# ------------------------------------------------------------------------------

func hasValue(
    wffTable: Table[VertexID,VertexID];
    vid: VertexID;
    wff: WidthFirstForest;
      ): bool =
  ## Helper for efficient `value` access:
  ## ::
  ##   wffTable.hasValue(wff, vid)
  ##
  ## instead of
  ## ::
  ##   vid in wffTable.values.toSeq
  ##
  for w in wff.rev.getOrVoid vid:
    if w in wffTable:
      return true

proc pedigree(
    db: AristoDbRef;                     # Database, top layer
    wff: var WidthFirstForest;
    ancestors: HashSet[VertexID];        # Vertex IDs to start connecting from
    proofs: HashSet[VertexID];           # Additional proof nodes to start from
      ): Result[void, (VertexID,AristoError)] =
  ## For each vertex ID from the argument set `ancestors` find all un-labelled
  ## grand child vertices and build a forest (of trees) starting from the
  ## grand child vertices.
  ##
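  ## For the sketch in the module documentation, calling `pedigree` with
  ## `ancestors={1}` would leave `wff.root={1}` and `wff.base={2:1, 3:1}`
  ## (a simplified illustration, assuming `2` and `3` are unlabelled leafs.)
  ##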
  var
    leafs: HashSet[VertexID]

  proc register(wff: var WidthFirstForest; fromVid, toVid: VertexID) =
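    ## Add the link `fromVid->toVid` to the schedule: it is kept in `base[]`
    ## while no other scheduled link ends at `fromVid`, otherwise it is
    ## parked in `pool[]` as an upper level link. The reverse table `rev[]`
    ## and the `root{}` set are updated accordingly.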
    if toVid in wff.base:
      # * there is `toVid->*` in `base[]`
      # * so `toVid->*` is moved to `pool[]`
      wff.pool[toVid] = wff.base.getOrVoid toVid
      wff.base.del toVid

    if wff.base.hasValue(fromVid, wff):
      # * there is `*->fromVid` in `base[]`
      # * so store `fromVid->toVid` in `pool[]`
      wff.pool[fromVid] = toVid
    else:
      # store `fromVid->toVid` in `base[]`
      wff.base[fromVid] = toVid

    # Register reverse pair for quick table value lookup
    wff.rev.withValue(toVid, val):
      val[].incl fromVid
    do:
      wff.rev[toVid] = [fromVid].toHashSet

    # Remove unnecessary sub-trie roots (e.g. for a storage root)
    wff.root.excl fromVid

  # Initialise greedy search which will keep a set of current leafs in the
  # `leafs{}` set and follow up links in the `pool[]` table, leading all the
  # way up to the `root{}` set.
  #
  # Process root nodes if they are unlabelled
  var rootWasDeleted = VertexID(0)
  for root in ancestors:
    let vtx = db.getVtx root
    if vtx.isNil:
      if VertexID(LEAST_FREE_VID) <= root:
        # There must be another root as well (e.g. `$1` for a storage root).
        # Only the last one encountered will be reported with an error code.
        rootWasDeleted = root
    elif not db.getKey(root).isValid:
      # Need to process `root` node
      let children = vtx.subVids
      if children.len == 0:
        # This is an isolated leaf node
        wff.leaf.add root
      else:
        wff.root.incl root
        for child in vtx.subVids:
          if not db.getKey(child).isValid:
            leafs.incl child
            wff.register(child, root)
  if rootWasDeleted.isValid and
     wff.root.len == 0 and
     wff.leaf.len == 0:
    return err((rootWasDeleted,HashifyRootVtxUnresolved))

  # Initialisation for `proof` nodes which are somewhat similar to `root` nodes.
  for proof in proofs:
    let vtx = db.getVtx proof
    if vtx.isNil or not db.getKey(proof).isValid:
      return err((proof,HashifyVtxUnresolved))
    let children = vtx.subVids
    if 0 < children.len:
      # To be treated as a root node
      wff.root.incl proof
      for child in vtx.subVids:
        if not db.getKey(child).isValid:
          leafs.incl child
          wff.register(child, proof)

  # Recursively step down and collect unlabelled vertices
  while 0 < leafs.len:
    var redo: typeof(leafs)

    for parent in leafs:
      assert parent.isValid
      assert not db.getKey(parent).isValid

      let vtx = db.getVtx parent
      if not vtx.isNil:
        let children = vtx.subVids.filterIt(not db.getKey(it).isValid)
        if 0 < children.len:
          for child in children:
            redo.incl child
            wff.register(child, parent)
          continue

      if parent notin wff.base:
        # The buck stops here:
        #   move `(parent,granny)` from `pool[]` to `base[]`
        let granny = wff.pool.getOrVoid parent
        assert granny.isValid
        wff.register(parent, granny)
        wff.pool.del parent

    redo.swap leafs

  ok()

# ------------------------------------------------------------------------------
# Private functions, tree traversal
# ------------------------------------------------------------------------------

proc createSched(
    wff: var WidthFirstForest;           # Search tree to create
    db: AristoDbRef;                     # Database, top layer
      ): Result[void,(VertexID,AristoError)] =
  ## Create width-first search schedule (aka forest)
  ##
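  ## The schedule is seeded from the unlabelled vertices in `db.dirty` and
  ## the proof nodes in `db.pPrf`. Stand-alone leafs collected on the way
  ## are labelled right away, below.
  ##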
  ? db.pedigree(wff, db.dirty, db.pPrf)

  if 0 < wff.leaf.len:
    for vid in wff.leaf:
      let node = db.getVtx(vid).toNode(db, beKeyOk=false).valueOr:
        # Make sure that all those nodes are reachable
        for needed in error:
          if needed notin wff.base and
             needed notin wff.pool:
            return err((needed,HashifyVtxUnresolved))
        continue
      db.layersPutKey(VertexID(1), vid, node.digestTo(HashKey))
    wff.leaf.reset() # No longer needed

  ok()

proc processSched(
    wff: var WidthFirstForest;           # Search tree to process
    db: AristoDbRef;                     # Database, top layer
      ): Result[void,(VertexID,AristoError)] =
  ## Traverse width-first schedule and update vertex hash labels.
  ##
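  ## Each round of the loop below must make progress, i.e. label at least
  ## one vertex or re-arrange the schedule; otherwise the `accept` flag
  ## stays unset and the function bails out with `HashifyVtxUnresolved`.
  ##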
  while 0 < wff.base.len:
    var
      accept = false
      redo: typeof(wff.base)

    for (vid,toVid) in wff.base.pairs:
      let vtx = db.getVtx vid
      assert vtx.isValid

      # Try to convert the vertex to a node. This is possible only if all
      # link references have Merkle hash keys, already.
      let node = vtx.toNode(db, stopEarly=false).valueOr:
        # Do this vertex later, again
        if wff.pool.hasValue(vid, wff):
          wff.pool[vid] = toVid
          accept = true # `redo[]` will be different from `base[]`
        else:
          redo[vid] = toVid
        continue
        # End `valueOr` terminates error clause

      # Could resolve => update Merkle hash
      db.layersPutKey(VertexID(1), vid, node.digestTo HashKey)

      # Set follow up link for next round
      let toToVid = wff.pool.getOrVoid toVid
      if toToVid.isValid:
        if toToVid in redo:
          # Got predecessor `(toVid,toToVid)` of `(toToVid,xxx)`,
          # so move `(toToVid,xxx)` from `redo[]` to `pool[]`
          wff.pool[toToVid] = redo.getOrVoid toToVid
          redo.del toToVid
        # Move `(toVid,toToVid)` from `pool[]` to `redo[]`
        wff.pool.del toVid
        redo[toVid] = toToVid

      accept = true # `redo[]` will be different from `base[]`
      # End `for (vid,toVid)..`

    # Make sure that `base[]` is different from `redo[]`
    if not accept:
      let vid = wff.base.keys.toSeq[0]
      return err((vid,HashifyVtxUnresolved))
    # Restart `wff.base[]`
    wff.base.swap redo

  ok()

proc finaliseRoots(
    wff: var WidthFirstForest;           # Search tree to process
    db: AristoDbRef;                     # Database, top layer
      ): Result[void,(VertexID,AristoError)] =
  ## Process root vertices after all other vertices are done.
  ##
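  ## A root in the `pPrf` proof set keeps its stored key, which must match
  ## the freshly computed one; any other root key is (re-)written.
  ##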
  # Make sure that the pool has been exhausted
  if 0 < wff.pool.len:
    let vid = wff.pool.keys.toSeq.sorted[0]
    return err((vid,HashifyVtxUnresolved))

  # Update or verify root nodes
  for vid in wff.root:
    # Calculate hash key
    let
      node = db.getVtx(vid).toNode(db).valueOr:
        return err((vid,HashifyRootVtxUnresolved))
      key = node.digestTo(HashKey)
    if vid notin db.pPrf:
      db.layersPutKey(VertexID(1), vid, key)
    elif key != db.getKey vid:
      return err((vid,HashifyProofHashMismatch))

  ok()

# ------------------------------------------------------------------------------
# Public functions
# ------------------------------------------------------------------------------

proc hashify*(
    db: AristoDbRef;                     # Database, top layer
      ): Result[void,(VertexID,AristoError)] =
  ## Add keys to the `Patricia Trie` so that it becomes a `Merkle Patricia
  ## Tree`.
  ##
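  ## A minimal usage sketch (assuming an `AristoDbRef` instance `db` with
  ## unsaved changes, e.g. after some `merge()` calls):
  ## ::
  ##   let rc = db.hashify()
  ##   if rc.isErr:
  ##     let (vid, error) = rc.error
  ##     debug "hashify failed", vid=vid, error=error
  ##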
  if 0 < db.dirty.len:
    # Set up width-first traversal schedule
    var wff: WidthFirstForest
    ? wff.createSched db

    # Traverse tree spanned by `wff` and label remaining vertices.
    ? wff.processSched db

    # Do/complete state root vertices
    ? wff.finaliseRoots db

    db.top.final.dirty.clear # Mark top layer clean

  ok()

# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------