# nimbus-eth1
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed under either of
#  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
#    http://www.apache.org/licenses/LICENSE-2.0)
#  * MIT license ([LICENSE-MIT](LICENSE-MIT) or
#    http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.

## Aristo DB -- Patricia Trie structural data types
## ================================================
##

{.push raises: [].}

import
  std/[hashes, sets, tables],
  eth/[common, trie/nibbles],
  "."/[desc_error, desc_identifiers]

type
  VertexType* = enum
    ## Type of `Aristo Trie` vertex
    Leaf
    Extension
    Branch

  AristoAccount* = object
    nonce*:     AccountNonce         ## Some `uint64` type
    balance*:   UInt256
    storageID*: VertexID             ## Implies storage root Merkle hash key
    codeHash*:  Hash256

  PayloadType* = enum
    ## Type of leaf data.
    RawData                          ## Generic data
    RlpData                          ## Marked RLP encoded
    AccountData                      ## `Aristo account` with vertex IDs links

  PayloadRef* = ref object
    case pType*: PayloadType
    of RawData:
      rawBlob*: Blob                 ## Opaque data, default value
    of RlpData:
      rlpBlob*: Blob                 ## Opaque data marked RLP encoded
    of AccountData:
      account*: AristoAccount

  VertexRef* = ref object of RootRef
    ## Vertex for building a hexary Patricia or Merkle Patricia Trie
    case vType*: VertexType
    of Leaf:
      lPfx*: NibblesSeq              ## Portion of path segment
      lData*: PayloadRef             ## Reference to data payload
    of Extension:
      ePfx*: NibblesSeq              ## Portion of path segment
      eVid*: VertexID                ## Edge to vertex with ID `eVid`
    of Branch:
      bVid*: array[16,VertexID]      ## Edge list with vertex IDs

  NodeRef* = ref object of VertexRef
    ## Combined record for a *traditional* `Merkle Patricia Tree` node merged
    ## with a structural `VertexRef` type object.
    error*: AristoError              ## Used for error signalling in RLP decoder
    key*: array[16,HashKey]          ## Merkle hash/es for vertices

  # ----------------------

  FilterRef* = ref object
    ## Delta layer with expanded sequences for quick access.
    fid*: FilterID                   ## Filter identifier
    src*: Hash256                    ## Applicable to this state root
    trg*: Hash256                    ## Resulting state root (i.e. `kMap[1]`)
    sTab*: Table[VertexID,VertexRef] ## Filter structural vertex table
    kMap*: Table[VertexID,HashKey]   ## Filter Merkle hash key mapping
    vGen*: seq[VertexID]             ## Filter unique vertex ID generator

  VidsByLabelTab* = Table[HashLabel,HashSet[VertexID]]
    ## Reverse lookup searching `VertexID` by the hash key/label.

  LayerDeltaRef* = ref object
    ## Delta layers are stacked, implying a hierarchy of tables. Table entries
    ## on a higher level take precedence over lower layer table entries. So an
    ## existing key-value table entry of a layer on top supersedes same-key
    ## entries on all lower layers. A missing entry on a higher layer indicates
    ## that the key-value pair might be found on some lower layer.
    ##
    ## A zero value (`nil`, empty hash, etc.) is considered a missing key-value
    ## pair. Tables on the `LayerDelta` may have stray zero key-value pairs for
    ## missing entries due to repeated transactions while adding and deleting
    ## entries. There is no need to purge redundant zero entries.
    ##
    ## As for `kMap[]` entries, there might be a zero value entry relating
    ## (i.e. indexed by the same vertex ID) to an `sTab[]` non-zero value entry
    ## (of the same layer or a lower layer, whichever comes first.) This entry
    ## is kept as a reminder that the hash value of the `kMap[]` entry needs
    ## to be re-compiled.
    ##
    ## The reasoning behind the above scenario is that every vertex held on the
    ## `sTab[]` tables must correspond to a hash entry held on the `kMap[]`
    ## tables. So a corresponding zero value or missing entry produces an
    ## inconsistent state that must be resolved.
    ##
    sTab*: Table[VertexID,VertexRef] ## Structural vertex table
    kMap*: Table[VertexID,HashLabel] ## Merkle hash key mapping
    pAmk*: VidsByLabelTab            ## Reverse `kMap` entries, hash key lookup

  LayerFinalRef* = ref object
    ## Final tables fully supersede tables on lower layers when stacked as a
    ## whole. Missing entries on a higher layer are the final state (for the
    ## top layer version of the table.)
    ##
    ## These structures are used for tables which are typically smaller than
    ## the ones on the `LayerDelta` object.
    ##
    lTab*: Table[LeafTie,VertexID]   ## Access path to leaf vertex
    pPrf*: HashSet[VertexID]         ## Locked vertices (proof nodes)
    vGen*: seq[VertexID]             ## Unique vertex ID generator
    dirty*: bool                     ## Needs to be hashified if `true`

  LayerRef* = ref LayerObj
  LayerObj* = object
    ## Hexary trie database layer structures. Any layer holds the full
    ## change relative to the backend.
    delta*: LayerDeltaRef            ## Most structural tables held as deltas
    final*: LayerFinalRef            ## Stored as latest version
    txUid*: uint                     ## Transaction identifier if positive

  # ----------------------

  QidLayoutRef* = ref object
    ## Layout of cascaded list of filter ID slot queues where a slot queue
    ## with index `N+1` serves as an overflow queue of slot queue `N`.
    q*: array[4,QidSpec]

  QidSpec* = tuple
    ## Layout of a filter ID slot queue
    size: uint                       ## Capacity of queue, length within `1..wrap`
    width: uint                      ## Instance gaps (relative to prev. item)
    wrap: QueueID                    ## Range `1..wrap` for round-robin queue

  QidSchedRef* = ref object of RootRef
    ## Current state of the filter queues
    ctx*: QidLayoutRef               ## Organisation of the FIFO
    state*: seq[(QueueID,QueueID)]   ## Current fill state
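
# Illustrative sketch only (not used by the rest of the module): how the
# variant objects above combine, here a `Leaf` vertex carrying an
# `AccountData` payload. The field values are made up for the example.
when isMainModule:
  block:
    let
      acc  = AristoAccount(nonce: 1, storageID: VertexID(0))
      leaf = VertexRef(
        vType: Leaf,
        lData: PayloadRef(pType: AccountData, account: acc))
    doAssert leaf.lData.pType == AccountData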

const
  DefaultQidWrap = QueueID(0x3fff_ffff_ffff_ffffu64)

  QidSpecSizeMax* = high(uint32).uint
    ## Maximum value allowed for a `size` value of a `QidSpec` object

  QidSpecWidthMax* = high(uint32).uint
    ## Maximum value allowed for a `width` value of a `QidSpec` object

# ------------------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------------------

func max(a, b, c: int): int =
  max(max(a,b),c)

# ------------------------------------------------------------------------------
# Public helpers: `NodeRef` and `PayloadRef`
# ------------------------------------------------------------------------------

func init*(T: type LayerRef): T =
  ## Constructor, returns empty layer
  T(delta: LayerDeltaRef(),
    final: LayerFinalRef())
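
# Illustrative sketch only: how stacked delta layers resolve a lookup, with
# entries on a higher layer superseding lower ones (see `LayerDeltaRef`). The
# `lookUp()` helper is hypothetical, not part of the module API, and assumes
# the usual `VertexID` table helpers (`hash`, `==`) from `desc_identifiers`.
when isMainModule:
  block:
    proc lookUp(stack: seq[LayerRef]; vid: VertexID): VertexRef =
      # Walk the layer stack from the top (last entry) downwards.
      for i in countdown(stack.len - 1, 0):
        if stack[i].delta.sTab.hasKey(vid):
          return stack[i].delta.sTab.getOrDefault(vid)
      VertexRef(nil)

    var
      lower = LayerRef.init()
      upper = LayerRef.init()
    lower.delta.sTab[VertexID(1)] = VertexRef(vType: Branch)
    upper.delta.sTab[VertexID(1)] = VertexRef(vType: Extension)
    # The entry on the top layer wins.
    doAssert lookUp(@[lower, upper], VertexID(1)).vType == Extension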

func hash*(node: NodeRef): Hash =
  ## Table/KeyedQueue/HashSet mixin
  cast[pointer](node).hash

# ---------------

proc `==`*(a, b: PayloadRef): bool =
  ## Beware, potential deep comparison
  if a.isNil:
    return b.isNil
  if b.isNil:
    return false
  if unsafeAddr(a[]) != unsafeAddr(b[]):
    if a.pType != b.pType:
      return false
    case a.pType:
    of RawData:
      if a.rawBlob != b.rawBlob:
        return false
    of RlpData:
      if a.rlpBlob != b.rlpBlob:
        return false
    of AccountData:
      if a.account != b.account:
        return false
  true

proc `==`*(a, b: VertexRef): bool =
  ## Beware, potential deep comparison
  if a.isNil:
    return b.isNil
  if b.isNil:
    return false
  if unsafeAddr(a[]) != unsafeAddr(b[]):
    if a.vType != b.vType:
      return false
    case a.vType:
    of Leaf:
      if a.lPfx != b.lPfx or a.lData != b.lData:
        return false
    of Extension:
      if a.ePfx != b.ePfx or a.eVid != b.eVid:
        return false
    of Branch:
      for n in 0..15:
        if a.bVid[n] != b.bVid[n]:
          return false
  true

proc `==`*(a, b: NodeRef): bool =
  ## Beware, potential deep comparison
  if a.VertexRef != b.VertexRef:
    return false
  case a.vType:
  of Extension:
    if a.key[0] != b.key[0]:
      return false
  of Branch:
    for n in 0..15:
      if a.bVid[n] != 0.VertexID and a.key[n] != b.key[n]:
        return false
  else:
    discard
  true
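
# Illustrative sketch only: `==` compares structurally, so two separately
# allocated vertices with the same content are equal while their references
# still differ.
when isMainModule:
  block:
    let
      x = VertexRef(vType: Branch)
      y = VertexRef(vType: Branch)
    doAssert x == y                          # deep, structural comparison
    doAssert cast[pointer](x) != cast[pointer](y)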

# ------------------------------------------------------------------------------
# Public helpers, miscellaneous functions
# ------------------------------------------------------------------------------

func dup*(pld: PayloadRef): PayloadRef =
  ## Duplicate payload.
  case pld.pType:
  of RawData:
    PayloadRef(
      pType:   RawData,
      rawBlob: pld.rawBlob)
  of RlpData:
    PayloadRef(
      pType:   RlpData,
      rlpBlob: pld.rlpBlob)
  of AccountData:
    PayloadRef(
      pType:   AccountData,
      account: pld.account)

func dup*(vtx: VertexRef): VertexRef =
  ## Duplicate vertex.
  # Not using `deepCopy()` here (some `gc` needs `--deepcopy:on`.)
  if vtx.isNil:
    VertexRef(nil)
  else:
    case vtx.vType:
    of Leaf:
      VertexRef(
        vType: Leaf,
        lPfx:  vtx.lPfx,
        lData: vtx.lData.dup)
    of Extension:
      VertexRef(
        vType: Extension,
        ePfx:  vtx.ePfx,
        eVid:  vtx.eVid)
    of Branch:
      VertexRef(
        vType: Branch,
        bVid:  vtx.bVid)

func dup*(node: NodeRef): NodeRef =
  ## Duplicate node.
  # Not using `deepCopy()` here (some `gc` needs `--deepcopy:on`.)
  if node.isNil:
    NodeRef(nil)
  else:
    case node.vType:
    of Leaf:
      NodeRef(
        vType: Leaf,
        lPfx:  node.lPfx,
        lData: node.lData.dup,
        key:   node.key)
    of Extension:
      NodeRef(
        vType: Extension,
        ePfx:  node.ePfx,
        eVid:  node.eVid,
        key:   node.key)
    of Branch:
      NodeRef(
        vType: Branch,
        bVid:  node.bVid,
        key:   node.key)
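
# Illustrative sketch only: `dup()` yields an independent deep copy, so
# updating the original leaf payload afterwards does not affect the copy.
when isMainModule:
  block:
    let leaf = VertexRef(
      vType: Leaf,
      lData: PayloadRef(pType: RawData, rawBlob: @[1.byte, 2, 3]))
    let copy = leaf.dup
    leaf.lData.rawBlob = @[9.byte]
    doAssert copy.lData.rawBlob == @[1.byte, 2, 3]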

func dup*(final: LayerFinalRef): LayerFinalRef =
  ## Duplicate final layer.
  LayerFinalRef(
    lTab:  final.lTab,
    pPrf:  final.pPrf,
    vGen:  final.vGen,
    dirty: final.dirty)

# ---------------

func to*(node: NodeRef; T: type VertexRef): T =
  ## Extract a copy of the `VertexRef` part from a `NodeRef`.
  node.VertexRef.dup
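
# Illustrative sketch only: `to(VertexRef)` keeps just the structural vertex
# part, so a bare `Branch` node and a bare `Branch` vertex compare equal.
when isMainModule:
  block:
    let node = NodeRef(vType: Branch)
    doAssert node.to(VertexRef) == VertexRef(vType: Branch)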

func to*(a: array[4,tuple[size, width: int]]; T: type QidLayoutRef): T =
  ## Convert a size-width array to a `QidLayoutRef` layout. Overly large
  ## array field values are adjusted to their maximal size.
  var q: array[4,QidSpec]
  for n in 0..3:
    q[n] = (min(a[n].size.uint, QidSpecSizeMax),
            min(a[n].width.uint, QidSpecWidthMax),
            DefaultQidWrap)
  q[0].width = 0
  T(q: q)

func to*(a: array[4,tuple[size, width, wrap: int]]; T: type QidLayoutRef): T =
  ## Convert a size-width-wrap array to a `QidLayoutRef` layout. Overly large
  ## array field values are adjusted to their maximal size. Too small `wrap`
  ## values are adjusted to their minimal size.
  var q: array[4,QidSpec]
  for n in 0..2:
    q[n] = (min(a[n].size.uint, QidSpecSizeMax),
            min(a[n].width.uint, QidSpecWidthMax),
            QueueID(max(a[n].size + a[n+1].width, a[n].width+1, a[n].wrap)))
  q[0].width = 0
  q[3] = (min(a[3].size.uint, QidSpecSizeMax),
          min(a[3].width.uint, QidSpecWidthMax),
          QueueID(max(a[3].size, a[3].width, a[3].wrap)))
  T(q: q)
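
# Illustrative sketch only: oversized `size`/`width` entries are clamped to
# `QidSpecSizeMax`/`QidSpecWidthMax`, and the `width` of queue `0` is always
# forced to zero by the converter above.
when isMainModule:
  block:
    let layout = [(high(int), high(int)), (3, 4), (5, 6), (7, 8)].to(QidLayoutRef)
    doAssert layout.q[0].size == QidSpecSizeMax
    doAssert layout.q[0].width == 0
    doAssert layout.q[1].size == 3
    doAssert layout.q[1].width == 4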

# ------------------------------------------------------------------------------
# Public constructors for filter slot scheduler state
# ------------------------------------------------------------------------------

func init*(T: type QidSchedRef; a: array[4,(int,int)]): T =
  ## Constructor, see comments at the converter function `to()` for
  ## adjustments of the layout argument `a`.
  T(ctx: a.to(QidLayoutRef))

func init*(T: type QidSchedRef; a: array[4,(int,int,int)]): T =
  ## Constructor, see comments at the converter function `to()` for
  ## adjustments of the layout argument `a`.
  T(ctx: a.to(QidLayoutRef))

func init*(T: type QidSchedRef; ctx: QidLayoutRef): T =
  T(ctx: ctx)
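
# Illustrative sketch only: set up a filter slot scheduler from a size/width
# layout; the values are clamped as described for the `to()` converters above.
when isMainModule:
  block:
    let sched = QidSchedRef.init([(4, 0), (4, 3), (4, 3), (4, 3)])
    doAssert sched.ctx.q.len == 4
    doAssert sched.state.len == 0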

# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------