nimbus-eth1/nimbus/db/aristo/aristo_desc/desc_structural.nim
Jacek Sieka 2961905a95
aristo: fork support via layers/txframes (#2960)
* aristo: fork support via layers/txframes

This change reorganises how the database is accessed: instead of
holding a "current frame" in the database object, a dag of frames is
created based on the "base frame" held in `AristoDbRef`, and all
database access happens through one of these frames, each of which can
be thought of as a consistent point-in-time snapshot of the database
based on a particular fork of the chain.
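
As a sketch of the resulting structure (`SketchFrame` below is a
stand-in for illustration, not the actual type): several in-memory
frames may share a parent, each one representing the state after a
different fork of the chain.

  type
    SketchFrame = ref object
      parent: SketchFrame              # nil for the base frame backed by disk

  let base  = SketchFrame()                # consistent on-disk snapshot
  let forkA = SketchFrame(parent: base)    # block N as seen on fork A
  let forkB = SketchFrame(parent: base)    # a competing block N on fork B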

In the code, "frame", "transaction" and "layer" are used to denote more
or less the same thing: a dag of stacked changes backed by the on-disk
database.

Although this is not a requirement, in practice each frame holds the
change set of a single block - as such, a frame and its ancestors
leading up to the on-disk state represent the state of the database
after that block has been applied.

"committing" means merging the changes to its parent frame so that the
difference between them is lost and only the cumulative changes remain -
this facility enables frames to be combined arbitrarily wherever they
are in the dag.

In particular, it becomes possible to consolidate a set of changes near
the base of the dag and commit those to disk without having to re-do the
in-memory frames built on top of them - this is useful for "flattening"
a set of changes during a base update and sending those to storage
without having to perform a block replay on top.
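
As a rough sketch of what committing boils down to, again with stand-in
types and field names (`commitToParent`, and `sTab` as a plain table,
are illustrative only) - the real code also reconciles Merkle keys, the
vertex ID counter and the KVT side:

  import std/tables

  type
    SketchFrame = ref object
      parent: SketchFrame              # nil for the frame sitting on disk
      sTab: Table[uint64, string]      # stand-in for the per-frame change set

  proc commitToParent(frame: SketchFrame) =
    # child entries supersede the parent's; afterwards only the cumulative
    # change set remains and the child frame can be dropped
    for k, v in frame.sTab:
      frame.parent.sTab[k] = v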

Looking at abstractions, a side effect of this change is that the KVT
and Aristo are brought closer together by considering them part of the
"same" atomic transaction set. The way the code is organised, applying a
block and saving it to the KVT happen in the same "logical" frame -
therefore, discarding the frame discards both the Aristo and KVT
changes at the same time, and likewise they are persisted to disk
together. This makes reasoning about the database somewhat easier but
has the downside of increased memory usage, something that may need
addressing in the future.

Because the code reasons more strictly about frames and the state of the
persisted database, it also becomes more visible where ForkedChain
should be used and where it is still missing - in particular, frames
represent a single branch of history while ForkedChain manages multiple
parallel forks. User-facing services such as the RPC should use the
latter, i.e. until a block has been finalized, a getBlock request should
consider all forks and not just the blocks in the canonical head branch.

Another advantage of this approach is that `AristoDbRef` conceptually
becomes simpler - removing its tracking of the "current" transaction
stack simplifies reasoning about what can go wrong, since this state now
has to be passed around explicitly in the form of `AristoTxRef`. As
such, many of the "stack inconsistency" failure modes that tests and
facilities in the code had to deal with are now structurally prevented.
The test suite will need significant refactoring after this change.

Once this change has been merged, there are several follow-ups to do:

* there's no mechanism for keeping frames up to date as they get
committed or rolled back - TODO
* naming is confused - many names for the same thing for legacy reasons
* ForkedChain support is still missing in lots of code
* clean up redundant logic based on previous designs - in particular the
debug and introspection code no longer makes sense
* the way change sets are stored will probably need revisiting - because
it's a stack of changes where each frame must be interrogated to find an
on-disk value, with a base distance of 128 we'll at minimum have to
perform 128 frame lookups for *every* database interaction (see the
sketch after this list) - regardless, the "dag-like" nature will stay
* dispose and commit are poorly defined and perhaps redundant - in
theory, one could simply let the GC collect abandoned frames etc, though
it's likely an explicit mechanism will remain useful, so they stay for
now
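
To illustrate the lookup cost mentioned above, a minimal stand-in
version of the read path (illustrative names only - the real lookup
goes through `sTab`/`kMap` and ends at the backend):

  import std/[options, tables]

  type
    SketchFrame = ref object
      parent: SketchFrame              # nil once we reach the persisted state
      sTab: Table[uint64, string]      # stand-in for the per-frame change set

  proc lookup(frame: SketchFrame, vid: uint64): Option[string] =
    # walk towards the base until some frame knows about `vid`; with a base
    # distance of 128 that is up to 128 probes before asking the database
    var cur = frame
    while cur != nil:
      if vid in cur.sTab:
        return some(cur.sTab[vid])
      cur = cur.parent
    none(string)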

More about the changes:

* `AristoDbRef` gains a `txRef` field (todo: rename) that "more or less"
corresponds to the old `balancer` field
* `AristoDbRef.stack` is gone - instead, there's a chain of
`AristoTxRef` objects that hold their respective "layer" which has the
actual changes
* No more reasoning about "top" and "stack" - instead, each
`AristoTxRef` can be a "head" that "more or less" corresponds to the old
single-history `top` notion and its stack
* `level` still represents "distance to base" - it's computed from the
parent chain instead of being stored (sketched below)
* one has to be careful not to use frames where ForkedChain was intended
- layers are only for a single branch of history!
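
For illustration, with the same kind of stand-in frame type as in the
sketches above - `level` simply falls out of the parent chain:

  type
    SketchFrame = ref object
      parent: SketchFrame        # nil for the frame backed directly by disk

  func level(frame: SketchFrame): int =
    # "distance to base": count parents up to the persisted state
    var cur = frame
    while cur.parent != nil:
      inc result
      cur = cur.parent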

* fix layer vtop after rollback

* engine fix

* Fix test_txpool

* Fix test_rpc

* Fix copyright year

* fix simulator

* Fix copyright year

* Fix copyright year

* Fix tracer

* Fix infinite recursion bug

* Remove aristo and kvt empty files

* Fix copyright year

* Fix fc chain_kvt

* ForkedChain refactoring

* Fix merge master conflict

* Fix copyright year

* Reparent txFrame

* Fix test

* Fix txFrame reparent again

* Cleanup and fix test

* UpdateBase bugfix and fix test

* Fix newPayload bug discovered by hive

* Fix engine api fcu

* Clean up call template, chain_kvt, and txguid

* Fix copyright year

* work around base block loading issue

* Add test

* Fix updateHead bug

* Fix updateBase bug

* Change func commitBase to proc commitBase

* Touch up and fix debug mode crash

---------

Co-authored-by: jangko <jangko128@gmail.com>
2025-02-06 14:04:50 +07:00


# nimbus-eth1
# Copyright (c) 2023-2025 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
# http://www.apache.org/licenses/LICENSE-2.0)
# * MIT license ([LICENSE-MIT](LICENSE-MIT) or
# http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.
## Aristo DB -- Patricia Trie structural data types
## ================================================
##
{.push raises: [].}

import
  std/[hashes as std_hashes, tables],
  stint,
  eth/common/[accounts, base, hashes],
  ./desc_identifiers

export stint, tables, accounts, base, hashes

type
  LeafTiePayload* = object
    ## Generalised key-value pair for a sub-trie. The main trie is the
    ## sub-trie with `root=VertexID(1)`.
    leafTie*: LeafTie                  ## Full `Patricia Trie` path root-to-leaf
    payload*: LeafPayload              ## Leaf data payload (see below)

  VertexType* = enum
    ## Type of `Aristo Trie` vertex
    Leaf
    Branch

  AristoAccount* = object
    ## Application relevant part of an Ethereum account. Note that the storage
    ## data/tree reference is not part of the account (see `LeafPayload` below.)
    nonce*: AccountNonce               ## Some `uint64` type
    balance*: UInt256
    codeHash*: Hash32

  PayloadType* = enum
    ## Type of leaf data.
    AccountData                        ## `Aristo account` with vertex ID links
    StoData                            ## Slot storage data

  StorageID* = tuple
    ## Once a storage tree is allocated, its root vertex ID is registered in
    ## the leaf payload of an account. After subsequent storage tree deletion
    ## the root vertex ID will be kept in the leaf payload for re-use but set
    ## disabled (`.isValid` = `false`).
    isValid: bool                      ## See also `isValid()` for `VertexID`
    vid: VertexID                      ## Storage root vertex ID

  LeafPayload* = object
    ## The payload type depends on the sub-tree used. The `VertexID(1)` rooted
    ## sub-tree only has `AccountData` type payload, stoID-based have StoData
    case pType*: PayloadType
    of AccountData:
      account*: AristoAccount
      stoID*: StorageID                ## Storage vertex ID (if any)
    of StoData:
      stoData*: UInt256

  VertexRef* = ref object
    ## Vertex for building a hexary Patricia or Merkle Patricia Trie
    pfx*: NibblesBuf
      ## Portion of path segment - extension nodes are branch nodes with
      ## non-empty prefix
    case vType*: VertexType
    of Leaf:
      lData*: LeafPayload              ## Reference to data payload
    of Branch:
      startVid*: VertexID
      used*: uint16

  NodeRef* = ref object of RootRef
    ## Combined record for a *traditional* `Merkle Patricia Tree` node merged
    ## with a structural `VertexRef` type object.
    vtx*: VertexRef
    key*: array[16,HashKey]            ## Merkle hash/es for vertices

  # ----------------------

  VidVtxPair* = object
    ## Handy helper structure
    vid*: VertexID                     ## Table lookup vertex ID (if any)
    vtx*: VertexRef                    ## Reference to vertex

  SavedState* = object
    ## Last saved state
    key*: Hash32                       ## Some state hash (if any)
    serial*: uint64                    ## Generic identifier from application

  LayerRef* = ref Layer
  Layer* = object
    ## Delta layers are stacked implying a tables hierarchy. Table entries on
    ## a higher level take precedence over lower layer table entries. So an
    ## existing key-value table entry of a layer on top supersedes same key
    ## entries on all lower layers. A missing entry on a higher layer indicates
    ## that the key-value pair might be found on some lower layer.
    ##
    ## A zero value (`nil`, empty hash etc.) is considered a missing key-value
    ## pair. Tables on the `LayerDelta` may have stray zero key-value pairs for
    ## missing entries due to repeated transactions while adding and deleting
    ## entries. There is no need to purge redundant zero entries.
    ##
    ## As for `kMap[]` entries, there might be a zero value entry relating
    ## (i.e. indexed by the same vertex ID) to an `sTab[]` non-zero value entry
    ## (of the same layer or a lower layer whatever comes first.) This entry
    ## is kept as a reminder that the hash value of the `kMap[]` entry needs
    ## to be re-compiled.
    ##
    ## The reasoning behind the above scenario is that every vertex held on the
    ## `sTab[]` tables must correspond to a hash entry held on the `kMap[]`
    ## tables. So a corresponding zero value or missing entry produces an
    ## inconsistent state that must be resolved.
    ##
    sTab*: Table[RootedVertexID,VertexRef]  ## Structural vertex table
    kMap*: Table[RootedVertexID,HashKey]    ## Merkle hash key mapping
    vTop*: VertexID                         ## Last used vertex ID

    accLeaves*: Table[Hash32, VertexRef]    ## Account path -> VertexRef
    stoLeaves*: Table[Hash32, VertexRef]    ## Storage path -> VertexRef

    cTop*: VertexID                         ## Last committed vertex ID

  GetVtxFlag* = enum
    PeekCache
      ## Peek into, but don't update cache - useful on work loads that are
      ## unfriendly to caches
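
# Illustrative sketch only (not part of this module's API): reading a vertex
# through a stack of layers probes each `sTab` from the top down and treats
# the first hit - including an explicit `nil` entry marking a deletion - as
# authoritative, falling through to the on-disk backend when no layer knows
# the key:
#
#   proc readThrough(stack: seq[LayerRef]; rvid: RootedVertexID): VertexRef =
#     for i in countdown(stack.high, 0):
#       if rvid in stack[i].sTab:
#         return stack[i].sTab.getOrDefault(rvid)
#     VertexRef(nil)   # not in any layer: ask the backend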

# ------------------------------------------------------------------------------
# Public helpers (misc)
# ------------------------------------------------------------------------------

func bVid*(vtx: VertexRef, nibble: uint8): VertexID =
  ## Child vertex ID for the given `nibble`, or the default (invalid)
  ## `VertexID` when the branch slot is unused
  if (vtx.used and (1'u16 shl nibble)) > 0:
    VertexID(uint64(vtx.startVid) + nibble)
  else:
    default(VertexID)

func setUsed*(vtx: VertexRef, nibble: uint8, used: static bool): VertexID =
  ## Mark the branch slot for `nibble` as used or unused and return the
  ## resulting child vertex ID
  vtx.used =
    when used:
      vtx.used or (1'u16 shl nibble)
    else:
      vtx.used and (not (1'u16 shl nibble))
  vtx.bVid(nibble)

func hash*(node: NodeRef): Hash =
  ## Table/KeyedQueue/HashSet mixin
  cast[pointer](node).hash

# ------------------------------------------------------------------------------
# Public helpers: `NodeRef` and `LeafPayload`
# ------------------------------------------------------------------------------

proc `==`*(a, b: LeafPayload): bool =
  ## Beware, potential deep comparison
  if unsafeAddr(a) != unsafeAddr(b):
    if a.pType != b.pType:
      return false
    case a.pType:
    of AccountData:
      if a.account != b.account or
          a.stoID != b.stoID:
        return false
    of StoData:
      if a.stoData != b.stoData:
        return false
  true

proc `==`*(a, b: VertexRef): bool =
  ## Beware, potential deep comparison
  if a.isNil:
    return b.isNil
  if b.isNil:
    return false
  if unsafeAddr(a[]) != unsafeAddr(b[]):
    if a.vType != b.vType:
      return false
    case a.vType:
    of Leaf:
      if a.pfx != b.pfx or a.lData != b.lData:
        return false
    of Branch:
      if a.pfx != b.pfx or a.startVid != b.startVid or a.used != b.used:
        return false
  true

iterator pairs*(vtx: VertexRef): tuple[nibble: uint8, vid: VertexID] =
  ## Iterates over the sub-vids of a branch (does nothing for leaves)
  case vtx.vType:
  of Leaf:
    discard
  of Branch:
    for n in 0'u8 .. 15'u8:
      if (vtx.used and (1'u16 shl n)) > 0:
        yield (n, VertexID(uint64(vtx.startVid) + n))

iterator allPairs*(vtx: VertexRef): tuple[nibble: uint8, vid: VertexID] =
  ## Iterates over the sub-vids of a branch (does nothing for leaves) including
  ## currently unset nodes
  case vtx.vType:
  of Leaf:
    discard
  of Branch:
    for n in 0'u8 .. 15'u8:
      if (vtx.used and (1'u16 shl n)) > 0:
        yield (n, VertexID(uint64(vtx.startVid) + n))
      else:
        yield (n, default(VertexID))

proc `==`*(a, b: NodeRef): bool =
  ## Beware, potential deep comparison
  if a.vtx != b.vtx:
    return false
  case a.vtx.vType:
  of Branch:
    for n in 0'u8..15'u8:
      if a.vtx.bVid(n) != 0.VertexID or b.vtx.bVid(n) != 0.VertexID:
        if a.key[n] != b.key[n]:
          return false
  else:
    discard
  true

# ------------------------------------------------------------------------------
# Public helpers, miscellaneous functions
# ------------------------------------------------------------------------------

func dup*(pld: LeafPayload): LeafPayload =
  ## Duplicate payload.
  case pld.pType:
  of AccountData:
    LeafPayload(
      pType: AccountData,
      account: pld.account,
      stoID: pld.stoID)
  of StoData:
    LeafPayload(
      pType: StoData,
      stoData: pld.stoData)

func dup*(vtx: VertexRef): VertexRef =
  ## Duplicate vertex.
  # Not using `deepCopy()` here (some `gc` needs `--deepcopy:on`.)
  if vtx.isNil:
    VertexRef(nil)
  else:
    case vtx.vType:
    of Leaf:
      VertexRef(
        vType: Leaf,
        pfx: vtx.pfx,
        lData: vtx.lData.dup)
    of Branch:
      VertexRef(
        vType: Branch,
        pfx: vtx.pfx,
        startVid: vtx.startVid,
        used: vtx.used)

func dup*(node: NodeRef): NodeRef =
  ## Duplicate node.
  # Not using `deepCopy()` here (some `gc` needs `--deepcopy:on`.)
  if node.isNil:
    NodeRef(nil)
  else:
    NodeRef(
      vtx: node.vtx.dup(),
      key: node.key)

func dup*(wp: VidVtxPair): VidVtxPair =
  ## Safe copy of `wp` argument
  VidVtxPair(
    vid: wp.vid,
    vtx: wp.vtx.dup)

# ---------------

func to*(node: NodeRef; T: type VertexRef): T =
  ## Extract a copy of the `VertexRef` part from a `NodeRef`.
  node.vtx.dup()

# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------