nimbus-eth1/nimbus/db/aristo/aristo_merge.nim
Jacek Sieka 2961905a95
aristo: fork support via layers/txframes (#2960)
* aristo: fork support via layers/txframes

This change reorganises how the database is accessed: instead of holding a
"current frame" in the database object, a dag of frames is created based
on the "base frame" held in `AristoDbRef`, and all database access
happens through such a frame, which can be thought of as a consistent
point-in-time snapshot of the database based on a particular fork of the
chain.
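
To make this concrete, here is a toy model of the frame dag - the types
are hypothetical simplifications for illustration, not the actual
`AristoDbRef`/`AristoTxRef` layout:

    # Toy model: each frame holds its own change set and points at its parent
    import std/tables

    type
      Frame = ref object
        parent: Frame                 # nil for the base (on-disk) frame
        layer: Table[string, string]  # this frame's change set (simplified)

    let
      base = Frame()                  # state as persisted on disk
      blockA = Frame(parent: base)
      blockB = Frame(parent: base)    # two children of one parent form a fork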

In the code, "frame", "transaction" and "layer" are used to denote more
or less the same thing: a dag of stacked changes backed by the on-disk
database.

Although this is not a requirement, in practice each frame holds the
change set of a single block - as such, the frame and its ancestors
leading up to the on-disk state represent the state of the database
after that block has been applied.

"committing" means merging the changes to its parent frame so that the
difference between them is lost and only the cumulative changes remain -
this facility enables frames to be combined arbitrarily wherever they
are in the dag.

In particular, it becomes possible to consolidate a set of changes near
the base of the dag and commit those to disk without having to re-do the
in-memory frames built on top of them - this is useful for "flattening"
a set of changes during a base update and sending those to storage
without having to perform a block replay on top.
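
Continuing the toy model above, a minimal sketch of what committing
amounts to - folding the child's change set into its parent, after which
the child frame can be dropped:

    # Fold a frame into its parent; newer changes overwrite older ones
    proc commit(frame: Frame) =
      assert frame.parent != nil, "the base frame has no parent to fold into"
      for k, v in frame.layer:
        frame.parent.layer[k] = v

Applied near the base of the dag, this is the "flattening" described
above: the consolidated changes can be sent to storage without replaying
the frames built on top.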

Looking at abstractions, a side effect of this change is that the KVT
and Aristo are brought closer together by considering them to be part of
the "same" atomic transaction set. The way the code is organised,
applying a block and saving it to the kvt happen in the same "logical"
frame - therefore, discarding the frame discards both the aristo and kvt
changes at the same time, and likewise they are persisted to disk
together. This makes reasoning about the database somewhat easier but
has the downside of increased memory usage, something that perhaps will
need addressing in the future.
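
In the toy model, this amounts to one frame carrying both change sets so
that their lifetimes are tied together - again hypothetical, the real
code organises this differently:

    # One logical frame spanning both stores: discarding it drops both
    # change sets, and persisting writes both in the same batch
    type
      DualFrame = ref object
        parent: DualFrame
        aristo: Table[string, string]  # MPT changes (simplified)
        kvt: Table[string, string]     # raw key-value changes (simplified)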

Because the code reasons more strictly about frames and the state of the
persisted database, it also makes it more visible where ForkedChain
should be used and where it is still missing - in particular, frames
represent a single branch of history while ForkedChain manages multiple
parallel forks. User-facing services such as the RPC should use the
latter, i.e. until a block has been finalized, a getBlock request should
consider all forks and not just the blocks in the canonical head branch.

Another advantage of this approach is that `AristoDbRef` becomes
conceptually simpler - removing its tracking of the "current" transaction
stack simplifies reasoning about what can go wrong, since this state now
has to be passed around in the form of `AristoTxRef`. As such, many of
the tests and facilities in the code that were dealing with "stack
inconsistency" are now structurally prevented from happening. The test
suite will need significant refactoring after this change.

Once this change has been merged, there are several follow-ups to do:

* there's no mechanism for keeping frames up to date as they get
committed or rolled back - TODO
* naming is confused - many names for the same thing, for legacy reasons
* forkedchain support is still missing in lots of code
* clean up redundant logic based on previous designs - in particular the
debug and introspection code no longer makes sense
* the way change sets are stored will probably need revisiting - because
it's a stack of changes where each frame must be interrogated to find an
on-disk value, with a base distance of 128 we'll at minimum have to
perform 128 frame lookups for *every* database interaction (see the
lookup sketch after this list) - regardless, the "dag-like" nature will
stay
* dispose and commit are poorly defined and perhaps redundant - in
theory, one could simply let the GC collect abandoned frames etc, though
it's likely an explicit mechanism will remain useful, so they stay for
now
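
The lookup cost mentioned above follows directly from how reads work in
the toy model: every miss walks one frame closer to the base, so the
worst case is one probe per frame on the branch.

    import std/options

    # Reads walk the frame chain towards the base; with a base distance
    # of 128, a key that only exists on disk costs up to 128 probes
    proc get(frame: Frame, key: string): Option[string] =
      var cur = frame
      while cur != nil:
        if key in cur.layer:
          return some(cur.layer[key])
        cur = cur.parent
      none(string)  # missed every frame: fall through to the database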

More about the changes:

* `AristoDbRef` gains a `txRef` field (todo: rename) that "more or less"
corresponds to the old `balancer` field
* `AristoDbRef.stack` is gone - instead, there's a chain of
`AristoTxRef` objects that hold their respective "layer" which has the
actual changes
* No more reasoning about "top" and "stack" - instead, each
`AristoTxRef` can be a "head" that "more or less" corresponds to the old
single-history `top` notion and its stack
* `level` still represents "distance to base" - it's computed from the
parent chain instead of being stored (see the sketch after this list)
* one has to be careful not to use frames where forkedchain was intended
- layers are only for a single branch of history!
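
In the toy model, deriving `level` from the parent chain rather than
storing it looks like this:

    # Distance to the base frame, computed by walking the parent chain
    proc level(frame: Frame): int =
      var cur = frame
      while cur.parent != nil:
        inc result
        cur = cur.parent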

* fix layer vtop after rollback

* engine fix

* Fix test_txpool

* Fix test_rpc

* Fix copyright year

* fix simulator

* Fix copyright year

* Fix copyright year

* Fix tracer

* Fix infinite recursion bug

* Remove aristo and kvt empty files

* Fix copyright year

* Fix fc chain_kvt

* ForkedChain refactoring

* Fix merge master conflict

* Fix copyright year

* Reparent txFrame

* Fix test

* Fix txFrame reparent again

* Cleanup and fix test

* UpdateBase bugfix and fix test

* Fix newPayload bug discovered by hive

* Fix engine api fcu

* Clean up call template, chain_kvt, and txguid

* Fix copyright year

* work around base block loading issue

* Add test

* Fix updateHead bug

* Fix updateBase bug

* Change func commitBase to proc commitBase

* Touch up and fix debug mode crash

---------

Co-authored-by: jangko <jangko128@gmail.com>
2025-02-06 14:04:50 +07:00

260 lines
9.6 KiB
Nim

# nimbus-eth1
# Copyright (c) 2023-2025 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
# http://www.apache.org/licenses/LICENSE-2.0)
# * MIT license ([LICENSE-MIT](LICENSE-MIT) or
# http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.
## Aristo DB -- Patricia Trie builder, raw node insertion
## ======================================================
##
## This module merges `PathID` values as hexary lookup paths into the
## `Patricia Trie`. When changing vertices (aka nodes without Merkle hashes),
## associated (but separated) Merkle hashes will be deleted unless locked.
## Instead of deleting locked hashes, an error is returned.
##
## Also, nodes (vertices plus Merkle hashes) can be added, which is needed
## for boundary proofing after `snap/1` download. The vertices are split
## from the nodes and stored as-is on the table holding `Patricia Trie`
## entries. The hashes are stored in a separate table and the vertices are
## labelled `locked`.
{.push raises: [].}

import
  std/typetraits,
  eth/common/hashes,
  results,
  "."/[aristo_desc, aristo_fetch, aristo_get, aristo_layers, aristo_vid]

proc layersPutLeaf(
    db: AristoTxRef, rvid: RootedVertexID, path: NibblesBuf, payload: LeafPayload
): VertexRef =
  let vtx = VertexRef(vType: Leaf, pfx: path, lData: payload)
  db.layersPutVtx(rvid, vtx)
  vtx

proc mergePayloadImpl(
    db: AristoTxRef, # Database, top layer
    root: VertexID, # MPT state root
    path: Hash32, # Leaf item to add to the database
    leaf: Opt[VertexRef],
    payload: LeafPayload, # Payload value
): Result[(VertexRef, VertexRef, VertexRef), AristoError] =
  ## Merge the argument `(root,path)` key-value-pair into the top level vertex
  ## table of the database `db`. The `path` argument is used to address the
  ## leaf vertex with the payload. It is stored or updated on the database
  ## accordingly.
  ##
  var
    path = NibblesBuf.fromBytes(path.data)
    cur = root
    (vtx, _) = db.getVtxRc((root, cur)).valueOr:
      if error != GetVtxNotFound:
        return err(error)

      # We're at the root vertex and there is no data - this must be a fresh
      # VertexID!
      return ok (db.layersPutLeaf((root, cur), path, payload), nil, nil)
    vids: ArrayBuf[NibblesBuf.high + 1, VertexID]
    vtxs: ArrayBuf[NibblesBuf.high + 1, VertexRef]

  template resetKeys() =
    # Reset cached hashes of touched vertices
    for i in 2 .. vids.len:
      db.layersResKey((root, vids[^i]), vtxs[^i])

  while path.len > 0:
    # Clear existing merkle keys along the traversal path
    vids.add cur
    vtxs.add vtx

    let n = path.sharedPrefixLen(vtx.pfx)
    case vtx.vType
    of Leaf:
      let res =
        if n == vtx.pfx.len:
          # Same path - replace the current vertex with a new payload
          if vtx.lData == payload:
            return err(MergeNoAction)

          let leafVtx =
            if root == VertexID(1):
              var payload = payload.dup()
              # TODO can we avoid this hack? it feels like the caller should
              # already have set an appropriate stoID - this "fixup" feels
              # risky, especially from a caching point of view
              payload.stoID = vtx.lData.stoID
              db.layersPutLeaf((root, cur), path, payload)
            else:
              db.layersPutLeaf((root, cur), path, payload)
          (leafVtx, nil, nil)
        else:
          # Turn leaf into a branch (or extension) then insert the two leaves
          # into the branch
          let branch =
            VertexRef(vType: Branch, pfx: path.slice(0, n), startVid: db.vidFetch(16))
          let other = block: # Copy of existing leaf node, now one level deeper
            let local = branch.setUsed(vtx.pfx[n], true)
            db.layersPutLeaf((root, local), vtx.pfx.slice(n + 1), vtx.lData)

          let leafVtx = block: # Newly inserted leaf node
            let local = branch.setUsed(path[n], true)
            db.layersPutLeaf((root, local), path.slice(n + 1), payload)

          # Put the branch at the vid where the leaf was
          db.layersPutVtx((root, cur), branch)

          # We need to return vtx here because its pfx member hasn't yet been
          # sliced off and is therefore shared with the hike
          (leafVtx, vtx, other)

      resetKeys()
      return ok(res)
    of Branch:
      if vtx.pfx.len == n:
        # The existing branch is a prefix of the new entry
        let
          nibble = path[vtx.pfx.len]
          next = vtx.bVid(nibble)

        if next.isValid:
          cur = next
          path = path.slice(n + 1)
          vtx =
            if leaf.isSome and leaf[].isValid and leaf[].pfx == path:
              leaf[]
            else:
              (?db.getVtxRc((root, next)))[0]
        else:
          # There's no vertex at the branch point - insert the payload as a new
          # leaf and update the existing branch
          let brDup = vtx.dup()
          let local = brDup.setUsed(nibble, true)
          db.layersPutVtx((root, cur), brDup)
          let leafVtx = db.layersPutLeaf((root, local), path.slice(n + 1), payload)

          resetKeys()
          return ok((leafVtx, nil, nil))
      else:
        # Partial path match - we need to split the existing branch at
        # the point of divergence, inserting a new branch
        let branch =
          VertexRef(vType: Branch, pfx: path.slice(0, n), startVid: db.vidFetch(16))
        block: # Copy the existing vertex and add it to the new branch
          let local = branch.setUsed(vtx.pfx[n], true)
          db.layersPutVtx(
            (root, local),
            VertexRef(
              vType: Branch,
              pfx: vtx.pfx.slice(n + 1),
              startVid: vtx.startVid,
              used: vtx.used,
            ),
          )

        let leafVtx = block: # add the new entry
          let local = branch.setUsed(path[n], true)
          db.layersPutLeaf((root, local), path.slice(n + 1), payload)
        db.layersPutVtx((root, cur), branch)

        resetKeys()
        return ok((leafVtx, nil, nil))

  err(MergeHikeFailed)

# ------------------------------------------------------------------------------
# Public functions
# ------------------------------------------------------------------------------

proc mergeAccountRecord*(
    db: AristoTxRef; # Database, top layer
    accPath: Hash32; # Even nibbled byte path
    accRec: AristoAccount; # Account data
): Result[bool,AristoError] =
  ## Merge the key-value-pair argument `(accPath,accRec)` as an account
  ## ledger value, i.e. the sub-tree starting at `VertexID(1)`.
  ##
  ## On success, the function returns `true` if the account was not on the
  ## database already or differed from `accRec`, and `false` otherwise.
  ##
  let
    pyl = LeafPayload(pType: AccountData, account: accRec)
    updated = db.mergePayloadImpl(
      VertexID(1), accPath, db.cachedAccLeaf(accPath), pyl).valueOr:
        if error == MergeNoAction:
          return ok false
        return err(error)

  # Update leaf cache both of the merged value and potentially the displaced
  # leaf resulting from splitting a leaf into a branch with two leaves
  db.layersPutAccLeaf(accPath, updated[0])
  if updated[1].isValid:
    let otherPath = Hash32(getBytes(
      NibblesBuf.fromBytes(accPath.data).replaceSuffix(updated[1].pfx)))
    db.layersPutAccLeaf(otherPath, updated[2])

  ok true

proc mergeStorageData*(
    db: AristoTxRef; # Database, top layer
    accPath: Hash32; # Needed for accounts payload
    stoPath: Hash32; # Storage data path (aka key)
    stoData: UInt256; # Storage data payload value
): Result[void,AristoError] =
  ## Store the `stoData` data argument on the storage area addressed by
  ## `(accPath,stoPath)` where `accPath` is the account key (into the MPT)
  ## and `stoPath` is the slot path of the corresponding storage area.
  ##
  var accHike: Hike
  db.fetchAccountHike(accPath,accHike).isOkOr:
    return err(MergeStoAccMissing)

  let
    stoID = accHike.legs[^1].wp.vtx.lData.stoID

    # Provide new storage ID when needed
    useID =
      if stoID.isValid: stoID # Use as is
      elif stoID.vid.isValid: (true, stoID.vid) # Re-use previous vid
      else: (true, db.vidFetch()) # Create new vid

    mixPath = mixUp(accPath, stoPath)

    # Call merge
    pyl = LeafPayload(pType: StoData, stoData: stoData)
    updated = db.mergePayloadImpl(
      useID.vid, stoPath, db.cachedStoLeaf(mixPath), pyl).valueOr:
        if error == MergeNoAction:
          assert stoID.isValid # debugging only
          return ok()
        return err(error)

  # Mark account path Merkle keys for update
  db.layersResKeys(accHike)

  # Update leaf cache both of the merged value and potentially the displaced
  # leaf resulting from splitting a leaf into a branch with two leaves
  db.layersPutStoLeaf(mixPath, updated[0])
  if updated[1].isValid:
    let otherPath = Hash32(getBytes(
      NibblesBuf.fromBytes(stoPath.data).replaceSuffix(updated[1].pfx)))
    db.layersPutStoLeaf(mixUp(accPath, otherPath), updated[2])

  if not stoID.isValid:
    # Make sure that there is an account that refers to that storage trie
    let leaf = accHike.legs[^1].wp.vtx.dup # Dup on modify
    leaf.lData.stoID = useID
    db.layersPutAccLeaf(accPath, leaf)
    db.layersPutVtx((VertexID(1), accHike.legs[^1].wp.vid), leaf)

  ok()

# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------