# nimbus-eth1
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed under either of
#  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
#    http://www.apache.org/licenses/LICENSE-2.0)
#  * MIT license ([LICENSE-MIT](LICENSE-MIT) or
#    http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.

{.push raises: [].}

import
  std/[algorithm, sequtils, sets, strutils, tables],
  eth/common,
  results,
  stew/[byteutils, interval_set],
  ./aristo_desc/desc_backend,
  ./aristo_init/[memory_db, memory_only, rocks_db],
  "."/[aristo_desc, aristo_hike, aristo_layers]

# ------------------------------------------------------------------------------
# Private functions
# ------------------------------------------------------------------------------

proc orDefault(db: AristoDbRef): AristoDbRef =
  if db.isNil: AristoDbRef(top: LayerRef.init()) else: db

proc add(
    xMap: var Table[HashKey,HashSet[VertexID]];
    key: HashKey;
    vid: VertexID;
      ) =
  xMap.withValue(key,value):
    value[].incl vid
  do: # else if not found
    xMap[key] = @[vid].toHashSet

# --------------------------

proc toHex(w: VertexID): string =
  w.uint64.toHex

proc toHexLsb(w: int8): string =
  $"0123456789abcdef"[w and 15]

proc sortedKeys[T](tab: Table[VertexID,T]): seq[VertexID] =
  tab.keys.toSeq.sorted

proc sortedKeys(pPrf: HashSet[VertexID]): seq[VertexID] =
  pPrf.toSeq.sorted

proc toPfx(indent: int; offset = 0): string =
  if 0 < indent+offset: "\n" & " ".repeat(indent+offset) else: ""

proc squeeze(s: string; hex = false; ignLen = false): string =
  ## For long strings print `begin..end` only
  if hex:
    let n = (s.len + 1) div 2
    result = if s.len < 20: s else: s[0 .. 5] & ".." & s[s.len-8 .. ^1]
    if not ignLen:
      result &= "[" & (if 0 < n: "#" & $n else: "") & "]"
  elif s.len <= 30:
    result = s
  else:
    result = if (s.len and 1) == 0: s[0 ..< 8] else: "0" & s[0 ..< 7]
    if not ignLen:
      result &= "..(" & $s.len & ")"
    result &= ".." & s[s.len-16 .. ^1]
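
# Hand-traced illustration of `squeeze` (derived from the branches above;
# not part of the original module): a 32-character hex string keeps its
# first six and last eight characters plus a byte-count tag.
when isMainModule:
  doAssert "0123456789abcdef0123456789abcdef".squeeze(hex=true) ==
    "012345..89abcdef[#16]"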

proc stripZeros(a: string; toExp = false): string =
  if 0 < a.len:
    result = a.strip(leading=true, trailing=false, chars={'0'})
    if result.len == 0:
      result = "0"
    elif result[^1] == '0' and toExp:
      var n = 0
      while result[^1] == '0':
        let w = result.len
        result.setLen(w-1)
        n.inc
      if n == 1:
        result &= "0"
      elif n == 2:
        result &= "00"
      elif 2 < n:
        result &= "↑" & $n
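
# Hand-traced illustration of `stripZeros` (derived from the logic above;
# not part of the original module): leading zeros are stripped, and three
# or more trailing zeros collapse into an exponent-style tag.
when isMainModule:
  doAssert "001000".stripZeros(toExp=true) == "1↑3"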

proc vidCode(key: HashKey, db: AristoDbRef): uint64 =
  if key.isValid:
    block:
      let vid = db.layerGetProofVidOrVoid key
      if vid.isValid:
        db.xMap.add(key, vid)
        return vid.uint64
    block:
      let vids = db.xMap.getOrVoid key
      if vids.isValid:
        return vids.sortedKeys[0].uint64

# ---------------------

proc ppKeyOk(
    db: AristoDbRef;
    key: HashKey;
    vid: VertexID;
      ): string =
  if key.isValid and vid.isValid:
    block:
      let vid = db.layerGetProofVidOrVoid key
      if vid.isValid:
        db.xMap.add(key, vid)
        return
    block:
      let vids = db.xMap.getOrVoid key
      if vids.isValid:
        if vid notin vids:
          result = "(!)"
        return
    db.xMap.add(key,vid)

proc ppVid(vid: VertexID; pfx = true): string =
  if pfx:
    result = "$"
  if vid.isValid:
    result &= vid.toHex.stripZeros.toLowerAscii
  else:
    result &= "ø"

proc ppVids(vids: HashSet[VertexID]): string =
  result = "{"
  if vids.len == 0:
    result &= "}"
  else:
    for vid in vids.toSeq.sorted:
      result &= "$"
      if vid.isValid:
        result &= vid.toHex.stripZeros.toLowerAscii
      else:
        result &= "ø"
      result &= ","
    result[^1] = '}'

func ppCodeHash(h: Hash256): string =
  result = "¢"
  if h == Hash256():
    result &= "©"
  elif h == EMPTY_CODE_HASH:
    result &= "ø"
  else:
    result &= h.data.toHex.squeeze(hex=true,ignLen=true)

proc ppVidList(vLst: openArray[VertexID]): string =
  result = "["
  if vLst.len <= 250:
    result &= vLst.mapIt(it.ppVid).join(",")
  else:
    result &= vLst[0 .. 99].mapIt(it.ppVid).join(",")
    result &= ",.."
    result &= vLst[^100 .. ^1].mapIt(it.ppVid).join(",")
  result &= "]"

proc ppKey(key: HashKey; db: AristoDbRef; pfx = true): string =
  proc getVids(): tuple[vids: HashSet[VertexID], xMapTag: string] =
    block:
      let vid = db.layerGetProofVidOrVoid key
      if vid.isValid:
        db.xMap.add(key, vid)
        return (@[vid].toHashSet, "")
    block:
      let vids = db.xMap.getOrVoid key
      if vids.isValid:
        return (vids, "+")
  if pfx:
    result = "£"
  if key.to(Hash256) == Hash256():
    result &= "©"
  elif not key.isValid:
    result &= "ø"
  else:
    let
      tag = if key.len < 32: "[#" & $key.len & "]" else: ""
      (vids, xMapTag) = getVids()
    if vids.isValid:
      if not pfx and 0 < tag.len:
        result &= "$"
      if 1 < vids.len: result &= "{"
      result &= vids.sortedKeys.mapIt(it.ppVid(pfx=false) & xMapTag).join(",")
      if 1 < vids.len: result &= "}"
      result &= tag
      return
    result &= @(key.data).toHex.squeeze(hex=true,ignLen=true) & tag

proc ppLeafTie(lty: LeafTie, db: AristoDbRef): string =
  let pfx = lty.path.to(NibblesBuf)
  "@" & lty.root.ppVid(pfx=false) & ":" &
    ($pfx).squeeze(hex=true,ignLen=(pfx.len==64))

proc ppPathPfx(pfx: NibblesBuf): string =
  let s = $pfx
  if s.len < 20: s else: s[0 .. 5] & ".." & s[s.len-8 .. ^1] & ":" & $s.len

proc ppNibble(n: int8): string =
  if n < 0: "ø" elif n < 10: $n else: n.toHexLsb

proc ppPayload(p: PayloadRef, db: AristoDbRef): string =
  if p.isNil:
    result = "n/a"
  else:
    case p.pType:
    of RawData:
      result &= p.rawBlob.toHex.squeeze(hex=true)
    of AccountData:
      result = "("
      result &= ($p.account.nonce).stripZeros(toExp=true) & ","
      result &= ($p.account.balance).stripZeros(toExp=true) & ","
      result &= p.account.storageID.ppVid & ","
      result &= p.account.codeHash.ppCodeHash & ")"

proc ppVtx(nd: VertexRef, db: AristoDbRef, vid: VertexID): string =
  if not nd.isValid:
    result = "ø"
  else:
    if not vid.isValid or vid in db.pPrf:
      result = ["L(", "X(", "B("][nd.vType.ord]
    elif db.layersGetKey(vid).isOk:
      result = ["l(", "x(", "b("][nd.vType.ord]
    else:
      result = ["ł(", "€(", "þ("][nd.vType.ord]
    case nd.vType:
    of Leaf:
      result &= nd.lPfx.ppPathPfx & "," & nd.lData.ppPayload(db)
    of Extension:
      result &= nd.ePfx.ppPathPfx & "," & nd.eVid.ppVid
    of Branch:
      for n in 0..15:
        if nd.bVid[n].isValid:
          result &= nd.bVid[n].ppVid
        if n < 15:
          result &= ","
    result &= ")"

proc ppSTab(
    sTab: Table[VertexID,VertexRef];
    db: AristoDbRef;
    indent = 4;
      ): string =
  "{" & sTab.sortedKeys
    .mapIt((it, sTab.getOrVoid it))
    .mapIt("(" & it[0].ppVid & "," & it[1].ppVtx(db,it[0]) & ")")
    .join(indent.toPfx(1)) & "}"

proc ppPPrf(pPrf: HashSet[VertexID]): string =
  result = "{"
  if 0 < pPrf.len:
    let isr = IntervalSetRef[VertexID,uint64].init()
    for w in pPrf:
      doAssert isr.merge(w,w) == 1
    for iv in isr.increasing():
      result &= iv.minPt.ppVid
      if 1 < iv.len:
        result &= ".. " & iv.maxPt.ppVid
      result &= ", "
    result.setLen(result.len - 2)
    #result &= pPrf.sortedKeys.mapIt(it.ppVid).join(",")
  result &= "}"

proc ppXMap*(
    db: AristoDbRef;
    kMap: Table[VertexID,HashKey];
    indent: int;
      ): string =

  let pfx = indent.toPfx(1)

  var
    multi: HashSet[VertexID]
    oops: HashSet[VertexID]
  block:
    var vids: HashSet[VertexID]
    for w in db.xMap.values:
      for v in w:
        if v in vids:
          oops.incl v
        else:
          vids.incl v
      if 1 < w.len:
        multi = multi + w

  # Vertex IDs without forward mapping `kMap: VertexID -> HashKey`
  var revOnly: Table[VertexID,HashKey]
  for (key,vids) in db.xMap.pairs:
    for vid in vids:
      if not kMap.hasKey vid:
        revOnly[vid] = key

  let revKeys = revOnly.keys.toSeq.sorted

  proc ppNtry(n: uint64): string =
    var s = VertexID(n).ppVid
    let key = kMap.getOrVoid VertexID(n)
    if key.isValid:
      let vids = db.xMap.getOrVoid key
      if VertexID(n) notin vids or 1 < vids.len:
        s = "(" & s & "," & key.ppKey(db)
      elif key.len < 32:
        s &= "[#" & $key.len & "]"
    else:
      s &= "£ø"
    if s[0] == '(':
      s &= ")"
    s & ","

  result = "{"

  # Extra reverse lookups
  if 0 < revKeys.len:
    proc ppRevKey(vid: VertexID): string =
      "(ø," & revOnly.getOrVoid(vid).ppKey(db) & ")"
    var (i, r) = (0, revKeys[0])
    result &= revKeys[0].ppRevKey
    for n in 1 ..< revKeys.len:
      let vid = revKeys[n]
      r.inc
      if r != vid:
        if i+1 != n:
          if i+1 == n-1:
            result &= pfx
          else:
            result &= ".. "
          result &= revKeys[n-1].ppRevKey
        result &= pfx & vid.ppRevKey
        (i, r) = (n, vid)
    if i < revKeys.len - 1:
      if i+1 != revKeys.len - 1:
        result &= ".. "
      else:
        result &= pfx
      result &= revKeys[^1].ppRevKey

  # Forward lookups
  var cache: seq[(uint64,uint64,bool)]
  for vid in kMap.sortedKeys:
    let key = kMap.getOrVoid vid
    if key.isValid:
      cache.add (vid.uint64, key.vidCode(db), vid in multi)
      let vids = db.xMap.getOrVoid key
      if (0 < vids.len and vid notin vids) or key.len < 32:
        cache[^1][2] = true
    else:
      cache.add (vid.uint64, 0u64, true)

  if 0 < cache.len:
    var (i, r) = (0, cache[0])
    if 0 < revKeys.len:
      result &= pfx
    result &= cache[i][0].ppNtry
    for n in 1 ..< cache.len:
      let
        m = cache[n-1]
        w = cache[n]
      r = (r[0]+1, r[1]+1, r[2])
      if r != w or w[2]:
        if i+1 != n:
          if i+1 == n-1:
            result &= pfx
          else:
            result &= ".. "
          result &= m[0].ppNtry
        result &= pfx & w[0].ppNtry
        (i, r) = (n, w)
    if i < cache.len - 1:
      if i+1 != cache.len - 1:
        result &= ".. "
      else:
        result &= pfx
      result &= cache[^1][0].ppNtry
    result[^1] = '}'
  else:
    result &= "}"

proc ppFRpp(
    fRpp: Table[HashKey,VertexID];
    db: AristoDbRef;
    indent = 4;
      ): string =
  let
    xMap = fRpp.pairs.toSeq.mapIt((it[1],it[0])).toTable
    xStr = db.ppXMap(xMap, indent)
  "<" & xStr[1..^2] & ">"

proc ppFilter(
    fl: LayerDeltaRef;
    db: AristoDbRef;
    indent: int;
      ): string =
  ## Walk over filter tables
  let
    pfx = indent.toPfx
    pfx1 = indent.toPfx(1)
    pfx2 = indent.toPfx(2)
  result = "<filter>"
  if fl.isNil:
    result &= " n/a"
    return
  result &= pfx & "src=" & fl.src.ppKey(db)
  result &= pfx & "vTop=" & fl.vTop.ppVid
  result &= pfx & "sTab" & pfx1 & "{"
  for n,vid in fl.sTab.sortedKeys:
    let vtx = fl.sTab.getOrVoid vid
    if 0 < n: result &= pfx2
    result &= $(1+n) & "(" & vid.ppVid & "," & vtx.ppVtx(db,vid) & ")"
  result &= "}" & pfx & "kMap" & pfx1 & "{"
  for n,vid in fl.kMap.sortedKeys:
    let key = fl.kMap.getOrVoid vid
    if 0 < n: result &= pfx2
    result &= $(1+n) & "(" & vid.ppVid & "," & key.ppKey(db) & ")"
  result &= "}"

proc ppBe[T](be: T; db: AristoDbRef; limit: int; indent: int): string =
  ## Walk over backend tables
  let
    pfx = indent.toPfx
    pfx1 = indent.toPfx(1)
    pfx2 = indent.toPfx(2)
  result = "<" & $be.kind & ">"
  var (dump,dataOk) = ("",false)
  block:
    let rc = be.getTuvFn()
    if rc.isOk:
      dump &= pfx & "vTop=" & rc.value.ppVid
      dataOk = true
  block:
    dump &= pfx & "sTab"
    var (n, data) = (0, "")
    for (vid,vtx) in be.walkVtx:
      n.inc
      if n < limit:
        if 1 < n: data &= pfx2
        data &= $n & "(" & vid.ppVid & "," & vtx.ppVtx(db,vid) & ")"
      elif n == limit:
        data &= pfx2 & ".."
    dump &= "(" & $n & ")"
    if 0 < n:
      dataOk = true
      dump &= pfx1
    dump &= "{" & data & "}"
  block:
    dump &= pfx & "kMap"
    var (n, data) = (0, "")
    for (vid,key) in be.walkKey:
      n.inc
      if n < limit:
        if 1 < n: data &= pfx2
        data &= $n & "(" & vid.ppVid & "," & key.ppKey(db) & ")"
      elif n == limit:
        data &= pfx2 & ".."
    dump &= "(" & $n & ")"
    if 0 < n:
      dataOk = true
      dump &= pfx1
    dump &= "{" & data & "}"
  if dataOk:
    result &= dump
  else:
    result &= "[]"

proc ppLayer(
    layer: LayerRef;
    db: AristoDbRef;
    vTopOk: bool;
    sTabOk: bool;
    kMapOk: bool;
    pPrfOk: bool;
    fRppOk: bool;
    indent = 4;
      ): string =
  let
    pfx1 = indent.toPfx(1)
    pfx2 = indent.toPfx(2)
    nOKs = vTopOk.ord + sTabOk.ord + kMapOk.ord + pPrfOk.ord + fRppOk.ord
    tagOk = 1 < nOKs
  var
    pfy = ""

  proc doPrefix(s: string; dataOk: bool): string =
    var rc: string
    if tagOk:
      rc = pfy
      if 0 < s.len:
        rc &= s & (if dataOk: pfx2 else: "")
      pfy = pfx1
    else:
      rc = pfy
      pfy = pfx2
    rc

  if not layer.isNil:
    if 2 < nOKs:
      result &= "<layer>".doPrefix(false)
    if vTopOk:
      result &= "".doPrefix(true) & "vTop=" & layer.delta.vTop.ppVid
    if sTabOk:
      let
        tLen = layer.delta.sTab.len
        info = "sTab(" & $tLen & ")"
      result &= info.doPrefix(0 < tLen) & layer.delta.sTab.ppSTab(db,indent+2)
    if kMapOk:
      let
        tLen = layer.delta.kMap.len
        uLen = db.xMap.len
        lInf = if tLen == uLen: $tLen else: $tLen & "," & $uLen
        info = "kMap(" & lInf & ")"
      result &= info.doPrefix(0 < tLen + uLen)
      result &= db.ppXMap(layer.delta.kMap, indent+2)
    if pPrfOk:
      let
        tLen = layer.final.pPrf.len
        info = "pPrf(" & $tLen & ")"
      result &= info.doPrefix(0 < tLen) & layer.final.pPrf.ppPPrf
    if fRppOk:
      let
        tLen = layer.final.fRpp.len
        info = "fRpp(" & $tLen & ")"
      result &= info.doPrefix(0 < tLen) & layer.final.fRpp.ppFRpp(db,indent+2)
    if 0 < nOKs:
      let
        info = if layer.final.dirty.len == 0: "clean"
               else: "dirty" & layer.final.dirty.ppVids
      result &= info.doPrefix(false)

# ------------------------------------------------------------------------------
# Public functions
# ------------------------------------------------------------------------------

proc pp*(w: Hash256; codeHashOk = false): string =
  if codeHashOk:
    w.ppCodeHash
  elif w == EMPTY_ROOT_HASH:
    "EMPTY_ROOT_HASH"
  elif w == Hash256():
    "Hash256()"
  else:
    w.data.toHex.squeeze(hex=true,ignLen=true)

proc pp*(w: HashKey; sig: MerkleSignRef): string =
  w.ppKey(sig.db)

proc pp*(w: HashKey; db = AristoDbRef(nil)): string =
  w.ppKey(db.orDefault)

proc pp*(w: openArray[HashKey]; db = AristoDbRef(nil)): string =
  "[" & @w.mapIt(it.ppKey(db.orDefault)).join(",") & "]"

proc pp*(lty: LeafTie, db = AristoDbRef(nil)): string =
  lty.ppLeafTie(db.orDefault)

proc pp*(vid: VertexID): string =
  vid.ppVid

proc pp*(vLst: openArray[VertexID]): string =
  vLst.ppVidList

proc pp*(p: PayloadRef, db = AristoDbRef(nil)): string =
  p.ppPayload(db.orDefault)

proc pp*(nd: VertexRef, db = AristoDbRef(nil)): string =
  nd.ppVtx(db.orDefault, VertexID(0))

proc pp*(nd: NodeRef; db: AristoDbRef): string =
|
2023-06-12 13:48:47 +00:00
|
|
|
if not nd.isValid:
|
2023-05-11 14:25:29 +00:00
|
|
|
result = "n/a"
|
2023-06-09 11:17:37 +00:00
|
|
|
elif nd.error != AristoError(0):
|
2023-05-11 14:25:29 +00:00
|
|
|
result = "(!" & $nd.error
|
|
|
|
else:
|
|
|
|
result = ["L(", "X(", "B("][nd.vType.ord]
|
|
|
|
case nd.vType:
|
|
|
|
of Leaf:
|
2023-05-30 11:47:47 +00:00
|
|
|
result &= $nd.lPfx.ppPathPfx & "," & nd.lData.pp(db)
|
2023-05-11 14:25:29 +00:00
|
|
|
|
|
|
|
of Extension:
|
2023-05-30 21:21:15 +00:00
|
|
|
result &= $nd.ePfx.ppPathPfx & "," & nd.eVid.ppVid & ","
|
2024-02-14 19:11:59 +00:00
|
|
|
result &= nd.key[0].ppKey(db)
|
|
|
|
result &= db.ppKeyOk(nd.key[0], nd.eVid)
|
2023-05-11 14:25:29 +00:00
|
|
|
|
|
|
|
of Branch:
|
|
|
|
result &= "["
|
|
|
|
for n in 0..15:
|
2023-06-12 13:48:47 +00:00
|
|
|
if nd.bVid[n].isValid or nd.key[n].isValid:
|
2023-05-14 17:43:01 +00:00
|
|
|
result &= nd.bVid[n].ppVid
|
2024-02-14 19:11:59 +00:00
|
|
|
result &= db.ppKeyOk(nd.key[n], nd.bVid[n]) & ","
|
2023-05-11 14:25:29 +00:00
|
|
|
result[^1] = ']'
|
|
|
|
|
|
|
|
result &= ",["
|
|
|
|
for n in 0..15:
|
2023-06-12 13:48:47 +00:00
|
|
|
if nd.bVid[n].isValid or nd.key[n].isValid:
|
2024-02-14 19:11:59 +00:00
|
|
|
result &= nd.key[n].ppKey(db)
|
2023-05-11 14:25:29 +00:00
|
|
|
result &= ","
|
|
|
|
result[^1] = ']'
|
|
|
|
result &= ")"

proc pp*[T](rc: Result[T,(VertexID,AristoError)]): string =
  if rc.isOk:
    result = "ok("
    when T isnot void:
      result &= ".."
    result &= ")"
  else:
    result = "err((" & rc.error[0].pp & "," & $rc.error[1] & "))"

proc pp*(nd: NodeRef): string =
  nd.pp(AristoDbRef(nil).orDefault)

proc pp*(
    sTab: Table[VertexID,VertexRef];
    db = AristoDbRef(nil);
    indent = 4;
      ): string =
  sTab.ppSTab(db.orDefault)

proc pp*(pPrf: HashSet[VertexID]): string =
  pPrf.ppPPrf

proc pp*(leg: Leg; db = AristoDbRef(nil)): string =
  let db = db.orDefault()
  result = "(" & leg.wp.vid.ppVid & ","
  block:
    let key = db.layersGetKeyOrVoid leg.wp.vid
    if not key.isValid:
      result &= "ø"
    elif leg.wp.vid notin db.xMap.getOrVoid key:
      result &= key.ppKey(db)
  result &= ","
  if 0 <= leg.nibble:
    result &= $leg.nibble.ppNibble
  result &= "," & leg.wp.vtx.pp(db) & ")"

proc pp*(hike: Hike; db = AristoDbRef(nil); indent = 4): string =
  let
    db = db.orDefault()
    pfx = indent.toPfx(1)
  result = "["
  if hike.legs.len == 0:
    result &= "(" & hike.root.ppVid & ")"
  else:
    if hike.legs[0].wp.vid != hike.root:
      result &= "(" & hike.root.ppVid & ")" & pfx
    result &= hike.legs.mapIt(it.pp(db)).join(pfx)
    result &= pfx & "(" & hike.tail.ppPathPfx & ")"
  result &= "]"

proc pp*(kMap: Table[VertexID,HashKey]; indent = 4): string =
  let db = AristoDbRef(nil).orDefault
  "{" & kMap.sortedKeys
    .mapIt((it, kMap.getOrVoid it))
    .mapIt("(" & it[0].ppVid & "," & it[1].ppKey(db) & ")")
    .join("," & indent.toPfx(1)) & "}"

proc pp*(kMap: Table[VertexID,HashKey]; db: AristoDbRef; indent = 4): string =
  db.ppXMap(kMap, indent)

# ---------------------

proc pp*(tx: AristoTxRef): string =
  result = "(uid=" & $tx.txUid & ",level=" & $tx.level
  if not tx.parent.isNil:
    result &= ", par=" & $tx.parent.txUid
  result &= ")"

proc pp*(wp: VidVtxPair; db: AristoDbRef): string =
  "(" & wp.vid.pp & "," & wp.vtx.pp(db) & ")"

proc pp*(
    layer: LayerRef;
    db: AristoDbRef;
    indent = 4;
      ): string =
  layer.ppLayer(
    db, vTopOk=true, sTabOk=true, kMapOk=true, pPrfOk=true, fRppOk=true)

proc pp*(
    layer: LayerRef;
    db: AristoDbRef;
    xTabOk: bool;
    indent = 4;
      ): string =
  layer.ppLayer(
    db, vTopOk=true, sTabOk=xTabOk, kMapOk=true, pPrfOk=true, fRppOk=true)

proc pp*(
    layer: LayerRef;
    db: AristoDbRef;
    xTabOk: bool;
    kMapOk: bool;
    other = false;
    indent = 4;
      ): string =
  layer.ppLayer(
    db, vTopOk=other, sTabOk=xTabOk, kMapOk=kMapOk, pPrfOk=other, fRppOk=other)

proc pp*(
    db: AristoDbRef;
    xTabOk: bool;
    indent = 4;
      ): string =
  db.layersCc.pp(db, xTabOk=xTabOk, indent=indent)

proc pp*(
    db: AristoDbRef;
    xTabOk: bool;
    kMapOk: bool;
    other = false;
    indent = 4;
      ): string =
  db.layersCc.pp(db, xTabOk=xTabOk, kMapOk=kMapOk, other=other, indent=indent)

proc pp*(
    filter: LayerDeltaRef;
    db = AristoDbRef(nil);
    indent = 4;
      ): string =
  filter.ppFilter(db.orDefault(), indent)

proc pp*(
    be: BackendRef;
    db: AristoDbRef;
    limit = 100;
    indent = 4;
      ): string =
  result = db.balancer.ppFilter(db, indent+1) & indent.toPfx
  case be.kind:
  of BackendMemory:
    result &= be.MemBackendRef.ppBe(db, limit, indent+1)
  of BackendRocksDB, BackendRdbHosting:
    result &= be.RdbBackendRef.ppBe(db, limit, indent+1)
  of BackendVoid:
    result &= "<NoBackend>"

proc pp*(
    db: AristoDbRef;
    indent = 4;
    backendOk = false;
    balancerOk = true;
    topOk = true;
    stackOk = true;
    kMapOk = true;
    limit = 100;
      ): string =
  ## Pretty print the top layer and, optionally, the transaction stack
  ## summary, the balancer filter, and the backend contents.
  if topOk:
    result = db.layersCc.pp(
      db, xTabOk=true, kMapOk=kMapOk, other=true, indent=indent)
  let stackOnlyOk = stackOk and not (topOk or balancerOk or backendOk)
  if not stackOnlyOk:
    result &= indent.toPfx & " level=" & $db.stack.len
  if (stackOk and 0 < db.stack.len) or stackOnlyOk:
    let layers = @[db.top] & db.stack.reversed
    var lStr = ""
    for n,w in layers:
      let
        m = layers.len - n - 1
        l = db.layersCc m
        a = w.delta.kMap.values.toSeq.filterIt(not it.isValid).len
        c = l.delta.kMap.values.toSeq.filterIt(not it.isValid).len
      result &= "(" & $(w.delta.kMap.len - a) & "," & $a & ")"
      lStr &= " " & $m & "=(" & $(l.delta.kMap.len - c) & "," & $c & ")"
    result &= " =>" & lStr
  if backendOk:
    result &= indent.toPfx & db.backend.pp(db, limit=limit, indent=indent)
  elif balancerOk:
    result &= indent.toPfx & db.balancer.ppFilter(db, indent+1)

proc pp*(sdb: MerkleSignRef; indent = 4): string =
  "count=" & $sdb.count &
    " root=" & sdb.root.pp &
    " error=" & $sdb.error &
    "\n db\n " & sdb.db.pp(indent=indent+1)
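
# Usage sketch for the overloads above (informal; assumes some initialised
# `AristoDbRef` named `db` and a pending `AristoTxRef` named `tx` -- both
# placeholders, not provided by this module):
#
#   echo db.pp()                          # top layer, stack summary, balancer
#   echo db.pp(backendOk=true, limit=10)  # additionally dump backend records
#   echo tx.pp                            # "(uid=<n>,level=<n>[, par=<n>])"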

# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------
|