Aristo avoid storage trie update race conditions (#2251)

* Update TDD suite logger output format choices

why:
  The new format is not practical for TDD as it just dumps data across a
  wide range (considerably larger than 80 columns.)

  So the new format can now be enabled via a function argument.

* Update unit tests samples configuration

why:
  Slightly changed the way to find the `era1` directory

* Remove compiler warnings (fix deprecated expressions and phrases)

* Update `Aristo` debugging tools

* Always update the `storageID` field of account leaf vertices

why:
  Storage tries are weakly linked to an account leaf object in that
  the `storageID` field is updated by the application.

  Previously, `Aristo` verified that leaf objects make sense when passed
  to the database. As a consequence
  * the database was inconsistent for a short while
  * the burden of correctness was entirely on the application, which led
    to delayed error handling that is hard to debug.

  So `Aristo` will internally update the account leaf objects so that
  there are no race conditions due to the storage trie handling.
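The invariant can be modeled in a few lines. This is a sketch in Python for illustration only; `DB`, `merge_storage` and `del_subtree` are hypothetical stand-ins, not the Aristo API:

```python
# Toy model: the database itself keeps the account leaf's storage ID
# consistent with the storage trie, instead of trusting the application.

class AccountLeaf:
    def __init__(self):
        self.storage_id = 0          # 0 plays the role of VertexID(0): no trie

class DB:
    def __init__(self):
        self.accounts = {}           # account path -> AccountLeaf
        self.tries = {}              # storage trie id -> {slot: value}

    def merge_storage(self, acc_path, trie_id, slot, value):
        # Storing the first slot links the trie to the account leaf
        # internally -- the application never touches storage_id.
        leaf = self.accounts.setdefault(acc_path, AccountLeaf())
        self.tries.setdefault(trie_id, {})[slot] = value
        if leaf.storage_id == 0:
            leaf.storage_id = trie_id

    def del_subtree(self, acc_path, trie_id):
        # Deleting the trie resets the leaf's storage_id in the same step,
        # so there is no window where the leaf points at a dead trie.
        self.tries.pop(trie_id, None)
        self.accounts[acc_path].storage_id = 0

db = DB()
db.merge_storage("acc1", 7, slot=1, value=42)
assert db.accounts["acc1"].storage_id == 7
db.del_subtree("acc1", 7)
assert db.accounts["acc1"].storage_id == 0   # no dangling reference
```

Because both sides of the link change inside one database operation, the application can no longer observe a leaf that points at a vanished trie.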

* Aristo: Let `stow()`/`persist()` bail out unless there is a `VertexID(1)`

why:
  The journal and filter logic depends on the hash of `VertexID(1)`,
  which is commonly known as the state root. This implies that all
  changes to the database are somehow related to that root.
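A minimal sketch of this guard, with illustrative Python names rather than the actual Nim symbols:

```python
# Refuse to persist a non-empty change set that lacks the state root
# vertex (VertexID(1)), since journal/filter logic hangs off its hash.

TX_ACC_ROOT_MISSING = "TxAccRootMissing"

def stow(vertex_table):
    # An empty change set is fine; a non-empty one must include vertex 1.
    if vertex_table and 1 not in vertex_table:
        return TX_ACC_ROOT_MISSING
    return "ok"

assert stow({}) == "ok"
assert stow({2: "leaf"}) == TX_ACC_ROOT_MISSING
assert stow({1: "root", 2: "leaf"}) == "ok"
```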

* Make sure that a `Ledger` account does not overwrite the storage trie reference

why:
  Due to the abstraction of a sub-trie (now referred to as a column with a
  hash describing its state) there was a weakness in the `Aristo` handler
  where an account leaf could be overwritten, thereby compromising the
  validity of the database. This has been changed and the database will
  now reject such changes.

  This patch fixes the behaviour on the application layer. In particular,
  the column handle returned by the `CoreDb` needs to be updated by
  the `Aristo` database state. This mitigates the problem that a storage
  trie might have vanished or re-appeared with a different vertex ID.
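The mitigation amounts to always re-deriving the column handle from current database state rather than caching it; a toy Python model (all names are hypothetical, not the `CoreDb` API):

```python
# Re-read the storage root from the account record on every access, so a
# trie that vanished or was re-created under a new vertex ID is noticed.

def column_handle(db, account):
    root = db["accounts"][account]["storage_id"]
    return None if root == 0 else root

db = {"accounts": {"acc1": {"storage_id": 5}}}
assert column_handle(db, "acc1") == 5

db["accounts"]["acc1"]["storage_id"] = 0     # storage trie vanished
assert column_handle(db, "acc1") is None     # stale handle cannot survive
```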

* Fix sub-trie deletion test

why:
  Was originally hinged on `VertexID(1)` which cannot be wholesale
  deleted anymore after the last Aristo update. Also, running with
  `VertexID(2)` needs an artificial `VertexID(1)` for making `stow()`
  or `persist()` work.

* Cosmetics

* Activate `test_generalstate_json`

* Temporarily deactivate `test_tracer_json`

* Fix copyright header

---------

Co-authored-by: jordan <jordan@dry.pudding>
Co-authored-by: Jacek Sieka <jacek@status.im>
This commit is contained in:
Jordan Hrycaj 2024-05-30 18:48:38 +01:00, committed by GitHub
parent d814d84b9b
commit 0f430c70fd
24 changed files with 344 additions and 145 deletions


@@ -34,8 +34,6 @@ type
   GenesisRootHashFn = proc: Hash256 {.noRaise.}
-  GenesisGetTrieFn = proc: CoreDbMptRef {.noRaise.}
-
   GenesisLedgerRef* = ref object
     ## Exportable ledger DB just for initialising Genesis.
     ##
@@ -43,7 +41,6 @@ type
     setStorage: GenesisSetStorageFn
     commit: GenesisCommitFn
     rootHash: GenesisRootHashFn
-    getTrie: GenesisGetTrieFn

 # ------------------------------------------------------------------------------
 # Private functions
@@ -77,10 +74,7 @@ proc initAccountsLedgerRef(
       ac.persist(),
     rootHash: proc(): Hash256 =
-      ac.state(),
-    getTrie: proc(): CoreDbMptRef =
-      ac.getMpt())
+      ac.state())

 # ------------------------------------------------------------------------------
 # Public functions
@@ -91,10 +85,6 @@ proc newStateDB*(
       ): GenesisLedgerRef =
   db.initAccountsLedgerRef()

-proc getTrie*(sdb: GenesisLedgerRef): CoreDbMptRef =
-  ## Getter, used in `test_state_network`
-  sdb.getTrie()
-
 proc toGenesisHeader*(
     g: Genesis;
     sdb: GenesisLedgerRef;

nimbus/db/aristo/TODO.md (new file, 9 lines)

@ -0,0 +1,9 @@
* Check whether `HashKey` can be reduced to a simple 32 byte array (see
*desc_identifiers.nim*)
* Remove the `RlpData` accounts payload type. It is not needed as a separate
data type. An application must know the layout. So it can be subsumed
under `RawData` (which could be renamed `PlainData`.)
* Currently, the data save/store logic only works when there is a `VertexID(1)`
  root. In tests without a `VertexID(1)` a dummy node is set up.


@@ -172,7 +172,7 @@ proc blobify*(filter: FilterRef; data: var Blob): Result[void,AristoError] =
   ##
   func blobify(lid: HashKey): Blob =
     let n = lid.len
-    if n < 32: @[n.byte] & @lid & 0u8.repeat(31 - n) else: @lid
+    if n < 32: @[n.byte] & @(lid.data) & 0u8.repeat(31 - n) else: @(lid.data)

   if not filter.isValid:
     return err(BlobifyNilFilter)


@@ -92,6 +92,10 @@ proc checkTopCommon*(
   let
     kMapCount = db.layersWalkKey.toSeq.mapIt(it[1]).filterIt(it.isValid).len
     kMapNilCount = db.layersWalkKey.toSeq.len - kMapCount
+    vGen = db.vGen.toHashSet
+    vGenMax = if vGen.len == 0: VertexID(0) else: db.vGen[^1]
+  var
+    stoRoots: HashSet[VertexID]

   # Collect leafs and check deleted entries
   var nNilVtx = 0
@@ -99,7 +103,14 @@ proc checkTopCommon*(
     if vtx.isValid:
       case vtx.vType:
       of Leaf:
-        discard
+        if vtx.lData.pType == AccountData:
+          let stoVid = vtx.lData.account.storageID
+          if stoVid.isValid:
+            if stoVid in stoRoots:
+              return err((stoVid,CheckAnyVidSharedStorageRoot))
+            if vGenMax.isValid and (vGenMax < stoVid or stoVid in vGen):
+              return err((stoVid,CheckAnyVidDeadStorageRoot))
+            stoRoots.incl stoVid
       of Branch:
         block check42Links:
           var seen = false


@@ -130,12 +130,17 @@ proc ppVid(vid: VertexID; pfx = true): string =
 proc ppVids(vids: HashSet[VertexID]): string =
   result = "{"
-  for vid in vids.toSeq.sorted:
-    result = "$"
-    if vid.isValid:
-      result &= vid.toHex.stripZeros.toLowerAscii
-    else:
-      result &= "ø"
+  if vids.len == 0:
+    result &= "}"
+  else:
+    for vid in vids.toSeq.sorted:
+      result &= "$"
+      if vid.isValid:
+        result &= vid.toHex.stripZeros.toLowerAscii
+      else:
+        result &= "ø"
+      result &= ","
+    result[^1] = '}'

 func ppCodeHash(h: Hash256): string =
   result = "¢"
@@ -173,7 +178,14 @@ proc ppQid(qid: QueueID): string =
     result &= qid.toHex.stripZeros

 proc ppVidList(vGen: openArray[VertexID]): string =
-  "[" & vGen.mapIt(it.ppVid).join(",") & "]"
+  result = "["
+  if vGen.len <= 250:
+    result &= vGen.mapIt(it.ppVid).join(",")
+  else:
+    result &= vGen[0 .. 99].mapIt(it.ppVid).join(",")
+    result &= ",.."
+    result &= vGen[^100 .. ^1].mapIt(it.ppVid).join(",")
+  result &= "]"

 proc ppKey(key: HashKey; db: AristoDbRef; pfx = true): string =
   proc getVids(): tuple[vids: HashSet[VertexID], xMapTag: string] =
@@ -204,7 +216,7 @@ proc ppKey(key: HashKey; db: AristoDbRef; pfx = true): string =
     if 1 < vids.len: result &= "}"
     result &= tag
     return
-  result &= @key.toHex.squeeze(hex=true,ignLen=true) & tag
+  result &= @(key.data).toHex.squeeze(hex=true,ignLen=true) & tag

 proc ppLeafTie(lty: LeafTie, db: AristoDbRef): string =
   let pfx = lty.path.to(NibblesSeq)
@@ -435,7 +447,7 @@ proc ppFilter(
     result &= $(1+n) & "(" & vid.ppVid & "," & key.ppKey(db) & ")"
   result &= "}"

-proc ppBe[T](be: T; db: AristoDbRef; indent: int): string =
+proc ppBe[T](be: T; db: AristoDbRef; limit: int; indent: int): string =
   ## Walk over backend tables
   let
     pfx = indent.toPfx
@@ -445,19 +457,21 @@ proc ppBe[T](be: T; db: AristoDbRef; limit: int; indent: int): string =
   var (dump,dataOk) = ("",false)
   dump &= pfx & "vGen"
   block:
-    let q = be.getIdgFn().get(otherwise = EmptyVidSeq).mapIt(it.ppVid)
+    let q = be.getIdgFn().get(otherwise = EmptyVidSeq)
     dump &= "(" & $q.len & ")"
     if 0 < q.len:
       dataOk = true
-      dump &= pfx1
-      dump &= "[" & q.join(",") & "]"
+      dump &= pfx1 & q.ppVidList()
   block:
     dump &= pfx & "sTab"
     var (n, data) = (0, "")
     for (vid,vtx) in be.walkVtx:
-      if 0 < n: data &= pfx2
       n.inc
-      data &= $n & "(" & vid.ppVid & "," & vtx.ppVtx(db,vid) & ")"
+      if n < limit:
+        if 1 < n: data &= pfx2
+        data &= $n & "(" & vid.ppVid & "," & vtx.ppVtx(db,vid) & ")"
+      elif n == limit:
+        data &= pfx2 & ".."
     dump &= "(" & $n & ")"
     if 0 < n:
       dataOk = true
@@ -467,9 +481,12 @@ proc ppBe[T](be: T; db: AristoDbRef; limit: int; indent: int): string =
     dump &= pfx & "kMap"
     var (n, data) = (0, "")
     for (vid,key) in be.walkKey:
-      if 0 < n: data &= pfx2
       n.inc
-      data &= $n & "(" & vid.ppVid & "," & key.ppKey(db) & ")"
+      if n < limit:
+        if 1 < n: data &= pfx2
+        data &= $n & "(" & vid.ppVid & "," & key.ppKey(db) & ")"
+      elif n == limit:
+        data &= pfx2 & ".."
     dump &= "(" & $n & ")"
     if 0 < n:
       dataOk = true
@@ -542,7 +559,7 @@ proc ppLayer(
   if 0 < nOKs:
     let
       info = if layer.final.dirty.len == 0: "clean"
-             else: "dirty{" & layer.final.dirty.ppVids & "}"
+             else: "dirty" & layer.final.dirty.ppVids
     result &= info.doPrefix(false)

 # ------------------------------------------------------------------------------
@@ -757,14 +774,15 @@ proc pp*(
 proc pp*(
     be: BackendRef;
     db: AristoDbRef;
+    limit = 100;
     indent = 4;
       ): string =
   result = db.roFilter.ppFilter(db, indent+1) & indent.toPfx
   case be.kind:
   of BackendMemory:
-    result &= be.MemBackendRef.ppBe(db, indent+1)
+    result &= be.MemBackendRef.ppBe(db, limit, indent+1)
   of BackendRocksDB:
-    result &= be.RdbBackendRef.ppBe(db, indent+1)
+    result &= be.RdbBackendRef.ppBe(db, limit, indent+1)
   of BackendVoid:
     result &= "<NoBackend>"
@@ -776,6 +794,7 @@ proc pp*(
     topOk = true;
     stackOk = true;
     kMapOk = true;
+    limit = 100;
       ): string =
   if topOk:
     result = db.layersCc.pp(
@@ -796,7 +815,7 @@ proc pp*(
       lStr &= " " & $m & "=(" & $(l.delta.kMap.len - c) & "," & $c & ")"
     result &= " =>" & lStr
   if backendOk:
-    result &= indent.toPfx & db.backend.pp(db)
+    result &= indent.toPfx & db.backend.pp(db, limit=limit, indent)
   elif filterOk:
     result &= indent.toPfx & db.roFilter.ppFilter(db, indent+1)


@@ -256,13 +256,19 @@ proc delSubTreeImpl(
     accPath: PathID;                 # Needed for real storage tries
       ): Result[void,(VertexID,AristoError)] =
   ## Implementation of *delete* sub-trie.
-  if not root.isValid:
-    return err((root,DelSubTreeVoidRoot))
-
-  if LEAST_FREE_VID <= root.distinctBase:
-    db.registerAccount(root, accPath).isOkOr:
-      return err((root,error))
+  let wp = block:
+    if root.distinctBase < LEAST_FREE_VID:
+      if not root.isValid:
+        return err((root,DelSubTreeVoidRoot))
+      if root == VertexID(1):
+        return err((root,DelSubTreeAccRoot))
+      VidVtxPair()
+    else:
+      let rc = db.registerAccount(root, accPath)
+      if rc.isErr:
+        return err((root,rc.error))
+      else:
+        rc.value
   var
     dispose = @[root]
     rootVtx = db.getVtxRc(root).valueOr:
@@ -287,6 +293,13 @@ proc delSubTreeImpl(
   for vid in dispose:
     db.disposeOfVtx(root, vid)

+  # Make sure that an account leaf has no dangling sub-trie
+  if wp.vid.isValid:
+    let leaf = wp.vtx.dup            # Dup on modify
+    leaf.lData.account.storageID = VertexID(0)
+    db.layersPutVtx(VertexID(1), wp.vid, leaf)
+    db.layersResKey(VertexID(1), wp.vid)
+
   # Squeeze list of recycled vertex IDs
   db.top.final.vGen = db.vGen.vidReorg()
   ok()
@@ -300,9 +313,15 @@ proc deleteImpl(
       ): Result[bool,(VertexID,AristoError)] =
   ## Implementation of *delete* functionality.
-  if LEAST_FREE_VID <= lty.root.distinctBase:
-    db.registerAccount(lty.root, accPath).isOkOr:
-      return err((lty.root,error))
+  let wp = block:
+    if lty.root.distinctBase < LEAST_FREE_VID:
+      VidVtxPair()
+    else:
+      let rc = db.registerAccount(lty.root, accPath)
+      if rc.isErr:
+        return err((lty.root,rc.error))
+      else:
+        rc.value

   # Remove leaf entry on the top
   let lf = hike.legs[^1].wp
@@ -311,7 +330,7 @@ proc deleteImpl(
   if lf.vid in db.pPrf:
     return err((lf.vid, DelLeafLocked))

-  # Verify thet there is no dangling storage trie
+  # Verify that there is no dangling storage trie
   block:
     let data = lf.vtx.lData
     if data.pType == AccountData:
@@ -367,6 +386,13 @@ proc deleteImpl(
   let emptySubTreeOk = not db.getVtx(hike.root).isValid

+  # Make sure that an account leaf has no dangling sub-trie
+  if emptySubTreeOk and wp.vid.isValid:
+    let leaf = wp.vtx.dup            # Dup on modify
+    leaf.lData.account.storageID = VertexID(0)
+    db.layersPutVtx(VertexID(1), wp.vid, leaf)
+    db.layersResKey(VertexID(1), wp.vid)
+
   # Squeeze list of recycled vertex IDs
   db.top.final.vGen = db.vGen.vidReorg()
   ok(emptySubTreeOk)
@@ -384,11 +410,18 @@ proc delTree*(
   ## `SUB_TREE_DISPOSAL_MAX`. Larger tries must be disposed by walk-deleting
   ## leaf nodes using `left()` or `right()` traversal functions.
   ##
-  ## For a `root` argument greater than `LEAST_FREE_VID`, the sub-tree spanned
-  ## by `root` is considered a storage trie linked to an account leaf referred
-  ## to by a valid `accPath` (i.e. different from `VOID_PATH_ID`.) In that
-  ## case, an account must exists. If there is payload of type `AccountData`,
-  ## its `storageID` field must be unset or equal to the `hike.root` vertex ID.
+  ## Note that the accounts trie hinging on `VertexID(1)` cannot be deleted.
+  ##
+  ## If the `root` argument belongs to a well known sub trie (i.e. it does
+  ## not exceed `LEAST_FREE_VID`) the `accPath` argument is ignored and the
+  ## sub-trie will just be deleted.
+  ##
+  ## Otherwise, a valid `accPath` (i.e. different from `VOID_PATH_ID`) is
+  ## required, relating to an account leaf entry (starting at `VertexID(1)`).
+  ## If the payload of that leaf entry is not of type `AccountData` it is
+  ## ignored. Otherwise its `storageID` field must be equal to the `hike.root`
+  ## vertex ID. This leaf entry's `storageID` field will be reset to
+  ## `VertexID(0)` after the sub-trie has been deleted.
   ##
   db.delSubTreeImpl(root, accPath)
@@ -397,16 +430,26 @@ proc delete*(
     hike: Hike;                      # Fully expanded chain of vertices
     accPath: PathID;                 # Needed for accounts payload
       ): Result[bool,(VertexID,AristoError)] =
-  ## Delete argument `hike` chain of vertices from the database.
+  ## Delete argument `hike` chain of vertices from the database. The return
+  ## code will be `true` iff the sub-trie starting at `hike.root` has
+  ## become empty.
   ##
-  ## For a `hike.root` with `VertexID` greater than `LEAST_FREE_VID`, the
-  ## sub-tree generated by `payload.root` is considered a storage trie linked
-  ## to an account leaf referred to by a valid `accPath` (i.e. different from
-  ## `VOID_PATH_ID`.) In that case, an account must exists. If there is payload
-  ## of type `AccountData`, its `storageID` field must be unset or equal to the
-  ## `hike.root` vertex ID.
+  ## If the `hike` argument refers to an account entry (i.e. `hike.root`
+  ## equals `VertexID(1)`) and the leaf entry has an `AccountData` payload,
+  ## its `storageID` field must have been reset to `VertexID(0)`. The
+  ## `accPath` argument will be ignored.
   ##
-  ## The return code is `true` iff the trie has become empty.
+  ## Otherwise, if the `root` argument belongs to a well known sub trie (i.e.
+  ## it does not exceed `LEAST_FREE_VID`) the `accPath` argument is ignored
+  ## and the entry will just be deleted.
+  ##
+  ## Otherwise, a valid `accPath` (i.e. different from `VOID_PATH_ID`) is
+  ## required, relating to an account leaf entry (starting at `VertexID(1)`).
+  ## If the payload of that leaf entry is not of type `AccountData` it is
+  ## ignored. Otherwise its `storageID` field must be equal to the `hike.root`
+  ## vertex ID. This leaf entry's `storageID` field will be reset to
+  ## `VertexID(0)` in case the deletion renders the sub-trie empty.
   ##
   let lty = LeafTie(
     root: hike.root,


@@ -83,14 +83,17 @@ type
     PathExpectedLeaf

     # Merge leaf `merge()`
-    MergeBranchLinkLeafGarbled
-    MergeBranchLinkVtxPfxTooShort
+    MergeAssemblyFailed # Ooops, internal error
     MergeBranchGarbledNibble
     MergeBranchGarbledTail
+    MergeBranchLinkLeafGarbled
     MergeBranchLinkLockedKey
     MergeBranchLinkProofModeLock
+    MergeBranchLinkVtxPfxTooShort
     MergeBranchProofModeLock
     MergeBranchRootExpected
+    MergeLeafCantChangePayloadType
+    MergeLeafCantChangeStorageID
     MergeLeafGarbledHike
     MergeLeafPathCachedAlready
     MergeLeafPathOnBackendAlready
@@ -98,7 +101,6 @@ type
     MergeNonBranchProofModeLock
     MergeRootBranchLinkBusy
     MergeRootMissing
-    MergeAssemblyFailed # Ooops, internal error

     MergeHashKeyInvalid
     MergeHashKeyDiffersFromCached
@@ -140,6 +142,8 @@ type
     CheckRlxVtxKeyMissing
     CheckRlxVtxKeyMismatch

+    CheckAnyVidDeadStorageRoot
+    CheckAnyVidSharedStorageRoot
     CheckAnyVtxEmptyKeyMissing
     CheckAnyVtxEmptyKeyExpected
     CheckAnyVtxEmptyKeyMismatch
@@ -202,6 +206,7 @@ type
     DelLeafUnexpected
     DelPathNotFound
     DelPathTagError
+    DelSubTreeAccRoot
     DelSubTreeTooBig
     DelSubTreeVoidRoot
     DelVidStaleVtx
@@ -261,6 +266,7 @@ type
     RdbHashKeyExpected

     # Transaction wrappers
+    TxAccRootMissing
     TxArgStaleTx
     TxArgsUseless
     TxBackendNotWritable


@@ -293,16 +293,6 @@ func cmp*(a, b: LeafTie): int =
 # Public helpers: Reversible conversions between `PathID`, `HashKey`, etc.
 # ------------------------------------------------------------------------------

-func to*(key: HashKey; T: type Blob): T {.deprecated.} =
-  ## Rewrite `HashKey` argument as `Blob` type of length between 0 and 32. A
-  ## blob of length 32 is taken as a representation of a `HashKey` type while
-  ## samller blobs are expected to represent an RLP encoded small node.
-  @(key.data)
-
-func `@`*(lid: HashKey): Blob {.deprecated.} =
-  ## Variant of `to(Blob)`
-  lid.to(Blob)
-
 func to*(pid: PathID; T: type NibblesSeq): T =
   ## Representation of a `PathID` as `NibbleSeq` (preserving full information)
   let nibbles = pid.pfx.toBytesBE.toSeq.initNibbleRange()


@@ -397,7 +397,7 @@ iterator walk*(
     yield (VtxPfx, vid.uint64, data)

   for (vid,key) in be.walkKey:
-    yield (KeyPfx, vid.uint64, @key)
+    yield (KeyPfx, vid.uint64, @(key.data))

   if not be.mdb.noFq:
     for lid in be.mdb.rFil.keys.toSeq.mapIt(it.uint64).sorted.mapIt(it.QueueID):


@@ -218,7 +218,7 @@ proc putKeyFn(db: RdbBackendRef): PutKeyFn =
         var batch: seq[(uint64,Blob)]
         for (vid,key) in vkps:
           if key.isValid:
-            batch.add (vid.uint64, @key)
+            batch.add (vid.uint64, @(key.data))
           else:
             batch.add (vid.uint64, EmptyBlob)


@@ -470,7 +470,7 @@ proc updatePayload(
     db: AristoDbRef;                 # Database, top layer
     hike: Hike;                      # No path legs
     leafTie: LeafTie;                # Leaf item to add to the database
-    payload: PayloadRef;             # Payload value
+    payload: PayloadRef;             # Payload value to add
       ): Result[Hike,AristoError] =
   ## Update leaf vertex if payloads differ
   let leafLeg = hike.legs[^1]
@@ -481,6 +481,14 @@ proc updatePayload(
   if vid in db.pPrf:
     return err(MergeLeafProofModeLock)

+  # Verify that the account leaf can be replaced
+  if leafTie.root == VertexID(1):
+    if leafLeg.wp.vtx.lData.pType != payload.pType:
+      return err(MergeLeafCantChangePayloadType)
+    if payload.pType == AccountData and
+       payload.account.storageID != leafLeg.wp.vtx.lData.account.storageID:
+      return err(MergeLeafCantChangeStorageID)
+
   # Update vertex and hike
   let vtx = VertexRef(
     vType: Leaf,
@@ -624,21 +632,41 @@ proc mergePayload*(
       ): Result[Hike,AristoError] =
   ## Merge the argument `leafTie` key-value-pair into the top level vertex
   ## table of the database `db`. The field `path` of the `leafTie` argument is
-  ## used to index the leaf vertex on the `Patricia Trie`. The field `payload`
-  ## is stored with the leaf vertex in the database unless the leaf vertex
-  ## exists already.
+  ## used to address the leaf vertex with the payload. It is stored or updated
+  ## on the database accordingly.
   ##
-  ## For a `payload.root` with `VertexID` greater than `LEAST_FREE_VID`, the
-  ## sub-tree generated by `payload.root` is considered a storage trie linked
-  ## to an account leaf referred to by a valid `accPath` (i.e. different from
-  ## `VOID_PATH_ID`.) In that case, an account must exists. If there is payload
-  ## of type `AccountData`, its `storageID` field must be unset or equal to the
-  ## `payload.root` vertex ID.
+  ## If the `leafTie` argument refers to an account entry (i.e. the
+  ## `leafTie.root` equals `VertexID(1)`) and the leaf entry already has an
+  ## `AccountData` payload, its `storageID` field must be the same as the one
+  ## on the database. The `accPath` argument will be ignored.
   ##
-  if LEAST_FREE_VID <= leafTie.root.distinctBase:
-    ? db.registerAccount(leafTie.root, accPath)
-  elif not leafTie.root.isValid:
-    return err(MergeRootMissing)
+  ## Otherwise, if the `root` argument belongs to a well known sub trie (i.e.
+  ## it does not exceed `LEAST_FREE_VID`) the `accPath` argument is ignored
+  ## and the entry will just be merged.
+  ##
+  ## Otherwise, a valid `accPath` (i.e. different from `VOID_PATH_ID`) is
+  ## required, relating to an account leaf entry (starting at `VertexID(1)`).
+  ## If the payload of that leaf entry is not of type `AccountData` it is
+  ## ignored.
+  ##
+  ## Otherwise, if the sub-trie where the `leafTie` is to be merged into does
+  ## not exist yet, the `storageID` field of the `accPath` leaf must have been
+  ## reset to `VertexID(0)` and will be updated accordingly on the database.
+  ##
+  ## Otherwise its `storageID` field must be equal to the `leafTie.root` vertex
+  ## ID, so vertices can be marked for Merkle hash update.
+  ##
+  let wp = block:
+    if leafTie.root.distinctBase < LEAST_FREE_VID:
+      if not leafTie.root.isValid:
+        return err(MergeRootMissing)
+      VidVtxPair()
+    else:
+      let rc = db.registerAccount(leafTie.root, accPath)
+      if rc.isErr:
+        return err(rc.error)
+      else:
+        rc.value

   let hike = leafTie.hikeUp(db).to(Hike)
   var okHike: Hike
@@ -676,6 +704,13 @@ proc mergePayload*(
     if rc.isErr or rc.value != leafTie.path:
       return err(MergeAssemblyFailed) # Ooops

+  # Make sure that there is an account that refers to that storage trie
+  if wp.vid.isValid and not wp.vtx.lData.account.storageID.isValid:
+    let leaf = wp.vtx.dup            # Dup on modify
+    leaf.lData.account.storageID = leafTie.root
+    db.layersPutVtx(VertexID(1), wp.vid, leaf)
+    db.layersResKey(VertexID(1), wp.vid)
+
   ok okHike
@@ -752,9 +787,9 @@ proc merge*(
   ## Check for embedded nodes, i.e. fully encoded node instead of a hash.
   ## They need to be treated as full nodes, here.
   if key.isValid and key.len < 32:
-    let lid = @key.digestTo(HashKey)
+    let lid = @(key.data).digestTo(HashKey)
     if not seen.hasKey lid:
-      let node = @key.decode(NodeRef)
+      let node = @(key.data).decode(NodeRef)
       discard todo.append node
       seen[lid] = node


@@ -14,7 +14,7 @@
 {.push raises: [].}

 import
-  std/options,
+  std/[options, tables],
   results,
   ".."/[aristo_desc, aristo_get, aristo_journal, aristo_layers, aristo_hashify]
@@ -68,6 +68,10 @@ proc txStow*(
       # It is OK if there was no `Idg`. Otherwise something serious happened
       # and there is no way to recover easily.
       doAssert rc.error == GetIdgNotFound
+  elif db.top.delta.sTab.len != 0 and
+       not db.top.delta.sTab.getOrVoid(VertexID(1)).isValid:
+    # Currently, a `VertexID(1)` root node is required
+    return err(TxAccRootMissing)

   if persistent:
     # Merge/move `roFilter` into persistent tables


@@ -190,10 +190,13 @@ proc registerAccount*(
     db: AristoDbRef;                 # Database, top layer
     stoRoot: VertexID;               # Storage root ID
     accPath: PathID;                 # Needed for accounts payload
-      ): Result[void,AristoError] =
+      ): Result[VidVtxPair,AristoError] =
   ## Verify that the `stoRoot` argument is properly referred to by the
   ## account data (if any) implied to by the `accPath` argument.
   ##
+  ## The function will return an account leaf node if there was any, or an
+  ## empty `VidVtxPair()` object.
+  ##
   # Verify storage root and account path
   if not stoRoot.isValid:
     return err(UtilsStoRootMissing)
@@ -208,12 +211,26 @@ proc registerAccount*(
   if wp.vtx.vType != Leaf:
     return err(UtilsAccPathWithoutLeaf)
   if wp.vtx.lData.pType != AccountData:
-    return ok() # nothing to do
+    return ok(VidVtxPair()) # nothing to do

-  # Need to flag for re-hash
+  # Check whether the `stoRoot` exists on the database
+  let stoVtx = block:
+    let rc = db.getVtxRc stoRoot
+    if rc.isOk:
+      rc.value
+    elif rc.error == GetVtxNotFound:
+      VertexRef(nil)
+    else:
+      return err(rc.error)
+
+  # Verify `stoVtx` against storage root
   let stoID = wp.vtx.lData.account.storageID
-  if stoID.isValid and stoID != stoRoot:
-    return err(UtilsAccWrongStorageRoot)
+  if stoVtx.isValid:
+    if stoID != stoRoot:
+      return err(UtilsAccWrongStorageRoot)
+  else:
+    if stoID.isValid:
+      return err(UtilsAccWrongStorageRoot)

   # Clear Merkle keys so that `hashify()` can calculate the re-hash forest/tree
   for w in hike.legs.mapIt(it.wp.vid):
@@ -223,7 +240,7 @@ proc registerAccount*(
   db.top.final.dirty.incl hike.root
   db.top.final.dirty.incl wp.vid

-  ok()
+  ok(wp)

 # ------------------------------------------------------------------------------
 # End


@@ -33,7 +33,7 @@ iterator aristoReplicate[T](
   defer: discard api.forget(p)
   for (vid,key,vtx,node) in T.replicate(p):
     if key.len == 32:
-      yield (@key, node.encode)
+      yield (@(key.data), node.encode)
     elif vid == root:
       yield (@(key.to(Hash256).data), node.encode)


@@ -197,19 +197,28 @@ proc mptMethods(cMpt: AristoCoreDxMptRef): CoreDbMptFns =
     db.bless AristoCoreDbMptBE(adb: mpt)

   proc mptColFn(): CoreDbColRef =
-    let col =
-      if LEAST_FREE_VID <= cMpt.mptRoot.distinctBase:
-        assert cMpt.accPath.isValid # debug mode only
-        AristoColRef(
-          base: base,
-          colType: CtStorage,
-          stoRoot: cMpt.mptRoot,
-          stoAddr: cMpt.address)
-      else:
-        AristoColRef(
-          base: base,
-          colType: CoreDbColType(cMpt.mptRoot))
-    db.bless col
+    if cMpt.mptRoot.distinctBase < LEAST_FREE_VID:
+      return db.bless(AristoColRef(
+        base: base,
+        colType: CoreDbColType(cMpt.mptRoot)))
+
+    assert cMpt.accPath.isValid # debug mode only
+    if cMpt.mptRoot.isValid:
+      # The mpt might have become empty
+      let
+        key = cMpt.address.keccakHash.data
+        pyl = api.fetchPayload(mpt, AccountsVID, key).valueOr:
+          raiseAssert "mptColFn(): " & $error[1] & " at " & $error[0]
+
+      # Update by accounts data
+      doAssert pyl.pType == AccountData
+      cMpt.mptRoot = pyl.account.storageID
+
+    db.bless AristoColRef(
+      base: base,
+      colType: CtStorage,
+      stoRoot: cMpt.mptRoot,
+      stoAddr: cMpt.address)

   proc mptFetch(key: openArray[byte]): CoreDbRc[Blob] =
     const info = "fetchFn()"


@@ -362,8 +362,16 @@ proc persistStorage(acc: AccountRef, ac: AccountsLedgerRef, clearCache: bool) =
       acc.originalStorage.del(slot)
   acc.overlayStorage.clear()

+  # Changing the storage trie might also change the `storage` descriptor when
+  # the trie changes from empty to existing or v.v.
   acc.statement.storage = storageLedger.getColumn()

+  # No need to hold descriptors for longer than needed
+  let state = acc.statement.storage.state.valueOr:
+    raiseAssert "Storage column state error: " & $$error
+  if state == EMPTY_ROOT_HASH:
+    acc.statement.storage = CoreDbColRef(nil)
+
 proc makeDirty(ac: AccountsLedgerRef, address: EthAddress, cloneStorage = true): AccountRef =
   ac.isDirty = true
   result = ac.getAccount(address)
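The `persistStorage` change drops the column descriptor whenever the trie has collapsed back to the empty state, rather than carrying a dangling reference. A minimal sketch of that guard (hypothetical names; the placeholder hash merely stands in for the well-known empty-trie root):

```python
EMPTY_ROOT_HASH = "empty-trie-root"  # placeholder for the empty-trie state hash

def refresh_descriptor(column_state):
    # After persisting, re-read the column state; if the trie collapsed to
    # the empty state, drop the handle instead of keeping a reference to a
    # column that no longer exists.
    return None if column_state == EMPTY_ROOT_HASH else column_state

assert refresh_descriptor(EMPTY_ROOT_HASH) is None
assert refresh_descriptor("0x1234") == "0x1234"
```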


@@ -1,5 +1,5 @@
 # Nimbus
-# Copyright (c) 2018-2023 Status Research & Development GmbH
+# Copyright (c) 2018-2024 Status Research & Development GmbH
 # Licensed under either of
 #  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
 #    http://www.apache.org/licenses/LICENSE-2.0)
@@ -16,7 +16,7 @@ import
   stew/byteutils,
   nimcrypto,
   results,
-  ../db/core_db,
+  ../db/aristo,
   ../constants

 export eth_types_rlp


@@ -20,8 +20,8 @@ cliBuilder:
     ./test_stack,
     ./test_genesis,
     ./test_precompiles,
-    #./test_generalstate_json, -- fails
-    ./test_tracer_json,
+    ./test_generalstate_json,
+    #./test_tracer_json, -- temporarily suspended
     #./test_persistblock_json, -- fails
     #./test_rpc, -- fails
     ./test_filters,


@@ -58,7 +58,7 @@ func hash(filter: FilterRef): Hash =
     h = h !& (w.uint64.toBytesBE.toSeq & data).hash
   for w in filter.kMap.keys.toSeq.mapIt(it.uint64).sorted.mapIt(it.VertexID):
-    let data = @(filter.kMap.getOrVoid(w))
+    let data = @(filter.kMap.getOrVoid(w).data)
     h = h !& (w.uint64.toBytesBE.toSeq & data).hash
   !$h


@@ -132,7 +132,7 @@ func to*(a: Hash256; T: type PathID): T =
   a.to(UInt256).to(T)

 func to*(a: HashKey; T: type UInt256): T =
-  T.fromBytesBE 0u8.repeat(32 - a.len) & @a
+  T.fromBytesBE 0u8.repeat(32 - a.len) & @(a.data)

 func to*(fid: FilterID; T: type Hash256): T =
   result.data = fid.uint64.u256.toBytesBE
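The `HashKey` → `UInt256` conversion left-pads keys shorter than 32 bytes with zeroes before the big-endian read, mirroring `0u8.repeat(32 - a.len) & @(a.data)`. The equivalent in Python (function name illustrative):

```python
def hashkey_to_uint256(key: bytes) -> int:
    # Left-pad to 32 bytes with zeroes, then interpret as a big-endian
    # integer, so shorter keys map to small values rather than shifting.
    assert len(key) <= 32
    return int.from_bytes(b"\x00" * (32 - len(key)) + key, "big")

assert hashkey_to_uint256(b"\x01") == 1
assert hashkey_to_uint256(b"\x01\x00") == 256
```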


@@ -443,6 +443,9 @@ proc testTxMergeAndDeleteSubTree*(
     list: openArray[ProofTrieData];
     rdbPath: string;                     # Rocks DB storage directory
       ): bool =
+  const
+    # Need to reconfigure for the test, root ID 1 cannot be deleted as a trie
+    testRootVid = VertexID(2)
   var
     prng = PrngDesc.init 42
     db = AristoDbRef(nil)
@@ -460,6 +463,10 @@ proc testTxMergeAndDeleteSubTree*(
       else:
         AristoDbRef.init(MemBackendRef, qidLayout=TxQidLyo)

+    if testRootVid != VertexID(1):
+      # Add a dummy entry so the journal logic can be triggered
+      discard db.merge(VertexID(1), @[n.byte], @[42.byte], VOID_PATH_ID)
+
     # Start transaction (double frame for testing)
     xCheck db.txTop.isErr
     var tx = db.txBegin().value.to(AristoDbRef).txBegin().value
@@ -469,9 +476,9 @@ proc testTxMergeAndDeleteSubTree*(
     # Reset database so that the next round has a clean setup
     defer: db.innerCleanUp

-    # Merge leaf data into main trie (w/vertex ID 1)
+    # Merge leaf data into main trie (w/vertex ID 2)
     let kvpLeafs = block:
-      var lst = w.kvpLst.mapRootVid VertexID(1)
+      var lst = w.kvpLst.mapRootVid testRootVid
       # The list might be reduced for isolation of particular properties,
       # e.g. lst.setLen(min(5,lst.len))
       lst
@@ -500,12 +507,17 @@ proc testTxMergeAndDeleteSubTree*(
         ""

     # Delete sub-tree
     block:
-      let rc = db.delTree(VertexID(1), VOID_PATH_ID)
+      let rc = db.delTree(testRootVid, VOID_PATH_ID)
       xCheckRc rc.error == (0,0):
         noisy.say "***", "del(2)",
           " n=", n, "/", list.len,
           "\n db\n ", db.pp(backendOk=true),
           ""

+    if testRootVid != VertexID(1):
+      # Update dummy entry so the journal logic can be triggered
+      discard db.merge(VertexID(1), @[n.byte], @[43.byte], VOID_PATH_ID)
+
     block:
       let saveBeOk = tx.saveToBackend(
         chunkedMpt=false, relax=false, noisy=noisy, 2 + list.len * n)


@@ -198,6 +198,7 @@ proc chainSyncRunner(
     finalDiskCleanUpOk = true;
     enaLoggingOk = false;
     lastOneExtraOk = true;
+    oldLogAlign = false;
       ) =
   ## Test backend database and ledger
@@ -241,7 +242,8 @@ proc chainSyncRunner(
       com.db.trackLedgerApi = true
       check noisy.test_chainSync(filePaths, com, numBlocks,
-        lastOneExtra=lastOneExtraOk, enaLogging=enaLoggingOk)
+        lastOneExtra=lastOneExtraOk, enaLogging=enaLoggingOk,
+        oldLogAlign=oldLogAlign)

 proc persistentSyncPreLoadAndResumeRunner(
@@ -252,12 +254,10 @@ proc persistentSyncPreLoadAndResumeRunner(
     finalDiskCleanUpOk = true;
     enaLoggingOk = false;
     lastOneExtraOk = true;
+    oldLogAlign = false;
       ) =
   ## Test backend database and ledger
   let
-    fileInfo = capture.files[0]
-      .splitFile.name.split(".")[0]
-      .strip(leading=false, chars={'0'..'9'})
     filePaths = capture.files.mapIt(it.findFilePath(baseDir,repoDir).value)
     baseDir = getTmpDir() / capture.dbName & "-chain-sync"
     dbDir = baseDir / "tmp"
@@ -294,7 +294,8 @@ proc persistentSyncPreLoadAndResumeRunner(
       com.db.trackLedgerApi = true
       check noisy.test_chainSync(filePaths, com, firstPart,
-        lastOneExtra=lastOneExtraOk, enaLogging=enaLoggingOk)
+        lastOneExtra=lastOneExtraOk, enaLogging=enaLoggingOk,
+        oldLogAlign=oldLogAlign)

     test &"Continue with rest of sample":
       let
@@ -310,7 +311,8 @@ proc persistentSyncPreLoadAndResumeRunner(
       com.db.trackLedgerApi = true
       check noisy.test_chainSync(filePaths, com, secndPart,
-        lastOneExtra=lastOneExtraOk, enaLogging=enaLoggingOk)
+        lastOneExtra=lastOneExtraOk, enaLogging=enaLoggingOk,
+        oldLogAlign=oldLogAlign)

 # ------------------------------------------------------------------------------
 # Main function(s)
@@ -330,7 +332,7 @@ when isMainModule:
   setErrorLevel()

-  when true:
+  when true and false:
     false.coreDbMain()

   # This one uses the readily available dump: `bulkTest0` and some huge replay
@@ -349,6 +351,7 @@ when isMainModule:
       capture = capture,
       #profilingOk = true,
       #finalDiskCleanUpOk = false,
+      oldLogAlign = true
         )
   noisy.say "***", "total: ", state[0].pp, " sections: ", state[1]


@@ -78,8 +78,8 @@ let
     builtIn: true,
     name: "main",
     network: MainNet,
-    # will run over all avail files in parent folder
-    files: @["00000.era1"]) # on external repo
+    # The external repo is identified by a tag file
+    files: @["mainnet-extern.era1"]) # on external repo

   # ------------------
@@ -114,6 +114,7 @@ let
       dbType = AristoDbRocks,
       dbName = "main-open") # for resuming on the same persistent DB

+  # -----------------------

   mainTest5m* = mainSampleEx
     .cloneWith(
@@ -121,18 +122,38 @@ let
       numBlocks = 500_000)

   mainTest6r* = mainSampleEx
+    .cloneWith(
+      name = "-ex-ar-some",
+      numBlocks = 257_400,
+      dbType = AristoDbRocks,
+      dbName = "main-open") # for resuming on the same persistent DB
+
+  mainTest7r* = mainSampleEx
+    .cloneWith(
+      name = "-ex-ar-more",
+      numBlocks = 1_460_700, # failure at 1,460,736
+      dbType = AristoDbRocks,
+      dbName = "main-open") # for resuming on the same persistent DB
+
+  mainTest8r* = mainSampleEx
+    .cloneWith(
+      name = "-ex-ar-more2",
+      numBlocks = 1_460_735, # failure at 1,460,736
+      dbType = AristoDbRocks,
+      dbName = "main-open") # for resuming on the same persistent DB
+
+  mainTest9r* = mainSampleEx
     .cloneWith(
       name = "-ex-ar",
       numBlocks = high(int),
-      dbType = AristoDbRocks)
+      dbType = AristoDbRocks,
+      dbName = "main-open") # for resuming on the same persistent DB

   # ------------------

   allSamples* = [
-    mainTest0m, mainTest1m,
-    mainTest2r, mainTest3r, mainTest4r,
-    mainTest5m, mainTest6r
+    mainTest0m, mainTest1m, mainTest2r, mainTest3r, mainTest4r,
+    mainTest5m, mainTest6r, mainTest7r, mainTest8r, mainTest9r,
   ]

 # End


@@ -150,7 +150,8 @@ proc test_chainSync*(
     com: CommonRef;
     numBlocks = high(int);
     enaLogging = true;
-    lastOneExtra = true
+    lastOneExtra = true;
+    oldLogAlign = false;
       ): bool =
   ## Store persistent blocks from dump into chain DB
   let
@@ -205,7 +206,9 @@ proc test_chainSync*(
       if blocks > 0:
         total += blocks
         let done {.inject.} = toUnixFloat(getTime())
-        noisy.say "", &"{blocks:3} blocks, {(done-sample):2.3}s, {(blocks.float / (done-sample)):4.3f} b/s, avg {(total.float / (done-begin)):4.3f} b/s"
+        noisy.say "", &"{blocks:3} blocks, {(done-sample):2.3}s,",
+          " {(blocks.float / (done-sample)):4.3f} b/s,",
+          " avg {(total.float / (done-begin)):4.3f} b/s"
         blocks = 0
         sample = done
@@ -219,9 +222,13 @@ proc test_chainSync*(
     if toBlock < lastBlock:
       # Message if `[fromBlock,toBlock]` contains a multiple of `sayBlocks`
       if fromBlock + (toBlock mod sayBlocks.u256) <= toBlock:
-        sayPerf
-        noisy.whisper "***", &"processing ...[#{fromBlock:>8},#{toBlock:>8}]..."
+        if oldLogAlign:
+          noisy.whisper "***",
+            &"processing ...[#{fromBlock},#{toBlock}]...\n"
+        else:
+          sayPerf
+          noisy.whisper "***",
+            &"processing ...[#{fromBlock:>8},#{toBlock:>8}]..."
       if enaLogging:
         noisy.startLogging(w[0][0].blockNumber)
@@ -229,7 +236,7 @@ proc test_chainSync*(
       let runPersistBlocksRc = chain.persistBlocks(w[0], w[1])
       xCheck runPersistBlocksRc == ValidationResult.OK:
         if noisy:
-          # Re-run with logging enabled
+          noisy.whisper "***", "Re-run with logging enabled...\n"
           setTraceLevel()
           com.db.trackLegaApi = false
           com.db.trackNewApi = false
@@ -255,8 +262,12 @@ proc test_chainSync*(
       let
         headers1 = w[0][0 ..< pivot]
         bodies1 = w[1][0 ..< pivot]
-      sayPerf
-      noisy.whisper "***", &"processing {dotsOrSpace}[#{fromBlock:>8},#{(lastBlock-1):>8}]"
+      if oldLogAlign:
+        noisy.whisper "***", &"processing ...[#{fromBlock},#{toBlock}]...\n"
+      else:
+        sayPerf
+        noisy.whisper "***",
+          &"processing {dotsOrSpace}[#{fromBlock:>8},#{(lastBlock-1):>8}]"
       let runPersistBlocks1Rc = chain.persistBlocks(headers1, bodies1)
       xCheck runPersistBlocks1Rc == ValidationResult.OK
       dotsOrSpace = "  "
@@ -266,19 +277,30 @@ proc test_chainSync*(
       let
         headers0 = headers9[0..0]
         bodies0 = bodies9[0..0]
-      sayPerf
-      noisy.whisper "***", &"processing {dotsOrSpace}[#{lastBlock:>8},#{lastBlock:>8}]"
+      if oldLogAlign:
+        noisy.whisper "***",
+          &"processing {dotsOrSpace}[#{fromBlock},#{lastBlock-1}]\n"
+      else:
+        sayPerf
+        noisy.whisper "***",
+          &"processing {dotsOrSpace}[#{lastBlock:>8},#{lastBlock:>8}]"
       noisy.stopLoggingAfter():
         let runPersistBlocks0Rc = chain.persistBlocks(headers0, bodies0)
         xCheck runPersistBlocks0Rc == ValidationResult.OK
     else:
-      sayPerf
-      noisy.whisper "***", &"processing {dotsOrSpace}[#{lastBlock:>8},#{toBlock:>8}]"
+      if oldLogAlign:
+        noisy.whisper "***",
+          &"processing {dotsOrSpace}[#{lastBlock},#{toBlock}]\n"
+      else:
+        sayPerf
+        noisy.whisper "***",
+          &"processing {dotsOrSpace}[#{lastBlock:>8},#{toBlock:>8}]"
       noisy.stopLoggingAfter():
         let runPersistBlocks9Rc = chain.persistBlocks(headers9, bodies9)
         xCheck runPersistBlocks9Rc == ValidationResult.OK
       break
-  sayPerf
+  if not oldLogAlign:
+    sayPerf

   true