nimbus-eth1/tests/test_aristo/test_balancer.nim

# Nimbus
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed under either of
#  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
#    http://www.apache.org/licenses/LICENSE-2.0)
#  * MIT license ([LICENSE-MIT](LICENSE-MIT) or
#    http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or
# distributed except according to those terms.

## Aristo (aka Patricia) DB records distributed backend access test.
##

import
  eth/common,
  results,
  unittest2,
  ../../nimbus/db/opts,
  ../../nimbus/db/core_db/backend/aristo_rocksdb,
  ../../nimbus/db/aristo/[
    aristo_check,
    aristo_debug,
    aristo_desc,
    aristo_get,
    aristo_persistent,
    aristo_tx],
  ../replay/xcheck,
  ./test_helpers

type
  LeafQuartet =
    array[0..3, seq[LeafTiePayload]]

  DbTriplet =
    array[0..2, AristoDbRef]

const
  testRootVid = VertexID(2)
    ## Need to reconfigure for the test, root ID 1 cannot be deleted as a trie

# ------------------------------------------------------------------------------
# Private debugging helpers
# ------------------------------------------------------------------------------

proc dump(pfx: string; dx: varargs[AristoDbRef]): string =
  if 0 < dx.len:
    result = "\n "
  var
    pfx = pfx
    qfx = ""
  if pfx.len == 0:
    (pfx,qfx) = ("[","]")
  elif 1 < dx.len:
    pfx = pfx & "#"

  for n in 0 ..< dx.len:
    let n1 = n + 1
    result &= pfx
    if 1 < dx.len:
      result &= $n1
    result &= qfx & "\n " & dx[n].pp(backendOk=true) & "\n"
    if n1 < dx.len:
      result &= " ==========\n "

proc dump(dx: varargs[AristoDbRef]): string {.used.} =
  "".dump dx

proc dump(w: DbTriplet): string {.used.} =
  "db".dump(w[0], w[1], w[2])

# ------------------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------------------

iterator quadripartite(td: openArray[ProofTrieData]): LeafQuartet =
  ## ...
  var collect: seq[seq[LeafTiePayload]]

  for w in td:
    let lst = w.kvpLst.mapRootVid testRootVid

    if lst.len < 8:
      if 2 < collect.len:
        yield [collect[0], collect[1], collect[2], lst]
        collect.setLen(0)
      else:
        collect.add lst
    else:
      if collect.len == 0:
        let a = lst.len div 4
        yield [lst[0 ..< a], lst[a ..< 2*a], lst[2*a ..< 3*a], lst[3*a .. ^1]]
      else:
        if collect.len == 1:
          let a = lst.len div 3
          yield [collect[0], lst[0 ..< a], lst[a ..< 2*a], lst[a .. ^1]]
        elif collect.len == 2:
          let a = lst.len div 2
          yield [collect[0], collect[1], lst[0 ..< a], lst[a .. ^1]]
        else:
          yield [collect[0], collect[1], collect[2], lst]
        collect.setLen(0)
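
# For illustration only (not part of the test): with an empty `collect` and a
# single 12-item `lst`, the branch above computes `a = 12 div 4 = 3` and
# yields the quartet `lst[0 ..< 3]`, `lst[3 ..< 6]`, `lst[6 ..< 9]`,
# `lst[9 .. ^1]`.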

proc dbTriplet(w: LeafQuartet; rdbPath: string): Result[DbTriplet,AristoError] =
  let db = block:
    if 0 < rdbPath.len:
      let (dbOpts, cfOpts) = DbOptions.init().toRocksDb()
      let rc = AristoDbRef.init(RdbBackendRef, rdbPath, dbOpts, cfOpts, [])
      xCheckRc rc.error == 0:
        result = err(rc.error)
      rc.value()[0]
    else:
      AristoDbRef.init MemBackendRef

  # Set failed `xCheck()` error result
  result = err(AristoError 1)

  # Fill backend
  block:
    let report = db.mergeList w[0]
    if report.error != 0:
      db.finish(eradicate=true)
      xCheck report.error == 0
    let rc = db.persist()
    xCheckRc rc.error == 0:
      result = err(rc.error)

  let dx = [db, db.forkTx(0).value, db.forkTx(0).value]
  xCheck dx[0].nForked == 2

  # Reduce unwanted tx layers
  for n in 1 ..< dx.len:
    xCheck dx[n].level == 1
    xCheck dx[n].txTop.value.commit.isOk

  # Clause (9) from `aristo/README.md` example
  for n in 0 ..< dx.len:
    let report = dx[n].mergeList w[n+1]
    if report.error != 0:
      db.finish(eradicate=true)
      xCheck (n, report.error) == (n,0)

  return ok(dx)
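
# A minimal usage sketch, assuming the in-memory backend (an empty `rdbPath`
# selects `MemBackendRef` above); `w` is one `LeafQuartet` as produced by the
# `quadripartite` iterator, and `cleanUp` is defined below:
#
#   var dx = dbTriplet(w, "").expect "dbTriplet"
#   defer: dx.cleanUp()
#   doAssert dx[0].nForked == 2     # one centre descriptor plus two forks
#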
# ----------------------

proc cleanUp(dx: var DbTriplet) =
  if not dx[0].isNil:
    dx[0].finish(eradicate=true)
    dx.reset

# ----------------------

proc isDbEq(a, b: LayerRef; db: AristoDbRef; noisy = true): bool =
  ## Verify that argument filter `a` has the same effect on the
  ## physical/unfiltered backend of `db` as argument filter `b`.
  if a.isNil:
    return b.isNil
  if b.isNil:
    return false
  if unsafeAddr(a[]) != unsafeAddr(b[]):
    if a.kMap.getOrVoid((testRootVid, testRootVid)) !=
         b.kMap.getOrVoid((testRootVid, testRootVid)) or
       a.vTop != b.vTop:
      return false

    # Void entries may differ unless on physical backend
    var (aTab, bTab) = (a.sTab, b.sTab)
    if aTab.len < bTab.len:
      aTab.swap bTab
    for (vid,aVtx) in aTab.pairs:
      let bVtx = bTab.getOrVoid vid
      bTab.del vid

      if aVtx != bVtx:
        if aVtx.isValid and bVtx.isValid:
          return false
        # The valid one must match the backend data
        let rc = db.getVtxUbe vid
        if rc.isErr:
          return false
        let vtx = if aVtx.isValid: aVtx else: bVtx
        if vtx != rc.value:
          return false

      elif not vid.isValid and not bTab.hasKey vid:
        let rc = db.getVtxUbe vid
        if rc.isOk:
          return false # Exists on backend but missing on `bTab[]`
        elif rc.error != GetKeyNotFound:
          return false # general error

    if 0 < bTab.len:
      noisy.say "***", "not dbEq:", "bTabLen=", bTab.len
      return false

    # Similar for `kMap[]`
    var (aMap, bMap) = (a.kMap, b.kMap)
    if aMap.len < bMap.len:
      aMap.swap bMap
    for (vid,aKey) in aMap.pairs:
      let bKey = bMap.getOrVoid vid
      bMap.del vid

      if aKey != bKey:
        if aKey.isValid and bKey.isValid:
          return false
        # The valid one must match the backend data
        let rc = db.getKeyUbe vid
        if rc.isErr:
          return false
        let key = if aKey.isValid: aKey else: bKey
        if key != rc.value:
          return false

      elif not vid.isValid and not bMap.hasKey vid:
        let rc = db.getKeyUbe vid
        if rc.isOk:
          return false # Exists on backend but missing on `bMap[]`
        elif rc.error != GetKeyNotFound:
          return false # general error

    if 0 < bMap.len:
      noisy.say "***", "not dbEq:", " bMapLen=", bMap.len
      return false

  true

# ----------------------

proc checkBeOk(
    dx: DbTriplet;
    forceCache = false;
    noisy = true;
      ): bool =
  ## ..
  for n in 0 ..< dx.len:
    let rc = dx[n].checkBE()
    xCheckRc rc.error == (0,0):
      noisy.say "***", "db checkBE failed",
        " n=", n, "/", dx.len-1
  true

# ------------------------------------------------------------------------------
# Public test function
# ------------------------------------------------------------------------------

proc testBalancer*(
    noisy: bool;
    list: openArray[ProofTrieData];
    rdbPath: string;                          # Rocks DB storage directory
      ): bool =
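  ## Walk through clauses (8)..(15) of the example in `aristo/README.md`:
  ## build triplets of descriptors sharing one backend, persist or stow the
  ## changes via different descriptors, and verify that the captured balancer
  ## filters have the same effect on the unfiltered backend (see `isDbEq`).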
  var n = 0
  for w in list.quadripartite:
    n.inc

    # Resulting clause (11) filters from `aristo/README.md` example
    # which will be used in the second part of the tests
    var
      c11Filter1 = LayerRef(nil)
      c11Filter3 = LayerRef(nil)

    # Work through clauses (8)..(11) from `aristo/README.md` example
    block:

      # Clause (8) from `aristo/README.md` example
      var
        dx = block:
          let rc = dbTriplet(w, rdbPath)
          xCheckRc rc.error == 0
          rc.value
        (db1, db2, db3) = (dx[0], dx[1], dx[2])
      defer:
        dx.cleanUp()

      when false: # or true:
        noisy.say "*** testDistributedAccess (1)", "n=", n # , dx.dump

      # Clause (9) from `aristo/README.md` example
      block:
        let rc = db1.persist()
        xCheckRc rc.error == 0
      xCheck db1.balancer == LayerRef(nil)
      xCheck db2.balancer == db3.balancer

      block:
        let rc = db2.stow() # non-persistent
        xCheckRc rc.error == 0:
          noisy.say "*** testDistributedAccess (3)", "n=", n, "db2".dump db2
      xCheck db1.balancer == LayerRef(nil)
      xCheck db2.balancer != db3.balancer

      # Clause (11) from `aristo/README.md` example
      discard db2.reCentre()
      block:
        let rc = db2.persist()
        xCheckRc rc.error == 0
      xCheck db2.balancer == LayerRef(nil)

      # Check/verify backends
      block:
        let ok = dx.checkBeOk(noisy=noisy)
        xCheck ok:
          noisy.say "*** testDistributedAccess (4)", "n=", n, "db3".dump db3

      # Capture filters from clause (11)
      c11Filter1 = db1.balancer
      c11Filter3 = db3.balancer

      # Clean up
      dx.cleanUp()

    # ----------

    # Work through clauses (12)..(15) from `aristo/README.md` example
    block:
      var
        dy = block:
          let rc = dbTriplet(w, rdbPath)
          xCheckRc rc.error == 0
          rc.value
        (db1, db2, db3) = (dy[0], dy[1], dy[2])
      defer:
        dy.cleanUp()

      # Build clause (12) from `aristo/README.md` example
      discard db2.reCentre()
      block:
        let rc = db2.persist()
        xCheckRc rc.error == 0
      xCheck db2.balancer == LayerRef(nil)
      xCheck db1.balancer == db3.balancer

      # Clause (13) from `aristo/README.md` example
      xCheck not db1.isCentre()
      block:
        let rc = db1.stow() # non-persistent
        xCheckRc rc.error == 0

      # Clause (14) from `aristo/README.md` check
      let c11Fil1_eq_db1RoFilter = c11Filter1.isDbEq(db1.balancer, db1, noisy)
      xCheck c11Fil1_eq_db1RoFilter:
        noisy.say "*** testDistributedAccess (7)", "n=", n,
          "db1".dump(db1),
          ""

      # Clause (15) from `aristo/README.md` check
      let c11Fil3_eq_db3RoFilter = c11Filter3.isDbEq(db3.balancer, db3, noisy)
      xCheck c11Fil3_eq_db3RoFilter:
        noisy.say "*** testDistributedAccess (8)", "n=", n,
          "db3".dump(db3),
          ""

      # Check/verify backends
      block:
        let ok = dy.checkBeOk(noisy=noisy)
        xCheck ok

      when false: # or true:
        noisy.say "*** testDistributedAccess (9)", "n=", n # , dy.dump

  true

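# A minimal invocation sketch (hypothetical wiring; the real driver lives in
# the `test_aristo` suite and feeds captured `ProofTrieData` samples):
#
#   suite "Aristo balancer":
#     test "Distributed backend access":
#       let samples: seq[ProofTrieData] = @[]   # placeholder sample data
#       check true.testBalancer(samples, "")    # empty path => memory backend
#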
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------