# nimbus-eth1
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed under either of
#  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
#    http://www.apache.org/licenses/LICENSE-2.0)
#  * MIT license ([LICENSE-MIT](LICENSE-MIT) or
#    http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.

## Aristo DB -- a Patricia Trie with labeled edges
## ===============================================
##
## These data structures allow overlaying the *Patricia Trie* with *Merkle
## Trie* hashes. See the `README.md` in the `aristo` folder for documentation.
##
## Some semantic explanations:
##
## * HashKey, NodeRef, etc. refer to the standard/legacy `Merkle Patricia Tree`
## * VertexID, VertexRef, etc. refer to the `Aristo Trie`
##

{.push raises: [].}

import
  std/[hashes, sets, tables],
  stew/keyed_queue,
  eth/common,
  results,
  ./aristo_constants,
  ./aristo_desc/[desc_error, desc_identifiers, desc_nibbles, desc_structural]

from ./aristo_desc/desc_backend
  import BackendRef
    # Not auto-exporting backend

export
  tables, aristo_constants, desc_error, desc_identifiers, desc_nibbles,
  desc_structural, keyed_queue

const
  accLruSize* = 1024 * 1024
    # LRU cache size for accounts that have storage

type
  AristoTxRef* = ref object
    ## Transaction descriptor
    db*: AristoDbRef                  ## Database descriptor
    parent*: AristoTxRef              ## Previous transaction
    txUid*: uint                      ## Unique ID among transactions
    level*: int                       ## Stack index for this transaction

  MerkleSignRef* = ref object
    ## Simple Merkle signature calculator for key-value lists
    root*: VertexID
    db*: AristoDbRef
    count*: uint
    error*: AristoError
    errKey*: Blob

  DudesRef = ref object
    ## List of peers accessing the same database. This list is lazily
    ## allocated and might be kept with a single entry, i.e. so that
    ## `{centre} == peers`.
    centre: AristoDbRef               ## Link to peer with write permission
    peers: HashSet[AristoDbRef]       ## List of all peers

  AccountKey* = distinct ref Hash256
    # `ref` version of the account path / key
    # `KeyedQueue` is inefficient for large keys, so we have to use this ref
    # workaround to not experience a memory explosion in the account cache
    # TODO rework KeyedQueue to deal with large keys and/or heterogeneous lookup

  AristoDbRef* = ref object
    ## Three tier database object supporting distributed instances.
    top*: LayerRef                    ## Database working layer, mutable
    stack*: seq[LayerRef]             ## Stashed immutable parent layers
    balancer*: LayerDeltaRef          ## Balance out concurrent backend access
    backend*: BackendRef              ## Backend database (may well be `nil`)

    txRef*: AristoTxRef               ## Latest active transaction
    txUidGen*: uint                   ## Tx-relative unique number generator
    dudes: DudesRef                   ## Related DB descriptors

    # Debugging data below, might go away in future
    xMap*: Table[HashKey,HashSet[RootedVertexID]] ## For pretty printing/debugging

    accLeaves*: KeyedQueue[AccountKey, VertexRef]
      ## Account path to payload cache - accounts are frequently accessed by
      ## account path when contracts interact with them - this cache ensures
      ## that we don't have to re-traverse the storage trie for every such
      ## interaction
      ## TODO a better solution would probably be to cache this in a type
      ## exposed to the high-level API

    stoLeaves*: KeyedQueue[AccountKey, VertexRef]
      ## Mixed account/storage path to payload cache - same as above but caches
      ## the full lookup of storage slots

  AristoDbAction* = proc(db: AristoDbRef) {.gcsafe, raises: [].}
    ## Generic callback function/closure.

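
# Orientation sketch (illustrative only, not part of the original module): a
# bare in-memory descriptor has a mutable `top` layer, an empty `stack` of
# stashed layers and neither a `backend` nor a `balancer` attached.
when isMainModule:
  let bare = AristoDbRef(top: LayerRef.init())
  doAssert bare.stack.len == 0
  doAssert bare.backend.isNil and bare.balancer.isNil
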
# ------------------------------------------------------------------------------
# Public helpers
# ------------------------------------------------------------------------------

template hash*(a: AccountKey): Hash =
  mixin hash
  hash((ref Hash256)(a)[])

template `==`*(a, b: AccountKey): bool =
  mixin `==`
  (ref Hash256)(a)[] == (ref Hash256)(b)[]

template to*(a: Hash256, T: type AccountKey): T =
  AccountKey((ref Hash256)(data: a.data))
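
# Usage sketch (illustrative only, not part of the original module):
# `AccountKey` hides the 32 byte account path behind a `ref` so that
# `KeyedQueue`/`HashSet` handle a pointer-sized key, while `hash` and `==`
# still operate on the referenced `Hash256` value.
when isMainModule:
  let
    k0 = EMPTY_ROOT_HASH.to(AccountKey)
    k1 = EMPTY_ROOT_HASH.to(AccountKey)
  doAssert k0 == k1              # compares the referenced hashes, not the refs
  doAssert hash(k0) == hash(k1)
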
template mixUp*(T: type AccountKey, accPath, stoPath: Hash256): Hash256 =
  # Insecure but fast way of mixing the values of two hashes, for the purpose
  # of quick lookups - this is certainly not a good idea for general Hash256
  # values but account paths are generated from accounts which would be hard
  # to create pre-images for, for the purpose of collisions with a particular
  # storage slot
  var v {.noinit.}: Hash256
  for i in 0..<v.data.len:
    # `+` wraps leaving all bits used
    v.data[i] = accPath.data[i] + stoPath.data[i]
  v
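
# Worked example (sketch, not from the original source): the byte-wise `+`
# wraps around, so mixing is cheap but intentionally not collision-resistant.
when isMainModule:
  var x, y: Hash256
  x.data[0] = 0xff'u8
  y.data[0] = 0x02'u8
  let mixed = AccountKey.mixUp(x, y)
  doAssert mixed.data[0] == 0x01'u8  # 0xff + 0x02 wraps to 0x01
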
func getOrVoid*[W](tab: Table[W,VertexRef]; w: W): VertexRef =
  tab.getOrDefault(w, VertexRef(nil))

func getOrVoid*[W](tab: Table[W,NodeRef]; w: W): NodeRef =
  tab.getOrDefault(w, NodeRef(nil))

func getOrVoid*[W](tab: Table[W,HashKey]; w: W): HashKey =
  tab.getOrDefault(w, VOID_HASH_KEY)

func getOrVoid*[W](tab: Table[W,RootedVertexID]; w: W): RootedVertexID =
  tab.getOrDefault(w, default(RootedVertexID))

func getOrVoid*[W](tab: Table[W,HashSet[RootedVertexID]]; w: W): HashSet[RootedVertexID] =
  tab.getOrDefault(w, default(HashSet[RootedVertexID]))

# --------

func isValid*(vtx: VertexRef): bool =
  vtx != VertexRef(nil)

func isValid*(nd: NodeRef): bool =
  nd != NodeRef(nil)

func isValid*(pid: PathID): bool =
  pid != VOID_PATH_ID

func isValid*(filter: LayerDeltaRef): bool =
  filter != LayerDeltaRef(nil)

func isValid*(root: Hash256): bool =
  root != EMPTY_ROOT_HASH

func isValid*(key: HashKey): bool =
  assert key.len != 32 or key.to(Hash256).isValid
  0 < key.len

func isValid*(vid: VertexID): bool =
  vid != VertexID(0)

func isValid*(rvid: RootedVertexID): bool =
  rvid.vid.isValid and rvid.root.isValid

func isValid*(sqv: HashSet[RootedVertexID]): bool =
  sqv.len > 0
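
# Usage sketch (illustrative only, not part of the original module): a missing
# table entry comes back as a recognisable "void" value which `isValid` then
# rejects, so call sites need no `KeyError` handling.
when isMainModule:
  var vtxTab: Table[HashKey,VertexRef]
  doAssert not vtxTab.getOrVoid(VOID_HASH_KEY).isValid

  var keyTab: Table[HashKey,HashKey]
  doAssert not keyTab.getOrVoid(VOID_HASH_KEY).isValid

  doAssert not VertexID(0).isValid
  doAssert VertexID(1).isValid
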
# ------------------------------------------------------------------------------
# Public functions, miscellaneous
# ------------------------------------------------------------------------------

# Hash set helper
func hash*(db: AristoDbRef): Hash =
  ## Table/KeyedQueue/HashSet mixin
  cast[pointer](db).hash
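
# Sketch (not from the original source): descriptors hash by object identity,
# so a `HashSet[AristoDbRef]` - as used by `DudesRef.peers` - counts distinct
# descriptor instances rather than database contents.
when isMainModule:
  let
    dbA = AristoDbRef()
    dbB = AristoDbRef()
  doAssert toHashSet([dbA, dbB, dbA]).len == 2
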
# ------------------------------------------------------------------------------
# Public functions, `dude` related
# ------------------------------------------------------------------------------

func isCentre*(db: AristoDbRef): bool =
  ## This function returns `true` if the argument `db` is the centre (see
  ## comments on `reCentre()` for details.)
  ##
  db.dudes.isNil or db.dudes.centre == db

func getCentre*(db: AristoDbRef): AristoDbRef =
  ## Get the centre descriptor among all other descriptors accessing the same
  ## backend database (see comments on `reCentre()` for details.)
  ##
  if db.dudes.isNil: db else: db.dudes.centre

proc reCentre*(db: AristoDbRef): Result[void,AristoError] =
  ## Re-focus the `db` argument descriptor so that it becomes the centre.
  ## Nothing is done if the `db` descriptor is already the centre.
  ##
  ## With several descriptors accessing the same backend database there is a
  ## single one that has write permission for the backend (regardless of
  ## whether there is a backend at all.) The descriptor entity with write
  ## permission is called *the centre*.
  ##
  ## After invoking `reCentre()`, the argument database `db` can only be
  ## destructed by `finish()` which also destructs all other descriptors
  ## accessing the same backend database. Descriptors where `isCentre()`
  ## returns `false` must be destructed individually with `forget()`.
  ##
  if not db.dudes.isNil:
    db.dudes.centre = db
  ok()

proc fork*(
    db: AristoDbRef;
    noTopLayer = false;
    noFilter = false;
      ): Result[AristoDbRef,AristoError] =
  ## This function creates a new empty descriptor accessing the same backend
  ## (if any) database as the argument `db`. This new descriptor joins the
  ## list of descriptors accessing the same backend database.
  ##
  ## After use, any unused non centre descriptor should be destructed via
  ## `forget()`. Not doing so will not only tie up memory resources but might
  ## also cost computing resources for maintaining and updating backend
  ## filters when writing to the backend database.
  ##
  ## If the argument `noFilter` is set `true` the function will fork directly
  ## off the backend database and ignore any filter.
  ##
  ## If the argument `noTopLayer` is set `true` the function will provide an
  ## uninitialised and inconsistent (!) descriptor object without top layer.
  ## This setting avoids some database lookup for cases where the top layer
  ## is redefined anyway.
  ##
  # Make sure that there is a dudes list
  if db.dudes.isNil:
    db.dudes = DudesRef(centre: db, peers: @[db].toHashSet)

let clone = AristoDbRef(
    dudes: db.dudes,
    backend: db.backend)

  if not noFilter:
    clone.balancer = db.balancer # Ref is ok here (filters are immutable)

  if not noTopLayer:
    clone.top = LayerRef.init()
    if not db.balancer.isNil:
      clone.top.delta.vTop = db.balancer.vTop
    else:
      let rc = clone.backend.getTuvFn()
      if rc.isOk:
        clone.top.delta.vTop = rc.value
      elif rc.error != GetTuvNotFound:
        return err(rc.error)

  # Add to peer list of clones
  db.dudes.peers.incl clone

  ok clone

iterator forked*(db: AristoDbRef): AristoDbRef =
  ## Iterate over all non centre descriptors (see comments on `reCentre()`
  ## for details.)
  if not db.dudes.isNil:
    for dude in db.getCentre.dudes.peers.items:
      if dude != db.dudes.centre:
        yield dude

func nForked*(db: AristoDbRef): int =
  ## Returns the number of non centre descriptors (see comments on `reCentre()`
  ## for details.) This function is a fast version of `db.forked.toSeq.len`.
  if not db.dudes.isNil:
    return db.dudes.peers.len - 1

proc forget*(db: AristoDbRef): Result[void,AristoError] =
  ## Destruct the non centre argument `db` descriptor (see comments on
  ## `reCentre()` for details.)
  ##
  ## A non centre descriptor should always be destructed after use (see also
  ## comments on `fork()`.)
  ##
  if db.isCentre:
    err(DescNotAllowedOnCentre)
  elif db notin db.dudes.peers:
    err(DescStaleDescriptor)
  else:
    db.dudes.peers.excl db # Unlink argument `db` from peers list
    ok()
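
# Life-cycle sketch (illustrative only, assuming an already initialised `db`
# obtained elsewhere): a clone created with `fork()` joins the peer list, is
# never the centre (see `reCentre()`), and must be released with `forget()`.
when isMainModule:
  proc forkRoundTrip(db: AristoDbRef): Result[void,AristoError] =
    let clone = ? db.fork()          # new descriptor, same backend
    doAssert not clone.isCentre()
    doAssert 0 < db.nForked()
    ? clone.forget()                 # drop the clone from the peer list again
    ok()
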
proc forgetOthers*(db: AristoDbRef): Result[void,AristoError] =
  ## For the centre argument `db` descriptor (see comments on `reCentre()`
  ## for details), destruct all other descriptors accessing the same backend.
  ##
  if not db.dudes.isNil:
    if db.dudes.centre != db:
      return err(DescMustBeOnCentre)

    db.dudes = DudesRef(nil)
  ok()

# ------------------------------------------------------------------------------
# Public helpers
# ------------------------------------------------------------------------------

iterator rstack*(db: AristoDbRef): LayerRef =
  # Stack in reverse order
  for i in 0..<db.stack.len:
    yield db.stack[db.stack.len - i - 1]

proc deltaAtLevel*(db: AristoDbRef, level: int): LayerDeltaRef =
  if level == 0:
    db.top.delta
  elif level > 0:
    doAssert level <= db.stack.len
    db.stack[^level].delta
  elif level == -1:
    doAssert db.balancer != nil
    db.balancer
  elif level == -2:
    nil
  else:
    raiseAssert "Unknown level " & $level
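
# Level convention sketch (assumption, mirroring the branches above): level 0
# is the mutable `top` layer, 1..stack.len addresses stashed layers starting
# with the most recent one, -1 is the `balancer` and -2 the backend (which has
# no in-memory delta, hence `nil`).
when isMainModule:
  let demoDb = AristoDbRef(top: LayerRef.init())
  doAssert demoDb.deltaAtLevel(0) == demoDb.top.delta
  doAssert demoDb.deltaAtLevel(-2).isNil
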
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------