# nimbus-eth1/nimbus/db/aristo/aristo_journal/journal_get.nim

# nimbus-eth1
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
# http://www.apache.org/licenses/LICENSE-2.0)
# * MIT license ([LICENSE-MIT](LICENSE-MIT) or
# http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.
import
  std/options,
  eth/common,
  results,
  ".."/[aristo_desc, aristo_desc/desc_backend],
  ./journal_scheduler

# ------------------------------------------------------------------------------
# Public functions
# ------------------------------------------------------------------------------
proc journalGetInx*(
    be: BackendRef;
    fid = none(FilterID);
    earlierOK = false;
      ): Result[JournalInx,AristoError] =
  ## If there is some argument `fid`, find the filter on the journal with ID
  ## not larger than `fid` (i.e. the resulting filter must not be more
  ## recent.)
  ##
  ## If the argument `earlierOK` is passed `false`, the function succeeds only
  ## if the filter ID of the returned filter is equal to the argument `fid`.
  ##
  ## If there is no argument `fid`, the filter with the smallest filter ID
  ## (i.e. the oldest filter) is returned. In that case, the argument
  ## `earlierOK` is ignored.
  ##
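  ## A hypothetical call (the filter ID value below is illustrative only, not
  ## taken from this module) could look like:
  ##
  ## ::
  ##   let fp = be.journalGetInx(some FilterID(17), earlierOK=true).valueOr:
  ##     return err(error) # no filter with ID 17 or smaller on the journal
  ##   # `fp.fil` holds the filter found, `fp.inx` its journal position
  ##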
  if be.journal.isNil:
    return err(FilQuSchedDisabled)

  var cache = (QueueID(0),FilterRef(nil))  # Avoids double lookup for last entry
  proc qid2fid(qid: QueueID): Result[FilterID,void] =
    if qid == cache[0]:                    # Avoids double lookup for last entry
      return ok cache[1].fid
    let fil = be.getFilFn(qid).valueOr:
      return err()
    cache = (qid,fil)
    ok fil.fid

  let qid = block:
    if fid.isNone:
      # Get oldest filter
      be.journal[^1]
    else:
      # Find filter with ID not smaller than `fid`
      be.journal.le(fid.unsafeGet, qid2fid, forceEQ = not earlierOK)

  if not qid.isValid:
    return err(FilFilterNotFound)

  var fip: JournalInx
  fip.fil = block:
    if cache[0] == qid:
      cache[1]
    else:
      be.getFilFn(qid).valueOr:
        return err(error)

  fip.inx = be.journal[qid]
  if fip.inx < 0:
    return err(FilInxByQidFailed)

  ok fip

proc journalGetOverlap*(
    be: BackendRef;
    filter: FilterRef;
      ): int =
  ## This function finds the overlap of the argument `filter` with the most
  ## recent filter slots of the journal.
  ##
  ## The function returns the number of most recent journal filters that are
  ## reverted by the argument `filter`. This requires that the `src`, `trg`,
  ## and `fid` fields of the argument `filter` are properly calculated (e.g.
  ## using `journalOpsFetchSlots()`.)
  ##
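  ## As an illustrative sketch (positions and values hypothetical), assume the
  ## journal holds filters at positions `0`, `1`, `2` with position `0` the
  ## most recent:
  ##
  ## ::
  ##   # filter.trg == trg of the journal top (pos 0)   => returns 1
  ##   # filter.trg == trg of the filter at pos 2,
  ##   #   located via filter.fid                       => returns 3 (1 + 2)
  ##   # filter.src does not match the top, or no match => returns 0
  ##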
  # Check against the top-fifo entry.
  let qid = be.journal[0]
  if not qid.isValid:
    return 0

  let top = be.getFilFn(qid).valueOr:
    return 0

  # The `filter` must match the `top`
  if filter.src != top.src:
    return 0

  # Does the filter revert the first entry?
  if filter.trg == top.trg:
    return 1

  # Check against some stored filter IDs
  if filter.isValid:
    let fp = be.journalGetInx(some(filter.fid), earlierOK=true).valueOr:
      return 0
    if filter.trg == fp.fil.trg:
      return 1 + fp.inx

# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------