Aristo uses pre-classified tree types (#2385)

* Remove unused `merge*()` functions (for production)

details:
  Some functionality moved to test suite

* Make sure that exactly the `AccountData` leaf type is used on VertexID(1)

* clean up payload type

* Provide dedicated functions for merging accounts and storage trees
  (see the sketch after this list)

why:
  Storage trees are always linked to an account, so there is no need
  for an application to fiddle about (e.g. creating, recycling) with
  storage tree vertex IDs.

* CoreDb: Disable tracer functionality

why:
  Must be updated to accommodate new/changed `Aristo` functions.

* CoreDb: Use new `mergeXXX()` functions

why:
  Makes explicit vertex ID management obsolete for creating new
  storage trees.

* Remove `mergePayload()` and other cruft from API, `aristo_merge`, etc.

* clean up merge functions

details:
  The merge implementation `mergePayloadImpl()` no longer needs to be
  fully generic, as all edge cases are covered by the specialised
  functions `mergeAccountPayload()`, `mergeGenericData()`, and
  `mergeStorageData()`.

* No tracer available at the moment, so disable offending tests
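
As a quick orientation, a minimal sketch of the new call pattern (the
signatures follow `aristo_merge` as changed below; the module paths,
`accKey`, the slot data, and the `accPath` handling are illustrative only):

  import
    stew/endians2,
    ./db/aristo/[aristo_desc, aristo_init, aristo_merge, aristo_path]

  let db = AristoDbRef.init MemBackendRef      # in-memory top layer

  # Account leaf: always merged into the `VertexID(1)` sub-tree.
  let
    accKey = 1u64.toBytesBE                    # even nibbled byte path
    acc = AristoAccount(nonce: 1.AccountNonce)
  doAssert db.mergeAccountPayload(accKey, acc).isOk

  # Generic data: any root above `VertexID(1)` and below `LEAST_FREE_VID`.
  doAssert db.mergeGenericData(VertexID(2), accKey, @[42.byte]).isOk

  # Storage slot: the storage tree root is derived from the account via
  # `accPath` and created on demand, so no explicit `vidFetch()` anymore.
  # Returns the new root ID, or `VertexID(0)` if the tree existed already.
  let accPath = accKey.pathToTag.expect "valid path"
  doAssert db.mergeStorageData(accKey, @[7.byte], accPath).isOk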
Jordan Hrycaj 2024-06-18 11:14:02 +00:00 committed by GitHub
parent e3d14bd921
commit 51f02090b8
24 changed files with 579 additions and 592 deletions

nimbus/TODO-TRACER.md (new file)
@ -0,0 +1,7 @@
The `handlers_tracer` driver from the `CoreDb` module needs to be re-factored.
This module will slightly change its mode of operation and will run as a genuine
logger. The previously available restore features were ill conceived, an
attempt to stay as close as possible to the legacy tracer. If restoring is
desired, the tracer will need to run inside a transaction (which it does
anyway.)


@ -22,11 +22,10 @@ Contents
+ [4.2 Extension record serialisation](#ch4x2)
+ [4.3 Leaf record serialisation](#ch4x3)
+ [4.4 Leaf record payload serialisation for account data](#ch4x4)
+ [4.5 Leaf record payload serialisation for RLP encoded data](#ch4x5)
+ [4.6 Leaf record payload serialisation for unstructured data](#ch4x6)
+ [4.7 Serialisation of the top used vertex ID](#ch4x7)
+ [4.8 Serialisation of a last saved state record](#ch4x8)
+ [4.9 Serialisation record identifier identification](#ch4x9)
+ [4.5 Leaf record payload serialisation for unstructured data](#ch4x5)
+ [4.6 Serialisation of the top used vertex ID](#ch4x6)
+ [4.7 Serialisation of a last saved state record](#ch4x7)
+ [4.8 Serialisation record identifier identification](#ch4x8)
* [5. *Patricia Trie* implementation notes](#ch5)
+ [5.1 Database descriptor representation](#ch5x1)
@ -327,19 +326,7 @@ fields. So, joining the *4 x bitmask(2)* word array to a single byte, the
maximum value of that byte is 0x99.
<a name="ch4x5"></a>
### 4.5 Leaf record payload serialisation for RLP encoded data
0 +--+ .. --+
| | | -- data, at least one byte
+--+ .. --+
| | -- marker(8), 0x6a
+--+
where
marker(8) is the eight bit array *0110-1010*
<a name="ch4x6"></a>
### 4.6 Leaf record payload serialisation for unstructured data
### 4.5 Leaf record payload serialisation for unstructured data
0 +--+ .. --+
| | | -- data, at least one byte
@ -350,8 +337,8 @@ maximum value of that byte is 0x99.
where
marker(8) is the eight bit array *0110-1011*
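
For illustration, a minimal sketch of this encoding, modelled on the
`RawData` branch of `blobifyTo()` shown further down in this diff (plain
Nim, with `seq[byte]` standing in for `Blob`):

  # Sketch: serialise an unstructured (`RawData`) payload.
  proc blobifyRaw(data: openArray[byte]): seq[byte] =
    result.add data          # data, at least one byte
    result.add 0x6b.byte     # marker(8), i.e. bit array 0110-1011

  doAssert blobifyRaw([1.byte, 2, 3]) == @[1.byte, 2, 3, 0x6b]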
<a name="ch4x7"></a>
### 4.7 Serialisation of the top used vertex ID
<a name="ch4x6"></a>
### 4.6 Serialisation of the top used vertex ID
0 +--+--+--+--+--+--+--+--+
| | -- last used vertex IDs
indicates that all ID values greater than or equal to this value are free and can
be used as vertex IDs. If this record is missing, the value *(1u64,0x01)* is
assumed, i.e. the list with the single vertex ID *1*.
<a name="ch4x8"></a>
### 4.8 Serialisation of a last saved state record
<a name="ch4x7"></a>
### 4.7 Serialisation of a last saved state record
0 +--+--+--+--+--+ .. --+--+ .. --+
| | -- 32 bytes source state hash
@ -383,8 +370,8 @@ assumed, i.e. the list with the single vertex ID *1*.
where
marker(8) is the eight bit array *0111-1111*
<a name="ch4x10"></a>
### 4.9 Serialisation record identifier tags
<a name="ch4x8"></a>
### 4.8 Serialisation record identifier tags
Any of the above records can be uniquely identified by its trailing marker,
i.e. the last byte of the serialised record.
@ -395,10 +382,9 @@ i.e. the last byte of a serialised record.
| 10xx xxxx | 0x80 + x(6) | Extension record | [4.2](#ch4x2) |
| 11xx xxxx | 0xC0 + x(6) | Leaf record | [4.3](#ch4x3) |
| 0xxx 0yyy | (x(3)<<4) + y(3) | Account payload | [4.4](#ch4x4) |
| 0110 1010 | 0x6a | RLP encoded payload | [4.5](#ch4x5) |
| 0110 1011 | 0x6b | Unstructured payload | [4.6](#ch4x6) |
| 0111 1100 | 0x7c | Last used vertex ID | [4.7](#ch4x7) |
| 0111 1111 | 0x7f | Last saved state | [4.8](#ch4x8) |
| 0110 1011 | 0x6b | Unstructured payload | [4.5](#ch4x5) |
| 0111 1100 | 0x7c | Last used vertex ID | [4.6](#ch4x6) |
| 0111 1111 | 0x7f | Last saved state | [4.7](#ch4x7) |
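
For illustration, a marker dispatcher sketched from the table above
(record types absent from this excerpt are reported as `Unknown`):

  type RecordKind = enum
    Unknown, ExtensionRecord, LeafRecord, AccountPayload,
    UnstructuredPayload, LastUsedVertexID, LastSavedState

  proc classify(serialised: openArray[byte]): RecordKind =
    # Check the fixed one-byte markers first: they would otherwise
    # also match the generic bit patterns further below.
    let m = int serialised[^1]
    if m == 0x6b: UnstructuredPayload
    elif m == 0x7c: LastUsedVertexID
    elif m == 0x7f: LastSavedState
    elif (m shr 6) == 0b10: ExtensionRecord        # 10xx xxxx
    elif (m shr 6) == 0b11: LeafRecord             # 11xx xxxx
    elif (m and 0b1000_1000) == 0: AccountPayload  # 0xxx 0yyy
    else: Unknown

  doAssert classify(@[1.byte, 2, 0x7f]) == LastSavedState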
<a name="ch5"></a>
5. *Patricia Trie* implementation notes


@ -20,7 +20,7 @@ import
./aristo_init/memory_db,
"."/[aristo_delete, aristo_desc, aristo_fetch, aristo_get, aristo_hashify,
aristo_hike, aristo_init, aristo_merge, aristo_path, aristo_profile,
aristo_serialise, aristo_tx, aristo_vid]
aristo_serialise, aristo_tx]
export
AristoDbProfListRef
@ -100,7 +100,8 @@ type
): Result[PayloadRef,(VertexID,AristoError)]
{.noRaise.}
## Cascaded attempt to traverse the `Aristo Trie` and fetch the value
## of a leaf vertex. This function is complementary to `mergePayload()`.
## of a leaf vertex. This function is complementary to some `mergeXXX()`
## function.
AristoApiFindTxFn* =
proc(db: AristoDbRef;
@ -230,34 +231,45 @@ type
## `reCentre()` for details.) This function is a fast version of
## `db.forked.toSeq.len`.
AristoApiMergeFn* =
AristoApiMergeAccountPayloadFn* =
proc(db: AristoDbRef;
root: VertexID;
path: openArray[byte];
data: openArray[byte];
accPath: PathID;
accPath: openArray[byte];
accPayload: AristoAccount;
): Result[bool,AristoError]
{.noRaise.}
## Variant of `mergePayload()` where the `data` argument will be
## converted to a `RawBlob` type `PayloadRef` value.
AristoApiMergePayloadFn* =
proc(db: AristoDbRef;
root: VertexID;
path: openArray[byte];
payload: PayloadRef;
accPath = VOID_PATH_ID;
): Result[bool,AristoError]
{.noRaise.}
## Merge the argument key-value-pair `(path,payload)` into the top level
## vertex table of the database `db`.
## Merge the key-value-pair argument `(accKey,accPayload)` as an account
## ledger value, i.e. into the sub-tree starting at `VertexID(1)`.
##
## For a `root` argument with `VertexID` greater than `LEAST_FREE_VID`,
## the sub-tree generated by `payload.root` is considered a storage trie
## linked to an account leaf referred to by a valid `accPath` (i.e.
## different from `VOID_PATH_ID`.) In that case, an account must exist.
## If there is a payload of type `AccountData`, its `storageID` field must
## be unset or equal to the `payload.root` vertex ID.
## The payload argument `accPayload` must have the `storageID` field
## either unset/invalid or referring to an existing vertex which will be
## assumed to be a storage tree.
AristoApiMergeGenericDataFn* =
proc( db: AristoDbRef;
root: VertexID;
path: openArray[byte];
data: openArray[byte];
): Result[bool,AristoError]
{.noRaise.}
## Variant of `mergeXXX()` for generic sub-trees, i.e. for arguments
## `root` greater than `VertexID(1)` and smaller than `LEAST_FREE_VID`.
AristoApiMergeStorageDataFn* =
proc(db: AristoDbRef;
stoKey: openArray[byte];
stoData: openArray[byte];
accPath: PathID;
): Result[VertexID,AristoError]
{.noRaise.}
## Merge the key-value-pair argument `(stoKey,stoData)` as a storage
## value. This means the root vertex will be derived from the `accPath`
## argument, the Patricia tree path for the storage tree is given by
## `stoKey` and the leaf value with the payload will be stored as a
## `PayloadRef` object of type `RawData`.
##
## If the storage tree does not exist yet it will be created and the
## payload leaf accessed by `accPath` will be updated with the storage
## tree vertex ID.
AristoApiPathAsBlobFn* =
proc(tag: PathID;
@ -347,28 +359,6 @@ type
{.noRaise.}
## Getter, returns top level transaction if there is any.
AristoApiVidFetchFn* =
proc(db: AristoDbRef;
pristine = false;
): VertexID
{.noRaise.}
## Recycle or create a new `VertexID`. Reusable vertex *ID*s are kept
## in a list where the top entry *ID* has the property that any other
## *ID* larger is also not used on the database.
##
## The function prefers to return recycled vertex *ID*s if there are
## any. When the argument `pristine` is set `true`, the function
## guarantees to return a non-recycled, brand new vertex *ID* which
## is the preferred mode when creating leaf vertices.
AristoApiVidDisposeFn* =
proc(db: AristoDbRef;
vid: VertexID;
) {.noRaise.}
## Recycle the argument `vtxID` which is useful after deleting entries
## from the vertex table to keep the `VertexID` type key values
## small.
AristoApiRef* = ref AristoApiObj
AristoApiObj* = object of RootObj
## Useful set of `Aristo` functions that can be filtered, stacked etc.
@ -388,8 +378,9 @@ type
isTop*: AristoApiIsTopFn
level*: AristoApiLevelFn
nForked*: AristoApiNForkedFn
merge*: AristoApiMergeFn
mergePayload*: AristoApiMergePayloadFn
mergeAccountPayload*: AristoApiMergeAccountPayloadFn
mergeGenericData*: AristoApiMergeGenericDataFn
mergeStorageData*: AristoApiMergeStorageDataFn
pathAsBlob*: AristoApiPathAsBlobFn
persist*: AristoApiPersistFn
reCentre*: AristoApiReCentreFn
@ -397,8 +388,6 @@ type
serialise*: AristoApiSerialiseFn
txBegin*: AristoApiTxBeginFn
txTop*: AristoApiTxTopFn
vidFetch*: AristoApiVidFetchFn
vidDispose*: AristoApiVidDisposeFn
AristoApiProfNames* = enum
@ -421,8 +410,9 @@ type
AristoApiProfIsTopFn = "isTop"
AristoApiProfLevelFn = "level"
AristoApiProfNForkedFn = "nForked"
AristoApiProfMergeFn = "merge"
AristoApiProfMergePayloadFn = "mergePayload"
AristoApiProfMergeAccountPayloadFn = "mergeAccountPayload"
AristoApiProfMergeGenericDataFn = "mergeGenericData"
AristoApiProfMergeStorageDataFn = "mergeStorageData"
AristoApiProfPathAsBlobFn = "pathAsBlob"
AristoApiProfPersistFn = "persist"
AristoApiProfReCentreFn = "reCentre"
@ -430,8 +420,6 @@ type
AristoApiProfSerialiseFn = "serialise"
AristoApiProfTxBeginFn = "txBegin"
AristoApiProfTxTopFn = "txTop"
AristoApiProfVidFetchFn = "vidFetch"
AristoApiProfVidDisposeFn = "vidDispose"
AristoApiProfBeGetVtxFn = "be/getVtx"
AristoApiProfBeGetKeyFn = "be/getKey"
@ -470,8 +458,9 @@ when AutoValidateApiHooks:
doAssert not api.isTop.isNil
doAssert not api.level.isNil
doAssert not api.nForked.isNil
doAssert not api.merge.isNil
doAssert not api.mergePayload.isNil
doAssert not api.mergeAccountPayload.isNil
doAssert not api.mergeGenericData.isNil
doAssert not api.mergeStorageData.isNil
doAssert not api.pathAsBlob.isNil
doAssert not api.persist.isNil
doAssert not api.reCentre.isNil
@ -479,8 +468,6 @@ when AutoValidateApiHooks:
doAssert not api.serialise.isNil
doAssert not api.txBegin.isNil
doAssert not api.txTop.isNil
doAssert not api.vidFetch.isNil
doAssert not api.vidDispose.isNil
proc validate(prf: AristoApiProfRef) =
prf.AristoApiRef.validate
@ -523,8 +510,9 @@ func init*(api: var AristoApiObj) =
api.isTop = isTop
api.level = level
api.nForked = nForked
api.merge = merge
api.mergePayload = mergePayload
api.mergeAccountPayload = mergeAccountPayload
api.mergeGenericData = mergeGenericData
api.mergeStorageData = mergeStorageData
api.pathAsBlob = pathAsBlob
api.persist = persist
api.reCentre = reCentre
@ -532,8 +520,6 @@ func init*(api: var AristoApiObj) =
api.serialise = serialise
api.txBegin = txBegin
api.txTop = txTop
api.vidFetch = vidFetch
api.vidDispose = vidDispose
when AutoValidateApiHooks:
api.validate
@ -559,17 +545,16 @@ func dup*(api: AristoApiRef): AristoApiRef =
isTop: api.isTop,
level: api.level,
nForked: api.nForked,
merge: api.merge,
mergePayload: api.mergePayload,
mergeAccountPayload: api.mergeAccountPayload,
mergeGenericData: api.mergeGenericData,
mergeStorageData: api.mergeStorageData,
pathAsBlob: api.pathAsBlob,
persist: api.persist,
reCentre: api.reCentre,
rollback: api.rollback,
serialise: api.serialise,
txBegin: api.txBegin,
txTop: api.txTop,
vidFetch: api.vidFetch,
vidDispose: api.vidDispose)
txTop: api.txTop)
when AutoValidateApiHooks:
api.validate
@ -680,16 +665,20 @@ func init*(
AristoApiProfNForkedFn.profileRunner:
result = api.nForked(a)
profApi.merge =
proc(a: AristoDbRef; b: VertexID; c,d: openArray[byte]; e: PathID): auto =
AristoApiProfMergeFn.profileRunner:
result = api.merge(a, b, c, d ,e)
profApi.mergeAccountPayload =
proc(a: AristoDbRef; b, c: openArray[byte]): auto =
AristoApiProfMergeAccountPayloadFn.profileRunner:
result = api.mergeAccountPayload(a, b, c)
profApi.mergePayload =
proc(a: AristoDbRef; b: VertexID; c: openArray[byte]; d: PayloadRef;
e = VOID_PATH_ID): auto =
AristoApiProfMergePayloadFn.profileRunner:
result = api.mergePayload(a, b, c, d ,e)
profApi.mergeGenericData =
proc(a: AristoDbRef; b: VertexID, c, d: openArray[byte]): auto =
AristoApiProfMergeGenericDataFn.profileRunner:
result = api.mergeGenericData(a, b, c, d)
profApi.mergeStorageData =
proc(a: AristoDbRef; b, c: openArray[byte]; d: PathID): auto =
AristoApiProfMergeStorageDataFn.profileRunner:
result = api.mergeStorageData(a, b, c, d)
profApi.pathAsBlob =
proc(a: PathID): auto =
@ -726,16 +715,6 @@ func init*(
AristoApiProfTxTopFn.profileRunner:
result = api.txTop(a)
profApi.vidFetch =
proc(a: AristoDbRef; b = false): auto =
AristoApiProfVidFetchFn.profileRunner:
result = api.vidFetch(a, b)
profApi.vidDispose =
proc(a: AristoDbRef;b: VertexID) =
AristoApiProfVidDisposeFn.profileRunner:
api.vidDispose(a, b)
let beDup = be.dup()
if beDup.isNil:
profApi.be = be


@ -46,9 +46,6 @@ proc blobifyTo*(pyl: PayloadRef, data: var Blob) =
of RawData:
data &= pyl.rawBlob
data &= [0x6b.byte]
of RlpData:
data &= pyl.rlpBlob
data &= @[0x6a.byte]
of AccountData:
var mask: byte
@ -194,9 +191,6 @@ proc deblobifyTo(
if mask == 0x6b: # unstructured payload
pyl = PayloadRef(pType: RawData, rawBlob: data[0 .. ^2])
return ok()
if mask == 0x6a: # RLP encoded payload
pyl = PayloadRef(pType: RlpData, rlpBlob: data[0 .. ^2])
return ok()
var
pAcc = PayloadRef(pType: AccountData)


@ -210,8 +210,6 @@ proc ppPayload(p: PayloadRef, db: AristoDbRef): string =
case p.pType:
of RawData:
result &= p.rawBlob.toHex.squeeze(hex=true)
of RlpData:
result &= "[#" & p.rlpBlob.toHex.squeeze(hex=true) & "]"
of AccountData:
result = "("
result &= ($p.account.nonce).stripZeros(toExp=true) & ","


@ -86,47 +86,38 @@ type
# Merge leaf `merge()`
MergeAssemblyFailed # Ooops, internal error
MergeAccRootNotAccepted
MergeStoRootNotAccepted
MergeBranchGarbledNibble
MergeBranchGarbledTail
MergeBranchLinkLeafGarbled
MergeBranchLinkLockedKey
MergeBranchLinkProofModeLock
MergeBranchLinkVtxPfxTooShort
MergeBranchProofModeLock
MergeBranchRootExpected
MergeLeafCantChangePayloadType
MergeHashKeyDiffersFromCached
MergeHashKeyInvalid
MergeLeafCantChangeStorageID
MergeLeafGarbledHike
MergeLeafPathCachedAlready
MergeLeafPathOnBackendAlready
MergeLeafProofModeLock
MergeLeafTypeAccountRequired
MergeLeafTypeRawDataRequired
MergeNodeAccountPayloadError
MergeNodeVidMissing
MergeNodeVtxDiffersFromExisting
MergeNonBranchProofModeLock
MergeProofInitMissing
MergeRevVidMustHaveBeenCached
MergeRootArgsIncomplete
MergeRootBranchLinkBusy
MergeRootMissing
MergeHashKeyInvalid
MergeHashKeyDiffersFromCached
MergeHashKeyRevLookUpGarbled
MergeRootKeyDiffersForVid
MergeRootKeyInvalid
MergeRootKeyMissing
MergeRootKeyNotInProof
MergeRootKeysMissing
MergeRootKeysOverflow
MergeProofInitMissing
MergeRevVidMustHaveBeenCached
MergeNodeVtxDiffersFromExisting
MergeNodeVidMissing
MergeNodeAccountPayloadError
MergeRootKeyDiffersForVid
MergeNodeVtxDuplicates
MergeRootKeyMissing
MergeRootArgsIncomplete
# Utils
UtilsAccPathMissing
UtilsAccPathWithoutLeaf
UtilsAccUnaccessible
UtilsAccWrongStorageRoot
UtilsStoRootMissing
MergeRootVidMissing
# Update `Merkle` hashes `hashify()`
HashifyVtxUnresolved
@ -289,6 +280,14 @@ type
AccNodeUnsupported
PayloadTypeUnsupported
UtilsAccPathMissing
UtilsAccPathWithoutLeaf
UtilsAccInaccessible
UtilsAccWrongStorageRoot
UtilsStoRootInaccessible
UtilsStoRootMissing
UtilsAccLeafPayloadExpected
# Miscellaneous handy helpers
AccRootUnacceptable
MptRootUnacceptable


@ -41,15 +41,15 @@ type
PayloadType* = enum
## Type of leaf data.
RawData ## Generic data
RlpData ## Marked RLP encoded
AccountData ## `Aristo account` with vertex IDs links
PayloadRef* = ref object of RootRef
## The payload type depends on the sub-tree used. The `VertexID(1)` rooted
## sub-tree only has `AccountData` type payload, while all other sub-trees
## have `RawData` payload.
case pType*: PayloadType
of RawData:
rawBlob*: Blob ## Opaque data, default value
of RlpData:
rlpBlob*: Blob ## Opaque data marked RLP encoded
of AccountData:
account*: AristoAccount
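
For illustration, the two remaining constructor forms, as used by the new
merge functions elsewhere in this diff (a sketch; the values are made up):

  # Sketch: the payload variants left after dropping `RlpData`.
  let
    rawPyl = PayloadRef(pType: RawData, rawBlob: @[42.byte])
    accPyl = PayloadRef(pType: AccountData,
                        account: AristoAccount(nonce: 7.AccountNonce))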
@ -162,9 +162,6 @@ proc `==`*(a, b: PayloadRef): bool =
of RawData:
if a.rawBlob != b.rawBlob:
return false
of RlpData:
if a.rlpBlob != b.rlpBlob:
return false
of AccountData:
if a.account != b.account:
return false
@ -219,10 +216,6 @@ func dup*(pld: PayloadRef): PayloadRef =
PayloadRef(
pType: RawData,
rawBlob: pld.rawBlob)
of RlpData:
PayloadRef(
pType: RlpData,
rlpBlob: pld.rlpBlob)
of AccountData:
PayloadRef(
pType: AccountData,


@ -25,172 +25,121 @@
{.push raises: [].}
import
std/[strutils, sets, tables, typetraits],
eth/[common, trie/nibbles],
std/typetraits,
eth/common,
results,
"."/[aristo_desc, aristo_get, aristo_hike, aristo_layers,
aristo_path, aristo_utils],
"."/[aristo_desc, aristo_layers, aristo_utils, aristo_vid],
./aristo_merge/[merge_payload_helper, merge_proof]
export
merge_proof
# ------------------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------------------
proc to(
rc: Result[Hike,AristoError];
T: type Result[bool,AristoError];
): T =
## Return code converter
if rc.isOk:
ok true
elif rc.error in {MergeLeafPathCachedAlready,
MergeLeafPathOnBackendAlready}:
ok false
else:
err(rc.error)
const
MergeNoAction = {MergeLeafPathCachedAlready, MergeLeafPathOnBackendAlready}
# ------------------------------------------------------------------------------
# Public functions
# ------------------------------------------------------------------------------
proc mergePayload*(
proc mergeAccountPayload*(
db: AristoDbRef; # Database, top layer
leafTie: LeafTie; # Leaf item to add to the database
payload: PayloadRef; # Payload value
accPath: PathID; # Needed for accounts payload
): Result[Hike,AristoError] =
## Merge the argument `leafTie` key-value-pair into the top level vertex
## table of the database `db`. The field `path` of the `leafTie` argument is
## used to address the leaf vertex with the payload. It is stored or updated
## on the database accordingly.
accKey: openArray[byte]; # Even nibbled byte path
accPayload: AristoAccount; # Payload value
): Result[bool,AristoError] =
## Merge the key-value-pair argument `(accKey,accPayload)` as an account
## ledger value, i.e. into the sub-tree starting at `VertexID(1)`.
##
## If the `leafTie` argument refers to an account entry (i.e. the
## `leafTie.root` equals `VertexID(1)`) and the leaf entry already has an
## `AccountData` payload, its `storageID` field must be the same as the one
## on the database. The `accPath` argument will be ignored.
## The payload argument `accPayload` must have the `storageID` field either
## unset/invalid or referring to an existing vertex which will be assumed
## to be a storage tree.
##
## Otherwise, if the `root` argument belongs to a well known sub trie (i.e.
## it does not exceed `LEAST_FREE_VID`) the `accPath` argument is ignored
## and the entry will just be merged.
##
## Otherwise, a valid `accPath` (i.e. different from `VOID_PATH_ID`.) is
## required relating to an account leaf entry (starting at `VertexID(1)`).
## If the payload of that leaf entry is not of type `AccountData` it is
## ignored.
##
## Otherwise, if the sub-trie where the `leafTie` is to be merged into does
## not exist yet, the `storageID` field of the `accPath` leaf must have been
## reset to `storageID(0)` and will be updated accordingly on the database.
##
## Otherwise its `storageID` field must be equal to the `leafTie.root` vertex
## ID. So vertices can be marked for Merkle hash update.
##
let wp = block:
if leafTie.root.distinctBase < LEAST_FREE_VID:
if not leafTie.root.isValid:
return err(MergeRootMissing)
VidVtxPair()
else:
let rc = db.registerAccount(leafTie.root, accPath)
if rc.isErr:
return err(rc.error)
else:
rc.value
let hike = leafTie.hikeUp(db).to(Hike)
var okHike: Hike
if 0 < hike.legs.len:
case hike.legs[^1].wp.vtx.vType:
of Branch:
okHike = ? db.mergePayloadTopIsBranchAddLeaf(hike, payload)
of Leaf:
if 0 < hike.tail.len: # `Leaf` vertex problem?
return err(MergeLeafGarbledHike)
okHike = ? db.mergePayloadUpdate(hike, leafTie, payload)
of Extension:
okHike = ? db.mergePayloadTopIsExtAddLeaf(hike, payload)
let
pyl = PayloadRef(pType: AccountData, account: accPayload)
rc = db.mergePayloadImpl(VertexID(1), accKey, pyl, VidVtxPair())
if rc.isOk:
ok true
elif rc.error in MergeNoAction:
ok false
else:
# Empty hike
let rootVtx = db.getVtx hike.root
if rootVtx.isValid:
okHike = ? db.mergePayloadTopIsEmptyAddLeaf(hike,rootVtx, payload)
err(rc.error)
proc mergeGenericData*(
db: AristoDbRef; # Database, top layer
root: VertexID; # MPT state root
path: openArray[byte]; # Leaf item to add to the database
data: openArray[byte]; # Raw data payload value
): Result[bool,AristoError] =
## Variant of `mergeXXX()` for generic sub-trees, i.e. for arguments
## `root` greater than `VertexID(1)` and smaller than `LEAST_FREE_VID`.
##
# Verify that `root` is neither an accounts tree nor a storage tree.
if not root.isValid:
return err(MergeRootVidMissing)
elif root == VertexID(1):
return err(MergeAccRootNotAccepted)
elif LEAST_FREE_VID <= root.distinctBase:
return err(MergeStoRootNotAccepted)
let
pyl = PayloadRef(pType: RawData, rawBlob: @data)
rc = db.mergePayloadImpl(root, path, pyl, VidVtxPair())
if rc.isOk:
ok true
elif rc.error in MergeNoAction:
ok false
else:
err(rc.error)
proc mergeStorageData*(
db: AristoDbRef; # Database, top layer
stoKey: openArray[byte]; # Storage data path (aka key)
stoData: openArray[byte]; # Storage data payload value
accPath: PathID; # Needed for accounts payload
): Result[VertexID,AristoError] =
## Merge the key-value-pair argument `(stoKey,stoData)` as a storage value.
## This means the root vertex will be derived from the `accPath` argument,
## the Patricia tree path for the storage tree is given by `stoKey` and the
## leaf value with the payload will be stored as a `PayloadRef` object of
## type `RawData`.
##
## If the storage tree does not exist yet it will be created and the
## payload leaf accessed by `accPath` will be updated with the storage
## tree vertex ID.
##
## The function returns the new vertex ID if a new storage tree was created,
## otherwise `VertexID(0)`.
##
let
wpAcc = ? db.registerAccountForUpdate accPath
stoID = wpAcc.vtx.lData.account.storageID
# Provide new storage ID when needed
useID = if stoID.isValid: stoID else: db.vidFetch()
# Call merge
pyl = PayloadRef(pType: RawData, rawBlob: @stoData)
rc = db.mergePayloadImpl(useID, stoKey, pyl, wpAcc)
if rc.isOk:
if stoID.isValid:
return ok VertexID(0)
else:
# Bootstrap for existing root ID
let wp = VidVtxPair(
vid: hike.root,
vtx: VertexRef(
vType: Leaf,
lPfx: leafTie.path.to(NibblesSeq),
lData: payload))
db.setVtxAndKey(hike.root, wp.vid, wp.vtx)
okHike = Hike(root: wp.vid, legs: @[Leg(wp: wp, nibble: -1)])
# Make sure that there is an account that refers to that storage trie
let leaf = wpAcc.vtx.dup # Dup on modify
leaf.lData.account.storageID = useID
db.layersPutVtx(VertexID(1), wpAcc.vid, leaf)
db.layersResKey(VertexID(1), wpAcc.vid)
return ok useID
# Double check the result until the code is more reliable
block:
let rc = okHike.to(NibblesSeq).pathToTag
if rc.isErr or rc.value != leafTie.path:
return err(MergeAssemblyFailed) # Ooops
elif rc.error in MergeNoAction:
assert stoID.isValid # debugging only
return ok VertexID(0)
# Make sure that there is an account that refers to that storage trie
if wp.vid.isValid and not wp.vtx.lData.account.storageID.isValid:
let leaf = wp.vtx.dup # Dup on modify
leaf.lData.account.storageID = leafTie.root
db.layersPutVtx(VertexID(1), wp.vid, leaf)
db.layersResKey(VertexID(1), wp.vid)
ok okHike
proc mergePayload*(
db: AristoDbRef; # Database, top layer
root: VertexID; # MPT state root
path: openArray[byte]; # Even nibbled byte path
payload: PayloadRef; # Payload value
accPath = VOID_PATH_ID; # Needed for accounts payload
): Result[bool,AristoError] =
## Variant of `merge()` for `(root,path)` arguments instead of a `LeafTie`
## object.
let lty = LeafTie(root: root, path: ? path.pathToTag)
db.mergePayload(lty, payload, accPath).to(typeof result)
proc merge*(
db: AristoDbRef; # Database, top layer
root: VertexID; # MPT state root
path: openArray[byte]; # Leaf item to add to the database
data: openArray[byte]; # Raw data payload value
accPath: PathID; # Needed for accounts payload
): Result[bool,AristoError] =
## Variant of `merge()` for `(root,path)` arguments instead of a `LeafTie`.
## The argument `data` is stored as-is as a `RawData` payload value.
let pyl = PayloadRef(pType: RawData, rawBlob: @data)
db.mergePayload(root, path, pyl, accPath)
proc mergeAccount*(
db: AristoDbRef; # Database, top layer
path: openArray[byte]; # Leaf item to add to the database
data: openArray[byte]; # Raw data payload value
): Result[bool,AristoError] =
## Variant of `merge()` for `(VertexID(1),path)` arguments instead of a
## `LeafTie`. The argument `data` is stored as-is as a `RawData` payload
## value.
let pyl = PayloadRef(pType: RawData, rawBlob: @data)
db.mergePayload(VertexID(1), path, pyl, VOID_PATH_ID)
proc mergeLeaf*(
db: AristoDbRef; # Database, top layer
leaf: LeafTiePayload; # Leaf item to add to the database
accPath = VOID_PATH_ID; # Needed for accounts payload
): Result[bool,AristoError] =
## Variant of `merge()`. This function will not indicate if the leaf
## was cached, already.
db.mergePayload(leaf.leafTie,leaf.payload, accPath).to(typeof result)
# else
err(rc.error)
# ------------------------------------------------------------------------------
# End


@ -11,7 +11,7 @@
{.push raises: [].}
import
std/[sequtils, sets],
std/[sequtils, sets, typetraits],
eth/[common, trie/nibbles],
results,
".."/[aristo_desc, aristo_get, aristo_hike, aristo_layers, aristo_vid]
@ -33,43 +33,6 @@ proc xPfx(vtx: VertexRef): NibblesSeq =
# Private helpers
# ------------------------------------------------------------------------------
proc differ(
db: AristoDbRef; # Database, top layer
p1, p2: PayloadRef; # Payload values
): bool =
## Check whether payloads differ on the database.
## If `p1` is `RLP` serialised and `p2` is a raw blob, compare serialisations.
## If `p1` is of account type and `p2` is serialised, translate `p2`
## to an account type and compare.
##
if p1 == p2:
return false
# Adjust and check for divergent types.
if p1.pType != p2.pType:
if p1.pType == AccountData:
try:
let
blob = (if p2.pType == RlpData: p2.rlpBlob else: p2.rawBlob)
acc = rlp.decode(blob, Account)
if acc.nonce == p1.account.nonce and
acc.balance == p1.account.balance and
acc.codeHash == p1.account.codeHash and
acc.storageRoot.isValid == p1.account.storageID.isValid:
if not p1.account.storageID.isValid or
acc.storageRoot.to(HashKey) == db.getKey p1.account.storageID:
return false
except RlpError:
discard
elif p1.pType == RlpData:
if p2.pType == RawData and p1.rlpBlob == p2.rawBlob:
return false
true
# -----------
proc clearMerkleKeys(
db: AristoDbRef; # Database, top layer
hike: Hike; # Implied vertex IDs to clear hashes for
@ -270,10 +233,10 @@ proc setVtxAndKey*(
db.layersResKey(root, vid)
# ------------------------------------------------------------------------------
# Public functions: add Patricia Trie leaf vertex
# Private functions: add Patricia Trie leaf vertex
# ------------------------------------------------------------------------------
proc mergePayloadTopIsBranchAddLeaf*(
proc mergePayloadTopIsBranchAddLeaf(
db: AristoDbRef; # Database, top layer
hike: Hike; # Path top has a `Branch` vertex
payload: PayloadRef; # Leaf data payload
@ -333,7 +296,7 @@ proc mergePayloadTopIsBranchAddLeaf*(
db.insertBranch(hike, linkID, linkVtx, payload)
proc mergePayloadTopIsExtAddLeaf*(
proc mergePayloadTopIsExtAddLeaf(
db: AristoDbRef; # Database, top layer
hike: Hike; # Path top has an `Extension` vertex
payload: PayloadRef; # Leaf data payload
@ -404,7 +367,7 @@ proc mergePayloadTopIsExtAddLeaf*(
ok okHike
proc mergePayloadTopIsEmptyAddLeaf*(
proc mergePayloadTopIsEmptyAddLeaf(
db: AristoDbRef; # Database, top layer
hike: Hike; # No path legs
rootVtx: VertexRef; # Root vertex
@ -440,31 +403,25 @@ proc mergePayloadTopIsEmptyAddLeaf*(
db.insertBranch(hike, hike.root, rootVtx, payload)
# ------------------------------------------------------------------------------
# Public functions
# ------------------------------------------------------------------------------
proc mergePayloadUpdate*(
proc mergePayloadUpdate(
db: AristoDbRef; # Database, top layer
hike: Hike; # No path legs
leafTie: LeafTie; # Leaf item to add to the database
hike: Hike; # Path to payload
payload: PayloadRef; # Payload value to add
): Result[Hike,AristoError] =
## Update leaf vertex if payloads differ
let leafLeg = hike.legs[^1]
# Update payloads if they differ
if db.differ(leafLeg.wp.vtx.lData, payload):
if leafLeg.wp.vtx.lData != payload:
let vid = leafLeg.wp.vid
if vid in db.pPrf:
return err(MergeLeafProofModeLock)
# Verify that the account leaf can be replaced
if leafTie.root == VertexID(1):
if leafLeg.wp.vtx.lData.pType != payload.pType:
return err(MergeLeafCantChangePayloadType)
if payload.pType == AccountData and
payload.account.storageID != leafLeg.wp.vtx.lData.account.storageID:
# Make certain that the account leaf can be replaced
if hike.root == VertexID(1):
# Only `AccountData` payload on `VertexID(1)` tree
if payload.account.storageID != leafLeg.wp.vtx.lData.account.storageID:
return err(MergeLeafCantChangeStorageID)
# Update vertex and hike
@ -486,6 +443,80 @@ proc mergePayloadUpdate*(
else:
err(MergeLeafPathCachedAlready)
# ------------------------------------------------------------------------------
# Public functions
# ------------------------------------------------------------------------------
proc mergePayloadImpl*(
db: AristoDbRef; # Database, top layer
root: VertexID; # MPT state root
path: openArray[byte]; # Leaf item to add to the database
payload: PayloadRef; # Payload value
wpAcc: VidVtxPair; # Needed for storage tree
): Result[void,AristoError] =
## Merge the argument `(root,path)` key-value-pair into the top level vertex
## table of the database `db`. The `path` argument is used to address the
## leaf vertex with the payload. It is stored or updated on the database
## accordingly.
##
## If the `root` argument is `VertexID(1)`, this function relies on the
## payload argument being of type `AccountData`. If the payload already exists
## on the database, the `storageID` field of the `payload` and on the database
## must be the same or an error is returned. The argument `wpAcc` will be
## ignored for accounts.
##
## Otherwise, if the `root` argument belongs to a well known sub trie (i.e.
## it does not exceed `LEAST_FREE_VID`) the entry will just be merged. The
## argument `wpAcc` will be ignored.
##
## Otherwise, a valid `wpAcc` must be given referring to an `AccountData`
## payload type leaf vertex. If the `storageID` field of that payload
## does not have a valid entry, a new sub-trie will be created. Otherwise
## this function expects that the `root` argument is the same as the
## `storageID` field.
##
##
let
nibblesPath = path.initNibbleRange
hike = nibblesPath.hikeUp(root, db).to(Hike)
var okHike: Hike
if 0 < hike.legs.len:
case hike.legs[^1].wp.vtx.vType:
of Branch:
okHike = ? db.mergePayloadTopIsBranchAddLeaf(hike, payload)
of Leaf:
if 0 < hike.tail.len: # `Leaf` vertex problem?
return err(MergeLeafGarbledHike)
okHike = ? db.mergePayloadUpdate(hike, payload)
of Extension:
okHike = ? db.mergePayloadTopIsExtAddLeaf(hike, payload)
else:
# Empty hike
let rootVtx = db.getVtx hike.root
if rootVtx.isValid:
okHike = ? db.mergePayloadTopIsEmptyAddLeaf(hike,rootVtx, payload)
else:
# Bootstrap for existing root ID
let wp = VidVtxPair(
vid: hike.root,
vtx: VertexRef(
vType: Leaf,
lPfx: nibblesPath,
lData: payload))
db.setVtxAndKey(hike.root, wp.vid, wp.vtx)
okHike = Hike(root: wp.vid, legs: @[Leg(wp: wp, nibble: -1)])
# Double check the result (may be removed in future)
if okHike.to(NibblesSeq) != nibblesPath:
return err(MergeAssemblyFailed) # Ooops
ok()
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------


@ -41,8 +41,6 @@ proc serialise(
case pyl.pType:
of RawData:
ok pyl.rawBlob
of RlpData:
ok pyl.rlpBlob
of AccountData:
let
vid = pyl.account.storageID


@ -41,7 +41,7 @@ proc merkleSignAdd*(
## is irrelevant.
if sdb.error == AristoError(0):
sdb.count.inc
discard sdb.db.merge(sdb.root, key, val, VOID_PATH_ID).valueOr:
discard sdb.db.mergeGenericData(sdb.root, key, val).valueOr:
sdb.`error` = error
sdb.errKey = @key
return


@ -15,7 +15,7 @@
import
std/[sequtils, sets, typetraits],
eth/common,
eth/[common, trie/nibbles],
results,
"."/[aristo_constants, aristo_desc, aristo_get, aristo_hike, aristo_layers]
@ -30,13 +30,7 @@ proc toAccount*(
## Converts the argument `payload` to an `Account` type. If the implied
## account has a storage slots system associated, the database `db` must
## contain the Merkle hash key of the root vertex.
case payload.pType:
of RlpData:
try:
return ok(rlp.decode(payload.rlpBlob, Account))
except RlpError:
return err(AccRlpDecodingError)
of AccountData:
if payload.pType == AccountData:
var acc = Account(
nonce: payload.account.nonce,
balance: payload.account.balance,
@ -45,8 +39,6 @@ proc toAccount*(
if payload.account.storageID.isValid:
acc.storageRoot = (? db.getKeyRc payload.account.storageID).to(Hash256)
return ok(acc)
else:
discard
err PayloadTypeUnsupported
@ -65,13 +57,7 @@ proc toAccount*(
## Variant of `toAccount()` for a `Leaf` node which must be complete (i.e.
## a potential Merkle hash key must have been initialised.)
if node.isValid and node.vType == Leaf:
case node.lData.pType:
of RlpData:
try:
return ok(rlp.decode(node.lData.rlpBlob, Account))
except RlpError:
return err(AccRlpDecodingError)
of AccountData:
if node.lData.pType == AccountData:
var acc = Account(
nonce: node.lData.account.nonce,
balance: node.lData.account.balance,
@ -190,7 +176,7 @@ proc registerAccount*(
db: AristoDbRef; # Database, top layer
stoRoot: VertexID; # Storage root ID
accPath: PathID; # Needed for accounts payload
): Result[VidVtxPair,AristoError] =
): Result[VidVtxPair,AristoError] =
## Verify that the `stoRoot` argument is properly referred to by the
## account data (if any) implied by the `accPath` argument.
##
@ -205,13 +191,14 @@ proc registerAccount*(
# Get account leaf with account data
let hike = LeafTie(root: VertexID(1), path: accPath).hikeUp(db).valueOr:
return err(UtilsAccUnaccessible)
return err(UtilsAccInaccessible)
# Check account payload
let wp = hike.legs[^1].wp
if wp.vtx.vType != Leaf:
return err(UtilsAccPathWithoutLeaf)
if wp.vtx.lData.pType != AccountData:
return ok(VidVtxPair()) # nothing to do
return err(UtilsAccLeafPayloadExpected)
# Check whether the `stoRoot` exists on the database
let stoVtx = block:
@ -242,6 +229,40 @@ proc registerAccount*(
ok(wp)
proc registerAccountForUpdate*(
db: AristoDbRef; # Database, top layer
accPath: PathID; # Needed for accounts payload
): Result[VidVtxPair,AristoError] =
## ...
##
# Expand vertex path to account leaf
let hike = (@accPath).initNibbleRange.hikeUp(VertexID(1), db).valueOr:
return err(UtilsAccInaccessible)
# Extract the account payload from the leaf
let wp = hike.legs[^1].wp
if wp.vtx.vType != Leaf:
return err(UtilsAccPathWithoutLeaf)
assert wp.vtx.lData.pType == AccountData # debugging only
let acc = wp.vtx.lData.account
# Check whether storage ID exists, at all
if acc.storageID.isValid:
# Verify that the storage root `acc.storageID` exists on the database
discard db.getVtxRc(acc.storageID).valueOr:
return err(UtilsStoRootInaccessible)
# Clear Merkle keys so that `hashify()` can calculate the re-hash forest/tree
for w in hike.legs.mapIt(it.wp.vid):
db.layersResKey(hike.root, w)
# Signal to `hashify()` where to start rebuilding Merkle hashes
db.top.final.dirty.incl hike.root
db.top.final.dirty.incl wp.vid
ok(wp)
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------


@ -18,7 +18,7 @@ import
../../kvt as use_kvt,
../../kvt/[kvt_init/memory_only, kvt_walk],
".."/[base, base/base_desc],
./aristo_db/[common_desc, handlers_aristo, handlers_kvt, handlers_trace]
./aristo_db/[common_desc, handlers_aristo, handlers_kvt]
import
../../aristo/aristo_init/memory_only as aristo_memory_only
@ -41,11 +41,11 @@ type
## Main descriptor
kdbBase: KvtBaseRef ## Kvt subsystem
adbBase: AristoBaseRef ## Aristo subsystem
tracer: AristoTracerRef ## Currently active recorder
#tracer: AristoTracerRef ## Currently active recorder
AristoTracerRef = ref object of TraceRecorderRef
## Sub-handle for tracer
parent: AristoCoreDbRef
#AristoTracerRef = ref object of TraceRecorderRef
# ## Sub-handle for tracer
# parent: AristoCoreDbRef
proc newAristoVoidCoreDbRef*(): CoreDbRef {.noRaise.}
@ -96,28 +96,29 @@ proc txMethods(
raiseAssert info & ": " & $error
discard)
proc cptMethods(
tracer: AristoTracerRef;
): CoreDbCaptFns =
let
tr = tracer # So it can safely be captured
db = tr.parent # Will not change and can be captured
log = tr.topInst() # Ditto
when false: # currently disabled
proc cptMethods(
tracer: AristoTracerRef;
): CoreDbCaptFns =
let
tr = tracer # So it can safely be captured
db = tr.parent # Will not change and can be captured
log = tr.topInst() # Ditto
CoreDbCaptFns(
recorderFn: proc(): CoreDbRef =
db,
CoreDbCaptFns(
recorderFn: proc(): CoreDbRef =
db,
logDbFn: proc(): TableRef[Blob,Blob] =
log.kLog,
logDbFn: proc(): TableRef[Blob,Blob] =
log.kLog,
getFlagsFn: proc(): set[CoreDbCaptFlags] =
log.flags,
getFlagsFn: proc(): set[CoreDbCaptFlags] =
log.flags,
forgetFn: proc() =
if not tracer.pop():
tr.parent.tracer = AristoTracerRef(nil)
tr.restore())
forgetFn: proc() =
if not tracer.pop():
tr.parent.tracer = AristoTracerRef(nil)
tr.restore())
proc baseMethods(db: AristoCoreDbRef): CoreDbBaseFns =
@ -125,13 +126,14 @@ proc baseMethods(db: AristoCoreDbRef): CoreDbBaseFns =
aBase = db.adbBase
kBase = db.kdbBase
proc tracerSetup(flags: set[CoreDbCaptFlags]): CoreDxCaptRef =
if db.tracer.isNil:
db.tracer = AristoTracerRef(parent: db)
db.tracer.init(kBase, aBase, flags)
else:
db.tracer.push(flags)
CoreDxCaptRef(methods: db.tracer.cptMethods)
when false: # currently disabled
proc tracerSetup(flags: set[CoreDbCaptFlags]): CoreDxCaptRef =
if db.tracer.isNil:
db.tracer = AristoTracerRef(parent: db)
db.tracer.init(kBase, aBase, flags)
else:
db.tracer.push(flags)
CoreDxCaptRef(methods: db.tracer.cptMethods)
proc persistent(bn: Opt[BlockNumber]): CoreDbRc[void] =
const info = "persistentFn()"
@ -182,8 +184,9 @@ proc baseMethods(db: AristoCoreDbRef): CoreDbBaseFns =
dsc = CoreDxTxRef(methods: db.txMethods(aTx, kTx))
db.bless(dsc),
newCaptureFn: proc(flags: set[CoreDbCaptFlags]): CoreDbRc[CoreDxCaptRef] =
ok(db.bless flags.tracerSetup()),
# # currently disabled
# newCaptureFn: proc(flags:set[CoreDbCaptFlags]): CoreDbRc[CoreDxCaptRef] =
# ok(db.bless flags.tracerSetup()),
persistentFn: proc(bn: Opt[BlockNumber]): CoreDbRc[void] =
persistent(bn))

View File

@ -0,0 +1,3 @@
* Refactor `handlers_tracer`. This module can reliably work only as a genuine
logger. The restore features were ill conceived, an attempt to stay as close
as possible to the legacy tracer.


@ -37,7 +37,7 @@ type
AristoCoreDxMptRef = ref object of CoreDxMptRef
base: AristoBaseRef ## Local base descriptor
mptRoot: VertexID ## State root, may be zero unless account
accPath: PathID ## Needed for storage columns
accPath: PathID ## Needed for storage tree/columns
address: EthAddress ## For storage tree debugging
AristoColRef* = ref object of CoreDbColRef
@ -240,18 +240,17 @@ proc mptMethods(cMpt: AristoCoreDxMptRef): CoreDbMptFns =
proc mptMerge(k: openArray[byte]; v: openArray[byte]): CoreDbRc[void] =
const info = "mergeFn()"
# Provide root ID on-the-fly
let rootOk = cMpt.mptRoot.isValid
if not rootOk:
cMpt.mptRoot = api.vidFetch(mpt, pristine=true)
if cMpt.accPath.isValid:
let rc = api.mergeStorageData(mpt, k, v, cMpt.accPath)
if rc.isErr:
return err(rc.error.toError(base, info))
if rc.value.isValid:
cMpt.mptRoot = rc.value
else:
let rc = api.mergeGenericData(mpt, cMpt.mptRoot, k, v)
if rc.isErr:
return err(rc.error.toError(base, info))
let rc = api.merge(mpt, cMpt.mptRoot, k, v, cMpt.accPath)
if rc.isErr:
# Re-cycle unused ID (prevents from leaking IDs)
if not rootOk:
api.vidDispose(mpt, cMpt.mptRoot)
cMpt.mptRoot = VoidVID
return err(rc.error.toError(base, info))
ok()
proc mptDelete(key: openArray[byte]): CoreDbRc[void] =
@ -350,7 +349,7 @@ proc accMethods(cAcc: AristoCoreDxAccRef): CoreDbAccFns =
let
key = account.address.keccakHash.data
val = account.toPayloadRef()
rc = api.mergePayload(mpt, AccountsVID, key, val)
rc = api.mergeAccountPayload(mpt, key, val.account)
if rc.isErr:
return err(rc.error.toError(base, info))
ok()


@ -873,66 +873,67 @@ proc dispose*(tx: CoreDxTxRef) =
# Public tracer methods
# ------------------------------------------------------------------------------
proc newCapture*(
db: CoreDbRef;
flags: set[CoreDbCaptFlags] = {};
): CoreDbRc[CoreDxCaptRef] =
## Trace constructor providing an overlay on top of the argument database
## `db`. This overlay provides a replacement database handle that can be
## retrieved via `db.recorder()` (which can in turn be overlaid.) While
## running the overlay stores data in a log-table which can be retrieved
## via `db.logDb()`.
##
## Caveat:
## The original database argument `db` should not be used while the tracer
## is active (i.e. exists as overlay). The behaviour for this situation
## is undefined and depends on the backend implementation of the tracer.
##
db.setTrackNewApi BaseNewCaptureFn
result = db.methods.newCaptureFn flags
db.ifTrackNewApi: debug newApiTxt, api, elapsed, result
when false: # currently disabled
proc newCapture*(
db: CoreDbRef;
flags: set[CoreDbCaptFlags] = {};
): CoreDbRc[CoreDxCaptRef] =
## Trace constructor providing an overlay on top of the argument database
## `db`. This overlay provides a replacement database handle that can be
## retrieved via `db.recorder()` (which can in turn be overlaid.) While
## running the overlay stores data in a log-table which can be retrieved
## via `db.logDb()`.
##
## Caveat:
## The original database argument `db` should not be used while the tracer
## is active (i.e. exists as overlay). The behaviour for this situation
## is undefined and depends on the backend implementation of the tracer.
##
db.setTrackNewApi BaseNewCaptureFn
result = db.methods.newCaptureFn flags
db.ifTrackNewApi: debug newApiTxt, api, elapsed, result
proc recorder*(cpt: CoreDxCaptRef): CoreDbRef =
## Getter, returns a tracer replacement handle to be used as new database.
## It records every action like fetch, store, hasKey, hasPath and delete.
## This descriptor can be superseded by a new overlay tracer (using
## `newCapture()`, again.)
##
## Caveat:
## Unless the descriptor `cpt` refers to the top level overlay tracer, the
## result is undefined and depends on the backend implementation of the
## tracer.
##
cpt.setTrackNewApi CptRecorderFn
result = cpt.methods.recorderFn()
cpt.ifTrackNewApi: debug newApiTxt, api, elapsed
proc recorder*(cpt: CoreDxCaptRef): CoreDbRef =
## Getter, returns a tracer replacement handle to be used as new database.
## It records every action like fetch, store, hasKey, hasPath and delete.
## This descriptor can be superseded by a new overlay tracer (using
## `newCapture()`, again.)
##
## Caveat:
## Unless the descriptor `cpt` refers to the top level overlay tracer, the
## result is undefined and depends on the backend implementation of the
## tracer.
##
cpt.setTrackNewApi CptRecorderFn
result = cpt.methods.recorderFn()
cpt.ifTrackNewApi: debug newApiTxt, api, elapsed
proc logDb*(cp: CoreDxCaptRef): TableRef[Blob,Blob] =
## Getter, returns the logger table for the overlay tracer database.
##
## Caveat:
## Unless the descriptor `cpt` refers to the top level overlay tracer, the
## result is undefined and depends on the backend implementation of the
## tracer.
##
cp.setTrackNewApi CptLogDbFn
result = cp.methods.logDbFn()
cp.ifTrackNewApi: debug newApiTxt, api, elapsed
proc logDb*(cp: CoreDxCaptRef): TableRef[Blob,Blob] =
## Getter, returns the logger table for the overlay tracer database.
##
## Caveat:
## Unless the descriptor `cpt` refers to the top level overlay tracer, the
## result is undefined and depends on the backend implementation of the
## tracer.
##
cp.setTrackNewApi CptLogDbFn
result = cp.methods.logDbFn()
cp.ifTrackNewApi: debug newApiTxt, api, elapsed
proc flags*(cp: CoreDxCaptRef):set[CoreDbCaptFlags] =
## Getter
##
cp.setTrackNewApi CptFlagsFn
result = cp.methods.getFlagsFn()
cp.ifTrackNewApi: debug newApiTxt, api, elapsed, result
proc flags*(cp: CoreDxCaptRef):set[CoreDbCaptFlags] =
## Getter
##
cp.setTrackNewApi CptFlagsFn
result = cp.methods.getFlagsFn()
cp.ifTrackNewApi: debug newApiTxt, api, elapsed, result
proc forget*(cp: CoreDxCaptRef) =
## Explicitly stop recording the current tracer instance and reset to
## previous level.
##
cp.setTrackNewApi CptForgetFn
cp.methods.forgetFn()
cp.ifTrackNewApi: debug newApiTxt, api, elapsed
proc forget*(cp: CoreDxCaptRef) =
## Explicitly stop recording the current tracer instance and reset to
## previous level.
##
cp.setTrackNewApi CptForgetFn
cp.methods.forgetFn()
cp.ifTrackNewApi: debug newApiTxt, api, elapsed
# ------------------------------------------------------------------------------
# End


@ -38,7 +38,7 @@ proc validateMethodsDesc(base: CoreDbBaseFns) =
doAssert not base.newCtxFromTxFn.isNil
doAssert not base.swapCtxFn.isNil
doAssert not base.beginFn.isNil
doAssert not base.newCaptureFn.isNil
# doAssert not base.newCaptureFn.isNil # currently disabled
doAssert not base.persistentFn.isNil
proc validateMethodsDesc(kvt: CoreDbKvtFns) =
@ -113,12 +113,13 @@ proc validateMethodsDesc(phk: CoreDxPhkRef) =
doAssert not phk.toMpt.isNil
phk.methods.validateMethodsDesc
proc validateMethodsDesc(cpt: CoreDxCaptRef) =
doAssert not cpt.isNil
doAssert not cpt.parent.isNil
doAssert not cpt.methods.recorderFn.isNil
doAssert not cpt.methods.getFlagsFn.isNil
doAssert not cpt.methods.forgetFn.isNil
when false: # currently disabled
proc validateMethodsDesc(cpt: CoreDxCaptRef) =
doAssert not cpt.isNil
doAssert not cpt.parent.isNil
doAssert not cpt.methods.recorderFn.isNil
doAssert not cpt.methods.getFlagsFn.isNil
doAssert not cpt.methods.forgetFn.isNil
proc validateMethodsDesc(tx: CoreDxTxRef) =
doAssert not tx.isNil


@ -8,6 +8,8 @@
# at your option. This file may not be copied, modified, or distributed except
# according to those terms.
# TODO: CoreDb module needs to be updated
import
std/[strutils, json],
./common/common,


@ -13,7 +13,7 @@ import
json_rpc/rpcserver,
graphql/httpserver,
./rpc/common,
./rpc/debug,
#./rpc/debug,
./rpc/engine_api,
./rpc/p2p,
./rpc/jwt_auth,
@ -59,8 +59,9 @@ proc installRPC(server: RpcServer,
if RpcFlag.Eth in flags:
setupEthRpc(nimbus.ethNode, nimbus.ctx, com, nimbus.txPool, oracle, server)
if RpcFlag.Debug in flags:
setupDebugRpc(com, nimbus.txPool, server)
# # Tracer is currently disabled
# if RpcFlag.Debug in flags:
# setupDebugRpc(com, nimbus.txPool, server)
if RpcFlag.Exp in flags:
setupExpRpc(com, server)


@ -19,7 +19,7 @@ cliBuilder:
./test_genesis,
./test_precompiles,
./test_generalstate_json,
./test_tracer_json,
#./test_tracer_json, -- temporarily disabled
#./test_persistblock_json, -- fails
#./test_rpc, -- fails
./test_filters,


@ -35,6 +35,10 @@ type
DbTriplet =
array[0..2, AristoDbRef]
const
testRootVid = VertexID(2)
## Need to reconfigure for the test; root ID 1 cannot be deleted as a trie
# ------------------------------------------------------------------------------
# Private debugging helpers
# ------------------------------------------------------------------------------
@ -74,7 +78,7 @@ iterator quadripartite(td: openArray[ProofTrieData]): LeafQuartet =
var collect: seq[seq[LeafTiePayload]]
for w in td:
let lst = w.kvpLst.mapRootVid VertexID(1)
let lst = w.kvpLst.mapRootVid testRootVid
if lst.len < 8:
if 2 < collect.len:
@ -101,40 +105,53 @@ proc dbTriplet(w: LeafQuartet; rdbPath: string): Result[DbTriplet,AristoError] =
let db = block:
if 0 < rdbPath.len:
let rc = AristoDbRef.init(RdbBackendRef, rdbPath, DbOptions.init())
xCheckRc rc.error == 0
xCheckRc rc.error == 0:
result = err(rc.error)
rc.value
else:
AristoDbRef.init MemBackendRef
block:
# Add a dummy entry so the balancer logic can be triggered in `persist()`
let rc = db.mergeDummyAccLeaf(0, 0)
xCheckRc rc.error == 0:
result = err(rc.error)
# Set failed `xCheck()` error result
result = err(AristoError 1)
# Fill backend
block:
let report = db.mergeList w[0]
if report.error != 0:
db.finish(eradicate=true)
check report.error == 0
return err(report.error)
xCheck report.error == 0
let rc = db.persist()
if rc.isErr:
check rc.error == 0
return
xCheckRc rc.error == 0:
result = err(rc.error)
let dx = [db, db.forkTx(0).value, db.forkTx(0).value]
xCheck dx[0].nForked == 2
# Reduce unwanted tx layers
for n in 1 ..< dx.len:
check dx[n].level == 1
check dx[n].txTop.value.commit.isOk
xCheck dx[n].level == 1
xCheck dx[n].txTop.value.commit.isOk
# Clause (9) from `aristo/README.md` example
for n in 0 ..< dx.len:
let report = dx[n].mergeList w[n+1]
if report.error != 0:
db.finish(eradicate=true)
check (n, report.error) == (n,0)
return err(report.error)
xCheck (n, report.error) == (n,0)
ok dx
block:
# Add a dummy entry so the balancer logic can be triggered in `persist()`
let rc = db.mergeDummyAccLeaf(0, 1)
xCheckRc rc.error == 0:
result = err(rc.error)
return ok(dx)
# ----------------------
@ -152,7 +169,7 @@ proc isDbEq(a, b: LayerDeltaRef; db: AristoDbRef; noisy = true): bool =
return false
if unsafeAddr(a[]) != unsafeAddr(b[]):
if a.src != b.src or
a.kMap.getOrVoid(VertexID 1) != b.kMap.getOrVoid(VertexID 1) or
a.kMap.getOrVoid(testRootVid) != b.kMap.getOrVoid(testRootVid) or
a.vTop != b.vTop:
return false
@ -280,6 +297,11 @@ proc testDistributedAccess*(
xCheck db1.balancer == LayerDeltaRef(nil)
xCheck db2.balancer == db3.balancer
block:
# Add dummy entry so the balancer logic can be triggered in `persist()`
let rc = db2.mergeDummyAccLeaf(n, 42)
xCheckRc rc.error == 0
block:
let rc = db2.stow() # non-persistent
xCheckRc rc.error == 0:
@ -320,6 +342,11 @@ proc testDistributedAccess*(
defer:
dy.cleanUp()
block:
# Add dummy entry so the balancer logic can be triggered in `persist()`
let rc = db2.mergeDummyAccLeaf(n, 42)
xCheckRc rc.error == 0
# Build clause (12) from `aristo/README.md` example
discard db2.reCentre()
block:


@ -11,7 +11,7 @@
import
std/[os, sequtils],
eth/common,
rocksdb,
stew/endians2,
../../nimbus/db/aristo/[
aristo_debug, aristo_desc, aristo_delete,
aristo_hashify, aristo_hike, aristo_merge],
@ -247,31 +247,13 @@ proc delTree*(
aristo_delete.delTree(db, root, accPath)
proc merge*(
db: AristoDbRef;
root: VertexID;
path: openArray[byte];
data: openArray[byte];
accPath: PathID;
noisy: bool;
): Result[bool, AristoError] =
when declared(aristo_merge.noisy):
aristo_merge.exec(noisy, aristo_merge.merge(db, root, path, data, accPath))
else:
aristo_merge.merge(db, root, path, data, accPath)
proc mergePayload*(
db: AristoDbRef;
lty: LeafTie;
pyl: PayloadRef;
accPath: PathID;
noisy: bool;
): Result[Hike,AristoError] =
when declared(aristo_merge.noisy):
aristo_merge.exec(noisy, aristo_merge.mergePayload(db, lty, pyl, accPath))
else:
aristo_merge.mergePayload(db, lty, pyl, accPath)
proc mergeGenericData*(
db: AristoDbRef; # Database, top layer
leaf: LeafTiePayload; # Leaf item to add to the database
): Result[bool,AristoError] =
## Variant of `mergeGenericData()`.
db.mergeGenericData(
leaf.leafTie.root, @(leaf.leafTie.path), leaf.payload.rawBlob)
proc mergeList*(
db: AristoDbRef; # Database, top layer
@ -283,20 +265,36 @@ proc mergeList*(
for n,w in leafs:
noisy.say "*** mergeList",
" n=", n, "/", leafs.len
let rc = db.mergePayload(w.leafTie, w.payload, VOID_PATH_ID, noisy=noisy)
let rc = db.mergeGenericData w
noisy.say "*** mergeList",
" n=", n, "/", leafs.len,
" rc=", (if rc.isOk: "ok" else: $rc.error),
"\n -------------\n"
if rc.isOk:
merged.inc
elif rc.error in {MergeLeafPathCachedAlready,MergeLeafPathOnBackendAlready}:
dups.inc
else:
if rc.isErr:
return (n,dups,rc.error)
elif rc.value:
merged.inc
else:
dups.inc
(merged, dups, AristoError(0))
proc mergeDummyAccLeaf*(
db: AristoDbRef;
pathID: int;
nonce: int;
): Result[void,AristoError] =
# Add a dummy entry so the balancer logic can be triggered
let
acc = AristoAccount(nonce: nonce.AccountNonce)
rc = db.mergeAccountPayload(pathID.uint64.toBytesBE, acc)
if rc.isOk:
ok()
else:
err(rc.error)
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------


@ -43,6 +43,9 @@ const
MaxFilterBulk = 150_000
## Policy setting for `pack()`
testRootVid = VertexID(2)
## Need to reconfigure for the test; root ID 1 cannot be deleted as a trie
# ------------------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------------------
@ -306,25 +309,6 @@ proc revWalkVerify(
true
proc mergeRlpData*(
db: AristoDbRef; # Database, top layer
path: PathID; # Path into database
rlpData: openArray[byte]; # RLP encoded payload data
): Result[void,AristoError] =
block body:
discard db.mergeLeaf(
LeafTiePayload(
leafTie: LeafTie(
root: VertexID(1),
path: path.normal),
payload: PayloadRef(
pType: RlpData,
rlpBlob: @rlpData))).valueOr:
if error in {MergeLeafPathCachedAlready,MergeLeafPathOnBackendAlready}:
break body
return err(error)
ok()
# ------------------------------------------------------------------------------
# Public test function
# ------------------------------------------------------------------------------
@ -361,14 +345,14 @@ proc testTxMergeAndDeleteOneByOne*(
# Reset database so that the next round has a clean setup
defer: db.innerCleanUp
# Merge leaf data into main trie (w/vertex ID 1)
# Merge leaf data into main trie
let kvpLeafs = block:
var lst = w.kvpLst.mapRootVid VertexID(1)
var lst = w.kvpLst.mapRootVid testRootVid
# The list might be reduced for isolation of particular properties,
# e.g. lst.setLen(min(5,lst.len))
lst
for i,leaf in kvpLeafs:
let rc = db.mergeLeaf leaf
let rc = db.mergeGenericData leaf
xCheckRc rc.error == 0
# List of all leaf entries that should be on the database
@ -394,14 +378,17 @@ proc testTxMergeAndDeleteOneByOne*(
doSaveBeOk = ((u mod saveMod) == saveRest)
(leaf, lid) = lvp
# Add a dummy entry so the balancer logic can be triggered
let rc = db.mergeDummyAccLeaf(n, runID)
xCheckRc rc.error == 0
if doSaveBeOk:
let saveBeOk = tx.saveToBackend(
chunkedMpt=false, relax=relax, noisy=noisy, runID)
xCheck saveBeOk:
noisy.say "***", "del(2)",
noisy.say "***", "del1by1(2)",
" u=", u,
" n=", n, "/", list.len,
"\n leaf=", leaf.pp(db),
"\n db\n ", db.pp(backendOk=true),
""
@ -414,7 +401,8 @@ proc testTxMergeAndDeleteOneByOne*(
leafsLeft.excl leaf
let deletedVtx = tx.db.getVtx lid
xCheck deletedVtx.isValid == false
xCheck deletedVtx.isValid == false:
noisy.say "***", "del1by1(8)"
# Walking the database is too slow for large tables. So the hope is that
# potential errors will not go away and rather pop up later, as well.
@ -430,7 +418,9 @@ proc testTxMergeAndDeleteOneByOne*(
return
when true and false:
noisy.say "***", "del(9) n=", n, "/", list.len, " nLeafs=", kvpLeafs.len
noisy.say "***", "del1by1(9)",
" n=", n, "/", list.len,
" nLeafs=", kvpLeafs.len
true
@ -440,9 +430,6 @@ proc testTxMergeAndDeleteSubTree*(
list: openArray[ProofTrieData];
rdbPath: string; # Rocks DB storage directory
): bool =
const
# Need to reconfigure for the test, root ID 1 cannot be deleted as a trie
testRootVid = VertexID(2)
var
prng = PrngDesc.init 42
db = AristoDbRef(nil)
@ -460,9 +447,10 @@ proc testTxMergeAndDeleteSubTree*(
else:
AristoDbRef.init(MemBackendRef)
if testRootVid != VertexID(1):
# Add a dummy entry so the journal logic can be triggered
discard db.merge(VertexID(1), @[n.byte], @[42.byte], VOID_PATH_ID)
# Add a dummy entry so the balancer logic can be triggered
block:
let rc = db.mergeDummyAccLeaf(n, 42)
xCheckRc rc.error == 0
# Start transaction (double frame for testing)
xCheck db.txTop.isErr
@ -480,7 +468,7 @@ proc testTxMergeAndDeleteSubTree*(
# e.g. lst.setLen(min(5,lst.len))
lst
for i,leaf in kvpLeafs:
let rc = db.mergeLeaf leaf
let rc = db.mergeGenericData leaf
xCheckRc rc.error == 0
# List of all leaf entries that should be on the database
@ -511,9 +499,10 @@ proc testTxMergeAndDeleteSubTree*(
"\n db\n ", db.pp(backendOk=true),
""
if testRootVid != VertexID(1):
# Update dummy entry so the journal logic can be triggered
discard db.merge(VertexID(1), @[n.byte], @[43.byte], VOID_PATH_ID)
# Update dummy entry so the journal logic can be triggered
block:
let rc = db.mergeDummyAccLeaf(n, 43)
xCheckRc rc.error == 0
block:
let saveBeOk = tx.saveToBackend(
@ -571,15 +560,22 @@ proc testTxMergeProofAndKvpList*(
count = 0
count.inc
# Add a dummy entry so the balancer logic can be triggered
block:
let rc = db.mergeDummyAccLeaf(n, 42)
xCheckRc rc.error == 0
let
testId = idPfx & "#" & $w.id & "." & $n
runID = n
sTabLen = db.nLayersVtx()
leafs = w.kvpLst.mapRootVid VertexID(1) # merge into main trie
leafs = w.kvpLst.mapRootVid testRootVid # merge into main trie
# var lst = w.kvpLst.mapRootVid testRootVid
if 0 < w.proof.len:
let root = block:
let rc = db.mergeProof(rootKey, VertexID(1))
let rc = db.mergeProof(rootKey, testRootVid)
xCheckRc rc.error == 0
rc.value
@ -596,13 +592,14 @@ proc testTxMergeProofAndKvpList*(
xCheck merged.merged + merged.dups == leafs.len
block:
let oops = oopsTab.getOrDefault(testId,(0,AristoError(0)))
if not tx.saveToBackendWithOops(
chunkedMpt=true, noisy=noisy, debugID=runID, oops):
return
let
oops = oopsTab.getOrDefault(testId,(0,AristoError(0)))
saveBeOk = tx.saveToBackendWithOops(
chunkedMpt=true, noisy=noisy, debugID=runID, oops)
xCheck saveBeOk
when true and false:
noisy.say "***", "testTxMergeProofAndKvpList (1)",
noisy.say "***", "testTxMergeProofAndKvpList (9)",
" <", n, "/", list.len-1, ">",
" runID=", runID,
" groups=", count, " merged=", merged


@ -12,18 +12,18 @@
{. warning[UnusedImport]:off .}
import
../premix/premix,
../premix/persist,
../premix/debug,
../premix/dumper,
../premix/hunter,
../premix/regress,
./tracerTestGen,
./persistBlockTestGen,
#../premix/premix, # -- currently disabled (no tracer at the moment)
#../premix/persist, # -- ditto
#../premix/debug, # -- ditto
#../premix/dumper, # -- ditto
#../premix/hunter, # -- ditto
#../premix/regress, # -- ditto
#./tracerTestGen, # -- ditto
#./persistBlockTestGen, # -- ditto
../hive_integration/nodocker/rpc/rpc_sim,
../hive_integration/nodocker/consensus/consensus_sim,
../hive_integration/nodocker/graphql/graphql_sim,
../hive_integration/nodocker/engine/engine_sim,
#../hive_integration/nodocker/engine/engine_sim, # -- does not compile
../hive_integration/nodocker/pyspec/pyspec_sim,
../tools/t8n/t8n,
../tools/t8n/t8n_test,