Aristo cull journal related stuff (#2288)

* Remove all journal related stuff

* Refactor function names journal*() => delta*(), filter*() => delta*()

* Remove `trg` field from `FilterRef`

why:
It is the same as `kMap[$1]`

* Re-type FilterRef.src as `HashKey`

why:
  So it is directly comparable to `kMap[$1]`

* Move `vGen[]` field from `LayerFinalRef` to `LayerDeltaRef`

why:
A separate `FilterRef` type is then no longer needed

* Rename `roFilter` field in `AristoDbRef` => `balancer`

why:
The new name is more appropriate.

* Replace `FilterRef` by `LayerDeltaRef` type

why:
This makes it possible to avoid copying into the `balancer` (see next
patch set) most of the time. Typically, only one instance is running on
the backend and the `balancer` is only used as a staging area before
saving data.

* Refactor the way data is stored persistently

why:
Avoid a useless copy when staging the `top` layer for saving persistently
to the backend.

* Fix copyright header
Jordan Hrycaj 2024-06-03 20:10:35 +00:00 committed by GitHub
parent 7f76586214
commit f926222fec
55 changed files with 389 additions and 4231 deletions

View File

@ -48,7 +48,6 @@ export
AristoError,
AristoTxRef,
MerkleSignRef,
QidLayoutRef,
isValid
# End

View File

@ -25,10 +25,8 @@ Contents
+ [4.5 Leaf record payload serialisation for RLP encoded data](#ch4x5)
+ [4.6 Leaf record payload serialisation for unstructured data](#ch4x6)
+ [4.7 Serialisation of the list of unused vertex IDs](#ch4x7)
+ [4.8 Backend filter record serialisation](#ch4x8)
+ [4.9 Serialisation of a list of filter IDs](#ch4x92)
+ [4.10 Serialisation of a last saved state record](#ch4x10)
+ [4.11 Serialisation record identifier identification](#ch4x11)
+ [4.8 Serialisation of a last saved state record](#ch4x8)
+ [4.9 Serialisation record identifier identification](#ch4x9)
* [5. *Patricia Trie* implementation notes](#ch5)
+ [5.1 Database descriptor representation](#ch5x1)
@ -372,79 +370,7 @@ be used as vertex IDs. If this record is missing, the value *(1u64,0x01)* is
assumed, i.e. the list with the single vertex ID *1*.
<a name="ch4x8"></a>
### 4.8 Backend filter record serialisation
0 +--+--+--+--+--+ .. --+
| | -- filter ID
8 +--+--+--+--+--+ .. --+--+ .. --+
| | -- 32 bytes filter source hash
40 +--+--+--+--+--+ .. --+--+ .. --+
| | -- 32 bytes filter target hash
72 +--+--+--+--+--+ .. --+--+ .. --+
| | -- number of unused vertex IDs
76 +--+--+--+--+
| | -- number of structural triplets
80 +--+--+--+--+--+ .. --+
| | -- first unused vertex ID
88 +--+--+--+--+--+ .. --+
... -- more unused vertex ID
N1 +--+--+--+--+
|| | -- flg(2) + vtxLen(30), 1st triplet
+--+--+--+--+--+ .. --+
| | -- vertex ID of first triplet
+--+--+--+--+--+ .. --+--+ .. --+
| | -- optional 32 bytes hash key
+--+--+--+--+--+ .. --+--+ .. --+
... -- optional vertex record
N2 +--+--+--+--+
|| | -- flg(2) + vtxLen(30), 2nd triplet
+--+--+--+--+
...
+--+
| | -- marker(8), 0x7d
+--+
where
+ the minimum size of an empty filter is 81 bytes (80 bytes of fixed fields plus the marker)
+ the flg(2) represents a bit tuple encoding the serialised storage
modes for the optional 32 bytes hash key:
0 -- not encoded, to be ignored
1 -- not encoded, void => considered deleted
2 -- present, encoded as-is (32 bytes)
3 -- present, encoded as (len(1),data,zero-padding)
+ the vtxLen(30) is the number of bytes of the optional vertex record,
which has a maximum size of 2^30-2, i.e. just short of 1 GiB. The value
2^30-1 (i.e. 0x3fffffff) is reserved for indicating that there is no
vertex record following and the vertex should be considered deleted.
+ there is no blind entry, i.e. either flg(2) != 0 or vtxLen(30) != 0.
+ the marker(8) is the eight bit array *0111-1101*
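For illustration (this record type is removed by the patch), the triplet
header word might be decoded as in the following sketch; the helper name
is made up for this example:

proc splitFlagWord(u: uint32): (uint32, uint32) =
  ## Split a triplet header word into flg(2) and vtxLen(30).
  let
    flg = u shr 30                 # key storage mode, 0..3
    vtxLen = u and 0x3fff_ffffu32  # 0x3fff_ffff marks a deleted vertex
  (flg, vtxLen)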
<a name="ch4x9"></a>
### 4.9 Serialisation of a list of filter IDs
0 +-- ..
... -- some filter ID
+--+--+--+--+--+--+--+--+
| | -- last filter IDs
+--+--+--+--+--+--+--+--+
| | -- marker(8), 0x7e
+--+
where
marker(8) is the eight bit array *0111-1110*
This list is used to control the filters on the database. By holding some IDs
in a dedicated list (e.g. the latest filters) one can quickly access particular
entries without searching through the set of filters. In the current
implementation this list comes in ID pairs, i.e. the number of entries is even.
<a name="ch4x9"></a>
### 4.10 Serialisation of a last saved state record
### 4.8 Serialisation of a last saved state record
0 +--+--+--+--+--+ .. --+--+ .. --+
| | -- 32 bytes source state hash
@ -460,7 +386,7 @@ implementation this list comes in ID pairs i.e. the number of entries is even.
marker(8) is the eight bit array *0111-1111*
<a name="ch4x10"></a>
### 4.10 Serialisation record identifier tags
### 4.9 Serialisation record identifier tags
Any of the above records can be uniquely identified by its trailing marker,
i.e. the last byte of a serialised record.
@ -474,9 +400,7 @@ i.e. the last byte of a serialised record.
| 0110 1010 | 0x6a | RLP encoded payload | [4.5](#ch4x5) |
| 0110 1011 | 0x6b | Unstructured payload | [4.6](#ch4x6) |
| 0111 1100 | 0x7c | List of vertex IDs | [4.7](#ch4x7) |
| 0111 1101 | 0x7d | Filter record | [4.8](#ch4x8) |
| 0111 1110 | 0x7e | List of filter IDs | [4.9](#ch4x9) |
| 0111 1111 | 0x7f | Last saved state | [4.10](#ch4x10) |
| 0111 1111 | 0x7f | Last saved state | [4.8](#ch4x8) |
<a name="ch5"></a>
5. *Patricia Trie* implementation notes
@ -495,7 +419,7 @@ i.e. the last byte of a serialised record.
| | stack[0] | | successively recover the top layer)
| +----------+ v
| +----------+
| | roFilter | optional read-only backend filter
| | balancer | optional read-only backend filter
| +----------+
| +----------+
| | backend | optional physical key-value backend database
@ -503,7 +427,7 @@ i.e. the last byte of a serialised record.
There is a three-tier access path to a key-value database entry, as in
top -> roFilter -> backend
top -> balancer -> backend
where only the *top* layer is obligatory.
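As a sketch, a vertex lookup cascades through these tiers roughly as
follows (simplified and for illustration only; the `lookup` name is made
up, the real accessors live in `aristo_get` and `aristo_layers`):

proc lookup(db: AristoDbRef; vid: VertexID): Result[VertexRef,AristoError] =
  # 1. mutable working layer (`top`; the `stack[]` is omitted for brevity)
  db.top.delta.sTab.withValue(vid, w):
    return ok(w[])
  # 2. optional read-only balancer
  if not db.balancer.isNil:
    db.balancer.sTab.withValue(vid, w):
      return ok(w[])
  # 3. optional physical backend
  if not db.backend.isNil:
    return db.backend.getVtxFn(vid)
  err(GetVtxNotFound)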

View File

@ -13,12 +13,11 @@
import
std/[options, times],
std/times,
eth/[common, trie/nibbles],
results,
./aristo_desc/desc_backend,
./aristo_init/memory_db,
./aristo_journal/journal_get,
"."/[aristo_delete, aristo_desc, aristo_fetch, aristo_get, aristo_hashify,
aristo_hike, aristo_init, aristo_merge, aristo_path, aristo_profile,
aristo_serialise, aristo_tx, aristo_vid]
@ -164,11 +163,11 @@ type
## transaction.
##
## If `backLevel` is `-1`, a database descriptor with empty transaction
## layers will be provided where the `roFilter` between database and
## layers will be provided where the `balancer` between database and
## transaction layers is kept in place.
##
## If `backLevel` is `-2`, a database descriptor with empty transaction
## layers will be provided without an `roFilter`.
## layers will be provided without a `balancer`.
##
## The returned database descriptor will always have transaction level one.
## If there were no transactions that could be squashed, an empty
@ -220,31 +219,6 @@ type
## Getter, returns `true` if the argument `tx` refers to the current
## top level transaction.
AristoApiJournalGetFilterFn* =
proc(be: BackendRef;
inx: int;
): Result[FilterRef,AristoError]
{.noRaise.}
## Fetch filter from journal where the argument `inx` relates to the
## age starting with `0` for the most recent.
AristoApiJournalGetInxFn* =
proc(be: BackendRef;
fid: Option[FilterID];
earlierOK = false;
): Result[JournalInx,AristoError]
{.noRaise.}
## For a positive argument `fid`, find the filter on the journal with ID
## not larger than `fid` (i.e. the resulting filter might be older.)
##
## If the argument `earlierOK` is passed `false`, the function succeeds
## only if the filter ID of the returned filter is equal to the argument
## `fid`.
##
## If the argument `fid` is zero (i.e. `FilterID(0)`), the
## filter with the smallest filter ID (i.e. the oldest filter) is
## returned. In that case, the argument `earlierOK` is ignored.
AristoApiLevelFn* =
proc(db: AristoDbRef;
): int
@ -303,7 +277,7 @@ type
AristoApiPersistFn* =
proc(db: AristoDbRef;
nxtFid = none(FilterID);
nxtSid = 0u64;
chunkedMpt = false;
): Result[void,AristoError]
{.noRaise.}
@ -315,13 +289,9 @@ type
## backend stage area. After that, the top layer cache is cleared.
##
## Finally, the staged data are merged into the physical backend
## database and the staged data area is cleared. While performing this
## last step, the recovery journal is updated (if available.)
## database and the staged data area is cleared.
##
## If the argument `nxtFid` is passed non-zero, it will be the ID for
## the next recovery journal record. If non-zero, this ID must be greater
## than all previous IDs (e.g. block number when storing after block
## execution.)
## The argument `nxtSid` will be the ID for the next saved state record.
##
## Staging the top layer cache might fail with a partial MPT when it is
## set up from partial MPT chunks as it happens with `snap` sync
@ -419,8 +389,6 @@ type
hasPath*: AristoApiHasPathFn
hikeUp*: AristoApiHikeUpFn
isTop*: AristoApiIsTopFn
journalGetFilter*: AristoApiJournalGetFilterFn
journalGetInx*: AristoApiJournalGetInxFn
level*: AristoApiLevelFn
nForked*: AristoApiNForkedFn
merge*: AristoApiMergeFn
@ -454,8 +422,6 @@ type
AristoApiProfHasPathFn = "hasPath"
AristoApiProfHikeUpFn = "hikeUp"
AristoApiProfIsTopFn = "isTop"
AristoApiProfJournalGetFilterFn = "journalGetFilter"
AristoApiProfJournalGetInxFn = "journalGetInx"
AristoApiProfLevelFn = "level"
AristoApiProfNForkedFn = "nForked"
AristoApiProfMergeFn = "merge"
@ -472,16 +438,12 @@ type
AristoApiProfBeGetVtxFn = "be/getVtx"
AristoApiProfBeGetKeyFn = "be/getKey"
AristoApiProfBeGetFilFn = "be/getFil"
AristoApiProfBeGetIdgFn = "be/getIdg"
AristoApiProfBeGetLstFn = "be/getLst"
AristoApiProfBeGetFqsFn = "be/getFqs"
AristoApiProfBePutVtxFn = "be/putVtx"
AristoApiProfBePutKeyFn = "be/putKey"
AristoApiProfBePutFilFn = "be/putFil"
AristoApiProfBePutIdgFn = "be/putIdg"
AristoApiProfBePutLstFn = "be/putLst"
AristoApiProfBePutFqsFn = "be/putFqs"
AristoApiProfBePutEndFn = "be/putEnd"
AristoApiProfRef* = ref object of AristoApiRef
@ -509,8 +471,6 @@ when AutoValidateApiHooks:
doAssert not api.hasPath.isNil
doAssert not api.hikeUp.isNil
doAssert not api.isTop.isNil
doAssert not api.journalGetFilter.isNil
doAssert not api.journalGetInx.isNil
doAssert not api.level.isNil
doAssert not api.nForked.isNil
doAssert not api.merge.isNil
@ -564,8 +524,6 @@ func init*(api: var AristoApiObj) =
api.hasPath = hasPath
api.hikeUp = hikeUp
api.isTop = isTop
api.journalGetFilter = journalGetFilter
api.journalGetInx = journalGetInx
api.level = level
api.nForked = nForked
api.merge = merge
@ -602,8 +560,6 @@ func dup*(api: AristoApiRef): AristoApiRef =
hasPath: api.hasPath,
hikeUp: api.hikeUp,
isTop: api.isTop,
journalGetFilter: api.journalGetFilter,
journalGetInx: api.journalGetInx,
level: api.level,
nForked: api.nForked,
merge: api.merge,
@ -717,16 +673,6 @@ func init*(
AristoApiProfIsTopFn.profileRunner:
result = api.isTop(a)
profApi.journalGetFilter =
proc(a: BackendRef; b: int): auto =
AristoApiProfJournalGetFilterFn.profileRunner:
result = api.journalGetFilter(a, b)
profApi.journalGetInx =
proc(a: BackendRef; b: Option[FilterID]; c = false): auto =
AristoApiProfJournalGetInxFn.profileRunner:
result = api.journalGetInx(a, b, c)
profApi.level =
proc(a: AristoDbRef): auto =
AristoApiProfLevelFn.profileRunner:
@ -754,7 +700,7 @@ func init*(
result = api.pathAsBlob(a)
profApi.persist =
proc(a: AristoDbRef; b = none(FilterID); c = false): auto =
proc(a: AristoDbRef; b = 0u64; c = false): auto =
AristoApiProfPersistFn.profileRunner:
result = api.persist(a, b, c)
@ -810,12 +756,6 @@ func init*(
result = be.getKeyFn(a)
data.list[AristoApiProfBeGetKeyFn.ord].masked = true
beDup.getFilFn =
proc(a: QueueID): auto =
AristoApiProfBeGetFilFn.profileRunner:
result = be.getFilFn(a)
data.list[AristoApiProfBeGetFilFn.ord].masked = true
beDup.getIdgFn =
proc(): auto =
AristoApiProfBeGetIdgFn.profileRunner:
@ -828,12 +768,6 @@ func init*(
result = be.getLstFn()
data.list[AristoApiProfBeGetLstFn.ord].masked = true
beDup.getFqsFn =
proc(): auto =
AristoApiProfBeGetFqsFn.profileRunner:
result = be.getFqsFn()
data.list[AristoApiProfBeGetFqsFn.ord].masked = true
beDup.putVtxFn =
proc(a: PutHdlRef; b: openArray[(VertexID,VertexRef)]) =
AristoApiProfBePutVtxFn.profileRunner:
@ -846,12 +780,6 @@ func init*(
be.putKeyFn(a,b)
data.list[AristoApiProfBePutKeyFn.ord].masked = true
beDup.putFilFn =
proc(a: PutHdlRef; b: openArray[(QueueID,FilterRef)]) =
AristoApiProfBePutFilFn.profileRunner:
be.putFilFn(a,b)
data.list[AristoApiProfBePutFilFn.ord].masked = true
beDup.putIdgFn =
proc(a: PutHdlRef; b: openArray[VertexID]) =
AristoApiProfBePutIdgFn.profileRunner:
@ -864,12 +792,6 @@ func init*(
be.putLstFn(a,b)
data.list[AristoApiProfBePutLstFn.ord].masked = true
beDup.putFqsFn =
proc(a: PutHdlRef; b: openArray[(QueueID,QueueID)]) =
AristoApiProfBePutFqsFn.profileRunner:
be.putFqsFn(a,b)
data.list[AristoApiProfBePutFqsFn.ord].masked = true
beDup.putEndFn =
proc(a: PutHdlRef): auto =
AristoApiProfBePutEndFn.profileRunner:

View File

@ -11,7 +11,7 @@
{.push raises: [].}
import
std/[bitops, sequtils, sets, tables],
std/bitops,
eth/[common, trie/nibbles],
results,
stew/endians2,
@ -158,133 +158,16 @@ proc blobify*(vGen: openArray[VertexID]): Blob =
proc blobifyTo*(lSst: SavedState; data: var Blob) =
## Serialise a last saved state record
data.setLen(73)
(addr data[0]).copyMem(unsafeAddr lSst.src.data[0], 32)
(addr data[32]).copyMem(unsafeAddr lSst.trg.data[0], 32)
let w = lSst.serial.toBytesBE
(addr data[64]).copyMem(unsafeAddr w[0], 8)
data[72] = 0x7fu8
data.setLen(0)
data.add lSst.src.data
data.add lSst.trg.data
data.add lSst.serial.toBytesBE
data.add @[0x7fu8]
proc blobify*(lSst: SavedState): Blob =
## Variant of `blobify()`
lSst.blobifyTo result
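# A quick sanity sketch of the new layout (illustrative only, not part of
# this patch): 32 bytes `src`, 32 bytes `trg`, 8 bytes big-endian `serial`
# plus the `0x7f` marker add up to 73 bytes, assuming two full hash keys.
let
  key  = HashKey.fromBytes(newSeq[byte](32)).expect "32 byte key"
  blob = SavedState(src: key, trg: key, serial: 42u64).blobify
doAssert blob.len == 73 and blob[^1] == 0x7fu8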
proc blobifyTo*(filter: FilterRef; data: var Blob): Result[void,AristoError] =
## This function serialises an Aristo DB filter object
## ::
## uint64 -- filter ID
## Uint256 -- source key
## Uint256 -- target key
## uint32 -- number of vertex IDs (vertex ID generator state)
## uint32 -- number of (id,key,vertex) triplets
##
## uint64, ... -- list of vertex IDs (vertex ID generator state)
##
## uint32 -- flg(2) + vtxLen(30), first triplet
## uint64 -- vertex ID
## Uint256 -- optional key
## Blob -- optional vertex
##
## ... -- more triplets
## 0x7d -- marker(8)
##
func blobify(lid: HashKey): Blob =
let n = lid.len
if n < 32: @[n.byte] & @(lid.data) & 0u8.repeat(31 - n) else: @(lid.data)
if not filter.isValid:
return err(BlobifyNilFilter)
data &= filter.fid.uint64.toBytesBE
data &= filter.src.data
data &= filter.trg.data
data &= filter.vGen.len.uint32.toBytesBE
data &= default(array[4, byte]) # place holder
# Store vertex ID generator state
for w in filter.vGen:
data &= w.uint64.toBytesBE
var
n = 0
leftOver = filter.kMap.keys.toSeq.toHashSet
# Loop over vertex table
for (vid,vtx) in filter.sTab.pairs:
n.inc
leftOver.excl vid
var
keyMode = 0u # default: ignore that key
vtxLen = 0u # default: ignore that vertex
keyBlob: Blob
vtxBlob: Blob
let key = filter.kMap.getOrVoid vid
if key.isValid:
keyBlob = key.blobify
keyMode = if key.len < 32: 0xc000_0000u else: 0x8000_0000u
elif filter.kMap.hasKey vid:
keyMode = 0x4000_0000u # void hash key => considered deleted
if vtx.isValid:
? vtx.blobifyTo vtxBlob
vtxLen = vtxBlob.len.uint
if 0x3fff_ffff <= vtxLen:
return err(BlobifyFilterRecordOverflow)
else:
vtxLen = 0x3fff_ffff # nil vertex => considered deleted
data &= (keyMode or vtxLen).uint32.toBytesBE
data &= vid.uint64.toBytesBE
data &= keyBlob
data &= vtxBlob
# Loop over remaining data from key table
for vid in leftOver:
n.inc
var
keyMode = 0u # present and usable
keyBlob: Blob
let key = filter.kMap.getOrVoid vid
if key.isValid:
keyBlob = key.blobify
keyMode = if key.len < 32: 0xc000_0000u else: 0x8000_0000u
else:
keyMode = 0x4000_0000u # void hash key => considered deleted
data &= keyMode.uint32.toBytesBE
data &= vid.uint64.toBytesBE
data &= keyBlob
data[76 ..< 80] = n.uint32.toBytesBE
data.add 0x7Du8
ok()
proc blobify*(filter: FilterRef): Result[Blob, AristoError] =
## ...
var data: Blob
? filter.blobifyTo data
ok move(data)
proc blobifyTo*(vFqs: openArray[(QueueID,QueueID)]; data: var Blob) =
## This function serialises a list of filter queue IDs.
## ::
## uint64, ... -- list of IDs
## 0x7e -- marker(8)
##
for w in vFqs:
data &= w[0].uint64.toBytesBE
data &= w[1].uint64.toBytesBE
data.add 0x7Eu8
proc blobify*(vFqs: openArray[(QueueID,QueueID)]): Blob =
## Variant of `blobify()`
vFqs.blobifyTo result
# -------------
proc deblobify(
@ -346,7 +229,10 @@ proc deblobify(
pyl = pAcc
ok()
proc deblobify*(record: openArray[byte]; vtx: var VertexRef): Result[void,AristoError] =
proc deblobify*(
record: openArray[byte];
vtx: var VertexRef;
): Result[void,AristoError] =
## De-serialise a data record encoded with `blobify()`. The second
## argument `vtx` can be `nil`.
if record.len < 3: # minimum `Leaf` record
@ -417,7 +303,10 @@ proc deblobify*(record: openArray[byte]; vtx: var VertexRef): Result[void,Aristo
return err(DeblobUnknown)
ok()
proc deblobify*(data: openArray[byte]; T: type VertexRef): Result[T,AristoError] =
proc deblobify*(
data: openArray[byte];
T: type VertexRef;
): Result[T,AristoError] =
## Variant of `deblobify()` for vertex deserialisation.
var vtx = T(nil) # will be auto-initialised
? data.deblobify vtx
@ -475,112 +364,6 @@ proc deblobify*(
? data.deblobify lSst
ok move(lSst)
proc deblobify*(data: Blob; filter: var FilterRef): Result[void,AristoError] =
## De-serialise an Aristo DB filter object
if data.len < 80: # minimum length 80 for an empty filter
return err(DeblobFilterTooShort)
if data[^1] != 0x7d:
return err(DeblobWrongType)
func deblob(data: openArray[byte]; shortKey: bool): Result[HashKey,void] =
if shortKey:
HashKey.fromBytes data.toOpenArray(1, min(int data[0],31))
else:
HashKey.fromBytes data
let f = FilterRef()
f.fid = (uint64.fromBytesBE data.toOpenArray(0, 7)).FilterID
(addr f.src.data[0]).copyMem(unsafeAddr data[8], 32)
(addr f.trg.data[0]).copyMem(unsafeAddr data[40], 32)
let
nVids = uint32.fromBytesBE data.toOpenArray(72, 75)
nTriplets = uint32.fromBytesBE data.toOpenArray(76, 79)
nTrplStart = (80 + nVids * 8).int
if data.len < nTrplStart:
return err(DeblobFilterGenTooShort)
for n in 0 ..< nVids:
let w = 80 + n * 8
f.vGen.add (uint64.fromBytesBE data.toOpenArray(int w, int w+7)).VertexID
var offs = nTrplStart
for n in 0 ..< nTriplets:
if data.len < offs + 12:
return err(DeblobFilterTrpTooShort)
let
keyFlag = data[offs] shr 6
vtxFlag = ((uint32.fromBytesBE data.toOpenArray(offs, offs+3)) and 0x3fff_ffff).int
vLen = if vtxFlag == 0x3fff_ffff: 0 else: vtxFlag
if keyFlag == 0 and vtxFlag == 0:
return err(DeblobFilterTrpVtxSizeGarbled) # no blind records
offs = offs + 4
let vid = (uint64.fromBytesBE data.toOpenArray(offs, offs+7)).VertexID
offs = offs + 8
if data.len < offs + (1 < keyFlag).ord * 32 + vLen:
return err(DeblobFilterTrpTooShort)
if 1 < keyFlag:
f.kMap[vid] = data.toOpenArray(offs, offs+31).deblob(keyFlag == 3).valueOr:
return err(DeblobHashKeyExpected)
offs = offs + 32
elif keyFlag == 1:
f.kMap[vid] = VOID_HASH_KEY
if vtxFlag == 0x3fff_ffff:
f.sTab[vid] = VertexRef(nil)
elif 0 < vLen:
var vtx: VertexRef
? data.toOpenArray(offs, offs + vLen - 1).deblobify vtx
f.sTab[vid] = vtx
offs = offs + vLen
if data.len != offs + 1:
return err(DeblobFilterSizeGarbled)
filter = f
ok()
proc deblobify*(data: Blob; T: type FilterRef): Result[T,AristoError] =
## Variant of `deblobify()` for deserialising an Aristo DB filter object
var filter: T
? data.deblobify filter
ok filter
proc deblobify*(
data: Blob;
vFqs: var seq[(QueueID,QueueID)];
): Result[void,AristoError] =
## De-serialise the data record encoded with `blobify()` into a filter queue
## ID argument list `vFqs`.
if data.len == 0:
vFqs = @[]
else:
if (data.len mod 16) != 1:
return err(DeblobSizeGarbled)
if data[^1] != 0x7e:
return err(DeblobWrongType)
for n in 0 ..< (data.len div 16):
let
w = n * 16
a = (uint64.fromBytesBE data.toOpenArray(w, w + 7)).QueueID
b = (uint64.fromBytesBE data.toOpenArray(w + 8, w + 15)).QueueID
vFqs.add (a,b)
ok()
proc deblobify*(
data: Blob;
T: type seq[(QueueID,QueueID)];
): Result[T,AristoError] =
## Variant of `deblobify()` for deserialising a filter queue ID list
var vFqs: seq[(QueueID,QueueID)]
? data.deblobify vFqs
ok vFqs
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------

View File

@ -20,7 +20,7 @@ import
results,
./aristo_walk/persistent,
"."/[aristo_desc, aristo_get, aristo_init, aristo_utils],
./aristo_check/[check_be, check_journal, check_top]
./aristo_check/[check_be, check_top]
# ------------------------------------------------------------------------------
# Public functions
@ -79,33 +79,16 @@ proc checkBE*(
of BackendVoid:
return VoidBackendRef.checkBE(db, cache=cache, relax=relax)
proc checkJournal*(
db: AristoDbRef; # Database, top layer
): Result[void,(QueueID,AristoError)] =
## Verify database backend journal.
case db.backend.kind:
of BackendMemory:
return MemBackendRef.checkJournal(db)
of BackendRocksDB:
return RdbBackendRef.checkJournal(db)
of BackendVoid:
return ok() # no journal
proc check*(
db: AristoDbRef; # Database, top layer
relax = false; # Check existing hashes only
cache = true; # Also verify against top layer cache
fifos = true; # Also verify cascaded filter fifos
proofMode = false; # Has proof nodes
): Result[void,(VertexID,AristoError)] =
## Shortcut for running `checkTop()` followed by `checkBE()`
? db.checkTop(proofMode = proofMode)
? db.checkBE(relax = relax, cache = cache, fifos = fifos)
if fifos:
let rc = db.checkJournal()
if rc.isErr:
return err((VertexID(0),rc.error[1]))
? db.checkBE(relax = relax, cache = cache)
ok()
# ------------------------------------------------------------------------------

View File

@ -192,19 +192,6 @@ proc checkBE*[T: RdbBackendRef|MemBackendRef|VoidBackendRef](
# Register deleted vid against backend generator state
discard vids.merge Interval[VertexID,uint64].new(vid,vid)
# Check cascaded fifos
if fifos and
not db.backend.isNil and
not db.backend.journal.isNil:
var lastTrg = db.getKeyUbe(VertexID(1)).get(otherwise = VOID_HASH_KEY)
.to(Hash256)
for (qid,filter) in db.backend.T.walkFifoBe: # walk in fifo order
if filter.src != lastTrg:
return err((VertexID(0),CheckBeFifoSrcTrgMismatch))
if filter.trg != filter.kMap.getOrVoid(VertexID 1).to(Hash256):
return err((VertexID(1),CheckBeFifoTrgNotStateRoot))
lastTrg = filter.trg
# Check key table
var list: seq[VertexID]
for (vid,key) in db.layersWalkKey:

View File

@ -1,203 +0,0 @@
# nimbus-eth1
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
# http://www.apache.org/licenses/LICENSE-2.0)
# * MIT license ([LICENSE-MIT](LICENSE-MIT) or
# http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.
{.push raises: [].}
import
std/[algorithm, sequtils, sets, tables],
eth/common,
results,
../aristo_journal/journal_scheduler,
../aristo_walk/persistent,
".."/[aristo_desc, aristo_blobify]
const
ExtraDebugMessages = false
type
JrnRec = tuple
src: Hash256
trg: Hash256
size: int
when ExtraDebugMessages:
import
../aristo_debug
# ------------------------------------------------------------------------------
# Private functions and helpers
# ------------------------------------------------------------------------------
template noValueError(info: static[string]; code: untyped) =
try:
code
except ValueError as e:
raiseAssert info & ", name=\"" & $e.name & "\", msg=\"" & e.msg & "\""
when ExtraDebugMessages:
proc pp(t: var Table[QueueID,JrnRec]): string =
result = "{"
for qid in t.keys.toSeq.sorted:
t.withValue(qid,w):
result &= qid.pp & "#" & $w[].size & ","
if result[^1] == '{':
result &= "}"
else:
result[^1] = '}'
proc pp(t: seq[QueueID]): string =
result = "{"
var list = t
for n in 2 ..< list.len:
if list[n-1] == list[n] - 1 and
(list[n-2] == QueueID(0) or list[n-2] == list[n] - 2):
list[n-1] = QueueID(0)
for w in list:
if w != QueueID(0):
result &= w.pp & ","
elif result[^1] == ',':
result[^1] = '.'
result &= "."
if result[^1] == '{':
result &= "}"
else:
result[^1] = '}'
proc pp(t: HashSet[QueueID]): string =
result = "{"
var list = t.toSeq.sorted
for n in 2 ..< list.len:
if list[n-1] == list[n] - 1 and
(list[n-2] == QueueID(0) or list[n-2] == list[n] - 2):
list[n-1] = QueueID(0)
for w in list:
if w != QueueID(0):
result &= w.pp & ","
elif result[^1] == ',':
result[^1] = '.'
result &= "."
if result[^1] == '{':
result &= "}"
else:
result[^1] = '}'
# ------------------------------------------------------------------------------
# Public functions
# ------------------------------------------------------------------------------
proc checkJournal*[T: RdbBackendRef|MemBackendRef](
_: type T;
db: AristoDbRef;
): Result[void,(QueueID,AristoError)] =
let jrn = db.backend.journal
if jrn.isNil: return ok()
var
nToQid: seq[QueueID] # qids sorted by history/age
cached: HashSet[QueueID] # `nToQid[]` as set
saved: Table[QueueID,JrnRec]
error: (QueueID,AristoError)
when ExtraDebugMessages:
var
sizeTally = 0
maxBlock = 0
proc moan(n = -1, s = "", listOk = true) =
var txt = ""
if 0 <= n:
txt &= " (" & $n & ")"
if error[1] != AristoError(0):
txt &= " oops"
txt &=
" jLen=" & $jrn.len &
" tally=" & $sizeTally &
" maxBlock=" & $maxBlock &
""
if 0 < s.len:
txt &= " " & s
if error[1] != AristoError(0):
txt &=
" errQid=" & error[0].pp &
" error=" & $error[1] &
""
if listOk:
txt &=
"\n cached=" & cached.pp &
"\n saved=" & saved.pp &
""
debugEcho "*** checkJournal", txt
else:
template moan(n = -1, s = "", listOk = true) =
discard
# Collect cached handles
for n in 0 ..< jrn.len:
let qid = jrn[n]
# Must be no overlap
if qid in cached:
error = (qid,CheckJrnCachedQidOverlap)
moan(2)
return err(error)
cached.incl qid
nToQid.add qid
# Collect saved data
for (qid,fil) in db.backend.T.walkFilBe():
var jrnRec: JrnRec
jrnRec.src = fil.src
jrnRec.trg = fil.trg
when ExtraDebugMessages:
let rc = fil.blobify
if rc.isErr:
moan(5)
return err((qid,rc.error))
jrnRec.size = rc.value.len
if maxBlock < jrnRec.size:
maxBlock = jrnRec.size
sizeTally += jrnRec.size
saved[qid] = jrnRec
# Compare cached against saved data
let
savedQids = saved.keys.toSeq.toHashSet
unsavedQids = cached - savedQids
staleQids = savedQids - cached
if 0 < unsavedQids.len:
error = (unsavedQids.toSeq.sorted[0],CheckJrnSavedQidMissing)
moan(6)
return err(error)
if 0 < staleQids.len:
error = (staleQids.toSeq.sorted[0], CheckJrnSavedQidStale)
moan(7)
return err(error)
# Compare whether journal records link together
if 1 < nToQid.len:
noValueError("linked journal records"):
var prvRec = saved[nToQid[0]]
for n in 1 ..< nToQid.len:
let thisRec = saved[nToQid[n]]
if prvRec.trg != thisRec.src:
error = (nToQid[n],CheckJrnLinkingGap)
moan(8, "qidInx=" & $n)
return err(error)
prvRec = thisRec
moan(9, listOk=false)
ok()
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------

View File

@ -37,26 +37,6 @@ const
VOID_PATH_ID* = PathID()
## Void equivalent for a path ID value
EmptyQidPairSeq* = seq[(QueueID,QueueID)].default
## Useful shortcut
DEFAULT_QID_QUEUES* = [
(128, 0), # Consecutive list of (at least) 128 filter slots
( 16, 3), # Overflow list with (at least) 16 filter slots (with gap size 3)
# each slot covering 4 filters from previous list
( 1, 1), # ..
( 1, 1)]
## The `DEFAULT_QID_QUEUES` schedule has the following properties:
## * most recent consecutive slots: 128
## * maximal slots used: 151
## * covered backlog savings: between 216..231
## This was calculated via the `capacity()` function from the
## `filter_scheduler.nim` source. So, saving each block after executing
## it, the previous 128 block chain states will be directly accessible.
## For older block chain states (reaching back at least 216 blocks), the
## system can be positioned before the desired state and then executed
## forward block by block.
SUB_TREE_DISPOSAL_MAX* = 200_000
## Some limit for disposing sub-trees in one go using `delete()`.

View File

@ -17,7 +17,6 @@ import
stew/[byteutils, interval_set],
./aristo_desc/desc_backend,
./aristo_init/[memory_db, memory_only, rocks_db],
./aristo_journal/journal_scheduler,
"."/[aristo_constants, aristo_desc, aristo_hike, aristo_layers]
# ------------------------------------------------------------------------------
@ -151,32 +150,6 @@ func ppCodeHash(h: Hash256): string =
else:
result &= h.data.toHex.squeeze(hex=true,ignLen=true)
proc ppFid(fid: FilterID): string =
"@" & $fid
proc ppQid(qid: QueueID): string =
if not qid.isValid:
return "ø"
let
chn = qid.uint64 shr 62
qid = qid.uint64 and 0x3fff_ffff_ffff_ffffu64
result = "%"
if 0 < chn:
result &= $chn & ":"
if 0x0fff_ffff_ffff_ffffu64 <= qid.uint64:
block here:
if qid.uint64 == 0x0fff_ffff_ffff_ffffu64:
result &= "(2^60-1)"
elif qid.uint64 == 0x1fff_ffff_ffff_ffffu64:
result &= "(2^61-1)"
elif qid.uint64 == 0x3fff_ffff_ffff_ffffu64:
result &= "(2^62-1)"
else:
break here
return
result &= qid.toHex.stripZeros
proc ppVidList(vGen: openArray[VertexID]): string =
result = "["
if vGen.len <= 250:
@ -417,7 +390,7 @@ proc ppFRpp(
"<" & xStr[1..^2] & ">"
proc ppFilter(
fl: FilterRef;
fl: LayerDeltaRef;
db: AristoDbRef;
indent: int;
): string =
@ -430,9 +403,7 @@ proc ppFilter(
if fl.isNil:
result &= " n/a"
return
result &= pfx & "fid=" & fl.fid.ppFid
result &= pfx & "src=" & fl.src.to(HashKey).ppKey(db)
result &= pfx & "trg=" & fl.trg.to(HashKey).ppKey(db)
result &= pfx & "src=" & fl.src.ppKey(db)
result &= pfx & "vGen" & pfx1 & "[" &
fl.vGen.mapIt(it.ppVid).join(",") & "]"
result &= pfx & "sTab" & pfx1 & "{"
@ -530,9 +501,9 @@ proc ppLayer(
result &= "<layer>".doPrefix(false)
if vGenOk:
let
tLen = layer.final.vGen.len
tLen = layer.delta.vGen.len
info = "vGen(" & $tLen & ")"
result &= info.doPrefix(0 < tLen) & layer.final.vGen.ppVidList
result &= info.doPrefix(0 < tLen) & layer.delta.vGen.ppVidList
if sTabOk:
let
tLen = layer.delta.sTab.len
@ -591,21 +562,6 @@ proc pp*(lty: LeafTie, db = AristoDbRef(nil)): string =
proc pp*(vid: VertexID): string =
vid.ppVid
proc pp*(qid: QueueID): string =
qid.ppQid
proc pp*(fid: FilterID): string =
fid.ppFid
proc pp*(a: openArray[(QueueID,QueueID)]): string =
"[" & a.toSeq.mapIt("(" & it[0].pp & "," & it[1].pp & ")").join(",") & "]"
proc pp*(a: QidAction): string =
($a.op).replace("Qid", "") & "(" & a.qid.pp & "," & a.xid.pp & ")"
proc pp*(a: openArray[QidAction]): string =
"[" & a.toSeq.mapIt(it.pp).join(",") & "]"
proc pp*(vGen: openArray[VertexID]): string =
vGen.ppVidList
@ -765,7 +721,7 @@ proc pp*(
db.layersCc.pp(db, xTabOk=xTabOk, kMapOk=kMapOk, other=other, indent=indent)
proc pp*(
filter: FilterRef;
filter: LayerDeltaRef;
db = AristoDbRef(nil);
indent = 4;
): string =
@ -777,7 +733,7 @@ proc pp*(
limit = 100;
indent = 4;
): string =
result = db.roFilter.ppFilter(db, indent+1) & indent.toPfx
result = db.balancer.ppFilter(db, indent+1) & indent.toPfx
case be.kind:
of BackendMemory:
result &= be.MemBackendRef.ppBe(db, limit, indent+1)
@ -790,7 +746,7 @@ proc pp*(
db: AristoDbRef;
indent = 4;
backendOk = false;
filterOk = true;
balancerOk = true;
topOk = true;
stackOk = true;
kMapOk = true;
@ -799,7 +755,7 @@ proc pp*(
if topOk:
result = db.layersCc.pp(
db, xTabOk=true, kMapOk=kMapOk, other=true, indent=indent)
let stackOnlyOk = stackOk and not (topOk or filterOk or backendOk)
let stackOnlyOk = stackOk and not (topOk or balancerOk or backendOk)
if not stackOnlyOk:
result &= indent.toPfx & " level=" & $db.stack.len
if (stackOk and 0 < db.stack.len) or stackOnlyOk:
@ -816,8 +772,8 @@ proc pp*(
result &= " =>" & lStr
if backendOk:
result &= indent.toPfx & db.backend.pp(db, limit=limit, indent)
elif filterOk:
result &= indent.toPfx & db.roFilter.ppFilter(db, indent+1)
elif balancerOk:
result &= indent.toPfx & db.balancer.ppFilter(db, indent+1)
proc pp*(sdb: MerkleSignRef; indent = 4): string =
"count=" & $sdb.count &

View File

@ -0,0 +1,90 @@
# nimbus-eth1
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
# http://www.apache.org/licenses/LICENSE-2.0)
# * MIT license ([LICENSE-MIT](LICENSE-MIT) or
# http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.
## Aristo DB -- Delta filter management
## ====================================
##
import
std/[sequtils, tables],
eth/common,
results,
./aristo_desc,
./aristo_desc/desc_backend,
./aristo_delta/delta_siblings
# ------------------------------------------------------------------------------
# Public functions, save to backend
# ------------------------------------------------------------------------------
proc deltaPersistentOk*(db: AristoDbRef): bool =
## Check whether the read-only filter can be merged into the backend
not db.backend.isNil and db.isCentre
proc deltaPersistent*(
db: AristoDbRef; # Database
nxtFid = 0u64; # Next filter ID (if any)
reCentreOk = false;
): Result[void,AristoError] =
## Resolve (i.e. move) the backend filter into the physical backend database.
##
## This needs write permission on the backend DB for the argument `db`
## descriptor (see the function `aristo_desc.isCentre()`.) With the argument
## flag `reCentreOk` passed `true`, write permission will be temporarily
## acquired when needed.
##
## When merging the current backend filter, its reverse is used to update
## the balancers of other non-centre descriptors so that there is no
## visible database change for those descriptors.
##
let be = db.backend
if be.isNil:
return err(FilBackendMissing)
# Blind or missing filter
if db.balancer.isNil:
return ok()
# Make sure that the argument `db` is at the centre so the backend is in
# read-write mode for this peer.
let parent = db.getCentre
if db != parent:
if not reCentreOk:
return err(FilBackendRoMode)
db.reCentre
# Always re-centre to `parent` (in case `reCentreOk` was set)
defer: parent.reCentre
# Initialise peer filter balancer.
let updateSiblings = ? UpdateSiblingsRef.init db
defer: updateSiblings.rollback()
let lSst = SavedState(
src: db.balancer.src,
trg: db.balancer.kMap.getOrVoid(VertexID 1),
serial: nxtFid)
# Store structural single trie entries
let writeBatch = be.putBegFn()
be.putVtxFn(writeBatch, db.balancer.sTab.pairs.toSeq)
be.putKeyFn(writeBatch, db.balancer.kMap.pairs.toSeq)
be.putIdgFn(writeBatch, db.balancer.vGen)
be.putLstFn(writeBatch, lSst)
? be.putEndFn writeBatch # Finalise write batch
# Update dudes and this descriptor
? updateSiblings.update().commit()
ok()
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------
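# Usage sketch (illustrative, not part of this patch): once a transaction
# has been collapsed into the balancer, flush the delta to disk. The
# `blockNumber` variable is assumed from the calling context.
if db.deltaPersistentOk():
  let rc = db.deltaPersistent(nxtFid = blockNumber)
  if rc.isErr:
    echo "delta persist failed: ", rc.error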

View File

@ -18,12 +18,12 @@ import
# Public functions
# ------------------------------------------------------------------------------
proc merge*(
proc deltaMerge*(
db: AristoDbRef;
upper: FilterRef; # Src filter, `nil` is ok
lower: FilterRef; # Trg filter, `nil` is ok
beStateRoot: Hash256; # Merkle hash key
): Result[FilterRef,(VertexID,AristoError)] =
upper: LayerDeltaRef; # Src filter, `nil` is ok
lower: LayerDeltaRef; # Trg filter, `nil` is ok
beStateRoot: HashKey; # Merkle hash key
): Result[LayerDeltaRef,(VertexID,AristoError)] =
## Merge argument `upper` into the `lower` filter instance.
##
## Note that the naming `upper` and `lower` indicates that the filters are
@ -46,7 +46,7 @@ proc merge*(
if lower.isNil:
if upper.isNil:
# Even more degenerate case when both filters are void
return ok FilterRef(nil)
return ok LayerDeltaRef(nil)
if upper.src != beStateRoot:
return err((VertexID(1),FilStateRootMismatch))
return ok(upper)
@ -58,18 +58,18 @@ proc merge*(
return ok(lower)
# Verify stackability
if upper.src != lower.trg:
let lowerTrg = lower.kMap.getOrVoid VertexID(1)
if upper.src != lowerTrg:
return err((VertexID(0), FilTrgSrcMismatch))
if lower.src != beStateRoot:
return err((VertexID(0), FilStateRootMismatch))
# There is no need to deep copy table vertices as they will not be modified.
let newFilter = FilterRef(
let newFilter = LayerDeltaRef(
src: lower.src,
sTab: lower.sTab,
kMap: lower.kMap,
vGen: upper.vGen,
trg: upper.trg)
vGen: upper.vGen)
for (vid,vtx) in upper.sTab.pairs:
if vtx.isValid or not newFilter.sTab.hasKey vid:
@ -96,53 +96,12 @@ proc merge*(
return err((vid,rc.error))
# Check consistency
if (newFilter.src == newFilter.trg) !=
if (newFilter.src == newFilter.kMap.getOrVoid(VertexID 1)) !=
(newFilter.sTab.len == 0 and newFilter.kMap.len == 0):
return err((VertexID(0),FilSrcTrgInconsistent))
ok newFilter
proc merge*(
upper: FilterRef; # filter, not `nil`
lower: FilterRef; # filter, not `nil`
): Result[FilterRef,(VertexID,AristoError)] =
## Variant of `merge()` without optimising filters relative to the backend.
## Also, the filter arguments `upper` and `lower` are expected to be non-`nil`.
## Otherwise an error is returned.
##
## Comparing before and after merge
## ::
## arguments | merged result
## --------------------------------+--------------------------------
## (src2==trg1) --> upper --> trg2 |
## | (src1==trg0) --> newFilter --> trg2
## (src1==trg0) --> lower --> trg1 |
## |
if upper.isNil or lower.isNil:
return err((VertexID(0),FilNilFilterRejected))
# Verify stackability
if upper.src != lower.trg:
return err((VertexID(0), FilTrgSrcMismatch))
# There is no need to deep copy table vertices as they will not be modified.
let newFilter = FilterRef(
fid: upper.fid,
src: lower.src,
sTab: lower.sTab,
kMap: lower.kMap,
vGen: upper.vGen,
trg: upper.trg)
for (vid,vtx) in upper.sTab.pairs:
newFilter.sTab[vid] = vtx
for (vid,key) in upper.kMap.pairs:
newFilter.kMap[vid] = key
ok newFilter
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------
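# Usage sketch (illustrative, not part of this patch): merge a sibling's
# balancer (`upper`) with a freshly computed reverse filter `rev` (`lower`),
# where `trg` is the new backend root key, as in `delta_siblings.update()`.
let merged = db.deltaMerge(sibling.balancer, rev, trg).valueOr:
  raiseAssert "merge failed: " & $error[1]
sibling.balancer = merged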

View File

@ -10,6 +10,7 @@
import
std/tables,
eth/common,
results,
".."/[aristo_desc, aristo_get]
@ -19,8 +20,8 @@ import
proc revFilter*(
db: AristoDbRef; # Database
filter: FilterRef; # Filter to revert
): Result[FilterRef,(VertexID,AristoError)] =
filter: LayerDeltaRef; # Filter to revert
): Result[LayerDeltaRef,(VertexID,AristoError)] =
## Assemble reverse filter for the `filter` argument, i.e. changes to the
## backend that reverse the effect of applying this read-only filter.
##
@ -28,9 +29,7 @@ proc revFilter*(
## backend (excluding any optionally installed read-only filter.)
##
# Register MPT state roots for reverting back
let rev = FilterRef(
src: filter.trg,
trg: filter.src)
let rev = LayerDeltaRef(src: filter.kMap.getOrVoid(VertexID 1))
# Get vid generator state on backend
block:

View File

@ -1,5 +1,5 @@
# nimbus-eth1
# Copyright (c) 2023 Status Research & Development GmbH
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
# http://www.apache.org/licenses/LICENSE-2.0)
@ -10,8 +10,9 @@
import
results,
eth/common,
../aristo_desc,
"."/[filter_merge, filter_reverse]
"."/[delta_merge, delta_reverse]
type
UpdateState = enum
@ -21,10 +22,10 @@ type
UpdateSiblingsRef* = ref object
## Update transactional context
state: UpdateState ## No `rollback()` after `commit()`
db: AristoDbRef ## Main database access
roFilters: seq[(AristoDbRef,FilterRef)] ## Rollback data
rev: FilterRef ## Reverse filter set up
state: UpdateState
db: AristoDbRef ## Main database access
balancers: seq[(AristoDbRef,LayerDeltaRef)] ## Rollback data
rev: LayerDeltaRef ## Reverse filter set up
# ------------------------------------------------------------------------------
# Public constructor, commit, rollback
@ -34,8 +35,8 @@ proc rollback*(ctx: UpdateSiblingsRef) =
## Rollback any changes made by the `update()` function. Subsequent
## `rollback()` or `commit()` calls will be without effect.
if ctx.state == Updated:
for (d,f) in ctx.roFilters:
d.roFilter = f
for (d,f) in ctx.balancers:
d.balancer = f
ctx.state = Finished
@ -44,7 +45,7 @@ proc commit*(ctx: UpdateSiblingsRef): Result[void,AristoError] =
if ctx.state != Updated:
ctx.rollback()
return err(FilSiblingsCommitUnfinshed)
ctx.db.roFilter = FilterRef(nil)
ctx.db.balancer = LayerDeltaRef(nil)
ctx.state = Finished
ok()
@ -64,6 +65,8 @@ proc init*(
## database.
if not db.isCentre:
return err(FilBackendRoMode)
if db.nForked == 0:
return ok T(db: db) # No need to do anything
func fromVae(err: (VertexID,AristoError)): AristoError =
err[1]
@ -71,7 +74,7 @@ proc init*(
# Filter rollback context
ok T(
db: db,
rev: ? db.revFilter(db.roFilter).mapErr fromVae) # Reverse filter
rev: ? db.revFilter(db.balancer).mapErr fromVae) # Reverse filter
# ------------------------------------------------------------------------------
# Public functions
@ -87,17 +90,19 @@ proc update*(ctx: UpdateSiblingsRef): Result[UpdateSiblingsRef,AristoError] =
##
if ctx.state == Initial:
ctx.state = Updated
let db = ctx.db
# Update distributed filters. Note that the physical backend database
# must not have been updated, yet. So the new root key for the backend
# will be `db.roFilter.trg`.
for w in db.forked:
let rc = db.merge(w.roFilter, ctx.rev, db.roFilter.trg)
if rc.isErr:
ctx.rollback()
return err(rc.error[1])
ctx.roFilters.add (w, w.roFilter)
w.roFilter = rc.value
if not ctx.rev.isNil:
let db = ctx.db
# Update distributed filters. Note that the physical backend database
# must not have been updated, yet. So the new root key for the backend
# will be `db.balancer.kMap[$1]`.
let trg = db.balancer.kMap.getOrVoid(VertexID 1)
for w in db.forked:
let rc = db.deltaMerge(w.balancer, ctx.rev, trg)
if rc.isErr:
ctx.rollback()
return err(rc.error[1])
ctx.balancers.add (w, w.balancer)
w.balancer = rc.value
ok(ctx)
proc update*(
@ -106,15 +111,6 @@ proc update*(
## Variant of `update()` for joining with `init()`
(? rc).update()
# ------------------------------------------------------------------------------
# Public getter
# ------------------------------------------------------------------------------
func rev*(ctx: UpdateSiblingsRef): FilterRef =
## Getter, returns the reverse of the current read-only filter of the
## `init()` argument `db`.
ctx.rev
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------
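# Calling discipline sketch (illustrative, not part of this patch),
# mirroring `deltaPersistent()`; the `?` operator assumes an enclosing
# proc returning a `Result`.
let ctx = ? UpdateSiblingsRef.init db
defer: ctx.rollback()          # becomes a no-op once commit() succeeded
# ... save the balancer to the backend here ...
? ctx.update().commit()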

View File

@ -63,7 +63,7 @@ type
## Three tier database object supporting distributed instances.
top*: LayerRef ## Database working layer, mutable
stack*: seq[LayerRef] ## Stashed immutable parent layers
roFilter*: FilterRef ## Apply read filter (locks writing)
balancer*: LayerDeltaRef ## Balance out concurrent backend access
backend*: BackendRef ## Backend database (may well be `nil`)
txRef*: AristoTxRef ## Latest active transaction
@ -109,8 +109,8 @@ func isValid*(pld: PayloadRef): bool =
func isValid*(pid: PathID): bool =
pid != VOID_PATH_ID
func isValid*(filter: FilterRef): bool =
filter != FilterRef(nil)
func isValid*(filter: LayerDeltaRef): bool =
filter != LayerDeltaRef(nil)
func isValid*(root: Hash256): bool =
root != EMPTY_ROOT_HASH
@ -125,9 +125,6 @@ func isValid*(vid: VertexID): bool =
func isValid*(sqv: HashSet[VertexID]): bool =
sqv != EmptyVidSet
func isValid*(qid: QueueID): bool =
qid != QueueID(0)
# ------------------------------------------------------------------------------
# Public functions, miscellaneous
# ------------------------------------------------------------------------------
@ -201,16 +198,16 @@ proc fork*(
backend: db.backend)
if not noFilter:
clone.roFilter = db.roFilter # Ref is ok here (filters are immutable)
clone.balancer = db.balancer # Ref is ok here (filters are immutable)
if not noTopLayer:
clone.top = LayerRef.init()
if not db.roFilter.isNil:
clone.top.final.vGen = db.roFilter.vGen
if not db.balancer.isNil:
clone.top.delta.vGen = db.balancer.vGen
else:
let rc = clone.backend.getIdgFn()
if rc.isOk:
clone.top.final.vGen = rc.value
clone.top.delta.vGen = rc.value
elif rc.error != GetIdgNotFound:
return err(rc.error)
@ -260,6 +257,10 @@ proc forgetOthers*(db: AristoDbRef): Result[void,AristoError] =
db.dudes = DudesRef(nil)
ok()
# ------------------------------------------------------------------------------
# Public helpers
# ------------------------------------------------------------------------------
iterator rstack*(db: AristoDbRef): LayerRef =
# Stack in reverse order
for i in 0..<db.stack.len:

View File

@ -29,11 +29,6 @@ type
## Generic backend database retrieval function for a single
## `Aristo DB` hash lookup value.
GetFilFn* =
proc(qid: QueueID): Result[FilterRef,AristoError]
{.gcsafe, raises: [].}
## Generic backend database retrieval function for a filter record.
GetIdgFn* =
proc(): Result[seq[VertexID],AristoError] {.gcsafe, raises: [].}
## Generic backend database retrieval function for the ID generator
@ -44,11 +39,6 @@ type
{.gcsafe, raises: [].}
## Generic last recorded state stamp retrieval function
GetFqsFn* =
proc(): Result[seq[(QueueID,QueueID)],AristoError] {.gcsafe, raises: [].}
## Generic backend database retrieval function for some filter queue
## administration data (e.g. the bottom/top ID.)
# -------------
PutHdlRef* = ref object of RootRef
@ -73,11 +63,6 @@ type
## Generic backend database bulk storage function, `VOID_HASH_KEY`
## values indicate that records should be deleted.
PutFilFn* =
proc(hdl: PutHdlRef; qf: openArray[(QueueID,FilterRef)])
{.gcsafe, raises: [].}
## Generic backend database storage function for filter records.
PutIdgFn* =
proc(hdl: PutHdlRef; vs: openArray[VertexID])
{.gcsafe, raises: [].}
@ -90,12 +75,6 @@ type
## Generic last recorded state stamp storage function. This
## function replaces the currently saved state.
PutFqsFn* =
proc(hdl: PutHdlRef; vs: openArray[(QueueID,QueueID)])
{.gcsafe, raises: [].}
## Generic backend database filter ID state storage function. This
## function replaces the current filter ID list.
PutEndFn* =
proc(hdl: PutHdlRef): Result[void,AristoError] {.gcsafe, raises: [].}
## Generic transaction termination function
@ -128,44 +107,34 @@ type
BackendRef* = ref BackendObj
BackendObj* = object of RootObj
## Backend interface.
journal*: QidSchedRef ## Delta filters slot queue state
getVtxFn*: GetVtxFn ## Read vertex record
getKeyFn*: GetKeyFn ## Read Merkle hash/key
getFilFn*: GetFilFn ## Read back log filter
getIdgFn*: GetIdgFn ## Read vertex ID generator state
getLstFn*: GetLstFn ## Read saved state
getFqsFn*: GetFqsFn ## Read filter ID state
putBegFn*: PutBegFn ## Start bulk store session
putVtxFn*: PutVtxFn ## Bulk store vertex records
putKeyFn*: PutKeyFn ## Bulk store vertex hashes
putFilFn*: PutFilFn ## Store back log filter
putIdgFn*: PutIdgFn ## Store ID generator state
putLstFn*: PutLstFn ## Store saved state
putFqsFn*: PutFqsFn ## Store filter ID state
putEndFn*: PutEndFn ## Commit bulk store session
guestDbFn*: GuestDbFn ## Piggyback DB for another application
closeFn*: CloseFn ## Generic destructor
proc init*(trg: var BackendObj; src: BackendObj) =
trg.journal = src.journal
trg.getVtxFn = src.getVtxFn
trg.getKeyFn = src.getKeyFn
trg.getFilFn = src.getFilFn
trg.getIdgFn = src.getIdgFn
trg.getLstFn = src.getLstFn
trg.getFqsFn = src.getFqsFn
trg.putBegFn = src.putBegFn
trg.putVtxFn = src.putVtxFn
trg.putKeyFn = src.putKeyFn
trg.putFilFn = src.putFilFn
trg.putIdgFn = src.putIdgFn
trg.putLstFn = src.putLstFn
trg.putFqsFn = src.putFqsFn
trg.putEndFn = src.putEndFn
trg.guestDbFn = src.guestDbFn
trg.closeFn = src.closeFn

View File

@ -213,31 +213,13 @@ type
DelVidStaleVtx
# Functions from `aristo_filter.nim`
FilBackStepsExpected
FilBackendMissing
FilBackendRoMode
FilDudeFilterUpdateError
FilExecDublicateSave
FilExecHoldExpected
FilExecOops
FilExecSaveMissing
FilExecStackUnderflow
FilFilterInvalid
FilFilterNotFound
FilInxByQidFailed
FilNegativeEpisode
FilNilFilterRejected
FilNoMatchOnFifo
FilPrettyPointlessLayer
FilQidByLeFidFailed
FilQuBespokeFidTooSmall
FilQuSchedDisabled
FilSiblingsCommitUnfinshed
FilSrcTrgInconsistent
FilStateRootMismatch
FilStateRootMissing
FilTrgSrcMismatch
FilTrgTopSrcMismatch
# Get functions from `aristo_get.nim`
GetLeafMissing
@ -281,6 +263,9 @@ type
TxStackGarbled
TxStackUnderflow
TxPrettyPointlessLayer
TxStateRootMismatch
# Functions from `aristo_desc.nim`
MustBeOnCentre
NotAllowedOnCentre

View File

@ -23,13 +23,6 @@ import
stint
type
QueueID* = distinct uint64
## Identifier used to tag filter logs stored on the backend.
FilterID* = distinct uint64
## Identifier used to identify a particular filter. It is generated with
## the filter when stored to the database.
VertexID* = distinct uint64
## Unique identifier for a vertex of the `Aristo Trie`. The vertex is the
## prefix tree (aka `Patricia Trie`) component. When augmented by hash
@ -91,7 +84,6 @@ type
# ------------------------------------------------------------------------------
chronicles.formatIt(VertexID): $it
chronicles.formatIt(QueueID): $it
# ------------------------------------------------------------------------------
# Public helpers: `VertexID` scalar data model
@ -113,37 +105,6 @@ func `+`*(a: VertexID; b: uint64): VertexID = (a.uint64+b).VertexID
func `-`*(a: VertexID; b: uint64): VertexID = (a.uint64-b).VertexID
func `-`*(a, b: VertexID): uint64 = (a.uint64 - b.uint64)
# ------------------------------------------------------------------------------
# Public helpers: `QueueID` scalar data model
# ------------------------------------------------------------------------------
func `<`*(a, b: QueueID): bool {.borrow.}
func `<=`*(a, b: QueueID): bool {.borrow.}
func `==`*(a, b: QueueID): bool {.borrow.}
func cmp*(a, b: QueueID): int {.borrow.}
func `$`*(a: QueueID): string {.borrow.}
func `==`*(a: QueueID; b: static[uint]): bool = (a == QueueID(b))
func `+`*(a: QueueID; b: uint64): QueueID = (a.uint64+b).QueueID
func `-`*(a: QueueID; b: uint64): QueueID = (a.uint64-b).QueueID
func `-`*(a, b: QueueID): uint64 = (a.uint64 - b.uint64)
# ------------------------------------------------------------------------------
# Public helpers: `FilterID` scalar data model
# ------------------------------------------------------------------------------
func `<`*(a, b: FilterID): bool {.borrow.}
func `<=`*(a, b: FilterID): bool {.borrow.}
func `==`*(a, b: FilterID): bool {.borrow.}
func `$`*(a: FilterID): string {.borrow.}
func `==`*(a: FilterID; b: static[uint]): bool = (a == FilterID(b))
func `+`*(a: FilterID; b: uint64): FilterID = (a.uint64+b).FilterID
func `-`*(a: FilterID; b: uint64): FilterID = (a.uint64-b).FilterID
func `-`*(a, b: FilterID): uint64 = (a.uint64 - b.uint64)
# ------------------------------------------------------------------------------
# Public helpers: `PathID` ordered scalar data model
# ------------------------------------------------------------------------------
@ -198,6 +159,13 @@ func `==`*(a, b: PathID): bool =
func cmp*(a, b: PathID): int =
if a < b: -1 elif b < a: 1 else: 0
# ------------------------------------------------------------------------------
# Public helpers: `HashKey` ordered scalar data model
# ------------------------------------------------------------------------------
func len*(lid: HashKey): int =
lid.len.int # if lid.isHash: 32 else: lid.blob.len
template data*(lid: HashKey): openArray[byte] =
lid.buf.toOpenArray(0, lid.len - 1)
@ -213,13 +181,6 @@ func to*(lid: HashKey; T: type PathID): T =
else:
PathID()
# ------------------------------------------------------------------------------
# Public helpers: `HashKey` ordered scalar data model
# ------------------------------------------------------------------------------
func len*(lid: HashKey): int =
lid.len.int # if lid.isHash: 32 else: lid.blob.len
func fromBytes*(T: type HashKey; data: openArray[byte]): Result[T,void] =
## Write argument `data` of length 0 or between 2 and 32 bytes as a `HashKey`.
##

View File

@ -74,19 +74,10 @@ type
SavedState* = object
## Last saved state
src*: Hash256 ## Previous state hash
trg*: Hash256 ## Last state hash
src*: HashKey ## Previous state hash
trg*: HashKey ## Last state hash
serial*: uint64 ## Generic identifier from the application
FilterRef* = ref object
## Delta layer
fid*: FilterID ## Filter identifier
src*: Hash256 ## Applicable to this state root
trg*: Hash256 ## Resulting state root (i.e. `kMap[1]`)
sTab*: Table[VertexID,VertexRef] ## Filter structural vertex table
kMap*: Table[VertexID,HashKey] ## Filter Merkle hash key mapping
vGen*: seq[VertexID] ## Filter unique vertex ID generator
LayerDeltaRef* = ref object
## Delta layers are stacked implying a tables hierarchy. Table entries on
## a higher level take precedence over lower layer table entries. So an
@ -110,8 +101,10 @@ type
## tables. So a corresponding zero value or missing entry produces an
## inconsistent state that must be resolved.
##
src*: HashKey ## Only needed when used as a filter
sTab*: Table[VertexID,VertexRef] ## Structural vertex table
kMap*: Table[VertexID,HashKey] ## Merkle hash key mapping
vGen*: seq[VertexID] ## Recycling state for vertex IDs
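# Precedence sketch (illustrative, not part of this patch): when walking
# stacked deltas, the first entry found wins, and a void hash key acts as
# a deletion marker:
#
#   proc lookupKey(deltas: openArray[LayerDeltaRef];
#                  vid: VertexID): Opt[HashKey] =
#     for delta in deltas:             # top-most delta first
#       delta.kMap.withValue(vid, w):
#         return Opt.some w[]
#     Opt.none HashKey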
LayerFinalRef* = ref object
## Final tables fully supersede tables on lower layers when stacked as a
@ -123,7 +116,6 @@ type
##
pPrf*: HashSet[VertexID] ## Locked vertices (proof nodes)
fRpp*: Table[HashKey,VertexID] ## Key lookup for `pPrf[]` (proof nodes)
vGen*: seq[VertexID] ## Recycling state for vertex IDs
dirty*: HashSet[VertexID] ## Start nodes to re-hashify from
LayerRef* = ref LayerObj
@ -134,45 +126,6 @@ type
final*: LayerFinalRef ## Stored as latest version
txUid*: uint ## Transaction identifier if positive
# ----------------------
QidLayoutRef* = ref object
## Layout of cascaded list of filter ID slot queues where a slot queue
## with index `N+1` serves as an overflow queue of slot queue `N`.
q*: array[4,QidSpec]
QidSpec* = tuple
## Layout of a filter ID slot queue
size: uint ## Queue capacity, length within `1..wrap`
width: uint ## Instance gaps (relative to prev. item)
wrap: QueueID ## Range `1..wrap` for round-robin queue
QidSchedRef* = ref object of RootRef
## Current state of the filter queues
ctx*: QidLayoutRef ## Organisation of the FIFO
state*: seq[(QueueID,QueueID)] ## Current fill state
JournalInx* = tuple
## Helper structure for fetching filters from the journal.
inx: int ## Non negative journal index. latest=`0`
fil: FilterRef ## Valid filter
const
DefaultQidWrap = QueueID(0x3fff_ffff_ffff_ffffu64)
QidSpecSizeMax* = high(uint32).uint
## Maximum value allowed for a `size` value of a `QidSpec` object
QidSpecWidthMax* = high(uint32).uint
## Maximum value allowed for a `width` value of a `QidSpec` object
# ------------------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------------------
func max(a, b, c: int): int =
max(max(a,b),c)
# ------------------------------------------------------------------------------
# Public helpers (misc)
# ------------------------------------------------------------------------------
@ -321,7 +274,6 @@ func dup*(final: LayerFinalRef): LayerFinalRef =
LayerFinalRef(
pPrf: final.pPrf,
fRpp: final.fRpp,
vGen: final.vGen,
dirty: final.dirty)
func dup*(wp: VidVtxPair): VidVtxPair =
@ -336,51 +288,6 @@ func to*(node: NodeRef; T: type VertexRef): T =
## Extract a copy of the `VertexRef` part from a `NodeRef`.
node.VertexRef.dup
func to*(a: array[4,tuple[size, width: int]]; T: type QidLayoutRef): T =
## Convert a size-width array to a `QidLayoutRef` layout. Overly large
## array field values are adjusted to their maximal size.
var q: array[4,QidSpec]
for n in 0..3:
q[n] = (min(a[n].size.uint, QidSpecSizeMax),
min(a[n].width.uint, QidSpecWidthMax),
DefaultQidWrap)
q[0].width = 0
T(q: q)
func to*(a: array[4,tuple[size, width, wrap: int]]; T: type QidLayoutRef): T =
## Convert a size-width-wrap array to a `QidLayoutRef` layout. Overly large
## array field values are adjusted to their maximal size. Too small `wrap`
## field values are adjusted to their minimal size.
var q: array[4,QidSpec]
for n in 0..2:
q[n] = (min(a[n].size.uint, QidSpecSizeMax),
min(a[n].width.uint, QidSpecWidthMax),
QueueID(max(a[n].size + a[n+1].width, a[n].width+1,
min(a[n].wrap, DefaultQidWrap.int))))
q[0].width = 0
q[3] = (min(a[3].size.uint, QidSpecSizeMax),
min(a[3].width.uint, QidSpecWidthMax),
QueueID(max(a[3].size, a[3].width,
min(a[3].wrap, DefaultQidWrap.int))))
T(q: q)
# ------------------------------------------------------------------------------
# Public constructors for filter slot scheduler state
# ------------------------------------------------------------------------------
func init*(T: type QidSchedRef; a: array[4,(int,int)]): T =
## Constructor, see comments at the converter function `to()` for adjustments
## of the layout argument `a`.
T(ctx: a.to(QidLayoutRef))
func init*(T: type QidSchedRef; a: array[4,(int,int,int)]): T =
## Constructor, see comments at the converter function `to()` for adjustments
## of the layout argument `a`.
T(ctx: a.to(QidLayoutRef))
func init*(T: type QidSchedRef; ctx: QidLayoutRef): T =
T(ctx: ctx)
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------

View File

@ -14,7 +14,7 @@
{.push raises: [].}
import
eth/[common, trie/nibbles],
eth/trie/nibbles,
results,
"."/[aristo_desc, aristo_get, aristo_hike]

View File

@ -40,15 +40,6 @@ proc getLstUbe*(
return be.getLstFn()
err(GetLstNotFound)
proc getFqsUbe*(
db: AristoDbRef;
): Result[seq[(QueueID,QueueID)],AristoError] =
## Get the list of filter IDs from the unfiltered backend if available.
let be = db.backend
if not be.isNil:
return be.getFqsFn()
err(GetFqsNotFound)
proc getVtxUbe*(
db: AristoDbRef;
vid: VertexID;
@ -69,24 +60,14 @@ proc getKeyUbe*(
return be.getKeyFn vid
err GetKeyNotFound
proc getFilUbe*(
db: AristoDbRef;
qid: QueueID;
): Result[FilterRef,AristoError] =
## Get the filter from the unfiltered backend if available.
let be = db.backend
if not be.isNil:
return be.getFilFn qid
err GetFilNotFound
# ------------------
proc getIdgBE*(
db: AristoDbRef;
): Result[seq[VertexID],AristoError] =
## Get the ID generator state from the `backend` layer if available.
if not db.roFilter.isNil:
return ok(db.roFilter.vGen)
if not db.balancer.isNil:
return ok(db.balancer.vGen)
db.getIdgUbe()
proc getVtxBE*(
@ -94,8 +75,8 @@ proc getVtxBE*(
vid: VertexID;
): Result[VertexRef,AristoError] =
## Get the vertex from the (filtered) backend if available.
if not db.roFilter.isNil:
db.roFilter.sTab.withValue(vid, w):
if not db.balancer.isNil:
db.balancer.sTab.withValue(vid, w):
if w[].isValid:
return ok(w[])
return err(GetVtxNotFound)
@ -106,8 +87,8 @@ proc getKeyBE*(
vid: VertexID;
): Result[HashKey,AristoError] =
## Get the merkle hash/key from the (filtered) backend if available.
if not db.roFilter.isNil:
db.roFilter.kMap.withValue(vid, w):
if not db.balancer.isNil:
db.balancer.kMap.withValue(vid, w):
if w[].isValid:
return ok(w[])
return err(GetKeyNotFound)
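The lookup pattern above generalises: a staged `balancer` entry shadows the backend, and a staged entry that is present but invalid acts as a deletion marker. A minimal standalone sketch of this two-level lookup, with simplified stand-in types (`Option` instead of `Result`, strings instead of hash keys; assumptions for illustration only):

import std/[options, tables]

type
  VertexID = uint64
  Balancer = ref object
    # some() = staged write, none() = staged delete
    kMap: Table[VertexID, Option[string]]

proc getKeyBackend(vid: VertexID): Option[string] =
  ## Stand-in for the persistent backend lookup.
  if vid == 1: some("key-1") else: none(string)

proc getKeyBE(balancer: Balancer; vid: VertexID): Option[string] =
  ## Any staged entry, write or delete, shadows the backend value.
  if not balancer.isNil and vid in balancer.kMap:
    return balancer.kMap[vid]
  getKeyBackend(vid)

when isMainModule:
  let bal = Balancer(kMap: {VertexID(1): none(string),
                            VertexID(2): some("key-2")}.toTable)
  doAssert getKeyBE(bal, 1).isNone         # staged delete wins over backend
  doAssert getKeyBE(bal, 2).get == "key-2" # staged write wins
  doAssert getKeyBE(nil, 1).get == "key-1" # no balancer: straight to backend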

View File

@ -29,7 +29,6 @@ type
AdmPfx = 1 ## Admin data, e.g. ID generator
VtxPfx = 2 ## Vertex data
KeyPfx = 3 ## Key/hash data
FilPfx = 4 ## Filter logs (to revert to earlier state)
AdminTabID* = distinct uint64
## Access keys for admin table records. When exposed (e.g. when iterating
@ -50,8 +49,6 @@ type
case pfx*: StorageType ## Error sub-table
of VtxPfx, KeyPfx:
vid*: VertexID ## Vertex ID where the error occurred
of FilPfx:
qid*: QueueID ## Ditto
of AdmPfx:
aid*: AdminTabID
of Oops:
@ -66,7 +63,6 @@ type
const
AdmTabIdIdg* = AdminTabID(0) ## Access key for vertex ID generator state
AdmTabIdFqs* = AdminTabID(1) ## Access key for filter queue states
AdmTabIdLst* = AdminTabID(2) ## Access key for last state
# ------------------------------------------------------------------------------

View File

@ -45,11 +45,8 @@ type
## Database
sTab: Table[VertexID,Blob] ## Structural vertex table making up a trie
kMap: Table[VertexID,HashKey] ## Merkle hash key mapping
rFil: Table[QueueID,Blob] ## Backend journal filters
vGen: Option[seq[VertexID]] ## ID generator state
lSst: Option[SavedState] ## Last saved state
vFqs: Option[seq[(QueueID,QueueID)]]
noFq: bool ## No filter queues available
MemBackendRef* = ref object of TypedBackendRef
## Inheriting table so access can be extended for debugging purposes
@ -58,10 +55,8 @@ type
MemPutHdlRef = ref object of TypedPutHdlRef
sTab: Table[VertexID,Blob]
kMap: Table[VertexID,HashKey]
rFil: Table[QueueID,Blob]
vGen: Option[seq[VertexID]]
lSst: Option[SavedState]
vFqs: Option[seq[(QueueID,QueueID)]]
when extraTraceMessages:
import chronicles
@ -113,19 +108,6 @@ proc getKeyFn(db: MemBackendRef): GetKeyFn =
return ok key
err(GetKeyNotFound)
proc getFilFn(db: MemBackendRef): GetFilFn =
if db.mdb.noFq:
result =
proc(qid: QueueID): Result[FilterRef,AristoError] =
err(FilQuSchedDisabled)
else:
result =
proc(qid: QueueID): Result[FilterRef,AristoError] =
let data = db.mdb.rFil.getOrDefault(qid, EmptyBlob)
if 0 < data.len:
return data.deblobify FilterRef
err(GetFilNotFound)
proc getIdgFn(db: MemBackendRef): GetIdgFn =
result =
proc(): Result[seq[VertexID],AristoError]=
@ -140,18 +122,6 @@ proc getLstFn(db: MemBackendRef): GetLstFn =
return ok db.mdb.lSst.unsafeGet
err(GetLstNotFound)
proc getFqsFn(db: MemBackendRef): GetFqsFn =
if db.mdb.noFq:
result =
proc(): Result[seq[(QueueID,QueueID)],AristoError] =
err(FilQuSchedDisabled)
else:
result =
proc(): Result[seq[(QueueID,QueueID)],AristoError] =
if db.mdb.vFqs.isSome:
return ok db.mdb.vFqs.unsafeGet
err(GetFqsNotFound)
# -------------
proc putBegFn(db: MemBackendRef): PutBegFn =
@ -186,34 +156,6 @@ proc putKeyFn(db: MemBackendRef): PutKeyFn =
for (vid,key) in vkps:
hdl.kMap[vid] = key
proc putFilFn(db: MemBackendRef): PutFilFn =
if db.mdb.noFq:
result =
proc(hdl: PutHdlRef; vf: openArray[(QueueID,FilterRef)]) =
let hdl = hdl.getSession db
if hdl.error.isNil:
hdl.error = TypedPutHdlErrRef(
pfx: FilPfx,
qid: (if 0 < vf.len: vf[0][0] else: QueueID(0)),
code: FilQuSchedDisabled)
else:
result =
proc(hdl: PutHdlRef; vf: openArray[(QueueID,FilterRef)]) =
let hdl = hdl.getSession db
if hdl.error.isNil:
for (qid,filter) in vf:
if filter.isValid:
let rc = filter.blobify()
if rc.isErr:
hdl.error = TypedPutHdlErrRef(
pfx: FilPfx,
qid: qid,
code: rc.error)
return
hdl.rFil[qid] = rc.value
else:
hdl.rFil[qid] = EmptyBlob
proc putIdgFn(db: MemBackendRef): PutIdgFn =
result =
proc(hdl: PutHdlRef; vs: openArray[VertexID]) =
@ -228,24 +170,6 @@ proc putLstFn(db: MemBackendRef): PutLstFn =
if hdl.error.isNil:
hdl.lSst = some(lst)
proc putFqsFn(db: MemBackendRef): PutFqsFn =
if db.mdb.noFq:
result =
proc(hdl: PutHdlRef; fs: openArray[(QueueID,QueueID)]) =
let hdl = hdl.getSession db
if hdl.error.isNil:
hdl.error = TypedPutHdlErrRef(
pfx: AdmPfx,
aid: AdmTabIdFqs,
code: FilQuSchedDisabled)
else:
result =
proc(hdl: PutHdlRef; fs: openArray[(QueueID,QueueID)]) =
let hdl = hdl.getSession db
if hdl.error.isNil:
hdl.vFqs = some(fs.toSeq)
proc putEndFn(db: MemBackendRef): PutEndFn =
result =
proc(hdl: PutHdlRef): Result[void,AristoError] =
@ -255,8 +179,6 @@ proc putEndFn(db: MemBackendRef): PutEndFn =
case hdl.error.pfx:
of VtxPfx, KeyPfx: trace logTxt "putEndFn: vtx/key failed",
pfx=hdl.error.pfx, vid=hdl.error.vid, error=hdl.error.code
of FilPfx: trace logTxt "putEndFn: filter failed",
pfx=hdl.error.pfx, qid=hdl.error.qid, error=hdl.error.code
of AdmPfx: trace logTxt "putEndFn: admin failed",
pfx=AdmPfx, aid=hdl.error.aid.uint64, error=hdl.error.code
of Oops: trace logTxt "putEndFn: failed",
@ -275,12 +197,6 @@ proc putEndFn(db: MemBackendRef): PutEndFn =
else:
db.mdb.kMap.del vid
for (qid,data) in hdl.rFil.pairs:
if 0 < data.len:
db.mdb.rFil[qid] = data
else:
db.mdb.rFil.del qid
if hdl.vGen.isSome:
let vGen = hdl.vGen.unsafeGet
if vGen.len == 0:
@ -291,13 +207,6 @@ proc putEndFn(db: MemBackendRef): PutEndFn =
if hdl.lSst.isSome:
db.mdb.lSst = hdl.lSst
if hdl.vFqs.isSome:
let vFqs = hdl.vFqs.unsafeGet
if vFqs.len == 0:
db.mdb.vFqs = none(seq[(QueueID,QueueID)])
else:
db.mdb.vFqs = some(vFqs)
ok()
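The `putBegFn`/`put*Fn`/`putEndFn` triple above implements a staged write batch: each `put*Fn` records data in the session handle, errors latch in the handle, and `putEndFn` applies everything only if the session stayed error-free, with empty blobs acting as deletions. A minimal sketch of that pattern (hypothetical names, not the backend API):

import std/tables

type
  PutHdl = ref object
    staged: Table[uint64, string]   # vid -> serialised record, "" deletes
    failed: bool
  Store = ref object
    tab: Table[uint64, string]

proc putBeg(db: Store): PutHdl =
  PutHdl()

proc putVtx(hdl: PutHdl; vid: uint64; data: string) =
  if not hdl.failed:                # errors latch: later puts are ignored
    hdl.staged[vid] = data

proc putEnd(db: Store; hdl: PutHdl): bool =
  ## Apply the whole session in one go; empty records delete entries.
  if hdl.failed:
    return false
  for vid, data in hdl.staged:
    if 0 < data.len:
      db.tab[vid] = data
    else:
      db.tab.del vid
  true

when isMainModule:
  let db = Store()
  let hdl = db.putBeg()
  hdl.putVtx(7, "blob")
  hdl.putVtx(8, "")                 # schedules a delete
  doAssert db.putEnd(hdl)
  doAssert db.tab == {7u64: "blob"}.toTable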
# -------------
@ -316,37 +225,25 @@ proc closeFn(db: MemBackendRef): CloseFn =
# Public functions
# ------------------------------------------------------------------------------
proc memoryBackend*(qidLayout: QidLayoutRef): BackendRef =
proc memoryBackend*(): BackendRef =
let db = MemBackendRef(
beKind: BackendMemory,
mdb: MemDbRef())
db.mdb.noFq = qidLayout.isNil
db.getVtxFn = getVtxFn db
db.getKeyFn = getKeyFn db
db.getFilFn = getFilFn db
db.getIdgFn = getIdgFn db
db.getLstFn = getLstFn db
db.getFqsFn = getFqsFn db
db.putBegFn = putBegFn db
db.putVtxFn = putVtxFn db
db.putKeyFn = putKeyFn db
db.putFilFn = putFilFn db
db.putIdgFn = putIdgFn db
db.putLstFn = putLstFn db
db.putFqsFn = putFqsFn db
db.putEndFn = putEndFn db
db.guestDbFn = guestDbFn db
db.closeFn = closeFn db
# Set up filter management table
if not db.mdb.noFq:
db.journal = QidSchedRef(ctx: qidLayout)
db
proc dup*(db: MemBackendRef): MemBackendRef =
@ -382,21 +279,6 @@ iterator walkKey*(
if key.isValid:
yield (vid, key)
iterator walkFil*(
be: MemBackendRef;
): tuple[qid: QueueID, filter: FilterRef] =
## Iteration over the filter sub-table.
if not be.mdb.noFq:
for n,qid in be.mdb.rFil.keys.toSeq.mapIt(it).sorted:
let data = be.mdb.rFil.getOrDefault(qid, EmptyBlob)
if 0 < data.len:
let rc = data.deblobify FilterRef
if rc.isErr:
when extraTraceMessages:
debug logTxt "walkFilFn() skip", n, qid, error=rc.error
else:
yield (qid, rc.value)
iterator walk*(
be: MemBackendRef;
@ -407,10 +289,8 @@ iterator walk*(
## yield record is still incremented.
if be.mdb.vGen.isSome:
yield(AdmPfx, AdmTabIdIdg.uint64, be.mdb.vGen.unsafeGet.blobify)
if not be.mdb.noFq:
if be.mdb.vFqs.isSome:
yield(AdmPfx, AdmTabIdFqs.uint64, be.mdb.vFqs.unsafeGet.blobify)
if be.mdb.lSst.isSome:
yield(AdmPfx, AdmTabIdLst.uint64, be.mdb.lSst.unsafeGet.blobify)
for vid in be.mdb.sTab.keys.toSeq.mapIt(it).sorted:
let data = be.mdb.sTab.getOrDefault(vid, EmptyBlob)
@ -420,12 +300,6 @@ iterator walk*(
for (vid,key) in be.walkKey:
yield (KeyPfx, vid.uint64, @(key.data))
if not be.mdb.noFq:
for lid in be.mdb.rFil.keys.toSeq.mapIt(it.uint64).sorted.mapIt(it.QueueID):
let data = be.mdb.rFil.getOrDefault(lid, EmptyBlob)
if 0 < data.len:
yield (FilPfx, lid.uint64, data)
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------

View File

@ -28,8 +28,7 @@ type
export
BackendType,
GuestDbRef,
MemBackendRef,
QidLayoutRef
MemBackendRef
# ------------------------------------------------------------------------------
# Public helpers
@ -52,7 +51,6 @@ proc kind*(
proc init*(
T: type AristoDbRef; # Target type
B: type MemBackendRef; # Backend type
qidLayout: QidLayoutRef; # Optional fifo schedule
): T =
## Memory backend constructor.
##
@ -62,7 +60,7 @@ proc init*(
## layouts might render the filter history data unmanageable.
##
when B is MemBackendRef:
AristoDbRef(top: LayerRef.init(), backend: memoryBackend(qidLayout))
AristoDbRef(top: LayerRef.init(), backend: memoryBackend())
proc init*(
T: type AristoDbRef; # Target type
@ -79,8 +77,7 @@ proc init*(
AristoDbRef(top: LayerRef.init())
elif B is MemBackendRef:
let qidLayout = DEFAULT_QID_QUEUES.to(QidLayoutRef)
AristoDbRef(top: LayerRef.init(), backend: memoryBackend(qidLayout))
AristoDbRef(top: LayerRef.init(), backend: memoryBackend())
proc init*(
T: type AristoDbRef; # Target type

View File

@ -35,10 +35,9 @@ export
proc newAristoRdbDbRef(
basePath: string;
qidLayout: QidLayoutRef;
): Result[AristoDbRef, AristoError]=
let
be = ? rocksDbBackend(basePath, qidLayout)
be = ? rocksDbAristoBackend(basePath)
vGen = block:
let rc = be.getIdgFn()
if rc.isErr:
@ -47,8 +46,8 @@ proc newAristoRdbDbRef(
rc.value
ok AristoDbRef(
top: LayerRef(
delta: LayerDeltaRef(),
final: LayerFinalRef(vGen: vGen)),
delta: LayerDeltaRef(vGen: vGen),
final: LayerFinalRef()),
backend: be)
# ------------------------------------------------------------------------------
@ -59,28 +58,12 @@ proc init*[W: RdbBackendRef](
T: type AristoDbRef;
B: type W;
basePath: string;
qidLayout: QidLayoutRef;
): Result[T, AristoError] =
## Generic constructor, `basePath` argument is ignored for memory backend
## databases (which also always succeed in initialising.)
##
## If the `qidLayout` argument is set `QidLayoutRef(nil)`, the backend
## database will not provide filter history management. Providing a different
## scheduler layout should be used with care as table access with different
## layouts might render the filter history data unmanageable.
##
when B is RdbBackendRef:
basePath.newAristoRdbDbRef qidLayout
proc init*[W: RdbBackendRef](
T: type AristoDbRef;
B: type W;
basePath: string;
): Result[T, AristoError] =
## Variant of `init()` using default schedule.
##
when B is RdbBackendRef:
basePath.newAristoRdbDbRef DEFAULT_QID_QUEUES.to(QidLayoutRef)
basePath.newAristoRdbDbRef()
proc getRocksDbFamily*(
gdb: GuestDbRef;

View File

@ -110,27 +110,6 @@ proc getKeyFn(db: RdbBackendRef): GetKeyFn =
err(GetKeyNotFound)
proc getFilFn(db: RdbBackendRef): GetFilFn =
if db.rdb.noFq:
result =
proc(qid: QueueID): Result[FilterRef,AristoError] =
err(FilQuSchedDisabled)
else:
result =
proc(qid: QueueID): Result[FilterRef,AristoError] =
# Fetch serialised data record.
let data = db.rdb.getByPfx(FilPfx, qid.uint64).valueOr:
when extraTraceMessages:
trace logTxt "getFilFn: failed", qid, error=error[0], info=error[1]
return err(error[0])
# Decode data record
if 0 < data.len:
return data.deblobify FilterRef
err(GetFilNotFound)
proc getIdgFn(db: RdbBackendRef): GetIdgFn =
result =
proc(): Result[seq[VertexID],AristoError]=
@ -162,28 +141,6 @@ proc getLstFn(db: RdbBackendRef): GetLstFn =
# Decode data record
data.deblobify SavedState
proc getFqsFn(db: RdbBackendRef): GetFqsFn =
if db.rdb.noFq:
result =
proc(): Result[seq[(QueueID,QueueID)],AristoError] =
err(FilQuSchedDisabled)
else:
result =
proc(): Result[seq[(QueueID,QueueID)],AristoError]=
# Fetch serialised data record.
let data = db.rdb.getByPfx(AdmPfx, AdmTabIdFqs.uint64).valueOr:
when extraTraceMessages:
trace logTxt "getFqsFn: failed", error=error[0], info=error[1]
return err(error[0])
if data.len == 0:
let w = EmptyQidPairSeq # Must be `let`
return ok w # Compiler error with `ok(EmptyQidPairSeq)`
# Decode data record
data.deblobify seq[(QueueID,QueueID)]
# -------------
proc putBegFn(db: RdbBackendRef): PutBegFn =
@ -243,45 +200,6 @@ proc putKeyFn(db: RdbBackendRef): PutKeyFn =
code: error[1],
info: error[2])
proc putFilFn(db: RdbBackendRef): PutFilFn =
if db.rdb.noFq:
result =
proc(hdl: PutHdlRef; vf: openArray[(QueueID,FilterRef)]) =
let hdl = hdl.getSession db
if hdl.error.isNil:
hdl.error = TypedPutHdlErrRef(
pfx: FilPfx,
qid: (if 0 < vf.len: vf[0][0] else: QueueID(0)),
code: FilQuSchedDisabled)
else:
result =
proc(hdl: PutHdlRef; vrps: openArray[(QueueID,FilterRef)]) =
let hdl = hdl.getSession db
if hdl.error.isNil:
# Collect batch session arguments
var batch: seq[(uint64,Blob)]
for (qid,filter) in vrps:
if filter.isValid:
let rc = filter.blobify()
if rc.isErr:
hdl.error = TypedPutHdlErrRef(
pfx: FilPfx,
qid: qid,
code: rc.error)
return
batch.add (qid.uint64, rc.value)
else:
batch.add (qid.uint64, EmptyBlob)
# Stash batch session data
db.rdb.putByPfx(FilPfx, batch).isOkOr:
hdl.error = TypedPutHdlErrRef(
pfx: FilPfx,
qid: QueueID(error[0]),
code: error[1],
info: error[2])
proc putIdgFn(db: RdbBackendRef): PutIdgFn =
result =
proc(hdl: PutHdlRef; vs: openArray[VertexID]) =
@ -307,31 +225,6 @@ proc putLstFn(db: RdbBackendRef): PutLstFn =
code: error[1],
info: error[2])
proc putFqsFn(db: RdbBackendRef): PutFqsFn =
if db.rdb.noFq:
result =
proc(hdl: PutHdlRef; fs: openArray[(QueueID,QueueID)]) =
let hdl = hdl.getSession db
if hdl.error.isNil:
hdl.error = TypedPutHdlErrRef(
pfx: AdmPfx,
code: FilQuSchedDisabled)
else:
result =
proc(hdl: PutHdlRef; vs: openArray[(QueueID,QueueID)]) =
let hdl = hdl.getSession db
if hdl.error.isNil:
# Stash batch session data
let fqs = if 0 < vs.len: vs.blobify else: EmptyBlob
db.rdb.putByPfx(AdmPfx, @[(AdmTabIdFqs.uint64, fqs)]).isOkOr:
hdl.error = TypedPutHdlErrRef(
pfx: AdmPfx,
aid: AdmTabIdFqs,
code: error[1],
info: error[2])
proc putEndFn(db: RdbBackendRef): PutEndFn =
result =
proc(hdl: PutHdlRef): Result[void,AristoError] =
@ -374,10 +267,7 @@ proc closeFn(db: RdbBackendRef): CloseFn =
# Public functions
# ------------------------------------------------------------------------------
proc rocksDbBackend*(
path: string;
qidLayout: QidLayoutRef;
): Result[BackendRef,AristoError] =
proc rocksDbAristoBackend*(path: string): Result[BackendRef,AristoError] =
let db = RdbBackendRef(
beKind: BackendRocksDB)
@ -390,38 +280,20 @@ proc rocksDbBackend*(
error=rc.error[0], info=rc.error[1]
return err(rc.error[0])
db.rdb.noFq = qidLayout.isNil
db.getVtxFn = getVtxFn db
db.getKeyFn = getKeyFn db
db.getFilFn = getFilFn db
db.getIdgFn = getIdgFn db
db.getLstFn = getLstFn db
db.getFqsFn = getFqsFn db
db.putBegFn = putBegFn db
db.putVtxFn = putVtxFn db
db.putKeyFn = putKeyFn db
db.putFilFn = putFilFn db
db.putIdgFn = putIdgFn db
db.putLstFn = putLstFn db
db.putFqsFn = putFqsFn db
db.putEndFn = putEndFn db
db.guestDbFn = guestDbFn db
db.closeFn = closeFn db
# Set up filter management table
if not db.rdb.noFq:
db.journal = QidSchedRef(ctx: qidLayout)
db.journal.state = block:
let rc = db.getFqsFn()
if rc.isErr:
db.closeFn(flush = false)
return err(rc.error)
rc.value
ok db
proc dup*(db: RdbBackendRef): RdbBackendRef =
@ -441,20 +313,8 @@ iterator walk*(
##
## Non-decodable entries are stepped over while the counter `n` of the
## yield record is still incremented.
if be.rdb.noFq:
for w in be.rdb.walk:
case w.pfx:
of AdmPfx:
if w.xid == AdmTabIdFqs.uint64:
continue
of FilPfx:
break # last sub-table
else:
discard
yield w
else:
for w in be.rdb.walk:
yield w
for w in be.rdb.walk:
yield w
iterator walkVtx*(
be: RdbBackendRef;
@ -474,16 +334,6 @@ iterator walkKey*(
continue
yield (VertexID(xid), lid)
iterator walkFil*(
be: RdbBackendRef;
): tuple[qid: QueueID, filter: FilterRef] =
## Variant of `walk()` iteration over the filter sub-table.
if not be.rdb.noFq:
for (xid, data) in be.rdb.walk FilPfx:
let rc = data.deblobify FilterRef
if rc.isOk:
yield (QueueID(xid), rc.value)
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------

View File

@ -1,234 +0,0 @@
# nimbus-eth1
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
# http://www.apache.org/licenses/LICENSE-2.0)
# * MIT license ([LICENSE-MIT](LICENSE-MIT) or
# http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.
## Aristo DB -- Filter and journal management
## ==========================================
##
import
std/[options, sequtils, sets, tables],
eth/common,
results,
"."/[aristo_desc, aristo_get, aristo_vid],
./aristo_desc/desc_backend,
./aristo_journal/[
filter_state_root, filter_merge, filter_reverse, filter_siblings,
journal_get, journal_ops]
# ------------------------------------------------------------------------------
# Public functions, construct filters
# ------------------------------------------------------------------------------
proc journalFwdFilter*(
db: AristoDbRef; # Database
layer: LayerRef; # Layer to derive filter from
chunkedMpt = false; # Relax for snap/proof scenario
): Result[FilterRef,(VertexID,AristoError)] =
## Assemble forward delta, i.e. changes to the backend equivalent to applying
## the current top layer.
##
## Typically, the `layer` argument would reflect a change of the MPT but there
## is the case of partial MPTs sent over the network when synchronising (see
## `snap` protocol.) In this case, the state root might not see a change on
## the `layer` argument which would result in an error unless the argument
## `chunkedMpt` is set `true`.
##
## This delta is taken against the current backend including optional
## read-only filter.
##
# Register the Merkle hash keys of the MPT where this reverse filter will be
# applicable: `be => fg`
let (srcRoot, trgRoot) = block:
let rc = db.getLayerStateRoots(layer.delta, chunkedMpt)
if rc.isOK:
(rc.value.be, rc.value.fg)
elif rc.error == FilPrettyPointlessLayer:
return ok FilterRef(nil)
else:
return err((VertexID(1), rc.error))
ok FilterRef(
src: srcRoot,
sTab: layer.delta.sTab,
kMap: layer.delta.kMap,
vGen: layer.final.vGen.vidReorg, # Compact recycled IDs
trg: trgRoot)
# ------------------------------------------------------------------------------
# Public functions, apply/install filters
# ------------------------------------------------------------------------------
proc journalMerge*(
db: AristoDbRef; # Database
filter: FilterRef; # Filter to apply to database
): Result[void,(VertexID,AristoError)] =
## Merge the argument `filter` into the read-only filter layer. Note that
## this function has no control of the filter source. Having merged the
## argument `filter`, all the `top` and `stack` layers should be cleared.
##
let ubeRoot = block:
let rc = db.getKeyUbe VertexID(1)
if rc.isOk:
rc.value.to(Hash256)
elif rc.error == GetKeyNotFound:
EMPTY_ROOT_HASH
else:
return err((VertexID(1),rc.error))
db.roFilter = ? db.merge(filter, db.roFilter, ubeRoot)
if db.roFilter.src == db.roFilter.trg:
# Under normal conditions, the root keys cannot be the same unless the
# database is empty. This changes if there is a fixed root vertex as
# used with the `snap` sync protocol boundary proof. In that case, there
# can be no history chain and the filter is just another cache.
if VertexID(1) notin db.top.final.pPrf:
db.roFilter = FilterRef(nil)
ok()
proc journalUpdateOk*(db: AristoDbRef): bool =
## Check whether the read-only filter can be merged into the backend
not db.backend.isNil and db.isCentre
proc journalUpdate*(
db: AristoDbRef; # Database
nxtFid = none(FilterID); # Next filter ID (if any)
reCentreOk = false;
): Result[void,AristoError] =
## Resolve (i.e. move) the backend filter into the physical backend database.
##
## This needs write permission on the backend DB for the argument `db`
## descriptor (see the function `aristo_desc.isCentre()`.) With the argument
## flag `reCentreOk` passed `true`, write permission will be temporarily
## acquired when needed.
##
## When merging the current backend filter, its reverse will be stored as
## back log on the filter fifos (so the current state can be retrieved.)
## Also, other non-centre descriptors are updated so there is no visible
## database change for these descriptors.
##
## Caveat: This function will delete entries from the cascaded fifos if the
## current backend filter is the reverse compiled from the top item
## chain from the cascaded fifos as implied by the function
## `forkBackLog()`, for example.
##
let be = db.backend
if be.isNil:
return err(FilBackendMissing)
# Blind or missing filter
if db.roFilter.isNil:
return ok()
# Make sure that the argument `db` is at the centre so the backend is in
# read-write mode for this peer.
let parent = db.getCentre
if db != parent:
if not reCentreOk:
return err(FilBackendRoMode)
db.reCentre
# Always re-centre to `parent` (in case `reCentreOk` was set)
defer: parent.reCentre
# Initialise peer filter balancer.
let updateSiblings = ? UpdateSiblingsRef.init db
defer: updateSiblings.rollback()
# Figure out how to save the reverse filter on a cascades slots queue
var instr: JournalOpsMod
if not be.journal.isNil: # Otherwise ignore
block getInstr:
# Compile instruction for updating filters on the cascaded fifos
if db.roFilter.isValid:
let ovLap = be.journalGetOverlap db.roFilter
if 0 < ovLap:
instr = ? be.journalOpsDeleteSlots ovLap # Revert redundant entries
break getInstr
instr = ? be.journalOpsPushSlot(
updateSiblings.rev, # Store reverse filter
nxtFid) # Set filter ID (if any)
let lSst = SavedState(
src: db.roFilter.src,
trg: db.roFilter.trg,
serial: nxtFid.get(otherwise=FilterID(0)).uint64)
# Store structural single trie entries
let writeBatch = be.putBegFn()
be.putVtxFn(writeBatch, db.roFilter.sTab.pairs.toSeq)
be.putKeyFn(writeBatch, db.roFilter.kMap.pairs.toSeq)
be.putIdgFn(writeBatch, db.roFilter.vGen)
be.putLstFn(writeBatch, lSst)
# Store `instr` as history journal entry
if not be.journal.isNil:
be.putFilFn(writeBatch, instr.put)
be.putFqsFn(writeBatch, instr.scd.state)
? be.putEndFn writeBatch # Finalise write batch
# Update dudes and this descriptor
? updateSiblings.update().commit()
# Finally update slot queue scheduler state (as saved)
if not be.journal.isNil:
be.journal.state = instr.scd.state
db.roFilter = FilterRef(nil)
ok()
proc journalFork*(
db: AristoDbRef;
episode: int;
): Result[AristoDbRef,AristoError] =
## Construct a new descriptor on the `db` backend which enters it through a
## set of backend filters from the casacded filter fifos. The filter used is
## addressed as `episode`, where the most recend backward filter has episode
## `0`, the next older has episode `1`, etc.
##
## Use `aristo_filter.forget()` directive to clean up this descriptor.
##
let be = db.backend
if be.isNil:
return err(FilBackendMissing)
if episode < 0:
return err(FilNegativeEpisode)
let
instr = ? be.journalOpsFetchSlots(backSteps = episode+1)
clone = ? db.fork(noToplayer = true)
clone.top = LayerRef.init()
clone.top.final.vGen = instr.fil.vGen
clone.roFilter = instr.fil
ok clone
proc journalFork*(
db: AristoDbRef;
fid: Option[FilterID];
earlierOK = false;
): Result[AristoDbRef,AristoError] =
## Variant of `journalFork()` for forking to a particular filter ID (or the
## nearest predecessor if `earlierOK` is passed `true`) if there is some
## filter ID `fid`.
##
## Otherwise, the oldest filter is forked to (regardless of the value of
## `earlierOK`.)
##
let be = db.backend
if be.isNil:
return err(FilBackendMissing)
let fip = ? be.journalGetInx(fid, earlierOK)
db.journalFork fip.inx
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------

View File

@ -1,77 +0,0 @@
# nimbus-eth1
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
# http://www.apache.org/licenses/LICENSE-2.0)
# * MIT license ([LICENSE-MIT](LICENSE-MIT) or
# http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.
import
std/tables,
eth/common,
results,
".."/[aristo_desc, aristo_get]
type
LayerStateRoot* = tuple
## Helper structure for analysing state roots.
be: Hash256 ## Backend state root
fg: Hash256 ## Layer or filter implied state root
# ------------------------------------------------------------------------------
# Public functions
# ------------------------------------------------------------------------------
proc getLayerStateRoots*(
db: AristoDbRef;
delta: LayerDeltaRef;
chunkedMpt: bool;
): Result[LayerStateRoot,AristoError] =
## Get the Merkle hash key for the target state root to arrive at after this
## reverse filter was applied.
##
var spr: LayerStateRoot
let sprBeKey = block:
let rc = db.getKeyBE VertexID(1)
if rc.isOk:
rc.value
elif rc.error == GetKeyNotFound:
VOID_HASH_KEY
else:
return err(rc.error)
spr.be = sprBeKey.to(Hash256)
spr.fg = block:
let key = delta.kMap.getOrVoid VertexID(1)
if key.isValid:
key.to(Hash256)
else:
EMPTY_ROOT_HASH
if spr.fg.isValid:
return ok(spr)
if not delta.kMap.hasKey(VertexID(1)) and
not delta.sTab.hasKey(VertexID(1)):
# This layer is unusable, need both: vertex and key
return err(FilPrettyPointlessLayer)
elif not delta.sTab.getOrVoid(VertexID(1)).isValid:
# Root key and vertex have been deleted
return ok(spr)
if chunkedMpt:
if sprBeKey == delta.kMap.getOrVoid VertexID(1):
spr.fg = spr.be
return ok(spr)
if delta.sTab.len == 0 and
delta.kMap.len == 0:
return err(FilPrettyPointlessLayer)
err(FilStateRootMismatch)
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------

View File

@ -1,131 +0,0 @@
# nimbus-eth1
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
# http://www.apache.org/licenses/LICENSE-2.0)
# * MIT license ([LICENSE-MIT](LICENSE-MIT) or
# http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.
import
std/options,
eth/common,
results,
".."/[aristo_desc, aristo_desc/desc_backend],
./journal_scheduler
# ------------------------------------------------------------------------------
# Public functions
# ------------------------------------------------------------------------------
proc journalGetInx*(
be: BackendRef;
fid: Option[FilterID];
earlierOK = false;
): Result[JournalInx,AristoError] =
## If there is some argument `fid`, find the filter on the journal with ID
## not larger than `fid` (i.e. the resulting filter must not be more recent.)
##
## If the argument `earlierOK` is passed `false`, the function succeeds only
## if the filter ID of the returned filter is equal to the argument `fid`.
##
## In case there is no argument `fid`, the filter with the smallest
## filter ID (i.e. the oldest filter) is returned. Here, the argument
## `earlierOK` is ignored.
##
if be.journal.isNil:
return err(FilQuSchedDisabled)
var cache = (QueueID(0),FilterRef(nil)) # Avoids double lookup for last entry
proc qid2fid(qid: QueueID): Result[FilterID,void] =
if qid == cache[0]: # Avoids double lookup for last entry
return ok cache[1].fid
let fil = be.getFilFn(qid).valueOr:
return err()
cache = (qid,fil)
ok fil.fid
let qid = block:
if fid.isNone:
# Get oldest filter
be.journal[^1]
else:
# Find filter with ID not smaller than `fid`
be.journal.le(fid.unsafeGet, qid2fid, forceEQ = not earlierOK)
if not qid.isValid:
return err(FilFilterNotFound)
var fip: JournalInx
fip.fil = block:
if cache[0] == qid:
cache[1]
else:
be.getFilFn(qid).valueOr:
return err(error)
fip.inx = be.journal[qid]
if fip.inx < 0:
return err(FilInxByQidFailed)
ok fip
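The `qid2fid` closure above illustrates a one-entry cache: the most recent `getFilFn` result is remembered so the final lookup of the chosen `qid` does not hit the database twice. The same idea in isolation (a sketch with a dummy fetcher, not the journal code):

type
  Fetcher = object
    cache: (uint64, string)   # last (qid, filter) pair; qid 0 = empty
    fetches: int

proc slowFetch(f: var Fetcher; qid: uint64): string =
  inc f.fetches               # stands in for a backend database read
  "filter-" & $qid

proc fetch(f: var Fetcher; qid: uint64): string =
  if qid != 0 and f.cache[0] == qid:
    return f.cache[1]         # hit: avoids the double lookup
  result = f.slowFetch(qid)
  f.cache = (qid, result)

when isMainModule:
  var f = Fetcher()
  discard f.fetch(5)
  discard f.fetch(5)          # second call is served from the cache
  doAssert f.fetches == 1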
proc journalGetFilter*(
be: BackendRef;
inx: int;
): Result[FilterRef,AristoError] =
## Fetch filter from journal where the argument `inx` relates to the age
## starting with `0` for the most recent.
##
if be.journal.isNil:
return err(FilQuSchedDisabled)
let qid = be.journal[inx]
if qid.isValid:
let fil = be.getFilFn(qid).valueOr:
return err(error)
return ok(fil)
err(FilFilterNotFound)
proc journalGetOverlap*(
be: BackendRef;
filter: FilterRef;
): int =
## This function will find the overlap of an argument `filter` which is
## composed of some recent filter slots from the journal.
##
## The function returns the number of most recent journal filters that are
## reverted by the argument `filter`. This requires that `src`, `trg`, and
## `fid` of the argument `filter` are properly calculated (e.g. using
## `journalOpsFetchSlots()`.)
##
# Check against the top-fifo entry.
let qid = be.journal[0]
if not qid.isValid:
return 0
let top = be.getFilFn(qid).valueOr:
return 0
# The `filter` must match the `top`
if filter.src != top.src:
return 0
# Does the filter revert the first entry?
if filter.trg == top.trg:
return 1
# Check against some stored filter IDs
if filter.isValid:
let fp = be.journalGetInx(some(filter.fid), earlierOK=true).valueOr:
return 0
if filter.trg == fp.fil.trg:
return 1 + fp.inx
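In other words, the overlap test walks the journal newest-first: the candidate filter must start where the newest entry starts, and it reverts `k` entries exactly when its target matches the `k`-th entry's target. A standalone sketch with journal entries reduced to `(src, trg)` state-root pairs (an assumed simplification, not the real types):

func overlap(journal: seq[tuple[src, trg: string]];
             fil: tuple[src, trg: string]): int =
  ## Number of most recent journal entries reverted by `fil`,
  ## where `journal[0]` is the newest entry.
  if journal.len == 0 or fil.src != journal[0].src:
    return 0
  for k in 0 ..< journal.len:
    if fil.trg == journal[k].trg:
      return k + 1
  0

when isMainModule:
  # Entry 0 reverts state C back to B, entry 1 reverts B back to A.
  let journal = @[(src: "C", trg: "B"), (src: "B", trg: "A")]
  doAssert overlap(journal, (src: "C", trg: "B")) == 1  # newest only
  doAssert overlap(journal, (src: "C", trg: "A")) == 2  # both entries
  doAssert overlap(journal, (src: "X", trg: "A")) == 0  # wrong start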
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------

View File

@ -1,225 +0,0 @@
# nimbus-eth1
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
# http://www.apache.org/licenses/LICENSE-2.0)
# * MIT license ([LICENSE-MIT](LICENSE-MIT) or
# http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.
import
std/[options, tables],
results,
".."/[aristo_desc, aristo_desc/desc_backend],
"."/[filter_merge, journal_scheduler]
type
JournalOpsMod* = object
## Database journal instructions for storing or deleting filters.
put*: seq[(QueueID,FilterRef)]
scd*: QidSchedRef
JournalOpsFetch* = object
## Database journal instructions for merge-fetching slots.
fil*: FilterRef
del*: JournalOpsMod
# ------------------------------------------------------------------------------
# Private functions
# ------------------------------------------------------------------------------
template getFilterOrReturn(be: BackendRef; qid: QueueID): FilterRef =
let rc = be.getFilFn qid
if rc.isErr:
return err(rc.error)
rc.value
template joinFiltersOrReturn(upper, lower: FilterRef): FilterRef =
let rc = upper.merge lower
if rc.isErr:
return err(rc.error[1])
rc.value
template getNextFidOrReturn(be: BackendRef; fid: Option[FilterID]): FilterID =
## Get next free filter ID, or exit function using this wrapper
var nxtFid = fid.get(otherwise = FilterID(1))
let qid = be.journal[0]
if qid.isValid:
let rc = be.getFilFn qid
if rc.isErr:
# Must exist when `qid` exists
return err(rc.error)
elif fid.isNone:
# Stepwise increase is the default
nxtFid = rc.value.fid + 1
elif nxtFid <= rc.value.fid:
# The bespoke filter IDs must be greater than the existing ones
return err(FilQuBespokeFidTooSmall)
nxtFid
# ------------------------------------------------------------------------------
# Public functions
# ------------------------------------------------------------------------------
proc journalOpsPushSlot*(
be: BackendRef; # Database backend
filter: FilterRef; # Filter to store
fid: Option[FilterID]; # Next filter ID (if any)
): Result[JournalOpsMod,AristoError] =
## Calculate backend instructions for storing the argument `filter` on the
## argument backend `be`.
##
## The journal is not modified by this function.
##
if be.journal.isNil:
return err(FilQuSchedDisabled)
# Calculate filter table queue update by slot addresses
let
qTop = be.journal[0]
upd = be.journal.addItem
# Update journal filters and calculate database update
var
instr = JournalOpsMod(scd: upd.journal)
dbClear: seq[QueueID]
hold: seq[FilterRef]
saved = false
# make sure that filter matches top entry (if any)
if qTop.isValid:
let top = be.getFilterOrReturn qTop
if filter.trg != top.src:
return err(FilTrgTopSrcMismatch)
for act in upd.exec:
case act.op:
of Oops:
return err(FilExecOops)
of SaveQid:
if saved:
return err(FilExecDublicateSave)
instr.put.add (act.qid, filter)
saved = true
of DelQid:
instr.put.add (act.qid, FilterRef(nil))
of HoldQid:
# Push filter
dbClear.add act.qid
hold.add be.getFilterOrReturn act.qid
# Merge additional journal filters into top filter
for w in act.qid+1 .. act.xid:
dbClear.add w
let lower = be.getFilterOrReturn w
hold[^1] = hold[^1].joinFiltersOrReturn lower
of DequQid:
if hold.len == 0:
return err(FilExecStackUnderflow)
var lower = hold.pop
while 0 < hold.len:
let upper = hold.pop
lower = upper.joinFiltersOrReturn lower
instr.put.add (act.qid, lower)
for qid in dbClear:
instr.put.add (qid, FilterRef(nil))
dbClear.setLen(0)
if not saved:
return err(FilExecSaveMissing)
# Set next filter ID
filter.fid = be.getNextFidOrReturn fid
ok instr
proc journalOpsFetchSlots*(
be: BackendRef; # Database backend
backSteps: int; # Backstep this many filters
): Result[JournalOpsFetch,AristoError] =
## This function returns the single filter obtained by squash merging the
## topmost `backSteps` filters on the backend journal fifo. Also, backend
## instructions are calculated and returned for deleting the extracted
## journal slots.
##
## The journal is not modified by this function.
##
if be.journal.isNil:
return err(FilQuSchedDisabled)
if backSteps <= 0:
return err(FilBackStepsExpected)
# Get instructions
let fetch = be.journal.fetchItems backSteps
var instr = JournalOpsFetch(del: JournalOpsMod(scd: fetch.journal))
# Follow `HoldQid` instructions and combine journal filters for sub-queues
# and push intermediate results on the `hold` stack
var hold: seq[FilterRef]
for act in fetch.exec:
if act.op != HoldQid:
return err(FilExecHoldExpected)
hold.add be.getFilterOrReturn act.qid
instr.del.put.add (act.qid,FilterRef(nil))
for qid in act.qid+1 .. act.xid:
let lower = be.getFilterOrReturn qid
instr.del.put.add (qid,FilterRef(nil))
hold[^1] = hold[^1].joinFiltersOrReturn lower
# Resolve `hold` stack
if hold.len == 0:
return err(FilExecStackUnderflow)
var upper = hold.pop
while 0 < hold.len:
let lower = hold.pop
upper = upper.joinFiltersOrReturn lower
instr.fil = upper
ok instr
proc journalOpsDeleteSlots*(
be: BackendRef; # Database backend
backSteps: int; # Backstep this many filters
): Result[JournalOpsMod,AristoError] =
## Calculate backend instructions for deleting the most recent `backSteps`
## slots on the journal. This is basically the deletion calculator part
## from `journalOpsFetchSlots()`.
##
## The journal is not modified by this function.
##
if be.journal.isNil:
return err(FilQuSchedDisabled)
if backSteps <= 0:
return err(FilBackStepsExpected)
# Get instructions
let fetch = be.journal.fetchItems backSteps
var instr = JournalOpsMod(scd: fetch.journal)
# Follow `HoldQid` instructions for producing the list of entries that
# need to be deleted
for act in fetch.exec:
if act.op != HoldQid:
return err(FilExecHoldExpected)
for qid in act.qid .. act.xid:
instr.put.add (qid,FilterRef(nil))
ok instr
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------

View File

@ -1,704 +0,0 @@
# nimbus-eth1
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
# http://www.apache.org/licenses/LICENSE-2.0)
# * MIT license ([LICENSE-MIT](LICENSE-MIT) or
# http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.
import
std/[algorithm, sequtils, typetraits],
results,
".."/[aristo_constants, aristo_desc]
type
QidAction* = object
## Instruction for administering filter queue ID slots. The op-code is
## followed by one or two queue ID arguments. In case of two arguments,
## the value of the second queue ID is never smaller than the first one.
op*: QidOp ## Action, followed by at most two queue IDs
qid*: QueueID ## Action argument
xid*: QueueID ## Second action argument for range argument
QidOp* = enum
Oops = 0
SaveQid ## Store new item
HoldQid ## Move/append range items to local queue
DequQid ## Store merged local queue items
DelQid ## Delete entry from last overflow queue
QuFilMap* = proc(qid: QueueID): Result[FilterID,void] {.gcsafe, raises: [].}
## A map `fn: QueueID -> FilterID` of type `QuFilMap` must preserve the
## order relation on the image of `fn()` defined as
##
## * `fn(fifo[j]) < fn(fifo[i])` <=> `i < j`
##
## where `[]` is defined as the index function `[]: {0 .. N-1} -> QueueID`,
## `N = fifo.len`.
##
## Any injective function `fn()` (aka monomorphism) will do.
##
## This definition decouples access to ordered journal records from the
## storage of these records on the database. The records are accessed via
## `QueueID` type keys while the order is defined by a `FilterID` type
## scalar.
##
## In order to flag an error, `err()` must be returned.
const
ZeroQidPair = (QueueID(0),QueueID(0))
# ------------------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------------------
func `<`(a: static[uint]; b: QueueID): bool = QueueID(a) < b
func globalQid(queue: int, qid: QueueID): QueueID =
QueueID((queue.uint64 shl 62) or qid.uint64)
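`globalQid` packs the sub-queue number into the top two bits of the 64-bit queue ID, which is also why `DefaultQidWrap` is capped at `2^62-1`. A quick standalone round-trip check:

func globalQid(queue: int; qid: uint64): uint64 =
  (queue.uint64 shl 62) or qid

when isMainModule:
  let g = globalQid(2, 5)
  doAssert g shr 62 == 2                           # sub-queue number
  doAssert (g and 0x3fff_ffff_ffff_ffffu64) == 5   # local queue ID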
# ------------------------------------------------------------------------------
# Private functions
# ------------------------------------------------------------------------------
func fifoLen(
fifo: (QueueID,QueueID);
wrap: QueueID;
): uint =
## Number of entries in the wrap-around fifo, organised so that `fifo[0]` is
## the oldest entry and `fifo[1]` is the latest/newest entry.
##
if fifo[0] == 0:
return 0
if fifo[0] <= fifo[1]:
# Filling up
# ::
# | :
# | fifo[0]--> 3
# | 4
# | 5 <--fifo[1]
# | :
#
return ((fifo[1] + 1) - fifo[0]).uint
else:
# After wrap-around
# ::
# | :
# | 3 <--fifo[1]
# | 4
# | fifo[0]--> 5
# | :
# | wrap
return ((fifo[1] + 1) + (wrap - fifo[0])).uint
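Worked examples for both branches, with `QueueID` reduced to a bare `uint64` (a simplified standalone sketch of the function above):

func fifoLen(fifo: (uint64, uint64); wrap: uint64): uint =
  if fifo[0] == 0:
    return 0                                # empty fifo
  if fifo[0] <= fifo[1]:
    ((fifo[1] + 1) - fifo[0]).uint          # filling up
  else:
    ((fifo[1] + 1) + (wrap - fifo[0])).uint # after wrap-around

when isMainModule:
  doAssert fifoLen((0u64, 0u64), 10) == 0
  doAssert fifoLen((3u64, 5u64), 10) == 3   # slots 3,4,5
  doAssert fifoLen((5u64, 3u64), 10) == 9   # slots 5..10 plus 1..3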
func fifoAdd(
fifo: (QueueID,QueueID);
wrap: QueueID;
): tuple[doDel: QueueID, fifo: (QueueID,QueueID)] =
## Add an entry to the wrap-around fifo, organised so that `fifo[0]` is the
## oldest entry and `fifo[1]` is the latest/newest entry.
##
if fifo[0] == 0:
return (QueueID(0), (QueueID(1),QueueID(1)))
if fifo[0] <= fifo[1]:
if fifo[1] < wrap:
# Filling up
# ::
# | :
# | fifo[0]--> 3
# | 4
# | 5 <--fifo[1]
# | :
#
return (QueueID(0), (fifo[0],fifo[1]+1))
elif 1 < fifo[0]:
# Wrapping
# ::
# | :
# | fifo[0]--> 3
# | 4
# | :
# | wrap <--fifo[1]
#
return (QueueID(0), (fifo[0],QueueID(1)))
elif 1 < wrap:
# Wrapping and flushing out
# ::
# | fifo[0]--> 1
# | 2
# | :
# | wrap <--fifo[1]
#
return (QueueID(1), (QueueID(2),QueueID(1)))
else:
# Single entry FIFO
return (QueueID(1), (QueueID(1),QueueID(1)))
else:
if fifo[1] + 1 < fifo[0]:
# Filling up
# ::
# | :
# | 3 <--fifo[1]
# | 4
# | fifo[0]--> 5
# | :
# | wrap
return (QueueID(0), (fifo[0],fifo[1]+1))
elif fifo[0] < wrap:
# Flushing out
# ::
# | :
# | 4 <--fifo[1]
# | fifo[0]--> 5
# | :
# | wrap
return (fifo[0], (fifo[0]+1,fifo[1]+1))
else:
# Wrapping and flushing out
# ::
# | :
# | wrap-1 <--fifo[1]
# | fifo[0]--> wrap
return (wrap, (QueueID(1),wrap))
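The same branch structure with `QueueID` reduced to `uint64`, exercising three representative transitions (a sketch only, returning a plain `(doDel, newFifo)` pair):

func fifoAdd(fifo: (uint64, uint64);
             wrap: uint64): (uint64, (uint64, uint64)) =
  ## Returns `(doDel, newFifo)`: the slot to flush out (0 = none) and
  ## the updated fifo bounds.
  if fifo[0] == 0:
    return (0u64, (1u64, 1u64))                          # first entry ever
  if fifo[0] <= fifo[1]:
    if fifo[1] < wrap: (0u64, (fifo[0], fifo[1] + 1))    # filling up
    elif 1 < fifo[0]: (0u64, (fifo[0], 1u64))            # wrapping
    elif 1 < wrap: (1u64, (2u64, 1u64))                  # wrap and flush
    else: (1u64, (1u64, 1u64))                           # single entry FIFO
  else:
    if fifo[1] + 1 < fifo[0]: (0u64, (fifo[0], fifo[1] + 1))   # filling up
    elif fifo[0] < wrap: (fifo[0], (fifo[0] + 1, fifo[1] + 1)) # flushing out
    else: (wrap, (1u64, wrap))                           # wrap and flush

when isMainModule:
  doAssert fifoAdd((3u64, 5u64), 10) == (0u64, (3u64, 6u64))
  doAssert fifoAdd((1u64, 10u64), 10) == (1u64, (2u64, 1u64))   # flushes 1
  doAssert fifoAdd((2u64, 1u64), 10) == (2u64, (3u64, 2u64))    # flushes 2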
func fifoDel(
fifo: (QueueID,QueueID);
nDel: uint;
wrap: QueueID;
): tuple[doDel: seq[(QueueID,QueueID)], fifo: (QueueID,QueueID)] =
## Delete the range `nDel` of filter IDs from the FIFO. The entries to be
## deleted are taken from the oldest ones added.
##
if fifo[0] == 0:
return (EmptyQidPairSeq, ZeroQidPair)
if fifo[0] <= fifo[1]:
# Take off the left end from `fifo[0] .. fifo[1]`
# ::
# | :
# | fifo[0]--> 3 ^
# | 4 | to be deleted
# | 5 v
# | 6 <--fifo[1]
# | :
#
if nDel.uint64 <= fifo[1] - fifo[0]:
return (@[(fifo[0], fifo[0] + nDel - 1)], (fifo[0] + nDel, fifo[1]))
else:
return (@[fifo], ZeroQidPair)
else:
if nDel.uint64 <= (wrap - fifo[0] + 1):
# Take off the left end from `fifo[0] .. wrap`
# ::
# | :
# | 3 <--fifo[1]
# | 4
# | fifo[0]--> 5 ^
# | 6 | to be deleted
# | 7 v
# | :
# | wrap
#
let topRange = (fifo[0], fifo[0] + nDel - 1)
if nDel.uint64 < (wrap - fifo[0] + 1):
return (@[topRange], (fifo[0] + nDel, fifo[1]))
else:
return (@[topRange], (QueueID(1), fifo[1]))
else:
# Interval `fifo[0] .. wrap` fully deleted, check `1 .. fifo[0]`
# ::
# | 1 ^
# | 2 | to be deleted
# | : v
# | 6
# | 7<--fifo[1]
# | fifo[0]--> 8 ^
# | 9 | to be deleted
# | : :
# | wrap v
#
let
topRange = (fifo[0], wrap)
nDelLeft = nDel.uint64 - (wrap - fifo[0] + 1)
# Take off the left end from `QueueID(1) .. fifo[1]`
if nDelLeft <= fifo[1] - QueueID(0):
let bottomRange = (QueueID(1), QueueID(nDelLeft))
if nDelLeft < fifo[1] - QueueID(0):
return (@[bottomRange, topRange], (QueueID(nDelLeft+1), fifo[1]))
else:
return (@[bottomRange, topRange], ZeroQidPair)
else:
# Delete all available
return (@[(QueueID(1), fifo[1]), (fifo[0], wrap)], ZeroQidPair)
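For the simple non-wrapped branch, a worked sketch with `uint64` stand-ins; the wrap-around branches follow the same take-from-the-oldest-end idea but may return two deletion ranges:

func fifoDelLinear(fifo: (uint64, uint64);
                   nDel: uint64): ((uint64, uint64), (uint64, uint64)) =
  ## Non-wrapped case only: returns `(deletedRange, newFifo)` after
  ## removing the `nDel` oldest entries.
  doAssert 0 < fifo[0] and fifo[0] <= fifo[1]
  if nDel <= fifo[1] - fifo[0]:
    ((fifo[0], fifo[0] + nDel - 1), (fifo[0] + nDel, fifo[1]))
  else:
    (fifo, (0u64, 0u64))   # everything deleted, fifo becomes empty

when isMainModule:
  doAssert fifoDelLinear((3u64, 6u64), 2) == ((3u64, 4u64), (5u64, 6u64))
  doAssert fifoDelLinear((3u64, 6u64), 9) == ((3u64, 6u64), (0u64, 0u64))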
func volumeSize(
ctx: openArray[tuple[size, width: int]]; # Schedule layout
): tuple[maxQueue: int, minCovered: int, maxCovered: int] =
## Number of maximally stored and covered queued entries for the argument
## layout `ctx`. The resulting `maxQueue` entry is the maximal number of
## database slots needed, while the `minCovered` and `maxCovered` entries
## indicate the range of the backlog for a fully populated database.
var step = 1
for n in 0 ..< ctx.len:
step *= ctx[n].width + 1
let size = ctx[n].size + ctx[(n+1) mod ctx.len].width
result.maxQueue += size.int
result.minCovered += (ctx[n].size * step).int
result.maxCovered += (size * step).int
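A worked example of the capacity arithmetic for a two-queue layout (a standalone sketch of the loop above, `int` throughout):

func volumeSize(ctx: openArray[tuple[size, width: int]]): tuple[maxQueue, minCovered, maxCovered: int] =
  var step = 1
  for n in 0 ..< ctx.len:
    step *= ctx[n].width + 1
    let size = ctx[n].size + ctx[(n+1) mod ctx.len].width
    result.maxQueue += size
    result.minCovered += ctx[n].size * step
    result.maxCovered += size * step

when isMainModule:
  # Queue 0: 3 slots, width 0; queue 1: 2 slots, width 1.
  # n=0: step=1, size=3+1=4 -> maxQueue 4, covers 3..4 items
  # n=1: step=2, size=2+0=2 -> maxQueue 6, covers another 4..4 items
  doAssert volumeSize([(size: 3, width: 0), (size: 2, width: 1)]) ==
    (maxQueue: 6, minCovered: 7, maxCovered: 8)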
# ------------------------------------------------------------------------------
# Public functions
# ------------------------------------------------------------------------------
func volumeSize*(
ctx: openArray[tuple[size, width, wrap: int]]; # Schedule layout
): tuple[maxQueue: int, minCovered: int, maxCovered: int] =
## Variant of `volumeSize()`.
ctx.toSeq.mapIt((it[0],it[1])).volumeSize
func volumeSize*(
journal: QidSchedRef; # Cascaded fifos descriptor
): tuple[maxQueue: int, minCovered: int, maxCovered: int] =
## Number of maximally stored and covered queued entries for the layout of
## argument `journal`. The resulting `maxQueue` entry is the maximal number
## of database slots needed, while the `minCovered` and `maxCovered` entries
## indicate the range of the backlog for a fully populated database.
journal.ctx.q.toSeq.mapIt((it[0].int,it[1].int)).volumeSize()
func addItem*(
journal: QidSchedRef; # Cascaded fifos descriptor
): tuple[exec: seq[QidAction], journal: QidSchedRef] =
## Get the instructions for adding a new slot to the cascaded queues. The
## argument `journal` is a complete state of the addresses of a cascaded
## *FIFO* when applied to a database. Only the *FIFO* queue addresses are
## needed in order to describe how to add another item.
##
## The function returns a list of instructions on what to do when adding a new
## item and the new state of the cascaded *FIFO*. The following instructions
## may be returned:
## ::
## SaveQid <queue-id> -- Store a new item under the address
## -- <queue-id> on the database.
##
## HoldQid <from-id>..<to-id> -- Move the records referred to by the
## -- argument addresses from the database to
## -- the right end of the local hold queue.
## -- The age of the items on the hold queue
## -- increases left to right.
##
## DequQid <queue-id> -- Merge items from the hold queue into a
## -- new item and store it under the address
## -- <queue-id> on the database. Clear the
## -- hold queue and the corresponding
## -- items on the database.
##
## DelQid <queue-id> -- Delete item. This happens if the last
## -- overflow queue needs to make space for
## -- another item.
##
let
ctx = journal.ctx.q
var
state = journal.state
deferred: seq[QidAction] # carry over to next sub-queue
revActions: seq[QidAction] # instructions in reverse order
for n in 0 ..< ctx.len:
if state.len < n + 1:
state.setLen(n + 1)
let
overlapWidth = ctx[(n+1) mod ctx.len].width
carryOverSize = ctx[n].size + overlapWidth
stateLen = state[n].fifoLen ctx[n].wrap
if stateLen < carryOverSize:
state[n] = state[n].fifoAdd(ctx[n].wrap).fifo
let qQidAdded = n.globalQid state[n][1]
if 0 < n:
revActions.add QidAction(op: DequQid, qid: qQidAdded)
else:
revActions.add QidAction(op: SaveQid, qid: qQidAdded)
if 0 < deferred.len:
revActions &= deferred
deferred.setLen(0)
break
else:
# Full queue segment, carry over to next one
let
extra = stateLen - carryOverSize # should be zero
qDel = state[n].fifoDel(extra + overlapWidth + 1, ctx[n].wrap)
qAdd = qDel.fifo.fifoAdd ctx[n].wrap
qFidAdded = n.globalQid qAdd.fifo[1]
if 0 < n:
revActions.add QidAction(op: DequQid, qid: qFidAdded)
else:
revActions.add QidAction(op: SaveQid, qid: qFidAdded)
if 0 < deferred.len:
revActions &= deferred
deferred.setLen(0)
for w in qDel.doDel:
deferred.add QidAction(
op: HoldQid,
qid: n.globalQid w[0],
xid: n.globalQid w[1])
state[n] = qAdd.fifo
# End loop
# Delete item from the final overflow queue. There is only one as
# `overlapWidth` is `ctx[0].width` which is `0`
if 0 < deferred.len:
revActions.add QidAction(
op: DelQid,
qid: deferred[0].qid)
(revActions.reversed, QidSchedRef(ctx: journal.ctx, state: state))
func fetchItems*(
journal: QidSchedRef; # Cascaded fifos descriptor
size: int; # Leading items to merge
): tuple[exec: seq[QidAction], journal: QidSchedRef] =
## Get the instructions for extracting the latest `size` items from the
## cascaded queues. The argument `journal` is a complete state of the
## addresses of a cascaded *FIFO* when applied to a database. Only the *FIFO*
## queue addresses are used in order to describe how to extract the items.
##
## The function returns a list of instructions on what to do when extracting
## the items and the new state of the cascaded *FIFO*. The following
## instructions may be returned:
## ::
## HoldQid <from-id>..<to-id> -- Move the records accessed by the argument
## -- addresses from the database to the right
## -- end of the local hold queue. The age of
## -- the items on the hold queue increases
## -- left to right.
##
## The extracted items will then be available from the hold queue.
var
actions: seq[QidAction]
state = journal.state
if 0 < size:
var size = size.uint64
for n in 0 ..< journal.state.len:
let q = journal.state[n]
if q[0] == 0:
discard
elif q[0] <= q[1]:
# Single file
# ::
# | :
# | q[0]--> 3
# | 4
# | 5 <--q[1]
# | :
#
let qSize = q[1] - q[0] + 1
if size <= qSize:
if size < qSize:
state[n][1] = q[1] - size
elif state.len == n + 1:
state.setLen(n)
else:
state[n] = (QueueID(0), QueueID(0))
actions.add QidAction(
op: HoldQid,
qid: n.globalQid(q[1] - size + 1),
xid: n.globalQid q[1])
break
actions.add QidAction(
op: HoldQid,
qid: n.globalQid q[0],
xid: n.globalQid q[1])
state[n] = (QueueID(0), QueueID(0))
size -= qSize # Otherwise continue
else:
# Wrap around, double files
# ::
# | :
# | 3 <--q[1]
# | 4
# | q[0]--> 5
# | :
# | wrap
let
wrap = journal.ctx.q[n].wrap
qSize1 = q[1] - QueueID(0)
if size <= qSize1:
if size == qSize1:
state[n][1] = wrap
else:
state[n][1] = q[1] - size
actions.add QidAction(
op: HoldQid,
qid: n.globalQid(q[1] - size + 1),
xid: n.globalQid q[1])
break
actions.add QidAction(
op: HoldQid,
qid: n.globalQid QueueID(1),
xid: n.globalQid q[1])
size -= qSize1 # Otherwise continue
let qSize0 = wrap - q[0] + 1
if size <= qSize0:
if size < qSize0:
state[n][1] = wrap - size
elif state.len == n + 1:
state.setLen(n)
else:
state[n] = (QueueID(0), QueueID(0))
actions.add QidAction(
op: HoldQid,
qid: n.globalQid wrap - size + 1,
xid: n.globalQid wrap)
break
actions.add QidAction(
op: HoldQid,
qid: n.globalQid q[0],
xid: n.globalQid wrap)
size -= qSize0
state[n] = (QueueID(0), QueueID(0))
(actions, QidSchedRef(ctx: journal.ctx, state: state))
func lengths*(
journal: QidSchedRef; # Cascaded fifos descriptor
): seq[int] =
## Return the list of lengths for all cascaded sub-fifos.
for n in 0 ..< journal.state.len:
result.add journal.state[n].fifoLen(journal.ctx.q[n].wrap).int
func len*(
journal: QidSchedRef; # Cascaded fifos descriptor
): int =
## Size of the journal
journal.lengths.foldl(a + b, 0)
func `[]`*(
journal: QidSchedRef; # Cascaded fifos descriptor
inx: int; # Index into latest items
): QueueID =
## Get the queue ID of the `inx`-th `journal` entry where index `0` refers to
## the entry most recently added, `1` the one before, etc. If there is no
## such entry `QueueID(0)` is returned.
if 0 <= inx:
var inx = inx.uint64
for n in 0 ..< journal.state.len:
let q = journal.state[n]
if q[0] == 0:
discard
elif q[0] <= q[1]:
# Single file
# ::
# | :
# | q[0]--> 3
# | 4
# | 5 <--q[1]
# | :
#
let qInxMax = q[1] - q[0]
if inx <= qInxMax:
return n.globalQid(q[1] - inx)
inx -= qInxMax + 1 # Otherwise continue
else:
# Wrap around, double files
# ::
# | :
# | 3 <--q[1]
# | 4
# | q[0]--> 5
# | :
# | wrap
let qInxMax1 = q[1] - QueueID(1)
if inx <= qInxMax1:
return n.globalQid(q[1] - inx)
inx -= qInxMax1 + 1 # Otherwise continue
let
wrap = journal.ctx.q[n].wrap
qInxMax0 = wrap - q[0]
if inx <= qInxMax0:
return n.globalQid(wrap - inx)
inx -= qInxMax0 + 1 # Otherwise continue
func `[]`*(
journal: QidSchedRef; # Cascaded fifos descriptor
bix: BackwardsIndex; # Index into latest items
): QueueID =
## Variant of `[]` for providing `[^bix]`.
journal[journal.len - bix.distinctBase]
func `[]`*(
journal: QidSchedRef; # Cascaded fifos descriptor
qid: QueueID; # Index into latest items
): int =
## Get the index of the journal entry addressed by queue ID `qid`, i.e. the
## inverse of the index operator above; `-1` is returned if there is no such
## entry.
if QueueID(0) < qid:
let
chn = (qid.uint64 shr 62).int
qid = (qid.uint64 and 0x3fff_ffff_ffff_ffffu64).QueueID
if chn < journal.state.len:
var offs = 0
for n in 0 ..< chn:
offs += journal.state[n].fifoLen(journal.ctx.q[n].wrap).int
let q = journal.state[chn]
if q[0] <= q[1]:
# Single file
# ::
# | :
# | q[0]--> 3
# | 4
# | 5 <--q[1]
# | :
#
if q[0] <= qid and qid <= q[1]:
return offs + (q[1] - qid).int
else:
# Wrap around, double files
# ::
# | :
# | 3 <--q[1]
# | 4
# | q[0]--> 5
# | :
# | wrap
#
if QueueID(1) <= qid and qid <= q[1]:
return offs + (q[1] - qid).int
if q[0] <= qid:
let wrap = journal.ctx.q[chn].wrap
if qid <= wrap:
return offs + (q[1] - QueueID(0)).int + (wrap - qid).int
-1
proc le*(
journal: QidSchedRef; # Cascaded fifos descriptor
fid: FilterID; # Upper (or right) bound
fn: QuFilMap; # QueueID/FilterID mapping
forceEQ = false; # Check for strict equality
): QueueID =
## Find the `qid` address of type `QueueID` with `fn(qid) <= fid` with
## maximal `fn(qid)`. The requirements on argument map `fn()` of type
## `QuFilMap` have been commented on at the type definition.
##
## This function returns `QueueID(0)` if `fn()` returns `err()` at some
## stage of the algorithm applied here.
##
var
left = 0
right = journal.len - 1
template toFid(qid: QueueID): FilterID =
fn(qid).valueOr:
return QueueID(0) # exit hosting function environment
# The algorithm below tries to avoid `toFid()` as much as possible because
# it might invoke some extra database lookup.
if 0 <= right:
# Check left fringe
let
maxQid = journal[left]
maxFid = maxQid.toFid
if maxFid <= fid:
if forceEQ and maxFid != fid:
return QueueID(0)
return maxQid
# So `fid < journal[left]`
# Check right fringe
let
minQid = journal[right]
minFid = minQid.toFid
if fid <= minFid:
if minFid == fid:
return minQid
return QueueID(0)
# So `journal[right] < fid`
# Bisection
var rightQid = minQid # Might be used as end result
while 1 < right - left:
let
pivot = (left + right) div 2
pivQid = journal[pivot]
pivFid = pivQid.toFid
#
# Example:
# ::
# FilterID: 100 70 33
# inx: left ... pivot ... right
# fid: 77
#
# with `journal[left].toFid > fid > journal[right].toFid`
#
if pivFid < fid: # fid >= journal[half].toFid:
right = pivot
rightQid = pivQid
elif fid < pivFid: # journal[half].toFid > fid
left = pivot
else:
return pivQid
# Now: `journal[right].toFid < fid < journal[left].toFid`
# (and `right == left+1`).
if not forceEQ:
# Make sure that `journal[right].toFid` exists
if fn(rightQid).isOk:
return rightQid
# Otherwise QueueID(0)
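The same search stripped to its core: the journal becomes a plain newest-first, strictly descending `seq` of filter IDs and the result is an index instead of a `QueueID` (a sketch under those assumptions):

func leIndex(fids: seq[uint64]; fid: uint64; forceEQ = false): int =
  ## Index of the entry with maximal value <= `fid`, or -1 if there is
  ## none; `fids` must be strictly descending, i.e. newest first.
  var (left, right) = (0, fids.len - 1)
  if right < 0:
    return -1
  if fids[left] <= fid:                        # check left fringe
    return (if forceEQ and fids[left] != fid: -1 else: left)
  if fid <= fids[right]:                       # check right fringe
    return (if fids[right] == fid: right else: -1)
  while 1 < right - left:                      # bisection
    let pivot = (left + right) div 2
    if fids[pivot] < fid:
      right = pivot
    elif fid < fids[pivot]:
      left = pivot
    else:
      return pivot
  if forceEQ: -1 else: right

when isMainModule:
  let j = @[100u64, 70, 33]               # newest first
  doAssert leIndex(j, 77) == 1            # 70 is the largest ID <= 77
  doAssert leIndex(j, 70, forceEQ = true) == 1
  doAssert leIndex(j, 77, forceEQ = true) == -1
  doAssert leIndex(j, 20) == -1           # older than everything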
proc eq*(
journal: QidSchedRef; # Cascaded fifos descriptor
fid: FilterID; # Filter ID to search for
fn: QuFilMap; # QueueID/FilterID mapping
): QueueID =
## Variant of `le()` for strict equality.
journal.le(fid, fn, forceEQ = true)
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------

View File

@ -36,7 +36,7 @@ func pPrf*(db: AristoDbRef): lent HashSet[VertexID] =
db.top.final.pPrf
func vGen*(db: AristoDbRef): lent seq[VertexID] =
db.top.final.vGen
db.top.delta.vGen
# ------------------------------------------------------------------------------
# Public getters/helpers
@ -190,6 +190,7 @@ func layersMergeOnto*(src: LayerRef; trg: var LayerObj) =
trg.delta.sTab[vid] = vtx
for (vid,key) in src.delta.kMap.pairs:
trg.delta.kMap[vid] = key
trg.delta.vGen = src.delta.vGen
func layersCc*(db: AristoDbRef; level = high(int)): LayerRef =
@ -205,7 +206,8 @@ func layersCc*(db: AristoDbRef; level = high(int)): LayerRef =
final: layers[^1].final.dup, # Pre-merged/final values
delta: LayerDeltaRef(
sTab: layers[0].delta.sTab.dup, # explicit dup for ref values
kMap: layers[0].delta.kMap))
kMap: layers[0].delta.kMap,
vGen: layers[^1].delta.vGen))
# Consecutively merge other layers on top
for n in 1 ..< layers.len:

View File

@ -14,7 +14,6 @@
{.push raises: [].}
import
std/options,
results,
./aristo_tx/[tx_fork, tx_frame, tx_stow],
"."/[aristo_desc, aristo_get]
@ -62,11 +61,11 @@ proc forkTx*(
## are stripped and the remaining layers are squashed into a single transaction.
##
## If `backLevel` is `-1`, a database descriptor with empty transaction
## layers will be provided where the `roFilter` between database and
## layers will be provided where the `balancer` between database and
## transaction layers are kept in place.
##
## If `backLevel` is `-2`, a database descriptor with empty transaction
## layers will be provided without an `roFilter`.
## layers will be provided without a `balancer`.
##
## The returned database descriptor will always have transaction level one.
## If there were no transactions that could be squashed, an empty
@ -98,7 +97,7 @@ proc forkTx*(
return err(TxStackGarbled)
return tx.txFork dontHashify
# Plain fork, include `roFilter`
# Plain fork, include `balancer`
if backLevel == -1:
let xb = ? db.fork(noFilter=false)
discard xb.txFrameBegin()
@ -156,9 +155,9 @@ proc findTx*(
if botKey == key:
return ok(db.stack.len)
# Try `(vid,key)` on roFilter
if not db.roFilter.isNil:
let roKey = db.roFilter.kMap.getOrVoid vid
# Try `(vid,key)` on balancer
if not db.balancer.isNil:
let roKey = db.balancer.kMap.getOrVoid vid
if roKey == key:
return ok(-1)
@ -225,7 +224,7 @@ proc collapse*(
proc persist*(
db: AristoDbRef; # Database
nxtFid = none(FilterID); # Next filter ID (zero is OK)
nxtSid = 0u64; # Next state ID (aka block number)
chunkedMpt = false; # Partial data (e.g. from `snap`)
): Result[void,AristoError] =
## Persistently store data onto backend database. If the system is running
@ -248,7 +247,7 @@ proc persist*(
## In this case, the `chunkedMpt` argument must be set `true` (see also
## `fwdFilter()`.)
##
db.txStow(nxtFid, persistent=true, chunkedMpt=chunkedMpt)
db.txStow(nxtSid, persistent=true, chunkedMpt=chunkedMpt)
proc stow*(
db: AristoDbRef; # Database
@ -267,7 +266,7 @@ proc stow*(
## In this case, the `chunkedMpt` argument must be set `true` (see also
## `fwdFilter()`.)
##
db.txStow(nxtFid=none(FilterID), persistent=false, chunkedMpt=chunkedMpt)
db.txStow(nxtSid=0u64, persistent=false, chunkedMpt=chunkedMpt)
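Taken together, a minimal usage sketch (the serial number `1234` is made up; it plays the role of a block number):

    doAssert db.stow().isOk                      # stage changes on the balancer only
    doAssert db.persist(nxtSid = 1234'u64).isOk  # flush the balancer to the backend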
# ------------------------------------------------------------------------------
# End

View File

@ -69,8 +69,8 @@ proc txFork*(
let rc = db.getIdgBE()
if rc.isOk:
LayerRef(
delta: LayerDeltaRef(),
final: LayerFinalRef(vGen: rc.value))
delta: LayerDeltaRef(vGen: rc.value),
final: LayerFinalRef())
elif rc.error == GetIdgNotFound:
LayerRef.init()
else:

View File

@ -81,9 +81,10 @@ proc txFrameBegin*(db: AristoDbRef): Result[AristoTxRef,AristoError] =
if db.txFrameLevel != db.stack.len:
return err(TxStackGarbled)
let vGen = db.top.delta.vGen
db.stack.add db.top
db.top = LayerRef(
delta: LayerDeltaRef(),
delta: LayerDeltaRef(vGen: vGen),
final: db.top.final.dup,
txUid: db.getTxUid)

View File

@ -14,9 +14,69 @@
{.push raises: [].}
import
std/[options, tables],
std/[sets, tables],
results,
".."/[aristo_desc, aristo_get, aristo_journal, aristo_layers, aristo_hashify]
../aristo_delta/delta_merge,
".."/[aristo_desc, aristo_get, aristo_delta, aristo_layers, aristo_hashify,
aristo_vid]
# ------------------------------------------------------------------------------
# Private functions
# ------------------------------------------------------------------------------
proc getBeStateRoot(
db: AristoDbRef;
chunkedMpt: bool;
): Result[HashKey,AristoError] =
## Get the Merkle hash key for the current backend state root and check
## validity of top layer.
let srcRoot = block:
let rc = db.getKeyBE VertexID(1)
if rc.isOk:
rc.value
elif rc.error == GetKeyNotFound:
VOID_HASH_KEY
else:
return err(rc.error)
if db.top.delta.kMap.getOrVoid(VertexID 1).isValid:
return ok(srcRoot)
elif not db.top.delta.kMap.hasKey(VertexID 1) and
not db.top.delta.sTab.hasKey(VertexID 1):
# This layer is unusable: it has neither the root vertex nor the root key
return err(TxPrettyPointlessLayer)
elif not db.top.delta.sTab.getOrVoid(VertexID 1).isValid:
# Root key and vertex have been deleted
return ok(srcRoot)
elif chunkedMpt and srcRoot == db.top.delta.kMap.getOrVoid VertexID(1):
# FIXME: this one needs to be double checked with `snap` sync preload
return ok(srcRoot)
err(TxStateRootMismatch)
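Spelled out, the checks above form this decision table for the root entry `VertexID(1)` (a restatement of the code, not new behaviour):

    # key in `kMap` valid                        -> ok(srcRoot)
    # neither key in `kMap` nor vertex in `sTab` -> err(TxPrettyPointlessLayer)
    # vertex entry present but marked deleted    -> ok(srcRoot)
    # `chunkedMpt` and key equals `srcRoot`      -> ok(srcRoot)
    # anything else                              -> err(TxStateRootMismatch)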
proc topMerge(db: AristoDbRef; src: HashKey): Result[void,AristoError] =
## Merge the `top` layer into the read-only balancer layer.
let ubeRoot = block:
let rc = db.getKeyUbe VertexID(1)
if rc.isOk:
rc.value
elif rc.error == GetKeyNotFound:
VOID_HASH_KEY
else:
return err(rc.error)
# Update layer for merge call
db.top.delta.src = src
# This one will return the `db.top.delta` if `db.balancer.isNil`
db.balancer = db.deltaMerge(db.top.delta, db.balancer, ubeRoot).valueOr:
return err(error[1])
ok()
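Conceptually, `deltaMerge()` folds the younger delta over the older one; a simplified sketch (the real implementation additionally reconciles `older.src` against the unfiltered backend root `ubeRoot` and reports errors):

    proc mergeSketch(newer, older: LayerDeltaRef): LayerDeltaRef =
      if older.isNil:
        return newer                 # nothing staged yet: adopt `newer` as-is
      for (vid, vtx) in newer.sTab.pairs:
        older.sTab[vid] = vtx        # newer entries shadow older ones
      for (vid, key) in newer.kMap.pairs:
        older.kMap[vid] = key
      older.vGen = newer.vGen        # generator state follows the newest layer
      older                          # `older.src` stays anchored at the backend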
# ------------------------------------------------------------------------------
# Public functions
@ -24,7 +84,7 @@ import
proc txStow*(
db: AristoDbRef; # Database
nxtFid: Option[FilterID]; # Next filter ID (zero is OK)
nxtSid: uint64; # Next state ID (aka block number)
persistent: bool; # Stage only unless `true`
chunkedMpt: bool; # Partial data (e.g. from `snap`)
): Result[void,AristoError] =
@ -34,57 +94,55 @@ proc txStow*(
return err(TxPendingTx)
if 0 < db.stack.len:
return err(TxStackGarbled)
if persistent and not db.journalUpdateOk():
if persistent and not db.deltaPersistentOk():
return err(TxBackendNotWritable)
# Update Merkle hashes (unless disabled)
db.hashify().isOkOr:
return err(error[1])
let fwd = db.journalFwdFilter(db.top, chunkedMpt).valueOr:
return err(error[1])
# Verify database consistency and get `src` field for update
let rc = db.getBeStateRoot chunkedMpt
if rc.isErr and rc.error != TxPrettyPointlessLayer:
return err(rc.error)
if fwd.isValid:
# Move/merge `top` layer onto `roFilter`
db.journalMerge(fwd).isOkOr:
return err(error[1])
# Special treatment for `snap` proofs (aka `chunkedMpt`)
let final =
if chunkedMpt: LayerFinalRef(fRpp: db.top.final.fRpp)
else: LayerFinalRef()
# Move/merge/install `top` layer onto `balancer`
if rc.isOk:
db.topMerge(rc.value).isOkOr:
return err(error)
# New empty top layer (probably with `snap` proofs and `vGen` carried over)
db.top = LayerRef(
delta: LayerDeltaRef(),
final: final)
if db.roFilter.isValid:
db.top.final.vGen = db.roFilter.vGen
if db.balancer.isValid:
db.top.delta.vGen = db.balancer.vGen
else:
let rc = db.getIdgUbe()
if rc.isOk:
db.top.final.vGen = rc.value
db.top.delta.vGen = rc.value
else:
# It is OK if there was no `Idg`. Otherwise something serious happened
# and there is no way to recover easily.
doAssert rc.error == GetIdgNotFound
elif db.top.delta.sTab.len != 0 and
not db.top.delta.sTab.getOrVoid(VertexID(1)).isValid:
# Currently, a `VertexID(1)` root node is required
return err(TxAccRootMissing)
if persistent:
# Merge/move `roFilter` into persistent tables
? db.journalUpdate nxtFid
# Special treatment for `snap` proofs (aka `chunkedMpt`)
let final =
if chunkedMpt: LayerFinalRef(vGen: db.vGen, fRpp: db.top.final.fRpp)
else: LayerFinalRef(vGen: db.vGen)
# Merge/move `balancer` into persistent tables
? db.deltaPersistent nxtSid
# New empty top layer (probably with `snap` proofs carried over)
db.top = LayerRef(
delta: LayerDeltaRef(),
delta: LayerDeltaRef(vGen: db.vGen),
final: final,
txUid: db.top.txUid)
ok()

View File

@ -33,15 +33,15 @@ proc vidFetch*(db: AristoDbRef; pristine = false): VertexID =
##
if db.vGen.len == 0:
# Note that `VertexID(1)` is the root of the main trie
db.top.final.vGen = @[VertexID(LEAST_FREE_VID+1)]
db.top.delta.vGen = @[VertexID(LEAST_FREE_VID+1)]
result = VertexID(LEAST_FREE_VID)
elif db.vGen.len == 1 or pristine:
result = db.vGen[^1]
db.top.final.vGen[^1] = result + 1
db.top.delta.vGen[^1] = result + 1
else:
result = db.vGen[^2]
db.top.final.vGen[^2] = db.top.final.vGen[^1]
db.top.final.vGen.setLen(db.vGen.len-1)
db.top.delta.vGen[^2] = db.top.delta.vGen[^1]
db.top.delta.vGen.setLen(db.vGen.len-1)
doAssert LEAST_FREE_VID <= result.distinctBase
@ -64,14 +64,14 @@ proc vidDispose*(db: AristoDbRef; vid: VertexID) =
##
if LEAST_FREE_VID <= vid.distinctBase:
if db.vGen.len == 0:
db.top.final.vGen = @[vid]
db.top.delta.vGen = @[vid]
else:
let topID = db.vGen[^1]
# Only store smaller numbers: all numbers larger than `topID`
# are free numbers
if vid < topID:
db.top.final.vGen[^1] = vid
db.top.final.vGen.add topID
db.top.delta.vGen[^1] = vid
db.top.delta.vGen.add topID
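The recycle-list convention (the last `vGen` entry is the lowest never-used ID, earlier entries are recycled singletons) is easiest to trace with a short hypothetical session, assuming `LEAST_FREE_VID == 100` purely for readability:

    # vGen = @[]           vidFetch() -> 100    vGen = @[101]
    # vGen = @[101]        vidFetch() -> 101    vGen = @[102]
    # vGen = @[102]        vidDispose(100)      vGen = @[100, 102]
    # vGen = @[100, 102]   vidFetch() -> 100    vGen = @[102]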
proc vidReorg*(vGen: seq[VertexID]): seq[VertexID] =

View File

@ -43,20 +43,6 @@ iterator walkKeyBe*[T: MemBackendRef|VoidBackendRef](
for (vid,key) in walkKeyBeImpl[T](db):
yield (vid,key)
iterator walkFilBe*[T: MemBackendRef|VoidBackendRef](
be: T;
): tuple[qid: QueueID, filter: FilterRef] =
## Iterate over backend filters.
for (qid,filter) in walkFilBeImpl[T](be):
yield (qid,filter)
iterator walkFifoBe*[T: MemBackendRef|VoidBackendRef](
be: T;
): tuple[qid: QueueID, fid: FilterRef] =
## Walk filter slots in fifo order.
for (qid,filter) in walkFifoBeImpl[T](be):
yield (qid,filter)
# -----------
iterator walkPairs*[T: MemBackendRef|VoidBackendRef](

View File

@ -48,20 +48,6 @@ iterator walkKeyBe*[T: RdbBackendRef](
for (vid,key) in walkKeyBeImpl[T](db):
yield (vid,key)
iterator walkFilBe*[T: RdbBackendRef](
be: T;
): tuple[qid: QueueID, filter: FilterRef] =
## Iterate over backend filters.
for (qid,filter) in be.walkFilBeImpl:
yield (qid,filter)
iterator walkFifoBe*[T: RdbBackendRef](
be: T;
): tuple[qid: QueueID, fid: FilterRef] =
## Walk filter slots in fifo order.
for (qid,filter) in be.walkFifoBeImpl:
yield (qid,filter)
# -----------
iterator walkPairs*[T: RdbBackendRef](

View File

@ -23,14 +23,14 @@ iterator walkVtxBeImpl*[T](
): tuple[vid: VertexID, vtx: VertexRef] =
## Generic iterator
when T is VoidBackendRef:
let filter = if db.roFilter.isNil: FilterRef() else: db.roFilter
let filter = if db.balancer.isNil: LayerDeltaRef() else: db.balancer
else:
mixin walkVtx
let filter = FilterRef()
if not db.roFilter.isNil:
filter.sTab = db.roFilter.sTab # copy table
let filter = LayerDeltaRef()
if not db.balancer.isNil:
filter.sTab = db.balancer.sTab # copy table
for (vid,vtx) in db.backend.T.walkVtx:
if filter.sTab.hasKey vid:
@ -52,14 +52,14 @@ iterator walkKeyBeImpl*[T](
): tuple[vid: VertexID, key: HashKey] =
## Generic iterator
when T is VoidBackendRef:
let filter = if db.roFilter.isNil: FilterRef() else: db.roFilter
let filter = if db.balancer.isNil: LayerDeltaRef() else: db.balancer
else:
mixin walkKey
let filter = FilterRef()
if not db.roFilter.isNil:
filter.kMap = db.roFilter.kMap # copy table
let filter = LayerDeltaRef()
if not db.balancer.isNil:
filter.kMap = db.balancer.kMap # copy table
for (vid,key) in db.backend.T.walkKey:
if filter.kMap.hasKey vid:
@ -76,44 +76,6 @@ iterator walkKeyBeImpl*[T](
yield (vid,key)
iterator walkFilBeImpl*[T](
be: T; # Backend descriptor
): tuple[qid: QueueID, filter: FilterRef] =
## Generic filter iterator
when T isnot VoidBackendRef:
mixin walkFil
for (qid,filter) in be.walkFil:
yield (qid,filter)
iterator walkFifoBeImpl*[T](
be: T; # Backend descriptor
): tuple[qid: QueueID, fid: FilterRef] =
## Generic filter iterator walking slots in fifo order. This iterator does
## not depend on the backend type but may be type restricted nevertheless.
when T isnot VoidBackendRef:
proc kvp(chn: int, qid: QueueID): (QueueID,FilterRef) =
let cid = QueueID((chn.uint64 shl 62) or qid.uint64)
(cid, be.getFilFn(cid).get(otherwise = FilterRef(nil)))
if not be.isNil:
let scd = be.journal
if not scd.isNil:
for i in 0 ..< scd.state.len:
let (left, right) = scd.state[i]
if left == 0:
discard
elif left <= right:
for j in right.countDown left:
yield kvp(i, j)
else:
for j in right.countDown QueueID(1):
yield kvp(i, j)
for j in scd.ctx.q[i].wrap.countDown left:
yield kvp(i, j)
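The composite queue ID packs the channel number into the two top bits of the 64-bit value, mirroring the `kvp()` helper above; a small sketch of the encoding and its inverse:

    func toChannelQid(chn: int; qid: QueueID): QueueID =
      QueueID((chn.uint64 shl 62) or qid.uint64)

    func channel(cid: QueueID): int =
      (cid.uint64 shr 62).int   # recover the channel number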
iterator walkPairsImpl*[T](
db: AristoDbRef; # Database with top layer & backend filter
): tuple[vid: VertexID, vtx: VertexRef] =

View File

@ -134,11 +134,11 @@ proc baseMethods(db: AristoCoreDbRef): CoreDbBaseFns =
proc persistent(bn: Option[BlockNumber]): CoreDbRc[void] =
const info = "persistentFn()"
let fid =
if bn.isNone: none(FilterID)
else: some(bn.unsafeGet.truncate(uint64).FilterID)
let sid =
if bn.isNone: 0u64
else: bn.unsafeGet.truncate(uint64)
? kBase.persistent info
? aBase.persistent(fid, info)
? aBase.persistent(sid, info)
ok()
CoreDbBaseFns(
@ -199,11 +199,6 @@ proc create*(dbType: CoreDbType; kdb: KvtDbRef; adb: AristoDbRef): CoreDbRef =
db.methods = db.baseMethods()
db.bless()
proc newAristoMemoryCoreDbRef*(qlr: QidLayoutRef): CoreDbRef =
AristoDbMemory.create(
KvtDbRef.init(use_kvt.MemBackendRef),
AristoDbRef.init(use_ari.MemBackendRef, qlr))
proc newAristoMemoryCoreDbRef*(): CoreDbRef =
AristoDbMemory.create(
KvtDbRef.init(use_kvt.MemBackendRef),
@ -248,7 +243,7 @@ proc toAristoSavedStateBlockNumber*(
if not mBe.isNil and mBe.parent.isAristo:
let rc = mBe.parent.AristoCoreDbRef.adbBase.getSavedState()
if rc.isOk:
return (rc.value.src, rc.value.serial.toBlockNumber)
return (rc.value.src.to(Hash256), rc.value.serial.toBlockNumber)
(EMPTY_ROOT_HASH, 0.toBlockNumber)
# ------------------------------------------------------------------------------

View File

@ -687,7 +687,7 @@ proc swapCtx*(base: AristoBaseRef; ctx: CoreDbCtxRef): CoreDbCtxRef =
proc persistent*(
base: AristoBaseRef;
fid: Option[FilterID];
fid: uint64;
info: static[string];
): CoreDbRc[void] =
let

View File

@ -39,8 +39,7 @@ const
proc newAristoRocksDbCoreDbRef*(path: string): CoreDbRef =
let
qlr = QidLayoutRef(nil)
adb = AristoDbRef.init(use_ari.RdbBackendRef, path, qlr).expect aristoFail
adb = AristoDbRef.init(use_ari.RdbBackendRef, path).expect aristoFail
gdb = adb.guestDb().valueOr: GuestDbRef(nil)
kdb = KvtDbRef.init(use_kvt.RdbBackendRef, path, gdb).expect kvtFail
AristoDbRocks.create(kdb, adb)

View File

@ -11,7 +11,6 @@
{.push raises: [].}
import
std/options,
eth/common,
../aristo,
./backend/aristo_db
@ -63,25 +62,6 @@ proc newCoreDbRef*(
else:
{.error: "Unsupported constructor " & $dbType & ".newCoreDbRef()".}
proc newCoreDbRef*(
dbType: static[CoreDbType]; # Database type symbol
qidLayout: QidLayoutRef; # `Aristo` only
): CoreDbRef =
## Constructor for volatile/memory type DB
##
## Note: Using legacy notation `newCoreDbRef()` rather than
## `CoreDbRef.init()` because of compiler coughing.
##
when dbType == AristoDbMemory:
newAristoMemoryCoreDbRef(DefaultQidLayoutRef)
elif dbType == AristoDbVoid:
newAristoVoidCoreDbRef()
else:
{.error: "Unsupported constructor " & $dbType & ".newCoreDbRef()" &
" with qidLayout argument".}
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------

View File

@ -51,9 +51,9 @@ proc init*[W: MemOnlyBackend|RdbBackendRef](
when B is RdbBackendRef:
let rc = guestDb.getRocksDbFamily()
if rc.isOk:
ok KvtDbRef(top: LayerRef.init(), backend: ? rocksDbBackend rc.value)
ok KvtDbRef(top: LayerRef.init(), backend: ? rocksDbKvtBackend rc.value)
else:
ok KvtDbRef(top: LayerRef.init(), backend: ? rocksDbBackend basePath)
ok KvtDbRef(top: LayerRef.init(), backend: ? rocksDbKvtBackend basePath)
else:
ok KvtDbRef.init B

View File

@ -152,7 +152,9 @@ proc setup(db: RdbBackendRef) =
# Public functions
# ------------------------------------------------------------------------------
proc rocksDbBackend*(path: string): Result[BackendRef,KvtError] =
proc rocksDbKvtBackend*(
path: string;
): Result[BackendRef,KvtError] =
let db = RdbBackendRef(
beKind: BackendRocksDB)
@ -165,7 +167,9 @@ proc rocksDbBackend*(path: string): Result[BackendRef,KvtError] =
db.setup()
ok db
proc rocksDbBackend*(store: ColFamilyReadWrite): Result[BackendRef,KvtError] =
proc rocksDbKvtBackend*(
store: ColFamilyReadWrite;
): Result[BackendRef,KvtError] =
let db = RdbBackendRef(
beKind: BackendRocksDB)
db.rdb.init(store)

View File

@ -14,8 +14,6 @@ import
chronicles,
chronos,
eth/p2p,
../../db/aristo/aristo_desc,
../../db/aristo/aristo_journal/journal_scheduler,
".."/[protocol, sync_desc],
../handlers/eth,
../misc/[best_pivot, block_queue, sync_ctrl, ticker],
@ -87,18 +85,13 @@ proc tickerUpdater(ctx: FullCtxRef): TickerFullStatsUpdater =
let suspended =
0 < ctx.pool.suspendAt and ctx.pool.suspendAt < stats.topAccepted
var journal: seq[int]
if not ctx.pool.journal.isNil:
journal = ctx.pool.journal.lengths()
TickerFullStats(
topPersistent: stats.topAccepted,
nextStaged: stats.nextStaged,
nextUnprocessed: stats.nextUnprocessed,
nStagedQueue: stats.nStagedQueue,
suspended: suspended,
reOrg: stats.reOrg,
journal: journal)
reOrg: stats.reOrg)
proc processStaged(buddy: FullBuddyRef): bool =
@ -187,12 +180,6 @@ proc setup*(ctx: FullCtxRef): bool =
ctx.pool.bCtx = BlockQueueCtxRef.init(rc.value + 1)
if ctx.pool.enableTicker:
ctx.pool.ticker = TickerRef.init(ctx.tickerUpdater)
# Monitor journal state
let adb = ctx.chain.com.db.ctx.getMpt(CtGeneric).backend.toAristo
if not adb.isNil:
doAssert not adb.backend.isNil
ctx.pool.journal = adb.backend.journal
else:
debug "Ticker is disabled"

View File

@ -13,7 +13,6 @@
import
eth/p2p,
chronos,
../../db/aristo/aristo_desc,
../sync_desc,
../misc/[best_pivot, block_queue, ticker]
@ -41,7 +40,6 @@ type
enableTicker*: bool ## Advisory, extra level of gossip
ticker*: TickerRef ## Logger ticker
journal*: QidSchedRef ## Journal access for logging (if any)
FullBuddyRef* = BuddyRef[FullCtxData,FullBuddyData]
## Extended worker peer descriptor

View File

@ -12,7 +12,7 @@
{.push raises: [].}
import
std/[strformat, strutils, sequtils],
std/[strformat, strutils],
chronos,
chronicles,
eth/[common, p2p],
@ -65,7 +65,6 @@ type
nStagedQueue*: int
suspended*: bool
reOrg*: bool
journal*: seq[int]
TickerRef* = ref object
## Ticker descriptor object
@ -187,18 +186,16 @@ proc fullTicker(t: TickerRef) {.gcsafe.} =
# With `int64`, the seconds counter has a range of more than 29*10^10 years
up = (now - t.started).seconds.uint64.toSI
mem = getTotalMem().uint.toSI
jSeq = data.journal
jrn = if 0 < jSeq.len: jSeq.mapIt($it).join("/") else: "n/a"
t.full.lastStats = data
t.visited = now
if data.suspended:
info "Full sync ticker (suspended)", up, nInst, pv,
persistent, staged, unprocessed, queued, reOrg, mem, jrn
persistent, staged, unprocessed, queued, reOrg, mem
else:
info "Full sync ticker", up, nInst, pv,
persistent, staged, unprocessed, queued, reOrg, mem, jrn
persistent, staged, unprocessed, queued, reOrg, mem
# ------------------------------------------------------------------------------
# Private functions: ticking log messages

View File

@ -19,7 +19,7 @@ import
../nimbus/db/aristo/[aristo_desc, aristo_merge],
./replay/[pp, undump_accounts, undump_storages],
./test_sync_snap/[snap_test_xx, test_types],
./test_aristo/[test_backend, test_filter, test_helpers, test_misc, test_tx]
./test_aristo/[test_filter, test_helpers, test_misc, test_tx]
const
baseDir = [".", "..", ".."/"..", $DirSep]
@ -72,23 +72,12 @@ proc setErrorLevel {.used.} =
# Test Runners: accounts and accounts storages
# ------------------------------------------------------------------------------
proc miscRunner(
noisy = true;
layout = LyoSamples[0];
) =
let (lyo,qidSampleSize) = layout
proc miscRunner(noisy = true) =
suite "Aristo: Miscellaneous tests":
test "VertexID recyling lists":
check noisy.testVidRecycleLists()
test &"Low level cascaded fifos API (sample size: {qidSampleSize})":
check noisy.testQidScheduler(layout = lyo, sampleSize = qidSampleSize)
test &"High level cascaded fifos API (sample size: {qidSampleSize})":
check noisy.testFilterFifo(layout = lyo, sampleSize = qidSampleSize)
test "Short keys and other patholgical cases":
check noisy.testShortKeys()
@ -107,8 +96,6 @@ proc accountsRunner(
baseDir = getTmpDir() / sample.name & "-accounts"
dbDir = if persistent: baseDir / "tmp" else: ""
isPersistent = if persistent: "persistent DB" else: "mem DB only"
doRdbOk = (cmpBackends and 0 < dbDir.len)
cmpBeInfo = if doRdbOk: "persistent" else: "memory"
defer:
try: baseDir.removeDir except CatchableError: discard
@ -118,10 +105,6 @@ proc accountsRunner(
test &"Merge {accLst.len} proof & account lists to database":
check noisy.testTxMergeProofAndKvpList(accLst, dbDir, resetDb)
test &"Compare {accLst.len} account lists on {cmpBeInfo}" &
" db backend vs. cache":
check noisy.testBackendConsistency(accLst, dbDir, resetDb)
test &"Delete accounts database successively, {accLst.len} lists":
check noisy.testTxMergeAndDeleteOneByOne(accLst, dbDir)
@ -131,9 +114,6 @@ proc accountsRunner(
test &"Distributed backend access {accLst.len} entries":
check noisy.testDistributedAccess(accLst, dbDir)
test &"Filter backlog management {accLst.len} entries":
check noisy.testFilterBacklog(accLst, rdbPath=dbDir)
proc storagesRunner(
noisy = true;
@ -150,8 +130,6 @@ proc storagesRunner(
baseDir = getTmpDir() / sample.name & "-storage"
dbDir = if persistent: baseDir / "tmp" else: ""
isPersistent = if persistent: "persistent DB" else: "mem DB only"
doRdbOk = (cmpBackends and 0 < dbDir.len)
cmpBeInfo = if doRdbOk: "persistent" else: "memory"
defer:
try: baseDir.removeDir except CatchableError: discard
@ -162,10 +140,6 @@ proc storagesRunner(
check noisy.testTxMergeProofAndKvpList(
stoLst, dbDir, resetDb, fileInfo, oops)
test &"Compare {stoLst.len} slot lists on {cmpBeInfo}" &
" db backend vs. cache":
check noisy.testBackendConsistency(stoLst, dbDir, resetDb)
test &"Delete storage database successively, {stoLst.len} lists":
check noisy.testTxMergeAndDeleteOneByOne(stoLst, dbDir)
@ -175,9 +149,6 @@ proc storagesRunner(
test &"Distributed backend access {stoLst.len} entries":
check noisy.testDistributedAccess(stoLst, dbDir)
test &"Filter backlog management {stoLst.len} entries":
check noisy.testFilterBacklog(stoLst, rdbPath=dbDir)
# ------------------------------------------------------------------------------
# Main function(s)
# ------------------------------------------------------------------------------
@ -195,11 +166,7 @@ when isMainModule:
when true and false:
# Verify Problem with the database for production test
noisy.accountsRunner(persistent=false)
when true: # and false:
for n,w in LyoSamples:
noisy.miscRunner() # layouts = (w[0], 1_000))
noisy.aristoMain()
# This one uses dumps from the external `nimbus-eth1-blob` repo
when true and false:
@ -208,21 +175,6 @@ when isMainModule:
for n,sam in snapOtherList:
noisy.accountsRunner(sam, resetDb=true)
# This one uses dumps from the external `nimbus-eth1-blob` repo
when true and false:
import ./test_sync_snap/snap_storage_xx
let knownFailures: KnownHasherFailure = @[
("storages3__18__25_dump#12.27367",(3,HashifyExistingHashMismatch)),
("storages4__26__33_dump#12.23924",(6,HashifyExistingHashMismatch)),
("storages5__34__41_dump#10.20512",(1,HashifyRootHashMismatch)),
("storagesB__84__92_dump#7.9709", (7,HashifyExistingHashMismatch)),
("storagesD_102_109_dump#18.28287",(9,HashifyExistingHashMismatch)),
]
noisy.showElapsed("@snap_storage_xx"):
for n,sam in snapStorageList:
noisy.accountsRunner(sam, resetDb=true)
noisy.storagesRunner(sam, resetDb=true, oops=knownFailures)
when true: # and false:
let persistent = false # or true
noisy.showElapsed("@snap_test_list"):

View File

@ -1,401 +0,0 @@
# Nimbus
# Copyright (c) 2023-2024 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
# http://www.apache.org/licenses/LICENSE-2.0)
# * MIT license ([LICENSE-MIT](LICENSE-MIT) or
# http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or
# distributed except according to those terms.
## Aristo (aka Patricia) DB records merge test
import
std/[algorithm, hashes, sequtils, sets, strutils, tables],
eth/common,
results,
unittest2,
stew/endians2,
../../nimbus/sync/protocol,
../../nimbus/db/aristo/[
aristo_blobify,
aristo_debug,
aristo_desc,
aristo_desc/desc_backend,
aristo_get,
aristo_init/memory_db,
aristo_init/rocks_db,
aristo_layers,
aristo_merge,
aristo_persistent,
aristo_tx,
aristo_vid],
../replay/xcheck,
./test_helpers
const
BlindHash = EmptyBlob.hash
# ------------------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------------------
func hash(filter: FilterRef): Hash =
## Unique hash/filter -- cannot use de/blobify as the expressions
## `filter.blobify` and `filter.blobify.value.deblobify.value.blobify` are
## not necessarily the same binaries due to unsorted tables.
##
var h = BlindHash
if not filter.isNil:
h = h !& filter.src.hash
h = h !& filter.trg.hash
for w in filter.vGen.vidReorg:
h = h !& w.uint64.hash
for w in filter.sTab.keys.toSeq.mapIt(it.uint64).sorted.mapIt(it.VertexID):
let data = filter.sTab.getOrVoid(w).blobify.get(otherwise = EmptyBlob)
h = h !& (w.uint64.toBytesBE.toSeq & data).hash
for w in filter.kMap.keys.toSeq.mapIt(it.uint64).sorted.mapIt(it.VertexID):
let data = @(filter.kMap.getOrVoid(w).data)
h = h !& (w.uint64.toBytesBE.toSeq & data).hash
!$h
# ------------------------------------------------------------------------------
# Private functions
# ------------------------------------------------------------------------------
proc verify(
ly: LayerRef; # Database layer
be: BackendRef; # Backend
noisy: bool;
): bool =
proc verifyImpl[T](noisy: bool; ly: LayerRef; be: T): bool =
## ..
let
beSTab = be.walkVtx.toSeq.mapIt((it[0],it[1])).toTable
beKMap = be.walkKey.toSeq.mapIt((it[0],it[1])).toTable
for vid in beSTab.keys.toSeq.mapIt(it.uint64).sorted.mapIt(it.VertexID):
let
nVtx = ly.delta.sTab.getOrVoid vid
mVtx = beSTab.getOrVoid vid
xCheck (nVtx != VertexRef(nil))
xCheck (mVtx != VertexRef(nil))
xCheck nVtx == mVtx:
noisy.say "***", "verify",
" beType=", be.typeof,
" vid=", vid.pp,
" nVtx=", nVtx.pp,
" mVtx=", mVtx.pp
xCheck beSTab.len == ly.delta.sTab.len
xCheck beKMap.len == ly.delta.kMap.len:
let
a = ly.delta.kMap.keys.toSeq.toHashSet
b = beKMap.keys.toSeq.toHashSet
noisy.say "***", "verify",
" delta=", (a -+- b).pp
true
case be.kind:
of BackendMemory:
noisy.verifyImpl(ly, be.MemBackendRef)
of BackendRocksDB:
noisy.verifyImpl(ly, be.RdbBackendRef)
else:
raiseAssert "Oops, unsupported backend " & $be.kind
proc verifyFilters(
db: AristoDbRef;
tab: Table[QueueID,Hash];
noisy: bool;
): bool =
proc verifyImpl[T](noisy: bool; tab: Table[QueueID,Hash]; be: T): bool =
## Compare stored filters against registered ones
var n = 0
for (fid,filter) in walkFilBe(be):
let
filterHash = filter.hash
registered = tab.getOrDefault(fid, BlindHash)
xCheck (registered != BlindHash)
xCheck registered == filterHash:
noisy.say "***", "verifyFiltersImpl",
" n=", n+1,
" fid=", fid.pp,
" filterHash=", filterHash.int.toHex,
" registered=", registered.int.toHex
n.inc
xCheck n == tab.len
true
## Wrapper
let be = db.backend
case be.kind:
of BackendMemory:
noisy.verifyImpl(tab, be.MemBackendRef)
of BackendRocksDB:
noisy.verifyImpl(tab, be.RdbBackendRef)
else:
raiseAssert "Oops, unsupported backend " & $be.kind
proc verifyKeys(
db: AristoDbRef;
noisy: bool;
): bool =
proc verifyImpl[T](noisy: bool; db: AristoDbRef): bool =
## Check for zero keys
var zeroKeys: seq[VertexID]
for (vid,vtx) in T.walkPairs(db):
if vtx.isValid and not db.getKey(vid).isValid:
zeroKeys.add vid
xCheck zeroKeys == EmptyVidSeq:
noisy.say "***", "verifyKeys(1)",
"\n zeroKeys=", zeroKeys.pp,
#"\n db\n ", db.pp(backendOk=true),
""
true
## Wrapper
let be = db.backend
case be.kind:
of BackendVoid:
verifyImpl[VoidBackendRef](noisy, db)
of BackendMemory:
verifyImpl[MemBackendRef](noisy, db)
of BackendRocksDB:
verifyImpl[RdbBackendRef](noisy, db)
# -----------
proc collectFilter(
db: AristoDbRef;
filter: FilterRef;
tab: var Table[QueueID,Hash];
noisy: bool;
): bool =
## Store filter on permanent BE and register digest
if not filter.isNil:
let
fid = QueueID(7 * (tab.len + 1)) # just some number
be = db.backend
tx = be.putBegFn()
be.putFilFn(tx, @[(fid,filter)])
let rc = be.putEndFn tx
xCheckRc rc.error == 0
tab[fid] = filter.hash
true
proc mergeData(
db: AristoDbRef;
rootKey: Hash256;
rootVid: VertexID;
proof: openArray[SnapProof];
leafs: openArray[LeafTiePayload];
noisy: bool;
): bool =
## Simplified loop body of `test_mergeProofAndKvpList()`
if 0 < proof.len:
let root = block:
let rc = db.merge(rootKey, rootVid)
xCheckRc rc.error == 0
rc.value
let nMerged = block:
let rc = db.merge(proof, root)
xCheckRc rc.error == 0
rc.value
discard nMerged # Result is currently unused
let merged = db.mergeList(leafs, noisy=noisy)
xCheck merged.error in {AristoError(0), MergeLeafPathCachedAlready}
block:
let rc = db.hashify(noisy = noisy)
xCheckRc rc.error == (0,0):
noisy.say "***", "dataMerge (8)",
" nProof=", proof.len,
" nLeafs=", leafs.len,
" error=", rc.error,
#"\n db\n ", db.pp(backendOk=true),
""
block:
xCheck db.verifyKeys(noisy):
noisy.say "***", "dataMerge (9)",
" nProof=", proof.len,
" nLeafs=", leafs.len,
#"\n db\n ", db.pp(backendOk=true),
""
true
# ------------------------------------------------------------------------------
# Public test function
# ------------------------------------------------------------------------------
proc testBackendConsistency*(
noisy: bool;
list: openArray[ProofTrieData]; # Test data
rdbPath = ""; # Rocks DB storage directory
resetDb = false;
): bool =
## Import accounts
var
filTab: Table[QueueID,Hash] # Filter register
ndb = AristoDbRef() # Reference cache
mdb = AristoDbRef() # Memory backend database
rdb = AristoDbRef() # Rocks DB backend database
rootKey = Hash256() # Root key
count = 0
defer:
rdb.finish(flush=true)
for n,w in list:
if w.root != rootKey or resetDb:
rootKey = w.root
count = 0
ndb = AristoDbRef.init()
mdb = AristoDbRef.init MemBackendRef
if not rdb.backend.isNil: # ignore bootstrap
let verifyFiltersOk = rdb.verifyFilters(filTab, noisy)
xCheck verifyFiltersOk
filTab.clear
rdb.finish(flush=true)
if 0 < rdbPath.len:
let rc = AristoDbRef.init(RdbBackendRef, rdbPath)
xCheckRc rc.error == 0
rdb = rc.value
else:
rdb = AristoDbRef.init MemBackendRef # fake `rdb` database
# Disable automated filter management, still allow filter table access
# for low level read/write testing.
rdb.backend.journal = QidSchedRef(nil)
count.inc
xCheck ndb.backend.isNil
xCheck not mdb.backend.isNil
xCheck ndb.vGen == mdb.vGen
xCheck ndb.top.final.fRpp.len == mdb.top.final.fRpp.len
when true and false:
noisy.say "***", "beCon(1) <", n, "/", list.len-1, ">",
" groups=", count,
"\n ndb\n ", ndb.pp(backendOk = true),
"\n -------------",
"\n mdb\n ", mdb.pp(backendOk = true),
#"\n -------------",
#"\n rdb\n ", rdb.pp(backendOk = true),
"\n -------------"
block:
let
rootVid = VertexID(1)
leafs = w.kvpLst.mapRootVid rootVid # for merging it into main trie
let ndbOk = ndb.mergeData(rootKey, rootVid, w.proof, leafs, noisy=false)
xCheck ndbOk
let mdbOk = mdb.mergeData(rootKey, rootVid, w.proof, leafs, noisy=false)
xCheck mdbOk
let rdbOk = rdb.mergeData(rootKey, rootVid, w.proof, leafs, noisy=false)
xCheck rdbOk
when true and false:
noisy.say "***", "beCon(2) <", n, "/", list.len-1, ">",
" groups=", count,
"\n ndb\n ", ndb.pp(backendOk = true),
"\n -------------",
"\n mdb\n ", mdb.pp(backendOk = true),
#"\n -------------",
#"\n rdb\n ", rdb.pp(backendOk = true),
"\n -------------"
var
mdbPreSave = ""
rdbPreSave {.used.} = ""
when true and false:
mdbPreSave = mdb.pp() # backendOk = true)
rdbPreSave = rdb.pp() # backendOk = true)
# Provide filter, store filter on permanent BE, and register filter digest
block:
let rc = mdb.persist(chunkedMpt=true)
xCheckRc rc.error == 0
let collectFilterOk = rdb.collectFilter(mdb.roFilter, filTab, noisy)
xCheck collectFilterOk
# Store onto backend database
block:
#noisy.say "***", "db-dump\n ", mdb.pp
let rc = mdb.persist(chunkedMpt=true)
xCheckRc rc.error == 0
block:
let rc = rdb.persist(chunkedMpt=true)
xCheckRc rc.error == 0
xCheck ndb.vGen == mdb.vGen
xCheck ndb.top.final.fRpp.len == mdb.top.final.fRpp.len
block:
ndb.top.final.pPrf.clear # let it look like mdb/rdb
xCheck mdb.pPrf.len == 0
xCheck rdb.pPrf.len == 0
let mdbVerifyOk = ndb.top.verify(mdb.backend, noisy)
xCheck mdbVerifyOk:
when true: # and false:
noisy.say "***", "beCon(4) <", n, "/", list.len-1, ">",
" groups=", count,
"\n ndb\n ", ndb.pp(backendOk = true),
"\n -------------",
"\n mdb pre-stow\n ", mdbPreSave,
"\n -------------",
"\n mdb\n ", mdb.pp(backendOk = true),
"\n -------------"
let rdbVerifyOk = ndb.top.verify(rdb.backend, noisy)
xCheck rdbVerifyOk:
when true and false:
noisy.say "***", "beCon(5) <", n, "/", list.len-1, ">",
" groups=", count,
"\n ndb\n ", ndb.pp(backendOk = true),
"\n -------------",
"\n rdb pre-stow\n ", rdbPreSave,
"\n -------------",
"\n rdb\n ", rdb.pp(backendOk = true),
#"\n -------------",
#"\n mdb\n ", mdb.pp(backendOk = true),
"\n -------------"
when true and false:
noisy.say "***", "beCon(9) <", n, "/", list.len-1, ">", " groups=", count
# Finally ...
block:
let verifyFiltersOk = rdb.verifyFilters(filTab, noisy)
xCheck verifyFiltersOk
true
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------

View File

@ -12,20 +12,15 @@
##
import
std/[sequtils, sets, strutils],
std/sets,
eth/common,
results,
unittest2,
../../nimbus/db/aristo/[
aristo_blobify,
aristo_check,
aristo_debug,
aristo_desc,
aristo_desc/desc_backend,
aristo_get,
aristo_journal,
aristo_journal/journal_ops,
aristo_journal/journal_scheduler,
aristo_layers,
aristo_merge,
aristo_persistent,
@ -44,74 +39,6 @@ type
# Private debugging helpers
# ------------------------------------------------------------------------------
proc fifosImpl[T](be: T): seq[seq[(QueueID,FilterRef)]] =
var lastChn = -1
for (qid,val) in be.walkFifoBe:
let chn = (qid.uint64 shr 62).int
while lastChn < chn:
lastChn.inc
result.add newSeq[(QueueID,FilterRef)](0)
result[^1].add (qid,val)
proc fifos(be: BackendRef): seq[seq[(QueueID,FilterRef)]] =
## Wrapper
case be.kind:
of BackendMemory:
return be.MemBackendRef.fifosImpl
of BackendRocksDB:
return be.RdbBackendRef.fifosImpl
else:
discard
check be.kind == BackendMemory or be.kind == BackendRocksDB
func flatten(
a: seq[seq[(QueueID,FilterRef)]];
): seq[(QueueID,FilterRef)]
{.used.} =
for w in a:
result &= w
proc fList(be: BackendRef): seq[(QueueID,FilterRef)] {.used.} =
case be.kind:
of BackendMemory:
return be.MemBackendRef.walkFilBe.toSeq.mapIt((it.qid, it.filter))
of BackendRocksDB:
return be.RdbBackendRef.walkFilBe.toSeq.mapIt((it.qid, it.filter))
else:
discard
check be.kind == BackendMemory or be.kind == BackendRocksDB
func ppFil(w: FilterRef; db = AristoDbRef(nil)): string =
proc qq(key: Hash256; db: AristoDbRef): string =
if db.isNil:
let n = key.to(UInt256)
if n.isZero: "£ø" else: "£" & $n
else:
HashKey.fromBytes(key.data).value.pp(db)
"(" & w.fid.pp & "," & w.src.qq(db) & "->" & w.trg.qq(db) & ")"
func pp(qf: (QueueID,FilterRef); db = AristoDbRef(nil)): string =
"(" & qf[0].pp & "," & (if qf[1].isNil: "ø" else: qf[1].ppFil(db)) & ")"
when false:
proc pp(q: openArray[(QueueID,FilterRef)]; db = AristoDbRef(nil)): string =
"{" & q.mapIt(it.pp(db)).join(",") & "}"
proc pp(q: seq[seq[(QueueID,FilterRef)]]; db = AristoDbRef(nil)): string =
result = "["
for w in q:
if w.len == 0:
result &= "ø"
else:
result &= w.mapIt(it.pp db).join(",")
result &= ","
if result[^1] == ',':
result[^1] = ']'
else:
result &= "]"
# -------------------------
proc dump(pfx: string; dx: varargs[AristoDbRef]): string =
if 0 < dx.len:
result = "\n "
@ -216,7 +143,7 @@ proc cleanUp(dx: var DbTriplet) =
dx[0].finish(flush=true)
dx.reset
proc isDbEq(a, b: FilterRef; db: AristoDbRef; noisy = true): bool =
proc isDbEq(a, b: LayerDeltaRef; db: AristoDbRef; noisy = true): bool =
## Verify that argument filter `a` has the same effect on the
## physical/unfiltered backend of `db` as argument filter `b`.
if a.isNil:
@ -225,7 +152,7 @@ proc isDbEq(a, b: FilterRef; db: AristoDbRef; noisy = true): bool =
return false
if unsafeAddr(a[]) != unsafeAddr(b[]):
if a.src != b.src or
a.trg != b.trg or
a.kMap.getOrVoid(VertexID 1) != b.kMap.getOrVoid(VertexID 1) or
a.vGen != b.vGen:
return false
@ -291,60 +218,6 @@ proc isDbEq(a, b: FilterRef; db: AristoDbRef; noisy = true): bool =
true
proc isEq(a, b: FilterRef; db: AristoDbRef; noisy = true): bool =
## ..
if a.src != b.src:
noisy.say "***", "not isEq:", " a.src=", a.src.pp, " b.src=", b.src.pp
return
if a.trg != b.trg:
noisy.say "***", "not isEq:", " a.trg=", a.trg.pp, " b.trg=", b.trg.pp
return
if a.vGen != b.vGen:
noisy.say "***", "not isEq:", " a.vGen=", a.vGen.pp, " b.vGen=", b.vGen.pp
return
if a.sTab.len != b.sTab.len:
noisy.say "***", "not isEq:",
" a.sTab.len=", a.sTab.len,
" b.sTab.len=", b.sTab.len
return
if a.kMap.len != b.kMap.len:
noisy.say "***", "not isEq:",
" a.kMap.len=", a.kMap.len,
" b.kMap.len=", b.kMap.len
return
for (vid,aVtx) in a.sTab.pairs:
if b.sTab.hasKey vid:
let bVtx = b.sTab.getOrVoid vid
if aVtx != bVtx:
noisy.say "***", "not isEq:",
" vid=", vid.pp,
" aVtx=", aVtx.pp(db),
" bVtx=", bVtx.pp(db)
return
else:
noisy.say "***", "not isEq:",
" vid=", vid.pp,
" aVtx=", aVtx.pp(db),
" bVtx=n/a"
return
for (vid,aKey) in a.kMap.pairs:
if b.kMap.hasKey vid:
let bKey = b.kMap.getOrVoid vid
if aKey != bKey:
noisy.say "***", "not isEq:",
" vid=", vid.pp,
" aKey=", aKey.pp,
" bKey=", bKey.pp
return
else:
noisy.say "*** not eq:",
" vid=", vid.pp,
" aKey=", aKey.pp,
" bKey=n/a"
return
true
# ----------------------
proc checkBeOk(
@ -363,195 +236,8 @@ proc checkBeOk(
noisy.say "***", "db checkBE failed",
" n=", n, "/", dx.len-1,
" cache=", cache
if fifos:
let rc = dx[n].checkJournal()
xCheckRc rc.error == (0,0):
noisy.say "***", "db checkJournal failed",
" n=", n, "/", dx.len-1,
" cache=", cache
true
proc checkFilterTrancoderOk(
dx: DbTriplet;
noisy = true;
): bool =
## ..
for n in 0 ..< dx.len:
if dx[n].roFilter.isValid:
let data = block:
let rc = dx[n].roFilter.blobify()
xCheckRc rc.error == 0:
noisy.say "***", "db serialisation failed",
" n=", n, "/", dx.len-1,
" error=", rc.error
rc.value
let dcdRoundTrip = block:
let rc = data.deblobify FilterRef
xCheckRc rc.error == 0:
noisy.say "***", "db de-serialisation failed",
" n=", n, "/", dx.len-1,
" error=", rc.error
rc.value
let roFilterExRoundTrip = dx[n].roFilter.isEq(dcdRoundTrip, dx[n], noisy)
xCheck roFilterExRoundTrip:
noisy.say "***", "checkFilterTrancoderOk failed",
" n=", n, "/", dx.len-1,
"\n roFilter=", dx[n].roFilter.pp(dx[n]),
"\n dcdRoundTrip=", dcdRoundTrip.pp(dx[n])
true
# -------------------------
proc qid2fidFn(be: BackendRef): QuFilMap =
result = proc(qid: QueueID): Result[FilterID,void] =
let fil = be.getFilFn(qid).valueOr:
return err()
ok fil.fid
proc storeFilter(
be: BackendRef;
filter: FilterRef;
): bool =
## ..
let instr = block:
let rc = be.journalOpsPushSlot(filter, none(FilterID))
xCheckRc rc.error == 0
rc.value
# Update database
let txFrame = be.putBegFn()
be.putFilFn(txFrame, instr.put)
be.putFqsFn(txFrame, instr.scd.state)
let rc = be.putEndFn txFrame
xCheckRc rc.error == 0
be.journal.state = instr.scd.state
true
proc storeFilter(
be: BackendRef;
serial: int;
): bool =
## Variant of `storeFilter()`
let fid = FilterID(serial)
be.storeFilter FilterRef(
fid: fid,
src: fid.to(Hash256),
trg: (fid-1).to(Hash256))
proc fetchDelete(
be: BackendRef;
backSteps: int;
filter: var FilterRef;
): bool =
## ...
let
vfyInst = block:
let rc = be.journalOpsDeleteSlots(backSteps = backSteps)
xCheckRc rc.error == 0
rc.value
instr = block:
let rc = be.journalOpsFetchSlots(backSteps = backSteps)
xCheckRc rc.error == 0
rc.value
qid = be.journal.le(instr.fil.fid, be.qid2fidFn)
inx = be.journal[qid]
xCheck backSteps == inx + 1
xCheck instr.del.put == vfyInst.put
xCheck instr.del.scd.state == vfyInst.scd.state
xCheck instr.del.scd.ctx == vfyInst.scd.ctx
# Update database
block:
let txFrame = be.putBegFn()
be.putFilFn(txFrame, instr.del.put)
be.putFqsFn(txFrame, instr.del.scd.state)
let rc = be.putEndFn txFrame
xCheckRc rc.error == 0
be.journal.state = instr.del.scd.state
filter = instr.fil
# Verify that state was properly installed
let rc = be.getFqsFn()
xCheckRc rc.error == 0
xCheck rc.value == be.journal.state
true
proc validateFifo(
be: BackendRef;
serial: int;
hashesOk = false;
): bool =
##
## Verify filter setup
##
## Example (hashesOk==false)
## ::
## QueueID | FilterID | HashKey
## qid | filter.fid | filter.src -> filter.trg
## --------+------------+--------------------------
## %4 | @654 | £654 -> £653
## %3 | @653 | £653 -> £652
## %2 | @652 | £652 -> £651
## %1 | @651 | £651 -> £650
## %a | @650 | £650 -> £649
## %9 | @649 | £649 -> £648
## | |
## %1:2 | @648 | £648 -> £644
## %1:1 | @644 | £644 -> £640
## %1:a | @640 | £640 -> £636
## %1:9 | @636 | £636 -> £632
## %1:8 | @632 | £632 -> £628
## %1:7 | @628 | £628 -> £624
## %1:6 | @624 | £624 -> £620
## | |
## %2:1 | @620 | £620 -> £600
## %2:a | @600 | £600 -> £580
## .. | .. | ..
##
var
lastTrg = serial.u256
inx = 0
lastFid = FilterID(serial+1)
if hashesOk:
lastTrg = be.getKeyFn(VertexID(1)).get(otherwise=HashKey()).to(UInt256)
for chn,fifo in be.fifos:
for (qid,filter) in fifo:
# Check filter objects
xCheck chn == (qid.uint64 shr 62).int
xCheck filter != FilterRef(nil)
xCheck filter.src.to(UInt256) == lastTrg
lastTrg = filter.trg.to(UInt256)
# Check random access
xCheck qid == be.journal[inx]
xCheck inx == be.journal[qid]
# Check access by filter ID (all map back to `qid`)
for fid in filter.fid ..< lastFid:
xCheck qid == be.journal.le(fid, be.qid2fidFn)
inx.inc
lastFid = filter.fid
true
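A hypothetical helper restating the chained `src -> trg` invariant from the table above: walking youngest to oldest, each filter's source state must equal the target state of the next-younger filter.

    proc chainOk(fifo: openArray[FilterRef]): bool =
      for i in 1 ..< fifo.len:
        if fifo[i].src != fifo[i-1].trg:
          return false
      true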
# ---------------------------------
iterator payload(list: openArray[ProofTrieData]): LeafTiePayload =
for w in list:
for p in w.kvpLst.mapRootVid VertexID(1):
yield p
# ------------------------------------------------------------------------------
# Public test function
# ------------------------------------------------------------------------------
@ -568,8 +254,8 @@ proc testDistributedAccess*(
# Resulting clause (11) filters from `aristo/README.md` example
# which will be used in the second part of the tests
var
c11Filter1 = FilterRef(nil)
c11Filter3 = FilterRef(nil)
c11Filter1 = LayerDeltaRef(nil)
c11Filter3 = LayerDeltaRef(nil)
# Work through clauses (8)..(11) from `aristo/README.md` example
block:
@ -591,35 +277,32 @@ proc testDistributedAccess*(
block:
let rc = db1.persist()
xCheckRc rc.error == 0
xCheck db1.roFilter == FilterRef(nil)
xCheck db2.roFilter == db3.roFilter
xCheck db1.balancer == LayerDeltaRef(nil)
xCheck db2.balancer == db3.balancer
block:
let rc = db2.stow() # non-persistent
xCheckRc rc.error == 0:
noisy.say "*** testDistributedAccess (3)", "n=", n, "db2".dump db2
xCheck db1.roFilter == FilterRef(nil)
xCheck db2.roFilter != db3.roFilter
xCheck db1.balancer == LayerDeltaRef(nil)
xCheck db2.balancer != db3.balancer
# Clause (11) from `aristo/README.md` example
db2.reCentre()
block:
let rc = db2.persist()
xCheckRc rc.error == 0
xCheck db2.roFilter == FilterRef(nil)
xCheck db2.balancer == LayerDeltaRef(nil)
# Check/verify backends
block:
let ok = dx.checkBeOk(noisy=noisy,fifos=true)
xCheck ok:
noisy.say "*** testDistributedAccess (4)", "n=", n, "db3".dump db3
block:
let ok = dx.checkFilterTrancoderOk(noisy=noisy)
xCheck ok
# Capture filters from clause (11)
c11Filter1 = db1.roFilter
c11Filter3 = db3.roFilter
c11Filter1 = db1.balancer
c11Filter3 = db3.balancer
# Clean up
dx.cleanUp()
@ -642,8 +325,8 @@ proc testDistributedAccess*(
block:
let rc = db2.persist()
xCheckRc rc.error == 0
xCheck db2.roFilter == FilterRef(nil)
xCheck db1.roFilter == db3.roFilter
xCheck db2.balancer == LayerDeltaRef(nil)
xCheck db1.balancer == db3.balancer
# Clause (13) from `aristo/README.md` example
xCheck not db1.isCentre()
@ -652,7 +335,7 @@ proc testDistributedAccess*(
xCheckRc rc.error == 0
# Clause (14) from `aristo/README.md` check
let c11Fil1_eq_db1RoFilter = c11Filter1.isDbEq(db1.roFilter, db1, noisy)
let c11Fil1_eq_db1RoFilter = c11Filter1.isDbEq(db1.balancer, db1, noisy)
xCheck c11Fil1_eq_db1RoFilter:
noisy.say "*** testDistributedAccess (7)", "n=", n,
"\n c11Filter1\n ", c11Filter1.pp(db1),
@ -660,7 +343,7 @@ proc testDistributedAccess*(
""
# Clause (15) from `aristo/README.md` check
let c11Fil3_eq_db3RoFilter = c11Filter3.isDbEq(db3.roFilter, db3, noisy)
let c11Fil3_eq_db3RoFilter = c11Filter3.isDbEq(db3.balancer, db3, noisy)
xCheck c11Fil3_eq_db3RoFilter:
noisy.say "*** testDistributedAccess (8)", "n=", n,
"\n c11Filter3\n ", c11Filter3.pp(db3),
@ -671,219 +354,12 @@ proc testDistributedAccess*(
block:
let ok = dy.checkBeOk(noisy=noisy,fifos=true)
xCheck ok
block:
let ok = dy.checkFilterTrancoderOk(noisy=noisy)
xCheck ok
when false: # or true:
noisy.say "*** testDistributedAccess (9)", "n=", n # , dy.dump
true
proc testFilterFifo*(
noisy = true;
layout = LyoSamples[0][0]; # Backend fifos layout
sampleSize = LyoSamples[0][1]; # Synthetic filters generation
reorgPercent = 40; # To be deleted and re-filled
rdbPath = ""; # Optional Rocks DB storage directory
): bool =
let
db = if 0 < rdbPath.len:
let rc = AristoDbRef.init(RdbBackendRef, rdbPath, layout.to(QidLayoutRef))
xCheckRc rc.error == 0
rc.value
else:
AristoDbRef.init(MemBackendRef, layout.to(QidLayoutRef))
be = db.backend
defer: db.finish(flush=true)
proc show(serial = 0; exec: seq[QidAction] = @[]) {.used.} =
var s = ""
if 0 < serial:
s &= " n=" & $serial
s &= " len=" & $be.journal.len
if 0 < exec.len:
s &= " exec=" & exec.pp
s &= "" &
"\n state=" & be.journal.state.pp &
#"\n list=" & be.fList.pp &
"\n fifo=" & be.fifos.pp &
"\n"
noisy.say "***", s
when false: # or true
noisy.say "***", "sampleSize=", sampleSize,
" ctx=", be.journal.ctx.q, " stats=", be.journal.ctx.stats
# -------------------
block:
let rc = db.checkJournal()
xCheckRc rc.error == (0,0)
for n in 1 .. sampleSize:
#let trigger = n in {7,8}
#if trigger: show(n, be.journal.addItem.exec)
block:
let storeFilterOK = be.storeFilter(serial=n)
xCheck storeFilterOK
block:
#if trigger: show(n)
let rc = db.checkJournal()
xCheckRc rc.error == (0,0)
block:
let validateFifoOk = be.validateFifo(serial=n)
xCheck validateFifoOk
# Squash some entries on the fifo
block:
var
filtersLen = be.journal.len
nDel = (filtersLen * reorgPercent) div 100
filter: FilterRef
# Extract and delete leading filters, keep the squashed extract for re-use
let fetchDeleteOk = be.fetchDelete(nDel, filter)
xCheck fetchDeleteOk
xCheck be.journal.len + nDel == filtersLen
# Push squashed filter
let storeFilterOK = be.storeFilter filter
xCheck storeFilterOK
# Continue adding items
for n in sampleSize + 1 .. 2 * sampleSize:
let storeFilterOK = be.storeFilter(serial=n)
xCheck storeFilterOK
#show(n)
let validateFifoOk = be.validateFifo(serial=n)
xCheck validateFifoOk
block:
let rc = db.checkJournal()
xCheckRc rc.error == (0,0)
true
proc testFilterBacklog*(
noisy: bool;
list: openArray[ProofTrieData]; # Sample data for generating filters
layout = LyoSamples[0][0]; # Backend fifos layout
reorgPercent = 40; # To be deleted and re-filled
rdbPath = ""; # Optional Rocks DB storage directory
sampleSize = 777; # Truncate `list`
): bool =
let
db = if 0 < rdbPath.len:
let rc = AristoDbRef.init(RdbBackendRef, rdbPath, layout.to(QidLayoutRef))
xCheckRc rc.error == 0
rc.value
else:
AristoDbRef.init(MemBackendRef, layout.to(QidLayoutRef))
be = db.backend
defer: db.finish(flush=true)
proc show(serial = -42, blurb = "") {.used.} =
var s = blurb
if 0 <= serial:
s &= " n=" & $serial
s &= " nFilters=" & $be.journal.len
s &= "" &
" root=" & be.getKeyFn(VertexID(1)).get(otherwise=VOID_HASH_KEY).pp &
"\n state=" & be.journal.state.pp &
"\n fifo=" & be.fifos.pp(db) &
"\n"
noisy.say "***", s
# -------------------
# Load/store persistent data while producing backlog
let payloadList = list.payload.toSeq
for n,w in payloadList:
if sampleSize < n:
break
block:
let rc = db.mergeLeaf w
xCheckRc rc.error == 0
block:
let rc = db.persist()
xCheckRc rc.error == 0
block:
let rc = db.checkJournal()
xCheckRc rc.error == (0,0)
let validateFifoOk = be.validateFifo(serial=n, hashesOk=true)
xCheck validateFifoOk
when false: # or true:
if (n mod 111) == 3:
show(n, "testFilterBacklog (1)")
# Verify
block:
let rc = db.check(relax=false)
xCheckRc rc.error == (0,0)
#show(min(payloadList.len, sampleSize), "testFilterBacklog (2)")
# -------------------
# Retrieve some earlier state
var
fifoLen = be.journal.len
pivot = (fifoLen * reorgPercent) div 100
qid {.used.} = be.journal[pivot]
xb = AristoDbRef(nil)
for episode in 0 .. pivot:
if not xb.isNil:
let rc = xb.forget
xCheckRc rc.error == 0
xCheck db.nForked == 0
# Realign to earlier state
xb = block:
let rc = db.journalFork(episode = episode)
xCheckRc rc.error == 0
rc.value
block:
let rc = xb.check(relax=false)
xCheckRc rc.error == (0,0)
# Store this state to the backend database (temporarily re-centre)
block:
let rc = xb.journalUpdate(reCentreOk = true)
xCheckRc rc.error == 0
xCheck db.isCentre
block:
let rc = db.check(relax=false)
xCheckRc rc.error == (0,0)
block:
let rc = xb.check(relax=false)
xCheckRc rc.error == (0,0)
# Restore previous database state
block:
let rc = db.journalUpdate()
xCheckRc rc.error == 0
block:
let rc = db.check(relax=false)
xCheckRc rc.error == (0,0)
block:
let rc = xb.check(relax=false)
xCheckRc rc.error == (0,0)
block:
let rc = db.checkJournal()
xCheckRc rc.error == (0,0)
#show(episode, "testFilterBacklog (3)")
# Note that the above process squashes the first `episode` entries into
# a single one (summing up the counts gives an arithmetic series.)
let expSize = max(1, fifoLen - episode * (episode+1) div 2)
xCheck be.journal.len == expSize
true
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------

View File

@ -9,11 +9,11 @@
# distributed except according to those terms.
import
std/[hashes, os, sequtils],
std/[os, sequtils],
eth/common,
rocksdb,
../../nimbus/db/aristo/[
aristo_debug, aristo_desc, aristo_delete, aristo_journal/journal_scheduler,
aristo_debug, aristo_desc, aristo_delete,
aristo_hashify, aristo_hike, aristo_merge],
../../nimbus/db/kvstore_rocksdb,
../../nimbus/sync/protocol/snap/snap_types,
@ -30,14 +30,6 @@ type
proof*: seq[SnapProof]
kvpLst*: seq[LeafTiePayload]
const
samples = [
[ (4,0,10), (3,3,10), (3,4,10), (3,5,10)],
[(2,0,high int),(1,1,high int),(1,1,high int),(1,1,high int)],
]
LyoSamples* = samples.mapIt((it, (3 * it.volumeSize.minCovered) div 2))
# ------------------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------------------
@ -110,7 +102,7 @@ proc say*(noisy = false; pfx = "***"; args: varargs[string, `$`]) =
func `==`*[T: AristoError|VertexID](a: T, b: int): bool =
a == T(b)
func `==`*(a: (VertexID|QueueID,AristoError), b: (int,int)): bool =
func `==`*(a: (VertexID,AristoError), b: (int,int)): bool =
(a[0].int,a[1].int) == b
func `==`*(a: (VertexID,AristoError), b: (int,AristoError)): bool =
@ -122,9 +114,6 @@ func `==`*(a: (int,AristoError), b: (int,int)): bool =
func `==`*(a: (int,VertexID,AristoError), b: (int,int,int)): bool =
(a[0], a[1].int, a[2].int) == b
func `==`*(a: (QueueID,Hash), b: (int,Hash)): bool =
(a[0].int,a[1]) == b
func to*(a: Hash256; T: type UInt256): T =
T.fromBytesBE a.data
@ -134,9 +123,6 @@ func to*(a: Hash256; T: type PathID): T =
func to*(a: HashKey; T: type UInt256): T =
T.fromBytesBE 0u8.repeat(32 - a.len) & @(a.data)
func to*(fid: FilterID; T: type Hash256): T =
result.data = fid.uint64.u256.toBytesBE
proc to*(sample: AccountsSample; T: type seq[UndumpAccounts]): T =
## Convert test data into usable in-memory format
let file = sample.file.findFilePath.value

View File

@ -11,7 +11,7 @@
## Aristo (aka Patricia) DB transcoder test
import
std/[algorithm, sequtils, sets, strutils],
std/[sequtils, sets],
eth/common,
results,
stew/byteutils,
@ -21,7 +21,6 @@ import
../../nimbus/db/aristo/[
aristo_check, aristo_debug, aristo_desc, aristo_blobify, aristo_layers,
aristo_vid],
../../nimbus/db/aristo/aristo_journal/journal_scheduler,
../replay/xcheck,
./test_helpers
@ -29,12 +28,6 @@ type
TesterDesc = object
prng: uint32 ## random state
QValRef = ref object
fid: FilterID
width: uint32
QTabRef = TableRef[QueueID,QValRef]
# ------------------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------------------
@ -79,179 +72,6 @@ proc init(T: type TesterDesc; seed: int): TesterDesc =
proc `+`(a: VertexID, b: int): VertexID =
(a.uint64 + b.uint64).VertexID
# ---------------------
iterator walkFifo(qt: QTabRef;scd: QidSchedRef): (QueueID,QValRef) =
## ...
proc kvp(chn: int, qid: QueueID): (QueueID,QValRef) =
let cid = QueueID((chn.uint64 shl 62) or qid.uint64)
(cid, qt.getOrDefault(cid, QValRef(nil)))
if not scd.isNil:
for i in 0 ..< scd.state.len:
let (left, right) = scd.state[i]
if left == 0:
discard
elif left <= right:
for j in right.countDown left:
yield kvp(i, j)
else:
for j in right.countDown QueueID(1):
yield kvp(i, j)
for j in scd.ctx.q[i].wrap.countDown left:
yield kvp(i, j)
proc fifos(qt: QTabRef; scd: QidSchedRef): seq[seq[(QueueID,QValRef)]] =
## ..
var lastChn = -1
for (qid,val) in qt.walkFifo scd:
let chn = (qid.uint64 shr 62).int
while lastChn < chn:
lastChn.inc
result.add newSeq[(QueueID,QValRef)](0)
result[^1].add (qid,val)
func sortedPairs(qt: QTabRef): seq[(QueueID,QValRef)] =
qt.keys.toSeq.mapIt(it.uint64).sorted.mapIt(it.QueueID).mapIt((it,qt[it]))
func flatten(a: seq[seq[(QueueID,QValRef)]]): seq[(QueueID,QValRef)] =
for w in a:
result &= w
func pp(val: QValRef): string =
if val.isNil:
return "ø"
result = $val.fid.uint64
if 0 < val.width:
result &= ":" & $val.width
func pp(kvp: (QueueID,QValRef)): string =
kvp[0].pp & "=" & kvp[1].pp
func pp(qt: QTabRef): string =
"{" & qt.sortedPairs.mapIt(it.pp).join(",") & "}"
func pp(qt: QTabRef; scd: QidSchedRef): string =
result = "["
for w in qt.fifos scd:
if w.len == 0:
result &= "ø"
else:
result &= w.mapIt(it.pp).join(",")
result &= ","
if result[^1] == ',':
result[^1] = ']'
else:
result &= "]"
# ------------------
proc exec(db: QTabRef; serial: int; instr: seq[QidAction]; relax: bool): bool =
## ..
var
saved: bool
hold: seq[(QueueID,QueueID)]
for act in instr:
case act.op:
of Oops:
xCheck act.op != Oops
of SaveQid:
xCheck not saved
db[act.qid] = QValRef(fid: FilterID(serial))
saved = true
of DelQid:
let val = db.getOrDefault(act.qid, QValRef(nil))
xCheck not val.isNil
db.del act.qid
of HoldQid:
hold.add (act.qid, act.xid)
of DequQid:
var merged = QValRef(nil)
for w in hold:
for qid in w[0] .. w[1]:
let val = db.getOrDefault(qid, QValRef(nil))
if not relax:
xCheck not val.isNil
if not val.isNil:
if merged.isNil:
merged = val
else:
if relax:
xCheck merged.fid + merged.width + 1 <= val.fid
else:
xCheck merged.fid + merged.width + 1 == val.fid
merged.width += val.width + 1
db.del qid
if not relax:
xCheck not merged.isNil
if not merged.isNil:
db[act.qid] = merged
hold.setLen(0)
xCheck saved
xCheck hold.len == 0
true
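In prose, the instruction stream interpreted above works as follows (a restatement of the cases, not new behaviour):

    # SaveQid qid       -- store the new item under `qid` (exactly once per cycle)
    # DelQid  qid       -- drop the item stored under `qid`
    # HoldQid qid..xid  -- mark the inclusive ID range for the next DequQid
    # DequQid qid       -- squash all held items into one and store it at `qid`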
proc validate(db: QTabRef; scd: QidSchedRef; serial: int; relax: bool): bool =
## Verify that the round-robin queues in `db` are consecutive and in the
## right order.
var
step = 1u
lastVal = FilterID(serial+1)
for chn,queue in db.fifos scd:
step *= scd.ctx.q[chn].width + 1 # defined by schedule layout
for kvp in queue:
let val = kvp[1]
if not relax:
xCheck not val.isNil # Entries must exist
xCheck val.fid + step == lastVal # Item distances must match
if not val.isNil:
xCheck val.fid + step <= lastVal # Item distances must decrease
xCheck val.width + 1 == step # Must correspond to `step` size
lastVal = val.fid
# Compare database against expected fill state
if relax:
xCheck db.len <= scd.len
else:
xCheck db.len == scd.len
proc qFn(qid: QueueID): Result[FilterID,void] =
let val = db.getOrDefault(qid, QValRef(nil))
if val.isNil:
return err()
ok val.fid
# Test filter ID selection
var lastFid = FilterID(serial + 1)
xCheck scd.le(lastFid + 0, qFn) == scd[0] # Test fringe condition
xCheck scd.le(lastFid + 1, qFn) == scd[0] # Test fringe condition
for (qid,val) in db.fifos(scd).flatten:
xCheck scd.eq(val.fid, qFn) == qid
xCheck scd.le(val.fid, qFn) == qid
for w in val.fid+1 ..< lastFid:
xCheck scd.le(w, qFn) == qid
xCheck scd.eq(w, qFn) == QueueID(0)
lastFid = val.fid
if FilterID(1) < lastFid: # Test fringe condition
xCheck scd.le(lastFid - 1, qFn) == QueueID(0)
if FilterID(2) < lastFid: # Test fringe condition
xCheck scd.le(lastFid - 2, qFn) == QueueID(0)
true
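The distance rule used above can be spelled out on its own; a sketch, with `scd` being the schedule descriptor from the enclosing test:

    var step = 1u
    for chn in 0 ..< scd.ctx.q.len:
      step *= scd.ctx.q[chn].width + 1   # each channel is (width+1) times sparser
      echo "channel ", chn, ": FilterID distance between items is ", step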
# ------------------------------------------------------------------------------
# Public test function
# ------------------------------------------------------------------------------
@ -289,7 +109,7 @@ proc testVidRecycleLists*(noisy = true; seed = 42): bool =
db1 = AristoDbRef.init()
rc = dbBlob.deblobify seq[VertexID]
xCheckRc rc.error == 0
db1.top.final.vGen = rc.value
db1.top.delta.vGen = rc.value
xCheck db.vGen == db1.vGen
@ -307,7 +127,7 @@ proc testVidRecycleLists*(noisy = true; seed = 42): bool =
xCheck db.vGen.len == 1
# Repeat last test after clearing the cache
db.top.final.vGen.setLen(0)
db.top.delta.vGen.setLen(0)
for n in 0 .. 5:
let w = db.vidFetch()
xCheck w == VertexID(LEAST_FREE_VID) + n # VertexID(1) is default root ID
@ -334,137 +154,6 @@ proc testVidRecycleLists*(noisy = true; seed = 42): bool =
true
proc testQidScheduler*(
noisy = true;
layout = LyoSamples[0][0];
sampleSize = LyoSamples[0][1];
reorgPercent = 40
): bool =
##
## Example table for `QidSlotLyo` layout after 10_000 cycles
## ::
## QueueID | QValRef |
## | FilterID | width | comment
## --------+----------+-------+----------------------------------
## %a | 10000 | 0 | %a stands for QueueID(10)
## %9 | 9999 | 0 |
## %8 | 9998 | 0 |
## %7 | 9997 | 0 |
## | | |
## %1:9 | 9993 | 3 | 9993 + 3 + 1 => 9997, see %7
## %1:8 | 9989 | 3 |
## %1:7 | 9985 | 3 |
## %1:6 | 9981 | 3 | %1:6 stands for QueueID((1 shl 62) + 6)
## | | |
## %2:9 | 9961 | 19 | 9961 + 19 + 1 => 9981, see %1:6
## %2:8 | 9941 | 19 |
## %2:7 | 9921 | 19 |
## %2:6 | 9901 | 19 |
## %2:5 | 9881 | 19 |
## %2:4 | 9861 | 19 |
## %2:3 | 9841 | 19 |
## | | |
## %3:2 | 9721 | 119 | 9721 + 119 + 1 => 9841, see %2:3
## %3:1 | 9601 | 119 |
## %3:a | 9481 | 119 |
##
var
debug = false # or true
let
list = newTable[QueueID,QValRef]()
scd = QidSchedRef.init layout
ctx = scd.ctx.q
proc show(serial = 0; exec: seq[QidAction] = @[]) =
var s = ""
if 0 < serial:
s &= "n=" & $serial
if 0 < exec.len:
s &= " exec=" & exec.pp
s &= "" &
"\n state=" & scd.state.pp &
"\n list=" & list.pp &
"\n fifo=" & list.pp(scd) &
"\n"
noisy.say "***", s
if debug:
noisy.say "***", "sampleSize=", sampleSize,
" ctx=", ctx, " stats=", scd.volumeSize()
for n in 1 .. sampleSize:
let w = scd.addItem()
let execOk = list.exec(serial=n, instr=w.exec, relax=false)
xCheck execOk
scd[] = w.journal[]
let validateOk = list.validate(scd, serial=n, relax=false)
xCheck validateOk:
show(serial=n, exec=w.exec)
let fifoID = list.fifos(scd).flatten.mapIt(it[0])
for j in 0 ..< list.len:
# Check fifo order
xCheck fifoID[j] == scd[j]:
noisy.say "***", "n=", n, " exec=", w.exec.pp,
" fifoID[", j, "]=", fifoID[j].pp,
" scd[", j, "]=", scd[j].pp,
"\n fifo=", list.pp scd
# Check random access and reverse
let qid = scd[j]
xCheck j == scd[qid]
if debug:
show(exec=w.exec)
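Each round above follows an execute-then-commit pattern: the instructions returned by `addItem()` are applied to the table first, and only then is the scheduler state advanced by copying back the journal snapshot via `scd[] = w.journal[]`. A minimal sketch of that pattern with illustrative types (not the real scheduler API):

    type
      Sched = object
        state: int
      AddResult = object
        exec: seq[string]   # instructions to apply first
        journal: Sched      # snapshot to commit afterwards

    proc addItem(s: Sched): AddResult =
      AddResult(exec: @["push"], journal: Sched(state: s.state + 1))

    var scd = Sched()
    let w = scd.addItem()
    # ... apply `w.exec` against the table here ...
    scd = w.journal   # commit only after the instructions succeeded
    doAssert scd.state == 1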
# -------------------
# Mark some entries from the database as deleted
var
nDel = (list.len * reorgPercent) div 100
delIDs: HashSet[QueueID]
for n in 0 ..< nDel:
delIDs.incl scd[n]
# Delete these entries
let fetch = scd.fetchItems nDel
for act in fetch.exec:
xCheck act.op == HoldQid
for qid in act.qid .. act.xid:
xCheck qid in delIDs
xCheck list.hasKey qid
delIDs.excl qid
list.del qid
xCheck delIDs.len == 0
scd[] = fetch.journal[]
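The bookkeeping above hinges on the fetched `HoldQid` actions covering exactly the IDs previously marked for deletion, each one once. A minimal sketch of that covering check, with illustrative types in place of `QidAction`:

    import std/sets

    proc coversExactly(ranges: seq[(uint, uint)]; marked: HashSet[uint]): bool =
      var left = marked
      for (lo, hi) in ranges:
        for id in lo .. hi:
          if id notin left: return false   # covered twice or never marked
          left.excl id
      left.len == 0                        # nothing marked was left uncovered

    doAssert coversExactly(@[(1u, 3u), (7u, 7u)], [1u, 2u, 3u, 7u].toHashSet)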
# -------------------
# Continue adding items
for n in sampleSize + 1 .. 2 * sampleSize:
let w = scd.addItem()
let execOk = list.exec(serial=n, instr=w.exec, relax=true)
xCheck execOk
scd[] = w.journal[]
let validateOk = list.validate(scd, serial=n, relax=true)
xCheck validateOk:
show(serial=n, exec=w.exec)
# Continue adding items, now strictly
for n in 2 * sampleSize + 1 .. 3 * sampleSize:
let w = scd.addItem()
let execOk = list.exec(serial=n, instr=w.exec, relax=false)
xCheck execOk
scd[] = w.journal[]
let validateOk = list.validate(scd, serial=n, relax=false)
xCheck validateOk
if debug:
show()
true
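The arithmetic in the layout table documented above follows a single rule: at each queue boundary, `fid + width + 1` of the older entry equals the `fid` of the next younger entry. A short self-check of three boundaries from that table (plain tuples, not the real `QValRef`):

    type Row = tuple[fid, width: uint]

    proc boundaryOk(older, younger: Row): bool =
      older.fid + older.width + 1 == younger.fid

    doAssert boundaryOk((fid: 9993u, width: 3u), (fid: 9997u, width: 0u))    # %1:9 -> %7
    doAssert boundaryOk((fid: 9961u, width: 19u), (fid: 9981u, width: 3u))   # %2:9 -> %1:6
    doAssert boundaryOk((fid: 9721u, width: 119u), (fid: 9841u, width: 19u)) # %3:2 -> %2:3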
proc testShortKeys*(
noisy = true;
): bool =

View File

@ -42,10 +42,6 @@ const
MaxFilterBulk = 150_000
## Policy setting for `pack()`
let
TxQidLyo = LyoSamples[0][0].to(QidLayoutRef)
## Cascaded filter slots layout for testing
# ------------------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------------------
@ -125,8 +121,8 @@ proc schedStow(
## Scheduled storage
let
layersMeter = db.nLayersVtx() + db.nLayersKey()
filterMeter = if db.roFilter.isNil: 0
else: db.roFilter.sTab.len + db.roFilter.kMap.len
filterMeter = if db.balancer.isNil: 0
else: db.balancer.sTab.len + db.balancer.kMap.len
persistent = MaxFilterBulk < max(layersMeter, filterMeter)
if persistent:
db.persist(chunkedMpt=chunkedMpt)
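The staging policy above reduces to a single comparison: flush to the backend once either meter outgrows the bulk limit. A minimal sketch with made-up meter values (`MaxFilterBulk` mirrors the constant defined in this file):

    const MaxFilterBulk = 150_000

    proc wantPersist(layersMeter, filterMeter: int): bool =
      MaxFilterBulk < max(layersMeter, filterMeter)

    doAssert not wantPersist(1_000, 2_000)   # below the limit: keep staging
    doAssert wantPersist(200_000, 0)         # layers meter trips the flush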
@ -349,11 +345,11 @@ proc testTxMergeAndDeleteOneByOne*(
# Start with brand new persistent database.
db = block:
if 0 < rdbPath.len:
let rc = AristoDbRef.init(RdbBackendRef, rdbPath, qidLayout=TxQidLyo)
let rc = AristoDbRef.init(RdbBackendRef, rdbPath)
xCheckRc rc.error == 0
rc.value
else:
AristoDbRef.init(MemBackendRef, qidLayout=TxQidLyo)
AristoDbRef.init(MemBackendRef)
# Start transaction (double frame for testing)
xCheck db.txTop.isErr
@ -457,11 +453,11 @@ proc testTxMergeAndDeleteSubTree*(
# Start with brand new persistent database.
db = block:
if 0 < rdbPath.len:
let rc = AristoDbRef.init(RdbBackendRef, rdbPath, qidLayout=TxQidLyo)
let rc = AristoDbRef.init(RdbBackendRef, rdbPath)
xCheckRc rc.error == 0
rc.value
else:
AristoDbRef.init(MemBackendRef, qidLayout=TxQidLyo)
AristoDbRef.init(MemBackendRef)
if testRootVid != VertexID(1):
# Add a dummy entry so the journal logic can be triggered
@ -559,11 +555,11 @@ proc testTxMergeProofAndKvpList*(
db = block:
# New DB with disabled filter slots management
if 0 < rdbPath.len:
let rc = AristoDbRef.init(RdbBackendRef, rdbPath, QidLayoutRef(nil))
let rc = AristoDbRef.init(RdbBackendRef, rdbPath)
xCheckRc rc.error == 0
rc.value
else:
AristoDbRef.init(MemBackendRef, QidLayoutRef(nil))
AristoDbRef.init(MemBackendRef)
# Start transaction (double frame for testing)
tx = db.txBegin().value.to(AristoDbRef).txBegin().value

View File

@ -328,18 +328,15 @@ proc coreDbMain*(noisy = defined(debug)) =
noisy.persistentSyncPreLoadAndResumeRunner()
when isMainModule:
import
std/times
const
noisy = defined(debug) or true
noisy {.used.} = defined(debug) or true
var
sampleList: seq[CaptureSpecs]
setErrorLevel()
when true and false:
when true: # and false:
false.coreDbMain()
false.persistentSyncPreLoadAndResumeRunner()
# This one uses the readily available dump: `bulkTest0` and some huge replay
# dumps `bulkTest2`, `bulkTest3`, .. from the `nimbus-eth1-blobs` package.
@ -350,6 +347,7 @@ when isMainModule:
sampleList = @[memorySampleDefault]
when true: # and false:
import std/times
var state: (Duration, int)
for n,capture in sampleList:
noisy.profileSection("@sample #" & $n, state):
@ -357,7 +355,7 @@ when isMainModule:
capture = capture,
pruneHistory = true,
#profilingOk = true,
finalDiskCleanUpOk = false,
#finalDiskCleanUpOk = false,
oldLogAlign = true
)