# nimbus-eth1
# Copyright (c) 2023-2025 Status Research & Development GmbH
# Licensed under either of
#  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or
#    http://www.apache.org/licenses/LICENSE-2.0)
#  * MIT license ([LICENSE-MIT](LICENSE-MIT) or
#    http://opensource.org/licenses/MIT)
# at your option. This file may not be copied, modified, or distributed
# except according to those terms.
{.push raises: [].}

import
  eth/common/hashes,
  results,
  "."/[aristo_desc, aristo_get]
const
  HikeAcceptableStopsNotFound* = {
    HikeBranchTailEmpty,
    HikeBranchMissingEdge,
    HikeLeafUnexpected,
    HikeNoLegs}
    ## When trying to find a leaf vertex in the Patricia tree, there are
    ## several conditions where the search stops which do not constitute a
    ## problem with the trie (aka system error.)
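
# A sketch of the intended caller pattern (illustrative only, not part of
# this module's API): a hike that stops with one of the above codes is a
# plain "not found", anything else hints at an inconsistent trie.
#
#   var hike: Hike
#   path.hikeUp(root, db, Opt.none(VertexRef), hike).isOkOr:
#     if error[1] in HikeAcceptableStopsNotFound:
#       return Opt.none(VertexRef)              # benign miss
#     raiseAssert "trie inconsistency: " & $error[1]
#   return Opt.some(hike.legs[^1].wp.vtx)       # last leg holds the leaf
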
# ------------------------------------------------------------------------------
# Private functions
# ------------------------------------------------------------------------------
func getNibblesImpl(hike: Hike; start = 0; maxLen = high(int)): NibblesBuf =
  ## Re-assemble the nibble path along the hike's legs (may also be needed
  ## for a partial rebuild.)
  for n in start ..< min(hike.legs.len, maxLen):
    let leg = hike.legs[n]
    case leg.wp.vtx.vType:
    of Branch:
      result = result & leg.wp.vtx.pfx & NibblesBuf.nibble(leg.nibble.byte)
    of Leaf:
      result = result & leg.wp.vtx.pfx

# ------------------------------------------------------------------------------
# Public functions
# ------------------------------------------------------------------------------
func to*(rc: Result[Hike,(VertexID,AristoError,Hike)]; T: type Hike): T =
  ## Extract the `Hike` from either the ok or the error part of argument `rc`.
  if rc.isOk: rc.value else: rc.error[2]

func to*(hike: Hike; T: type NibblesBuf): T =
  ## Convert back
  hike.getNibblesImpl() & hike.tail

func legsTo*(hike: Hike; T: type NibblesBuf): T =
  ## Convert back
  hike.getNibblesImpl()

func legsTo*(hike: Hike; numLegs: int; T: type NibblesBuf): T =
  ## variant of `legsTo()`
  hike.getNibblesImpl(0, numLegs)
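
# Note (follows directly from the definitions above): for any hike,
#
#   hike.to(NibblesBuf) == hike.legsTo(NibblesBuf) & hike.tail
#
# i.e. `to()` restores the full path while `legsTo()` restores only the
# part that has already been resolved into legs.
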
# --------
proc step*(
    path: NibblesBuf, rvid: RootedVertexID, db: AristoTxRef
): Result[(VertexRef, NibblesBuf, VertexID), AristoError] =
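  ## Take a single step along `path`: fetch the vertex `rvid` from `db` and,
  ## for a `Branch`, resolve the next path nibble into the follow-up vertex
  ## ID. Returns the vertex, the remaining (unconsumed) path and the next
  ## vertex ID (`VertexID(0)` after a `Leaf`.)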
  # Fetch next vertex
  let (vtx, _) = db.getVtxRc(rvid).valueOr:
    if error != GetVtxNotFound:
      return err(error)

    if rvid.root == rvid.vid:
      return err(HikeNoLegs)
    # The vertex ID `vid` was a follow-up of a parent vertex, but there is
    # no child vertex in the database. So `vid` is a dangling link which is
    # allowed only if there is a partial trie (e.g. with `snap` sync.)
    return err(HikeDanglingEdge)

  case vtx.vType:
  of Leaf:
    # This must be the last vertex, so there cannot be any `tail` left.
    if path.len != path.sharedPrefixLen(vtx.pfx):
      return err(HikeLeafUnexpected)

    ok (vtx, NibblesBuf(), VertexID(0))
  of Branch:
    # There must be some more data (aka `tail`) after a `Branch` vertex.
    if path.len <= vtx.pfx.len:
      return err(HikeBranchTailEmpty)

    let
      nibble = path[vtx.pfx.len]
      nextVid = vtx.bVid(nibble)

    if not nextVid.isValid:
      return err(HikeBranchMissingEdge)

    ok (vtx, path.slice(vtx.pfx.len + 1), nextVid)

iterator stepUp*(
    path: NibblesBuf;                            # Partial path
    root: VertexID;                              # Start vertex
    db: AristoTxRef;                             # Database
): Result[VertexRef, AristoError] =
  ## For the argument `path`, iterate over the longest possible path in the
  ## argument database `db`.
  var
    path = path
    next = root
    vtx: VertexRef
  block iter:
    while true:
      (vtx, path, next) = step(path, (root, next), db).valueOr:
        yield Result[VertexRef, AristoError].err(error)
        break iter

      yield Result[VertexRef, AristoError].ok(vtx)

      if path.len == 0:
        break
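
# Usage sketch (illustrative only): walk a path vertex by vertex, stopping
# at the first error yielded by the iterator:
#
#   for rc in path.stepUp(root, db):
#     let vtx = rc.valueOr:
#       break                     # e.g. dangling edge in a partial trie
#     discard vtx                 # inspect the vertex here
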
proc hikeUp*(
    path: NibblesBuf;                            # Partial path
    root: VertexID;                              # Start vertex
    db: AristoTxRef;                             # Database
    leaf: Opt[VertexRef];
    hike: var Hike;
): Result[void,(VertexID,AristoError)] =
  ## For the argument `path`, find and return the longest possible path in the
  ## argument database `db` - this may result in a partial match in which case
  ## `hike.tail` will be non-empty.
  ##
  ## If a leaf is given, it gets used for the "last" leg of the hike.
  hike.root = root
  hike.tail = path
  hike.legs.setLen(0)

  if not root.isValid:
    return err((VertexID(0),HikeRootMissing))
  if path.len == 0:
    return err((VertexID(0),HikeEmptyPath))

  var vid = root
  while true:
    if leaf.isSome() and leaf[].isValid and path == leaf[].pfx:
      hike.legs.add Leg(wp: VidVtxPair(vid: vid, vtx: leaf[]), nibble: -1)
      reset(hike.tail)
      break

    let (vtx, path, next) = step(hike.tail, (root, vid), db).valueOr:
      return err((vid,error))
    let wp = VidVtxPair(vid: vid, vtx: vtx)
    case vtx.vType
    of Leaf:
      hike.legs.add Leg(wp: wp, nibble: -1)
      hike.tail = path
      break
    of Branch:
      hike.legs.add Leg(wp: wp, nibble: int8 hike.tail[vtx.pfx.len])
      hike.tail = path
      vid = next

  ok()
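
# Usage sketch (illustrative only): resolve a path and inspect the result.
# On `ok()` the final leg holds the matching leaf; on error, the partially
# built hike (legs resolved so far plus a non-empty `hike.tail`) remains
# available for inspection:
#
#   var hike: Hike
#   path.hikeUp(root, db, Opt.none(VertexRef), hike).isOkOr:
#     echo "stopped at vertex ", error[0], " with ", error[1]
#     return
#   let leafVtx = hike.legs[^1].wp.vtx
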
proc hikeUp*(
    lty: LeafTie;
    db: AristoTxRef;
    leaf: Opt[VertexRef];
    hike: var Hike
      ): Result[void,(VertexID,AristoError)] =
  ## Variant of `hikeUp()` that takes the path and root from a `LeafTie`.
  lty.path.to(NibblesBuf).hikeUp(lty.root, db, leaf, hike)

proc hikeUp*(
    path: openArray[byte];
    root: VertexID;
    db: AristoTxRef;
    leaf: Opt[VertexRef];
    hike: var Hike
      ): Result[void,(VertexID,AristoError)] =
  ## Variant of `hikeUp()` that takes the path as raw bytes.
  NibblesBuf.fromBytes(path).hikeUp(root, db, leaf, hike)

proc hikeUp*(
    path: Hash32;
    root: VertexID;
    db: AristoTxRef;
    leaf: Opt[VertexRef];
    hike: var Hike
      ): Result[void,(VertexID,AristoError)] =
  ## Variant of `hikeUp()` that takes the path as a `Hash32`.
  NibblesBuf.fromBytes(path.data).hikeUp(root, db, leaf, hike)
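
when isMainModule:
  # A minimal, illustrative sketch of how the `Hash32` variant above might be
  # driven. It assumes a transaction frame `txFrame` has been obtained
  # elsewhere (e.g. from an `AristoDbRef`); that setup is not shown here, and
  # the proc name `demoHike` is hypothetical.
  proc demoHike(txFrame: AristoTxRef; root: VertexID; path: Hash32) =
    var hike: Hike
    # Walk from `root` towards the leaf addressed by `path`; on success the
    # traversed legs are collected in `hike`.
    let rc = path.hikeUp(root, txFrame, Opt.none(VertexRef), hike)
    if rc.isOk:
      echo "hike completed with ", hike.legs.len, " legs"
    else:
      let (vid, error) = rc.error
      echo "hike stopped: ", error, " at vertex ", uint64(vid)
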
# ------------------------------------------------------------------------------
# End
# ------------------------------------------------------------------------------