* Aristo: Re-phrase `LayerDelta` and `LayerFinal` as object references
why:
Avoids copying in some cases
* Fix copyright header
* Aristo: Verify `leafTie.root` function argument for `merge()` proc
why:
Zero root will lead to inconsistent DB entry
* Aristo: Update failure condition for hash labels compiler `hashify()`
why:
Node need not be rejected as long as links are on the schedule. In
that case, `redo[]` is to become `wff.base[]` at a later stage.
This amends an earlier fix, part of #1952, by also testing against
the target nodes of the `wff.base[]` sets.
* Aristo: Add storage root glue record to `hashify()` schedule
why:
An account leaf node might refer to a non-resolvable storage root ID.
Storage root node chains will end up at the storage root. So the link
`storage-root->account-leaf` needs an extra item in the schedule.
* Aristo: fix error code returned by `fetchPayload()`
details:
Final error code is implied by the error code from the `hikeUp()`
function.
* CoreDb: Discard `createOk` argument in API `getRoot()` function
why:
Not needed for the legacy DB. For the `Aristo` DB, a lazy approach is
implemented where a storage root node is created on-the-fly.
* CoreDb: Prevent `$$` logging in some cases
why:
Logging the function `$$` is not useful when it is used internally,
i.e. for retrieving an error text for logging.
* CoreDb: Add `tryHashFn()` to API for pretty printing
why:
Pretty printing must not change the hashification status of the
`Aristo` DB. So there is an independent API wrapper for getting the
node hash which never updates the hashes.
* CoreDb: Discard `update` argument in API `hash()` function
why:
When calling the API function `hash()`, the latest state is always
wanted. For a version that uses the current state as-is without checking,
the function `tryHash()` was added to the backend.
* CoreDb: Update opaque vertex ID objects for the `Aristo` backend
why:
For `Aristo`, vID objects encapsulate a numeric `VertexID`
referencing a vertex (rather than a node hash as used on the
legacy backend.) For storage sub-tries, there might be no initial
vertex known when the descriptor is created. So opaque vertex ID
objects are supported without a valid `VertexID` which will be
initialised on-the-fly when the first item is merged.
* CoreDb: Add pretty printer for opaque vertex ID objects
* Cosmetics, printing profiling data
* CoreDb: Fix segfault in `Aristo` backend when creating MPT descriptor
why:
Missing initialisation error
* CoreDb: Allow MPT to inherit shared context on `Aristo` backend
why:
Creates descriptors with different storage roots for the same
shared `Aristo` DB descriptor.
* Cosmetics, update diagnostic message items for `Aristo` backend
* Fix Copyright year
* Fix issue causing vmflags to be reset during a call to processBlocks and enable witness generation in the test_blockchain_json test.
* Fix copyright notice on updated files.
* Initial implementation of eth_getProof endpoint.
* Implemented generation of account and storage proofs.
* Minor fixes and additional tests.
* Refactor getBranch code into a separate file.
* Improve usage of test data.
* Fix copyright year.
* Return zero hash for codeHash and storageHash if account doesn't exist.
* Update copyright notice and moved trie key hashing to inside getBranch proc.
* Update KVT layers abstraction
details:
modelled after Aristo layers
* Simplified KVT database iterators (removed item counters)
why:
Not needed for production functions
* Simplify KVT merge function `layersCc()`
* Simplified Aristo database iterators (removed item counters)
why:
Not needed for production functions
* Update failure condition for hash labels compiler `hashify()`
why:
Node need not be rejected as long as links are on the schedule. In
that case, `redo[]` is to become `wff.base[]` at a later stage.
* Update merging layers and label update functions
why:
+ Merging a stack of layers with `layersCc()` could be simplified
+ Merging layers will optimise the reverse `kMap[]` table maps
`pAmk: label->{vid, ..}` by deleting empty mappings `label->{}` where
they are redundant.
+ Updated `layersPutLabel()` for optimising `pAmk[]` tables
* Fix kvt headers
* Provide differential layers for KVT transaction stack
why:
Significant performance improvement
* Provide abstraction layer for database top cache layer
why:
This will eventually be implemented as differential database layers
or transaction layers. The latter is needed to improve performance.
behavioural changes:
Zero vertex and keys (i.e. delete requests) are not optimised out
until the last layer is written to the database.
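A minimal sketch of the layered-lookup idea above, assuming a plain stack of key-value tables where an empty value acts as the delete marker; names and types are illustrative only and not the actual `Aristo`/`Kvt` declarations:

```nim
# Illustrative only: deletions are recorded as tombstones so they are not
# optimised out before the last layer is written to the database.
import std/[options, tables]

type Layer = Table[uint64, seq[byte]]   # key -> payload; empty seq = deleted

proc lookup(layers: seq[Layer], key: uint64): Option[seq[byte]] =
  ## Search from the top (most recent) layer downwards; a delete marker on
  ## the way hides any older value stored on a lower layer or the backend.
  for n in countdown(layers.high, 0):
    if key in layers[n]:
      let val = layers[n][key]
      return (if val.len == 0: none(seq[byte]) else: some(val))
  none(seq[byte])
```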
* Provide differential layers for Aristo transaction stack
why:
Significant performance improvement
* Activate `LedgerRef` wrapper for `AccountsCache`
details:
`accounts_cache.nim` methods are indirectly processed by the wrapper
methods from `ledger.nim`.
This works for all sources except `test_state_db.nim` where the source
`accounts_cache.nim` is included (rather than imported) in order to
access objects private to that source.
* Provide facility to switch to a preselected `LedgerRef` type
details:
Can be set as suggestion when initialising `CommonRef`
* Update `CoreDb` test suite for better time tracking
details:
+ Allow time logging by pre-defined block intervals
+ Print `CoreDb`/`Ledger` profiling results (if enabled)
* Explicitly use shared `Kvt` table on `Ledger` and `Clique` lookup.
why:
Speeds up lookup time with `Aristo` backend. For writing `Clique` data,
the `Companion` model allows writing past the database locked by
EVM transactions.
* Implement `CoreDb` profiling with API tracking
why:
Chasing time spent per API proc ...
* Implement `Ledger` profiling with API tracking
why:
Chasing time spent per API proc ...
* Always hashify when committing or storing
why:
A dirty cache makes no sense when committing
* Make sure that a zero key is created when adding/updating vertices
why:
This is an error fix mainly for edge cases. A typical error was
that the root key got deleted when there were only a few vertices
left on the DB.
* Need all created and changed vertices zero-keyed on the cache
why:
A zero key (i.e. empty Merkle hash) indicates that a vertex key
needs to be updated. This would not be needed immediately after
a merge as there is an actual leaf path on the cache layer. But
after subsequent merge and delete operations this information
might get blurred.
* Re-org hashing algorithm
why:
Apart from errors, the previous implementation was too slow for
two reasons:
+ some control hashes were calculated for debugging (now all
verification is done in `aristo_check` module)
+ the leaf paths stored on the cache are used to build the
labelling (aka hashing) schedule; these paths were accumulated
over successive hash sessions although it is clear that all
keys had already been generated
* Register paths for added leafs because of trie re-balancing
why:
While the payload would not change, the prefix in the leaf vertex
would. So it needs to be flagged for hash recompilation by the
`hashify()` module.
also:
Make sure that `Hike` paths which might have vertex links into the
backend filter are replaced by vertex copies before manipulating.
Otherwise the vertices on the immutable filter might be inadvertently
changed.
* Also check for paths where the leaf vertex is on the backend, already
why:
A path can have some vertices on the top layer cache with the
`Leaf` vertex on the backend.
* Re-define a void `HashLabel` type.
why:
A `HashLabel` type is a pair `(root-vertex-ID, Keccak-hash)`. Previously,
a valid `HashLabel` consisted of a non-empty hash and a non-zero vertex
ID. This definition leads to a non-unique representation of a void
`HashLabel`, with either the root ID or the hash being void. This has
been changed so that a `HashLabel` is void exactly if the hash entry is void.
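A hedged sketch of the uniqueness rule above; the field and type names are illustrative stand-ins, not the actual `Aristo` declarations:

```nim
# Illustrative only: a void hash entry alone decides whether the label is
# void, regardless of what the root vertex ID field contains.
type
  VertexID = distinct uint64
  HashLabel = object
    root: VertexID          # root vertex ID of the sub-trie
    key: array[32, byte]    # Keccak hash; all zero bytes mean "void"

func isValid(lbl: HashLabel): bool =
  ## Unique void representation: valid exactly if the hash entry is non-void.
  lbl.key != default(array[32, byte])
```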
* Update consistency checkers
* Re-org `hashify()` procedure
why:
Syncing against the block chain showed serious deficiencies which produced
wrong hashes or simply bailed out with an error.
So all fringe cases (mainly due to deleted entries) could be integrated
into the labelling schedule rather than being handled separately.
* Fix copyright year
* Show elapsed times with enabled `CoreDb` API tracking
* Show elapsed times with enabled `LedgerRef` API tracking
* Reorg `CoreDb` auto destructors for `Aristo` DB
why:
While `Aristo` supports some parallelism for concurrent database access,
this comes with a price of management overhead. With a naive approach,
the auto-destructor will slow down execution because the ledger and
evm treat the database in a shared mode where a DB descriptor is just
created and thrown away shortly after.
This is reflected in the `CoreDb` abstraction layer above `Aristo`/`Kvt`
where a few `Shared` type descriptors are cached and a shared reference
is returned rather than a disposable new object.
* For `CoreDb` support transaction level tracking
details:
This is mainly an extra for the legacy DB as `Aristo` and `Kvt` support
this already.
Also return an error on the legacy DB backend when `persistent()` is
called while there are transactions pending (the `persistent()` call
does nothing otherwise on the legacy backend.)
* Clear compiler warnings (remove unused variables etc.)
* Using different `tmp` directories for `Kvt` and `Aristo`
why:
Closing one database would leave the other set of directories
incomplete.
* Code cosmetics, silence compiler
* Fix typo `EMPTY_ROOT_HASH` vs. `EMPTY_CODE_HASH`
* Fix copyright years
* Split off `ReadOnlyStateDB` from `AccountStateDB` from `state_db.nim`
why:
Apart from testing, applications use `ReadOnlyStateDB` as an easy
way to access the accounts ledger. This is well supported by the
`Aristo` db, but writable mode is only partially supported.
The writable `AccountStateDB` object for modifying accounts is not
used by production code.
So, for legacy and testing apps, the full support of the previous
`AccountStateDB` is now enabled by `import db/state_db/read_write`
and the `import db/state_db` provides read-only mode.
* Encapsulate `AccountStateDB` as `GenesisLedgerRef` for genesis creation
why:
`AccountStateDB` has poor support for `Aristo` and is not widely used
in favour of `AccountsLedger` (which will be abstracted as `ledger`.)
Currently, using other than the `AccountStateDB` ledgers within the
`GenesisLedgerRef` wrapper is experimental and test only. Eventually,
the wrapper should disappear so that the `Ledger` object (which
encapsulates `AccountsCache` and `AccountsLedger`) will prevail.
* For the `Ledger`, provide access to raw accounts `MPT`
why:
This gives access to the `CoreDbMptRef` descriptor from the `CoreDb` (which is
the legacy version of `CoreDxMptRef`.) For the new `ledger` API, the
accounts are based on the `CoreDxMAccRef` descriptor which uses a
particular sub-system for accounts while legacy applications use the
`CoreDbPhkRef` equivalent of the `SecureHexaryTrie`.
The only place where this feature will currently be used is the
`genesis.nim` source file.
* Fix `Aristo` bugs, missing boundary checks, typos, etc.
* Verify root vertex in `MPT` and account constructors
why:
Was missing so far, in particular the accounts constructor must
verify `VertexID(1)`.
* Fix include file
* Disable `TransactionID` related functions from `state_db.nim`
why:
Functions `getCommittedStorage()` and `updateOriginalRoot()` from
the `state_db` module are nowhere used. The emulation of a legacy
`TransactionID` type functionality is administratively expensive to
provide by `Aristo` (the legacy DB version is only partially
implemented, anyway).
As there is no other place where `TransactionID`s are used, they will
not be provided by the `Aristo` variant of the `CoreDb`. For the
legacy DB API, nothing will change.
* Fix copyright headers in source code
* Get rid of compiler warning
* Update Aristo code, remove unused `merge()` variant, export `hashify()`
why:
Adapt to upcoming `CoreDb` wrapper
* Remove synced tx feature from `Aristo`
why:
+ This feature allowed synchronising transaction methods like begin,
commit, and rollback for a group of descriptors.
+ The feature is over engineered and not needed for `CoreDb`, neither
is it complete (some convergence features missing.)
* Add debugging helpers to `Kvt`
also:
Update database iterator, add count variable yield argument similar
to `Aristo`.
* Provide optional destructors for `CoreDb` API
why:
For the upcoming Aristo wrapper, this allows controlling when certain
smart destruction and update can take place. The auto destructor works
fine in general when the storage/cache strategy is known and acceptable
when creating descriptors.
* Add update option for `CoreDb` API function `hash()`
why:
The hash function is typically used to get the state root of the MPT.
Due to lazy hashing, this might be not available on the `Aristo` DB.
So the `update` option asks for re-hashing the current state changes
if needed.
* Update API tracking log mode: `info` => `debug`
* Use shared `Kvt` descriptor in new Ledger API
why:
No need to create a new descriptor all the time
* Fix debug noise in `hashify()` for perfectly normal situation
why:
Was previously considered a fixable error
* Fix test sample file names
why:
The larger test file `goerli68161.txt.gz` is already in the local
archive. So there is no need to use the smaller one from the external
repo.
* Activate `accounts_cache` module from `db/ledger`
why:
A copy of the original `accounts_cache.nim` source to be integrated
into the `Ledger` module wrapper which allows switching between
different `accounts_cache` implementations under the same API.
details:
At a later state, the `db/accounts_cache.nim` wrapper will be
removed so that there is only one access to that module via
`db/ledger/accounts_cache.nim`.
* Fix copyright headers in source code
* Aristo: Provide key-value list signature calculator
detail:
Simple wrappers around `Aristo` core functionality
* Update new API for `CoreDb`
details:
+ Renamed new API functions `contains()` => `hasKey()` or `hasPath()`
which disables the `in` operator on non-boolean `contains()` functions
+ The functions `get()` and `fetch()` always return a not-found error if
there is no item available. The new functions `getOrEmpty()` and
`mergeOrEmpty()` return an empty `Blob` if there is no such key
found.
* Rewrite `core_apps.nim` using new API from `CoreDb`
* Use `Aristo` functionality for calculating Merkle signatures
details:
For debugging, the `VerifyAristoForMerkleRootCalc` can be set so
that `Aristo` results will be verified against the legacy versions.
* Provide general interface for Merkle signing key-value tables
details:
Export `Aristo` wrappers
* Activate `CoreDb` tests
why:
Now, API seems to be stable enough for general tests.
* Update `toHex()` usage
why:
Byteutils' `toHex()` is superior to `toSeq.mapIt(it.toHex(2)).join`
* Split `aristo_transcode` => `aristo_serialise` + `aristo_blobify`
why:
+ Different modules for different purposes
+ `aristo_serialise`: RLP encoding/decoding
+ `aristo_blobify`: Aristo database encoding/decoding
* Compacted representation of small nodes' links instead of Keccak hashes
why:
Ethereum MPTs use Keccak hashes as node links if the size of an RLP
encoded node is at least 32 bytes. Otherwise, the RLP encoded node
value is used as a pseudo node link (rather than a hash.) Such a node
is not stored on the key-value database. Rather, the RLP encoded node value
is stored in the parent node in place of a node link. Only for
the root hash, the top level node is always referred to by the hash.
This feature needed an abstraction of the `HashKey` object which is now
either a hash or a blob of length at most 31 bytes. This leaves two
ways of representing an empty/void `HashKey` type, either as an empty
blob of zero length, or the hash of an empty blob.
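A short sketch of the link rule described above, assuming nimcrypto's `keccak256` for hashing; `childLink` and its argument name are illustrative, not the actual `HashKey` implementation:

```nim
# Illustrative only: how a parent refers to a child node in an Ethereum MPT.
import nimcrypto   # assumption: Keccak-256 as provided by nimcrypto

proc childLink(rlpNode: seq[byte]): seq[byte] =
  ## A child whose RLP encoding is shorter than 32 bytes is embedded verbatim
  ## (hence at most 31 bytes), otherwise its 32 byte Keccak hash is the link.
  if rlpNode.len < 32:
    rlpNode
  else:
    @(keccak256.digest(rlpNode).data)
```

The root is the exception mentioned above: it is always referred to by its hash, never embedded.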
* Update `CoreDb` interface (mainly reducing logger noise)
* Fix copyright years (to make `Lint` happy)
* Aristo: Single `FetchPathNotFound` error in `fetchXxx()` and `hasPath()`
why:
Missing path hike returns too many detailed reasons why it failed
which becomes cumbersome to handle.
also:
Renamed `contains()` => `hasPath()` which disables the `in` operator on
non-boolean `contains()` functions
* Kvt: Renamed `contains()` => `hasKey()`
why:
which disables the `in` operator on non-boolean `contains()` functions
* Aristo: Generalising `HashID` by variable length `PathID`
why:
There are cases when the `Aristo` database is to be used with
shorter than 64 nibbles keys when handling transactions indexes
with sequence IDs.
caveat:
This patch only works reliably for full length `PathID` values. Tests
for shorter `PathID` values are currently missing.
* Make sure that storage tries are not pruned (by default) on the new Ledger API
why:
Pruning might remove entries from storage tries that are still wanted,
ending up with an unstable database and leading to crashes.
* Implement `CoreDb` and `LedgerRef` API tracing
details:
+ Locally enabled at compile time via constants `ProvideCoreDbLegacyAPI`
and `EnableApiTracking` in either `base.nim` source
+ If enabled it can be selectively turned on/off via public switches in
the `CoreDb` descriptor.
* Allow suppressing opportunistic `ifNecessaryGetXxx()` functions
why:
Better troubleshooting when the system crashes (assertions will then
most probably happen outside an `async` function.)
* Provide TDD/debug facility for inspecting `persistBlocks()` working
detail:
+ Make sure that the last block of a test sample is the first batch
item in `persistBlocks()`.
+ Additionally, allow `AccountsCache` API tracing by setting the flag
`extraTraceMessages = true` in the file `accounts_cache.nim`
* Overload AccountsCache by abstraction wrapper
details:
Can facilitate CoreDb API switch, details in `ledger/README.md`.
details:
Persistent pruning would not restore the `emptyRlp` value for the
root node when the database becomes empty. This leads to an
assertion exception next time the DB is accessed.
As most unit tests run on the memory DB, this case slipped through
unnoticed for a while (see also issue #9.)
* CoreDB: Re-org API
details:
Legacy API internally uses vertex ID for root node abstraction
* Cosmetics: Move some unit test helpers to common sub-directory
* Extract constant from `accounts_cache.nim` => `constants.nim`
* Fix tracer methods
why:
Logger dump data were wrongly dumped from the production database. This
caused an assert exception when iterating over the persistent database
(instead of the memory logger.) This event in turn was enabled after
fixing another inconsistency which just set up an empty iterator. Unit
tests failed to detect that.
* Aristo: remove obsolete functions
* Aristo: Fix error code for non-available hash keys
why:
Must not return `not-found` when the key is not available (i.e. the
current changes were not hashified, yet.)
* CoreDB: Provide TDD and test framework
* Split `core_db/base.nim` into several sources
* Rename `core_db/legacy.nim` => `core_db/legacy_db.nim`
* Update `CoreDb` API, dual methods returning `Result[]` or plain value
detail:
Plain value methods implement the legacy API; they raise a `Defect` on error results
* Redesign `CoreDB` direct backend access
why:
Made the `backend` directive integral part of the API
* Discontinue providing unused or otherwise available functions
details:
+ setTransactionID() removed, not used and not easily replicable in Aristo
+ maybeGet() removed, available via direct backend access
+ newPhk() removed, never used & was experimental anyway
* Update/reorg backend API
why:
+ Added error print function `$$()`
+ General descriptor completion (and optional validation) via `bless()`
* Update `Aristo`/`Kvt` exception handling
why:
Avoid `CatchableError` exceptions, rather pass them as error code where
appropriate.
* More `CoreDB` compliant `Aristo` and `Kvt` methods
details:
+ Providing functions like `contains()`, `getVtxRc()` (returns `Result[]`).
+ Additional error code: `NotImplemented`
* Rewrite/reorg of Aristo DB constructor
why:
Previously used global object `DefaultQidLayoutRef` as default
initialiser. This object was created at compile time which led to
non-gc safe functions.
* Update nimbus/db/core_db/legacy_db.nim
Co-authored-by: Kim De Mey <kim.demey@gmail.com>
* Update nimbus/db/aristo/aristo_transcode.nim
Co-authored-by: Kim De Mey <kim.demey@gmail.com>
* Update nimbus/db/core_db/legacy_db.nim
Co-authored-by: Kim De Mey <kim.demey@gmail.com>
---------
Co-authored-by: Kim De Mey <kim.demey@gmail.com>
* Kvt: Implemented multi-descriptor access on the same backend
why:
This behaviour mirrors the one of Aristo and can be used for
simultaneous transactions on Aristo + Kvt
* Kvt: Update database iterators
why:
Forgot to run on the top layer first
* Kvt: Misc fixes
* Aristo, use `openArray[byte]` rather than `Blob` in prototype
* Aristo, by default hashify right after cloning descriptor
why:
Typically, a completed descriptor is expected after cloning. Hashing
can be suppressed by an argument flag.
* Aristo provides `replicate()` iterator, similar to legacy `replicate()`
* Aristo API fixes and updates
* CoreDB: Rename `legacy_persistent` => `legacy_rocksdb`
why:
More systematic, will be in line with Aristo DB which might have
more than one persistent backend
* CoreDB: Prettify API sources
why:
Better to read and maintain
details:
Annotating with custom pragmas which cleans up the prototypes
* CoreDB: Update MPT/put() prototype allowing `CatchableError`
why:
Will be needed for Aristo API (legacy is OK with `RlpError`)
* Update docu
* Update Aristo/Kvt constructor prototype
why:
Previous version used an `enum` value to indicate what backend is to
be used. This was replaced by using the backend object type.
* Rewrite `hikeUp()` return code into `Result[Hike,(Hike,AristoError)]`
why:
Better code maintenance. Previously, the `Hike` object was returned. It
had an internal error field so partial success was also available on
a failure. This error field has been removed.
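A sketch of the resulting return-type pattern, assuming the `results` package used across the code base; `Hike`, the error symbol and the field are illustrative stand-ins, not the real declarations:

```nim
# Illustrative only: on failure the partially resolved Hike travels inside
# the error value, so the Hike object no longer needs an internal error field.
import results     # assumption: nim-results, as used elsewhere in the code base

type
  AristoError = enum        # stand-in, not the real error enum
    NoError, SomeHikeError
  Hike = object
    nLegs: int              # illustrative field: how far the walk got

proc hikeUpSketch(success: bool): Result[Hike, (Hike, AristoError)] =
  if success:
    return ok(Hike(nLegs: 3))
  err((Hike(nLegs: 1), SomeHikeError))
```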
* Use `openArray[byte]` rather than `Blob` in functions prototypes
* Provide synchronised multi instance transactions
why:
The `CoreDB` object was geared towards the legacy DB which used a single
transaction for the key-value backend DB. Different state roots are
provided by the backend database, so all instances work directly on the
same backend.
Aristo db instances have different in-memory mappings (aka different
state roots) and the transactions are on top of these mappings. So each
instance might run different transactions.
Multi instance transactions are a compromise to converge towards the
legacy behaviour. The synchronised transactions span over all instances
available at the time when the base transaction was opened. Instances
created later are unaffected.
* Provide key-value pair database iterator
why:
Needed in `CoreDB` for `replicate()` emulation
also:
Some update of internal code
* Extend API (i.e. prototype variants)
why:
Needed for `CoreDB` geared towards the legacy backend which has a more
basic API than Aristo.
* Rewrite remaining `AristoError` return code into `Result[void,AristoError]`
why:
Better code maintenance
* Update import sections
* Update Aristo DB paths
why:
More systematic so directory can be shared with other DB types
* More cosmetics
* Update unit tests runners
why:
Proper handling of persistent and mem-only DB. The latter can be
consistently triggered by an empty DB path.
* Reorg of distributed backend access
details:
Now handled via API provided in `aristo_desc`.
* Rename `checkCache()` => `checkTop()`
why:
Better naming for top layer cache checker
also:
Provide cascaded fifos checker
* Provide `eq` directive for finding filter by exact filter ID (think block number)
* Some code beautification (for better code reading)
* State root reposition and reorg
details:
Repositioning is supported by forking a new descriptor. Reorg is then
accomplished by writing this forked state on the backend database.
details:
* Tested features
+ Successively store filters with increasing filter ID (think block number)
+ Cascading through fifos, deeper fifos merge groups of filters
+ Fetch squash merged N top fifos
+ Delete N top fifos, push back merged fifo, continue storing
+ Fifo chain is verified by hashes and filter ID
* Not tested yet
+ Real live scenario (using data dumps)
+ Real filter data (only shallow filters used so far)
* Set scheduler state as part of the backend descriptor
details:
Moved type definitions `QidLayoutRef` and `QidSchedRef` to
`desc_structural.nim` so that it shares the same folder as
`desc_backend.nim`
* Automatic filter queue table initialisation in backend
details:
Scheduler can be tweaked or completely disabled
* Updated backend unit tests
details:
+ some code clean up/beautification, reads better now
+ disabled persistent filters so that there is no automated filter
management which will be implemented next
* Prettify/update unit tests source code
details:
Mostly replacing the `check()` paradigm by `xCheck()`
* Somewhat simplified backend type management
why:
Backend objects are labelled with a `BackendType` symbol where the
`BackendVoid` label is implicitly assumed for a `nil` backend object
reference.
To make it easier, a `kind()` function is used now applicable to
`nil` references as well.
* Fix DB storage layout for filter objects
why:
Need to store the filter ID with the object
* Implement reverse [] index on fifo
why:
An integer index argument on `[]` retrieves the QueueID (label) of the
fifo item while a QueueID argument on `[]` retrieves the index (so
it is inverse to the former variant).
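An illustrative sketch of the two `[]` variants described above, modelling the fifo as a flat sequence of `QueueID` labels (which is not the real cascaded layout):

```nim
# Illustrative only: integer index -> QueueID label, and its inverse.
type
  QueueID = distinct uint64
  DbFifo = object
    q: seq[QueueID]            # fifo items, oldest first (assumption)

proc `==`(a, b: QueueID): bool {.borrow.}

proc `[]`(fifo: DbFifo, inx: int): QueueID =
  ## An integer index retrieves the QueueID (label) of the fifo item.
  fifo.q[inx]

proc `[]`(fifo: DbFifo, qid: QueueID): int =
  ## A QueueID retrieves the index -- the inverse of the variant above.
  for n, label in fifo.q:
    if label == qid:
      return n
  -1
```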
* Provide iterator over filters as fifo
why:
This iterator goes along the cascaded fifo structure (i.e. in
historical order)
* Add backwards index `[]` operator into fifo
also:
Need another maintenance instruction: The last overflow queue must
irrevocably delete some item in order to make space for a new one.
* Add re-org scheduler
details:
Generates instructions how to extract and merge some leading entries
* Add filter ID selector
details:
This allows finding the next filter not newer than a given filter ID
* Message update
* Rename FilterID => QueueID
why:
The current usage does not identify a particular filter but uses it as
a storage tag to manage it on the database (to be organised in a set of
FIFOs or queues.)
* Split `aristo_filter` source into sub-files
why:
Make space for filter management API
* Store filter queue IDs in pairs on the backend
why:
Any pair will describe a FIFO accessed by bottom/top IDs
* Reorg some source file names
why:
The "aristo_" prefix for local/private files is tedious to
use, so it was removed.
* Implement filter slot scheduler
details:
Filters will be stored on the database on cascaded FIFOs. When a FIFO
queue is full, some filter items are bundled together and stored on the
next FIFO.
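A toy sketch of the cascading idea, with integers standing in for filters and summation standing in for the real filter merge; all parameters are illustrative only:

```nim
# Illustrative only: when a fifo overflows, a batch of its oldest items is
# squashed into a single item and pushed onto the next (deeper) fifo.
type Fifos = seq[seq[int]]          # fifos[0] is the shallowest queue

proc push(fifos: var Fifos, item: int; maxLen = 4, batch = 2) =
  fifos[0].add item
  for n in 0 ..< fifos.len:
    if maxLen < fifos[n].len and n + 1 < fifos.len:
      var merged = 0                # stand-in for merging a group of filters
      for x in fifos[n][0 ..< batch]:
        merged += x
      fifos[n] = fifos[n][batch .. ^1]
      fifos[n + 1].add merged
```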
* Remove unused unit test sources
* Redefine and document serialised data records for Aristo backend
why:
Unique record types determined by marker byte, i.e. the last byte of a
serialisation record. This just needed some tweaking after adding new
record types.
* Removed dedicated transcoder tests
why:
will implicitly be provided by other tests:
+ encode/write -> hashify -> test_tx
+ decode/read -> merge raw nodes -> test_tx
+ de/blobify -> backend operations, test_tx, test_backend, test_filter
* Clarify how the vertex ID generator state is accessed from the backend
why:
This state is a list of unused vertex IDs. It was just stored somewhere
on the backend, the details of which were exposed when iterating over some
sub-table(s).
As there will be more such single information records, an admin
sub-table has been defined (formerly the ID generator table) with dedicated
access keys and type. Also, the iterator over the single ID generator
state item has been removed. It must be accessed via the `get()`
interface.
* Remove trailing space from file name
why:
fixes windows bail out
* Remove concept of empty/blind filters
why:
Not needed. A non-existent filter is coded as a nil reference.
* Slightly generalised backend iterators
why:
* VertexID as key for the ID generator state makes no sense
* there will be more tables addressed by non-VertexID keys
* Store serialised/blobified vertices on memory backend
why:
This is more in line with the RocksDB backend so more appropriate
for testing when comparing behaviour. For a speedy memory database,
a backend-less variant should be used.
* Drop the `Aristo` prefix from names `AristoLayerRef`, etc.
* Suppress compiler warning
why:
duplicate imports
* Add filter serialisation transcoder
why:
Will be used as storage format
* Fix hashing algorithm
why:
Particular case where a sub-tree is on the backend, linked by an
Extension vertex to the top level.
* Update backend verification to report `dirty` top layer
* Implement distributed merge of backend filters
* Implement distributed backend access management
details:
Implemented and tested as described in chapter 5 of the `README.md`
file.
* Renamed type `NoneBackendRef` => `VoidBackendRef`
* Clarify names: `BE=filter+backend` and `UBE=backend (unfiltered)`
why:
Most functions used full names as `getVtxUnfilteredBackend()` or
`getKeyBackend()`. After defining abbreviations (and its meaning) it
seems easier to use `getVtxUBE()` and `getKeyBE()`.
* Integrate `hashify()` process into transaction logic
why:
Is now transparent unless explicitly controlled.
details:
Cache changes imply setting a `dirty` flag which in turn triggers
`hashify()` processing in transaction and `pack()` directives.
* Removed `aristo_tx.exec()` directive
why:
Inconsistent implementation, functionality will be provided with a
different paradigm.
* Provide deep copy for each transaction layer
why:
Localising changes. Selective deep copy was just overlooked.
* Generalise vertex ID generator state reorg function `vidReorg()`
why:
makes it somewhat easier to handle when saving layers.
* Provide dummy back end descriptor `NoneBackendRef`
* Optional read-only filter between backend and transaction cache
why:
Some staging area for accumulating changes to the backend DB. This
will eventually be an access layer for emulating a backend with
multiple/historic state roots.
* Re-factor `persistent()` with filter between backend/tx-cache => `stow()`
why:
The filter provides an abstraction from the physically stored data on
disk. So, there can be several MPT instances using the same disk data
with different state roots. Of course, all the MPT instances should
not differ too much for practical reasons :).
TODO:
Filter administration tools need to be provided.
* Better error handling
why:
Bail out on some error as early as possible before any changes.
* Implement `fetch()` as opposite of `merge()`
rationale:
In the `Aristo` realm, the actions named `fetch()` and `merge()` indicate
leaf value related actions on the MPT, while actions `get()` and `put()`
handle vertex or hash key related operations that constitute the MPT.
* Re-factor `merge()` prototypes
why:
The most used variant of `merge()` should have the simplest prototype.
* Persistent DB constructor needs to import `aristo/aristo_init/persistent`
why:
Most applications use memory DB anyway. This avoids linking `-lrocksdb`
or any other back end libraries by default.
* Re-factor transaction module
why:
Got the paradigm wrong. The transaction descriptor did replace the
database one but should be handled separately.
* Nimbus folder environment update
details:
* Integrated `CoreDbRef` for the sources in the `nimbus` sub-folder.
* The `nimbus` program does not compile yet as it needs the updates
in the parallel `stateless` sub-folder.
* Stateless environment update
details:
* Integrated `CoreDbRef` for the sources in the `stateless` sub-folder.
* The `nimbus` program compiles now.
* Premix environment update
details:
* Integrated `CoreDbRef` for the sources in the `premix` sub-folder.
* Fluffy environment update
details:
* Integrated `CoreDbRef` for the sources in the `fluffy` sub-folder.
* Tools environment update
details:
* Integrated `CoreDbRef` for the sources in the `tools` sub-folder.
* Nodocker environment update
details:
* Integrated `CoreDbRef` for the sources in the
`hive_integration/nodocker` sub-folder.
* Tests environment update
details:
* Integrated `CoreDbRef` for the sources in the `tests` sub-folder.
* The unit tests compile and run cleanly now.
* Generalise `CoreDbRef` to any `select_backend` supported database
why:
Generalisation was just missed due to overcoming some compiler oddity
which was tied to rocksdb for testing.
* Suppress compiler warning for `newChainDB()`
why:
Warning was added to this function which must be wrapped so that
any `CatchableError` is re-raised as `Defect`.
* Split off persistent `CoreDbRef` constructor into separate file
why:
This allows to compile a memory only database version without linking
the backend library.
* Use memory `CoreDbRef` database by default
detail:
Persistent DB constructor needs to import `db/core_db/persistent`
why:
Most tests use memory DB anyway. This avoids linking `-lrocksdb` or
any other backend by default.
* fix `toLegacyBackend()` availability check
why:
got garbled after memory/persistent split.
* Clarify raw access to MPT for snap sync handler
why:
Logically, `kvt` is not the raw access for the hexary trie (although
this holds for the legacy database)
* Remove 32bit os support from `custom_network` unit test
also:
* Fix compilation annoyance #1648
* Fix unit test on Kiln (changed `merge` logic?)
* Hide unused sources that do not compile
why:
* Get them out of the way before major update
* Import and function prototype mismatch -- maybe some changes got out
of scope.
* Re-implemented `db_chain` as `core_db`
why:
Hiding `TrieDatabaseRef` and `HexaryTrie` by default allows replacing
the current db wrapper by some other one, e.g. Aristo
* Support compiler exception warnings for CoreDbRef base methods.
* Allow `pairs()` iterator on all memory based key-value tables
why:
Previously only available for capture recorder.
* Backport `chain_db.nim` changes into its re-implementation `core_apps.nim`
* Fix exception annotation
On Windows, using "localhost" for the RPC test is very slow:
both pyspec_sim and engine_sim need more than one hour,
while on Linux and macOS they take only a few minutes.
* Misc fixes
detail:
* Fix de-serialisation for account leafs
* Update node recovery from unit tests
* Remove `LegacyAccount` from `PayloadRef` object
why:
Legacy accounts use a hash key as storage root which is detrimental
to the working of the Aristo database which uses a vertex ID.
* Dissolve `hashify_helper` into `aristo_utils` and `aristo_transcode`
why:
Functions are of general interest so they should live in first level
code files.
* Added left/right iterators over leaf nodes
* Some helper/wrapper functions that might be useful
why:
For the main tree with root vertex ID 1, the leaf nodes hold the
account data. These accounts may link to sub trees the storage root
node ID of which must be registered here. There is no reverse key
lookup on the backend.
note:
These definitions are experimental. Also, there are some tests missing
for validating Payload data conversions.
* Provide transaction based interface for standard operations
* Provide unit tests for new Aristo interface using transactions
details:
These new tests combine and replace several single-purpose tests.
The now unused test sources will be kept for a while to be eventually
removed.
* Slightly tighten some self-check conditions
* Redefined the database descriptor object as reference (to the object)
why:
The upcoming transaction wrapper will work with a database reference
rather than the object itself
* Append state before `save()` to the Aristo descriptor
why:
This state was previously returned by the function. Appending it to
a field of the Aristo descriptor seems easier to handle.
* Fix missing branch checks in transcoder
why:
Symmetry problem. `Blobify()` allowed for encoding degenerate branch
vertices while `Deblobify()` rejected decoding wrongly encoded data.
* Update memory backend so that it rejects storing bogus vertices.
why:
Error behaviour made similar to the rocks DB backend.
* Make sure that leaf vertex IDs are not repurposed
why:
This makes it easier to record leaf node changes
* Update error return code for next()/right() traversal
why:
Returning offending vertex ID (besides error code) helps debugging
* Update Merkle hasher for deleted nodes
why:
Not implemented, yet
also:
Provide cache & backend consistency check functions. This was
partly re-implemented from `hashifyCheck()`
* Simplify some unit tests
* Fix delete function
why:
Was conceptually wrong
Previously, the withdrawal validation was in process_block only,
but the one in persist block, which is also used by the synchronizer,
was not validated properly.
* Added missing deferred cleanup directive to sub-test functions
why:
Rocksdb keeps the files locked for a short while leading to errors. This
was previously solved by using different db sub-directories
* Provide vertex deep-copy function globally.
why:
is just handy
* Avoid unnecessary vertex caching when merging proof nodes
also:
Run all merge tests on the rocksdb backend
Previously, proof node tests were run without backend
* Fix vertex ID generator state handling for rocksdb backend
why:
* Key error in walk iterator
* Needs to be loaded when opening the database
* Use non-zero sub-table prefixes for rocksdb
why:
Handy for debugging
* Fix error code for missing key on rocksdb backend
why:
Previously returned `VOID_HASH_KEY` rather than `GetKeyNotFound`
* Explicitly copy vertex data between internal table and function/result argument
why:
Function argument or return reference may still refer to the same data
object.
* Updated error symbols
why:
Error symbol names for the hike module now start with the prefix `Hike`.
* Write back modified branch node into local top layer cache
why:
With the backend available, the source of the branch node references
might not be the top layer cache. So any change must be explicitly
recorded.
* Generalised Aristo DB constructor for any type of backend
details:
* Records to be deleted are represented as key-void (rather than
key-value) pairs by the put-function arguments
* Allow direct driver access, iterators as example implementation and
for testing.
* Provide backend storage interface
details:
Stores the top layer onto backend tables
* Implemented Rocks DB backend
details:
Transaction based `put()` functionality
Iterators (based on direct RocksDB access)
* Fix include
why:
Eth67 not default yet so that got missed
* Rename `LeafKey` => `LeafTie`
why:
Name is a pen picture of what this object is for. Also, it avoids the
ubiquitous term `key`.
* Provided `getOrVoid()` wrapper for `getOrDefault()`
also:
Provide `isValid()` syntactic sugar for `.isNil.not`, `!= 0` etc.
Reorg descriptor source, split into sub-sources
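A minimal sketch of the two helpers named above, assuming a `std/tables` mapping; the concrete key/value types are chosen for illustration only:

```nim
# Illustrative only: `getOrVoid()` as a thin wrapper around `getOrDefault()`
# and `isValid()` as syntactic sugar for the "non-zero" test.
import std/tables

type VertexID = distinct uint64

proc isValid(vid: VertexID): bool =
  vid.uint64 != 0

proc getOrVoid[K](tab: Table[K, VertexID], key: K): VertexID =
  tab.getOrDefault(key, VertexID(0))
```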
* Bundled `NodeKey` objects with root ID and called it `HashLabel`
why:
`NodeKey` (aka repurposed Hash256) objects are unique only within a
particular sub-trie (e.g. storage slots) which are kept separated
(i.e. non-interleaved) by design. This is not applied to the backend
as the map VertexID->NodeKey labelling the nodes need not be injective.
For the in-memory database (transaction) layers, the injective map
VertexID->(VertexID,NodeKey) is used where the first field of the image
tuple is the root ID of the sub-trie the `NodeKey` object is valid for. So
identical storage tries for different accounts can be represented.
* Exclude some storage tests
why:
These tests running on external dumps slipped through. The particular
dumps were reported earlier as somehow dodgy.
This was changed in `#1457` but having a second look, the change on
hexary_interpolate.nim(350) might be incorrect.
* Redesign `Aristo DB` descriptor for transaction based layers
why:
Previous descriptor layout made it cumbersome to push/pop
database delta layers.
The new architecture keeps each layer with the full delta set
relative to the database backend.
* Keep root ID as part of the `Patricia Trie` leaf path
why:
That way, forests are supported
* Fix missing Merkle key removal in `merge()`
* Accept optional root hash argument in `hashify()`
why:
For importing a full database, there will be no proof data except the
root key. So this can be used to check and set the root key in the
database descriptor.
also:
Associate vertex ID to `hashify()` error return code
* Added Aristo Trie traversal function
why:
* step along leaf vertices in sorted order
* tree/trie consistency checks when debugging
* Enabled storage slots test data for Aristo DB
* Keep vertex ID generator state with each db-layer
why:
The vertex ID generator state is part of the difference to the below
layer
* Move otherwise unused source to test directory
* Add Merkle hash generator
also:
* Verification facility for debugging
* Empty Merkle key hashes encoded as `EMPTY_ROOT_HASH`
details:
1. Merging a leaf vertex merges a `Patricia Trie` path (while
adding/modifying vertices) and adds a leaf node with payload
2. Merging a Merkle node merges a single vertex to the `Patricia Trie`
and registers Merkle hashes
3. Action 2 can be used before action 1 in order to construct a
Merkle proof as required for handling `snap/1` data.
4. Unit tests show that action 3 is benign for now :)
* Unit tests update, code cosmetics
* Fix segfault with zombie handling
why:
In order to save memory, the data records of zombie entries are removed
and only the key (aka peer node) is kept. Consequently, logging these
zombies can only be done by the key.
* Allow to accept V2 payload without `shanghaiTime` set while syncing
why:
Currently, `shanghaiTime` is missing (at least) while snap syncing. So
beacon node headers can be processed regardless. Normal (aka strict)
processing will be automatically restored when leaving snap sync mode.
* Cosmetics, renamed fields (eVtx, bVtx) -> (eVid, bVid)
* Multilayered delta architecture for Aristo DB
details:
Any VertexID or data retrieval needs to go down the rabbit hole and
fetch/get/manipulate the bottom layer -- even without explicit
backend.
* Direct reference to backend from top-level layer
why:
Some services as the vid management needs to be synchronised among all
layers. So access is optimised.
* Experimental MP-trie
why:
Deleting records is infeasible with the current structure
* Added vertex ID recycling management
Todo:
Provide some unit tests
* DB layout update
why:
Main news is the separation of `Merkle` hashes into an extra table.
details:
The code fragments cover conversion between compact MPT records and
Aristo DB records as well as some rudimentary cache handling for
the `Merkle` hashes (i.e. the extra table entries.)
todo:
Add some simple unit test for the descriptor record (currently used
for vertex ID management, only.)
* Updated vertex ID recycling management
details:
added simple unit tests (mainly testing ABI)
* docu update
* Rename `playXXX` => `passXXX`
why:
Better purpose match
* Code massage, log message updates
* Moved `ticker.nim` to `misc` folder to be shared by full and snap sync
why:
Simplifies maintenance
* Move `worker/pivot*` => `worker/pass/pass_snap/*`
why:
better for maintenance
* Moved helper source file => `pass/pass_snap/helper`
* Renamed ComError => GetError, `worker/com/` => `worker/get/`
* Keep ticker enable flag in worker descriptor
why:
This allows passing this flag with the descriptor rather than as an extra
function argument when calling the setup function.
* Extracted setup/release code from `worker.nim` => `pass/pass_init.nim`
* Recreating some of the speculative-execution code.
Not really using it yet. Also there's some new inefficiency in
memory.nim, but it's fixable - just haven't gotten around to it yet.
The big thing introduced here is the idea of "cells" for stack,
memory, and storage values. A cell is basically just a Future (though
there's also the option of making it an Identity - just a simple
distinct wrapper around a value - if you want to turn off the
asynchrony).
* Bumped nim-eth.
* Cleaned up a few comments.
* Bumped nim-secp256k1.
* Oops.
* Fixing a few compiler errors that show up with EVMC enabled.
* Extract RocksDB timing tests from snap unit tests as separate module
why:
Declutter, make space for more snap related unit tests.
* Renamed `undumpNextGroup()` => `undumpBlocks()`
why:
Source file name is called `undump_blocks.nim` which should be sort
of in sync with the method name(s).
* Implement snap/1 server method `getByteCodes()`
* Implement snap/1 client method `getByteCodes()`
* Implement faculty for handling contract code fetching via snap/1
* Provide persistent storage for contract code records
* Implement contract code snap sync fetch & store
* Code massage, cosmetics
* Unit tests for verifying snap sync snapshot dump
details:
Use `undump_kvp.dumpAllDb()` to dump any database.
* Update sync scheduler pool mode
why:
The pool mode allows looping over active peers one after another. This
is ideal for soft re-starting peers. As this is a two tier experience
(start/stop, setup/release) the loop must be run twice. This is
controlled by a more rigid re-definition of how to use the `poolMode`
flag.
* Mitigate RLP serialiser deficiency
why:
Currently, serialising the `BlockBody` is not convertible and needs
to be checked in the `eth` module. For now, a local fix for the
wire protocol applies. Unit tests will stay (even after this local
solution has been removed.)
* Code cosmetics and massage
details:
Main part is `types.toStr()` as a unified function for logging block
numbers.
* Allow to use a logical genesis replacement (start of history)
why:
Snap sync will set up an arbitrary pivot at a block number different
from zero. In fact, the higher the block number the better.
details:
A non-genesis start of history will currently only affect the score
values which were derived from the difficulty.
* Provide function to store the snap pivot block header in chain db
why:
Together with the start of history facility, this allows proceeding
with full syncing once snap has finished.
details:
Snap db storage was switched from sub-tables to the flat chain db.
* Provide database completeness and sanity checker
details:
For debugging on smaller databases, only
* Implement snap -> full sync switch
* Somewhat tighten error handling
why:
Zombie state is invoked when the current peer turns out to be useless
for further communication. While there is a chance to further talk
to a peer about another topic (aka healing) after some protocol failure,
it makes no sense to do so after a network problem.
The latter state is explained by the `peerDegraded` flag that goes
together with the `zombie` state flag. A degraded peer is dropped
immediately.
* Remove `--sync-mode=snapCtx` option, always start snap in recovery mode
why:
No need for a snap sync option without recovery mode, can be achieved
by deleting the database.
* Code cosmetics, typos, prettify logging, debugging helper, etc.
* Split off snap sync sub-mode handler into separate modules
details:
The original `worker.nim` source has become a multiplexer for several
snap sync sub-modes `full` and `snap`. The source modules of the
incarnations of a particular sync sub-mode are placed into the
`worker/play` directory.
* Update ticker for snap and full sync logging
* Update nearby/neighbour leaf nodes finder
details:
Update return error codes so that in the case that there is no more
leaf node beyond the search direction, the particular error code
`NearbyBeyondRange` is returned.
* Compile largest interval range containing only this leaf point
why:
Will be needed in snap sync for adding single leaf nodes to the range
of already allocated nodes.
* Reorg `hexary_inspect.nim`
why:
Merged the nodes collecting algorithm for persistent and in-memory
into a single generic function `hexary_inspect.inspectTrieImpl()`
* Update fetching accounts range failure handling in `rangeFetchAccounts()`
why:
Rejected response leads now to fetching for another account range. Only
repeated failures (or all done) terminate the algorithm.
* Update accounts healing
why:
+ Fixed looping over a bogus node response that could not be inserted into
the database. As a solution, these nodes are locally registered and not
asked for in this download cycle.
+ Sub-optimal handling of interval range for a healed account leaf node.
Now the maximal range interval containing this node is registered as
processed which leads to de-fragmentation of the processed (and
unprocessed) range list(s). So *gap* ranges which are known not to
cover any account leaf node are not asked for on the network, anymore.
+ Sporadically remove empty interval ranges (if any)
* Update logging, better variable names
* Redesign snap1 message GetTrieNodes argument prototypes
why:
A list of sub-objects `seq[SnapTriePath]` is more intuitive to work with
than an opaque definition `seq[seq[Blob]]` because the inner
`SnapTriePath` object has a dedicated inner structure (for how to
interpret `seq[Blob]`.)
* Collect some public constants into `constants.nim` file
* Reorg `hexary_paths.nim`
why:
+ Collecting nodes following a partial path properly ending at an
extension node failed to collect this last node.
+ Merged the nodes collecting algorithm for persistent and in-memory
into a single generic function `hexary_paths.rootPathExtend()`
info:
Extracted common tasks to `hexary_nodes_helper.nim`
* Implement `StorageRanges` message handler for snap/1 protocol
Now the macro assembler supports the Merge fork, Shanghai, etc. without
using ugly hacks.
Also, each assembler test has its own `setup` section that can access
`vmState` and perform various custom setup.
Simplify the EVM and delegate those things to the accounts cache.
Also, no more manual state clearing; the accounts cache will be
responsible for both collecting touched accounts and performing
state clearing.
* Gwei conversion should use u256 because u64 can overflow.
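For illustration, assuming the stint `UInt256` type: 2^64 wei is only about 18.45 ETH, so a wei amount derived from a Gwei balance can easily exceed what u64 holds:

```nim
# Illustrative only: converting Gwei to wei in 64 bits wraps around for
# quite ordinary balances (2^64 - 1 ~ 18.45e18 wei, i.e. ~18.45 ETH).
import stint                        # assumption: stint provides UInt256

const weiPerGwei = 1_000_000_000'u64

let gweiBalance = 20_000_000_000'u64                   # 20 ETH expressed in Gwei
let weiBalance = gweiBalance.u256 * weiPerGwei.u256    # 2e19 wei, needs > 64 bits
```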
* Make withdrawals follow the EIP-158 state-clearing rules.
(i.e. Empty accounts should be deleted.)
* Allow the zero address in normalizeNumber.
(Necessary for one of the new withdrawals-related tests.)
* Another fix with a withdrawals-related test.
* Add state root to node steps path register `RPath` or `XPath`
why:
Typically, the first node in the path register is the state root. There
are occasions, when the path register is empty (i.e. there are no node
references) which typically applies to a zero node key.
In order to find the next node key greater than zero, the state root
is needed, which is now part of the `RPath` or `XPath` data types.
* Extracted hexary tree debugging functions into separate files
* Update empty path fringe case for left/right node neighbour
why:
When starting at zero, the node steps path register would be empty. So
would any path that is before the first non-zero link of a state root (if
it is a `Branch` node.)
The `hexaryNearbyRight()` or `hexaryNearbyLeft()` function required a
non-zero node steps path register. Now the first node is to be advanced
starting at the first state root link if necessary.
* Simplify/reorg neighbour node finder
why:
There was too much code repetition for the cases
* persistent or in-memory database
* left or right move
details:
Most algorithms apply for persistent and in-memory alike. Using
templates/generic functions most of these algorithms can be stated
in a unified way
* Update storage slots snap/1 handler
details:
Minor changes to be more debugging friendly.
* Fix detection of full database for snap sync
* Docu: Snap sync test & debugging scenario
* Handle last/all node(s) proof conditions at leaf node extractor
detail:
Flag whether the maximum extracted node is the last one in database
No proof needed if the full tree was extracted
* Clean up some helpers & definitions
details:
Move entities to more plausible locations, e.g. `Account` object need
not be dealt with in the range extractor as it applies to any kind of
leaf data.
* Fix next/prev database walk fringe condition
details:
The first check needed might be for a leaf node; previously it was done too late.
* Homogenise snap/1 protocol function prototypes
why:
The range arguments `origin` and `limit` data types differed in various
function prototypes (`Hash256` vs. `openArray[byte]`.)
* Implement `GetStorageRange` handler
* Implement server timeout for leaf node retrieval
why:
This feature leaves control with the server over probably costly actions
invoked by the network
* Implement maximal reply size for snap service
why:
This feature leaves control with the server over probably costly actions
invoked by the network.
* Part of EIP-4895: add withdrawals processing to block processing.
* Refactoring: extracted the engine API handler bodies into procs.
Intending to implement the V2 versions next. (I need the bodies to be
in separate procs so that multiple versions can use them.)
* Working on Engine API changes for Shanghai.
* Updated nim-web3, resolved ambiguity in Hash256 type.
* Updated nim-eth3 to point to master, now that I've merged that.
* I'm confused about what's going on with engine_client.
But let's try resolving this Hash256 ambiguity.
* Still trying to fix this conflict with the Hash256 types.
* Does this work now that nimbus-eth2 has been updated?
* Corrected blockValue in getPayload responses back to UInt256.
c834f67a37
* Working on getting the withdrawals-related tests to pass.
* Fixing more of those Hash256 ambiguities.
(I'm not sure why the nim-web3 library introduced a conflicting type
named Hash256, but right now I just want to get this code to compile again.)
* Bumped a couple of libraries to fix some error messages.
* Needed to get "make fluffy-tools" to pass, too.
* Getting "make nimbus_verified_proxy" to build.
* Clean up some function prototypes
why:
Simplify polymorphic prototype variances for easier maintenance.
* Fix fringe condition crash when importing bogus RLP node
why:
Accessing non-list RLP entry as a list causes `Defect`
* Fix left boundary proof at range extractor
why:
Was insufficient. The main problem was that there was no unit test for
the validity of the generated left boundary.
* Handle incomplete left boundary proofs early
why:
Attempt to do it later leads to overly complex code in order to prevent
looping when the same peer repeatedly sends the same incomplete proof.
In contrast, gaps in the leaf sequence can be handled gracefully by
registering the gaps
* Implement a manual pivot setup mechanism for snap sync
why:
For a test scenario it is convenient to set the pivot to something
lower than the beacon header from the consensus layer. This does not
need to rely on any RPC mechanism.
details:
The file containing the pivot specs is specified by the
`--sync-ctrl-file` option. It is regularly parsed for updates.
* Fix calculation error
why:
Prevent calculating the square root of a negative value
* Renaming androgynous sub-object names according to where they belong
why:
These objects are not explicitly dealt with. They give meaning to
some generic wrapper objects. Naming them after their origin may
help troubleshooting.
* Redefine proof nodes list data type for `snap/1` wire protocol
why:
The current specification suffered from the fact that the basic data
type for a proof node is an RLP encoded hexary node. This slightly
confused the encoding/decoding magic.
details:
This is the second attempt, now wrapping the `seq[Blob]` into a
wrapper object of `seq[SnapProof]` for a distinct alias sequence.
In the previous attempt, `SnapProof` was a wrapper object holding the
`Blob` with magic applied to the `seq[]`. This needed the `append`
mixin to strip the outer wrapper that was applied to the `Blob` already
when it was passed as argument.
* Fix some prototype inconsistency
why:
For easy reading, `getAccountRange()` handler return code should
resemble the `accountRange()` arguments prototype.
* Fix locked database file annoyance with unit tests on Windows
why:
Need to clean up old files first from previous session as files remain
locked despite closing of database.
* Fix initialisation order
detail:
Apparently this has no real effect as the ticker is only initialised
here but started later.
This possible bug has been there for a while and was running with the
previous compiler and libraries.
* Better naming of data fields for sync descriptors
details:
* BuddyRef[S,W]: buddy.data -> buddy.only
* CtxRef[S]: ctx.data -> ctx.pool
* Refactoring in preparation for time-based forking.
* Timestamp-based hard-fork-transition.
* Workaround SideEffect issue / compiler bug for both failing locations in Portal history code
---------
Co-authored-by: kdeme <kim.demey@gmail.com>
* Redefine `seq[Blob]` => `seq[SnapProof]` for `snap/1` protocol
why:
Proof nodes are traded as `Blob` type items rather than Nim objects. So
the RLP transcoder must not wrap proofs of type `seq[Blob]` a second
time. Without custom encoding one would produce a
`list(blob(item1), blob(item2) ..)` instead of `list(item1, item2 ..)`.
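The pitfall can be illustrated with a toy RLP encoder for short payloads
(the production code uses the `nim-eth` RLP module; this sketch only shows
why the default `seq[Blob]` encoding wraps each proof node a second time):

    proc rlpBytes(s: seq[byte]): seq[byte] =
      # RLP for a byte string shorter than 56 bytes
      if s.len == 1 and s[0] < 0x80: return s
      @[byte(0x80 + s.len)] & s

    proc rlpList(items: varargs[seq[byte]]): seq[byte] =
      # RLP for a list whose payload is shorter than 56 bytes
      var payload: seq[byte]
      for it in items: payload.add it
      @[byte(0xc0 + payload.len)] & payload

    let item = rlpBytes(@[byte 1, 2, 3])         # already RLP encoded proof node

    echo rlpList(item, item)                     # wanted: list(item1, item2 ..)
    echo rlpList(rlpBytes(item), rlpBytes(item)) # default: list(blob(item1), ..)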
* Limit leaf extractor by RLP size rather than number of items
why:
To be used serving `snap/1` requests, the result of function
`hexaryRangeLeafsProof()` is limited by the maximal space
needed to serialise the result which will be part of the
`snap/1` response.
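A sketch of the size-capped collection loop (types and names are
illustrative, not the actual prototype):

    type Leaf = object
      rlpSize: int              # size of this entry when RLP encoded

    proc fittingLeafs(leafs: openArray[Leaf]; replySizeMax: int): int =
      ## number of leading leafs whose summed RLP size stays within the limit
      var total = 0
      for n, leaf in leafs:
        if replySizeMax < total + leaf.rlpSize:
          return n
        total += leaf.rlpSize
      leafs.len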
* Let the range extractor `hexaryRangeLeafsProof()` return RLP list sizes
why:
When collecting accounts, the size of the accounts list when encoded
as RLP is continually updated. So the summed up value is available
anyway. For the proof nodes list, there are not many (~ 10) so summing
up is not expensive here.
* Removed some Windows specific unit test annoyances
details:
+ Short put()/get() cycles on persistent database have a race condition
with vendor rocksdb. On a specific (and slow) qemu/win7 a 50ms `sleep()`
in between will mostly do the job (i.e. unless heavy CPU load.) This
issue was not observed on github/ci.
+ Removed annoyances when qemu/Win7 keeps the rocksdb database files
locked even after closing the db. The problem is solved by strictly
using fresh names for each test. No assumption made to be able to
properly clean up. This issue was not observed on github/ci.
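The fresh-name trick amounts to something like the helper below (illustrative
only, not the actual test code):

    import std/[os, times]

    proc freshDbDir(prefix: string): string =
      # unique directory per test run, so a stale lock left over from a
      # previous session cannot get in the way
      let name = prefix & "-" & $getCurrentProcessId() & "-" & $epochTime().int
      result = getTempDir() / name
      createDir result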
* Silence some compiler gossip -- part 7, misc/non(sync or graphql)
details:
Adding some missing exception annotation
* Unit tests to verify calculations based on hard coded constants
why:
Sizes of RLP encoded objects are available at run time only.
* Changed argument order for `hexaryRangeLeafsProof()` prototype
why:
Better to read as a stand-alone function (arguments were optimised
for functional pipelines)
* Run sub-range proof tests for extracted ranges
* Cosmetics
details:
+ Update doc generator
+ Fix key type representation in `hexary_desc` for debugging
+ Redefine `isImportOk()` as template for better `check()` line reporting
* Fix fringe condition when interpolating Merkle-Patricia tries
details:
Small change with profound effect fixing some pathological condition
that haunted the unit test set on large data sets. There is still one
condition left which might well be due to an incomplete data set.
* Unit test proof nodes for node range extractor
* Unit tests to run on full extraction set
why:
Left over from troubleshooting, range length was only 5
* Reduce Nim 1.6 compiler warnings/hints for Fluffy and Nimbus proxy
Mostly raises Defect removals, TaintedString removal and some
unnecessary imports.
Also updating the copyright years alongside.
* Further reduce Nim 1.6 compiler warnings/hints for Nimbus
* Silence some compiler gossip -- part 5, common
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Silence some compiler gossip -- part 6, db, rpc, utils
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Silence some compiler gossip -- part 7, randomly collected source files
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Silence some compiler gossip -- part 8, assorted tests
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Clique update
why:
More impossible exceptions (undoes temporary fix from previous PR)
* Update comments and test noise
* Fix boundary proofs
why:
These were neither used in production nor unit tested. For production,
other methods apply that test leaf range integrity directly based on the
proof nodes.
* Added `hexary_range()`: interval range + proof extractor
details:
+ Will be used for `snap/1` protocol handler
+ Unit tests added (also for testing left boundary proof)
todo:
Need to verify completeness of proof nodes
* Reduce some nim 1.6 compiler noise
* Stop unit test gossip for ci tests
* Updated to the latest nim-eth, nim-rocksdb, nim-web3
* Bump nimbus-eth2 module and fix related issues
Temporarily disabling Portal beacon light client network as it is
a lot of copy pasted code that did not yet take into account
forks. This will require a bigger rework and was not yet tested
in an actual network anyhow.
* More nimbus fixes after module bumps
---------
Co-authored-by: Adam Spitz <adamspitz@status.im>
Co-authored-by: jangko <jangko128@gmail.com>
Two unresolved items currently:
- Three tests that are temporarily disabled as they fail in the
macro_assembler code, which seems to be due to an ambiguous
identifier Stop (Ops and chronos ServerCommand enum).
- i386 CI disabled as it fails at Nim compilation already. Failed
tests were already ignored for this target.
why:
Clique relies on the even/odd position of an address after sorting. For
address generation, the Nim PRNG was used which seems to have changed
with Nim 1.6.11 (Linux, Windows only.)
As a replacement, the Posix.1-2001 example generator (a two-line
calculation) is used.
* Extracted RocksDB timing unit tests into separate file
why:
make space for more in main module :)
* Extracted `inspectionRunner()` unit tests into separate file
why:
make space for more in main module :)
* Extracted `storagesRunner()` unit tests into separate file
why:
make space for more in main module :)
* Extracted pivot checkpoint store/retrieval unit tests into separate file
why:
make space for more in main module :)
* Extract helper functions into separate source file
* Extracted account import unit tests into separate file
why:
make space for more in main module :)
* Rename `test_decompose()` => `test_NodeRangeDecompose()`
why:
There will be more functions with `test_NodeRange` prefix.
The `BlockHeader` structure in `nim-eth` was updated with support for
EIP-4844 (danksharding). To enable the `nim-eth` bump, the ingress of
`BlockHeader` structures has been hardened to reject headers that have
the new `excessDataGas` field until proper EIP4844 support exists.
https://github.com/status-im/nim-eth/pull/570
* Rename and update dismantle => hexaryEnvelopeDecompose()
why:
+ As for naming, a positive connotation is preferred
+ The unit tests were really insufficient
+ The function result was wrong on a few boundary conditions
detail:
+ Extracted the function from `hexary_paths.nim` and re-implemented
it together with other envelope functions => `hexary_envelope.nim`
+ Re-wrote docu for `hexaryEnvelopeDecompose()`
* Relaxed right condition for `hexaryEnvelopeDecompose()` range argument
why:
Previously, the right point of the argument interval had to be a path
to an allocated leaf node. While this is typically a given for accounts,
it is easier to require an arbitrary range of paths (or keys) with
the requirement of a `boundary proof` for left and right (i.e. enough
nodes in the database to find the end points.)
also:
Bug fixes for related functions (typos, missing conditions etc.)
* Add missing unit tests include file
* Add quick hexary trie inspector, called `dismantle()`
why:
+ Full hexary trie perusal is slow if running down leaf nodes
+ For known range of leaf nodes, work out the UInt126-complement of
partial sub-trie paths (for existing nodes). The result should cover
no (or only a few) sub-tries with leaf nodes.
* Extract common healing methods => `sub_tries_helper.nim`
details:
Also apply quick hexary trie inspection tool `dismantle()`
Replace `inspectAccountsTrie()` wrapper by `hexaryInspectTrie()`
* Re-arrange task dispatching in main peer worker
* Refactor accounts and storage slots downloaders
* Rename `HexaryDbError` => `HexaryError`
The `BlockHeader` structure in `nim-eth` was updated with support for
EIP-4895 (withdrawals). To enable the `nim-eth` bump, the ingress of
`BlockHeader` structures has been hardened to reject headers that have
the new `withdrawalsRoot` field until proper withdrawals support exists.
https://github.com/status-im/nim-eth/pull/562
* Stop negotiating pivot if peer repeatedly replies w/ useless answers
why:
There is some fringe condition where a peer replies with legit but
useless empty headers repeatedly. This goes on until somebody stops.
We stop now.
* Rename `missingNodes` => `sickSubTries`
why:
These (probably missing) nodes represent in reality fully or partially
missing sub-tries. The top nodes may even exist, e.g. as a shallow
sub-trie.
also:
Keep track of account healing on/off by bool variable `accountsHealing`
controlled in `pivot_helper.execSnapSyncAction()`
* Add `nimbus` option argument `snapCtx` for starting snap recovery (if any)
also:
+ Trigger the recovery (or similar) process from inside the global peer
worker initialisation `worker.setup()` and not by the `snap.start()`
function.
+ Have `runPool()` return a `bool` code to indicate early stop to the
scheduler.
* Can import partial snap sync checkpoint at start
details:
+ Modified what is stored with the checkpoint in `snapdb_pivot.nim`
+ Will be loaded within `runDaemon()` if activated
* Forgot to import total coverage range
why:
Only the top (or latest) pivot needs coverage but the total coverage
is the list of all ranges for all pivots -- simply forgotten.
* Piecemeal trie inspection
details:
Trie inspection will stop after maximum number of nodes visited.
The inspection can be resumed using the returned state from the
last session.
why:
This feature allows for task switch between `piecemeal` sessions.
* Extract pivot helper code from `worker.nim` => `pivot_helper.nim`
* Accounts import will now return dangling paths from `proof` nodes
why:
With proper bookkeeping, this can be used to start healing without
analysing the probably full trie.
* Update `unprocessed` account range handling
why:
The API works on a pair of unprocessed interval sets: the first set is
favoured, and only when it is exhausted does the second set come into
play.
This was unfortunately implemented in a way that caused the ranges to be
unnecessarily fragmented. Now the number of range intervals typically
remains in the low single digits.
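A rough sketch of that fetch policy, with plain sequences standing in for
the real interval set type (names borrowed from the text, otherwise
illustrative):

    type
      NodeTagRange = tuple[lo, hi: uint64]         # stand-in for the real range
      SnapTodoRanges = array[2, seq[NodeTagRange]]

    proc fetch(q: var SnapTodoRanges): (bool, NodeTagRange) =
      # favour the first set; only when it is exhausted use the second one
      for i in 0 .. 1:
        if 0 < q[i].len:
          return (true, q[i].pop)
      var empty: NodeTagRange
      (false, empty)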
* Save sync state after end of downloading some accounts
details:
restore/resume to be implemented later
* Update log ticker, using time interval rather than ticker count
why:
Counting and logging ticker occurrences is inherently imprecise. So
time intervals are used.
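The time-based gate boils down to something like this (the interval value is
made up for illustration):

    import std/times

    let tickerLogInterval = initDuration(seconds = 5)
    var lastTick = getTime() - tickerLogInterval

    proc maybeTick() =
      # emit at most one log line per interval, regardless of call frequency
      let now = getTime()
      if tickerLogInterval <= now - lastTick:
        lastTick = now
        echo "sync progress ..."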
* Use separate storage tables for snap sync data
* Left boundary proof update
why:
Was not properly implemented, yet.
* Capture pivot in peer worker (aka buddy) tasks
why:
The pivot environment is linked to the `buddy` descriptor. While
there is a task switch, the pivot may change. So it is passed on as
function argument `env` rather than retrieved from the buddy at
the start of a sub-function.
* Split queues `fetchStorage` into `fetchStorageFull` and `fetchStoragePart`
* Remove obsolete account range returned from `GetAccountRange` message
why:
The handler returned the wrong right end of the range. This range was
provided for convenience only.
* Prioritise storage slots if the queue becomes large
why:
Currently, accounts processing is prioritised up until all accounts
are downloaded. The new prioritisation has two thresholds for
+ start processing storage slots with a new worker
+ stop account processing and switch to storage processing
also:
Provide api for `SnapTodoRanges` pair of range sets in `worker_desc.nim`
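A hedged sketch of the two-threshold policy (the constants are made up; the
real values live in the worker configuration):

    const
      storSlotsQuPrioThresh = 5_000     # start an extra storage worker above this
      accountsFetchStopThresh = 50_000  # stop accounts processing above this

    proc nextTask(storageQueueLen: int; accountsDone: bool): string =
      if accountsDone or accountsFetchStopThresh <= storageQueueLen:
        "fetch storage slots"                    # switch over and drain the queue
      elif storSlotsQuPrioThresh <= storageQueueLen:
        "fetch storage slots with extra worker"  # drain in parallel
      else:
        "fetch accounts"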
* Generalise left boundary proof for accounts or storage slots.
why:
Detailed explanation how this works is documented with
`snapdb_accounts.importAccounts()`.
Instead of enforcing a left boundary proof (which is still the default),
the importer functions return a list of `holes` (aka node paths) found in
the argument ranges of leaf nodes. This in turn is used by the
bookkeeping software for data download.
* Forgot to pass on variable in function wrapper
also:
+ Start healing not before 99% accounts covered (previously 95%)
+ Logging updated/prettified
* Added basic async capabilities for vm2.
This is a whole new Git branch, not the same one as last time
(https://github.com/status-im/nimbus-eth1/pull/1250) - there wasn't
much worth salvaging. Main differences:
I didn't do the "each opcode has to specify an async handler" junk
that I put in last time. Instead, in oph_memory.nim you can see
sloadOp calling asyncChainTo and passing in an async operation.
That async operation is then run by the execCallOrCreate (or
asyncExecCallOrCreate) code in interpreter_dispatch.nim.
In the test code, the (previously existing) macro called "assembler"
now allows you to add a section called "initialStorage", specifying
fake data to be used by the EVM computation run by that test. (In
the long run we'll obviously want to write tests that for-real use
the JSON-RPC API to asynchronously fetch data; for now, this was
just an expedient way to write a basic unit test that exercises the
async-EVM code pathway.)
There's also a new macro called "concurrentAssemblers" that allows
you to write a test that runs multiple assemblers concurrently (and
then waits for them all to finish). There's one example test using
this, in test_op_memory_lazy.nim, though you can't actually see it
doing so unless you uncomment some echo statements in
async_operations.nim (in which case you can see the two concurrently
running EVM computations each printing out what they're doing, and
you'll see that they interleave).
A question: is it possible to make EVMC work asynchronously? (For
now, this code compiles and "make test" passes even if ENABLE_EVMC
is turned on, but it doesn't actually work asynchronously, it just
falls back on doing the usual synchronous EVMC thing. See
FIXME-asyncAndEvmc.)
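A very rough sketch of the mechanism described above (the names follow the
text; the types and bodies are illustrative, not the actual implementation):

    import std/asyncdispatch

    type Computation = ref object
      pendingAsyncOperation: Future[void]

    proc asyncChainTo(c: Computation; fut: Future[void]) =
      c.pendingAsyncOperation = fut

    proc fetchStorageSlot(): Future[void] {.async.} =
      await sleepAsync(1)            # stands in for an async JSON-RPC data fetch

    proc sloadOp(c: Computation) =
      # opcode handler: request the data and let the dispatcher await it
      c.asyncChainTo(fetchStorageSlot())

    proc asyncExecCallOrCreate(c: Computation) {.async.} =
      sloadOp c
      if not c.pendingAsyncOperation.isNil:
        await c.pendingAsyncOperation  # resume once the data has arrived
        c.pendingAsyncOperation = nil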
* Moved the AsyncOperationFactory to the BaseVMState object.
* Made the AsyncOperationFactory into a table of fn pointers.
Also ditched the plain-data Vm2AsyncOperation type; it wasn't
really serving much purpose. Instead, the pendingAsyncOperation
field directly contains the Future.
* Removed the hasStorage idea.
It's not the right solution to the "how do we know whether we
still need to fetch the storage value or not?" problem. I
haven't implemented the right solution yet, but at least
we're better off not putting in a wrong one.
* Added/modified/removed some comments.
(Based on feedback on the PR.)
* Removed the waitFor from execCallOrCreate.
There was some back-and-forth in the PR regarding whether nested
waitFor calls are acceptable:
https://github.com/status-im/nimbus-eth1/pull/1260#discussion_r998587449
The eventual decision was to just change the waitFor to a doAssert
(since we probably won't want this extra functionality when running
synchronously anyway) to make sure that the Future is already
finished.
* Re-arrange fetching storage slots in batch module
why:
Previously, fetching partial slot ranges first had a chance of
terminating the worker peer (due to a network error) while there were
many inheritable storage slots on the queue.
Now, inheritance is checked first, then full slot ranges and finally
partial ranges.
* Update logging
* Bundled node information for healing into single object `NodeSpecs`
why:
Previously, partial paths and node keys were kept in separate variables.
This approach was error prone due to copying/reassembling function
argument objects.
As all partial paths, keys, and node data types are more or less handled
as `Blob`s over the network (using Eth/6x, or Snap/1) it makes sense to
hold these `Blob`s as named fields in a single object (even if not all
fields are active for the current purpose.)
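Roughly the following shape (field names are illustrative):

    type
      Blob = seq[byte]
      NodeSpecs = object
        partialPath: Blob   # hex encoded partial path leading to the node
        nodeKey: Blob       # node hash, kept as Blob like everything on the wire
        data: Blob          # raw node data, may be empty if not fetched yet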
* For good housekeeping, using `NodeKey` type only for account keys
why:
previously, a mixture of `NodeKey` and `Hash256` was used. Now, only
state or storage root keys use the `Hash256` type.
* Always accept latest pivot (and not a slightly older one)
why:
For testing it was tried to use a slightly older pivot state root than
available. Some anecdotal tests seemed to suggest an advantage so that
more peers are willing to serve on that older pivot. But this could not
be confirmed in subsequent tests (still anecdotal, though.)
As a side note, the distance of the latest pivot to its predecessor is
at least 128 (or whatever the constant `minPivotBlockDistance` is
assigned to.)
* Reshuffle name components for some file and function names
why:
Clarifies purpose:
"storages" becomes: "storage slots"
"store" becomes: "range fetch"
* Stash away currently unused modules in sub-folder named "notused"
* Re-model persistent database access
why:
Storage slots healing just ran on the wrong sub-trie (i.e. the wrong
key mapping). So get/put and bulk functions now use the definitions
in `snapdb_desc` (earlier there were some shortcuts for `get()`.)
* Fixes: missing return code, typo, redundant imports etc.
* Remove obsolete debugging directives from `worker_desc` module
* Correct failing unit tests for storage slots trie inspection
why:
Some pathological cases for the extended tests do not produce any
hexary trie data. This is rightly detected by the trie inspection
and the result checks needed to be adjusted.
* For snap sync, publish `EthWireRef` in sync descriptor
why:
currently used for noise control
* Detect and reuse existing storage slots
* Provide healing module for storage slots
* Update statistic ticker (adding range factor for unprocessed storage)
* Complete merge function for work item ranges
why:
Merging an interval into an existing partial item was missing
* Show average storage queue lengths in ticker
detail:
The previous attempt showed average completeness, which did not tell much
* Correct the meaning of the storage counter (per pivot)
detail:
Is the # accounts that have a storage saved
* Rename `LeafRange` => `NodeTagRange`
* Replacing storage slot partition point by interval
why:
The partition point only allows describing slots `[point,high(Uint256)]`
when fetching slot ranges. This has been generalised to any
interval.
* Replacing `SnapAccountRanges` by `SnapTrieRangeBatch`
why:
Generalised healing status for accounts, and later for storage slots.
* Improve accounts healing loop
* Split `snap_db` into accounts and storage modules
why:
It is cleaner to have separate session descriptors for accounts and
storage slots (based on a common base descriptor.)
Also, persistent storage handling might be changed in future which
requires the storage slot implementation disentangled from the accounts
handling.
* Re-model worker queues for storage slots
why:
There is a dynamic list of storage sub-tries, each one has to be
treated similarly to the accounts database. This applies to slot
interval downloads as well as to healing.
* Compress some return value report lists for snapdb methods
why:
No need to report all handling details for work items that are filtered
out and discarded, anyway.
* Remove inner loop frame from healing function
why:
The healing function runs as a loop body already.
* Split fetch accounts into sub-modules
details:
There will be separated modules for accounts snapshot, storage snapshot,
and healing for either.
* Allow to rebase pivot before negotiated header
why:
Peers seem to have not too many snapshots available. By setting back the
pivot block header slightly, the chances might be higher to find more
peers to serve this pivot. Experiment on mainnet showed that setting back
too much (tested with 1024), the chances to find matching snapshot peers
seem to decrease.
* Add accounts healing
* Update variable/field naming in `worker_desc` for readability
* Handle leaf nodes in accounts healing
why:
There is no need to fetch accounts when they had been added by the
healing process. On the flip side, these accounts must be checked for
storage data and the batch queue updated, accordingly.
* Reorganising accounts hash ranges batch queue
why:
The aim is to formally cover as many accounts as possible for different
pivot state root environments. Formerly, this was tried by starting the
accounts batch queue at a random value for each pivot (and wrapping
around.)
Now, each pivot environment starts with an interval set mutually
disjunct from any interval set retrieved with other pivot state roots.
also:
Stop fishing for more pivots in `worker` if 100% download is reached
* Reorganise/update accounts healing
why:
Error handling was wrong and the (math. complexity of) whole process
could be better managed.
details:
Much of the algorithm is now documented at the top of the file
`heal_accounts.nim`
* Added inspect module
why:
Find dangling references for trie healing support.
details:
+ This patch set provides only the inspect module and some unit tests.
+ There are also extensive unit tests which need bulk data from the
`nimbus-eth1-blob` module.
* Alternative pivot finder
why:
Attempt to be faster on start up. Also trying to decouple the pivot finder
somehow by providing different mechanisms (this one runs in `single`
mode.)
* Use inspect module for healing
details:
+ After some progress with account and storage data, the inspect facility
is used to find dangling links in the database to be filled node-wise.
+ This is a crude attempt to cobble together functional elements. The
set up needs to be honed.
* fix scheduler to avoid starting dead peers
why:
Some peers drop out while in `sleepAsync()`. So extra `if` clauses
make sure that this event is detected early.
* Bug fixes causing crashes
details:
+ prettify.toPC():
int/intToStr() numeric range over/underflow
+ hexary_inspect.hexaryInspectPath():
take care of half initialised step with branch but missing index into
branch array
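For the `toPC()` crash, the idea of the fix can be sketched as clamping the
input before formatting (illustrative only; the actual repair may differ):

    import std/strutils

    proc toPC(f: float; digits = 2): string =
      # clamp first so extreme inputs cannot overflow the integer conversion
      # hidden inside the number formatting
      let clamped = max(0.0, min(1.0, f))
      formatFloat(clamped * 100.0, ffDecimal, digits) & "%"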
* improve handling of dropped peers in alternative pivot finder
why:
Strange things may happen while querying data from the network.
Additional checks make sure that the state of other peers is updated
immediately.
* Update trace messages
* reorganise snap fetch & store schedule
* Re-implemented `hexaryFollow()` in a more general fashion
details:
+ New name for re-implemented `hexaryFollow()` is `hexaryPath()`
+ Renamed `rTreeFollow()` as `hexaryPath()`
why:
Returning similarly organised structures, the results of the
`hexaryPath()` functions become comparable when running over
the persistent and the in-memory databases.
* Added traversal functionality for persistent ChainDB
* Using `Account` values as re-packed Blob
* Repack samples as compressed data files
* Produce test data
details:
+ Can force pivot state root switch after minimal coverage.
+ For emulating certain network behaviour, downloading accounts stops for
a particular pivot state root if 30% (some static number) coverage is
reached. Following accounts are downloaded for a later pivot state root.
* Bump nim-stew
why:
Need fixed interval set
* Keep track of accumulated account ranges over all state roots
* Added comments and explanations to unit tests
* typo
* Extracted functionality into sub-modules for maintainability
* Setting SST bulk load as default in `accounts_db`
details:
+ currently, the same data are stored via rocksdb if available, and
the same via embedded `storage_type` with (non-standard) prefix 200
for time comparisons
+ fallback to normal `put()` unless rocksdb is accessible
* Provided common scheduler API, applied to `full` sync
* Use hexary trie as storage for proofs_db records
also:
+ Store metadata with account for keeping track of account state
+ add iterator over accounts
* Common scheduler API applied to `snap` sync
* Prepare for accounts bulk import
details:
+ Added some ad-hoc checks for proving accounts data received from the
snap/1 (will be replaced by proper database version when ready)
+ Added code that dumps some of the received snap/1 data into a file
(turned off by default, see `worker_desc.nim`)
* Added sepolia specs
* temporarily avoid latest `master` branch from `nim-eth1`
why:
Currently does not cleanly compile after the `bearssl` split api update.
* Relocated `IntervalSets` to nim-stew repo
* Accumulate accounts on temporary kv-DB
why:
Explore the data as returned from snap/1. Will be converted to an
`eth/db` next.
details:
Verify and accumulate per/state-root accounts downloaded via snap.
also:
Some unit tests
* Replace `Table` by `TrieDatabaseRef` for accounts accumulator
* update ticker statistics
details:
mean/variance based counter update
* allow persistent db for proved accounts
* rebase, and globally activate unit test
* fix statistics
* Using `IntervalSet` type data for `LeafRange`
* Updated log ticker
* Update to `eth67`
details:
Disabled by default, use `ENABLE_LEGACY_ETH66=0` to enable
No support for `Get/NodeData` dialogue via eth, anymore
* Dissolved fetch/common.nim
details:
the log/ticker part becomes ticker.nim
the interval range management is merged into fetch.nim
* Updated account scheduler
why:
The previous scheduler fetched each account once (for different state
roots.) The updated scheduler re-calibrates after a change of the state
root and potentially (until told otherwise) fetches all possible
accounts.
* Fix `high(P)` fringe cases in `IntervalSet` handling
why:
The `high(P)` value for a point type `P` cannot be represented with
half open intervals `[a,b)` for a,b points of `P`. So this single value
needs extra treatment which was slightly wrong.
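A small illustration of the fringe case, using `uint8` as a toy point type:

    type P = uint8

    # A half open interval [a,b) with a,b of type P covers at most
    # high(P) - low(P) points, so the single point high(P) -- and hence the
    # full range -- cannot be expressed without an extra special case.
    proc nPoints(a, b: P): int = int(b) - int(a)

    doAssert nPoints(low(P), high(P)) == 255   # 256 would be needed for "all"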
* Updated docu/comments
also:
rebased
* Update scheduler
details:
Change the `pivot` management when creating new accounts lists. It is
strictly increasing (and wrapping around) depending on last updated
accounts list.
* Use type name eth and snap (rather than snap1)
* Prettified snap/eth handler trace messages
* Regrouped sync sources
details:
Snap storage related sources are moved to common directory.
Option --new-sync renamed to --snap-sync
also:
Normalised logging for secondary/non-protocol handlers.
* Merge protocol wrapper files => protocol.nim
details:
Merge wrapper sync/protocol_ethxx.nim and sync/protocol_snapxx.nim
into single file snap/protocol.nim
* Comments cosmetics
* Similar start logic for blockchain_sync.nim and sync/snap.nim
* Renamed p2p/blockchain_sync.nim -> sync/fast.nim
* Update exception tracinig
* Use lazy JSON parser
why:
The UInt256 type is not directly supported by the JSON serializer
* Json parser update for stringified UInt256 integers
why:
Now available
* update sub-module branch reference
* Prepare unit tests for running without tx-pool job queue
why:
Most of the job queue logic can be emulated. This adapts to a few
pathological test cases.
* Replace tx-pool job queue logic with in-place actions
why:
This additional execution layer is not needed anymore, which has been
learned from working with the integration/hive tests.
details:
Add or deletion jobs are executed in-place. Some actions
-- as in smartHead() -- have been combined to facilitate the correct
order of actions.
* Update production functions, remove txpool legacy stuff
* Enable JWT authentication for websockets
details:
Currently, this is optional and only enabled when the jwtsecret option
is set.
There is a default mechanism to generate a JWT secret if it is not
explicitly stated. This mechanism is currently unused.
* Make JWT authentication compulsory for websockets
* Fix unit test entry point + cosmetics
* Update JSON-RPC link
* Improvements as suggested by Mamy
why:
Causes havoc in most bit fringe cases.
details:
When setting the head forward, the delta was wrongly registered from
the static "left" end (which limits the loop) rather than the moving
"right" end.
* Fix database sort order for local txs
why:
For convenience, packed txs were stored in the block sorted by
rank->nonce. Using local accounts, the greedy grabber uses the sort
order (local,non-local)->rank->nonce which leads to a wrong calculation
of the txRoot.
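A sketch of the two orderings involved (field names are illustrative):

    type TxItem = object
      local: bool    # belongs to a tagged local account
      rank: int
      nonce: uint64

    proc grabCmp(a, b: TxItem): int =
      # order used by the greedy grabber: locals first, then rank, then nonce
      result = cmp(b.local, a.local)
      if result == 0: result = cmp(a.rank, b.rank)
      if result == 0: result = cmp(a.nonce, b.nonce)

    proc blockCmp(a, b: TxItem): int =
      # order kept for the txs stored in the block, matching the txRoot
      result = cmp(a.rank, b.rank)
      if result == 0: result = cmp(a.nonce, b.nonce)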
* Housekeeping
details:
Replaced a couple of local eip1559TxNormalization() functions by a
single public one.
* Support for local accounts
why:
Accounts tagged local will be packed with priority over untagged
accounts
* Added functions for queuing txs and simultaneously setting account locality
why:
Might be a popular task, in particular for unconditionally adding txs to
a local (aka prioritised account) via "xp.addLocal(tx,true)"
caveat:
Untested yet
* fix typo
* backup
* No baseFee for pre-London tx in verifier
why:
The packer would wrongly discard valid legacy txs.
* De-noisify some Clique logging
why:
Too annoying when syncing against Goerli
* Replay devnet# and kiln sessions
why:
Compiled as local program, the unit test was used for TDD.
* dist: precompiled binaries and Docker images
The builds are reproducible, the binaries are portable and statically link librocksdb.
This took some patching. Upstream PR: https://github.com/facebook/rocksdb/pull/9752
32-bit ARM is missing as a target because two different GCC versions
fail with an ICE when trying to cross-compile RocksDB. Using Clang
instead is too much trouble for a platform that nobody should be using
anyway.
(Clang doesn't come with its own target headers and libraries, can't be
easily convinced to use the ones from GCC, so it needs an fs image from
a 32-bit ARM distro - at which point I stopped caring).
* CI: disable reproducibility test
* Activate wire protocol eth/66
and:
Disentangle protocol_eth66.nim from import sections
why:
Importing the protocol_eth66 module is not necessary. There is
no need to know too many details of the underlying wire protocol. All
that is needed will be exported by blockchain_sync.nim.
* fixes, and rebase
* Update nimbus/p2p/blockchain_sync.nim
Co-authored-by: Kim De Mey <kim.demey@gmail.com>
* Fixes and rebase
Co-authored-by: Kim De Mey <kim.demey@gmail.com>
why:
TDD data and test scripts that are not needed for CI are held externally.
This saves space.
also:
Added support for test-custom_networks.nim to run import Devnet5 dump.
* Rearrange/rename test_kintsugu => test_custom_network
why:
Debug, fix and test more general problems related to running
nimbus on a custom network.
* Update UInt256/Json parser for --custom-network command line option
why:
As found out with the Kintsugi configuration, block number and balance
have the same Nim type, which led to misunderstandings. This patch makes
sure that UInt256 encoded string values decode as expected: "0x11"
decodes to 17, and "b" and "11" both decode to 11.
* Refactored genesis.toBlock() => genesis.toBlockHeader()
why:
The function toBlock(g,db) may return different results depending on
whether the db descriptor argument is nil, or initialised. This is due
to the db.config data sub-descriptor which may give various outcomes
for the baseFee field of the genesis header.
Also, the version where db is non-nil initialised is used internally
only. So the public rewrite toBlockHeader() that replaces the toBlock()
function expects a full set of NetworkParams.
* update comments
* Rename toBlockHeader() => toGenesisHeader()
why:
Polymorphic prototype used for BaseChainDB or NetworkParams argument.
With a BaseChainDB descriptor argument, the name shall imply that the
header is generated from the config fields rather than fetched from
the database.
* Added command line option --static-peers-file
why:
Handy feature to keep peer nodes in a file, similar to the
--bootstrap-file option.
* Kludge needed for setting up custom network
why:
Some non-features in the persistent hexary trie DB produce an assert
error when initiating the Kintsugi network.
details:
This fix should be temporary, only.
* Fix OS detection
why:
directive detectOs() bails out on Windows if checking for Ubuntu
* test environment for studying crash of hexary trie
why:
the persistent test case will crash unless in genesis.toBlock():
+ pruneTrie is set false, or
+ the directive "tdb.put(emptyRlpHash.data,emptyRlp)" is added right
before the "for k, v in account.storage:" loop
* different tests for OS variants