* Unit tests update, code cosmetics
* Fix segfault with zombie handling
why:
In order to save memory, the data records of zombie entries are removed
and only the key (aka peer node) is kept. Consequently, logging these
zombies can only be done by the key.
* Allow to accept V2 payload without `shanghaiTime` set while syncing
why:
Currently, `shanghaiTime` is missing (alt least) while snap syncing. So
beacon node headers can be processed regardless. Normal (aka strict)
processing will be automatically restored when leaving snap sync mode.
* Cosmetics, renamed fields (eVtx, bVtx) -> (eVid, bVid)
* Multilayered delta architecture for Aristo DB
details:
Any VertexID or data retrieval needs to go down the rabbit hole and
fetch/get/manipulate the bottom layer -- even without explicit
backend.
* Direct reference to backend from top-level layer
why:
Some services as the vid management needs to be synchronised among all
layers. So access is optimised.
* Experimental MP-trie
why:
Deleting records is a infeasible with the current structure
* Added vertex ID recycling management
Todo:
Provide some unit tests
* DB layout update
why:
Main news is the separation of `Merkel` hashes into an extra table.
details:
The code fragments cover conversion between compact MPT records and
Aristo DB records as well as some rudimentary cache handling for
the `Merkel` hashes (i.e. the extra table entries.)
todo:
Add some simple unit test for the descriptor record (currently used
for vertex ID management, only.)
* Updated vertex ID recycling management
details:
added simple unit tests (mainly testing ABI)
* docu update
* Set maximum time for nodes to be banned.
why:
Useless nodes are marked zombies and banned. They a kept in a table
until flushed out by new connections. This works well if there are many
connections. For the case that there are a few only, a maximum time is
set. When expired, zombies are flushed automatically.
* Suspend full sync while block number at beacon block
details:
Also allows to use external setting from file (2nd line)
* Resume state at full sync after restart (if any)
* Relocate full sync descriptors from global `worker_desc.nim` to local pass
why:
These settings are needed only for the full sync pass.
* Rename `pivotAccountsCoverage*()` => `accountsCoverage*()`
details:
Extract from `worker_desc.nim` into separate source file.
* Relocate snap sync sub-descriptors
details:
..from global `worker_desc.nim` to local pass module `snap_pass_desc.nam`.
* Rename `SnapPivotRef` => `SnapPassPivotRef`
* Mostly removed `SnapPass` prefix from object type names
why:
These objects are solely used on the snap pass.
* Rename `playXXX` => `passXXX`
why:
Better purpose match
* Code massage, log message updates
* Moved `ticker.nim` to `misc` folder to be used the same by full and snap sync
why:
Simplifies maintenance
* Move `worker/pivot*` => `worker/pass/pass_snap/*`
why:
better for maintenance
* Moved helper source file => `pass/pass_snap/helper`
* Renamed ComError => GetError, `worker/com/` => `worker/get/`
* Keep ticker enable flag in worker descriptor
why:
This allows to pass this flag with the descriptor and not an extra
function argument when calling the setup function.
* Extracted setup/release code from `worker.nim` => `pass/pass_init.nim`
* Recreating some of the speculative-execution code.
Not really using it yet. Also there's some new inefficiency in
memory.nim, but it's fixable - just haven't gotten around to it yet.
The big thing introduced here is the idea of "cells" for stack,
memory, and storage values. A cell is basically just a Future (though
there's also the option of making it an Identity - just a simple
distinct wrapper around a value - if you want to turn off the
asynchrony).
* Bumped nim-eth.
* Cleaned up a few comments.
* Bumped nim-secp256k1.
* Oops.
* Fixing a few compiler errors that show up with EVMC enabled.
* Extract RocksDB timing tests from snap unit tests as separate module
why:
Declutter, make space for more snap related unit tests.
* Renamed `undumpNextGroup()` => `undumpBlocks()`
why:
Source file name is called `undump_blocks.nim` which should be sort
of in sync with the method name(s).
* Implement snap/1 server method `getByteCodes()`
* Implement snap/1 client method `getByteCodes()`
* Implement faculty for handling contract code fetching via snap/1
* Provide persistent storage for contract code records
* Implement contract code snap sync fetch & store
* Code massage, cosmetics
* Unit tests for verifying snap sync snapshot dump
details:
Use `undump_kvp.dumpAllDb()` to dump any database.
* Improve logging and logging options in Fluffy
- Allow selection of log format, including:
- JSON
- automatic selection based on tty
- Allow log levels per topic configured on cli
* Update sync scheduler pool mode
why:
The pool mode allows to loop over active peers one after another. This
is ideal for soft re-starting peers. As this is a two tier experience
(start/stop, setup/release) the loop must be run twice. This is
controlled by a more rigid re-definition of how to use the `poolMode`
flag.
* Mitigate RLP serialiser deficiency
why:
Currently, serialising the `BlockBody` in not conevrtible and need
to be checked in the `eth` module. Currently a local fix for the
wire protocol applies. Unit tests will stay (after this local solution
will have been removed.)
* Code cosmetics and massage
details:
Main part is `types.toStr()` as a unified function for logging block
numbers.
* Allow to use a logical genesis replacement (start of history)
why:
Snap sync will set up an arbitrary pivot at a block number different
from zero. In fact, the higher the block number the better.
details:
A non-genesis start of history will currently only affect the score
values which were derived from the difficulty.
* Provide function to store the snap pivot block header in chain db
why:
Together with the start of history facility, this allows to proceed
with full syncing once snap has finished.
details:
Snap db storage was switched from a sub-tables to the flat chain db.
* Provide database completeness and sanity checker
details:
For debugging on smaller databases, only
* Implement snap -> full sync switch
- Remove failing on withdrawalsRoot to allow Shanghai BlockHeader.
Still need to add real checks in block header / body validation.
- Add withdrawals array to Block object for JSON-RPC API
* Somewhat tighten error handling
why:
Zombie state is invoked when the current peer turns out to be useless
for further communication. While there is a chance to further talk
to a peer about another topic (aka healing) after some protocol failure,
it makes no sense to do so after a network problem.
The latter state is explained bu the `peerDegraded` flag that goes
together with the `zombie` state flag. A degraded peer is dropped
immediately.
* Remove `--sync-mode=snapCtx` option, always start snap in recovery mode
why:
No need for a snap sync option without recovery mode, can be achieved
by deleting the database.
* Code cosmetics, typos, prettify logging, debugging helper, etc.
* Split off snap sync sub-mode handler into separate modules
details:
The original `worker.nim` source has become a multiplexer for several
snap sync sub-modes `full` and `snap`. The source modules of the
incarnations of a particular sync sub-mode are places into the
`worker/play` directory.
* Update ticker for snap and full sync logging
* Fix fringe condition for `GetStorageRanges` message handler
why:
Receiving a proved empty range was not considered at all. This lead to
inconsistencies of the return value which led to subsequent errors.
* Update storage range bulk download
details;
Mainly re-org of storage queue processing in `storage_queue_helper.nim`
* Update logging variables/messages
* Update storage slots healing
details:
Mainly clean up after improved helper functions from the sources
`find_missing_nodes.nim` and `storage_queue_helper.nim`.
* Simplify account fetch
why:
To much fuss made tolerating some errors. There will be an overall
strategy implemented where the concert of download and healing function
is orchestrated.
* Add error resilience to the concert of download and healing.
why:
The idea is that a peer might stop serving snap/1 accounts and storage
slot downloads while still able to support fetching nodes for healing.
* Update nearby/neighbour leaf nodes finder
details:
Update return error codes so that in the case that there is no more
leaf node beyond the search direction, the particular error code
`NearbyBeyondRange` is returned.
* Compile largest interval range containing only this leaf point
why:
Will be needed in snap sync for adding single leaf nodes to the range
of already allocated nodes.
* Reorg `hexary_inspect.nim`
why:
Merged the nodes collecting algorithm for persistent and in-memory
into a single generic function `hexary_inspect.inspectTrieImpl()`
* Update fetching accounts range failure handling in `rangeFetchAccounts()`
why:
Rejected response leads now to fetching for another account range. Only
repeated failures (or all done) terminate the algorithm.
* Update accounts healing
why:
+ Fixed looping over a bogus node response that could not inserted into
the database. As a solution, these nodes are locally registered and not
asked for in this download cycle.
+ Sub-optimal handling of interval range for a healed account leaf node.
Now the maximal range interval containing this node is registered as
processed which leafs to de-fragementation of the processed (and
unprocessed) range list(s). So *gap* ranges which are known not to
cover any account leaf node are not asked for on the network, anymore.
+ Sporadically remove empty interval ranges (if any)
* Update logging, better variable names
t8n: a silly bug contract address generator, should use original
tx nonce instead of read the nonce from sender address in state db.
Although in EVM contract address generated by reading nonce from state db
is correct, outside EVM that nonce value might have been modified,
thus generating incorrect contract address.
accounts cache: when clearing account storage, the originalValue
cache is not cleared, only the storageRoot set to empty storage root,
this will cause getStorage and getCommitedStorage return wrong value
if the originalValue cache contains old value.
* Redesign snap1 message GetTrieNodes argument prototypes
why:
A list of sub-objects `seq[SnapTriePath]` is more intuitive to work with
than an opaque definition `seq[seq[Blob]]` because the inner object
`SnapTriePath` object has a dedicated inner structure (for how to
interprete `seq[Blob]`.)
* Collect some public constants into `constants.nim` file
* Reorg `hexary_paths.nim`
why:
+ Collecting nodes following a partial path properly ending at an
extension node failed to collect this last node.
+ Merged the nodes collecting algorithm for persistent and in-memory
into a single generic function `hexary_paths.rootPathExtend()`
info:
Extracted common tasks to `hexary_nodes_helper.nim`
* Implement `StorageRanges` message handler for snap/1 protocol
now macro assembler support merge fork, shanghai, etc without using ugly hack.
also each assembler test have their own `setup` section that can access
`vmState` and perform various custom setup.
simplify EVM and delegete those things to accounts cache.
also no more manual state clearing, accounts cache will be
responsible for both collecting touched account and perform
state clearing.
* Gwei conversion should use u256 because u64 can overflow.
* Make withdrawals follow the EIP-158 state-clearing rules.
(i.e. Empty accounts should be deleted.)
* Allow the zero address in normalizeNumber.
(Necessary for one of the new withdrawals-related tests.)
* Another fix with a withdrawals-related test.
* Add state root to node steps path register `RPath` or `XPath`
why:
Typically, the first node in the path register is the state root. There
are occasions, when the path register is empty (i.e. there are no node
references) which typically applies to a zero node key.
In order to find the next node key greater than zero, the state root is
is needed which is now part of the `RPath` or `XPath` data types.
* Extracted hexary tree debugging functions into separate files
* Update empty path fringe case for left/right node neighbour
why:
When starting at zero, the node steps path register would be empty. So
will any path that is before the fist non-zero link of a state root (if
it is a `Branch` node.)
The `hexaryNearbyRight()` or `hexaryNearbyLeft()` function required a
non-zero node steps path register. Now the first node is to be advanced
starting at the first state root link if necessary.
* Simplify/reorg neighbour node finder
why:
There was too mach code repetition for the cases
* persistent or in-memory database
* left or right move
details:
Most algorithms apply for persistent and in-memory alike. Using
templates/generic functions most of these algorithms can be stated
in a unified way
* Update storage slots snap/1 handler
details:
Minor changes to be more debugging friendly.
* Fix detection of full database for snap sync
* Docu: Snap sync test & debugging scenario
* Gwei conversion should use u256 because u64 can overflow.
* Make withdrawals follow the EIP-158 state-clearing rules.
(i.e. Empty accounts should be deleted.)
* Allow the zero address in normalizeNumber.
(Necessary for one of the new withdrawals-related tests.)
* Handle last/all node(s) proof conditions at leaf node extractor
detail:
Flag whether the maximum extracted node is the last one in database
No proof needed if the full tree was extracted
* Clean up some helpers & definitions
details:
Move entities to more plausible locations, e.g. `Account` object need
not be dealt with in the range extractor as it applies to any kind of
leaf data.
* Fix next/prev database walk fringe condition
details:
First check needed might be for a leaf node which was done too late.
* Homogenise snap/1 protocol function prototypes
why:
The range arguments `origin` and `limit` data types differed in various
function prototypes (`Hash256` vs. `openArray[byte]`.)
* Implement `GetStorageRange` handler
* Implement server timeout for leaf node retrieval
why:
This feature leaves control on the server for probably costly action
invoked by the network
* Implement maximal reply size for snap service
why:
This feature leaves control on the server for probably costly action
invoked by the network.
* Part of EIP-4895: add withdrawals processing to block processing.
* Refactoring: extracted the engine API handler bodies into procs.
Intending to implement the V2 versions next. (I need the bodies to be
in separate procs so that multiple versions can use them.)
* Working on Engine API changes for Shanghai.
* Updated nim-web3, resolved ambiguity in Hash256 type.
* Updated nim-eth3 to point to master, now that I've merged that.
* I'm confused about what's going on with engine_client.
But let's try resolving this Hash256 ambiguity.
* Still trying to fix this conflict with the Hash256 types.
* Does this work now that nimbus-eth2 has been updated?
* Corrected blockValue in getPayload responses back to UInt256.
c834f67a37
* Working on getting the withdrawals-related tests to pass.
* Fixing more of those Hash256 ambiguities.
(I'm not sure why the nim-web3 library introduced a conflicting type
named Hash256, but right now I just want to get this code to compile again.)
* Bumped a couple of libraries to fix some error messages.
* Needed to get "make fluffy-tools" to pass, too.
* Getting "make nimbus_verified_proxy" to build.
* Clean up some function prototypes
why:
Simplify polymorphic prototype variances for easier maintenance.
* Fix fringe condition crash when importing bogus RLP node
why:
Accessing non-list RLP entry as a list causes `Defect`
* Fix left boundary proof at range extractor
why:
Was insufficient. The main problem was that there was no unit test for
the validity of the generated left boundary.
* Handle incomplete left boundary proofs early
why:
Attempt to do it later leads to overly complex code in order to prevent
looping when the same peer repeats to send the same incomplete proof.
Contrary, gaps in the leaf sequence can be handled gracefully with
registering the gaps
* Implement a manual pivot setup mechanism for snap sync
why:
For a test scenario it is convenient to set the pivot to something
lower than the beacon header from the consensus layer. This does not
need rely on any RPC mechanism.
details:
The file containing the pivot specs is specified by the
`--sync-ctrl-file` option. It is regularly parsed for updates.
* Fix calculation error
why:
Prevent from calculating negative square root
why:
The peer manager runs concurrently to the discovery scheme. So the p2p
peer observer will also present `peer` non-static entries. Previously,
this peer manager throw an assert defect when this happened.
* Renaming androgynous sub-object names according to where they belong
why:
These objects are not explicitly dealt with. They give meaning to
some generic wrapper objects. Naming them after their origin may
help troubleshooting.
* Redefine proof nodes list data type for `snap/1` wire protocol
why:
The current specification suffered from the fact that the basic data
type for a proof node is an RLP encoded hexary node. This slightly
confused the encoding/decoding magic.
details:
This is the second attempt, now wrapping the `seq[Blob]` into a
wrapper object of `seq[SnapProof]` for a distinct alias sequence.
In the previous attempt, `SnapProof` was a wrapper object holding the
`Blob` with magic applied to the `seq[]`. This needed the `append`
mixin to strip the outer wrapper that was applied to the `Blob` already
when it was passed as argument.
* Fix some prototype inconsistency
why:
For easy reading, `getAccountRange()` handler return code should
resemble the `accoundRange()` anruments prototype.
* Enable `snap/1` accounts range service
* Allow to change the garbage collector to `boehm` as a Makefile option.
why:
There is still an unsolved memory corruption problem that might be
related to the standard `gc`. It seemingly goes away if the `gc` is
changed to `boehm`.
Specifying another `gc` on the make level simplifies debugging and
development.
* Code cosmetics
details:
* updated exception annotations
* extracted `worker_desc.nim` from `full/worker.nim`
* etc.
* Implement option to state a sync modifier file
why:
This allows to specify extra sync type specific options which might
change over time. This file is regularly checked for updates.
* Implement a threshold when to suspend full syncing
why:
For a test scenario, a full sync beep may work as a local snap server.
There is no need to download the full block chain.
details:
The file containing the pivot specs is specified by the
`--sync-ctrl-file` option. It is regularly parsed for updates.
* Fix locked database file annoyance with unit tests on Windows
why:
Need to clean up old files first from previous session as files remain
locked despite closing of database.
* Fix initialisation order
detail:
Apparently this has no real effect as the ticker is only initialised
here but started later.
This possible bug has been in all for a while and was running with the
previous compiler and libraries.
* Better naming of data fields for sync descriptors
details:
* BuddyRef[S,W]: buddy.data -> buddy.only
* CtxRef[S]: ctx.data -> ctx.pool
* Refactoring in preparation for time-based forking.
* Timestamp-based hard-fork-transition.
* Workaround SideEffect issue / compiler bug for both failing locations in Portal history code
---------
Co-authored-by: kdeme <kim.demey@gmail.com>
* Redefine `seq[Blob]` => `seq[SnapProof]` for `snap/1` protocol
why:
Proof nodes are traded as `Blob` type items rather than Nim objects. So
the RLP transcoder must not extra wrap proofs which are of type
seq[Blob]. Without custom encoding one would produce a
`list(blob(item1), blob(item2) ..)` instead of `list(item1, item2 ..)`.
* Limit leaf extractor by RLP size rather than number of items
why:
To be used serving `snap/1` requests, the result of function
`hexaryRangeLeafsProof()` is limited by the maximal space
needed to serialise the result which will be part of the
`snap/1` repsonse.
* Let the range extractor `hexaryRangeLeafsProof()` return RLP list sizes
why:
When collecting accounts, the size oft the accounts list when encoded
as RLP is continually updated. So the summed up value is available
anyway. For the proof nodes list, there are not many (~ 10) so summing
up is not expensive here.
* Removed some Windows specific unit test annoyances
details:
+ Short put()/get() cycles on persistent database have a race condition
with vendor rocksdb. On a specific (and slow) qemu/win7 a 50ms `sleep()`
in between will mostly do the job (i.e. unless heavy CPU load.) This
issue was not observed on github/ci.
+ Removed annoyances when qemu/Win7 keeps the rocksdb database files
locked even after closing the db. The problem is solved by strictly
using fresh names for each test. No assumption made to be able to
properly clean up. This issue was not observed on github/ci.
* Silence some compiler gossip -- part 7, misc/non(sync or graphql)
details:
Adding some missing exception annotation
* Silence some compiler gossip -- part 6, evm
details:
Adding some missing exception annotation
* Update evmc cases
why:
were previously missing
* Increase Windows stack needed to run EVMC unit tests
why:
After annotating functions to trace exceptions some unit tests started
to fail on Windows without clear error report.
EVMC works recursively and now there seems to be a stack problem
reported by the nim compiler. Increasing the NIM stack ass sugessted by
NIM (using -d:nimCallDepthLimit=###) had some effect but no clear
solution.
Note that this patch set unrolls some NIM compiler settings
* Unit tests to verify calculations based on hard coded constants
why:
Sizes of RLP encoded objects are available at run time only.
* Changed argument order for `hexaryRangeLeafsProof()` prototype
why:
Better to read as a stand-alone function (arguments were optimised
for functional pipelines)
* Run sub-range proof tests for extracted ranges
* Cosmetics
details:
+ Update doc generator
+ Fix key type representation in `hexary_desc` for debugging
+ Redefine `isImportOk()` as template for better `check()` line reporting
* Fix fringe condition when interpolating Merkle-Patricia tries
details:
Small change with profound effect fixing some pathological condition
that haunted the unit test set on large data sers. There is still one
condition left which might well be due to an incomplete data set.
* Unit test proof nodes for node range extractor
* Unit tests to run on full extraction set
why:
Left over from troubleshooting, range length was only 5
* Reduce Nim 1.6 compiler warnings/hints for Fluffy and Nimbus proxy
Mostly raises Defect removals, TaintedString removal and some
unnecessary imports.
Also updating the copyright years alongside.
* Further reduce Nim 1.6 compiler warnings/hints for Nimbus
* Silence some compiler gossip -- part 5, common
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Silence some compiler gossip -- part 6, db, rpc, utils
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Silence some compiler gossip -- part 7, randomly collected source files
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Silence some compiler gossip -- part 8, assorted tests
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Clique update
why:
More impossible exceptions (undoes temporary fix from previous PR)
* Silence some compiler gossip -- part 1, tx_pool
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Silence some compiler gossip -- part 2, clique
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Silence some compiler gossip -- part 3, misc core
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Silence some compiler gossip -- part 4, sync
details:
Mostly removing redundant imports and `Defect` tracer after switch
to nim 1.6
* Clique update
why:
Missing exception annotation
* Update comments and test noise
* Fix boundary proofs
why:
Where neither used in production, nor unit tested. For production, other
methods apply to test leaf range integrity directly based of the proof
nodes.
* Added `hexary_range()`: interval range + proof extractor
details:
+ Will be used for `snap/1` protocol handler
+ Unit tests added (also for testing left boundary proof)
todo:
Need to verify completeness of proof nodes
* Reduce some nim 1.6 compiler noise
* Stop unit test gossip for ci tests
* Updated to the latest nim-eth, nim-rocksdb, nim-web3
* Bump nimbus-eth2 module and fix related issues
Temporarily disabling Portal beacon light client network as it is
a lot of copy pasted code that did not yet take into account
forks. This will require a bigger rework and was not yet tested
in an actual network anyhow.
* More nimbus fixes after module bumps
---------
Co-authored-by: Adam Spitz <adamspitz@status.im>
Co-authored-by: jangko <jangko128@gmail.com>
Two unresolved items currently:
- Three tests that are temporarily disabled as they fail in the
macro_assembler code, which seems to be due to an ambigious
identifier Stop (Ops and chronos ServerCommand enum).
- i386 CI disabled as it fails at Nim compilation already. Failed
tests where already ignored for this target.
* Extracted RocksDB timing unit tests into separate file
why:
make space for more in main module :)
* Extracted `inspectionRunner()` unit tests into separate file
why:
make space for more in main module :)
* Extracted `storagesRunner()` unit tests into separate file
why:
make space for more in main module :)
* Extracted pivot checkpoint store/retrieval unit tests into separate file
why:
make space for more in main module :)
* Extract helper functions into separate source file
* Extracted account import unit tests into separate file
why:
make space for more in main module :)
* Rename `test_decompose()` => `test_NodeRangeDecompose()`
why:
There will be more functions with `test_NodeRange` prefix.
* Cosmetics, update logger `topics`
* Clean up sync/start methods in nimbus
why:
* The `protocols` list selects served (as opposed to sync) protocols only.
* The `SyncMode.Default` object is allocated with the other possible sync
mode objects.
* Add snap service stub to `nimbus`
* Provide full set of snap response handler stubs
* Bicarb for the latest CI hiccup
why:
Might be a change in the CI engine for MacOS.
* Simplify pivot update
why:
No need to fetch the pivot header from the network when it can be
be made available in the ivot cache
also:
Keep `txPool` update disabled while syncing
* Cosmetics, tune down some logging noise
* Support `snap/1` without `eth/6?`
why:
Eth is not needed here.
* Snap is an (optional) extension of `eth`
so:
It it must be supported somehow. Nevertheless it will be currently
unused in the snap syncer.
* Register external beacon stream header
why:
This will be used to sync the peers against.
* Update total coverage book-keeping for 100% roll-over
details:
Provide commonly available/used function
* Replace best pivot by beacon stream tracker
details:
Beacon stream header cache will be updated by external chain monitor via
RPC. This cached header will then be used to sync the pivot.
why:
Some peers reconnect recurrently after dialogue was found useless. The
reconnect loop protection was in place already, albeit insufficient.
also:
Some updates to allow setting previously constant parameters at run
time.
* Simplify accounts healing threshold management
why:
Was over-engineered.
details:
Previously, healing was based on recursive hexary trie perusal.
Due to "cheap" envelope decomposition of a range complement for the
hexary trie, the cost of running extra laps have become time-affordable
again and a simple trigger mechanism for healing will do.
* Control number of dangling result nodes in `hexaryInspectTrie()`
also:
+ Returns number of visited nodes available for logging so the maximum
number of nodes can be tuned accordingly.
+ Some code and docu update
* Update names of constants
why:
Declutter, more systematic naming
* Re-implemented `worker_desc.merge()` for storage slots
why:
Provided as proper queue management in `storage_queue_helper`.
details:
+ Several append modes (replaces `merge()`)
+ Added third queue to record entries currently fetched by a worker. So
another parallel running worker can safe the complete set of storage
slots in as checkpoint. This was previously lost.
* Refactor healing
why:
Simplify and remove deep hexary trie perusal for finding completeness.
Due to "cheap" envelope decomposition of a range complement for the
hexary trie, the cost of running extra laps have become time-affordable
again and a simple trigger mechanism for healing will do.
* Docu update
* Run a storage job only once in download loop
why:
Download failure or rejection (i.e. missing data) lead to repeated
fetch requests until peer disconnects, otherwise.
* Relocated mothballing (i.e. swap-in preparation) logic
details:
Mothballing was previously tested & started after downloading
account ranges in `range_fetch_accounts`.
Whenever current download or healing stops because of a pivot change,
swap-in preparation is needed (otherwise some storage slots may get
lost when swap-in takes place.)
Also, `execSnapSyncAction()` has been moved back to `pivot_helper`.
* Reorganised source file directories
details:
Grouped pivot focused modules into `pivot` directory
* Renamed `checkNodes`, `sickSubTries` as `nodes.check`, `nodes.missing`
why:
Both lists are typically used together as pair. Renaming `sickSubTries`
reflects moving away from a healing centric view towards a swap-in
attitude.
* Multi times coverage recording
details:
Per pivot account ranges are accumulated into coverage range set. This
set fill eventually contain a singe range of account hashes [0..2^256]
which amounts to 100% capacity.
A counter has been added that is incremented whenever max capacity is
reached. The accumulated range is then reset to empty.
The effect of this setting is that the coverage can be evenly duplicated.
So 200% would not accumulate on a particular region.
* Update range length comparisons (mod 2^256)
why:
A range interval can have sizes 1..2^256 as it cannot be empty by
definition. The number of points in a range intervals set can have
0..2^256 points. As the scalar range is a residue class modulo 2^256,
the residue class 0 means length 2^256 for a range interval, but can
be 0 or 2^256 for the number of points in a range intervals set.
* Generalised `hexaryEnvelopeDecompose()`
details:
Compile the complement of the union of some (processed) intervals and
express this complement as a list of envelopes of sub-tries.
This facility is directly applicable to swap-in book-keeping.
* Re-factor `swapIn()`
why:
Good idea but baloney implementation. The main algorithm is based on
the generalised version of `hexaryEnvelopeDecompose()` which has been
derived from this implementation.
* Refactor `healAccounts()` using `hexaryEnvelopeDecompose()` as main driver
why:
Previously, the hexary trie was searched recursively for dangling nodes
which has a poor worst case performance already when the trie is
reasonably populated.
The function `hexaryEnvelopeDecompose()` is a magnitude faster because
it does not peruse existing sub-tries in order to find missing nodes
although result is not fully compatible with the previous function.
So recursive search is used in a limited mode only when the decomposer
will not deliver a useful result.
* Logging & maintenance fixes
details:
Preparation for abandoning buddy-global healing variables `node`,
`resumeCtx`, and `lockTriePerusal`. These variable are trie-perusal
centric which will be run on the back burner in favour of
`hexaryEnvelopeDecompose()` which is used for accounts healing already.
* Additional logging for scheduler
* Fix duplicate occurrence of `bestNumber`
why:
Happened when the `block_queue` module was separated out of
the `worker` module. Somehow testing was insufficient or skipped,
at all.
* Update `runPool()` mixin for scheduler
details:
Could be simplified
* Dynamically adapt pivot header negotiation mode
details:
After accepting one peer and some timeout, do not search for more
peers for start syncing but rather continue in relaxed mode with a
single peer.
The `BlockHeader` structure in `nim-eth` was updated with support for
EIP-4844 (danksharding). To enable the `nim-eth` bump, the ingress of
`BlockHeader` structures has been hardened to reject headers that have
the new `excessDataGas` field until proper EIP4844 support exists.
https://github.com/status-im/nim-eth/pull/570
* Provide index to reconstruct missing storage slots
why;
Pivots will be changed anymore once they are officially archived. The
account of the archived pivots are ready to be swapped into the active
pivot. This leaves open how to treat storage slots not fetched yet.
Solution: when mothballing, an `account->storage-root` index is
compiled that can be used when swapping in accounts.
* Implement swap-in from earlier pivots
details;
When most accounts are covered by the current and previous pivot
sessions, swapping inthe accounts and storage slots (i.e. registering
account ranges done) from earlier pivots takes place if there is a
common sub-trie.
* Throttle pivot change when healing state has bean reached
why:
There is a hope to complete the current pivot, so pivot update can be
throttled. This is achieved by setting another minimum block number
distance for the pivot headers. This feature is still experimental
* Miscellaneous tweaks & fixes
details:
+ Catch `TransportError` exception in `legacy.nim` module
+ Fix self-calling wrapper `hexaryEnvelopeTouchedBy()`
* Update documentation, logging etc.
* Changed `checkNode` batch list `seq[Blob]` => `seq[NodeSpecs]`
why:
The `NodeSpecs` type as used here is a tuple `(partial-path,node-key)`.
When `checkNode` partial paths are collected, also the node key is
available so it should be registered and not repeatedly recovered from
the database.
* Add optional begin/end trace statement in snap scheduler
why:
Allows to trace invoked entity and scheduler state variables
* Rename and update dismantle => hexaryEnvelopeDecompose()
why:
+ As for naming, a positive connotation is prefered
+ The unit tests were really insufficient
+ The function result was wrong on a few boundry conditions
detail:
+ Extracted the function from `hexary_paths.nim` and re-implemented
it together with other envelope functions => `hexary_envelope.nim`
+ Re-wrote docu for `hexaryEnvelopeDecompose()`
* Relaxed right condition for `hexaryEnvelopeDecompose()` range argument
why;
Previously, the right point of the argument interval had to be a path
to an allocated leaf node. While this is typically a given for accounts,
it is easier to require an arbitrary range of paths (or keys) with
the requirement of a `boundary proof` for left and right (i.e. enough
nodes in the database to find the end points.)
also:
Bug fixes for related functions (typos, missing conditions etc.)
* Add missing unit tests include file
* Add quick hexary trie inspector, called `dismantle()`
why:
+ Full hexary trie perusal is slow if running down leaf nodes
+ For known range of leaf nodes, work out the UInt126-complement of
partial sub-trie paths (for existing nodes). The result should cover
no (or only a few) sub-tries with leaf nodes.
* Extract common healing methods => `sub_tries_helper.nim`
details:
Also apply quick hexary trie inspection tool `dismantle()`
Replace `inspectAccountsTrie()` wrapper by `hexaryInspectTrie()`
* Re-arrange task dispatching in main peer worker
* Refactor accounts and storage slots downloaders
* Rename `HexaryDbError` => `HexaryError`
The `BlockHeader` structure in `nim-eth` was updated with support for
EIP-4895 (withdrawals). To enable the `nim-eth` bump, the ingress of
`BlockHeader` structures has been hardened to reject headers that have
the new `withdrawalsRoot` field until proper withdrawals support exists.
https://github.com/status-im/nim-eth/pull/562
* Stop negotiating pivot if peer repeatedly replies w/usesless answers
why:
There is some fringe condition where a peer replies with legit but
useless empty headers repetely. This goes on until somebody stops.
We stop now.
* Rename `missingNodes` => `sickSubTries`
why:
These (probably missing) nodes represent in reality fully or partially
missing sub-tries. The top nodes may even exist, e.g. as a shallow
sub-trie.
also:
Keep track of account healing on/of by bool variable `accountsHealing`
controlled in `pivot_helper.execSnapSyncAction()`
* Add `nimbus` option argument `snapCtx` for starting snap recovery (if any)
also:
+ Trigger the recovery (or similar) process from inside the global peer
worker initialisation `worker.setup()` and not by the `snap.start()`
function.
+ Have `runPool()` returned a `bool` code to indicate early stop to
scheduler.
* Can import partial snap sync checkpoint at start
details:
+ Modified what is stored with the checkpoint in `snapdb_pivot.nim`
+ Will be loaded within `runDaemon()` if activated
* Forgot to import total coverage range
why:
Only the top (or latest) pivot needs coverage but the total coverage
is the list of all ranges for all pivots -- simply forgotten.
* Piecemeal trie inspection
details:
Trie inspection will stop after maximum number of nodes visited.
The inspection can be resumed using the returned state from the
last session.
why:
This feature allows for task switch between `piecemeal` sessions.
* Extract pivot helper code from `worker.nim` => `pivot_helper.nim`
* Accounts import will now return dangling paths from `proof` nodes
why:
With proper bookkeeping, this can be used to start healing without
analysing the the probably full trie.
* Update `unprocessed` account range handling
why:
More generally, the API of a pairs of unprocessed intervals favours
the first set and not before that is exhausted the second set comes
into play.
This was unfortunately implemented which caused the ranges to be
unnecessarily fractioned. Now the number of range interval typically
remains in the lower single digit numbers.
* Save sync state after end of downloading some accounts
details:
restore/resume to be implemented later
* Add `stop()` methods to shutdown to shutdown procedure
why:
Nasty behaviour when hitting Ctrl-C, otherwise
* Add background service to sync scheduler
why:
The background service will be used for sync data import and recovery
after restart.
It is controlled by the sync scheduler for an easy turn/on off API.
also:
Simplified snap ticker time calc.
* Fix typo
why:
Single mode here means there is only such (single mode) instance
activated but multi mode instances for other peers are allowed.
Erroneously, multi mode instances were held back waiting while some
single mode instance was running which reduced the number of parallel
download peers.
* Update log ticker, using time interval rather than ticker count
why:
Counting and logging ticker occurrences is inherently imprecise. So
time intervals are used.
* Use separate storage tables for snap sync data
* Left boundary proof update
why:
Was not properly implemented, yet.
* Capture pivot in peer worker (aka buddy) tasks
why:
The pivot environment is linked to the `buddy` descriptor. While
there is a task switch, the pivot may change. So it is passed on as
function argument `env` rather than retrieved from the buddy at
the start of a sub-function.
* Split queues `fetchStorage` into `fetchStorageFull` and `fetchStoragePart`
* Remove obsolete account range returned from `GetAccountRange` message
why:
Handler returned the wrong right value of the range. This range was
for convenience, only.
* Prioritise storage slots if the queue becomes large
why:
Currently, accounts processing is prioritised up until all accounts
are downloaded. The new prioritisation has two thresholds for
+ start processing storage slots with a new worker
+ stop account processing and switch to storage processing
also:
Provide api for `SnapTodoRanges` pair of range sets in `worker_desc.nim`
* Generalise left boundary proof for accounts or storage slots.
why:
Detailed explanation how this works is documented with
`snapdb_accounts.importAccounts()`.
Instead of enforcing a left boundary proof (which is still the default),
the importer functions return a list of `holes` (aka node paths) found in
the argument ranges of leaf nodes. This in turn is used by the book
keeping software for data download.
* Forgot to pass on variable in function wrapper
also:
+ Start healing not before 99% accounts covered (previously 95%)
+ Logging updated/prettified
* Added basic async capabilities for vm2.
This is a whole new Git branch, not the same one as last time
(https://github.com/status-im/nimbus-eth1/pull/1250) - there wasn't
much worth salvaging. Main differences:
I didn't do the "each opcode has to specify an async handler" junk
that I put in last time. Instead, in oph_memory.nim you can see
sloadOp calling asyncChainTo and passing in an async operation.
That async operation is then run by the execCallOrCreate (or
asyncExecCallOrCreate) code in interpreter_dispatch.nim.
In the test code, the (previously existing) macro called "assembler"
now allows you to add a section called "initialStorage", specifying
fake data to be used by the EVM computation run by that test. (In
the long run we'll obviously want to write tests that for-real use
the JSON-RPC API to asynchronously fetch data; for now, this was
just an expedient way to write a basic unit test that exercises the
async-EVM code pathway.)
There's also a new macro called "concurrentAssemblers" that allows
you to write a test that runs multiple assemblers concurrently (and
then waits for them all to finish). There's one example test using
this, in test_op_memory_lazy.nim, though you can't actually see it
doing so unless you uncomment some echo statements in
async_operations.nim (in which case you can see the two concurrently
running EVM computations each printing out what they're doing, and
you'll see that they interleave).
A question: is it possible to make EVMC work asynchronously? (For
now, this code compiles and "make test" passes even if ENABLE_EVMC
is turned on, but it doesn't actually work asynchronously, it just
falls back on doing the usual synchronous EVMC thing. See
FIXME-asyncAndEvmc.)
* Moved the AsyncOperationFactory to the BaseVMState object.
* Made the AsyncOperationFactory into a table of fn pointers.
Also ditched the plain-data Vm2AsyncOperation type; it wasn't
really serving much purpose. Instead, the pendingAsyncOperation
field directly contains the Future.
* Removed the hasStorage idea.
It's not the right solution to the "how do we know whether we
still need to fetch the storage value or not?" problem. I
haven't implemented the right solution yet, but at least
we're better off not putting in a wrong one.
* Added/modified/removed some comments.
(Based on feedback on the PR.)
* Removed the waitFor from execCallOrCreate.
There was some back-and-forth in the PR regarding whether nested
waitFor calls are acceptable:
https://github.com/status-im/nimbus-eth1/pull/1260#discussion_r998587449
The eventual decision was to just change the waitFor to a doAssert
(since we probably won't want this extra functionality when running
synchronously anyway) to make sure that the Future is already
finished.
* Update docu and logging
* Extracted and updated constants from `worker_desc` into separate file
* Update and re-calibrate communication error handling
* Allow simplified pivot negotiation
why:
This feature allows to turn off pivot negotiation so that peers agree
on a a pivot header.
For snap sync with fast changing pivots this only throttles the sync
process. The finally downloaded DB snapshot is typically a merged
version of different pivot states augmented by a healing process.
* Re-model worker queues for accounts download & healing
why:
Currently there is only one data fetch per download or healing task.
This task is then repeated by the scheduler after a short time. In
many cases, this short time seems enough for some peers to decide to
terminate connection.
* Update main task batch `runMulti()`
details:
The function `runMulti()` is activated in quasi-parallel mode by the
scheduler. This function calls the download, healing and fast-sync
functions.
While in debug mode, after each set of jobs run by this function the
database is analysed (by the `snapdb_check` module) and the result
printed.
* Update logging
* Fix node hash associated with partial path for missing nodes
why:
Healing uses the partial paths for fetching nodes from the network. The
node hash (or key) is used to verify the node data retrieved.
The trie inspector function returned the parent hash instead of the node hash
with the partial path when a missing node was detected. So all nodes
for healing were rejected.
* Must not modify sequence while looping over it
* Re-arrange fetching storage slots in batch module
why;
Previously, fetching partial slot ranges first has a chance of
terminating the worker peer 9due to network error) while there were
many inheritable storage slots on the queue.
Now, inheritance is checked first, then full slot ranges and finally
partial ranges.
* Update logging
* Bundled node information for healing into single object `NodeSpecs`
why:
Previously, partial paths and node keys were kept in separate variables.
This approach was error prone due to copying/reassembling function
argument objects.
As all partial paths, keys, and node data types are more or less handled
as `Blob`s over the network (using Eth/6x, or Snap/1) it makes sense to
hold these `Blob`s as named field in a single object (even if not all
fields are active for the current purpose.)
* For good housekeeping, using `NodeKey` type only for account keys
why:
previously, a mixture of `NodeKey` and `Hash256` was used. Now, only
state or storage root keys use the `Hash256` type.
* Always accept latest pivot (and not a slightly older one)
why;
For testing it was tried to use a slightly older pivot state root than
available. Some anecdotal tests seemed to suggest an advantage so that
more peers are willing to serve on that older pivot. But this could not
be confirmed in subsequent tests (still anecdotal, though.)
As a side note, the distance of the latest pivot to its predecessor is
at least 128 (or whatever the constant `minPivotBlockDistance` is
assigned to.)
* Reshuffle name components for some file and function names
why:
Clarifies purpose:
"storages" becomes: "storage slots"
"store" becomes: "range fetch"
* Stash away currently unused modules in sub-folder named "notused"
* Multiple storage batches at a time
why:
Previously only some small portion was processed at a time so the peer
might have gone when the process was resumed at a later time
* Renamed some field of snap/1 protocol response object
why:
Documented as `slots` is in reality a per-account list of slot lists. So
the new name `slotLists` better reflects the nature of the beast.
* Some minor healing re-arrangements for storage slot tries
why;
Resolving all complete inherited slots tries first in sync mode keeps
the worker queues smaller which improves logging.
* Prettify logging, comments update etc.
* Re-model persistent database access
why:
Storage slots healing just run on the wrong sub-trie (i.e. the wrong
key mapping). So get/put and bulk functions now use the definitions
in `snapdb_desc` (earlier there were some shortcuts for `get()`.)
* Fixes: missing return code, typo, redundant imports etc.
* Remove obsolete debugging directives from `worker_desc` module
* Correct failing unit tests for storage slots trie inspection
why:
Some pathological cases for the extended tests do not produce any
hexary trie data. This is rightly detected by the trie inspection
and the result checks needed to adjusted.
* For snap sync, publish `EthWireRef` in sync descriptor
why:
currently used for noise control
* Detect and reuse existing storage slots
* Provide healing module for storage slots
* Update statistic ticker (adding range factor for unprocessed storage)
* Complete mere function for work item ranges
why:
Merging interval into existing partial item was missing
* Show av storage queue lengths in ticker
detail;
Previous attempt shows average completeness which did not tell much
* Correct the meaning of the storage counter (per pivot)
detail:
Is the # accounts that have a storage saved
* Rename `LeafRange` => `NodeTagRange`
* Replacing storage slot partition point by interval
why:
The partition point only allows to describe slots `[point,high(Uint256)]`
for fetching interval slot ranges. This has been generalised for any
interval.
* Replacing `SnapAccountRanges` by `SnapTrieRangeBatch`
why:
Generalised healing status for accounts, and later for storage slots.
* Improve accounts healing loop
* Split `snap_db` into accounts and storage modules
why:
It is cleaner to have separate session descriptors for accounts and
storage slots (based on a common base descriptor.)
Also, persistent storage handling might be changed in future which
requires the storage slot implementation disentangled from the accounts
handling.
* Re-model worker queues for storage slots
why:
There is a dynamic list of storage sub-tries, each one has to be
treated similar to the accounts database. This applied to slot
interval downloads as well as to healing
* Compress some return value report lists for snapdb methods
why:
No need to report all handling details for work items that are filteres
out and discarded, anyway.
* Remove inner loop frame from healing function
why:
The healing function runs as a loop body already.
* Split fetch accounts into sub-modules
details:
There will be separated modules for accounts snapshot, storage snapshot,
and healing for either.
* Allow to rebase pivot before negotiated header
why:
Peers seem to have not too many snapshots available. By setting back the
pivot block header slightly, the chances might be higher to find more
peers to serve this pivot. Experiment on mainnet showed that setting back
too much (tested with 1024), the chances to find matching snapshot peers
seem to decrease.
* Add accounts healing
* Update variable/field naming in `worker_desc` for readability
* Handle leaf nodes in accounts healing
why:
There is no need to fetch accounts when they had been added by the
healing process. On the flip side, these accounts must be checked for
storage data and the batch queue updated, accordingly.
* Reorganising accounts hash ranges batch queue
why:
The aim is to formally cover as many accounts as possible for different
pivot state root environments. Formerly, this was tried by starting the
accounts batch queue at a random value for each pivot (and wrapping
around.)
Now, each pivot environment starts with an interval set mutually
disjunct from any interval set retrieved with other pivot state roots.
also:
Stop fishing for more pivots in `worker` if 100% download is reached
* Reorganise/update accounts healing
why:
Error handling was wrong and the (math. complexity of) whole process
could be better managed.
details:
Much of the algorithm is now documented at the top of the file
`heal_accounts.nim`
* Miscellaneous updates TBC
* Disentangled pivot2 module from snap
why:
Wrote as template on top of sync so it can be shared by fast and snap
sync.
* Renamed and relocated pivot sources
* Integrated `best_pivot` module into full and snap sync
why:
Full sync used an older version of `best_pivot`
* isolating download module from full sync
why;
might be shared with snap sync at a later stage
* Added inspect module
why:
Find dangling references for trie healing support.
details:
+ This patch set provides only the inspect module and some unit tests.
+ There are also extensive unit tests which need bulk data from the
`nimbus-eth1-blob` module.
* Alternative pivot finder
why:
Attempt to be faster on start up. Also tying to decouple pivot finder
somehow by providing different mechanisms (this one runs in `single`
mode.)
* Use inspect module for healing
details:
+ After some progress with account and storage data, the inspect facility
is used to find dangling links in the database to be filled nose-wise.
+ This is a crude attempt to cobble together functional elements. The
set up needs to be honed.
* fix scheduler to avoid starting dead peers
why:
Some peers drop out while in `sleepAsync()`. So extra `if` clauses
make sure that this event is detected early.
* Bug fixes causing crashes
details:
+ prettify.toPC():
int/intToStr() numeric range over/underflow
+ hexary_inspect.hexaryInspectPath():
take care of half initialised step with branch but missing index into
branch array
* improve handling of dropped peers in alternaive pivot finder
why:
Strange things may happen while querying data from the network.
Additional checks make sure that the state of other peers is updated
immediately.
* Update trace messages
* reorganise snap fetch & store schedule
* Re-implemented `hexaryFollow()` in a more general fashion
details:
+ New name for re-implemented `hexaryFollow()` is `hexaryPath()`
+ Renamed `rTreeFollow()` as `hexaryPath()`
why:
Returning similarly organised structures, the results of the
`hexaryPath()` functions become comparable when running over
the persistent and the in-memory databases.
* Added traversal functionality for persistent ChainDB
* Using `Account` values as re-packed Blob
* Repack samples as compressed data files
* Produce test data
details:
+ Can force pivot state root switch after minimal coverage.
+ For emulating certain network behaviour, downloading accounts stops for
a particular pivot state root if 30% (some static number) coverage is
reached. Following accounts are downloaded for a later pivot state root.
* Bump nim-stew
why:
Need fixed interval set
* Keep track of accumulated account ranges over all state roots
* Added comments and explanations to unit tests
* typo
* Extracted functionality into sub-modules for maintainability
* Setting SST bulk load as default in `accounts_db`
details:
+ currently, the same data are stored via rocksdb if available, and
the same via embedded `storage_type` with (non-standard) prefix 200
for time comparisons
+ fallback to normal `put()` unless rocksdb is accessible
* Provided common scheduler API, applied to `full` sync
* Use hexary trie as storage for proofs_db records
also:
+ Store metadata with account for keeping track of account state
+ add iterator over accounts
* Common scheduler API applied to `snap` sync
* Prepare for accounts bulk import
details:
+ Added some ad-hoc checks for proving accounts data received from the
snap/1 (will be replaced by proper database version when ready)
+ Added code that dumps some of the received snap/1 data into a file
(turned of by default, see `worker_desc.nim`)
* Error return in `persistBlocks()` on initial `VmState` roblem
why:
previously threw an exception
* Updated sync mode option
why:
using enum rather than bool => space for more
* Added sync mode `full`, re-factued legacy sync
also:
rebased
* Fix typo (crashes `pesistBlocks()` otherwise)
also:
rebase to master
* Reduce log ticker noise by suppressing duplicate messages
* Clarify staged queue overflow handling
why:
backtrack/re-org mode in `stageItem()` should be detected by both,
the global indicator or the work item where it might have moved into.
also:
rebased
* Added sepolia specs
* temporarily avoid latest `master` branch from `nim-eth1`
why:
Currently does not cleanly compile after the `bearssl` split api update.
* Relocated `IntervalSets` to nim-stew repo
* Accumulate accounts on temporary kv-DB
why:
Explore the data as returned from snap/1. Will be converted to a
`eth/db` next.
details:
Verify and accumulate per/state-root accounts downloaded via snap.
also:
Some unit tests
* Replace `Table` by `TrieDatabaseRef` for accounts accumulator
* update ticker statistics
details:
mean/variance based counter update
* allow persistent db for proved accounts
* rebase, and globally activate unit test
* fix statistics
* Using `IntervalSet` type data for `LeafRange`
* Updated log ticker
* Update to `eth67`
details:
Disabled by default, use `ENABLE_LEGACY_ETH66=0` to enable
No support for `Get/NodeData` dialogue via eth, anymore
* Dissolved fetch/common.nim
details;
the log/ticker part becomes ticker.nim
the interval range management is merged into fetch.nim
* Updated account scheduler
why:
The previous scheduler fetched each account once (for different state
roots.) The updated scheduler re-calibrates after a change of the state
root and potentially (until told otherwise) fetches all possible
accounts.
* Fix `high(P)` fringe cases in `IntervalSet` handling
why:
The `high(P)` value for a point type `P` cannot be represented with
half open intervals `[a,b)` for a,b points of `P`. So this single value
needs extra treatment which was slightly wrong.
* Updated docu/comments
also:
rebased
* Update scheduler
details:
Change the `pivot` management when creating new accounts lists. It is
strictly increasing (and wrapping around) depending on last updated
accounts list.
* Fix/recover download flag
why:
The fetch indicator used to control the data download somehow got
lost during re-org.
* Updated chronicles/logger topics
* Reorganised run state flags
why:
The original code used a pair of boolean flags `(stopped,stopThisState)`
which was translated to three states running, stoppedPending, and
stopped. It is currently not clear whether collapsing some states was
correct. So the original logic has been re-stored, albeit wrapped into
directives like `isStopped()` etc.
also:
Moving some function bodies in `worker.nim`
* Moved `reply_data.nim` and `validate_trienode.nim` to sub-directory `fetch_trie`
why:
Only used in `fetch_trie.nim`.
* Move `fetch_*` file and directory objects to `fetch` subdirectory
why:
Only used in `fetch.nim`
* Added start/stop and/or setup/release methods for all sub-modules
why:
good housekeeping
also:
updated getters/setters for ctrl states
updated trace messages
if we are not voting, coinbase should be filled with zero
because other subsystem e.g txpool can produce block header
with non zero coinbase. if that coinbase is one of the signer
and the nonce is zero, that signer will be vote out from
signer list
* Disentangle `collect` module from `reply_data`
why:
Now the module visible from `collect` for fetching data is `peer/fetch`
only.
* Merge `SnapPeerHunt` into `collect`
why:
This part needs to be known by `collect`, only
* rename collect => worker
* Dissolve `sync_fetch_xdesc` module into `common`
why:
Descriptor is only used in `common` and `fetch_trie`
* rename `snap/peer` directory => `snap/worker`
* rename `SnapSync` -> `Worker`, `SnapPeer` -> `WorkerBuddy`
* moved `snap/base_desc.nim` -> `snap/worker/worker_desc.nim`
* Unified opaque object ref naming in `worker_desc.nim`
details:
indicated my inheriting module (exactly one, always)
* Reorg SnapPeerBase descriptor, notably start/stop flags
details:
Instead of using three boolean flags startedFetch, stopped, and
stopThisState a single enum type is used with values SyncRunningOk,
SyncStopRequest, and SyncStopped.
* Restricting snap to eth66 and later
why:
Id-tracked request/response wire protocol can handle overlapped
responses when requests are sent in row.
* Align function names with source code file names
why:
Easier to reconcile when following the implemented logic.
* Update trace logging (want file locations)
why:
The macros previously used hid the relevant file location (when
`chroniclesLineNumbers` turned on.) It rather printed the file
location of the template that was wrapping `trace`.
* Use KeyedQueue table instead of sequence
why:
Quick access, easy configuration as LRU or FIFO with max entries
(currently LRU.)
* Dissolve `SnapPeerEx` object extension into `SnapPeer`
why;
It is logically cleaner and more obvious not to inherit from
`SnapPeerBase` but to specify opaque field object references of the
merged `SnapPeer` object. These can then be locally inherited.
* Dissolve `SnapSyncEx` object extension into `SnapSync`
why;
It is logically cleaner and more obvious not to inherit from
`SnapSyncEx` but to specify opaque field object references of
the `SnapPeer` object. These can then be locally inherited.
Also, in the re-factored code here the interface descriptor
`SnapSyncCtx` inherited `SnapSyncEx` which was sub-optimal (OO
inheritance makes it easier to work with call back functions.)
* new: time_helper, types
* new: path_desc
* new: base_desc
* Re-organised objects inheritance
why:
Previous code used macros to instantiate opaque object references. This
has been re-implemented with OO inheritance based logic.
* Normalised trace macros
* Using distinct types for Hash256 aliases
why:
Better control of the meaning of the hashes, all or the same format
caveat:
The protocol handler DSL used by eth66.nim and snap1.nim uses the
underlying type Hash256 and cannot handle the distinct alias in
rlp and chronicles/log macros. So Hash256 is used directly (does
not change readability as the type is clear by parameter names.)
* Use type name eth and snap (rather than snap1)
* Prettified snap/eth handler trace messages
* Regrouped sync sources
details:
Snap storage related sources are moved to common directory.
Option --new-sync renamed to --snap-sync
also:
Normalised logging for secondary/non-protocol handlers.
* Merge protocol wrapper files => protocol.nim
details:
Merge wrapper sync/protocol_ethxx.nim and sync/protocol_snapxx.nim
into single file snap/protocol.nim
* Comments cosmetics
* Similar start logic for blockchain_sync.nim and sync/snap.nim
* Renamed p2p/blockchain_sync.nim -> sync/fast.nim
why:
Accidentally wrapped into waitFor() directive with reviving jl/sync
branch.
also:
Decorate eth/66 and snap/1 protocol trace messages with protocol
type and version
* Squashed snap-sync-preview patch
why:
Providing end results makes it easier to have an overview.
Collected patch set comments are available as nimbus/sync/ChangeLog.md
in chronological order, oldest first.
* Removed some cruft and obsolete imports, normalised logging
* Update exception tracinig
* Use lazy JSON parser
why:
Uint246 type is not directly supported by the JSON serializer
* Json parser update for stringified UInt256 integers
why:
Now available
* update sub-module branch reference
- remove `waitFor` and store async result in Future[T] for future execution
instead of blocking other services from starting and running.
This also fixes very long delay Ctrl-C issue. Fixes#585
- always create sealing engine instance even though there is no
signer. other services such engine api need it.
if engine api receive forkChoiceUpdated and turn out
reorg happened, sealing engine must update txpool
head.
so in the next iteration when the txpool asked to
produce a fresh block, it will produce it in the
current 'correct' chain and abandon side chain.
* Prepare unit tests for running without tx-pool job queue
why:
Most of the job queue logic can be emulated. This adapts to a few
pathological test cases.
* Replace tx-pool job queue logic with in-place actions
why:
This additional execution layer is not needed, anymore which has been
learned from working with the integration/hive tests.
details:
Execution of add or deletion jobs are executed in-place. Some actions
-- as in smartHead() -- have been combined for facilitating the correct
order actions
* Update production functions, remove txpool legacy stuff
* Enable JWT authentication for websockets
details:
Currently, this is optional and only enabled when the jwtsecret option
is set.
There is a default mechanism to generate a JWT secret if it is not
explicitly stated. This mechanism is currently unused.
* Make JWT authentication compulsory for websockets
* Fix unit test entry point + cosmetics
* Update JSON-RPC link
* Improvements as suggested by Mamy
result.gasPrice = baseFee + min(result.maxPriorityFee, result.maxFee - baseFee)
cannot be simplified into
result.gasPrice = min(result.maxPriorityFee + baseFee, result.maxFee)
the expression is not commutative
why:
Causes havoc in most bit fringe cases.
details:
When setting the head forward, the delta was wrongly registered from
the static "left" end (which limits the loop) rather than the moving
"right" end.
* Fix database sort order for local txs
why:
For convenience, packed txs were stored in the block sorted by
rank->nonce. Using local accounts, the greedy grabber uses the sort
order (local,non-local)->rank->nonce which leads to a wrong calculation
of the txRoot.
* Housekeeping
details:
Replaced a couple of local eip1559TxNormalization() functions by a
single public
* Support for local accounts
why:
Accounts tagged local will be packed with priority over untagged
accounts
* Added functions for queuing txs and simultaneously setting account locality
why:
Might be a popular task, in particular for unconditionally adding txs to
a local (aka prioritised account) via "xp.addLocal(tx,true)"
caveat:
Untested yet
* fix typo
* backup
* No baseFee for pre-London tx in verifier
why:
The packer would wrongly discard valid legacy txs.
* Remove unused import of config to avoid select_backend db import
- Importing nimbus-eth1 config.nim causes import of select_backend
which will default cause an import of kvstore_rocksdb and thus a
require rocksdb. Remove unused one to avoid rocksdb dependency
for Fluffy.
- Remove some whitespace in bridge_client (to make fluffy CI
trigger for sure).
* Use specific cache keys for fluffy CI workflow
* Disable Fluffy CI reproducibility test
* Enable optional chunked RLPx messages
why:
Legacy feature used by Nethermind
details:
Disable with make flag: ENABLE_CHUNKED_RLPX=0
* Rebase & bump nim-eth
* Fix default behaviour
why:
Got lost somehow. Comments do not match GNU make code directives.
* De-noisify some Clique logging
why:
Too annoying when syncing against Goerly
* Replay devnet# and kiln sessions
why:
Compiled as local program, the unit test was used for TDD.
* dist: precompiled binaries and Docker images
The builds are reproducible, the binaries are portable and statically link librocksdb.
This took some patching. Upstream PR: https://github.com/facebook/rocksdb/pull/9752
32-bit ARM is missing as a target because two different GCC versions
fail with an ICE when trying to cross-compile RocksDB. Using Clang
instead is too much trouble for a platform that nobody should be using
anyway.
(Clang doesn't come with its own target headers and libraries, can't be
easily convinced to use the ones from GCC, so it needs an fs image from
a 32-bit ARM distro - at which point I stopped caring).
* CI: disable reproducibility test
* Activate wire protocol eth/66
and:
Disentangle protocol_eth66.nim from import sections
why:
Importing the protocol_eth66 module is not necessary. There is
no need to know too many details of the underlying wire protocol. All
that is needed will be exported by blockchain_sync.nim.
* fixes, and rebase
* Update nimbus/p2p/blockchain_sync.nim
Co-authored-by: Kim De Mey <kim.demey@gmail.com>
* Fixes and rebase
Co-authored-by: Kim De Mey <kim.demey@gmail.com>
why:
Testing against a replay unit test for Devnet4 made it necessary to
adjust the TTD handling. Without updated, importing fails at block #5646
which is the parent of the terminal PoW block. Similar considerations
apply for Devnet5 and Kiln.
- prevRandao/mixDigest = payloadAttrs.prevRandao
- timestamp = payloadAttrs.timestamp
- coinbase = payloadAttrs.suggestedFeeRecipient
- extraData = it can be anything 0-32 bytes, but currently set to empty bytes.
* Rearrange/rename test_kintsugu => test_custom_network
why:
Debug, fix and test more general problems related to running
nimbus on a custom network.
* Update UInt265/Json parser for --custom-network command line option
why:
As found out with the Kintsugi configuration, block number and balance
have the same Nim type which led to misunderstandings. This patch makes
sure that UInt265 encoded string values "0x11" decodes to 17, and "b"
and "11" to 11.
* Refactored genesis.toBlock() => genesis.toBlockHeader()
why:
The function toBlock(g,db) may return different results depending on
whether the db descriptor argument is nil, or initialised. This is due
to the db.config data sub-descriptor which may give various outcomes
for the baseFee field of the genesis header.
Also, the version where db is non-nil initialised is used internally
only. So the public rewrite toBlockHeader() that replaces the toBlock()
function expects a full set of NetworkParams.
* update comments
* Rename toBlockHeader() => toGenesisHeader()
why:
Polymorphic prototype used for BaseChainDB or NetworkParams argument.
With a BaseChainDB descriptor argument, the name shall imply that the
header is generated from the config fields rather than fetched from
the database.
* Added command line option --static-peers-file
why:
Handy feature to keep peer nodes in a file, similar to the
--bootstrap-file option.
Any transaction where tx.sender has a CODEHASH != EMPTYCODEHASH MUST
be rejected as invalid, where EMPTYCODEHASH =
0xc5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470.
The invalid transaction MUST be rejected by the client and not be included
in a block. A block containing such a transaction MUST be considered invalid.
because EIP-4399 override mixDigest validation rule,
there is no need to check mixDigest == ZERO_HASH
`mixDigest` will carry POS block randomness value
* Kludge needed for setting up custom network
why:
Some non-features in the persistent hexary trie DB produce an assert
error when initiating the Kinsugi network.
details:
This fix should be temporary, only.
* Fix OS detection
why:
directive detectOs() bails out on Windows if checking for Ubuntu
Includes a simple test harness for the merge interop M1 milestone
This aims to enable connecting nimbus-eth2 to nimbus-eth1 within
the testing protocol described here:
https://github.com/status-im/nimbus-eth2/blob/amphora-merge-interop/docs/interop_merge.md
To execute the work-in-progress test, please run:
In terminal 1:
tests/amphora/launch-nimbus.sh
In terminal 2:
tests/amphora/check-merge-test-vectors.sh
details:
For documentation, see comments in the file tx_pool.nim.
For prettified manual pages run 'make docs' in the nimbus directory and
point your web browser to the newly created 'docs' directory.
why:
Some helper file will not be generated by the nim document gereator,
so they have been stashed from a later nim version to be provided when
missing.
This problem was known with an earlier nim version (see here
https://github.com/nim-lang/Nim/issues/8952) but was reported solved.
Maybe we need a second look into that.
why:
Previously, the function 'snapshot_desc.loadSnapshot()' contained the
equivalent of 'eth.decode(@[],SnapshotData)' for some type 'SnapshotData'
which should result in an exception of type 'RlpTypeMismatch'.
Before mid October, this worked for all systems on the Github CI. Since
then, a segfault message in the Github CI can be reproduced on all 64bit
Windows wuns when running 'build/all_tests <id-of-test_txpool>' after the
failed 'make test' directive (the latter one needs to be extended by
'|| true'.) This error cannot be reproduced on my local Win7/64 system
with the same MSYS2 and gcc 11.2.0 compiler.
The fix is, rather than catching an exception, to explicitly check the
first argument of 'eth.decode(@[],SnapshotData)' and act if it is empty.
also:
removed some obsolete {.inline.} annotations.
why:
Previous version was based on lru_cache which is ugly. This module is
based on the stew/keyed_queue library module.
other:
There are still some other modules rely on lru_cache which should be
removed.
* Redesign of BaseVMState descriptor
why:
BaseVMState provides an environment for executing transactions. The
current descriptor also provides data that cannot generally be known
within the execution environment, e.g. the total gasUsed which is
available not before after all transactions have finished.
Also, the BaseVMState constructor has been replaced by a constructor
that does not need pre-initialised input of the account database.
also:
Previous constructor and some fields are provided with a deprecated
annotation (producing a lot of noise.)
* Replace legacy directives in production sources
* Replace legacy directives in unit test sources
* fix CI (missing premix update)
* Remove legacy directives
* chase CI problem
* rebased
* Re-introduce 'AccountsCache' constructor optimisation for 'BaseVmState' re-initialisation
why:
Constructing a new 'AccountsCache' descriptor can be avoided sometimes
when the current state root is properly positioned already. Such a
feature existed already as the update function 'initStateDB()' for the
'BaseChanDB' where the accounts cache was linked into this desctiptor.
The function 'initStateDB()' was removed and re-implemented into the
'BaseVmState' constructor without optimisation. The old version was of
restricted use as a wrong accounts cache state would unconditionally
throw an exception rather than conceptually ask for a remedy.
The optimised 'BaseVmState' re-initialisation has been implemented for
the 'persistBlocks()' function.
also:
moved some test helpers to 'test/replay' folder
* Remove unused & undocumented fields from Chain descriptor
why:
Reduces attack surface in general & improves reading the code.
details:
1. The check for cumulativeGasUsed + tx.gasLimit <= header.gasLimit
makes neither sense nor is it part of the Eip1559 specs. Nevertheless
a check tx.gasLimit <= header.gasLimit is added to satisfy some
unit test (see comments in validateTransaction() body.)
2. As a replacement check for the one removed in 1, a check for
cumulativeGasUsed + gasBurned <= header.gasLimit has been added
(see comments in processTransactionImpl() body.)
3. Prototypes for processTransaction() variants have been cleaned up and
commented.
why:
Detail 1. in particular produces an error for tightly packed blocks when
the last tx in the list has a generous gasLimit.
- Remove the `--evm` option on non-EVMC builds.
`when` around an option doesn't work with confutils; it fails to compile.
Workaround that by setting the `ignore` pragma on EVMC-specific options.
(Thanks @jangko for that new pragma). I prefer this to a solution which
moves the whole option's pragma elsewhere, especially if we add more options.
- Improve the help text, so that it shows the standard library extension on
each target platform (or none if on another platform).
- Undo b3f21bf4 "add missing evmc_enabled conditional compilation in
evmc_dynamic_loader". Move the conditional to `nimbus.nim`, and take more
care there to only use the loader function in EVMC builds.
It's ok to just not include this EVMC-only module (like some other EVMC
modules), rather than making the module itself a bit broken: Without this
change, it references a function that's not imported or linked to, and it
only links because there is no call sequence reaching that function.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
- `network` and `discovery` got additional longDesc,
the help text now become more descriptive.
- `networkId` and `networkParams` now is ignored by confutils
and become ordinary fields of NimbusConf.
This fixes#867 "EIP-170 related consensus error at Goerli block 5080941", and
equivalent on other networks.
This combines a change on the EVM-caller side with an EVM-side change from
@jangko 6548ff98 "fixes CREATE/CREATE2's `returndata` bug", making the caller
EVM ignore any data except from `REVERT`.
Either change works by itself. The reason for both is to ensure we definitely
comply with ambiguous EVMC expectations from either side of that boundary, and
it makes the internal API clearer.
As well as fixing a specific consensus issue, there are some other EVM logic
changes too: Refactored `writeContract`, how `RETURNDATA` is handled inside the
EVM, and changed behaviour with quirks before EIP-2 (Homestead).
The fix allows sync to pass block 5080941 on Goerli, and probably equivalent on
other networks. Here's a trace at batch 5080897..5081088:
```
TRC 2021-10-01 21:18:12.883+01:00 Persisting blocks file=persist_blocks.nim:43 fromBlock=5080897 toBlock=5081088
...
DBG 2021-10-01 21:18:13.270+01:00 Contract code size exceeds EIP170 topics="vm computation" file=computation.nim:236 limit=24577 actual=31411
DBG 2021-10-01 21:18:13.271+01:00 gasUsed neq cumulativeGasUsed file=process_block.nim:68 block=5080941/0A3537BC5BDFC637349E1C77D9648F2F65E2BF973ABF7956618F854B769DF626 gasUsed=3129669 cumulativeGasUsed=3132615
TRC 2021-10-01 21:18:13.271+01:00 peer disconnected file=blockchain_sync.nim:407 peer=<IP:PORT>
```
Although it says "Contract code size" and "gasUsed", this bug is more general
than either contract size or gas. It's due to incorrect behaviour of EVM
instructions `RETURNDATA` and `RETURNDATASIZE`.
Sometimes when `writeContract` decides to reject writing the contract for any
of several reasons (for example just insufficient gas), the unwritten contract
code was being used as the "return data", and given to the caller. If the
caller used `RETURNDATA` or `RETURNDATASIZE` ops, those incorrectly reported
the contract code that didn't get written.
EIP-211 (https://eips.ethereum.org/EIPS/eip-211) describes `RETURNDATA`:
> "`CREATE` and `CREATE2` are considered to return the empty buffer in the
> success case and the failure data in the failure case".
The language is ambiguous. In fact "failure case" means when the contract uses
`REVERT` to finish. It doesn't mean other failures like out of gas, EIP-170
limit, EIP-3541, etc.
To be thorough, and to ensure we always do the right thing with real EVMC when
that's finalised, this patch fixes the `RETURNDATA` issue in two places, either
of which make Goerli block 5080941 pass.
`writeContract` has been refactored to be caller, and so has where it's called.
It sets an error in the usual way if contract writing is rejected -- that's
anticipating EVMC, where we'll use different error codes later.
Overall four behaviour changes:
1. On the callee side, it doesn't set `c.outputData` except for `REVERT`.
2. On the caller side, it doesn't read `child.outputData` except for `REVERT`.
3. There was a bug in processing before Homestead fork (EIP-2). We did not
match the spec or other implementations; now we do. When there's
insufficient gas, before Homestead it's treated as success but with an empty
contract.
d117c8f3fd/ethereum/processblock.py (L304)https://github.com/ethereum/go-ethereum/blob/401354976bb4/core/vm/instructions.go#L586
4. The Byzantium check has been removed, as it's unnecessary.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
This patch adds:
- Load and use a third-party EVM in a shared library, instead of Nimbus EVM.
- New option `--evm` to specify which library to load.
- The library and this loader conforms to the [EVMC]
(https://evmc.ethereum.org/) 9.x specification.
Any third-party EVM which is compatible with EVMC version 9.x and supports EVM1
contract code will be accepted. The operating system's shared library format
applies. These are `.so*` files on Linux, `.dll` files on Windows and `.dylib`
files on Mac.
The alternative EVM can be selected in two ways:
- Nimbus command line option `--evm:<path>`.
- Environment variable `NIMBUS_EVM=<path>`.
The reason for an environment variable is this allows all the test programs to
run with a third-party EVM as well. Some don't parse command line options.
There are some limitations to be aware of:
- The third-party EVM must use EVMC version 9.x, no other major version.
EVMC 9.x supports EIP-1559 / London fork and older transactions.
- Nested `*CALL` and `CREATE*` operations don't use the third-party EVM yet.
These call the built-in Nimbus EVM. This mixing of different EVMs between
levels is explicitly allowed in specs, so there is no problem doing it.
- The third-party EVM doesn't need to support precompiles, because those are
nested calls, which use the built-in Nimbus EVM.
- Third-party EVMs execute contracts correctly, but fail the final `rootHash`
match. The reason is that some account state changes, which are correct, are
currently inside the Nimbus EVM and need to be moved to EVMC host logic.
*This is a known work in progress*. The EVM execution itself is fine.
Test results using "evmone" third-party EVM:
- [evmone](https://github.com/ethereum/evmone) has been tested. Only on
Linux but it "should" work on Windows and Mac equally well.
- [Version 0.8.1](https://github.com/ethereum/evmone/releases/tag/v0.8.1) was
used because it is compatible with EVMC 9.x, which is required for the
EIP-1559 / London fork, which Nimbus supports. Version 0.8.0 could be used
but it looks like an important bug was fixed in 0.8.1.
- evmone runs fine and the trace output looks good. The calls and arguments
are the same as the built-in Nimbus EVM for tests that have been checked
manually, except evmone skips some calls that can be safely skipped.
- The final `rootHash` is incorrect, due to the *work in progress* mentioned
above which is not part of the evmone execution. Due to this, it's possible
to try evmone and verify expected behaviours, which also validates our own
EVMC implementation, but it can't be used as a full substitute yet.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
This missing part of EVMC processing allows third-party EVMs to work.
It fixes EVMC result processing (at the top-level of calls, not nested calls)
to use the EVMC result object, instead of reading so much internal state of the
Nimbus `Computation` object.
It has been tested by calling [`evmone`](https://github.com/ethereum/evmone)
and getting useful results with tracing enabled (`showTxCalls = true`). It's
even able to run parts of the fixtures test suite.
There are other issues with account balances, etc that need to be worked on to
get the correct _final_ results, but the EVM execution is correct with this.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Perform byte-endian conversion for 256-bit numeric values, but not 256-bit
hashes. These conversions are necessary for EVMC binary compatibility.
In new EVMC, all host-side conversions are explicit, calling `flip256`.
These conversions are performed in the EVMC "glue" code, which deals with the
binary interface, so the host services aren't aware of conversions.
We intend to skip these conversions when Nimbus host calls Nimbus EVM, even
when it's a shared library, using a negotiated EVMC extension. But for now
we're focused on correctness and cross-validation with third party EVMs.
The overhead of endian conversion is not too high because most EVMC host calls
access the database anyway. `getTxContext` does not, so the conversions from
that are cached here. Also, well-optimised EVMs don't call it often.
It is arguable whether endian conversion should occur for storage slots (`key`).
In favour of no conversion: Slot keys are 32-byte blobs, and this is clear in
the EVMC definition where slot keys are `evmc_bytes32` (not `evmc_uint256be`),
meaning treating as a number is _not_ expected by EVMC. Although they are
often small numbers, sometimes they are a hash from the contract code plus a
number. Slot keys are hashed on the host side with Keccak256 before any
database calls, so the host side does not look at them numerically.
In favour of conversion: They are often small numbers and it is helpful to log
them as such, rather than a long string of zero digits with 1-2 non-zero. The
representation in JSON has leading zeros removed, like a number rather than a
32-byte blob. There is also an interesting space optimisation when the keys
are used unhashed in storage.
Nimbus currently treats slot keys on the host side as numbers, and the tests
pass when endian conversion is done. So to remain consistent with other parts
of Nimbus we convert slot keys.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Add the new [Arrow Glacier fork](https://eips.ethereum.org/EIPS/eip-4345).
Only the difficulty calculation is changed, but as a new fork it still affects
a number of places in the code.
To the best of my knowledge the change is only scheduled on Mainnet.
In addition:
- The fork date comments in `chain_config.nim` have been checked against the
real networks, set consistently in UTC instead of random timezones, and made
neater. Maybe we'll keep these when transferring config to a file someday.
- It's added to forkid hash tests (EIP-2124/EIP-2364), of course.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
* PoW wrapper for verification & mining
why:
It eases data management of per-Epoch lookup tables. Also some unit
tests show limits of usefulness on non-specialised machines for
mining besides developing tests.
details:
For PoW verification, this patch provides a pretty wrapper hiding the
details of the ethash/Hashimoto lookup cache management.
For mining on my development system without special hardware, the
underlying ethash functions are prohibitively slow. It takes
* ~20 minutes to prepare the full ethash/Hashimoto lookup dataset
* a second to run ~25k nonce tests (in the mining loop)
The mining part might be of some use for generating test data for
the tx-pool, though.
* Using PowRef as replacement for EpochHashCache + hashimotoLight()
* Fix typo (CI failed)
why:
was below log level when testing locally
* fix canonical naming
detected when running hive consensus simulator.
when processing an invalid block header and then
a new valid block header with the same block number,
the state root of the stateDB object should be updated
or reverted to parent stateRoot.
using intermediate stateRoot will trigger the hexary trie assertion.
previously, every time the VMState was created, it will also create
new stateDB, and this action will nullify the advantages of cached accounts.
the new changes will conserve the accounts cache if the executed blocks
are contiguous. if not the stateDB need to be reinited.
this changes also allow rpcCallEvm and rpcEstimateGas executed properly
using current stateDB instead of creating new one each time they are called.
Fixes#868 "Gas usage consensus error at Mainnet block 6001128", and equivalent
on other networks. Mainnet sync is able to continue past 6001128 after this.
Here's a trace:
```
TRC 2021-09-29 15:13:21.532+01:00 Persisting blocks file=persist_blocks.nim:43 fromBlock=6000961 toBlock=6001152
...
DBG 2021-09-29 15:14:35.925+01:00 gasUsed neq cumulativeGasUsed file=process_block.nim:68 gasUsed=7999726 cumulativeGasUsed=7989726
TRC 2021-09-29 15:14:35.925+01:00 peer disconnected file=blockchain_sync.nim:407 peer=<PEER:IP>
```
Similar output is seen at many blocks in the range 6001128..6001204.
The bug is when handling a combination of `CREATE` or `CREATE2`, along with
`SELFDESTRUCT` applied to the new contract address.
Init code for a contract can't return non-empty code and do `SELFDESTRUCT` at
the same time, because `SELFDESTRUCT` returns empty data.
But it is possible to return non-empty code in a newly created, self-destructed
account if the init code calls `DELEGATECALL` or `CALLCODE` to other code which
uses `SELFDESTRUCT`.
In this case we must still charge gas and write the code. This shows on
Mainnet blocks 6001128..6001204, where the gas difference matters. The code
must be written because the new code can be called later in the transaction
too, before self-destruction wipes the account at the end.
There are actually three semantic changes here for a self-destructed, new
contract:
- Gas is charged.
- The code is written to the account.
- It can fail due to insufficient gas.
This patch almost exactly reverts a15805e4 "fix applyCreateMessage" from
2019-02-28. I wonder what that fixed.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Fixes an off by 1 error where `EIP170_CODE_SIZE_LIMIT` was being treated as the
lowest invalid value by EVM code, but the highest valid value by witness code.
To remove confusion, this is renamed to `EIP170_MAX_CODE_SIZE` with value
0x6000, which matches the name (`MAX_CODE_SIZE`) and value used for this limit
in [EIP-170](https://eips.ethereum.org/EIPS/eip-170).
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Fixes#864 "Sync progress stops at Goerli block 4494913", and equivalent on
other networks.
The block body fetcher in `blockchain_sync.nim` had an incorrect assumption
about how peers respond to `GetBlockBodies`. It was issuing requests for N
block bodies and incorrectly handling replies which contained fewer than N
bodies.
Having received up to 192 headers in a batch, it split the range into smaller
`GetBlockBodies` requests, fetched each reply, then combined replies. The
effect was Nimbus requested batches of 128+64 block bodies, received gaps in
the reply sequence, then aborted.
That meant it repeatedly fetched data, then discarded it, and fetched it again,
dropping good peers in the process.
Aborted and restarted batches occurred with earlier blocks too, but this became
more pronounced until there were no suitable peers at batch 4494913..4495104.
Here's a trace:
```
TRC 2021-09-29 02:40:24.977+01:00 Requesting block headers file=blockchain_sync.nim:224 start=4494913 count=192 peer=<ENODE>
TRC 2021-09-29 02:40:24.977+01:00 >> Sending eth.GetBlockHeaders (0x03) file=protocol_eth65.nim:51 peer=<PEER> startBlock=4494913 max=192
TRC 2021-09-29 02:40:25.005+01:00 << Got reply eth.BlockHeaders (0x04) file=protocol_eth65.nim:51 peer=<PEER> count=192
TRC 2021-09-29 02:40:25.007+01:00 >> Sending eth.GetBlockBodies (0x05) file=protocol_eth65.nim:51 peer=<PEER> count=128
TRC 2021-09-29 02:40:25.209+01:00 << Got reply eth.BlockBodies (0x06) file=protocol_eth65.nim:51 peer=<PEER> count=13
TRC 2021-09-29 02:40:25.210+01:00 >> Sending eth.GetBlockBodies (0x05) file=protocol_eth65.nim:51 peer=<PEER> count=64
TRC 2021-09-29 02:40:25.290+01:00 << Got reply eth.BlockBodies (0x06) file=protocol_eth65.nim:51 peer=<PEER> count=64
WRN 2021-09-29 02:40:25.306+01:00 Bodies len != headers.len file=blockchain_sync.nim:276 bodies=77 headers=192
TRC 2021-09-29 02:40:25.306+01:00 peer disconnected file=blockchain_sync.nim:403 peer=<PEER>
TRC 2021-09-29 02:40:25.306+01:00 Finished obtaining blocks file=blockchain_sync.nim:303 peer=<PEER>
```
In practice, for modern peers, Nimbus received shorter replies than it assumed
depending on the block sizes on the chain. Geth/Erigon has 2MiB `BlockBodies`
response size soft limit. OpenEthereum has 4MiB.
Up to Berlin (EIP-2929), Nimbus's fetcher failed often, but there were still
some peers serving what Nimbus needed.
Just after the start of Berlin, at batch 4494913..4495104 on Goerli, zero peers
responded with full size replies for the whole batch, so Nimbus couldn't
progress past that point. But there was already a problem happening before
that for large blocks, dropping good peers and repeatedly fetching the same
block data.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
pre EIP1559 max(gasCost) is tx.gasLimit * tx.gasPrice
the new EIP1559 max(gasCost) before the transaction can be executed is
tx.gasLimit * tx.maxFeePerGas
EIP-2718:
- chainID: Long! of Query
- chainID: Long of Transaction
EIP-1559:
- baseFeePerGas: BigInt of Block
- effectiveGasPrice: BigInt of Transaction
- maxFeePerGas: BigInt of Transaction
- maxPriorityFeePerGas: BigInt of Transaction
this is a preparation for migration to confutils based config
although there is still some getConfiguration usage in tests code
it will be removed after new config arrived
both clique epoch and clique period already checked in
newClique and will use default configuration they are not set.
this redundant check in sealing engine also failed with
some configuration where only one of them is set and the
other one not set.
Prior to this patch, top-level EVM executions and nested EVM executions did
their `getStorage` and other requests using a completely different set of host
functions. It was just unfinished, to get top-level "new" EVMC working.
This finishes the job - it stops using the old methods. Effect:
- Functionality added at the EVMC host level will be used by all EVM calls.
(The target here is Beam Sync).
- The old set of functions are no longer used, so they can be removed.
- When EVMC host call tracing is enabled (`showTxCalls = true`), it traces
the calls from nested EVM executions as well as top-level.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
We've been filling a "vtable"-like at run time, but it's not necessary.
The new object is a global `let x = evmc_host_interface(...)`, we assume it's
initialised before the first use, and we take its address with `.unsafeAddr`.
(If we use `ref evmc_host_interface`, Nim decides (correctly) that the
functions which use it aren't GC-safe because it's a global.)
Signed-off-by: Jamie Lokier <jamie@shareable.org>
This combines two things, a C stack usage change with EVM nested calls
via EVMC, and changes to host call tracing.
Feature-wise, the tracing is improved:
- Storage keys and values are make more sense.
- The message/result/context objects are shown with all relevant fields.
- `call` trace is split into entry/exit, so these can be shown around the
called contract's operations, instead of only showing the `call` parameters
after the nested call is finished.
- Nested calls are indented, which helps to highlight the flow.
- C stack usage considerably reduced in nested calls when more functionality
is enabled (either tracing here, or other things to come).
This will seem like a minor patch, but C stack usage was the real motivation,
after plenty of time in the debugger.
Nobody cares about stack when `showTxCalls` (you can just use a big stack when
debugging). But these subtle changes around the `call` path were found to be
necessary for passing all tests when the EVMC nested call code is completed,
and that's a prerequisite for many things: async EVM, dynamic EVM, Beam Sync,
and to fix https://github.com/status-im/nimbus-eth1/issues/345.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
The update for London (EIP-1559) in 1cdb30df ("bump nim-emvc with evmc revision
8.0.0 to 9.0.0") really bumped EVMC ABI version from 7.5 up to 9.
In other words, it skipped Berlin, going direct from Istanbul to London.
That was accompanied by EVMC changes in 05e9b891 ("EIP-3198: add baseFee op
code in nim-evm"), which added the API changes needed for London.
But the missing Berlin functions weren't added in the move to London.
As a result, our EVMC host became incompatible with Berlin, London, and really
all revisions of the ABI, and if a third party EVM was loaded, it crashed.
This commit adds the missing Berlin host support, and makes our ABI
binary-compatible with real EVMC again.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
although they are technically different, but in reality,
many networks are using the same id for ChainId dan NetworkId.
in this commit, we set networkid from config file's chainId.
- allow clique period and epoch to be configured via config file
- this also activate poaEngine mode
- remove clique period configuration from cli to reduce confusion
- fix#786
As this branch of vm2 doesn't support EVMC, this EVMC-motivated change is only
required here for internal compatibility.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
This changes fixes a bug in `CREATE2` ops when used with EVMC.
Because it changes the salt type, it affects non-EVMC code as well.
The salt was passed through EVMC with the wrong byte order, although this went
unnoticed as the Nimbus host flipped the byte order before using it.
This was found when running Nimbus with third-party EVM,
["evmone"](https://github.com/ethereum/evmone).
There are different ways to remedy this.
If treated as a number, Nimbus EVM would byte-flip the value when calling EVMC,
then Nimbus host would flip the received value. Finally, it would be flipped a
third time when generating the address in `generateSafeAddress`. The first two
flips can be eliminated by negotiation (like other numbers), but there would
always be one flip.
As a bit pattern, Nimbus EVM would flip the same way it does when dealing with
hashes on the stack (e.g. with `getBlockHash`). Nimbus host wouldn't flip at
all - and when using third-party EVMs there would be no flips in Nimbus.
Because this value is not for arithmetic, any bit pattern is valid, and there
shouldn't be any flips when using a third-party EVM, the bit-pattern
interpretation is favoured. The only flip is done in Nimbus EVM (and might be
eliminated in an optimised version).
As suggested, we'll define a new "opaque 256 bits" type to hold this value.
(Similar to `Hash256`, but the salt isn't necessarily a hash.)
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Nimbus types generally use the bit count not the byte count, e.g. `UInt256`,
`Hash256`, so make `ZERO_HASH256` (which has type `Hash256`) fit this pattern.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
* Provide PoA voting header generator
why:
Handy for hive/smoke test
details:
Header generator is a re-implementation of the generator previously
used for the canonical reference tests.
* try fixing ci out-of-mem condition
why:
for some reason, the ci began behaving like a real win7/i386 machine
where gcc is limited to 64k optimiser space
* fix comments, typos ..
new command line options:
--ws Enable the Websocket JSON-RPC server
--wsbind:<value> Set address:port pair(s) (comma-separated) Websocket JSON-RPC server will bind to (default: localhost:8546)
--wsapi:<value> Enable specific set of Websocket RPC API from list (comma-separated) (available: eth, debug)
fixes#770
new command line options:
--clique-period:<value> Enables clique support. value is block time in seconds
--engine-signer:<value> Enables mining. value is EthAddress in hex
--import-key:<path> Import unencrypted 32 bytes hex private key file
fixes#771
* Provide API
details:
API is bundled via clique.nim.
* Set extraValidation as default for PoA chains
why:
This triggers consensus verification and an update of the list
of authorised signers. These signers are integral part of the
PoA block chain.
todo:
Option argument to control validation for the nimbus binary.
* Fix snapshot state block number
why:
Using sub-sequence here, so the len() function was wrong.
* Optional start where block verification begins
why:
Can speed up time building loading initial parts of block chain. For
PoA, this allows to prove & test that authorised signers can be
(correctly) calculated starting at any point on the block chain.
todo:
On Goerli around blocks #193537..#197568, processing time increases
disproportionally -- needs to be understand
* For Clique test, get old grouping back (7 transactions per log entry)
why:
Forgot to change back after troubleshooting
* Fix field/function/module-name misunderstanding
why:
Make compilation work
* Use eth_types.blockHash() rather than utils.hash() in Clique modules
why:
Prefer lib module
* Dissolve snapshot_misc.nim
details:
.. into clique_verify.nim (the other source file clique_unused.nim
is inactive)
* Hide unused AsyncLock in Clique descriptor
details:
Unused here but was part of the Go reference implementation
* Remove fakeDiff flag from Clique descriptor
details:
This flag was a kludge in the Go reference implementation used for the
canonical tests. The tests have been adapted so there is no need for
the fakeDiff flag and its implementation.
* Not observing minimum distance from epoch sync point
why:
For compiling PoA state, the go implementation will walk back to the
epoch header with at least 90000 blocks apart from the current header
in the absence of other synchronisation points.
Here just the nearest epoch header is used. The assumption is that all
the checkpoints before have been vetted already regardless of the
current branch.
details:
The behaviour of using the nearest vs the minimum distance epoch is
controlled by a flag and can be changed at run time.
* Analysing processing time (patch adds some debugging/visualisation support)
why:
At the first half million blocks of the Goerli replay, blocks on the
interval #194854..#196224 take exceptionally long to process, but not
due to PoA processing.
details:
It turns out that much time is spent in p2p/excecutor.processBlock()
where the elapsed transaction execution time is significantly greater
for many of these blocks.
Between the 1371 blocks #194854..#196224 there are 223 blocks with more
than 1/2 seconds execution time whereas there are only 4 such blocks
before and 13 such after this range up to #504192.
* fix debugging symbol in clique_desc (causes CI failing)
* Fixing canonical reference tests
why:
Two errors were introduced earlier but ovelooked:
1. "Remove fakeDiff flag .." patch was incomplete
2. "Not observing minimum distance .." introduced problem w/tests 23/24
details:
Fixing 2. needed to revert the behaviour by setting the
applySnapsMinBacklog flag for the Clique descriptor. Also a new
test was added to lock the new behaviour.
* Remove cruft
why:
Clique/PoA processing was intended to take place somewhere in
executor/process_block.processBlock() but was decided later to run
from chain/persist_block.persistBlock() instead.
* Update API comment
* ditto
Using the same packet tracing format to match `protocol_eth65`.
There aren't many calls, and this makes them clear.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
The format is reasonably useful and not too large, when looking at the
behaviour of sync processes. It doesn't try to show all the details of
packets, just something at a useful level of detail to see what's going on.
The consistent presentation has proven helpful too, e.g. when grepping.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Disable some trace messages which appeared a lot in the output and probably
aren't so useful any more, when block processing is functioning well at high
speed.
Turning on the trace level globally is useful to get a feel for what's
happening, but only if each category is kept to a reasonable amount.
As well as overwhelming the output so that it's hard to see general activity,
some of these messages happen so much they severely slow down processing. Ones
called every time an EVM opcode uses some gas are particularly extreme.
These messages have all been chosen as things which are probably not useful any
more (the relevant functionality has been debugged and is tested plenty).
These have been commented out rather than removed. It may be that turning
trace topics on/off, or other selection, is a better longer term solution, but
that will require better command line options and good defaults for sure.
(I think higher levels `tracev` and `tracevv` levels (extra verbose) would be
more useful for this sort of deep tracing on request.)
For now, enabling `--log-level:TRACE` on the command line is quite useful as
long as we keep each category reasonable, and this patch tries to keep that
balance.
- Don't show "has transactions" on virtually every block imported.
- Don't show "Sender" and "txHash" lines on every transaction processed.
- Don't show "GAS CONSUMPTION" on every opcode executed", this is way too much.
- Don't show "GAS RETURNED" and "GAS REFUND" on each contract call.
- Don't show "op: Stop" on every Stop opcode, which means every transaction.
- Don't show "Insufficient funds" whenever a contract can't call another.
- Don't show "ECRecover", "SHA256 precompile", "RIPEMD160", "Identity"
or even "Call precompile" every time a precompile is called. These are
very well tested now.
- Don't show "executeOpcodes error" whenever a contract returns an error.
(This is changed to `trace` too, it's a normal event that is well tested.)
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Move `blockchain_sync.nim` from `nim-eth` to `nimbus-eth1`.
This lets `blockchain_sync` use the `eth/65` protocol to synchronise with more
modern peers than before.
Practically, the effect is the sync process runs more quickly and reliably than
before. It finds usable peers, and they are up to date.
Note, this is mostly old code, and it mostly performs "classic sync", the
original Ethereum method. Here's a summary of this code:
- It decides on a blockchain canonical head by sampling a few peers.
- Starting from block 0 (genesis), it downloads each block header and
block, mostly in order.
- After it downloads each block, it executes the EVM transactions in that block
and updates state trie from that, before going to the next block.
- This way the database state is updated by EVM executions in block order,
and new state is persisted to the trie database after each block.
Even though it mentions Geth "fast sync" (comments near end of file), and has
some elements, it isn't really. The most obvious missing part is this code
_doesn't download a state trie_, it calculates all state from block 0.
Geth "fast sync" has several parts:
1. Find an agreed common chain among several peers to treat as probably secure,
and a sufficiently long suffix to provide "statistical economic consensus"
when it is validated.
2. Perform a subset of PoW calculations, skipping forward over a segment to
verify some of the PoWs according to a pattern in the relevant paper.
3. Download the state trie from the block at the start of that last segment.
4. Execute only the blocks/transactions in that last segment, using the
downloaded state trie, to fill out the later states and properly validate the
blocks in the last segment.
Some other issues with `blockchain_sync` code:
- If it ever reaches the head of the chain, it doesn't follow new blocks with
increasing block numbers, at least not rapidly.
- If the chain undergoes a reorg, this code won't fetch a block number it has
already fetched, so it can't accept the reorg. It will end up conflicted
with peers. This hasn't mattered because the development focus has been on
the bulk of the catching up process, not the real-time head and reorgs.
- So it probably doesn't work correctly when it gets close to the head due to
many small reorgs, though it might for subtle reasons.
- Some of the network message handling isn't sufficiently robust, and it
discards some replies that have valid data according to specification.
- On rare occasions the initial query mapping block hash to number can
fail (because the peer's state changes).
- It makes some assumptions about the state of peers based on their responses
which may not be valid (I'm not convinced they are). The method for working
out "trusted" peers that agree a common chain prefix is clever. It compares
peers by asking each peer if it has the header matching another peer's
canonical head block by hash. But it's not clear that merely knowing about a
block constitutes agreement about the canonical chain. (If it did, query by
block number would give the same answer more authoritatively.)
Nonetheless, being able to run this sync process on `eth/65` is useful.
<# interactive rebase in progress; onto 66532e8a
Signed-off-by: Jamie Lokier <jamie@shareable.org>
This constant shouldn't be used outside `protocol_eth65`.
When we support multiple `eth/NN` versions side by side, or even just have
multiple code files, there's a risk some code would import just one of the
files (e.g. `protocol_eth65`), use `protocolVersion`, and incorrectly act as
though that version is the one active on the node.
In fact that happened, and now it can't happen. Other code needs to query the
`EtheruemNode` to find what versions are really active.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
In RPC `eth_protocolVersion`, look at the live `EthereumNode` to find which
version of `eth/NN` protocol is active, instead of trusting a compile-time
constant. It's better to check dynamically. GraphQL already does this.
As a result, the RPC code doesn't depend on `eth_protocol` any more.
To make sure there are no more accidental users of the old constant,
`protocolVersion` is no longer exported from `protocol_eth65`.
(The simplest way to support `eth/65` was to make `eth_protocolVersion` use
`protocol_eth65.protocolVersion`, to get 65. But that's silly. More
seriously, when we add another version (`eth/66`) running alongside `eth/65`,
that expression would still compile ok yet return the wrong value, while still
passing the RPC test suite.)
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Turns out `{.rlpInline.}` doesn't do anything.
It's documented but not implemented.
Due to this, whenever a peer sent us a `NewBlock` message, we had an RLP
decoding error processing it, and disconnected the peer thinking it was the
peer's error.
These messages are sent often by good peers, so whenever we connected to a
really good peer, we'd end up disconnecting from it due to this.
Because a block body is a list of transactions, the parse errors looked
suspiciously like EIP-2718/2976/2930/1559 typed transaction RLP errors.
But it was a failure to parse `BlockBody` inline.
Conveniently, the `EthBlock` type defined for another reason is encoded exactly
the way `NewBlockAnnounce` needs to be, so we can reuse that type.
This didn't stand out before updating to `eth/65`, because with old protocols
we tend to only connect to old peers, which may be out of date themselves and
have no new blocks to send. Also, we didn't really investigate occasional
disconnects before, we assumed they're just part of P2P life.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
This patch adds the `eth/65` protocol, documented at
https://github.com/ethereum/devp2p/blob/master/caps/eth.md.
This is an intentionally simple patch, designed to not break, change or depend
on other functionality much, so that the "_old_ sync" methods can be run
usefully again and observed. This patch isn't "new sync" (a different set of
sync algorithms), but it is one of the foundations.
For a while now Nimbus Eth1 only supported protocol `eth/63`. But that's
obsolete, and very few nodes still support it. This meant Nimbus Eth1 would
make slow progress trying to sync, as most up to date clients rejected it.
The current specification is `eth/66`, and the functionality we really need is
in `eth/64`.
So why `eth/65`?
- `eth/64` is essential because of the `forkId` feature. But `eth/64` is on
its way out as well. Many clients, especially the most up to date Geth
running the current hard-forks (Berlin/London) don't talk `eth/64` any more.
- `eth/66` is the current specification, but some clients don't talk `eth/66`
yet. We'd like to have the best peer connectivity during tests, and
currently everything that talks `eth/66` also talks `eth/65`.
- Nimbus Eth1 RLPx only talks one version at a time. (Without changes to the
RLPx module. When those go in we'll add `eth/64..eth/66` for greater peer
reach and testing the `eth/66` behaviour. For simplicity and decoupling,
this patch contains just one version, the most useful.)
What are `eth/64` and `eth/65`?
- `eth/64` (EIP-2364) added `forkId` which allows nodes to distinguish between
Ethereum (ETH) and Ethereum Classic (ETC) blockchains, which share the same
genesis block. `forkId` also protects the system when a new hard fork is
going to be rolled out, by blocking interaction with out of date nodes. The
feature is required nowadays.
We send the right details to allow connection (this has been tested a lot),
but don't apply the full validation rules of EIP-2124/EIP-2364 in this patch.
It's to keep this patch simple (in its effects) and because those rules have
consequences best tested separately. In practice the other node will reject
us when we would reject it, so this is ok for testing, as long as it doesn't
get seriously deployed.
- `eth/65` added more efficient transaction pool methods.
- Then a later version of `eth/65` (without a new number) added typed
transactions, described in [EIP-2976](https://eips.ethereum.org/EIPS/eip-2976).
Why it's moved to `nimbus-eth1`:
- Mainly because `eth/64` onwards depend on the current state of block
synchronisation, as well as the blockchain's sequence of hard-fork block
numbers, both of which are part of `nimbus-eth1` run-time state. These
aren't available to pure `nim-eth` code. Although it would be possible to
add an API to let `nimbus-eth1` set these numbers, there isn't any point
because the protocol would still only be useful to `nimbus-eth1`.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
* Renamed source file clique_utils => clique_helpers
why:
New name is more in line with other modules where local libraries
are named similarly.
* re-implemented PoA verification module as clique_verify.nim
details:
The verification code was ported from the go sources and provisionally
stored in the clique_misc.nim source file.
todo:
Bring it to life.
* re-design Snapshot descriptor as: ref object
why:
Avoids some copying descriptor objects
details:
The snapshot management in clique_snapshot.nim has been cleaned up.
todo:
There is a lot of unnecessary copying & sub-list manipulation of
seq[BlockHeader] lists which needs to be simplified by managing
index intervals.
* optimised sequence handling for Clique/PoA
why:
To much ado about nothing
details:
* Working with shallow sequences inside PoA processing avoids
unnecessary copying.
* Using degenerate lists in the cliqueVerify() batch where only the
parent (and no other ancestor) is needed.
todo:
Expose only functions that are needed, shallow sequences should be
handles with care.
* fix var-parameter function argument
* Activate PoA engine -- currently proof of concept
details:
PoA engine is activated with newChain(extraValidation = true) applied
to a PoA network.
status and todo:
The extraValidation flag on the Chain object can be set at a later
stage, which allows pre-loading parts of the block chain without
verification. Setting it later will only wind the block chain back to
the latest epoch checkpoint. This is inherent to the Clique protocol,
needs testing though.
PoA engine works in fine weather mode on Goerli replay. With the
canonical eip-225 tests, there are quite a few fringe conditions
that fail. These can easily be fudged over to make things work but need
some more work to understand and correct properly.
* Make the last offending verification header available
why:
Makes some fringe case tests work.
details:
Within a failed transaction comprising several blocks, this
feature helps to identify the offending block if there was a
PoA verification error.
* Make PoA header verifier store the final snapshot
why:
The last snapshot needed by the verifier is the one of the parent but
the list of authorised signers is derived from the current snapshot. So
updating to the latest snapshot provides the latest signers list.
details:
Also, PoA processing has been implemented as transaction in
persistBlocks() with Clique state rollback.
Clique tests succeed now.
* Avoiding double yields in iterator => replaced by template
why:
Thanks to Andri, who observed it (see #762)
* Calibrate logging interval and fix logging event detection
why:
Logging interval as copied from Go implementation was too large and
needed re-calibration. Elapsed time calculation was bonkers, with the
sign the wrong way round.
* re-shuffled Clique functions
why:
Due to the port from the go-sources, the interface logic is not optimal
for nimbus. The main visible function is currently snapshot() and most
of the _procurement_ of this function's result has been moved to a
sub-directory.
* run eip-225 Clique test against p2p/chain.persistBlocks()
why:
Previously, loading the test block chains was fudged with the purpose
only to fill the database. As it is now clear how nimbus works on
Goerli, the same can be achieved with a more realistic scenario.
details:
Eventually these tests will be a precursor to the replay tests for the
Goerli chain supporting TDD approach with more simple cases.
* fix exception annotations for executor module
why:
needed for exception tracking
details:
main annoyance are the vmState methods (in state.nim) which potentially
throw a base level Exception (a proc would only throw CatchableError)
* split p2p/chain into sub-modules and fix exception annotations
why:
make space for implementing PoA stuff
* provide over-loadable Clique PRNG
why:
There is a PRNG provided for generating reproducible number sequences.
The functions which employ the PRNG to generate time slots were
ported from the go-implementation. They are currently unused.
* implement trusted signer assembly in p2p/chain.persistBlocks()
details:
* PoA processing moved there at the end of a transaction. Currently,
there is no action (eg. transaction rollback) if this fails.
* The unit tests with staged blocks work ok. In particular, there should
be tests with to-be-rejected blocks.
* TODO: 1.Optimise throughput/cache handling; 2.Verify headers
* fix statement cast in pool.nim
* added table features to LRU cache
why:
Clique uses the LRU cache using a mixture of volatile online items
from the LRU cache and database checkpoints for hard synchronisation.
For performance, Clique needs more table like features.
details:
First, last, and query key functions were added, as well as an efficient
random delete. Also, a key-item pair iterator was added for debugging.
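A rough sketch of the added surface (the signatures are assumptions; the
real module may differ):

  proc firstKey*[K,V](cache: var LruCache[K,V]): K    # least recently used
  proc lastKey*[K,V](cache: var LruCache[K,V]): K     # most recently used
  proc hasKey*[K,V](cache: var LruCache[K,V]; key: K): bool
  proc del*[K,V](cache: var LruCache[K,V]; key: K)    # random delete, O(1)
  iterator pairs*[K,V](cache: var LruCache[K,V]): (K,V)  # debugging aid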
* re-factored LRU snapshot caching
why:
Caching was sub-optimal (aka. bonkers) in that it skipped over memory
caches in many cases and so mostly rebuilt the snapshot from the
last on-disk checkpoint.
details:
The LRU snapshot toValue() handler has been moved into the module
clique_snapshot. This is for the fact that toValue() is not supposed
to see the whole LRU cache database. So there must be a higher layer
working with the whole LRU cache and the on-disk checkpoint
database.
also:
some clean up
todo:
The code still assumes that the block headers are valid in themselves.
This is particularly important when an epoch header (aka re-sync header) is
processed as it must contain the PoA result of all previous headers.
So blocks need to be verified when they come in, before being used for PoA
processing.
* fix some snapshot cache fringe cases
why:
Must not index empty sequences in clique_snapshot module
* extract unused clique/mining support into separate file
why:
mining is currently unsupported by nimbus
* Replay first 51840 transactions from Goerli block chain
why:
Currently Goerli is loaded but the block headers are not verified.
Replaying allows real data PoA development.
details:
Simple stupid gzipped dump/undump layer for debugging based on
the zlib module (no nim-faststream support.)
This is a replay running against p2p/chain.persistBlocks(), which is
where the data were captured from.
* prepare stubs for PoA engine
* split executor source into sub-modules
why:
make room for updates, clique integration should go into
executor/update_poastate.nim
* Simplify p2p/executor.processBlock() function prototype
why:
vmState argument always wraps basicChainDB
* split processBlock() into sub-functions
why:
isolate the part where it will support clique/poa
* provided additional processTransaction() function prototype without _fork_ argument
why:
with the exception of some tests, the _fork_ argument is always derived
from the other prototype argument _vmState_
details:
similar situation with makeReceipt()
* provide new processBlock() version explicitly supporting PoA
details:
The new processBlock() version supporting PoA is the general one also
supporting non-PoA networks, it needs an additional _Clique_ descriptor
function argument for PoA state (if any.)
The old processBlock() function without the _Clique_ descriptor argument
returns an error on PoA networks (e.g. Goerli.)
* re-implemented Clique descriptor as _ref object_
why:
gives more flexibility when moving around the descriptor object
details:
also cleaned up a bit the clique sources
* comments for clarifying handling of Clique/PoA state descriptor
Transaction and BlockHeader already updated in nim-eth repo
to support EIP-1559
EIP-1559 header validation and gasLimit validation
already implemented in previous commit
This commit deals with block validation:
- Effective gasPrice per EIP-1559
- new miner reward based on priorityFee
Previously max gas refunded was defined as gas_used div 2.
Here we name the constant 2 as MAX_REFUND_QUOTIENT and
change its value to 5.
The new equation will be: gas_used div MAX_REFUND_QUOTIENT
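A minimal sketch of the capping rule (a plain `int64` stands in for the
gas type):

  const MAX_REFUND_QUOTIENT = 5'i64   # was 2 before EIP-3529

  proc cappedRefund(gasUsed, refund: int64): int64 =
    # the total refund can never exceed a fifth of the gas actually used
    min(refund, gasUsed div MAX_REFUND_QUOTIENT)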
SSTORE_CLEARS_SCHEDULE or FeeSchedule[RefundsClear] in evm
have initial value of 15_000 when introduced by EIP-2200.
EIP-2200 also set a new value for SSTORE_RESET_GAS,
from 5000 to 5000 - COLD_SLOAD_COST.
Now with EIP-3529, SSTORE_CLEARS_SCHEDULE becomes
SSTORE_RESET_GAS + ACCESS_LIST_STORAGE_KEY_COST,
or 5000 - COLD_SLOAD_COST + ACCESS_LIST_STORAGE_KEY_COST,
i.e. 5000 - 2100 + 1900 = 4800.
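Spelled out with the constants from the EIPs:

  const
    COLD_SLOAD_COST              = 2100  # EIP-2929
    ACCESS_LIST_STORAGE_KEY_COST = 1900  # EIP-2930
    SSTORE_RESET_GAS             = 5000 - COLD_SLOAD_COST  # 2900
    SSTORE_CLEARS_SCHEDULE =             # EIP-3529
      SSTORE_RESET_GAS + ACCESS_LIST_STORAGE_KEY_COST      # 4800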
This preparation is needed for subsequent
EIPs included in London.
- Add London to Fork enum
- Block number to fork
- Parsing London fork in chain config
- Prepare gas costs table for London
- Prepare EVM opcode dispatcher for London
- Block rewards for London
- Prepare hive script for London
* continue importing rlp blocks
why:
a chain of blocks to be imported might have legit blocks
after rejected blocks
details:
import loop only stops if the import list is exhausted or if there
was a decoding error. this adds another four to the count of successful
no-hive tests.
* verify DAO marked extra data field in block header
why:
was ignored, scores another two no-hive tests
* verify minimum required difficulty in header validator
why:
makes two more no-hive tests succeed
details:
* subsumed extended header tests under validateKinship() and renamed it
more appropriately validateHeaderAndKinship()
* enhanced readability of p2p/chain.nim
* cleaned up test_blockchain_json.nim
* verify positive gasUsed unless no transactions
why:
solves another two no-hive tests
details:
straightened the test_blockchain_json check so there is no unconditional
rejection anymore (based on the input test scenario)
* Re-adjust canonical head to parent of block to be inserted
why:
of the failing tests that remain to be solved, 30 of those will succeed
if the canonical database chain head is cleverly adjusted -- yes, it
looks like a hack, indeed.
details:
at the moment, this hack works for the non-hive tests only and is
triggered by a boolean argument passed on to the chain.persistBlocks()
method.
* Use parent instead of canonical head for block to be inserted
why:
side chains need to be inserted typically somewhere before the
canonical head.
details:
the previous _hack_ was unnecessary and has been removed; it was inspired by
some verification in persistBlocks() which explicitly referenced the
canonical head (which now might or might not refer to the newly inserted
header.)
* remove unnecessary code + comment
why:
some handy features were intended to support the unit test from
the clique/clique_test.go source (the other one is from
clique/snapshot_test.go.)
as this test cannot realistically be implemented without the full
api (which includes mining support), it is left at that
Proper nested call functionality is being skipped in this iteration of new EVMC
host code to keep it simpler, to allow testing and architecture to be built
around the less complicated non-nested cases first.
Instead, nested calls use the old `Computation` path, and bypass any
third-party EVM that may be loaded. The results are the same, and mixing
different EVMs in this way is actually permitted in the EVMC specification.
This approach also means third-party EVMs we test don't need to support
precompiles and we don't need to specially handle those cases.
(E.g. "evmone" doesn't support precompiles, just EVM1 opcodes).
(These before/after scope actions are approximately copy-pasted from
`nimbus/vm/evmc_host.nim`, making their detailed behaviour "obviously correct".
Of course they are subject to tests as well. The small stack property of
a3c8a5c3 "EVMC: Small stacks when using EVMC, closes#575 (segfaults)" is
carefully retained.)
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Make the host service `setStorage` calculate the gas refund itself, instead of
depending on EVM internals.
In EVMC, host `setStorage` is responsible for adding the gas refund using the
rules of EIP-1283 (Constantinople), EIP-2200 (Istanbul) or EIP-2929 (Berlin).
It is not obvious from the EVMC documentation that the host must do it, but it's
how it has to be. Note, this is very different from the gas _cost_, which the
host does not calculate.
Gas refund doesn't affect computation. It is applied after the whole
transaction is complete, so it can be tracked on the host or EVM side. But
`setStorage` doesn't return enough information for the EVM to calculate the
refund, so the host must do it when `setStorage` is used.
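As an example, the classic clearing refund amounts to something like this on
the host side (a sketch only; the full EIP-2200/EIP-2929 rules have several
more cases, and the names used here are assumptions):

  # inside the host's setStorage, after the slot has been written:
  if currentVal != 0.u256 and newVal == 0.u256:
    host.gasRefund += SSTORE_CLEARS_SCHEDULE  # storage slot cleared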
For now, this continues using Nimbus `Computation` just to hold the gas refund,
to fit around existing structures and get new EVMC working. But the host can't
keep using `Computation`, so gas refund will be moved out of it in due course.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
When processing log operations on the EVMC host side, it causes incorrect
`rootHash` results in some tests. This patch fixes the results.
The cause of these results is known: `Computation` is still doing parts of
contract scope entry/exit which need to be moved to the host. For now, as a
temporary workaround, update logs in `Computation` as it did before.
This makes tests pass when using Nimbus EVM. (It breaks third-party EVMs when
`LOG*` ops are used, although most other tests pass.)
We can't keep this as it prevents complete host/EVM separation, but it's useful
in the current code, and it's fine to develop other functionality on top.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
When processing self destructs on the EVMC host side, it causes incorrect
`rootHash` results in some tests. This patch fixes the results.
The cause of these results is known: `Computation` is still doing parts of
contract scope entry/exit which need to be moved to the host. For now, as a
temporary workaround, update self destructs in `Computation` as it did before.
This makes tests pass when using Nimbus EVM. (It breaks third-party EVMs when
`SELFDESTRUCT` ops are used, although most other tests pass.)
We can't keep this as it prevents complete host/EVM separation, but it's useful
in the current code, and it's fine to develop other functionality on top.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
In the unusual case where log data is zero-length, `data[0].addr` is invalid
and Nim thoughtfully raises `IndexOutOfBounds`, a `Defect`, so it's not even
a `CatchableError`.
This is done in the EVMC host services to handle `LOG*` ops, and it made one
of the EVM tests silently fail with no error message. The fix is obvious.
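The guard amounts to something like this (variable names assumed):

  # a zero-length seq has no element 0, so don't take its address
  let dataPtr: ptr byte =
    if data.len > 0: unsafeAddr data[0]
    else: nil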
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Show calls from the host into the EVM. Shows the call, `evmc_message` fields,
and `evmc_result` fields when the call returns.
(When `show_tx_calls` is manually set to true.)
Signed-off-by: Jamie Lokier <jamie@shareable.org>
When `show_tx_calls` is manually set to true, show all the calls from the EVM
to the host, including name, arguments and results.
For example this shows each call to `setStorage`, the key, value and storage
result. This output allows the externally-visible activity of an EVM to be
seen, and it's been useful for guessing what went wrong when a test fails.
In theory, if two EVMs show the same activity in this log, they should have the
same effect on account states, gas, etc. and the same final `roothash`
(which is the only value some tests check).
ps. Ideally we'd use `{.push show.}`...`{.pop.}`, just like with `inline`.
But we can't: https://github.com/nim-lang/Nim/issues/12867
Signed-off-by: Jamie Lokier <jamie@shareable.org>
New pragma `{.show.}` on a proc definition turns on tracing for that proc.
Every call to it shows the name, arguments and results, if `show_tx_calls` is
manually set to true. This is to trace calls from EVM to host.
This started as a template which took a block expression, but the closure it
used led to illegal capture errors. It was easier to write a macro.
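A heavily trimmed sketch of the shape of such a macro (this version only
echoes the proc name on entry; the real pragma also prints arguments and
results, gated on `show_tx_calls`):

  import std/macros

  macro show(p: untyped): untyped =
    # prepend a trace statement to the annotated proc's body
    expectKind(p, nnkProcDef)
    let msg = newLit("call: " & $p.name)
    p.body = newStmtList(newCall(ident"echo", msg), p.body)
    result = p

  proc setStorage(key, val: int) {.show.} =   # hypothetical example proc
    discard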
Signed-off-by: Jamie Lokier <jamie@shareable.org>
1. Send all EVM executions through the EVMC `execute` function.
It leads to the same place in the end as calling `Computation` before, but
`execute` is the API function used by all EVMC implementations, and it is
very explicit what data is passed back and forth.
2. As a consequence this starts using the new `host_services` code from EVM, so
this is a significant change to the paths used for account state processing.
3. Because we will have to remove the `newComputation` call on the host side
eventually, the contract code is now saved in `host` instead of being
copied around. As it's saved in `host`, there is no need to pass it
separately to `evmcExecComputation`.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
We'll re-enable endian conversions based on a negotiated run-time option later,
but for now let's remove one complication to testing the new EVMC paths, and
also gain a little performance.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Even though `evmc_create_nimbus_evm` is called, it fails at link time because
the definition of that function isn't included unless it is pulled in
explicitly.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
This provides the functions a loadable VM must provide for a host to use it.
The main access to a loaded EVM is `evmc_create_nimbus_evm`, and this is
meant to be the only exported function the caller starts with.
That provides access to other functions, also defined in this patch, to
configure the EVM and then the key interesting function is `execute`.
`execute` runs a full computation, here using Nimbus EVM `Computation`.
(Note, even though everything is EVMC binary-compatible, there is a small
dependency on `TransactionHost` in `execute` here, which prevents this being
used by a host that is not Nimbus at the moment. It is necessary for some
tests, and will eventually go away.)
Although this provides the VM-side functionality needed by the host, it does
not contain the glue functions for `Computation` to call the host, which are
already part of the Nimbus EVM in `nimbus/vm/evmc_api.nim`.
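Schematically, a host drives the loaded EVM like this (following the EVMC C
API; the Nim binding names used here are assumptions):

  # the single entry point the loader resolves:
  let vm = evmc_create_nimbus_evm()
  # everything in and out goes through the message and result structs:
  let res = vm.execute(vm, hostInterface.addr, hostContext,
                       EVMC_BERLIN, msg.addr, codePtr, codeLen)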
Signed-off-by: Jamie Lokier <jamie@shareable.org>
1. This provides the necessary type adjustments for host services to be
(optionally) callable via the EVMC binary-compatible interface. This layer
is stashed away in a glue module so the host services continue to use
appropriate Nim types, and are still callable directly.
Inlining is used to ensure there should be no real overhead, including stack
frame size for the `call` function. Note, `import` must be used for
`{.inline.}` to work effectively.
2. This also provides a key call in the other direction, the version of host to
EVM `execute` that is called on the host side.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
This provides "host services", functions provided by the application to an EVM.
They are a key part of EVMC compatibility, but we will switch to using these
with "native" EVM as well.
These are functions like `getStorage`, `setStorage` and `emitLog` for accessing
the account state, because the EVM is not allowed direct access to the database.
This code is adapted from `nimbus/vm/evmc_host.nim` and other places, but there
is more emphasis on being host-side only, no dependency on the EVM or
`Computation` type. It uses `TransactionHost` and types in `host_types`.
These host services have two goals: To be compatible with EVMC, and to be a
good way for the Nimbus EVM to access the data it needs. In our new Nimbus
internal architecture, the EVM will only access the databases and other
application state via these host service functions.
The reason for containing the EVM like this, even "native" EVM, is that having
one good interface to the data makes it a lot easier to change how the database
works, which is on the roadmap.
These functions almost have EVMC signatures, but they are not binary compatible
with EVMC. (Binary compatibility is provided by another module). It would be
fine for Nimbus EVM to call these functions directly when linked directly.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Block validation failure isn't an error, it's correct rejection of a bad block
from the network. All conditions that lead to block rejection return a simple
boolean.
When a block is rejected, most reasons log at `debug` level. Only `stateRoot`
mismatch shouts a loud, highlighted, multi-line error message with big red
`error` alert.
Historically this was to assist EVM development, because it was more likely to
be a Nimbus EVM bug than a real bad block. But now the EVM is in good shape,
has a large and thorough testsuite, and `stateRoot` mismatch is more likely to
be a real bad block that should be rejected with less fuss.
If there's a genuine EVM bug, we'll still get an alert: Consensus failure will
quickly become obvious, and the block where it happens is easily fetched.
So a big, loud error is no longer useful, and it became a problem during tests.
Recently a few hundred tests were added that trigger it, and now successful
test output is filled with attention-grabbing errors which aren't really errors
or particularly useful.
Since it's not really an error, the original motivation is now backwards, and
as the other reasons log at `debug` level, make this one like the others.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
The common forks list was already used, redirected via `vm_forks` for
historical compatibility. Remove the old `vm_forks` now and divert all imports
to the common forks list outside the EVM.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
File `vm_types2` is obsolete. Remove this file and divert all imports to the
common forks list outside the EVM, or in some cases they don't need it anyway.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Fork names were originally capitalized, and were made lower case by @narimiran
in commit 36a7519 to satisfy `parseEnum` in some tests. Restore the
capitalization and make the tests work with it.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Many places outside the EVM use `Fork` and the fork list, and in general we
want progressively fewer dependencies on EVM internal types and files.
This may prove to be a temporary location, especially when we implement
issue #640. But it's a fine temporary location if so.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
The current EVM generates its own new contract addresses, and this is why there
are separate `msg.contractAddress` and `msg.codeAddress` fields in the
computation start message.
In EVMC, account updates are only allowed on the host side, including contract
generation, and the start message has one destination field, `msg.destination`.
The EVM cannot select addresses, only use them. It's a sensible design.
The difference makes the current EVM incompatible with EVMC and its message
format, so this patch corrects the difference. It moves contract address
generation to the host side. This simplifies the EVM and its API a little.
(As an API change, this is incompatible with vm2, so it's guarded under
`evmc_enabled` to allow vm2 to continue to build and run at this time. This is
also why there are fewer deletions than would otherwise be expected.)
Signed-off-by: Jamie Lokier <jamie@shareable.org>
The rationale in EIP-6[1] for changing names to `selfDestruct` applies to code
as much as it does to specs. Also, Ethereum uses the new names consistently,
so it's useful for our code to match the terms used in later EIP specs and
testsuite entries.
This change is straightforward, and is a prerequisite for patches to come that
do things with the `selfDestruct` fields.
[1] https://eips.ethereum.org/EIPS/eip-6
Hudson Jameson, "EIP-6: Renaming SUICIDE opcode," Ethereum Improvement
Proposals, no. 6, November 2015.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
This is the main patch which removes Whisper code from `nimbus-eth1` code.
It removes all configuration, help, startup, JSON-RPC calls and most types.
Note, there is still Whisper functionality in `nim-eth`. Also, the "wrapper"
under `wrappers/` isn't dealt with by this change, but it's not built by
default (and might not currently work).
Signed-off-by: Jamie Lokier <jamie@shareable.org>
This is the patch which removes Whisper functionality from `nimbus-eth1`,
even though the code itself is only removed in later patches.
After this change, enabling Whisper has no effect. Its configuration is
ignored and it won't be started.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
This commit turns Whisper off by default, without changing anything else.
So this can be cherry-picked if you just want to disable Whisper without
removing it.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
There's no need for macro `toSymbolName` to convert fork enum values to their
presentation texts (logging etc) then re-parse them back to a fork enum value.
`asFork` is already used in the same function and works without these steps,
so use it consistently.
Same applies to `op.toSymbolName` and `asOp`.
This makes the code simpler, and removes a text pattern-matching requirement.
The patch has been checked to confirm it doesn't change the compiled code.
Motivation: The forks list will be removed from VM because it is used outside
the VM as well. Doing so highlighted vm2's `toSymbolName`. It's not needed,
and it's best if the VM doesn't constrain text strings used outside the VM.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
The last caller of `setupComputation` is gone, now that it's been replaced by
the single entry point for all EVM calls, `runComputation`.
With this removal, EVM's `Computation` type should no longer be used anywhere
outside the call module (except in some tests and the EVM itself).
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Simplify transaction validations to use `runComputation`; drop other code.
Getting everything right up to this point to pass all the tests was trickier
than it looks.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Simplify how JSON fixtures tests are run to use `runComputation`.
Drop other code.
These use the `noTransfer` option, which is similar enough to calling
`c.executeOpcodes()` instead of `c.execComputation()`.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Add another flag to disable a processing step when a call doesn't come from
a real transaction:
- `noTransfer`: Don't update balances, nonces, code.
This is to support VM fixtures tests which require account balances and nonces
to be unchanged when running the account's code.
These tests call `c.executeOpcodes()`, an internal function of the EVM, instead
of the usual `c.execComputation()`. It goes direct to the bytecode dispatcher,
skipping parts of `Computation` that are normally called.
But we can't keep calling `c.executeOpcodes()` and have a single entry point to
the VM, let alone an EVMC entry point.
`noTransfer` provides similar enough behaviour to calling `c.executeOpcodes()`
that these tests can use the new single entry point like everything else.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Simplify `estimateGas` to use `runComputation`; drop other code.
The RPC/GraphQL `estimateGas` operation is quite different from the `call`
operation. It is much more like ordinary transaction execution than `call`,
though there are still enough differences that tx validation cannot be used.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Simplify `call` to use `runComputation`; drop other code.
The RPC/GraphQL `call` operation differs in many ways from regular transaction
calls. The following flags are set, to disable various steps in processing.
All four are true (disabling the corresponding step) for `call`:
- `noIntrinsic`: Don't charge intrinsic gas.
- `noAccessList`: Don't initialise EIP2929 access list.
- `noGasCharge`: Don't charge sender account for gas.
- `noRefund`: Don't apply gas refund/burn rule.
Interestingly, the RPC/GraphQL `estimateGas` operation doesn't behave so
differently from regular transactions. It might be that not all these steps
should be disabled for `call` either. But until we investigate what
RPC/GraphQL clients are expecting, keep the same behaviour as before.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
The following four flags are added, to change various steps in EVM processing
when a call doesn't come from a real transaction:
- `noIntrinsic`: Don't charge intrinsic gas.
- `noAccessList`: Don't initialise EIP2929 access list.
- `noGasCharge`: Don't charge sender account for gas.
- `noRefund`: Don't apply gas refund/burn rule.
This is to support RPC and GraphQL `call` operations, which behave differently
in some ways from regular transaction calls, and to support some test suites.
In EVMC terms, all these alterations can be performed on the host side.
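The shape is roughly this (field layout assumed; `CallParams` itself belongs
to the `runComputation` work described below):

  type CallParams = object
    # ... regular call inputs elided ...
    noIntrinsic:  bool  # don't charge intrinsic gas
    noAccessList: bool  # don't initialise EIP-2929 access list
    noGasCharge:  bool  # don't charge sender account for gas
    noRefund:     bool  # don't apply gas refund/burn rule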
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Calculate extra intrinsic gas for an EIP-2930 transaction with access list.
While we're here, do the rest of the intrinsic gas calculation. Make it clear,
explicit and in one place. (Previous code delegated parts of the calculation
to `transaction.nim` but had to do the rest locally due to mismatched types.)
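The whole calculation now fits in one place; a sketch with the well-known
constants (the contract-creation surcharge is omitted for brevity):

  const
    TX_GAS                       = 21_000'i64
    TX_DATA_ZERO_GAS             = 4'i64
    TX_DATA_NON_ZERO_GAS         = 16'i64     # EIP-2028
    ACCESS_LIST_ADDRESS_COST     = 2_400'i64  # EIP-2930
    ACCESS_LIST_STORAGE_KEY_COST = 1_900'i64  # EIP-2930

  proc intrinsicGas(payload: openArray[byte];
                    addresses, storageKeys: int): int64 =
    result = TX_GAS
    for b in payload:
      result += (if b == 0: TX_DATA_ZERO_GAS else: TX_DATA_NON_ZERO_GAS)
    result += addresses.int64 * ACCESS_LIST_ADDRESS_COST +
              storageKeys.int64 * ACCESS_LIST_STORAGE_KEY_COST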
Signed-off-by: Jamie Lokier <jamie@shareable.org>
New entry point `runComputation`, for all EVM calls.
(Later the intent is `runComputationAsync`.)
As noted in commit 297d789, there are six entry points calling EVM computation,
with different parameters and expecting different behaviours. Parameters were
dealt with in `setupComputation`. Behaviours are unified in `runComputation`,
with options passed via `CallParams`.
This code performs the steps used when validating a transaction. Options for
non-standard behaviour for RPC, GraphQL and tests to be added as required.
This replaces `setupComputation`, `execComputation` and `executeOpcodes`
(other than its own calls). As a result `Computation` and other EVM types are
no longer referenced in the main program, and many imports can be dropped.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Change fixtures tests to use shared `setupComputation` instead of
their own slightly different variant.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Change RPC/GraphQL calls to the EVM to use shared `setupComputation`
instead of their own special variant.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
There are currently six entry points to running an EVM computation, all with
slightly different parameters, and expecting slightly different EVM behaviours.
First step in merging them is a common `setupComputation` that replaces all
the different `*...SetupComputation` functions.
This uses the `TransactionHost` type because it's a step towards using that
type for all EVM calls using only EVMC. For now an EVMC message is created
then translated to EVM-internal `Message`. It is done this way to build up
the new interface in stages where all tests pass at each stage.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
`TransactionHost` represents the interface to the EVM from the application once
we've fully transitioned to EVMC. It represents a managed EVM call, and the
"EVMC host" side of the host<->EVM boundary.
This holds transaction state which sits outside the EVM, but a pointer to this
is passed around by the EVM as _opaque_ type `evmc_host_context`.
To the EVM, this offers "host services", which manage account state that the
EVM cannot do, such as balance transfers, call costs, storage, logs, gas
refunds, nested calls and new contract addresses. The EVM has no direct access
to the account state database or network; it's all via "host services".
To the application (host side), this object represents a managed EVM call, with
inputs, a method to run, outputs, hidden transaction state, and later async
scheduling. It is to replace `Computation` on the application side, while
`Computation` will remain but just be for the EVM side.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
This fixes a bug spotted by @mjfh that was introduced by commit 2a7ccceb:
try:
if not c.continuation.isNil:
(c.continuation)()
c.continuation = nil
c.selectVM(fork)
except CatchableError as e:
...
The call to `(c.continuation)()` was moved by 2a7ccceb inside the `try` so
that, like all the Op functions do already, if the continuation raises, the
interpreter's general catch turns the exception into an error status result.
But if the continuation raises an exception, `continuation` is not cleared in
the next line, and at the next resumption the continuation is called again.
It may loop doing this.
This doesn't currently happen because the continuations don't really raise, but
it's still a correctness issue.
This fix also allows a continuation to spawn a second continuation, if it
encounters a second suspension point. This also doesn't happen currently,
but the pattern will become useful with async EVM.
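With the fix, the fragment reads roughly like this (sketch):

  try:
    if not c.continuation.isNil:
      # clear before calling: a raising continuation can no longer be run
      # twice, and the continuation may install a successor of its own
      let cont = c.continuation
      c.continuation = nil
      cont()
    c.selectVM(fork)
  except CatchableError as e:
    ...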
Signed-off-by: Jamie Lokier <jamie@shareable.org>
There is no valid `CREATE_CONTRACT_ADDRESS`. Some places on the internet say
account zero means contract creation, but that's not correct.
Transactions to `ZERO_ADDRESS` are legitimate transfers to that account, not
contract creations. They are used to "burn" Eth. People also send Eth to
address zero by accident, unrecoverably, due to poor user interface issues.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Each place in `call_evm` that sets up an EVM call calculates the new contract
address for contract creations. But it's redundant, because `newComputation`
ignores the provided value and does the calculation again.
Remove the unused address calculation.
This is also a step to merging different entry points and EVMC. This change
ends up with the same value in both `msg.contractAddress` and `msg.codeAddress`
for every entry point, and this is good because it matches the EVMC message
structure, where they are replaced by only one value called `msg.destination`.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
`c.executeOpcodes` is called by some JSON fixture tests. These tests bypass
some of the setup and return, and because of this call, continuations aren't
processed either. Opcodes that use continuations will behave incorrectly.
The opcodes used in these particular tests don't use continuations currently,
so just add some assertions to verify this remains the case.
This is only used by local tests, and the call to `c.executeOpcodes` will be
replaced by the common entry point (that handles things like this correctly in
all cases) so we don't need to spend more time on this.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
why:
previously, table data were stored with the table iterator. while
loading a table with permuted entries will always reconstruct equivalent
tables (in the sense of `==`), serialisation data are not comparable.
this patch always produces the same serialised data for equivalent
tables.
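a sketch of the idea, sorting the keys instead of relying on table
iteration order (helper name assumed; the rlp writer then consumes these
pairs):

  import std/[algorithm, sequtils, tables]

  proc sortedPairs[K, V](t: Table[K, V]): seq[(K, V)] =
    # equivalent tables yield identical sequences, hence identical rlp
    var keys = toSeq(t.keys)
    keys.sort
    keys.mapIt((it, t[it]))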
why:
source-local unit tests would hardly be triggered by github CI as rightly
criticised with the last patch.
details:
source-local unit tests have been moved to tests folder.
this version also contains rlp serialisation code so rlp encode/decode
will apply transparently. this is not needed in p2p/validate but will be
useful with the clique protocol.
why:
to be used in Clique consensus protocol which suggests 4k cache entries.
the previous implementation used OrderedTable[] which has complexity O(n)
for deleting entries.
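the remedy, sketched: index the nodes of a doubly-linked list with a table,
so a random delete just unlinks a node in O(1) (type names assumed):

  import std/tables

  type
    LruNode[K, V] = ref object
      prev, next: LruNode[K, V]     # doubly linked, MRU..LRU order
      key: K
      data: V
    LruCache[K, V] = object
      tab: Table[K, LruNode[K, V]]  # key -> node, O(1) lookup/delete
      first, last: LruNode[K, V]
      maxItems: int                 # e.g. 4k entries for Clique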
after EIP2718/EIP2930, we have additional fields:
type AccessTuple {
  address: Address!
  storageKeys: [Bytes32!]
}

type Transaction {
  r: BigInt!
  s: BigInt!
  v: BigInt!
  # Envelope transaction support
  type: Int
  accessList: [AccessTuple!]
}
close #606
instead of using stdlib/json, we now switch to json_serialization.
the result is much tidier code and more robust when parsing
optional fields.
fixes #635
although this is not part of EIP-1767, the hive test cases derived
from besu test cases contain this. we add it now to pass more
hive.graphql test cases.
Move the EVM setup and call in precompile tests to `fixtureCallEvm` in
`call_evm`. Extra return values needed for testing are returned specially, and
the convention for reporting gas used is changed to match `asmCallEvm`.
Although the precompile tests used `execPrecompiles` before, `executeOpcodes`
does perfectly well as a substitute, allowing `fixtureCallEvm` to be shared.
_Significantly, this patch also makes `Computation` more or less an internal
type of the EVM now._
Nothing outside the EVM (except `call_evm`) needs access any more to
`Computation`, `execComputation`, `executeOpcodes` or `execPrecompiles`.
Many imports can be trimmed, some files removed, and EVMC is much closer.
(As a bonus, the functions in `call_evm` reveal what capabilities parts of the
program have needed over time, make certain bugs and inconsistencies clearer,
and suggest how to refactor into a more useful shared entry point.)
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Allow the fork to be specified consistently through an `Option[Fork]` instead
of varying inconsistently depending on the call. When the fork is not
specified, the `BaseVMState` code picks the correct fork by default for the
block number and chain.
This change actually deletes code, because a number of functions (RPC etc) had
redundant code to pick the fork, which always resolved to same as default.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Move the EVM setup and call in fixtures "vm json tests" to new function
`fixtureCallEvm` in `call_evm`. Extra return values needed for testing are
returned specially.
This entry point is different from all other `..CallEvm` type functions,
because it uses `executeOpcodes` instead of `execComputation`, so it doesn't
update the account balance or nonce on entry and exit from the EVM.
The new code is a bit redundant and simplistic intentionally, as the purpose is
to move functionality to `call_evm` with high confidence nothing really
changed. The calls will be jointly refactored afterwards to merge differences.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
In the `test_vm_json` ("fixtures") test code, there is another variant of
`rpcSetupComputation` and `txSetupComputation` with slightly different
parameters. The similarity is obvious.
It is a special setup for testing, though, as it requires slightly different
parameters.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
usually, there is already a sender around a `getRecipient` call, so there
is no need to recalculate the sender. more importantly, in some
JSON-RPC/GraphQL calls the sender comes from `rpcCallData`, not from
`tx.getSender`; in other situations the tx is an unsigned tx, without
`r,s,v` fields from which to calculate the sender.
Move the EVM setup and call in `macro_assembler` (`runVM`) entirely to new
function `asmCallEvm` in `call_evm`. Extra return values needed for
testing are returned specially from `asmCallEvm`.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
The second `asmSetupComputation` looks up state by block number and preceding
block number, modifies the first transaction with code for testing, and uses
some parts of that transaction to set up an EVM test.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
In the `macro_assembler` test code, `initComputation` is another variant of
`rpcSetupComputation` and `txSetupComputation` with slightly different
parameters. The similarity is obvious.
It is a special setup for testing, though, as it requires a contract-creation
transaction for parameters, but sets up a `CALL` execution not `CREATE`.
Gather this into `call_evm`: `initComputation` -> `asmSetupComputation`.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
The point of the `call_evm` exercise is to allow `Computation` to become an
internal type of the EVM, not used as API by the rest of the program. So
`rpcSetupComputation` should be private. It was left exported by mistake.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Split out and move the EVM setup and call in `processTransaction` to
`call_evm`. This is the last part of the main program which calls the EVM
to be moved. (There's still test code.)
While we're here, move the EIP2929 access list setup too, as the similarity
to `rpcInitialAccessListEIP2929` is obvious.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
There's only one call left to `refundGas(Transaction, ...)`, and the
similarity to the tail of `rpcEstimateGas` is obvious.
Gather this into `call_evm`: `refundGas` -> `txRefundGas`.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
After recent changes, there's only one call left to `setupComputation`, and
it's just a variant like `rpcSetupComputation` but for transaction processing.
The similarity to `rpcSetupComputation` is obvious.
Gather this into `call_evm`: `setupComputation` -> `txSetupComputation`.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
It's possible for `tx.value` in the transaction to have a deliberately
constructed large 256-bit value, such that adding `gasLimit * gasPrice` to it
overflows to a small value.
Prior to this patch, the code would allow such a transaction to pass
validation, even though such a large transfer cannot be valid.
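A sketch of the added guard in 256-bit arithmetic (the `stint`-style
`u256`/`UInt256` names are assumptions):

  let gasCost = tx.gasLimit.u256 * tx.gasPrice.u256
  if tx.value > high(UInt256) - gasCost:
    return false                    # tx.value + gasCost would wrap around
  let maxCost = tx.value + gasCost  # now safe to add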
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Change `rpcEstimateGas` to setup and execute a computation directly, in a
similar way to `rpcDoCall` and `rpcMakeCall`, instead of constructing a fake
transaction and then validating it.
This patch does not (or should not) change any behaviour.
Although this looks a bit messy as it duplicates parts of `validateTransaction`
and `processTransaction`, proc names have been used to hopefully keep the
meanings clear, and it's just a stepping stone as those transaction functions
will be changed next. Also the behaviour of RPC `estimateGas` may not be
correct (although this patch is careful not to change it), so it's good to make
it explicit so we can see how it differs from other RPCs.
Doing this change exposed some interesting behaviour differences between the
`call` RPC and `estimateGas` RPC, which may be bugs, or may be intentional.
These differences are now obvious and explicit.
The unclear areas are not well documented by any of the clients, even Infura
which says a bit more than the others. So to find out if they are intended,
we'll have to run tests against other Ethereum services.
Guessing, on the face of it, it looks likely that RPC `call` should:
- Setup EIP2929 access lists
- Account for intrinsic gas (maybe not because zero-gas transactions are ok)
And it looks likely that RPC `estimateGas` should:
- Not return zero when an account has insufficient balance
- Maybe use a different gas cost estimate when one isn't supplied in the RPC
Signed-off-by: Jamie Lokier <jamie@shareable.org>
The RPC `estimateGas` behaves differently from RPC `call` in a number of ways.
These differences may be bugs due to `rpcEstimateGas` calling the EVM in a very
different way than `rpcDoCall`, or they may be intentional. To be sure, we'll
need to test behaviour with Geth, Infura etc to find out (their documentation
isn't enough.) For now, though, we'll keep the same behaviour as we always had.
`rpcEstimateGas` cannot use `rpcSetupComputation` as it is, because
`estimateGas` accounts for "intrinsic gas", and `call` does not.
This patch changes `rpcSetupComputation` to accommodate both behaviours.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
`makeCall` used by GraphQL is another way to setup and call the EVM.
Move it to `transaction/call_evm`.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
`estimateGas` used by JSON-RPC is another way to setup and call the EVM,
also used by GraphQL. Move it to `transaction/call_evm`.
This function has too much direct knowledge of details that shouldn't be used
outside transaction handling code, details we need to change when changing the
db and transaction memory layer.
Moving this one exposed quite a bit of abstraction leakage, as it calls
directly to the hexary trie db around `processTransaction`.
It looks like the _intended_ functionality of `estimateGas` is similar to
`rpcDoCall` with the only real difference being to not store the final state.
It looks like the extra stuff in `estimateGas` compared with `doCall` is a
messy workaround for computation not exposing the right API ("don't save final
state") for RPC to use.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
`doCall` used by JSON-RPC is another way to setup and call the EVM.
Move it to `transaction/call_evm`.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Start gathering the functions that call the EVM into one place,
`transaction/call_evm.nim`.
This is first of a series of changes to gather all ways the EVM is called to
one place. Duplicate, slightly different setup functions have accumulated over
time, each with some knowledge of EVM internals. When they are brought
together, these methods will be changed to use a single entry point to the EVM,
allowing the entry point to be refactored, EVMC to be completed, and async
concurrency to be implemented on top. This also simplifies the callers.
First, a helper function used by RPC and GraphQL to make EVM calls without
permanently modifying the account state. `setupComputation` ->
`rpcSetupComputation`.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
the `processArguments` proc is now overloaded: one variant with an `opt`
param and one without. an OptParser can now be passed via the `opt` param.
this is useful in scenarios where test code needs to simulate something
without using real command line arguments.
rather than initializing them to 0, those block numbers
are initialized to high(BlockNumber). this fixes an
issue when an imported genesis.json doesn't contain all
forks' blockNumbers.
The account database code is not supposed to raise exceptions in the EVM, and
the behaviour is not well defined if it does. It isn't compliant with EVMC
spec either. But that will be dealt with properly when the account state-cache
is dealt with, as there is some work to be done on it.
Meanwhile, if it raises in code under `chainTo` and then `(continuation)()`,
the behaviour was changed slightly by the stack-shrink patches.
Before those patches, an exception after the recursion-point was converted to
`c.setError` "Opcode Dispatch Error" in `executeOpcodes. After, it would
propagate out, a different behaviour. (It still correctly walked the chain of
`c.dispose()` calls to clean up.)
It's easy to restore the original behaviour just by moving the continuation
call, so let's do that.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
why:
only two public functions left: executeOpcodes() and execCallOrCreate()
where the former one was originally in interpreter_dispatch.nim and
the latter one calls this one.
improves maintainability
overview:
can be verified by running "make check_vm2 X=0" in the nimbus directory
(be patient when running it.) the X=0 flag is necessary if there is a
native Nim compiler which may bail out at some vendor imports.
details:
when compiling state_transaction.nim, the nim flag vm2_enabled must
be set in order to avoid implicit import of native VM definitions.
why:
kludge not needed anymore for oph_handlers.nim sub-sources and sources
that rely on oph_handlers.nim (but not state_transactions.nim which
relies on computation.nim.)
also:
re-integrated stack_defs.nim back into stack.nim
why:
the v2 prefix of the file name was used as a visual aid when
comparing vm2 against vm sources
details:
all renamed v2*.nim sources compile locally with the -d:kludge:1 flag
set or without (some work with either)
only sources not renamed yet: v2state_transactions.nim
why:
on 32bit windows 7, there seems to be a 64k memory ceiling for the gcc
compiler which was exceeded on some test platform.
details:
compiling VM2 for low memory C compiler can be triggered with
"make ENABLE_VM2LOWMEM". this comes with a ~24% longer execution time
of the test suite against old VM and optimised VM2.
why:
the new implementation lost more than 25% execution time on the test
suite when compared to the original VM. so the handler call and the
surrounding statements have been wrapped in a big case statement similar
to the original VM implementation. on Linux/x64, the execution time of
the new VM2 seems to be on par with the old VM.
details:
on Linux/x64, computed goto works and is activated with the -d:release
flag. here the execution time of the new VM2 was tested short of 0.02%
better than the old VM. without the computed goto, it is short of
0.4% slower than the old VM.
why:
using function stubs made it possible to check the syntax of an op
handler source file by compiling this very file. this was previously
impossible due to the cyclic import/include mechanism.
details:
only oph_call.nim, oph_create.nim and subsequently op_handlers.nim
still need the -d:kludge:1 flag for syntax check compiling. this flag
also works with interpreter_dispatch.nim which imports op_handlers.nim.
why:
step towards breaking circular dependency
details:
some functions from v2computation.nim have been extracted into
compu_helper.nim which does not explicitly back-import
v2computation.nim. all non recursive op handlers now import this source
file rather than v2computation.nim.
recursive call/create op handler still need to import v2computation.nim.
the executeOpcodes() function from interpreter_dispatch.nim has been
moved to v2computation.nim which allows for <import> rather than
<include> the interpreter_dispatch.nim source.
why:
this allows for passing back information which can eventually be
used for reducing use of exceptions
caveat:
call/create currently needs to un-capture the call-by-reference
(wrapper) argument using the Computation reference inside
why:
the previous approach was replacing the function-lets in
opcode_impl.nim by the particular table handlers. the test
functions will verify that the handler functions are sort of
correct, but not the assignments in the fork tables.
the handler names of old and new for tables are checked here.
caveat:
verifying tables currently takes a while at compile time.
details:
the op handler table is accessible via op_handlers.nim module
op handler function implementations are found in the op_handlers/
sub-directory
kludge:
for development and pre-testing, any new module can be individually
compiled setting the kludge flag using -d:kludge:1. this causes some
proc/func replacements in turn allowing for omitting imports that would
otherwise cause a circular dependency. otherwise individual compilation
would fail.
in order to prove the overall correctness of the code, the
op_handlers.nim is imported by opcodes_impl.nim when compiling all,
nimbus or test.
why:
subsequent development will compile sources as main without setting
the vm2_enabled flag. also, the doc generator would fail on vm2 without
setting the flag for the vm2 files.
why:
generally, there is no role for libbacktrace when docs are generated
for vm2; undo the settings of config.nim and provide the "kludge" flag so
that circular import/include dependencies can be taken care of (not only)
for generating docs.
why:
new name forks_list.nim file name matches additional documentation
file names.
details:
v2forks.nim remains a hollowed out shell serving as interface file.